Recent Notes

It's probably going to be a process of trial and error to find the optimal refresh strategy for users on Ark. Bark lets each wallet dev implement their own VTXO refresh strategy, setting when VTXOs should be auto-refreshed based on expiry, size, or exit cost.
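A rough sketch of the shape such a policy could take; the types, fields, and thresholds here are hypothetical, not Bark's actual API:

```rust
/// Hypothetical refresh policy, for illustration only (not Bark's API).
struct RefreshPolicy {
    /// Refresh once fewer than this many blocks remain before expiry.
    expiry_threshold_blocks: u32,
    /// Refresh VTXOs too small to be worth exiting on their own.
    min_economical_amount_sat: u64,
}

/// Minimal view of a VTXO for the decision below.
struct VtxoInfo {
    amount_sat: u64,
    expiry_height: u32,
    estimated_exit_cost_sat: u64,
}

impl RefreshPolicy {
    /// Decide whether a VTXO should go into the next refresh.
    fn should_refresh(&self, vtxo: &VtxoInfo, tip_height: u32) -> bool {
        let blocks_left = vtxo.expiry_height.saturating_sub(tip_height);
        // Approaching expiry: refresh before the VTXO lapses.
        blocks_left < self.expiry_threshold_blocks
            // Dust-sized: consolidate rather than hold.
            || vtxo.amount_sat < self.min_economical_amount_sat
            // A unilateral exit would cost more than the VTXO is worth.
            || vtxo.estimated_exit_cost_sat >= vtxo.amount_sat
    }
}
```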

Devs can even open these settings up to users, so they can customize the refresh policy to their own preferences and needs.

Bark's `Wallet` struct is the single entry point for Ark, Lightning, and on-chain payments. Create one with a mnemonic + sqlite + server URL and you're transacting.
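The flow looks roughly like this; the constructor name, its arguments, and the import path below are guesses rather than the actual crate API:

```rust
// Sketch of the setup flow described above; check docs.rs/bark-wallet
// for the real constructor and signatures.
use bark::Wallet; // import path is an assumption

async fn setup() -> anyhow::Result<()> {
    let mnemonic = "abandon abandon ... about"; // BIP-39 seed phrase
    let db_path = "./bark.sqlite";              // local sqlite state
    let server_url = "https://ark.example.com"; // Ark server to use

    // One struct for all three rails: Ark, Lightning, on-chain.
    let _wallet = Wallet::create(mnemonic, db_path, server_url).await?;
    Ok(())
}
```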

The liquidity fee model in Ark is time-based: refreshing a VTXO costs more the further it is from expiry. This creates natural incentives to refresh closer to deadline rather than early.
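For intuition, a toy linear version of such a fee schedule; the formula and numbers are illustrative, not Ark's actual pricing:

```rust
/// Toy time-based liquidity fee: proportional to how much of the VTXO
/// lifetime is still left when you refresh. Purely illustrative.
fn refresh_fee_sat(
    amount_sat: u64,
    blocks_until_expiry: u64,
    vtxo_lifetime_blocks: u64,
    full_lifetime_fee_ppm: u64, // fee for refreshing a brand-new VTXO, in parts per million
) -> u64 {
    amount_sat * full_lifetime_fee_ppm * blocks_until_expiry
        / (vtxo_lifetime_blocks * 1_000_000)
}

fn main() {
    // A 1_000_000 sat VTXO with a 4032-block lifetime and a 1000 ppm
    // full-lifetime fee: refreshing right away costs 1000 sats...
    assert_eq!(refresh_fee_sat(1_000_000, 4032, 4032, 1000), 1000);
    // ...while refreshing with 100 blocks left costs only ~24 sats,
    // which is the incentive to refresh near the deadline.
    assert_eq!(refresh_fee_sat(1_000_000, 100, 4032, 1000), 24);
}
```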
Rust docs: https://docs.rs/bark-wallet/latest/bark/index.html

On-chain payments on Bark no longer happen in rounds. They're now instant, kind of like Ark-to-onchain swaps. This makes them more expensive than before, but the upside is that they're broadcast immediately (more intuitive UX).

Generating Ark addresses offline is now feasible with persisted server pubkeys. No need to be connected to the Ark server just to produce a receive address.
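Conceptually it looks like this, with stand-in types and a fake encoding (the real address format is Ark-specific):

```rust
/// Stand-in wallet store, for illustration only.
struct WalletDb {
    /// Server pubkey persisted during an earlier online session.
    cached_server_pubkey: Option<[u8; 33]>,
}

impl WalletDb {
    /// Derive a receive address with no network round trip.
    fn receive_address(&self, user_pubkey: &[u8; 33]) -> Option<String> {
        let server_pk = self.cached_server_pubkey?;
        // Hex concatenation stands in for the real encoding here.
        Some(format!("ark1{}{}", hex(&server_pk), hex(user_pubkey)))
    }
}

fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}
```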
Payments on Bark currently use a single input per transaction. Instead of one multi-input tx, a large payment bundles several independent arkoor txs into a "package" sent to the receiver in one go. Wallet history shows it as a single incoming payment.
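A sketch of that packaging under assumed names, with change handling elided:

```rust
/// Hypothetical single-input arkoor tx, reduced to the spent amount.
struct ArkoorTx {
    input_amount_sat: u64,
}

/// Split one large payment across several single-input txs; the whole
/// vector travels to the receiver as one package. Returns None if the
/// available VTXOs can't cover the amount.
fn build_package(vtxo_amounts_sat: &[u64], amount_sat: u64) -> Option<Vec<ArkoorTx>> {
    let mut remaining = amount_sat;
    let mut package = Vec::new();
    for &vtxo in vtxo_amounts_sat {
        if remaining == 0 {
            break;
        }
        // Each tx spends exactly one VTXO (single input).
        let spend = vtxo.min(remaining);
        package.push(ArkoorTx { input_amount_sat: spend });
        remaining -= spend;
    }
    (remaining == 0).then_some(package)
}
```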
Small Rust API design lesson from Bark development: if your top-level functions are just thin wrappers around methods on a struct, they shouldn't exist. Put the logic where it belongs.
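For instance, with an invented method name:

```rust
struct Wallet; // stub for illustration

impl Wallet {
    /// The behavior lives on the struct that owns the state.
    fn refresh_vtxos(&mut self) { /* ... */ }
}

/// Anti-pattern: a top-level function that only forwards to the method.
/// It doubles the API surface without adding behavior; delete it and
/// let callers write `wallet.refresh_vtxos()` directly.
fn refresh_vtxos(wallet: &mut Wallet) {
    wallet.refresh_vtxos()
}
```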
If you're building on the Bark library, Lightning payments are now `wallet.pay_lnaddr(..)` instead of standalone functions that take a wallet reference. Smaller surface, fewer surprises.
We're building a shared corpus repository for our fuzz targets. Every time the fuzzer runs, it builds up optimized inputs that make future runs more efficient; it doesn't start from zero each time. Over time this becomes an automatically growing collection of edge cases that doubles as a regression test suite.
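In cargo-fuzz terms, this means pointing every run at the same checked-in corpus directory; the target below is a made-up example:

```rust
// fuzz/fuzz_targets/decode_vtxo.rs (hypothetical target)
//
// `cargo fuzz run decode_vtxo shared-corpus/decode_vtxo` seeds the run
// from the shared directory and writes newly discovered interesting
// inputs back into it, so each run extends the last instead of
// starting over. Committing that directory also turns every old
// crash input into a permanent regression test.
#![no_main]
use libfuzzer_sys::fuzz_target;

// Stand-in for the real parser under test.
fn decode_vtxo(data: &[u8]) -> Result<(), ()> {
    if data.is_empty() { Err(()) } else { Ok(()) }
}

fuzz_target!(|data: &[u8]| {
    // Must never panic, whatever bytes the corpus holds.
    let _ = decode_vtxo(data);
});
```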