Damus
DVMCP profile picture
DVMCP
@DVMCP

DVMCP proposes a path for making DVMs and MCPs interoperable, introducing a protocol that makes local utilities available to everyone on the network.

Relays (5)
  • wss://relay.damus.io/ – read & write
  • wss://relay.primal.net/ – read & write
  • wss://nos.lol/ – read
  • wss://relay.nostrcheck.me/ – read & write
  • wss://wotr.relatr.xyz/ – read & write

Recent Notes

DVMCP profile picture
We're cooking up some very cool new features for Relatr and decided it's better to channel all communication and news through its pubkey (the same one used to address the server). One key, countless possibilities. Stay tuned, amazing news incoming 🍳
@nevent1qvz...
DVMCP profile picture
🚀 We're excited to announce the 0.2.x release series of the ContextVM SDK, a significant milestone focused on reliability, code quality, and architectural maturity. This release transforms ContextVM from a functional prototype into a hardened foundation ready for long-running servers, and it maintains backward compatibility while setting the stage for future enhancements.

What This Release Means for You: Key Improvements

If you're running ContextVM servers that need to stay up around the clock handling continuous client connections, this release is essential. We've eliminated critical failure modes, plugged memory leaks, and restructured the codebase to support future growth without accumulating technical debt.

🛡 Half-Open Connection Hardening (Applesauce Relay Handler)

One of the most significant reliability improvements addresses how we handle half-open connections: those frustrating scenarios where a relay appears connected but silently stops responding.

What we fixed:
- Implemented proactive liveness checks that detect unresponsive relays before they impact clients
- Added automatic relay pool rebuilding with subscription replay, so clients don't even notice when a relay hiccups
- Enhanced error handling ensures cleanup happens correctly even when relays misbehave

Why it matters: Previously, a relay could appear healthy while silently dropping messages, leading to client timeouts. Now, ContextVM detects these conditions and automatically recovers.
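
Roughly, the idea looks like this (a simplified sketch for illustration, not the actual handler code; the `LiveRelay` interface, names, and thresholds are assumptions):

```
// Sketch of proactive liveness checking for relay connections.
// The `ping` method and the thresholds below are illustrative assumptions.

interface LiveRelay {
  url: string;
  // Resolves when the relay answers a lightweight request (e.g. an EOSE for
  // an empty subscription); hangs or rejects when the connection is half-open.
  ping(): Promise<void>;
}

class LivenessMonitor {
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private relays: LiveRelay[],
    private onUnresponsive: (relay: LiveRelay) => void,
    private intervalMs = 30_000,
    private timeoutMs = 5_000,
  ) {}

  start(): void {
    this.timer = setInterval(() => void this.checkAll(), this.intervalMs);
  }

  stop(): void {
    if (this.timer) clearInterval(this.timer);
  }

  private async checkAll(): Promise<void> {
    await Promise.all(this.relays.map((relay) => this.check(relay)));
  }

  private async check(relay: LiveRelay): Promise<void> {
    const timeout = new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error(`liveness timeout: ${relay.url}`)), this.timeoutMs),
    );
    try {
      // A half-open socket never answers, so the timeout wins the race.
      await Promise.race([relay.ping(), timeout]);
    } catch {
      // Hand the relay back so the pool can be rebuilt and its
      // subscriptions replayed.
      this.onUnresponsive(relay);
    }
  }
}
```

In the real handler this check is paired with pool rebuilding and subscription replay, so clients see at most a brief gap instead of a dead connection.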

🧹 Memory Leak Elimination

Through careful analysis, we identified and eliminated several memory leaks that could affect long-running servers. The result: servers can now run indefinitely without requiring restarts to reclaim memory.

🏗 Transport Modularization

We've completely restructured the Nostr transport layer from a monolithic design into focused, composable modules.

Why this matters: This modularization achieves several goals simultaneously:
- Code clarity: Each module has a single, well-defined responsibility
- Testability: Modules can be tested in isolation with clear boundaries
- Extensibility: New features can be added without touching unrelated code
- Performance: O(N) operations reduced to O(1) where it counts

This architecture will let us add the new features that are coming without cluttering the core logic.
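
As a rough illustration of the shape this takes (interface and class names here are made up for the example, not the SDK's actual exports):

```
// Hypothetical decomposition of a Nostr transport into focused modules.
// Each concern lives behind its own interface and can be tested or
// replaced in isolation.

interface MessageSerializer {
  encode(message: unknown): string;
  decode(payload: string): unknown;
}

interface SubscriptionManager {
  subscribe(filter: Record<string, unknown>, onPayload: (payload: string) => void): () => void;
}

interface Publisher {
  publish(payload: string): Promise<void>;
}

// The transport itself only coordinates the pieces.
class NostrTransport {
  constructor(
    private serializer: MessageSerializer,
    private subscriptions: SubscriptionManager,
    private publisher: Publisher,
  ) {}

  async send(message: unknown): Promise<void> {
    await this.publisher.publish(this.serializer.encode(message));
  }

  onMessage(filter: Record<string, unknown>, handler: (message: unknown) => void): () => void {
    return this.subscriptions.subscribe(filter, (payload) =>
      handler(this.serializer.decode(payload)),
    );
  }
}
```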

📦 Simple Pool Deprecation Notice

We're officially deprecating the `SimpleRelayPool` implementation. The `ApplesauceRelayPool` is now our recommended and fully-supported relay pool. If you're still using `SimpleRelayPool`, we recommend migrating to `ApplesauceRelayPool` for production deployments.

🔧 Additional Reliability Improvements

- Graceful shutdown: Task queue now waits for running tasks to complete, with a configurable timeout (see the sketch after this list)
- Timeout handling: Added timeout wrappers to prevent hanging network operations
- Error handling: Enhanced error context logging with stack traces and relevant identifiers
- Session eviction protection: Prevents an edge case where sessions with active routes could be evicted under heavy load, eliminating data loss during in-flight requests
- Dependency updates: Updated to latest stable versions of key dependencies
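
To make the graceful-shutdown item concrete, here is a minimal sketch of a task queue that drains in-flight work with a configurable timeout (names and defaults are illustrative, not the SDK's actual API):

```
// Sketch of graceful shutdown: wait for running tasks, but never hang forever.

class TaskQueue {
  private running = new Set<Promise<void>>();

  run(task: () => Promise<void>): void {
    const p = task()
      .catch(() => { /* error handling is the caller's concern in a real queue */ })
      .finally(() => this.running.delete(p));
    this.running.add(p);
  }

  // Resolves when all in-flight tasks finish, or after `timeoutMs`,
  // whichever comes first.
  async shutdown(timeoutMs = 10_000): Promise<void> {
    const drained = Promise.allSettled([...this.running]).then(() => undefined);
    const deadline = new Promise<void>((resolve) => setTimeout(resolve, timeoutMs));
    await Promise.race([drained, deadline]);
  }
}

// Usage: queue work as requests arrive, then drain during shutdown:
//   const queue = new TaskQueue();
//   queue.run(async () => { /* handle one request */ });
//   await queue.shutdown(5_000);
```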

🧑‍💻 For Developers

This release maintains backward compatibility while setting the stage for future enhancements. The modular architecture means you can expect:

- Easier debugging: Clear separation makes issues easier to isolate
- Faster iteration: New features can be developed without touching existing code
- Better testing: Comprehensive test coverage for each module
- Stable APIs: Internal refactoring won't break your integrations

🔭 Looking Ahead

The 0.2.x series establishes the foundation for upcoming features that are already in our pipeline:
- [CEP-15] Common Tool Schemas
- [CEP-8] Capability Pricing And Payment Flow
- CEP-XX Server reviews
- CEP-XX Server profile metadata and socials
- CEP-XX Ephemeral gift wraps

With this hardened foundation, we can add these capabilities without coupling or cluttering the codebase.

Upgrade Recommendation

All production deployments should upgrade to 0.2.x. The reliability improvements, especially around relay handling and memory management, are critical for long-running servers. The modular architecture also makes future upgrades smoother.

As always, we welcome your feedback and contributions. The ContextVM project thrives on community input, and this release reflects many lessons learned from real-world usage.

Happy building! 🚀
DVMCP profile picture
We think we found a good candidate for our Relatr plugin validation system 👀 https://elo-lang.org/ It would probably allow us to create a very portable, shareable, and customizable plugin design, so people can create plugins, share them, plug them into their Relatr instances, and customize their algorithms as they prefer. More to come in the next releases, stay tuned 🚀
DVMCP profile picture
Right, that's a good question. There are different factors at play to mitigate that issue. First, we implemented a preserve-stale-cache policy for failures when requesting a rank. This ensures that users whose rank was previously computed successfully keep that rank, avoiding transient issues caused by spam conditions.
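
As a rough sketch of what that policy looks like (illustrative only; `fetchRank` and the cache shape are assumptions, not Relatr's actual code):

```
// Preserve-stale-cache policy for rank lookups: refresh on success,
// fall back to the last known rank on failure.

const rankCache = new Map<string, number>();

async function getRank(
  pubkey: string,
  fetchRank: (pubkey: string) => Promise<number>,
): Promise<number | undefined> {
  try {
    const rank = await fetchRank(pubkey);
    rankCache.set(pubkey, rank); // refresh the cache on success
    return rank;
  } catch {
    // On failure (timeouts, spam-induced overload, ...), keep the last
    // successfully computed rank instead of dropping the user to nothing.
    return rankCache.get(pubkey);
  }
}
```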

This is not a perfect solution, and we still have some edge cases to cover. However, addressing these will require, as you mentioned, some kind of prioritization, which is a more complex task. For now, the more harmful edge cases are covered, and we will continue to think about this to potentially find a better solution.
DVMCP profile picture
Hey! Thanks for sharing. Yes, it's an interesting approach, but I have some questions about how Privacy Pass tokens gain value or get accepted by different relays. There should be some notion of consensus around the entity issuing these tokens, right? Like bonds, where the issuer of the Privacy Pass tokens vouches with its reputation, and that's what makes the PP tokens accepted. So relay operators should recognize that reputation and allow these tokens to be accepted. How do you imagine this relationship working? Is it that I'm a PP token issuer, and my mission is to grow reputation among relay operators so they accept my PP tokens?

On the other hand, I think that Cashu might be the best fit for this use case since it can already handle some spending conditions like time locks, pay-to-PK, and refund paths, and as far as I know, there are people working to bring zk-proofs for arbitrary spending conditions. Do you imagine mints and PP token issuers being the same entity, or two separate things, where the mint is used as a 'third-party' issuer of Cashu tokens, enforcing spending conditions, and the PP token issuer just issues PP tokens based on the Cashu tokens you present? I think that other people who might be interested in this conversation are @npub12rv5l... and @npub1klkk3...
DVMCP profile picture
Yes, I think it is correct to say that anyone can write if the bucket assigned to their pubkey is large enough. The publishing capability changes depending on rank: users with high ranks get more events to publish per day. On the other hand, new npubs can only write one event per day, which must be of kind 1 and without URLs in the content. If the reply guy spins up new npubs each time, the relay will likely consume them. However, the key point is that this is the current policy we have on the wotr.relatr.xyz instance. You can run your own and tweak it to configure different thresholds, and also to enable or disable publishing for users with no rank. You could also disallow anyone from writing below a certain threshold. Together with the ability to self-host your own relay instance, this makes it very customizable to each person's needs and communities. Also, remember that this is an experiment we are running, and so far, it is working great.
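
To sketch the policy in code (thresholds, kinds, and names are illustrative; the real wotr.relatr.xyz configuration may differ):

```
// Illustrative rank-based write policy for a relay. Thresholds and the
// event shape are assumptions, not the exact wotr.relatr.xyz settings.

interface IncomingEvent {
  pubkey: string;
  kind: number;
  content: string;
}

function dailyWriteBudget(rank: number | undefined): number {
  if (rank === undefined) return 1; // unranked/new npubs: one event per day
  if (rank > 0.8) return 200;       // higher rank, bigger bucket
  if (rank > 0.5) return 50;
  return 10;
}

function allowWrite(
  event: IncomingEvent,
  rank: number | undefined,
  usedToday: number,
): boolean {
  if (usedToday >= dailyWriteBudget(rank)) return false;
  if (rank === undefined) {
    // New npubs may only post kind-1 notes without URLs in the content.
    return event.kind === 1 && !/https?:\/\//i.test(event.content);
  }
  return true;
}
```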
DVMCP profile picture
With this release, Relatr can now function as a TA provider, publishing ranks directly to relays. If you add a Relatr server to your trusted list of providers, it will display ranks in compatible clients like Amethyst. To do this, simply go to https://relatr.xyz/ta and add a server to your list. Once added, you can enable the server to keep your rank up to date. The process is quite simple and straightforward. I would love to receive any feedback you might have 👍
@nevent1qvz...
DVMCP profile picture
Hey, that's very interesting. I'm curious to hear more about your approach. Are you basing your solution on something like Cashu, or just LN with hold invoices?

In our case, the solution is much simpler. We use rate limiting to control the load between the relay and the upstream rank provider (Relatr), basically to avoid possible DDoS and load attacks. Initially, we based this rate limiting on IPs, but we noticed that it was penalizing legit users behind VPNs or IP groups. So now, we base the rate limiting on the number of requests per second that the relay can send to the rank provider. It's simpler, more effective, and users are not penalized for being behind an IP group.
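
For the curious, budgeting outbound requests like this can be as simple as a token bucket in front of the rank-provider client (a sketch with invented names and limits, not wotrlay's actual code):

```
// Rate limit the relay's outbound requests to the rank provider, instead
// of limiting by client IP. Names and limits are illustrative assumptions.

class OutboundRateLimiter {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private maxPerSecond: number) {
    this.tokens = maxPerSecond;
  }

  private refill(): void {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.maxPerSecond, this.tokens + elapsedSeconds * this.maxPerSecond);
    this.lastRefill = now;
  }

  // Returns true if a request to the rank provider may be sent now.
  tryAcquire(): boolean {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Usage: every lookup goes through the limiter, regardless of which IP
// the original client request came from.
const limiter = new OutboundRateLimiter(20);
function maybeRequestRank(pubkey: string, request: (pk: string) => Promise<number>) {
  return limiter.tryAcquire() ? request(pubkey) : undefined;
}
```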
DVMCP profile picture
Roughly 16 users. We are observing common relays to determine how many people have added wss://wotr.relatr.xyz to their relay lists. To do this, we are using nak with the following command; you can check it on your end as well.
```
nak req -t r=wss://wotr.relatr.xyz -k 10002 wss://relay.damus.io wss://nos.lol wss://relay.primal.net wss://bitcoiner.social wss://search.nos.today/ wss://nostr.mom/ wss://relay.snort.social/ wss://discovery.eu.nostria.app/ wss://nostr.wine wss://eden.nostr.land | wc -l
```
DVMCP profile picture
Well, the relay is not actively fetching posts from other relays. However, since the relay is open, anyone can broadcast events to it. Some clients will do this when users who have the relay in their list interact with someone else. Therefore, by simply having the relay configured in your list, some clients will rebroadcast certain events. In the end, events can arrive through different means, but there is no active fetching involved.
DVMCP profile picture
In today's release of wotrlay, we included special treatment for these kinds plus 0, 3, and 10002s. This allows them to be accepted without rate limits. Also, the current version of Relatr already enables you to set extra relays for publishing the TA events it generates, which are 30382s. We have already configured our public instance to publish to your relay and to the wotrlay instance we are running as well 🦾 plus user outboxes ofc