Damus
Marks
@Marks

CEO @ OpenSecret 🏴‍☠️ | ex-Apple

Family, Friends, Sunlight ☀️

Freedom tech is final tech 💪

@marks_ftw on X 🐦‍⬛

Relays (7)
  • wss://nrelay.c-stellar.net/ – read & write
  • wss://premium.primal.net – read & write
  • wss://purplepag.es/ – read & write
  • wss://relay.nostr.band/ – read & write
  • wss://relay.nostrcheck.me/ – read & write
  • wss://relay.primal.net/ – read & write
  • wss://premium.primal.net/ – read & write

Recent Notes

Marks
Great questions. We build everything in the open and with TEEs so it is verifiable.

Storage: Your chats get encrypted on your device first, then synced to our servers so you can access them anywhere. The encryption key is private to your account and handled only by the TEE secure enclaves. We don't have access to it.
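A minimal sketch of that device-side step, using the browser's WebCrypto API. The key handling here is illustrative only: in Maple the account key is managed by the enclave, not generated ad hoc like this.

```typescript
// Illustrative only: seal a chat with AES-GCM before it ever leaves the device.
// In the real flow the account key is handled by the TEE, not created locally.
const key = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  false,                      // non-extractable
  ["encrypt", "decrypt"],
);

async function encryptChatForSync(chat: unknown, aesKey: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12));             // fresh nonce per record
  const plaintext = new TextEncoder().encode(JSON.stringify(chat));
  const sealed = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, aesKey, plaintext);
  // The sync server only ever sees this opaque blob, never the plaintext chat.
  return { iv: Array.from(iv), sealed: Array.from(new Uint8Array(sealed)) };
}

console.log(await encryptChatForSync({ title: "hello", messages: ["hi there"] }, key));
```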

LLM Hosting: We host the app code in secure enclaves in AWS Nitro. For the LLMs, we use confidential computing hosting providers that run NVIDIA TEEs for GPU workloads. They provide confidential computing attestations that are verified by our servers and by the client in your web browser. This is similar to SSL certificates for websites, but even more robust.
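As a rough sketch of what that client-side check amounts to. This is not the real Nitro or NVIDIA attestation format (which is a CBOR/COSE document); the structure and names below are simplified stand-ins.

```typescript
// Simplified stand-in for attestation verification: the client checks that the
// statement is signed by the hardware vendor's key AND that it names the exact
// code measurement published with the open-source release.
interface Attestation {
  payload: Uint8Array;      // signed statement containing the enclave's code measurement
  signature: Uint8Array;    // vendor signature over the payload
  measurementHex: string;   // measurement extracted from the payload
}

async function verifyAttestation(
  att: Attestation,
  vendorPublicKey: CryptoKey,       // trust anchor from the hardware vendor
  expectedMeasurementHex: string,   // hash of the published open-source build
): Promise<boolean> {
  const signatureOk = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-384" },
    vendorPublicKey,
    att.signature,
    att.payload,
  );
  // Like a TLS certificate check, but the thing being certified is the code itself.
  return signatureOk && att.measurementHex === expectedMeasurementHex;
}
```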

Encryption: Your prompts are encrypted on your device and only decrypted inside the verified enclave. Responses get encrypted inside the enclave and only decrypted on your device.
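In code, the round trip looks roughly like the sketch below, assuming a session key has already been negotiated with the attested enclave. The names (sealMessage, openMessage, askEnclave) and the mock enclave are placeholders for illustration, not Maple's actual API.

```typescript
type SealedMessage = { iv: number[]; sealed: number[] };

async function sealMessage(text: string, key: CryptoKey): Promise<SealedMessage> {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ct = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, new TextEncoder().encode(text));
  return { iv: Array.from(iv), sealed: Array.from(new Uint8Array(ct)) };
}

async function openMessage(msg: SealedMessage, key: CryptoKey): Promise<string> {
  const pt = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: new Uint8Array(msg.iv) },
    key,
    new Uint8Array(msg.sealed),
  );
  return new TextDecoder().decode(pt);
}

// Device side: seal the prompt, send it, open the reply. Only ciphertext crosses the wire.
async function askEnclave(
  prompt: string,
  sessionKey: CryptoKey,
  post: (body: SealedMessage) => Promise<SealedMessage>,   // transport to the attested endpoint
): Promise<string> {
  const reply = await post(await sealMessage(prompt, sessionKey));
  return openMessage(reply, sessionKey);
}

// Mock "enclave" so the sketch runs end to end: it decrypts, pretends to run the model,
// and re-encrypts the answer. In reality this step happens inside the verified TEE.
const sessionKey = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]);
const mockEnclave = async (body: SealedMessage) => sealMessage(`echo: ${await openMessage(body, sessionKey)}`, sessionKey);

console.log(await askEnclave("What is a TEE?", sessionKey, mockEnclave));   // "echo: What is a TEE?"
```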

How we know the LLM isn't logging: The enclave runs code that can't be tampered with or inspected, even by us. Through remote attestation, your client verifies it's running the exact open-source code we publish, which has no logging. The hardware prevents any data from leaving the enclave unencrypted.
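One way to picture the "exact open-source code" claim: anyone can rebuild the published code reproducibly, hash the resulting enclave image, and compare that hash to the measurement in the live attestation. The sketch below uses a plain SHA-384 of a hypothetical image file as a stand-in for the real PCR-style measurement, and the file path is made up.

```typescript
// Node sketch: hash your own reproducible build of the published enclave image and
// compare it to the measurement reported by the running enclave's attestation.
// A plain file hash is a stand-in here; real Nitro PCRs are computed differently.
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

async function measurementOfLocalBuild(imagePath: string): Promise<string> {
  const image = await readFile(imagePath);
  return createHash("sha384").update(image).digest("hex");
}

const localMeasurement = await measurementOfLocalBuild("./out/enclave.eif");   // hypothetical path
const liveMeasurement = "<value from the attestation document>";               // placeholder

console.log(
  localMeasurement === liveMeasurement
    ? "enclave is running the published code"
    : "measurement mismatch: do not send data",
);
```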

Trust Model: You trust the hardware vendor (AWS/NVIDIA) and the open-source code you can audit. Maple's operators and cloud admins can't access your data because the enclave's cryptographic proof guarantees the code hasn't been modified to log or leak data.

Here is a link to a technical deep dive for the backend we developed for Maple: https://blog.opensecret.cloud/opensecret-technicals/
Marks
Open-source AI isn't weak; you've just been running small models on your phone. The big ones need real infrastructure.

We bring them to you with encryption.

@nevent1qvz...
Marks
Many people expense their AI subs to their job. Ironically, I pay for Maple personally. I set it up initially to test the billing server in prod, and now it feels weird to change, even though much of my Maple usage is for business operations.
Marks
That's really fast. Did you have a long conversation with it? The longer a single chat gets, the faster it uses up credits, no matter the model.
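The reason is that each new turn resends the whole conversation as input tokens, so cumulative usage grows roughly with the square of the number of turns. A toy calculation (the numbers are made up, not Maple's pricing):

```typescript
// Toy model: every request pays for the full chat history again as input tokens.
function cumulativeInputTokens(turns: number, tokensPerTurn: number): number {
  let history = 0;
  let total = 0;
  for (let turn = 0; turn < turns; turn++) {
    history += tokensPerTurn;   // the context now includes this turn as well
    total += history;           // this request is billed for the whole context
  }
  return total;
}

console.log(cumulativeInputTokens(10, 500));   // 27500
console.log(cumulativeInputTokens(50, 500));   // 637500: ~23x the tokens for 5x the turns
```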
Marks
i'm just messing around with the meme. if they want to do that in their UX, i'm cool with it. people can call it what they want and eventually a term will stick