Great questions. We build everything in the open and run it inside TEEs (trusted execution environments) so that it's verifiable.
Storage: Your chats get encrypted on your device first, then synced to our servers so you can access them anywhere. The encryption key is private to your account and handled only by the TEE secure enclaves. We don't have access to it.
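To make the encrypt-before-sync step concrete, here's a minimal TypeScript sketch using the browser's Web Crypto API. It's illustrative only: in the real app the key itself is managed through the enclaves, and the function and variable names below are made up.

```typescript
// Illustrative sketch: encrypt a chat locally with AES-GCM before syncing.
// The key never leaves the client in plaintext; the server only ever stores
// { iv, ciphertext }.
async function encryptChatForSync(
  chatJson: string,
  key: CryptoKey, // 256-bit AES-GCM key (hypothetical; real key handling is enclave-managed)
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(chatJson),
  );
  return { iv, ciphertext };
}
```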
LLM Hosting: We host the app code in AWS Nitro secure enclaves. For the LLMs, we work with confidential-computing hosting providers that run NVIDIA TEEs for GPU workloads. They provide confidential computing attestations that are verified both by our servers and by the client in your web browser. This is similar to SSL certificates for websites, but even more robust.
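As a rough picture of what the browser-side check involves (hypothetical shape only; the real Nitro and NVIDIA evidence formats and the verification library are more involved, and the helper names here are placeholders):

```typescript
// Hypothetical sketch of the attestation check done in the client.
// The TLS analogy: instead of a CA vouching for a domain, the hardware
// vendor's root of trust vouches for a specific enclave.
interface AttestationDoc {
  payload: ArrayBuffer;       // signed claims, including code measurements
  certificateChain: string[]; // leaf -> ... -> vendor root (AWS / NVIDIA)
  signature: ArrayBuffer;
}

// Placeholder primitives; in practice a COSE/X.509 library does this work.
declare function verifyCertChain(chain: string[], trustedRoot: string): Promise<boolean>;
declare function verifyDocSignature(doc: AttestationDoc): Promise<boolean>;

async function attestationIsTrusted(
  doc: AttestationDoc,
  vendorRootCert: string,
): Promise<boolean> {
  const chainOk = await verifyCertChain(doc.certificateChain, vendorRootCert);
  const sigOk = await verifyDocSignature(doc);
  return chainOk && sigOk; // anchored in the hardware vendor, not in us
}
```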
Encryption: Your prompts are encrypted on your device and only decrypted inside the verified enclave. Responses get encrypted inside the enclave and only decrypted on your device.
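For the curious, here's a simplified sketch of how a prompt can be sealed to the enclave, assuming the enclave's public key comes from a verified attestation document. The exact key exchange and wire format may differ; the names are illustrative.

```typescript
// Illustrative sketch: hybrid encryption of a prompt to the enclave's attested key.
// Only the enclave, which holds the matching private key, can derive the session
// key and read the prompt; the response comes back encrypted under the same key.
async function encryptPromptForEnclave(
  prompt: string,
  enclavePubKey: CryptoKey, // taken from a verified attestation document
): Promise<{ ephemeralPublicKey: JsonWebKey; iv: Uint8Array; ciphertext: ArrayBuffer }> {
  // Ephemeral ECDH key pair for this request
  const ephemeral = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" },
    true,
    ["deriveKey"],
  );

  // Shared AES-GCM session key derived from our ephemeral key + the enclave's key
  const sessionKey = await crypto.subtle.deriveKey(
    { name: "ECDH", public: enclavePubKey },
    ephemeral.privateKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );

  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    sessionKey,
    new TextEncoder().encode(prompt),
  );

  // The server relays { ephemeralPublicKey, iv, ciphertext } to the enclave;
  // it can store or forward them, but cannot decrypt.
  return {
    ephemeralPublicKey: await crypto.subtle.exportKey("jwk", ephemeral.publicKey),
    iv,
    ciphertext,
  };
}
```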
How we know the LLM isn't logging: The enclave runs code that can't be tampered with or inspected, even by us. Through remote attestation, your client verifies it's running the exact open-source code we publish, which has no logging. The hardware prevents any data from leaving the enclave unencrypted.
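Concretely, the attestation includes a measurement (a hash) of the enclave image, and anyone can reproduce that hash from the published source with a reproducible build. A minimal sketch of the client-side comparison, with a made-up measurement value:

```typescript
// Sketch only: pin the attested code measurement (e.g. Nitro PCR0) to the value
// published with the open-source release. The constant below is a placeholder.
const EXPECTED_MEASUREMENT = "placeholder-hash-published-with-each-release";

function enclaveRunsPublishedCode(attestedMeasurement: string): boolean {
  // Any modification of the image (e.g. adding logging) changes its hash,
  // so this check fails and the client refuses to send data.
  return attestedMeasurement.toLowerCase() === EXPECTED_MEASUREMENT.toLowerCase();
}
```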
Trust Model: You trust the hardware vendors (AWS/NVIDIA) and the open-source code, which you can audit. Maple's operators and cloud admins can't access your data, because remote attestation cryptographically proves the enclave is running that unmodified code and hasn't been changed to log or leak data.
Here is a link to a technical deep dive on the backend we developed for Maple:
https://blog.opensecret.cloud/opensecret-technicals/