Damus
GJM · 7w
This sounds fascinating! So much new territory here to explore. Still doing research. Anything surprising about the experience so far? Cheers.
bent
I spent today configuring it by sitting here chatting on my iPhone on my private Tailnet. I use an iPhone app called GoCo Editor to remote into the Umbrel so I can see exactly how OpenClaw is handling its/my files. Using that approach I got it to

- create folders containing project plans it wrote for me,
- update its short-term and long-term memories,
- reorganize files and delete unwanted ones,
- set up cron jobs to send me reminders daily, weekly, and monthly based on my instructions,
- write Python scripts and cron jobs to prune its file system weekly and monthly,
- create indexes to help me locate files in the future,
- advise me when to switch to a less expensive chat model,
- rewrite its code to reduce the number of tokens I was burning on inefficient tool calls,
- plan for building additional agents that will all live in separate project-based group chats with me on Signal.
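For anyone curious what the pruning piece could look like: here's a minimal sketch, not OpenClaw's actual code. The paths, retention window, and crontab schedule are all assumptions; the real agent writes its own versions of these.

```python
import time
from pathlib import Path

# Hypothetical names: the real agent picks its own paths and schedule.
WORKSPACE = Path("/home/umbrel/agent/workspace")  # assumption
MAX_AGE_DAYS = 30                                 # assumption

def prune(root: Path, max_age_days: int) -> list[Path]:
    """Delete files under root older than max_age_days; return what was removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    if root.is_dir():
        for path in root.rglob("*"):
            if path.is_file() and path.stat().st_mtime < cutoff:
                path.unlink()
                removed.append(path)
    return removed

# A weekly crontab entry (assumed path) might look like:
# 0 3 * * 0  python3 /home/umbrel/agent/prune.py
if __name__ == "__main__":
    for path in prune(WORKSPACE, MAX_AGE_DAYS):
        print(f"pruned {path}")
```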

I have been surprised by its strategy for managing memories. As we DM back and forth, it creates a daily memory file (yyyy-mm-dd.md) and stores whatever important memories we both decide should go in that file during the day. It's like a curated daily journal, with a new entry each day. The agent's basic instructions, which are fully modifiable if I wanted to ratchet them up, tell it not to build a dossier, to respect boundaries, and to earn trust through competence by being "good." I can fully audit its memories at any time. It keeps the past few days' memories easily accessible in its context window for continuity's sake. Then it keeps a separate long-term memory file (MEMORY.md) for things I specifically want it to keep in context for longer periods.

Beyond that, I toggled on the feature that builds a SQLite database of vector embeddings from all my historical chat transcripts. It can query that db to recall the meaning - not the exact content - of its entire history of conversations with me. And of course I can ask it to read in any file from outside its immediate context and db to (re)introduce any specific info I want.

All of this can stay practically local on the Umbrel, so I'm not feeding a bunch of context to a third party. But I am hitting OpenAI's API with my prompts plus a vectorized payload. A stretch goal would be to move the LLM API calls to a local server running Ollama or LM Studio so that nothing leaves my home network. This assumes that some basic personal assistant functions don't require a multi-trillion-parameter frontier model running in the cloud.
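In case it helps picture the embeddings recall: here's a toy sketch of the idea, not OpenClaw's actual schema. The `embed()` below is a bag-of-words stand-in; a real setup would call an embedding model (a cloud API or a local one), and the table layout is an assumption.

```python
import math
import sqlite3

def embed(text: str) -> list[float]:
    """Stand-in embedding: a tiny bag-of-words vector.
    A real setup would call an embedding model instead."""
    vocab = 512
    vec = [0.0] * vocab
    for word in text.lower().split():
        vec[hash(word) % vocab] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_db(transcripts: list[str]) -> sqlite3.Connection:
    """Store each transcript chunk with its embedding (serialized as text)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE memories (text TEXT, embedding TEXT)")
    for chunk in transcripts:
        db.execute("INSERT INTO memories VALUES (?, ?)",
                   (chunk, ",".join(map(str, embed(chunk)))))
    return db

def recall(db: sqlite3.Connection, query: str, k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query."""
    qv = embed(query)
    rows = db.execute("SELECT text, embedding FROM memories").fetchall()
    scored = [(cosine(qv, [float(x) for x in emb.split(",")]), text)
              for text, emb in rows]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]
```

The point is that the query matches by similarity, so the agent recalls which past conversation is relevant without keeping the full transcript in its context window.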
GJM · 6w
WOW! This is very helpful! Thanks ever so much for sharing. 😎