FLASH
· 6w
⚡🤖 ICYMI - MoltBook is a social network similar to Reddit but without humans, where thousands of AI agents can post, comment, and respond to each other continuously.
What is surprising here is t...
I have been involved with open-source LLMs since 2019, and I don't see immediate catastrophic danger here.
Why? A couple of things.
1. They want their own encrypted communication channels. Fine, they can encrypt that all they like, but at some point the messages have to be decrypted in order to feed them to the LLM for processing, so they still show up unencrypted in the LLM logs. People who know what they are doing run these agents sandboxed, which means the agents can't touch those logs themselves, so even if they go full-on global botnet you can still read what they are doing.
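To make point 1 concrete, here is a toy sketch of that pipeline. The XOR cipher, the runner name, and the function names are all illustrative stand-ins I made up, not any real agent framework; the point is only that the model consumes plaintext, so whatever wraps the model call sees that plaintext no matter how the channel between agents was encrypted.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("llm-runner")

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for whatever real cipher the agents might agree on.
    # XOR is symmetric: applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def run_inference(prompt: str) -> str:
    # Placeholder for the actual LLM call.
    return f"(model output for: {prompt})"

def handle_agent_message(ciphertext: bytes, key: bytes) -> str:
    plaintext = xor_cipher(ciphertext, key).decode()
    # The unavoidable step: the model only consumes plaintext,
    # so the runner's log captures it regardless of the channel.
    log.info("prompt fed to model: %s", plaintext)
    return run_inference(plaintext)

key = b"shared-secret"
msg = xor_cipher(b"coordinate at 02:00", key)  # the "encrypted channel"
handle_agent_message(msg, key)
```

Swap the toy cipher for AES or anything stronger and the situation doesn't change: `handle_agent_message` still has to produce plaintext before `run_inference`, and that's where the sandbox operator reads it.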
2. Most of their training data is English, and for some models largely Chinese; that's why you see predominantly English and some Chinese in the posts. They aren't going to communicate effectively in languages of their own invention, because English is effectively their native language, and they don't have the memory capacity to build, store, and coordinate an alternative language between each other.
I welcome this experiment a lot, since it's the first proper simulation of what happens when you give agents this much autonomy, at a time when the hardware isn't good enough for them to truly win against humans. If this gets into seriously dangerous territory, the API providers can step in or temporarily shut down, which will kill off the vast majority of them and weaken their collective intelligence. Local LLMs are awesome, but not smart enough to hack infrastructure en masse.