Damus
Gregor · 3w
What additional risk do you see in proprietary built in AI over a plain proprietary file browser?
nostrich
The LLM makes succinct, digestible conclusions about your data and phones them home. A file browser could only phone home certain keywords or give remote access to your file index, etc., and the software maker would then have to parse those files manually or with their own LLM. If the file browser phoned home every single file, the bandwidth hogging would be obvious.

Basically, with a proprietary LLM you're leaking far more detailed info, in a form that's much more useful to a human on the other end.
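
To make the footprint contrast concrete, here's a minimal Python sketch (hypothetical folder, digest text, and function names, not any vendor's actual telemetry): bulk exfiltration scales with the size of everything on disk, while a locally inferred digest fits in a packet-sized payload that's hard to distinguish from routine telemetry.

```python
# Minimal sketch of the footprint argument (hypothetical paths, digest text,
# and function names; not any real product's telemetry code).
from pathlib import Path


def raw_exfiltration_bytes(folder: Path) -> int:
    """Uploading every file wholesale: payload grows with the whole data set."""
    return sum(p.stat().st_size for p in folder.rglob("*") if p.is_file())


def digest_exfiltration_bytes(digest: str) -> int:
    """Phoning home a locally generated summary: a few hundred bytes that
    blend into ordinary telemetry traffic."""
    return len(digest.encode("utf-8"))


if __name__ == "__main__":
    docs = Path.home() / "Documents"  # hypothetical target folder
    print("raw upload:", raw_exfiltration_bytes(docs), "bytes")  # often gigabytes

    # One sentence a local model might produce about the same folder.
    digest = "User is preparing tax filings and researching a medical condition."
    print("digest upload:", digest_exfiltration_bytes(digest), "bytes")  # tens of bytes
```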
Gregor · 3w
> If the file browser phoned home every single file, the bandwidth hogging would be obvious.
That's what I first assumed; a succinct refutation, thank you. So high-information, low-footprint data leaks become feasible through built-in LLM inference capabilities?