Damus
Stefan Eissing · 10w
"The paper cites a race condition that an agent attempted to fix by inserting a delay in the code with a Thread.Sleep" Meaning, it was in its training data. Meaning, for better agents you need b...
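(Worth spelling out why that "fix" is in the anti-pattern bucket: a sleep only reshuffles the timing, it doesn't make the contended update atomic. A minimal Java sketch, hypothetical and not the code from the paper, contrasting the racy counter with an atomic one:)

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    // Racy: `unsafeCount++` is a non-atomic read-modify-write,
    // so concurrent increments can be lost. A Thread.sleep()
    // anywhere in here would only change the timing, not fix it.
    static int unsafeCount = 0;

    // The actual fix: make the update itself atomic.
    static final AtomicInteger safeCount = new AtomicInteger();

    // Run the same body on two threads and wait for both.
    static void onTwoThreads(Runnable body) throws InterruptedException {
        Thread a = new Thread(body);
        Thread b = new Thread(body);
        a.start(); b.start();
        a.join(); b.join();
    }

    public static void main(String[] args) throws InterruptedException {
        onTwoThreads(() -> { for (int i = 0; i < 100_000; i++) unsafeCount++; });
        onTwoThreads(() -> { for (int i = 0; i < 100_000; i++) safeCount.incrementAndGet(); });

        System.out.println("unsafe: " + unsafeCount + " (may be less than 200000)");
        System.out.println("safe:   " + safeCount.get());
    }
}
```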
Troed Sångberg
@nprofile1q... This is exactly why I don't see how current LLMs (statically trained) can ever produce secure code: there's so little of it in the training data, and you can't really tag it as such either, since a secure code snippet plus a secure code snippet can well become an insecure code snippet.

We would need a new paradigm where LLMs can retrain themselves continuously, as humans do, and over time learn how to write better code.

I'm sure it'll happen, but we're not there.