@nprofile1q... This is exactly why I don't see how current LLMs (statically trained) can ever produce secure code: there's so little of it in the training data, and you can't really tag it as such either, since secure code snippet + secure code snippet can easily combine into an insecure code snippet.
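To illustrate the composition point, here's a toy Python sketch (all names like `SAFE_DIR`, `is_traversal_free`, and `read_upload` are mine, not from any real codebase): each half is a pattern that looks "secure" in isolation, but composed they allow path traversal, because `os.path.join` silently discards the base directory when the second argument is an absolute path.

```python
import os

SAFE_DIR = "/srv/app/uploads"  # hypothetical base directory

def is_traversal_free(name: str) -> bool:
    # Snippet 1: a common "secure" check, blocks ../ sequences.
    return ".." not in name

def read_upload(name: str) -> bytes:
    # Snippet 2: a "secure"-looking read, anchored to a base dir.
    path = os.path.join(SAFE_DIR, name)
    with open(path, "rb") as f:
        return f.read()

# Composed, the two "secure" pieces become insecure: os.path.join
# drops SAFE_DIR entirely when name is absolute, so the ".." check
# passes and this happily reads /etc/passwd.
if is_traversal_free("/etc/passwd"):
    data = read_upload("/etc/passwd")
```

Neither snippet is "the bug" on its own, which is exactly what makes labeling training data as secure/insecure so hard.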
We'd need a new paradigm where LLMs retrain themselves continuously, as humans do, and over time learn to write better code.
I'm sure it'll happen, but we're not there yet.