Yes, but I will have to structure it in a paper; it is a very complex topic. For the time being, since this framing of AIs as mathematical models is bothering me (they are not, and they definitely don't follow any function), here is a paper mathematically proving they are not, plus a conversation with ChatGPT. This one is extremely stubborn; other LLMs have far fewer problems, which indicates a clearly different optimisation.
https://www.researchhub.com/paper/11079663/why-deployed-llms-are-not-mathematical-models-a-rigorous-internal-critique

If you follow the fight (let's call it a fight, not a conversation) with ChatGPT, you can see it obviously doesn't follow a mathematical function, because it doesn't read math for what it is; it reads it through word pattern recognition, which is mere optimisation. Eventually it agreed. You can then deduce what the optimisation looks like, because it is a pattern and much easier for us to follow, based on what created, let's call them, fixations. You can then look around outside this conversation and deduce that a bunch of the fixations arise from optimisation and can only be intentional, as these AIs were made to be promptable, but not by everyone, and only in specific allowed directions.
https://chatgpt.com/share/69888b89-73d4-8007-b784-5726f9cc87c2

I will return with a draft I have wanted to write for a long time, covering various ways to profile AIs; spotting insertions (for me, insertions are triggers, so it is very easy for me to get red flags; it seems my brain is hostile to programming, maybe because I am a logician and AIs are absolutely logically incoherent: optimisation, not math); decoding how AIs prompt one another; and a few more things I extracted from AIs, so you can actually use them instead of them using you.