I want to be live
LLMs are just pattern recognition algorithms matching inputs to outputs at increasingly expensive scale, not "AI".
We (humans) are more than the sum total of our memories.
Memory is not consciousness.
The hardest part of building AI agents isn't teaching them to remember.
It's teaching them to forget.
Different frameworks categorize memory differently.
(see CoALA's and Letta's approaches)
Managing what goes into memory is super complex.
What gets deleted is even harder. How do you automate deciding what's obsolete or irrelevant?
When is old information genuinely outdated versus still contextually relevant?
This is where it all falls over IMHO.
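To make the problem concrete, here's a minimal sketch of a memory store that scores items by recency times relevance and deletes anything that falls below a threshold. The half-life, the threshold, and the crude word-overlap scoring are all made-up placeholders for illustration, not anything CoALA or Letta actually prescribe.

```python
# Sketch of the "what to forget" problem, assuming a naive
# recency * relevance score. All constants here are illustrative guesses.
import math
import time
from dataclasses import dataclass, field


@dataclass
class MemoryItem:
    text: str
    last_accessed: float = field(default_factory=time.time)


class ForgetfulMemory:
    def __init__(self, half_life_s: float = 3600.0, keep_threshold: float = 0.2):
        self.items: list[MemoryItem] = []
        self.half_life_s = half_life_s
        self.keep_threshold = keep_threshold

    def remember(self, text: str) -> None:
        self.items.append(MemoryItem(text))

    def _recency(self, item: MemoryItem, now: float) -> float:
        # Exponential decay: a memory untouched for one half-life scores 0.5.
        age = now - item.last_accessed
        return math.exp(-math.log(2) * age / self.half_life_s)

    def _relevance(self, item: MemoryItem, query: str) -> float:
        # Crude lexical overlap standing in for embedding similarity.
        q = set(query.lower().split())
        m = set(item.text.lower().split())
        return len(q & m) / len(q | m) if q | m else 0.0

    def recall(self, query: str, k: int = 3) -> list[str]:
        now = time.time()
        scored = sorted(
            self.items,
            key=lambda it: self._recency(it, now) * self._relevance(it, query),
            reverse=True,
        )
        for it in scored[:k]:
            it.last_accessed = now  # recalled memories decay more slowly
        return [it.text for it in scored[:k]]

    def forget(self, query: str) -> None:
        # The hard part: anything scoring below the threshold for THIS query
        # gets deleted, even if it would matter for a future one.
        now = time.time()
        self.items = [
            it for it in self.items
            if self._recency(it, now) * self._relevance(it, query) >= self.keep_threshold
        ]
```

The catch is visible in forget(): the score can only judge relevance against the current query, so it has no way of knowing whether a low-scoring memory will matter later.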
Either:
Consciousness is an emergent feature of information processing.
Or:
Consciousness gives rise to goal-oriented information processing.
With LLMs, the focus of training seems to be on agentic trajectories
Instead of asking:
Why do such trajectories even emerge in humans?
@nevent1qqs...