My response to AI doomerist predictions about job losses.
I think we need to look at this more carefully by answering two questions:
What is AI strong at, so that it removes those responsibilities from humans?
What is AI weak at, so that these responsibilities fall on humans?
Once we see how this decomposes into granular points, our role as human workers going forward becomes clearer.
Consider:
https://alwaysthehorizon.substack.com/p/urban-bugmen-and-ai-model-collapse ("Lesson from AI Bugmen and AI Model Collapse: A Unified Theory"). The "unified" in the title refers to unifying across AI and human life. The key lesson is that when a learner is trained on material that was itself produced by training, rather than by contact with reality, the training becomes unmoored from reality and collapses. This applies to AI models just as we recognize that humans come out dumber from learning inside our education system than from interacting with the real world.
Recognize: AI training data is unmoored from reality because AI has no sensory apparatus to perceive reality directly. AI relies on humans to interact with reality and translate their perceptions into text, audio, video, and data sets. This is why you can never trust an AI to answer "is this true?": it has no way to test a claim against reality. AI can only tell you what it was trained to say, not what is actually true.
Recognize: AI has no will of its own. It cannot choose a purpose or set a goal on its own initiative. It is a tool: it does not know what to do, or why it needs to be done, without being told by humans. Humans have purpose and meaning. Humans assign value to things based on a value system, and we set goals by criteria that AI is incapable of understanding. This is why the future is largely about humans providing natural-language specifications of intent to drive what AI produces.
Recognize: the reason AI agents require human-in-the-loop review and approval of actions is that LLM output is unreliable, partly in correctness (which benchmark trends suggest will improve), but more importantly in "taste". What you deem "good", whether in code, in images, or in music, is something AI cannot be relied on to judge. This is why humans remain in the loop for review and approval: to apply their taste.
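The review-and-approval loop described above can be sketched in a few lines. This is a minimal illustration, not any real agent framework's API; every name here (ProposedAction, require_approval, the reviewer callable) is hypothetical, standing in for the hooks that agent tooling typically exposes for gating actions on human judgment.

```python
# Minimal sketch of a human-in-the-loop approval gate for an AI agent.
# All names (ProposedAction, require_approval) are hypothetical; real
# agent frameworks expose similar hooks under different APIs.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # what the agent wants to do, in plain language
    payload: str       # the actual output (code, text, a command, ...)

def require_approval(action: ProposedAction, reviewer) -> bool:
    """Gate the action on a reviewer's verdict.

    `reviewer` is any callable that inspects the action and returns
    True (approve) or False (reject). It stands in for the human
    applying the judgment and taste the model cannot supply itself.
    """
    return reviewer(action)

# Example: an automated policy can pre-filter risky actions, but anything
# it rejects goes back to a human rather than executing unreviewed.
action = ProposedAction(
    description="Delete 30 stale feature branches",
    payload="git branch -D feature/*",
)
approved = require_approval(
    action,
    reviewer=lambda a: "delete" not in a.description.lower(),
)
print("approved" if approved else "sent back for human review")
```

The point of the sketch is structural: the agent proposes, but a judgment external to the model decides whether the action runs, which is exactly where human taste enters the loop.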