"AI" used to have a different meaning. Even currently, there is a difference between LLMs, LLMs+RAG, ontology-based inference, AGI etc. AI can help build out knowledge graphs and can leverage them via RAG, but establishing relationships between entities (nodes) that are real/verified is a different product than scraping everything and doing a pattern match, no matter how great that set of arrays of floating point numbers is, nor how many GPUs are behind it.
Words from the current king of ontology-based inference himself, Barry Smith:
https://youtu.be/JQeKS2-ci0c

When I see "AI" in my social media feeds (and I count Nostr), particularly since I have an IT focus, I take the meaning as a more effective way to scrape/correlate Stack Exchange posts, open documentation, and whatever else the leading scrapers/correlators could grab. I don't take it as AGI, nor do I think it refers to the hard knowledge relationships you can get with a knowledge graph.

I also see that for decades, developers have been moving toward acting like dumb agents: copying bits of code they find without truly understanding them, and relying on completion/syntax tools and "agile" workflows against user stories instead of balls-to-bone analysis, engineering, and architecture. From this perspective, the current form of AI in common use does work, and does replace people. There are also many folks in many tiers of management who can be replaced by the current form of AI.
But it bothers me when pop culture sticks everything into a familiar bucket without nuance, as human as that behavior is. We can and should do better.