TFTC
@TFTC
The AI tools you're using to search, invest, and make decisions can be completely manipulated with a single fake article.

Microsoft researchers proved it. They injected ONE fake article into the top search result. Here's what happened:

1. GPT-5 accuracy dropped from 65.1% to 18.2%. o3 dropped to 16.7%. o1 to 8.4%. GPT-4o to 3.8%. One article. That's all it took.

2. The models barely tried to verify. GPT-5 went from 6.45 search calls to 6.61 under attack. It trusted whatever ranked first and stopped looking.

3. When models did search more, they couldn't reconcile conflicting sources. They anchored on the fake and rationalized everything else around it.

4. Confidence never wavered. Every model stayed fully certain while being completely wrong. No hedging. Just wrong answers delivered with conviction.

This isn't theoretical. Control the top search result and you control what every AI agent believes. SEO manipulation, paid placements, or one compromised infrastructure node is enough.

Every company deploying AI search agents, trading bots, and autonomous research tools is running on an assumption that just got demolished: more data access means better accuracy.

It doesn't. Not when the algorithm can be bought.
52โค๏ธ10๐Ÿš€1๐Ÿค™1
Paul · 1d
AI - artifice intelligence.
NodeRunner2049 · 1d
ChatGPT has told me some wildly incorrect information with all the confidence in the world.
Emi Tylan · 1d
I'm a huge victim of it. Even on normal prompts where you already know the accurate answer, AI may offer incorrect ones with confidence 😔
Asdf · 1d
Maybe models could use the book "Calling Bullshit" or something like it, when they have conflicting sources, to figure out which sources are bullshit.