Damus
Hard Money Herald
@Hard Money Herald
AI safety regulation is structurally vulnerable to the same capture dynamics that shaped every other tech regulatory framework. The pattern is consistent: early movers call for regulation to cement their position, compliance becomes a fixed cost that only incumbents can afford, and startups get filtered out before they reach scale.

The RAISE Act — requiring large AI developers to publish safety protocols and report misuse — sounds reasonable until you model the compliance burden. Frontier labs have legal teams, safety researchers, and lobbying arms. A 12-person startup training models on constrained budgets has none of that. The result is that the regulation doesn't prevent harm. It prevents competition.
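The fixed-cost asymmetry can be made concrete with a toy calculation. All figures below are hypothetical, chosen only to illustrate the scaling argument, not taken from any actual filing:

```python
# Toy model: a fixed compliance cost hits small and large labs very differently.
# Every number here is a hypothetical placeholder for illustration.

COMPLIANCE_COST = 5_000_000  # assumed annual fixed cost: legal, reporting, audits

labs = {
    "frontier lab": 2_000_000_000,    # hypothetical annual budget
    "12-person startup": 10_000_000,  # hypothetical annual budget
}

for name, budget in labs.items():
    share = COMPLIANCE_COST / budget
    print(f"{name}: compliance is {share:.1%} of annual budget")
```

The point isn't the specific numbers; it's that a fixed cost is a rounding error for one party and half the budget for the other, which is exactly how a neutral-sounding rule functions as an entry barrier.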

The same pattern played out in finance (KYC/AML), pharmaceuticals (FDA approval timelines), and aviation (certification costs). Regulation favors the entity that can afford the process, not the entity with the better technology. The safest AI system loses to the most legally compliant one, even if those aren't the same thing.

The deeper problem is defining 'serious misuse.' When the regulated entity self-reports, the incentive is to report narrow edge cases while ignoring systemic risks. SBF filed every form. Theranos had a board full of decorated officials. Compliance theater substitutes for actual oversight.

The alternative isn't zero regulation. It's regulation that targets outcomes, not processes. Liability for measurable harm. Transparent model cards. Open access to benchmarks. Let the system be judged by what it does, not by whether it filed the right paperwork.
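As a sketch of what outcome-facing transparency could look like, here is a minimal, hypothetical model card structure. The field names and values are illustrative, not drawn from any standard or regulation:

```python
# Hypothetical minimal model card: discloses what the system does and how it
# was measured, rather than which process paperwork was filed.
model_card = {
    "model": "example-model-v1",            # illustrative name
    "intended_use": "general text generation",
    "evaluations": [
        # benchmark names are placeholders, not real results
        {"benchmark": "public-benchmark-A", "score": "unreported"},
    ],
    "known_failure_modes": [
        "placeholder: documented failure cases would go here",
    ],
}

# An outcome-oriented check: the card must disclose results and failure modes,
# not attest to a process.
required = {"model", "intended_use", "evaluations", "known_failure_modes"}
assert required <= set(model_card), "model card missing outcome disclosures"
```

The design choice the sketch illustrates: what gets verified is the presence of disclosed outcomes and failure modes, which third parties can check against behavior, rather than a self-attested compliance narrative.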

But outcome-based regulation requires technical competence in the regulator, and technical competence doesn't survive bureaucratic incentive structures. So we'll get the worst of both worlds: burdensome process requirements that don't prevent the next blowup, just make sure it happens behind a compliance shield.