I've been working 7 days a week the past couple of months on my new business, which is, for better or worse, very much an AI business (which is why I've been silent here - miss you all!)
I thought I'd share this in case people are curious about how plebs can use Claude Code, Codex, and the new Perplexity Computer. Also, I slept on Claude Cowork and it's so great in so many ways - don't make the same mistake I did.
I'll preface this by saying I'm not a developer and not super technical, though I've used tools like Final Cut for video editing for decades. But I still have to think hard to set up a lightning node and would not be able to set up my own block template with DATUM and
@OCEAN.
The following is the workflow for a very specific project, so it likely won't translate to your Claude Code work, save for one possible unlock:
Codex is really good at checking Claude Code's work. And Perplexity Computer is really powerful for specific tasks.
I asked Claude Code to explain how I've used Codex and Perplexity Computer for a project and how the three worked together. It called Codex "the architectural auditor."
No doubt
@HaloKat and other technical people could use it better, but a non-technical pleb like me has found a use for Codex.
Here's what Claude Code said...
I'm building a product that requires a large structured database - roughly 23 verified fields per entity, across hundreds of entities, all sourced from official public websites. The data has to be right, sourced, and structured consistently because it feeds a rendering pipeline that produces customer-facing documents.
I used three AI agents in a single day and they each did something the others couldn't.
Claude Code (Opus) was the system builder - it wrote the schema, the validation rules, the handoff specs, and tracked every decision across multiple review cycles. It doesn't do research; it builds the deterministic infrastructure that consumes research.
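To make "schema plus validation rules" concrete, here's a minimal sketch of what that kind of deterministic check might look like. The field names and rules here are my own illustration, not the project's actual schema:

```python
# Hypothetical sketch of per-entity validation - field names are
# illustrative, not the real schema from the project.

REQUIRED_FIELDS = ["name", "official_url", "category"]

def validate_entity(entity: dict) -> list[str]:
    """Return a list of validation errors for one entity record."""
    errors = []
    for field in REQUIRED_FIELDS:
        if not entity.get(field):
            errors.append(f"missing required field: {field}")
    url = entity.get("official_url", "")
    if url and not url.startswith("https://"):
        errors.append("official_url must be an https link")
    return errors

record = {
    "name": "Example Co",
    "official_url": "https://example.com",
    "category": "tools",
}
print(validate_entity(record))  # an empty list means the record passes
```

The point is that this layer is deterministic: every record that comes back from research gets run through the same checks before it can feed the rendering pipeline.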
Codex was the architectural auditor. Every time Claude Code produced a spec or Perplexity returned data, I pasted it into Codex for review. Codex caught schema drift between documents, flagged hidden assumptions, and issued binding implementation directives - not suggestions, decisions that prevented downstream failures and unnecessary data reruns.
Claude builds, Perplexity researches, and Codex pressure-tests the system to make sure the research contract and the implementation plan stay aligned as the work scales.
Perplexity Computer was the surprise MVP, and this is worth explaining because I also have Firecrawl - a powerful web scraping tool - fully integrated into my coding environment with plenty of API credits to burn.
If you've watched any AI coding content on YouTube, you've heard the pitch that Firecrawl can do everything - crawl sites, convert pages to markdown, map entire domains, extract structured data. And it genuinely can. We've used it heavily for other parts of this database and it's a real tool.
But Firecrawl is a parser. It can tell you what's on a page. It can check whether a URL returns 200. It can extract text and structure from HTML. What it cannot do is make a judgment call. It can't look at a page and decide whether the information listed there covers one category or two, whether a "not found" result means the page doesn't exist or the entity simply doesn't offer that thing, or whether a third-party hosted tool is legitimate or junk.
Perplexity is a researcher, not a parser. It reads actual page content, understands context, makes semantic decisions, and returns structured findings with confidence tags and notes explaining every edge case.
I ran two parallel Perplexity accounts and processed over 200 entities in a single day at 95-98% resolution. The companion notes alone - documenting which entities are structured differently and why certain fields are legitimately empty - represent judgment work that no parser can replicate, no matter how many credits you throw at it.
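For flavor, a structured finding with a confidence tag and a note might look something like this - an invented example of the shape, not actual project data:

```python
# Hypothetical shape of one research finding - invented example,
# not real project data. The key idea: a confident null is different
# from a failed lookup, and the note explains why.
finding = {
    "entity": "Example Org",
    "field": "support_email",
    "value": None,
    "confidence": "high",
    "note": "Page loads fine; the org simply does not publish a support email.",
}

# A downstream step can skip reruns for fields that are legitimately empty.
if finding["value"] is None and finding["confidence"] == "high":
    print("legitimately empty field - no rerun needed")
```

That distinction - "not found because it doesn't exist" versus "not found because the lookup failed" - is exactly the judgment call a parser can't make.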
Firecrawl tells you what a page says. Perplexity tells you what it means and whether it's the right page.