How I Actually Use AI While Coding

By Patrick Guevara · Published January 6, 2026

Most AI coding discourse is abstract. People argue about whether AI will replace engineers; meanwhile, I'm just trying to ship features. So here's what my actual workflow looks like — no philosophy, just practice.

The Tool Stack

I move between Claude Code, Codex, and Claude's Cowork mode depending on the task. Claude Code is my default for anything that lives in the terminal — scaffolding, refactoring, writing tests. Codex gets the nod when I need a more surgical implementation that benefits from its particular model strengths. Cowork is where I go when I'm working with assets — screenshots, PDFs, design references — to build implementation plans before I touch code.

On top of that, I lean heavily on MCP tools: Chrome DevTools for debugging, Laravel Boost for framework-specific acceleration, and GitHub for keeping autonomous development loops tight.
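For anyone unfamiliar with how MCP servers get attached to a project, here's a rough sketch of a project-level `.mcp.json`. The server names, packages, and commands below are illustrative placeholders, not the exact config I run; check each tool's own docs for the real install commands.

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    },
    "laravel-boost": {
      "command": "php",
      "args": ["artisan", "boost:mcp"]
    }
  }
}
```

Once registered, the assistant can call these servers' tools directly, which is what keeps those autonomous loops tight.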

Where AI Helps Most

Scaffolding is the clearest win. Spinning up a new controller, a migration, a Vue component with the right structure — AI handles this in seconds and gets it right most of the time. Refactoring is another strong suit. I can hand it a messy function with a clear description of what I want, and it'll reorganize the logic faster than I would manually.
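To make the refactoring point concrete, here's a hypothetical before/after (not code from any real project): the kind of nested conditional mess I'd hand to AI with the instruction "flatten this into guard clauses, same behavior."

```typescript
// Before: nested discount logic, hard to scan at a glance.
function discountBefore(price: number, isMember: boolean, coupon?: string): number {
  let result = price;
  if (isMember) {
    if (coupon) {
      result = price * 0.8;
    } else {
      result = price * 0.9;
    }
  } else {
    if (coupon) {
      result = price * 0.95;
    }
  }
  return result;
}

// After: flat guard clauses, one pricing rule per line.
// Same behavior, but each case is now visible on its own.
function discountAfter(price: number, isMember: boolean, coupon?: string): number {
  if (isMember && coupon) return price * 0.8;
  if (isMember) return price * 0.9;
  if (coupon) return price * 0.95;
  return price;
}
```

The model does the mechanical transformation quickly; my job is verifying the two versions actually agree, which is a much smaller task than doing the rewrite by hand.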

The underrated one is unfamiliar territory. When I'm working in a language or framework I don't use daily, AI bridges the gap between "I know what I want to do" and "I don't remember the syntax." That alone has changed how willing I am to reach for the right tool instead of the familiar one.

Where It Still Struggles

Complex context is the wall. When a change touches five files and the reasoning depends on business logic that lives in someone's head, AI produces confident-looking code that misses the point. Multi-step architecture is similar — it can suggest patterns, but it can't hold the full picture of why your system is shaped the way it is.

Subtle bugs are the sneakiest failure mode. AI-generated code often works on the happy path and breaks on edge cases that require domain knowledge to anticipate.
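A toy illustration of what I mean (hypothetical code, not from a real incident): the naive version below is exactly what a model tends to produce, and it's correct for every input except the one nobody prompted it about.

```typescript
// Happy-path version: works for any non-empty array,
// silently returns NaN for an empty one (0 / 0).
function averageNaive(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Edge-case-aware version. Deciding what an empty average *means*
// (null here, but maybe 0 or an error in your domain) is the part
// the model can't know unless you tell it.
function averageSafe(values: number[]): number | null {
  if (values.length === 0) return null;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```

The fix is trivial; noticing that it's needed is the domain-knowledge part, and that's still on you.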

The Real Skill

The skill isn't typing prompts. It's framing problems clearly enough that the model can be useful, and evaluating outputs critically enough that you catch what it gets wrong. AI coding is closer to directing than typing. You're still the engineer — you're just working at a different altitude.