romano.io
AI · Agentic Development · .NET · Developer Productivity · Software Architecture

Nine Months in the Trenches of Agentic Development. Three Things I Know for Sure.

After 20 years of traditional software engineering, I went all in on agentic development. Not to dabble — to build a complex new platform entirely from scratch this way. Here's what's real.

Doug Romano · 4 min read

Twenty years of traditional software engineering. Two decades of writing code by hand, architecting systems the old way, debugging with breakpoints and print statements and gut instinct.

Nine months ago, I made a deliberate choice: go all in on agentic development. Not dabble. Not experiment on weekends. Build a complex new platform entirely from scratch using agentic workflows. Push every current model to its absolute limits and find out where those limits actually are.

The reality is undeniable. We are in a fundamentally new era of development, and integrating AI into our core processes is now essential. Not optional. Not "nice to have." Essential.

Here are three practical takeaways from the trenches—not theory, not influencer clips, not conference demos. Just what I've learned building real software this way every day for nine months.

1. The Learning Curve Is Real, but the Payoff Is Massive

Working with AI agents is not a mind-reading exercise, and it's not magic. Assuming all models are identical is a rookie mistake. Claude thinks differently than GPT thinks differently than Gemini. Each has strengths. Each has failure modes. Each requires a different approach to get production-quality output.

I spent the first two months feeling slower than I would have been doing it by hand. That's the part nobody tells you. The learning curve isn't about understanding the tools—it's about learning how to orchestrate them. How to break problems down in ways that play to the model's strengths. How to structure context so the agent doesn't lose the plot on turn fifteen.

But once it clicked—once I developed the instinct for how to frame problems, how to review output, how to iterate—the velocity became something I couldn't have imagined a year ago. Production-grade software is being written today by developers who put in the rigorous effort to master orchestrating these tools. The keyword is rigorous. There are no shortcuts.

2. Filter Out the Influencer Hype

Most of the noise on your feed is just that—noise. Someone built a to-do app in 45 seconds and got a million views. Someone else "replaced their entire engineering team with AI." Someone posted a thread about how coding is dead while their actual production system was written entirely by hand.

Stop chasing trends. Stop watching demos of people building toy apps and assuming that's what production looks like. Build practical workflows that actually drive results for your architecture.

The gap between what looks impressive in a two-minute clip and what actually works in a production .NET system with real data, real edge cases, and real users is enormous. The people who are getting genuine value from AI development aren't the loudest voices online. They're heads down, building, iterating, and quietly shipping more than they ever have.

3. Decouple Your TUI Configuration

This is the most tactical takeaway, and it's the one I wish someone had told me six months ago.

If you're working on an enterprise team or a complex architecture using terminal-based AI coding tools—Claude Code, Codex, Gemini CLI—create a separate repository exclusively for your TUI configuration.

I centralize all my Claude Code tooling in a dedicated config repo: CLAUDE.md files, custom hooks, agent definitions, slash commands. Everything that shapes how the AI interacts with my codebase lives in one place.

Why? Two reasons that matter:

Treat tooling as code. Your AI configuration should go through the exact same SDLC as your core platform. Version control. Code review. Testing. CI/CD. If your CLAUDE.md is a file you edit casually and never review, you're leaving the most important part of your AI workflow unmanaged.

Single source of truth. A dedicated repo lets you apply your tooling and "spec" across multiple repositories via symlinks. One config, multiple projects. When you refine a hook or update an agent pattern, every project gets the improvement automatically. No copy-paste drift. No "which version of the config is this repo using?"
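To make the symlink setup concrete, here's a minimal sketch. The directory names (`ai-config`, `my-platform`) are hypothetical—adapt them to your own layout:

```shell
# A shared TUI config repo, wired into a project via symlinks.
# Directory and file names below are illustrative, not prescriptive.
CONFIG_REPO="$HOME/src/ai-config"
PROJECT="$HOME/src/my-platform"

# The central repo holds the config that every project should share.
mkdir -p "$CONFIG_REPO/.claude/agents" "$PROJECT"
printf '# Shared conventions for every repo\n' > "$CONFIG_REPO/CLAUDE.md"

# Link the shared config into the project; rerun once per repository.
# -s: symbolic link; -f: replace an existing link; -n: don't follow
# an existing symlink-to-directory when relinking.
ln -sfn "$CONFIG_REPO/CLAUDE.md" "$PROJECT/CLAUDE.md"
ln -sfn "$CONFIG_REPO/.claude"   "$PROJECT/.claude"

# Editing the central copy now updates every linked project at once.
```

Because each project sees a live link rather than a copy, refining a hook or agent definition in the config repo propagates everywhere with no copy-paste drift.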

This is the kind of infrastructure decision that separates people who are experimenting with AI from people who are engineering with it at scale. Your AI tooling is code. Treat it that way.

The Trenches Are Where the Learning Happens

Nine months in, I'm more convinced than ever that we're at the beginning of something that will reshape how software gets built. But I'm also more convinced that the developers who thrive won't be the ones who adopted fastest. They'll be the ones who adopted most deliberately.