You know how to build software. You've got years of C#, SQL Server, and Visual Studio under your belt. You've shipped real systems that real people depend on. And now everyone keeps telling you that AI is going to change everything about how you work.
They're right. But not in the way most of the hype suggests.
AI isn't replacing your architecture skills or your understanding of business domains. It's eliminating the tedious parts of your day and giving you back time for the work that actually matters. This post is about how to get started with that, practically and deliberately, without abandoning the engineering discipline that got you here.
I wrote a companion piece called How to Start Building Apps with AI Agents that covers my full workflow. This guide is the "start here" version. If you've never used an AI coding tool beyond asking ChatGPT a question, this is where to begin.
Step 1: Pick Your Tools (Don't Overthink It)
You need two things: a Thinking AI and a Building AI.
The Thinking AI is where you plan and work through requirements before you write code. Claude (claude.ai or the desktop app) is what I use. ChatGPT and Gemini work too. The point is having a conversational AI where you can think out loud about what you're building without worrying about generating code yet.
The Building AI is whatever integrates into your editor and writes code alongside you. For .NET developers, the best options right now are GitHub Copilot inside Visual Studio (set the model to Claude if your license supports it), VS Code with the Claude extension, or Cursor. Any of these will get you started. You don't need all three.
If you're already in Visual Studio every day, start with Copilot. It's the smallest change to your existing workflow. Explore other options once you've built the muscle memory.
Step 2: Have the Conversation Before You Write Code
This is the step most developers skip, and it's the one that matters most.
Open your Thinking AI and start a conversation about what you're building. Not "write me a controller." A real conversation. Tell it your stack. Tell it your constraints. Talk through the problem.
Here's what a good opening prompt looks like:
I'm building an internal tool for tracking equipment maintenance schedules. The stack is .NET 10 with C#, SQL Server for the database, and Razor Pages for the frontend. The app needs to let technicians log completed maintenance, managers approve work orders, and the system should flag overdue items automatically. There are about 200 pieces of equipment across three facilities. What questions should I be thinking about for the data model?
You're not asking the AI to write code. You're asking it to think with you. You're providing context about your stack so it doesn't suggest PostgreSQL or React. You're describing the domain so it can ask follow-up questions that actually matter.
The AI will come back with questions about your entities, relationships, and edge cases. Does equipment move between facilities? Can a work order cover multiple pieces of equipment? What does "overdue" actually mean: calendar days, operating hours, or something else?
This is requirements gathering, and AI turns out to be good at it. It asks questions you might not have considered because it's seen thousands of similar domains.
Spend real time here. Twenty minutes of conversation with your Thinking AI saves you hours of rework later.
Step 3: Get the Data Model Right First
If you're a SQL Server developer, you already know this: the data model is the foundation. Everything else is built on top of it. AI doesn't change that. It just makes the conversation faster.
Take the output from your Thinking AI session and refine it into a concrete schema. Ask the AI to generate the CREATE TABLE statements. Review them. Push back when something doesn't look right. Ask it why it chose a particular data type or why it added a specific index.
Here's the kind of exchange that makes this step worth it. You tell the AI: "I need a WorkOrder table that tracks maintenance requests. Each work order belongs to one piece of equipment, gets assigned to a technician, and goes through a status workflow: Submitted, Approved, In Progress, Completed." The AI generates a schema. You look at it and notice it used an NVARCHAR for Status instead of a lookup table. You push back: "I want a StatusType lookup table with an INT foreign key, not a string column." The AI adjusts, and now you've got a schema that matches how you actually build systems.
That's the kind of conversation where AI is useful. You're not accepting defaults. You're having an architecture discussion at the speed of chat.
Once you're happy with the schema, bring it into your Building AI. Paste the SQL into your editor and ask Copilot or Claude to generate the corresponding C# entities, your DbContext configuration (if you're using EF Core), or your Dapper mappings. One thing AI won't get right without guidance is indexing. It's worth reading SQL Index Tuning: The Fundamentals I Keep Coming Back To before you let it generate your migration scripts, because the mental model for which indexes matter, and why, is something you have to bring to the review yourself. And because you already established the data model in detail, the generated code will be dramatically better than if you'd just asked "generate a data access layer for an equipment tracking app."
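To make the handoff concrete, here's the shape of the entities that exchange might produce. This is a minimal sketch: the names (WorkOrder, StatusType, KnownStatus) and the seeded status values are assumptions carried over from the example conversation, not a fixed contract.

```csharp
using System;

// A work order created against the StatusTypeId lookup value rather than
// a free-text status string, matching the pushback from the schema review.
var order = new WorkOrder
{
    WorkOrderId = 1,
    EquipmentId = 42,
    StatusTypeId = (int)KnownStatus.Submitted,
    SubmittedAtUtc = DateTime.UtcNow,
};

Console.WriteLine($"Work order {order.WorkOrderId} status id: {order.StatusTypeId}");

// Mirrors the seeded rows of the StatusType lookup table (values assumed).
enum KnownStatus { Submitted = 1, Approved = 2, InProgress = 3, Completed = 4 }

class StatusType
{
    public int StatusTypeId { get; set; }
    public string Name { get; set; } = "";
}

class WorkOrder
{
    public int WorkOrderId { get; set; }
    public int EquipmentId { get; set; }            // FK to Equipment
    public int? AssignedTechnicianId { get; set; }  // FK to Technician, null until assigned
    public int StatusTypeId { get; set; }           // FK to the StatusType lookup, not a string
    public DateTime SubmittedAtUtc { get; set; }
    public DateTime? CompletedAtUtc { get; set; }
}
```

Notice the INT foreign key instead of an NVARCHAR status column. That's the schema decision from the conversation flowing straight into the generated code.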
The quality of the output is directly proportional to the quality of the input. Every time.
Step 4: Build in Small Phases, Not One Giant Prompt
You already know how to break a project into layers. Do the same thing with AI.
Don't paste a massive prompt that says "build me the entire app." The model will lose coherence. It'll generate a controller that references a service that doesn't exist yet. It'll make assumptions about your data layer that conflict with what it built two thousand tokens ago.
Work in phases. Each phase should be a focused unit of work:
Phase 1: Database schema and migrations. Get your tables, constraints, and seed data in place. Commit it.
Phase 2: Data access layer. Repositories, EF Core configurations, or Dapper queries. Whatever pattern you use. Commit it.
Phase 3: Service layer. Business logic, validation, workflow rules. This is where your Thinking AI conversation pays off because you've already mapped out the rules. Commit it.
Phase 4: API or controllers. Wire up the endpoints. Commit it.
Phase 5: UI. Razor Pages, MVC views, Blazor components. Commit it.
Each phase gets its own prompt, its own AI session, and its own git commit. This keeps the AI focused, keeps your code reviewable, and gives you rollback points when something goes sideways.
There's another reason this matters: token limits. Every AI tool has a context window, a ceiling on how much text it can hold in working memory at once. If you dump your entire application requirements into one prompt, the model starts losing coherence as it approaches that ceiling. It contradicts itself. It forgets a business rule it acknowledged ten paragraphs ago. I've watched models rewrite their own data access layer three times in a single session because they kept "improving" code they'd already generated.
Phases fix this. Each phase starts fresh with a focused prompt and a clear objective. The model stays sharp because it's only thinking about one layer at a time.
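As a sketch of what a Phase 3 unit of work looks like, here's one business rule isolated as a pure, testable method. The calendar-day definition of "overdue" is an assumption here; pinning down what "overdue" actually means is exactly the kind of question the Thinking AI conversation in Step 2 settles before you write this code.

```csharp
using System;

// Phase 3 sketch: one workflow rule as a pure function, easy to review and test.
// The calendar-day interpretation of "overdue" is an assumed business rule.
static bool IsOverdue(DateTime lastCompletedUtc, int intervalDays, DateTime nowUtc)
    => nowUtc.Date > lastCompletedUtc.Date.AddDays(intervalDays);

var lastService = new DateTime(2025, 1, 1, 0, 0, 0, DateTimeKind.Utc);

// 45 days past a 30-day interval: flagged.
Console.WriteLine(IsOverdue(lastService, 30, new DateTime(2025, 2, 15, 0, 0, 0, DateTimeKind.Utc)));

// 14 days into a 30-day interval: not flagged.
Console.WriteLine(IsOverdue(lastService, 30, new DateTime(2025, 1, 15, 0, 0, 0, DateTimeKind.Utc)));
```

Keeping rules like this out of controllers and queries is what makes the phase reviewable on its own, and it's the kind of structure you should tell the AI you want up front.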
Step 5: Set Up Git Before the AI Touches Anything
Create a git repo before your first AI-generated line of code. If you already have one, make sure you're on a clean branch.
AI will occasionally do things you didn't ask for. It renames variables. It refactors a method you told it to leave alone. It "improves" something that was already working. When that happens, and it will, you need to be able to see exactly what changed and roll it back.
After every AI coding session, run git diff. Read the output. Make sure the changes are scoped to what you asked for. If the AI touched files it shouldn't have, revert those files. Then commit the good changes with a clear message.
This isn't optional. This is the safety net that lets you move fast without worrying about the AI breaking something silently.
If you're not comfortable with git log, git reset, and git checkout, spend an afternoon learning them before you start. You will need them.
Step 6: Review Everything the AI Writes
AI-generated code compiles. It usually runs. It often looks reasonable. And sometimes it's subtly wrong in ways that won't show up until production.
You are still the engineer. The AI is a very fast junior developer who never gets tired but also never pushes back. It will write code that technically satisfies your prompt while missing the point entirely. It will use patterns that are valid C# but wrong for your architecture. It will ignore your established conventions unless you explicitly tell it about them.
Review AI-generated code the same way you'd review a pull request from a new team member. Check the logic against your business rules. Verify the SQL is using the right indexes. Make sure the error handling isn't just swallowing exceptions — the Result pattern is worth reading if you want a better approach than exception-based control flow for expected failures. Run your tests.
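For illustration, here is a minimal sketch of the Result pattern idea: expected failures travel as values instead of exceptions. This is my own stripped-down version for the sake of the example, not the linked post's exact implementation.

```csharp
using System;

// Expected failures as values: the caller must check IsSuccess instead of
// relying on a try/catch for control flow.
var ok = Result<int>.Success(42);
var err = Result<int>.Failure("Equipment not found");

Console.WriteLine(ok.IsSuccess ? $"Value: {ok.Value}" : $"Error: {ok.Error}");
Console.WriteLine(err.IsSuccess ? $"Value: {err.Value}" : $"Error: {err.Error}");

// Minimal illustration of the pattern; a real version would likely add
// Map/Bind helpers and a typed error instead of a string.
public readonly struct Result<T>
{
    public bool IsSuccess { get; }
    public T? Value { get; }
    public string? Error { get; }

    private Result(bool isSuccess, T? value, string? error)
    {
        IsSuccess = isSuccess;
        Value = value;
        Error = error;
    }

    public static Result<T> Success(T value) => new(true, value, null);
    public static Result<T> Failure(string error) => new(false, default, error);
}
```

When you review AI-generated error handling, this is the shape to compare against: is a failure you expect in normal operation being thrown, swallowed, or returned?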
The deeper reason this review step matters: the model was trained on the internet, and most code on the internet is mediocre. The post Most of the Code AI Learned From Is Garbage is the blunt version of that argument. Your job as the senior developer isn't just reviewing what the AI produced; it's correcting for what it was trained to produce.
The developers getting the most out of AI aren't the ones generating code fastest. They're the ones reviewing it most carefully.
Step 7: Learn Prompt Patterns That Work for .NET
Once you're comfortable with the basic workflow, you'll start noticing what makes a good prompt versus a mediocre one. A few that work well in the .NET ecosystem:
Always specify your framework version. ".NET 10 with C# 13" prevents the AI from generating code with deprecated APIs or older patterns.
Always specify your database. "SQL Server" stops the AI from defaulting to PostgreSQL or SQLite, which it will do roughly half the time if you don't say otherwise.
Provide your existing patterns. If you use the repository pattern, say so. If you use MediatR, say so. If you have a specific folder structure, describe it. The AI will match your patterns if it knows what they are.
Ask for one thing at a time. "Generate the repository interface and implementation for the Equipment entity using Dapper" is a better prompt than "generate the data access layer." Specificity wins.
Include constraints. "This runs on Azure SQL with a max DTU of 100" or "the users are on a corporate network with no internet access" are the kinds of constraints that change the generated code in ways that matter. Don't assume the AI will infer them.
What You'll Notice After Your First Project
Something clicks about two or three phases into your first AI-assisted project.
You realize the AI didn't change what you do. You still gathered requirements. You still designed a data model. You still made architecture decisions about patterns and layers. You still reviewed code and caught issues. All of that is the same work you've been doing for years.
What changed is where your time went. The hours you used to spend writing boilerplate, wiring up dependency injection, scaffolding CRUD endpoints, writing the fifteenth repository implementation that looks identical to the other fourteen, those collapsed into minutes. The time you saved went back into the work that actually requires a senior developer: understanding the domain, making design trade-offs, catching edge cases, and verifying that the final product does what the business needs.
That's the actual value. Not "AI writes code so you don't have to." It's "AI handles the repetitive implementation so you can focus on the engineering."
Where to Go From Here
Think first, model the data, build in phases, commit often, review everything. That workflow will carry you through your first AI-assisted project and your fiftieth. The tools will change. The models will get better. But the fundamentals are the same ones you've been practicing your entire career: understand the problem, design the solution, build it carefully, and verify it works.
The biggest mistake I see experienced developers make isn't using AI poorly. It's waiting too long to start because the hype makes it sound like you need to learn a completely new way of working. You don't. You need to add a new tool to the process you already have.
Start with one small project. Something low-stakes. A utility app, an internal tool, a prototype. Use the Thinking AI to plan it. Use the Building AI to generate code. Review everything. Commit often. See how it feels.
You'll adapt the workflow to fit how you think. Everyone does. But you have to start to get there.
When you're ready to go deeper, Nine Months in the Trenches of Agentic Development is where this all leads: what it looks like when you stop treating AI as a coding assistant and start treating it as an orchestration layer. And the post I Haven't Typed Code in Months. I've Never Shipped More. is the honest account of what the end state feels like on a production project.
This is the companion piece to How to Start Building Apps with AI Agents, which goes deeper into the full workflow, tool choices, and the phased approach. If you've followed this guide and want more, that's the next read. The claude-code-conversion-agent repo on GitHub has a working example of this workflow in practice.