How to Onboard AI into Your Dev Team
Published 4 April 2026 · Peter Rossi
Most teams I speak to have already adopted AI coding tools. They've bought the licences, done a short demo, and pointed the tool at the codebase. Engineers are using it. Something is being generated. It sort of works.
What most of them have not done is onboard the AI properly. And that's the gap where the real opportunity sits.
I wrote a short ebook on this called "Your AI's First Day", available free at /book. This post covers the framework from that book. I've tried to give you enough to be genuinely useful here, but the ebook has the full implementation detail, templates, and worked examples if you want to go further.
The core idea is simple. AI coding tools perform much better when they have context. Not just the codebase, but the conventions, the reasoning behind decisions, the parts of the system that are fragile, and the standards your team actually holds each other to. Without that context, you get plausible-looking code that misses the point.
Onboarding an AI is not unlike onboarding a senior hire. You wouldn't give a new senior engineer access to a 200,000-line codebase on day one and say "go". You'd show them around. You'd tell them what matters. You'd wire them into the review process gradually, before giving them ownership of something critical. The same logic applies here.
The Four-Phase Approach
The framework is structured as four phases: build shared context, wire AI into review, use AI for delivery, and iterate and improve. The direction of travel is understand, stabilise, build, accelerate.
The first three phases take three to four weeks for most teams. The fourth phase is ongoing. It does not end. The point is to move from an AI tool that generates plausible code to one that generates code your team would actually ship.
Phase 1: Build Shared Context (Week 1)
This is the foundational work. It is also the most overlooked part of the entire process. Most teams skip it entirely and jump straight to generation, then wonder why the output feels off.
Start by creating a workspace structure. A CLAUDE.md or AGENTS.md file at the repo root, plus a docs/ directory. These files tell the AI about your architecture, your conventions, your fragile areas, and the things that need a human decision rather than an automated one. Writing them forces your team to articulate tacit knowledge that usually lives only in people's heads.
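As a concrete starting point, a minimal CLAUDE.md might look like the sketch below. The section names and bullet contents are illustrative, not a required schema; the point is to capture the four kinds of context named above in one place the tool will read.

```markdown
# CLAUDE.md -- project context for AI tools (contents are illustrative)

## Architecture
One-paragraph summary of the system, its main components, and how they talk.

## Conventions
- How we name things, structure modules, and write tests.
- The style rules we actually enforce in review, not just the ones in the guide.

## Fragile areas
- Parts of the system where changes need extra care, and why.

## Needs a human decision
- Schema migrations, anything touching auth or billing, public API changes.
```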
Then let the AI read the codebase and produce a summary of what it understands. This surfaces gaps in your documentation faster than any audit. If the AI's summary is wrong or thin, that is a signal about what context is missing, not a signal about the tool.
Mine the commit history. Git log is an underused context source. Ask the AI to read several months of commits and identify the actual conventions your team follows. Not what the style guide says. What engineers actually do. Feed in any existing documentation too: architecture decision records, design docs, post-mortems. All of it is context.
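You can start the mining with plain git before involving the AI at all. A sketch, where the time window and output counts are arbitrary choices:

```shell
# Most common commit-subject prefixes in the last 90 days --
# a quick read on the conventions engineers actually follow.
git log --since="90 days ago" --pretty=format:'%s' \
  | cut -d: -f1 | sort | uniq -c | sort -rn | head -10

# Files that churn the most -- candidates for the "fragile areas"
# section of your context files.
git log --since="90 days ago" --name-only --pretty=format: \
  | grep -v '^$' | sort | uniq -c | sort -rn | head -10
```

Paste the output into your prompt alongside the raw log. The AI is better at naming the pattern; the counts keep it honest about which patterns actually dominate.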
Phase 2: Wire AI into Review (Week 2)
Before you use AI to write code, bring it into code review. This feels backwards. It isn't. Running the AI as a reviewer first teaches you what it can see and where it misses things. That knowledge is essential before you trust it as a generator.
Define a review policy before you automate anything. What should the AI flag? What is out of scope? What requires a human decision regardless of what the AI says? A vague policy produces a vague reviewer. Spend the time to make it specific.
Then set up CI automation so the AI reviews every pull request. The output is advisory, not blocking. The AI comments. Humans decide. That distinction matters enormously for team buy-in. This step typically takes a week. Teams consistently underestimate the effort required to write a clear review policy. The automation is the easy part.
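The shape of the automation is simple enough to sketch. The GitHub Actions workflow below is illustrative only: the wrapper script `scripts/ai-review.sh` is hypothetical, standing in for whichever tool your team wires in, and the advisory rule lives in `continue-on-error`.

```yaml
# Advisory AI review on every pull request (names are illustrative).
name: ai-review
on: pull_request

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write   # the AI posts comments, nothing more
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0     # full history so the review has context

      - name: AI review (advisory only)
        run: ./scripts/ai-review.sh "$GITHUB_BASE_REF"  # hypothetical wrapper
        continue-on-error: true   # advisory: a noisy review never blocks a merge
```

Whatever tool sits behind the wrapper, keep the two properties visible in the workflow itself: the job can comment but not approve, and it cannot fail the build.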
Phase 3: Use AI for Delivery (Weeks 2 to 3)
With context built and the review loop established, the team is ready to use AI for actual development work. Start from requirements: use the AI to help turn requirements into tickets. This is where the context investment pays off. An AI with good context produces tickets that reflect your system, not a generic interpretation of the problem.
Encourage small, focused commits when using AI for pair programming, and track that discipline over time. Commit size and shape tell you two things at once: the quality of the AI's output and how carefully the team is working with it.
The key rule for this phase: never ship AI-generated code you cannot explain. The engineer remains accountable for everything that goes to production. This sounds obvious when you say it out loud. It needs to be stated explicitly anyway. The temptation to ship under delivery pressure is real, and the rule protects against it.
Phase 4: Iterate and Improve (Ongoing)
Context goes stale. Code changes. Conventions shift. New patterns emerge. The context files you wrote in week one will be wrong within a few months if you do not maintain them.
Treat context like code. Updates go through pull requests with review, not direct edits. This keeps the team aligned and creates a record of why conventions changed.
Feed production signals back into the context. Bug reports, support tickets, incident notes. When a pattern produces bugs in production, that fact belongs in the context so the AI can flag similar patterns in future.
Run a monthly context health check. Read the files as if you were new to the codebase. Ask whether the AI, reading only those files, would understand what matters. If the answer is no, update them.
Why Context-First Matters
When AI underperforms on a codebase, the problem is almost always context, not capability. The model is capable. It simply does not have the information it needs to apply that capability to your specific system.
When the AI misses your conventions, the usual response is to blame the tool. The real problem is that the tool lacked information. The AI does not read your Confluence pages. It does not attend architecture meetings. It does not absorb tribal knowledge from corridor conversations. You have to fill those gaps explicitly. Context is what separates plausible code from code your team would actually ship.
Tools This Works With
The framework is tool-agnostic. Claude Code, OpenAI Codex, and Cursor all work with this approach. The structure of the context files is slightly different for each, but the underlying logic is identical.
Claude Code maps directly to CLAUDE.md. Cursor uses the same files as project context. Codex can reference the context from the system prompt. Pick whichever tool your team already uses. Do not switch tools to implement this framework. The problem is almost never the tool.
The 30-Minute Quickstart
If you only have 30 minutes and want to start today, do these five things:
- Create CLAUDE.md at the repo root. Write two paragraphs: what the system does, and what conventions matter most to your team.
- Ask the AI to read the repo and produce a one-page architecture summary. Save it as docs/architecture.md.
- Ask the AI to read the last 90 days of commits and list the top five patterns it observes. Add those patterns to CLAUDE.md.
- Write one sentence that describes your review policy. Add it to your contributing guide.
- Run one pull request through AI review this week, on an advisory basis only. Note what it catches and what it misses.
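The first three steps can be sketched as commands. The placeholder text in the heredoc stands in for your own two paragraphs; the file and directory names match the quickstart above.

```shell
# Step 1: create CLAUDE.md at the repo root with your two paragraphs.
cat > CLAUDE.md <<'EOF'
# Project context for AI tools

What the system does: <two or three sentences>.

Conventions that matter most: <two or three sentences>.
EOF

# Step 2: make a home for the AI's architecture summary.
mkdir -p docs   # ask the AI to write docs/architecture.md

# Step 3: skim recent history yourself before asking the AI to
# summarise the full 90 days of patterns.
git log --since="90 days ago" --oneline | head -20
```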
That's it. You're not done, but you're started. The rest of the framework builds from here.
Download the Free Ebook
"Your AI's First Day" covers the full four-phase roadmap with templates, example context files, a review policy template, and a worked example of a 30-day build-out. It is the implementation detail behind everything in this post.
It is free. There is no email sequence, no upsell course. Just the playbook.
I work with engineering leaders through fractional CTO and advisory work. If you want to talk through how this applies to your team, reach out via the contact form.
Frequently Asked Questions
How long does it take to onboard AI into a development team properly?
Three to four weeks for the first three phases. Phase 1 alone takes a full week if you do it properly. Phase 4 is ongoing and typically requires three to four hours per month to maintain.
Which AI coding tool should we use?
It depends on your team's preference. Claude Code, Cursor, and Codex are all capable tools. Start with whatever your engineers are already using. Tool choice matters far less than the approach you take to onboarding it.
Do we need to tell engineers how to use AI pair programming, or can we leave it to them?
Some guidance is worth writing down: never ship code you cannot explain, expected commit size when working with AI, and what the review policy means for how AI output is treated. You do not need a detailed playbook. A few clear rules go a long way.
What if engineers are resistant to using AI tools?
Do not mandate adoption. Resistant engineers are often your most thoughtful engineers, and their concerns are usually worth hearing. Bring them into the review policy work. They will have useful opinions on what the AI should and should not flag. Involvement is more effective than instruction.
How does this relate to a fractional CTO engagement?
AI onboarding is often one of the first things I work on in a fractional CTO engagement. It unlocks delivery capacity quickly without requiring a large headcount change. If you are curious about how that kind of engagement works, my post on the first 90 days of a fractional CTO engagement covers the broader context. You can also read more on the about page.