# Your AI's First Day — Implementation Roadmap

Use this file directly in Claude Code or Codex.

## How to use this roadmap

1. Save this file in your project root.
2. Open Claude Code or Codex in that project directory.
3. Paste this prompt:

```
Read `your-ais-first-day-roadmap.md` and guide me through this plan phase by phase.
Work in small steps. Show me what you found before making major changes.
Ask for confirmation before writing or editing important files.
```

4. Follow the phases in order.

---

## Outcome target

By the end of this roadmap, your AI tools should:
- understand your codebase and architecture,
- follow your team conventions,
- produce review suggestions that fit your standards,
- help with requirements and implementation,
- improve over time through a feedback loop.

---

## Phase 1 — Build shared context (Week 1)

### Step 1.1: Create workspace structure

Create this structure (adapt as needed):

```txt
my-project/
├── CLAUDE.md
├── AGENTS.md
├── docs/
│   ├── INDEX.md
│   ├── architecture.md
│   ├── coding-standards.md
│   ├── team-knowledge.md
│   └── integrations.md
├── inputs/
│   ├── transcripts/
│   └── documents/
└── app/
    ├── CLAUDE.md
    ├── AGENTS.md
    ├── src/
    └── tests/
```
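
If you prefer to scaffold this by hand before involving the AI, the structure above can be created with a couple of shell commands from the project root (names follow the sketch; rename to fit your project):

```bash
# Create the workspace skeleton from the project root.
mkdir -p docs inputs/transcripts inputs/documents app/src app/tests
touch CLAUDE.md AGENTS.md app/CLAUDE.md app/AGENTS.md
touch docs/INDEX.md docs/architecture.md docs/coding-standards.md \
      docs/team-knowledge.md docs/integrations.md
```

`touch` is safe to re-run: it creates missing files and leaves existing content alone.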

### Step 1.2: Let AI read your codebase

Prompt:

```
Read this repo and summarise:
1) What this application does
2) Main architecture and module boundaries
3) Common patterns and conventions
4) Key uncertainties or unclear areas

Write output to docs/architecture.md
```

### Step 1.3: Mine commit history

Run:

```bash
git log --oneline --since="12 months ago" | head -300
```
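
The log above lists commit messages; to surface churn hotspots directly, a standard git idiom counts how often each file changed (the `2>/dev/null` just keeps the command quiet if run outside a repo):

```bash
# Count file changes in the last 12 months; most-touched files first.
git log --since="12 months ago" --format= --name-only 2>/dev/null \
  | sort | uniq -c | sort -rn | head -20
```

Paste the output alongside the log so the AI can ground its "frequent change areas" findings in real numbers.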

Prompt:

```
Analyse commit patterns and infer team conventions, hotspots, and frequent change areas.
Write findings to docs/team-knowledge.md
```

### Step 1.4: Generate visuals

Prompt:

```
Generate Mermaid diagrams for:
- architecture overview
- key request/data flows
- service boundaries

Save diagrams into docs/architecture.md
Mark anything uncertain clearly.
```
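
For reference, a minimal architecture overview in Mermaid looks like this (the boxes are placeholders, not inferred from your repo):

```mermaid
flowchart LR
    Client --> API[API gateway]
    API --> App[Application service]
    App --> DB[(Database)]
    App --> Queue[[Background jobs]]
```

Mermaid renders directly in GitHub markdown, so diagrams stay reviewable as text in PRs.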

### Step 1.5: Interview-driven context

Add meeting/interview transcripts into `inputs/transcripts/`.

Prompt:

```
Read all files in inputs/transcripts and compare against docs/*.
Summarise new insights and propose updates.
Do not edit files until I confirm.
```

### Step 1.6: Build CLAUDE.md and AGENTS.md

Prompt:

```
Draft CLAUDE.md and AGENTS.md based on this project.
Include coding standards, testing rules, architecture constraints, and unsafe patterns to avoid.
Keep it practical and concise.
```

### Step 1.7: Maintain docs index

Prompt:

```
Create or update docs/INDEX.md with:
- each doc name
- short summary
- last-updated date
- what question it answers
```
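
One possible shape for the index is a small table (the rows and dates here are placeholders):

```md
# Docs index

| Doc | Summary | Last updated | Answers |
| --- | --- | --- | --- |
| architecture.md | System overview, module boundaries, diagrams | 2025-01-15 | How is the app structured? |
| coding-standards.md | Style, testing, and review conventions | 2025-01-15 | How should code look here? |
| team-knowledge.md | Conventions and hotspots mined from history | 2025-01-15 | What does the team already know? |
```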

---

## Phase 2 — Wire AI into review (Week 2)

### Step 2.1: Define review policy

Prompt:

```
Create an AI code review checklist for this repo:
- correctness
- security
- performance
- maintainability
- test coverage
- style consistency

Save to docs/review-policy.md
```

### Step 2.2: Add CI review automation

Prompt:

```
Set up PR automation for AI-assisted review.
If GitHub Actions is available, propose workflow YAML and required secrets.
Show the plan first, then implement after confirmation.
```
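
If you go the GitHub Actions route, the proposed workflow might look roughly like this sketch. The checkout action is standard; the review step and the secret name are placeholders to replace with your provider's action or CLI:

```yaml
# .github/workflows/ai-review.yml — sketch only; verify the review step
# and secret name against your AI provider's documentation.
name: AI review
on:
  pull_request:
    types: [opened, synchronize]
permissions:
  contents: read
  pull-requests: write
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: AI review (placeholder step)
        run: echo "Run your AI review tool here against the PR diff"
        env:
          AI_API_KEY: ${{ secrets.AI_API_KEY }}  # hypothetical secret name
```

The `pull-requests: write` permission is what lets the workflow post advisory review comments.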

### Step 2.3: Human-in-the-loop rule

Adopt this team rule:
- AI comments are advisory.
- Humans approve merges.
- No auto-merge from AI output.

---

## Phase 3 — Use AI for delivery (Weeks 2-3)

### Step 3.1: Requirements to tickets

Prompt:

```
Given this feature idea, draft:
1) PRD
2) implementation tasks
3) acceptance criteria
4) test plan

Use our existing docs and conventions.
```

### Step 3.2: Ticket-to-code flow

Prompt:

```
Take this ticket and implement it in small commits.
Before coding, restate assumptions and impacted files.
After coding, run tests and summarise results.
```

### Step 3.3: Keep explanation standard high

Team rule:
- Never ship AI-generated code you cannot explain.

---

## Phase 4 — Iterate and improve (Ongoing)

### Step 4.1: Feed production signals back in

Inputs to review weekly:
- bug reports
- support tickets
- incident notes
- recurring PR feedback

Prompt:

```
Review these production signals and propose updates to:
- CLAUDE.md
- AGENTS.md
- docs/coding-standards.md
- docs/team-knowledge.md

Show proposed diffs first.
```

### Step 4.2: Update context like code

Run context updates through PRs:
- changelog entry
- reviewer approval
- merge

### Step 4.3: Monthly context health check

Prompt:

```
Audit our context files for drift, redundancy, and stale guidance.
Recommend deletions, merges, and rewrites.
```

---

## 30-minute quickstart

If you only have 30 minutes, do this:

1. Create `docs/INDEX.md`, `CLAUDE.md`, `AGENTS.md`
2. Ask AI to summarise architecture into `docs/architecture.md`
3. Ask AI to identify top 10 unknowns/risk areas
4. Add one real transcript to `inputs/transcripts`
5. Ask AI to propose context updates (no auto-write)

That alone usually improves output quality immediately.

---

## Practical guardrails

- Start small. One repo first.
- Prefer short files over giant docs.
- Review major changes before the AI writes them.
- Keep examples from your real codebase.
- Update context continuously, not once.

---

## Suggested command to start

```
Read `your-ais-first-day-roadmap.md` and start with Phase 1, Step 1.1.
Before editing any files, show me the exact plan and proposed folder structure.
```
