Martin Uetz
Advanced Course — 12 Modules — 4+ Hours

Claude Code
Advanced Course

Master CLAUDE.md, build a second brain, wield agent harnesses, and put parallelization, auto research, browser automation, and security to work — everything you need to go from competent to exceptional.

12 Modules · 4h+ Content · 50+ Exercises
Module 1

CLAUDE.md & Advanced System Prompts

Core Concept: CLAUDE.md is not just documentation — it's the contract between you and your AI assistant. It encodes your knowledge, preferences, and learnings to improve every interaction.

The Four Pillars of CLAUDE.md


Knowledge Compression

Distill domain expertise into structured snippets. Instead of repeating explanations, reference your CLAUDE.md.


Preferences & Conventions

Document your style, naming conventions, and tool preferences. This reduces ambiguity in every prompt.


Declaration of Capabilities

List tools, access points, and what Claude can do in your specific context. Prevents hallucination about capabilities.


Log of Failures & Successes

Record what worked and what didn't. This is your training data for improving future interactions.

Global vs. Local Scope

CLAUDE.md operates on two levels:

| Aspect | Global CLAUDE.md | Local CLAUDE.md |
|--------|------------------|-----------------|
| Scope | Your entire life & work style | Project-specific learnings |
| Location | ~/Documents/CLAUDE.md | Project root directory |
| Persistence | Referenced in every session | Active during project phase |
| Examples | Communication style, timezone, availability | Architecture decisions, API contracts, code patterns |

The Local Workflow Loop

  1. Plan — Before building, document what you'll do and why
  2. Instantiate — Execute the plan with Claude
  3. Compile Learnings — What worked? What failed? Why?
  4. Update CLAUDE.md — Add patterns, anti-patterns, and decisions

Why it matters: Each cycle makes the next cycle faster. You're not explaining architecture decisions repeatedly — Claude references your local CLAUDE.md and understands context immediately.

Token Conservation Strategies

1. Structure Over Verbosity

Use lists, tables, and formatted sections instead of prose. A table takes fewer tokens than paragraphs.

2. Reference, Don't Repeat

Instead of pasting code, reference it: "Use the pattern from section 3.2 of our local CLAUDE.md."

3. Compression Hierarchy

abbreviation → reference → full context. Build abbreviations for your patterns, then add them to CLAUDE.md.

4. Archive Completed Knowledge

Move discussions about completed modules to a separate ARCHIVE.md to keep active CLAUDE.md lean.

Module 2

Building a Second Brain — The Karpathy Method

Origin: Andrej Karpathy popularized the concept of an "LLM-native knowledge base" — a structured Markdown wiki maintained by AI. Instead of storing knowledge for humans to read, you store it for AI agents to compile, query, and keep current.

The Problem with Traditional Knowledge

We've all been there: hundreds of notes in Notion, Google Docs, scattered PDFs. Knowledge management tools are optimized for human reading — nice formatting, pretty pages. But they're terrible for AI agents that need to parse, synthesize, and maintain information at scale.

The core insight: Your knowledge should be compiled by AI, not manually indexed by you. Let the machine be the librarian.

The Three-Layer Architecture


Raw Layer

A folder where you dump everything: PDFs, URLs, meeting notes, article clippings, voice transcripts. No organization required. Just capture.


Schema Layer (CLAUDE.md)

Your CLAUDE.md acts as the rules engine — it tells the AI how to read raw data, how to structure the wiki, how to create entity links, and what workflows to support.


Wiki Layer

AI-generated, structured, interconnected Markdown files. Think of it as your personal Wikipedia, maintained by Claude. Entities link to each other. Knowledge compounds over time.

The Directory Structure

```
second-brain/
├── CLAUDE.md                 # Schema: how AI processes everything
├── raw/                      # Dump everything here
│   ├── articles/
│   ├── meeting-notes/
│   ├── pdfs/
│   ├── voice-transcripts/
│   └── bookmarks.md
├── wiki/                     # AI-compiled knowledge base
│   ├── people/
│   │   ├── andrej-karpathy.md
│   │   └── sam-altman.md
│   ├── concepts/
│   │   ├── recursive-self-improvement.md
│   │   ├── agent-harnesses.md
│   │   └── llm-wiki.md
│   ├── projects/
│   │   ├── humaine-website.md
│   │   └── calflow-app.md
│   └── index.md              # Auto-generated table of contents
└── queries/                  # Saved queries and their results
    └── research-log.md
```
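The layout can be scaffolded in a few lines of Python. A sketch: the `scaffold` helper name is illustrative, and the seeded files simply mirror the tree above.

```python
from pathlib import Path

# Folders from the second-brain tree above; adjust to taste.
LAYOUT = [
    "raw/articles", "raw/meeting-notes", "raw/pdfs", "raw/voice-transcripts",
    "wiki/people", "wiki/concepts", "wiki/projects",
    "queries",
]

def scaffold(root: str) -> Path:
    """Create the second-brain directory tree and seed the files the workflows expect."""
    base = Path(root)
    for rel in LAYOUT:
        (base / rel).mkdir(parents=True, exist_ok=True)
    for seed in ["CLAUDE.md", "raw/bookmarks.md", "wiki/index.md",
                 "queries/research-log.md"]:
        (base / seed).touch()
    return base
```

After running `scaffold("second-brain")`, write your schema into CLAUDE.md and start dumping material into raw/.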

The Core Workflows

Ingest

Drop a file into raw/. Tell Claude: "Ingest this document." The AI extracts entities, facts, and relationships, then creates or updates wiki entries.

```
# Example prompt:
> Ingest raw/articles/karpathy-llm-wiki.pdf
> Extract key concepts and link to existing wiki entries
```

Query

Ask questions that span your entire knowledge base. Claude reads the compiled wiki and synthesizes answers from multiple sources.

```
# Example prompt:
> What are the three main approaches to AI agent
> orchestration mentioned across my notes?
```

Lint

Check structural consistency. Are there broken links? Missing entities? Orphaned pages? Claude audits the wiki and reports issues.

```
# Example prompt:
> Lint the wiki. Find broken wikilinks,
> duplicate entities, and orphaned pages.
```
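Part of the lint pass doesn't even need an LLM. A sketch of a broken-wikilink checker, assuming the `Entity Name → entity-name.md` convention used in this course; the function name is illustrative:

```python
import re
from pathlib import Path

# Capture the link target, stopping before any "|" alias or "#" anchor.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def lint_wikilinks(wiki_dir: str) -> list[str]:
    """Return every [[wikilink]] that doesn't resolve to a page under wiki_dir."""
    wiki = Path(wiki_dir)
    pages = {p.stem for p in wiki.rglob("*.md")}
    broken = []
    for page in wiki.rglob("*.md"):
        for target in WIKILINK.findall(page.read_text()):
            slug = target.strip().lower().replace(" ", "-")
            if slug not in pages:
                broken.append(f"{page.name}: [[{target.strip()}]]")
    return broken
```

Deterministic checks like this run in milliseconds; save the LLM for the judgment calls (duplicate entities, stale summaries).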

Compile

Regenerate or update the wiki from raw sources. Useful when you've changed the schema or added a batch of new documents.

```
# Example prompt:
> Recompile wiki/concepts/ from raw sources.
> Update cross-references and add new entities.
```

Why This Is Different from RAG

RAG (Retrieval-Augmented Generation): Every time you ask a question, the system searches through documents, retrieves chunks, and generates an answer. Knowledge is re-derived every time.

LLM Wiki (Second Brain): The AI pre-compiles knowledge into structured artifacts. When you query, it reads an already-organized wiki. Knowledge compounds — each ingestion makes the entire system more capable.

Tooling Stack

| Tool | Role | Why |
|------|------|-----|
| Claude Code | The engine | Reads files, writes wiki entries, executes workflows |
| Obsidian | Visualization | Graph view shows entity relationships; great for humans |
| Git | Version control | Track every change to the wiki; diff knowledge over time |
| CLAUDE.md | Schema/rules | Defines how AI processes, structures, and maintains everything |

The Compounding Effect

This is the key insight that makes the second brain genuinely powerful:

  • Week 1: You have 10 wiki entries. Queries are basic.
  • Week 4: You have 50 entries with cross-references. Queries start finding non-obvious connections.
  • Month 3: You have 200+ entries. The AI can synthesize from dozens of sources to answer complex questions you never anticipated.
  • Month 6: Your second brain knows your domain, your preferences, your project history. It's not a search engine — it's a thinking partner.
Karpathy's Key Principle: "Knowledge should be compiled and kept current by the AI, not manually indexed by the user." The shift is from humans organizing information to AI maintaining a living, evolving knowledge graph.
Module 3

Practical .md Files — Copy & Use

Ready to use: These are battle-tested .md file templates. Copy them into your projects and adapt to your needs. Each file is designed to work with Claude Code as your primary AI assistant.

1. Project CLAUDE.md Template

CLAUDE.md (template)
# Project CLAUDE.md — [Project Name]

## Overview
[One-sentence description of what this project does]

Tech stack: [e.g., Next.js, TypeScript, Supabase, Vercel]
Status: [Active / Maintenance / Archived]

## Architecture Decisions
- **Database:** Supabase with RLS enabled — chosen for real-time + auth
- **Styling:** CSS Modules + design tokens — no Tailwind (per team preference)
- **Deployment:** Vercel with preview branches per PR

## Code Patterns
- Error handling: try/catch at API boundary, typed errors downstream
- Naming: camelCase for JS, kebab-case for files, PascalCase for components
- Module structure: feature-based folders (not type-based)
- All async operations use `async/await`, not raw Promises

## Known Challenges
- Supabase RLS policies must be updated when adding new tables
- Image optimization: use next/image, NOT raw <img> tags
- Rate limiting on external APIs — always implement exponential backoff

## Capabilities Declared
Claude can:
- Read/write all files in src/
- Run terminal commands (npm, git, curl)
- Access browser for visual testing
- Query the Supabase instance via CLI

Claude cannot:
- Deploy to production (requires manual approval)
- Access secrets directly (use env vars)
- Modify CI/CD pipeline config

## Active Context
Currently working on: [brief description of current task]
Last session ended at: [what was accomplished, what's next]

## Learnings Log
[2026-04-13] Discovered RLS policy blocks unauthenticated reads on 'posts' table.
             Fixed by adding "anon" role to SELECT policy.
[2026-04-12] CSS Modules import order matters in Next.js — globals must load first.
             Added explicit import order in layout.tsx.
[2026-04-10] API route returning 405 — was using GET handler but frontend sent POST.
             Pattern: always match HTTP method in route handler.
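The exponential-backoff pattern flagged under Known Challenges, as a generic sketch; retry counts and delays are illustrative and should be tuned per API:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry `call` on failure, doubling the delay each attempt (plus jitter)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Wrap any flaky external call, e.g. `with_backoff(lambda: fetch(url))` (where `fetch` stands in for your HTTP client).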

2. Second Brain CLAUDE.md (Karpathy Method)

CLAUDE.md — Second Brain Schema
# Second Brain — CLAUDE.md

## Purpose
This is my personal LLM-native knowledge base. You are the librarian.
Your job is to maintain, organize, and query this wiki.

## Directory Structure
- raw/        → Unprocessed inputs (PDFs, notes, URLs, transcripts)
- wiki/       → Compiled knowledge (structured Markdown, interlinked)
- queries/    → Saved queries and research logs

## Workflows

### ingest <file>
1. Read the file from raw/
2. Extract: key entities, facts, relationships, dates
3. For each entity:
   - If wiki page exists → update with new information, cite source
   - If wiki page doesn't exist → create new page
4. Add [[wikilinks]] to connect related entities
5. Update wiki/index.md with new entries
6. Log the ingestion in queries/research-log.md

### query <question>
1. Read relevant wiki pages (use wikilinks to traverse)
2. Synthesize an answer drawing from multiple sources
3. Cite which wiki pages informed the answer
4. If knowledge gaps exist, flag them

### lint
1. Scan all wiki/ files for broken [[wikilinks]]
2. Identify duplicate entities (same person/concept, different pages)
3. Find orphaned pages (no incoming links)
4. Report issues with suggested fixes

### compile
1. Re-process all raw/ files against current schema
2. Update wiki/ entries with any new information
3. Regenerate wiki/index.md
4. Run lint after compilation

## Entity Types
- People: wiki/people/firstname-lastname.md
- Concepts: wiki/concepts/concept-name.md
- Projects: wiki/projects/project-name.md
- Companies: wiki/companies/company-name.md
- Tools: wiki/tools/tool-name.md

## Page Template
Each wiki page should follow:
```
# [Entity Name]
**Type:** [Person|Concept|Project|Company|Tool]
**Last Updated:** [date]

## Summary
[2-3 sentence overview]

## Key Facts
- Fact 1 (source: raw/filename.md)
- Fact 2 (source: raw/filename.md)

## Connections
- [[Related Entity 1]] — relationship description
- [[Related Entity 2]] — relationship description

## Notes
[Additional context, observations, open questions]
```

## Rules
- Always cite sources from raw/ when updating wiki
- Never delete information — mark as [DEPRECATED] if outdated
- Use [[double brackets]] for all entity references
- Keep summaries under 200 words
- Date format: YYYY-MM-DD

3. Auto-Research Loop Template

auto-research.md
# Auto-Research Loop — [Topic]

## Objective
Optimize: [specific metric to improve]
Baseline: [current measurement]
Target: [desired measurement]

## Methodology
Variable: [what we're changing each iteration]
Control: [what stays constant]
Metric: [how we measure success]

## Experiment Log

### Iteration 1
- **Hypothesis:** [what we think will improve the metric]
- **Change Applied:** [specific modification]
- **Result:** [metric before → metric after]
- **Assessment:** [worth pursuing / dead end / needs more data]
- **Next:** [what to try next]

### Iteration 2
- **Hypothesis:** [...]
- **Change Applied:** [...]
- **Result:** [...]
- **Assessment:** [...]
- **Next:** [...]

## Summary of Findings
| Iteration | Hypothesis | Result | Delta |
|-----------|-----------|--------|-------|
| 1         | [...]     | [...]  | [+/-] |
| 2         | [...]     | [...]  | [+/-] |

## Conclusions
- Best performing approach: [...]
- Diminishing returns observed at: [...]
- Recommended configuration: [...]

## Learnings for CLAUDE.md
[Copy these into your project CLAUDE.md]
- [Learning 1]
- [Learning 2]

4. Agent Team Configuration

agent-team.md
# Agent Team — [Task Name]

## Team Structure
- **Coordinator:** Routes tasks, handles failures, synthesizes results
- **Researcher:** Deep dives, reads docs, calls APIs, writes findings
- **Developer:** Writes code, runs tests, implements features
- **QA/Security:** Reviews code, checks edge cases, validates security

## Task Assignment Rules
1. Research tasks → Researcher (use Opus for complex reasoning)
2. Implementation → Developer (use Sonnet for speed)
3. Code review → QA/Security (use Opus for thoroughness)
4. Synthesis → Coordinator (merge results, resolve conflicts)

## Handoff Protocol
When passing work between agents:
1. Write a summary (not full context) to handoff.md
2. Include: what was done, what remains, key decisions made
3. Reference specific files changed
4. Flag any blockers or open questions

## Shared Context
All agents read this file + project CLAUDE.md before starting.
Shared truth: [link to shared CLAUDE.md]

## Cost Budget
- Max per task: $X
- Preferred: 3x Sonnet parallel > 1x Opus sequential
- Escalate to Opus only for: security review, architecture decisions

## Success Criteria
- [ ] All tests pass
- [ ] Code review completed with no critical issues
- [ ] Documentation updated
- [ ] CLAUDE.md updated with learnings

5. Security Audit Checklist

security-checklist.md
# Security Audit Checklist

## Pre-Deploy (Must Pass)
- [ ] No API keys in git history: `git log -p | grep "sk-"`
- [ ] .env file in .gitignore
- [ ] All dependencies audited: `npm audit` / `pip audit`
- [ ] HTTPS enforced (no plain HTTP)
- [ ] CORS configured (not allowing all origins)

## Authentication & Authorization
- [ ] Password hashing: bcrypt or Argon2 (not MD5/SHA)
- [ ] Session tokens are httpOnly, secure, sameSite
- [ ] Rate limiting on login endpoints
- [ ] Account lockout after N failed attempts

## Database
- [ ] Row-Level Security (RLS) enabled
- [ ] Parameterized queries (no string concatenation)
- [ ] Backup strategy documented and tested
- [ ] Connection pooling configured

## API Security
- [ ] Input validation on ALL endpoints
- [ ] Rate limiting in place (100 req/min/IP default)
- [ ] Error messages don't leak internals
- [ ] Request size limits configured

## Payments
- [ ] Using Stripe/PayPal (never storing card data)
- [ ] Webhook signature verification implemented
- [ ] Idempotency keys for payment creation

## AI-Specific
- [ ] Prompt injection protections in place
- [ ] User inputs sanitized before passing to LLM
- [ ] LLM outputs validated before executing actions
- [ ] Cost limits / kill switches configured

## Post-Deploy
- [ ] Logging in place (who did what, when)
- [ ] Monitoring & alerting configured
- [ ] Incident response plan documented
- [ ] Regular dependency updates scheduled
Module 4

Agent Harnesses

Definition: An agent harness is the complete system that constrains and guides an LLM: the model, tools, system prompts, hooks, and parameters that together create a capable agent.

Five Key Components

1. LLM Model

The core intelligence. Claude Opus for complex reasoning, Sonnet for speed, Haiku for edge cases.

2. Tool Definitions

What the LLM can do. APIs, file access, code execution, search.

3. System Prompts

The personality and constraints. "You are a security auditor" vs. "You are a code generator."

4. Hooks & Callbacks

Functions that run before/after each action. Logging, validation, rate limiting.

5. Parameters

Temperature, max tokens, stop sequences. These shape behavior at runtime.
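One way to make the five components concrete is a plain config object. The field names below are illustrative, not any framework's real API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    model: str                                               # 1. LLM model
    tools: list[str] = field(default_factory=list)           # 2. tool definitions
    system_prompt: str = ""                                  # 3. persona + constraints
    pre_hooks: list[Callable] = field(default_factory=list)  # 4. hooks & callbacks
    temperature: float = 0.2                                 # 5. runtime parameters
    max_tokens: int = 4096

# Two harnesses, same model family, very different behavior:
auditor = Harness(
    model="claude-opus",
    tools=["read_file", "grep"],
    system_prompt="You are a security auditor.",
)
```

Swapping the system prompt and tool list yields a code generator instead of an auditor, with no change to the model itself.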

The Dog Sled Analogy

Think of a harness like a dog sled:

  • The dogs = the LLM model (their energy and intelligence)
  • The sled = the harness (what it can carry, how it steers)
  • The musher = you (your system prompts and context)

A powerful LLM in a poorly designed harness will crash. A decent LLM in a brilliant harness will outperform it. Harness design matters as much as model quality.

Harness Comparison

| Harness | Strengths | Best For |
|---------|-----------|----------|
| Claude Code | File system, terminal, browser, memory | Local development, full-stack |
| Pydantic AI | Structured outputs, type-safe | Data pipelines, validation |
| Crew AI | Multi-agent orchestration | Complex research, investigation |
| LangChain | Flexible chains, broad integrations | Experimentation, prototyping |
Module 5

Parallelization & Agent Teams

Core Insight: Running multiple agents simultaneously is not about speed alone. It's about diversity, quality, and redundancy.

Why Parallelize?

Time Efficiency

Run N operations in parallel. What takes 3 minutes serially takes 1 minute with 3 agents.

Stochastic Diversity

Temperature > 0 means different outputs. Run 3 agents, get 3 perspectives, vote on the best.

Quality via Debate

Agent A says X, Agent B says not-X. The truth often lies in reconciling both perspectives.

Fan-Out / Fan-In Pattern

```
┌─ Agent 1 ─┐
├─ Agent 2 ─┤
├─ Agent 3 ─┤ ─→ Orchestrator ─→ Result
└─ Agent 4 ─┘
```

Step 1: Fan-Out (dispatch the same task to multiple agents)
Step 2: Collect (wait for all responses)
Step 3: Fan-In (merge, vote, or synthesize results)
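The pattern maps directly onto a thread pool. A minimal sketch with stand-in agents and majority voting as the fan-in strategy; all names are illustrative:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def fan_out_fan_in(task, agents, merge):
    """Dispatch the same task to every agent in parallel, then merge results.

    `agents` are callables standing in for model calls; `merge` is the
    fan-in strategy (vote, synthesize, pick-best).
    """
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = list(pool.map(lambda agent: agent(task), agents))
    return merge(results)

def majority_vote(results):
    """Simplest fan-in: return the most common answer."""
    return Counter(results).most_common(1)[0][0]
```

With real model calls, the same shape holds: only the agent callables and the merge strategy change.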

Cost: Opus vs. Sonnet

| Scenario | 1x Opus | 3x Sonnet (parallel) | Cost Ratio |
|----------|---------|----------------------|------------|
| Write & test code | $0.60 | $0.18 | 3x cheaper |
| Code review | $0.30 | $0.15 | 2x cheaper |
| Research task | $3.00 | $0.90 | 3x cheaper |

Key Insight: 3 Sonnet agents debating often outperform 1 Opus agent, at 1/3 the cost and faster turnaround.

Module 6

Skills, Sub-Agents & Context Management

Core Truth: Skills and sub-agents are nearly identical. Both are organized collections of knowledge and tools wrapped in markdown with clear interfaces.

| Aspect | Skill | Sub-Agent |
|--------|-------|-----------|
| Instantiation | Load into context | Spawn new session |
| Isolation | Logical | Process-level |
| Memory | Shared with parent | Isolated to session |
| Latency | Instant (in-context) | Milliseconds (new session) |
| Cost | Amortized | Separate billing per agent |

Context Management at Scale

  • Context Explosion: 10 agents × 128k tokens = 1.28M tokens just for state
  • Attention Loss: Important details get buried in noise
  • Coherence Drift: Agents' understanding diverges without shared CLAUDE.md

Solutions:

  1. Shared CLAUDE.md: All agents read the same truth file first
  2. Async Handoff Docs: Write a summary, not the full context
  3. Summarization Layers: Long discussions get compressed to bullet points
  4. Archival: Completed sub-tasks move to read-only archives
The Token Budget Rule: Allocate 30% to state (current task), 40% to tools/CLAUDE.md, 30% to reasoning. Don't let this drift.
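The rule is easy to enforce mechanically. A sketch, assuming you know your model's context window size; the bucket names are illustrative:

```python
def token_budget(context_window: int) -> dict[str, int]:
    """Split a context window per the 30/40/30 rule above."""
    return {
        "state": round(context_window * 0.30),                # current task
        "tools_and_claude_md": round(context_window * 0.40),  # tools + CLAUDE.md
        "reasoning": round(context_window * 0.30),            # headroom to think
    }
```

Checking actual token counts against these buckets each session is a cheap way to catch drift early.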
Module 7

Auto Research — The Karpathy Method

Origin: Andrej Karpathy's famous "neural network from scratch" approach: iterative, measurable, systematic. Apply the same scientific method to AI-powered research loops.

The Research Loop

💡

Hypothesis

Form a testable hypothesis about what might improve your metric.

Execute

Apply the change. One variable at a time. Keep everything else constant.

📊

Assess

Measure the result. Did the metric improve? By how much?

📝

Log

Record everything in CLAUDE.md. Feed forward to next cycle.

Time per cycle: ~2 minutes. Daily capacity: 240+ experiments in 8 hours.

Three Requirements

1. Metric

Quantifiable measure of success. Not "feels better" — measurable values like latency, pass rate, accuracy.

2. Change Method

Systematic variation. Algorithm A → B → C. Thread count: 1 → 2 → 4 → 8. One variable at a time.

3. Log

Record every result in CLAUDE.md and feed it forward. Without a log, cycles repeat instead of compounding.
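With a metric and a change method in hand, the whole loop fits in a few lines. A minimal sketch, with the change method expressed as a list of variants; all names are illustrative:

```python
def auto_research(measure, variants, log):
    """Minimal hypothesis → execute → assess → log loop.

    `measure(variant)` returns the metric (higher is better); `variants`
    is the systematic change method (one variable at a time); `log`
    records each cycle so results feed forward.
    """
    best, best_score = None, float("-inf")
    for variant in variants:
        score = measure(variant)       # execute + assess
        log.append((variant, score))   # log: feed forward to next cycle
        if score > best_score:
            best, best_score = variant, score
    return best, best_score
```

Example: sweeping a thread count. `auto_research(run_benchmark, [1, 2, 4, 8], log)` returns the best-performing setting along with its score (`run_benchmark` stands in for your measurement).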

Real-World Results

Shopify Liquid Engine

Result: 53% faster execution, 61% fewer allocations after 50+ hypotheses over 2 weeks.

LLM Inference Optimization

Result: 40% latency reduction via batch size, quantization, and caching exploration in 3 days.

Module 8

Browser & Internet Automation

Spectrum: HTTP (fastest, most fragile) → Browser (balanced) → Computer Use (slowest, most capable). Pick based on your use case.

The Three-Level Spectrum

| Level | Technology | Speed | Generality |
|-------|------------|-------|------------|
| Level 1: HTTP | Direct API calls, curl | Milliseconds | Limited (API only) |
| Level 2: Browser | Selenium, Playwright, Chrome DevTools MCP | 1–2 seconds/action | High (UI, JS, dynamic) |
| Level 3: Computer Use | Claude sees screen, clicks, types | 10–30 seconds/action | Universal (anything on screen) |

Optimization Path

Step 1: Prototype with Browser Use — fast iteration, see results immediately.

Step 2: Measure where time is spent. If it's an API behind the UI, extract the API call.

Step 3: Use HTTP directly. A 10-second browser UI flow becomes a 100 ms API call: roughly a 100x speedup.

Decision Tree: When to Use Each Level
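The tree boils down to two questions: is there an API, and do you need the UI? A hedged sketch of that logic; the function name and return labels are illustrative:

```python
def pick_automation_level(has_api: bool, needs_ui: bool) -> str:
    """Rough heuristic for choosing a level on the three-level spectrum."""
    if has_api and not needs_ui:
        return "Level 1: HTTP"         # milliseconds, but API-only
    if needs_ui:
        return "Level 2: Browser"      # seconds, handles JS and dynamic UI
    return "Level 3: Computer Use"     # slowest, works on anything on screen
```

Real decisions have more inputs (auth, rate limits, fragility tolerance), but this captures the first cut.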
Module 9

Performance Fluctuations & Diversification

The Monoculture Problem: Relying on one model, one approach, one dataset is like a farmer planting the same crop every season. One disease wipes you out.

Three Diversification Strategies

Model Diversity

Use Claude Opus + Sonnet + external models. Route complex reasoning to Opus, simple tasks to Sonnet, specialized tasks to alternatives.

Approach Diversity

Chains + Agents + Direct inference. Different problem types need different approaches. Try multiple in parallel.

The 70/30 Rule

70% of your workload goes to Claude (your core). 30% explores alternatives to prevent lock-in. This gives you fallback options and comparative benchmarks.
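A deterministic sketch of the 70/30 split, routing 7 of every 10 calls to the core model; the model names are placeholders:

```python
import itertools

def make_router(primary, alternative, explore_ratio=0.3):
    """Return a router that sends a fixed share of calls to the alternative.

    Deterministic (counter-based) rather than random, so the split is
    exact over every window of 10 calls.
    """
    counter = itertools.count()
    period = 10
    explore_slots = int(period * explore_ratio)  # 3 of every 10 calls
    def route(task):
        slot = next(counter) % period
        model = alternative if slot < explore_slots else primary
        return model, task
    return route
```

Logging which model handled each task gives you the comparative benchmarks the 70/30 rule exists to produce.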

Performance Monitoring

| Metric | How to Measure | Alert When |
|--------|----------------|------------|
| Accuracy per model | % matching expected quality | Drops below 85% |
| Latency per model | Time to first token, total time | Exceeds 5s |
| Cost per task type | Input + output tokens × price | 30% over budget |
| Error rate | Failures / total | Exceeds 5% |
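The alert conditions translate directly into a check function. A sketch with the thresholds from the table hard-coded; argument names are illustrative:

```python
def check_metrics(accuracy, latency_s, cost_vs_budget, error_rate):
    """Return alert messages per the thresholds in the monitoring table.

    accuracy and error_rate are fractions (0-1); cost_vs_budget is a
    ratio of actual spend to budget (1.0 = on budget).
    """
    alerts = []
    if accuracy < 0.85:
        alerts.append("accuracy below 85%")
    if latency_s > 5:
        alerts.append("latency over 5s")
    if cost_vs_budget > 1.30:
        alerts.append("cost more than 30% over budget")
    if error_rate > 0.05:
        alerts.append("error rate over 5%")
    return alerts
```

Run it on each model's rolling stats and page yourself only when the list is non-empty.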
Module 10

Workspace Organization

Principle: Your directory structure encodes your decision-making process. Good structure makes parallelization possible. Bad structure makes everything a bottleneck.

Business Workspace Structure

```
~/workspace/
├── CLAUDE.md              (global, shared across projects)
├── clients/
│   ├── acme/
│   │   ├── CLAUDE.md      (client-specific)
│   │   ├── project-a/
│   │   │   ├── CLAUDE.md  (project-specific)
│   │   │   ├── src/
│   │   │   ├── tests/
│   │   │   └── docs/
│   │   └── project-b/
│   └── client-2/
├── internal/
│   ├── tools/             (shared utilities)
│   ├── automation/        (Claude Code automations)
│   └── research/          (ongoing investigations)
└── archive/               (completed, for reference)
```

Key Principles

  • CLAUDE.md at each level: Global → Team → Client → Project hierarchy
  • Isolation: Each client in their own folder
  • Shared tools: Common scripts in /internal/tools, not duplicated
  • Active vs. Archive: Completed projects move to archive

Cleanup Routine

Weekly (15 min)

  • Delete temp files
  • Archive completed tasks
  • Update READMEs

Monthly (1 hour)

  • Review active projects
  • Consolidate duplicates
  • Update CLAUDE.md
Module 11

Security for AI-Powered Projects

80/20 Security Philosophy: 80% of your protection comes from 20% of the effort. Do those 20% things first.

Five Key Vulnerabilities

1. API Key Leakage via Conversation History
2. Package Hallucination (Typosquatting)
3. Database Security (Row-Level Security)
4. Public-Facing Server Risks
5. Credit Card Data (Never Store)

Security Audit Checklist

  • No API keys in git history
  • .env file in .gitignore
  • All external dependencies audited
  • Database has row-level security enabled
  • Password hashing: bcrypt or Argon2
  • HTTPS enforced
  • CORS configured properly
  • Rate limiting in place
  • Input validation on all user inputs
  • Payments use Stripe/PayPal, not stored cards
Module 12

The Future of Claude Code & Work

"The future is here, it's just not evenly distributed." — William Gibson

Early adopters of advanced Claude Code techniques will have a 10x advantage over those still using generic prompts.

The Decreasing Human Involvement Spectrum

| Era | Human Role | Example |
|-----|------------|---------|
| 2024 | Heavy guidance | Write detailed prompts, review every output |
| 2025 | Spot checks | Set direction, verify key outputs |
| 2026 | Supervision | Define goals, monitor agent performance |
| 2027+ | Passive oversight | Set strategy, review summaries |

Economic Moat Shift

Pre-AI: Competitive advantage = build features competitors can't.

AI Era: Features copied in weeks. Advantage shifts to distribution, brand, network effects, and speed-to-market.

The 1% Advantage

In a world where everyone has access to the same AI models, the 1% advantage comes from:

Prompt Craft

Knowing how to ask questions that yield extraordinary responses.

Tool Mastery

Deep understanding of harness design, parallelization, specialization.

Context Management

CLAUDE.md maintenance, knowledge compression, token efficiency.

Feedback Loops

Measure, iterate, improve. Most people never close the loop.

Your Competitive Edge

  • You understand harnesses (most people just use the defaults)
  • You parallelize (most people run single agents)
  • You use auto-research (most people prompt once and move on)
  • You build a second brain (most people lose knowledge between sessions)
  • You maintain CLAUDE.md (most people have chaotic context)
  • You think in feedback loops (most people don't measure)
Final Thought: The future of work is not "AI replaces humans." It's "humans who collaborate with AI outperform humans alone, dramatically." You're learning to be in the first group.
Course Complete
You've covered all 12 modules! Next: Apply these concepts to your own projects using the practical .md templates above.