Claude Code
Advanced Course
Master CLAUDE.md, build a second brain, and wield agent harnesses, parallelization, auto-research, browser automation, and security — everything you need to go from competent to exceptional.
CLAUDE.md & Advanced System Prompts
The Four Pillars of CLAUDE.md
Knowledge Compression
Distill domain expertise into structured snippets. Instead of repeating explanations, reference your CLAUDE.md.
Preferences & Conventions
Document your style, naming conventions, and tool preferences. This reduces ambiguity in every prompt.
Declaration of Capabilities
List tools, access points, and what Claude can do in your specific context. Prevents hallucination about capabilities.
Log of Failures & Successes
Record what worked and what didn't. This is your training data for improving future interactions.
Global vs. Local Scope
CLAUDE.md operates on two levels:
| Aspect | Global CLAUDE.md | Local CLAUDE.md |
|---|---|---|
| Scope | Your entire life & work style | Project-specific learnings |
| Location | ~/Documents/CLAUDE.md | Project root directory |
| Persistence | Referenced in every session | Active during project phase |
| Examples | Communication style, timezone, availability | Architecture decisions, API contracts, code patterns |
The Local Workflow Loop
- Plan — Before building, document what you'll do and why
- Instantiate — Execute the plan with Claude
- Compile Learnings — What worked? What failed? Why?
- Update CLAUDE.md — Add patterns, anti-patterns, and decisions
Why it matters: Each cycle makes the next cycle faster. You're not explaining architecture decisions repeatedly — Claude references your local CLAUDE.md and understands context immediately.
Token Conservation Strategies
1. Structure Over Verbosity
Use lists, tables, and formatted sections instead of prose. A table takes fewer tokens than paragraphs.
2. Reference, Don't Repeat
Instead of pasting code, reference it: "Use the pattern from section 3.2 of our local CLAUDE.md."
3. Compression Hierarchy
abbreviation → reference → full context. Build abbreviations for your patterns, then add them to CLAUDE.md.
4. Archive Completed Knowledge
Move discussions about completed modules to a separate ARCHIVE.md to keep active CLAUDE.md lean.
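Strategy 4 can be automated. A minimal sketch, assuming a hypothetical convention of tagging finished sections with `[DONE]` in their `## ` heading (the marker and section granularity are illustrative, not a Claude Code feature):

```python
import re

DONE_MARKER = "[DONE]"  # hypothetical convention: tag finished sections in their heading

def archive_done_sections(claude_md: str) -> tuple[str, str]:
    """Split a CLAUDE.md string into (active, archived) text.

    Any '## ' section whose heading contains DONE_MARKER moves to the
    archive; everything else (including the preamble) stays active.
    """
    # Split before each level-2 heading, keeping the headings themselves.
    parts = re.split(r"(?m)^(?=## )", claude_md)
    active, archived = [], []
    for part in parts:
        if not part:
            continue
        heading = part.splitlines()[0]
        (archived if DONE_MARKER in heading else active).append(part)
    return "".join(active), "".join(archived)
```

Run it periodically and append the archived text to ARCHIVE.md, keeping the active file lean.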
Building a Second Brain — The Karpathy Method
The Problem with Traditional Knowledge
We've all been there: hundreds of notes in Notion, Google Docs, scattered PDFs. Knowledge management tools are optimized for human reading — nice formatting, pretty pages. But they're terrible for AI agents that need to parse, synthesize, and maintain information at scale.
The core insight: Your knowledge should be compiled by AI, not manually indexed by you. Let the machine be the librarian.
The Three-Layer Architecture
Raw Layer
A folder where you dump everything: PDFs, URLs, meeting notes, article clippings, voice transcripts. No organization required. Just capture.
Schema Layer (CLAUDE.md)
Your CLAUDE.md acts as the rules engine — it tells the AI how to read raw data, how to structure the wiki, how to create entity links, and what workflows to support.
Wiki Layer
AI-generated, structured, interconnected Markdown files. Think of it as your personal Wikipedia, maintained by Claude. Entities link to each other. Knowledge compounds over time.
The Directory Structure
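A minimal layout for the three layers (all directory and file names here are illustrative) might look like:

```
second-brain/
├── CLAUDE.md          # schema layer: rules the AI follows
├── raw/               # raw layer: unprocessed dumps
│   ├── meeting-notes.md
│   └── clipped-article.pdf
├── wiki/              # wiki layer: AI-compiled knowledge
│   ├── index.md
│   ├── people/
│   ├── concepts/
│   └── projects/
└── queries/           # saved queries and research logs
```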
The Core Workflows
Ingest
Drop a file into raw/. Tell Claude: "Ingest this document." The AI extracts entities, facts, and relationships, then creates or updates wiki entries.
Query
Ask questions that span your entire knowledge base. Claude reads the compiled wiki and synthesizes answers from multiple sources.
Lint
Check structural consistency. Are there broken links? Missing entities? Orphaned pages? Claude audits the wiki and reports issues.
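The lint pass can also be approximated outside Claude with a small script. A sketch, assuming a flat layout where each page's filename stem is the entity name it is linked by:

```python
import re
from pathlib import Path

# Capture the target of [[Target]] or [[Target|alias]] links
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def lint_wiki(wiki_dir: str) -> dict[str, list[str]]:
    """Report broken [[wikilinks]] and orphaned pages under wiki_dir."""
    pages = {p.stem: p.read_text(encoding="utf-8")
             for p in Path(wiki_dir).rglob("*.md")}
    linked, broken = set(), []
    for name, text in pages.items():
        for target in WIKILINK.findall(text):
            target = target.strip()
            linked.add(target)
            if target not in pages:
                broken.append(f"{name}: [[{target}]]")
    # Orphans: pages nothing links to (the index is exempt)
    orphans = [n for n in pages if n not in linked and n != "index"]
    return {"broken_links": sorted(broken), "orphans": sorted(orphans)}
```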
Compile
Regenerate or update the wiki from raw sources. Useful when you've changed the schema or added a batch of new documents.
Why This Is Different from RAG
Traditional RAG: documents are chunked, embedded, and retrieved at query time; the model reasons over raw fragments, and nothing persists between queries.
LLM Wiki (Second Brain): The AI pre-compiles knowledge into structured artifacts. When you query, it reads an already-organized wiki. Knowledge compounds — each ingestion makes the entire system more capable.
Tooling Stack
| Tool | Role | Why |
|---|---|---|
| Claude Code | The engine | Reads files, writes wiki entries, executes workflows |
| Obsidian | Visualization | Graph view shows entity relationships; great for humans |
| Git | Version control | Track every change to the wiki; diff knowledge over time |
| CLAUDE.md | Schema/rules | Defines how AI processes, structures, and maintains everything |
The Compounding Effect
This is the key insight that makes the second brain genuinely powerful:
- Week 1: You have 10 wiki entries. Queries are basic.
- Week 4: You have 50 entries with cross-references. Queries start finding non-obvious connections.
- Month 3: You have 200+ entries. The AI can synthesize from dozens of sources to answer complex questions you never anticipated.
- Month 6: Your second brain knows your domain, your preferences, your project history. It's not a search engine — it's a thinking partner.
Practical .md Files — Copy & Use
1. Project CLAUDE.md Template
# Project CLAUDE.md — [Project Name]
## Overview
[One-sentence description of what this project does]
Tech stack: [e.g., Next.js, TypeScript, Supabase, Vercel]
Status: [Active / Maintenance / Archived]
## Architecture Decisions
- **Database:** Supabase with RLS enabled — chosen for real-time + auth
- **Styling:** CSS Modules + design tokens — no Tailwind (per team preference)
- **Deployment:** Vercel with preview branches per PR
## Code Patterns
- Error handling: try/catch at API boundary, typed errors downstream
- Naming: camelCase for JS, kebab-case for files, PascalCase for components
- Module structure: feature-based folders (not type-based)
- All async operations use `async/await`, not raw Promises
## Known Challenges
- Supabase RLS policies must be updated when adding new tables
- Image optimization: use next/image, NOT raw <img> tags
- Rate limiting on external APIs — always implement exponential backoff
## Capabilities Declared
Claude can:
- Read/write all files in src/
- Run terminal commands (npm, git, curl)
- Access browser for visual testing
- Query the Supabase instance via CLI
Claude cannot:
- Deploy to production (requires manual approval)
- Access secrets directly (use env vars)
- Modify CI/CD pipeline config
## Active Context
Currently working on: [brief description of current task]
Last session ended at: [what was accomplished, what's next]
## Learnings Log
[2026-04-13] Discovered RLS policy blocks unauthenticated reads on 'posts' table.
Fixed by adding "anon" role to SELECT policy.
[2026-04-12] CSS Modules import order matters in Next.js — globals must load first.
Added explicit import order in layout.tsx.
[2026-04-10] API route returning 405 — was using GET handler but frontend sent POST.
Pattern: always match HTTP method in route handler.
2. Second Brain CLAUDE.md (Karpathy Method)
# Second Brain — CLAUDE.md
## Purpose
This is my personal LLM-native knowledge base. You are the librarian. Your job is to maintain, organize, and query this wiki.
## Directory Structure
- raw/ → Unprocessed inputs (PDFs, notes, URLs, transcripts)
- wiki/ → Compiled knowledge (structured Markdown, interlinked)
- queries/ → Saved queries and research logs
## Workflows
### ingest <file>
1. Read the file from raw/
2. Extract: key entities, facts, relationships, dates
3. For each entity:
   - If wiki page exists → update with new information, cite source
   - If wiki page doesn't exist → create new page
4. Add [[wikilinks]] to connect related entities
5. Update wiki/index.md with new entries
6. Log the ingestion in queries/research-log.md
### query <question>
1. Read relevant wiki pages (use wikilinks to traverse)
2. Synthesize an answer drawing from multiple sources
3. Cite which wiki pages informed the answer
4. If knowledge gaps exist, flag them
### lint
1. Scan all wiki/ files for broken [[wikilinks]]
2. Identify duplicate entities (same person/concept, different pages)
3. Find orphaned pages (no incoming links)
4. Report issues with suggested fixes
### compile
1. Re-process all raw/ files against current schema
2. Update wiki/ entries with any new information
3. Regenerate wiki/index.md
4. Run lint after compilation
## Entity Types
- People: wiki/people/firstname-lastname.md
- Concepts: wiki/concepts/concept-name.md
- Projects: wiki/projects/project-name.md
- Companies: wiki/companies/company-name.md
- Tools: wiki/tools/tool-name.md
## Page Template
Each wiki page should follow:
```
# [Entity Name]
**Type:** [Person|Concept|Project|Company|Tool]
**Last Updated:** [date]
## Summary
[2-3 sentence overview]
## Key Facts
- Fact 1 (source: raw/filename.md)
- Fact 2 (source: raw/filename.md)
## Connections
- [[Related Entity 1]] — relationship description
- [[Related Entity 2]] — relationship description
## Notes
[Additional context, observations, open questions]
```
## Rules
- Always cite sources from raw/ when updating wiki
- Never delete information — mark as [DEPRECATED] if outdated
- Use [[double brackets]] for all entity references
- Keep summaries under 200 words
- Date format: YYYY-MM-DD
3. Auto-Research Loop Template
# Auto-Research Loop — [Topic]
## Objective
Optimize: [specific metric to improve]
Baseline: [current measurement]
Target: [desired measurement]
## Methodology
Variable: [what we're changing each iteration]
Control: [what stays constant]
Metric: [how we measure success]
## Experiment Log
### Iteration 1
- **Hypothesis:** [what we think will improve the metric]
- **Change Applied:** [specific modification]
- **Result:** [metric before → metric after]
- **Assessment:** [worth pursuing / dead end / needs more data]
- **Next:** [what to try next]
### Iteration 2
- **Hypothesis:** [...]
- **Change Applied:** [...]
- **Result:** [...]
- **Assessment:** [...]
- **Next:** [...]
## Summary of Findings
| Iteration | Hypothesis | Result | Delta |
|-----------|-----------|--------|-------|
| 1 | [...] | [...] | [+/-] |
| 2 | [...] | [...] | [+/-] |
## Conclusions
- Best performing approach: [...]
- Diminishing returns observed at: [...]
- Recommended configuration: [...]
## Learnings for CLAUDE.md
[Copy these into your project CLAUDE.md]
- [Learning 1]
- [Learning 2]
4. Agent Team Configuration
# Agent Team — [Task Name]
## Team Structure
- **Coordinator:** Routes tasks, handles failures, synthesizes results
- **Researcher:** Deep dives, reads docs, calls APIs, writes findings
- **Developer:** Writes code, runs tests, implements features
- **QA/Security:** Reviews code, checks edge cases, validates security
## Task Assignment Rules
1. Research tasks → Researcher (use Opus for complex reasoning)
2. Implementation → Developer (use Sonnet for speed)
3. Code review → QA/Security (use Opus for thoroughness)
4. Synthesis → Coordinator (merge results, resolve conflicts)
## Handoff Protocol
When passing work between agents:
1. Write a summary (not full context) to handoff.md
2. Include: what was done, what remains, key decisions made
3. Reference specific files changed
4. Flag any blockers or open questions
## Shared Context
All agents read this file + project CLAUDE.md before starting.
Shared truth: [link to shared CLAUDE.md]
## Cost Budget
- Max per task: $X
- Preferred: 3x Sonnet parallel > 1x Opus sequential
- Escalate to Opus only for: security review, architecture decisions
## Success Criteria
- [ ] All tests pass
- [ ] Code review completed with no critical issues
- [ ] Documentation updated
- [ ] CLAUDE.md updated with learnings
5. Security Audit Checklist
# Security Audit Checklist
## Pre-Deploy (Must Pass)
- [ ] No API keys in git history: `git log -p | grep "sk-"`
- [ ] .env file in .gitignore
- [ ] All dependencies audited: `npm audit` / `pip audit`
- [ ] HTTPS enforced (no plain HTTP)
- [ ] CORS configured (not allowing all origins)
## Authentication & Authorization
- [ ] Password hashing: bcrypt or Argon2 (not MD5/SHA)
- [ ] Session tokens are httpOnly, secure, sameSite
- [ ] Rate limiting on login endpoints
- [ ] Account lockout after N failed attempts
## Database
- [ ] Row-Level Security (RLS) enabled
- [ ] Parameterized queries (no string concatenation)
- [ ] Backup strategy documented and tested
- [ ] Connection pooling configured
## API Security
- [ ] Input validation on ALL endpoints
- [ ] Rate limiting in place (100 req/min/IP default)
- [ ] Error messages don't leak internals
- [ ] Request size limits configured
## Payments
- [ ] Using Stripe/PayPal (never storing card data)
- [ ] Webhook signature verification implemented
- [ ] Idempotency keys for payment creation
## AI-Specific
- [ ] Prompt injection protections in place
- [ ] User inputs sanitized before passing to LLM
- [ ] LLM outputs validated before executing actions
- [ ] Cost limits / kill switches configured
## Post-Deploy
- [ ] Logging in place (who did what, when)
- [ ] Monitoring & alerting configured
- [ ] Incident response plan documented
- [ ] Regular dependency updates scheduled
Agent Harnesses
Five Key Components
1. LLM Model
The core intelligence. Claude Opus for complex reasoning, Sonnet for speed, Haiku for lightweight, high-volume tasks.
2. Tool Definitions
What the LLM can do. APIs, file access, code execution, search.
3. System Prompts
The personality and constraints. "You are a security auditor" vs. "You are a code generator."
4. Hooks & Callbacks
Functions that run before/after each action. Logging, validation, rate limiting.
5. Parameters
Temperature, max tokens, stop sequences. These shape behavior at runtime.
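The five components can be sketched as one small object. Everything below (the `Harness` class, the `TOOL:` dispatch convention, the stubbed model function) is illustrative, not a real SDK:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    """Minimal agent harness: model + tools + system prompt + hooks + params."""
    model: Callable[[str], str]                  # 1. LLM model (stubbed as a function)
    tools: dict[str, Callable[[str], str]]       # 2. tool definitions
    system_prompt: str                           # 3. system prompt (persona/constraints)
    hooks: list[Callable[[str], None]] = field(default_factory=list)  # 4. hooks
    params: dict = field(default_factory=dict)   # 5. runtime parameters (temperature, ...)

    def run(self, user_input: str) -> str:
        prompt = f"{self.system_prompt}\n{user_input}"
        for hook in self.hooks:            # pre-action hooks: logging, validation, rate limits
            hook(prompt)
        reply = self.model(prompt)
        # Naive tool dispatch: a reply like "TOOL:echo:hi" invokes a registered tool
        if reply.startswith("TOOL:"):
            _, name, arg = reply.split(":", 2)
            return self.tools[name](arg)
        return reply
```

The point of the sketch: model, tools, prompt, hooks, and parameters are separate, swappable parts, which is what makes one harness design better than another for the same model.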
The Dog Sled Analogy
Think of a harness like a dog sled:
- The dogs = the LLM model (their energy and intelligence)
- The sled = the harness (what it can carry, how it steers)
- The musher = you (your system prompts and context)
A powerful LLM in a poorly designed harness will crash. A decent LLM in a brilliant harness will often outperform it. Harness design matters as much as model quality.
Harness Comparison
| Harness | Strengths | Best For |
|---|---|---|
| Claude Code | File system, terminal, browser, memory | Local development, full-stack |
| Pydantic AI | Structured outputs, type-safe | Data pipelines, validation |
| Crew AI | Multi-agent orchestration | Complex research, investigation |
| LangChain | Flexible chains, broad integrations | Experimentation, prototyping |
Parallelization & Agent Teams
Why Parallelize?
Time Efficiency
Run N operations in parallel. What takes 3 minutes serially takes 1 minute with 3 agents.
Stochastic Diversity
Temperature > 0 means different outputs. Run 3 agents, get 3 perspectives, vote on the best.
Quality via Debate
Agent A says X, Agent B says not-X. The truth often lies in reconciling both perspectives.
Fan-Out / Fan-In Pattern
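In code, the pattern is a parallel map followed by a reduce. A sketch using threads and majority-vote fan-in; the stub agents stand in for parallel model sessions:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def fan_out_fan_in(task: str, agents: list) -> str:
    """Fan out one task to N agents in parallel, fan in by majority vote."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        answers = list(pool.map(lambda agent: agent(task), agents))
    # Fan-in: pick the most common answer (ties broken by first seen)
    return Counter(answers).most_common(1)[0][0]

# Stub agents: with temperature > 0, real sessions would disagree like this
agents = [lambda t: "42", lambda t: "42", lambda t: "41"]
```

Voting is the simplest fan-in; the "debate" variant feeds the disagreeing answers back to a judge agent instead.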
Cost: Opus vs. Sonnet
| Scenario | 1x Opus | 3x Sonnet (parallel) | Cost Ratio |
|---|---|---|---|
| Write & test code | $0.60 | $0.18 | 3x cheaper |
| Code review | $0.30 | $0.15 | 2x cheaper |
| Research task | $3.00 | $0.90 | 3x cheaper |
Key Insight: 3 Sonnet agents debating often outperform 1 Opus agent, at 1/3 the cost and faster turnaround.
Skills, Sub-Agents & Context Management
| Aspect | Skill | Sub-Agent |
|---|---|---|
| Instantiation | Load into context | Spawn new session |
| Isolation | Logical | Process-level |
| Memory | Shared with parent | Isolated to session |
| Latency | Instant (in-context) | Seconds (session spin-up) |
| Cost | Amortized | Separate billing per agent |
Context Management at Scale
- Context Explosion: 10 agents × 128k tokens = 1.28M tokens just for state
- Attention Loss: Important details get buried in noise
- Coherence Drift: Agents' understanding diverges without shared CLAUDE.md
Solutions:
- Shared CLAUDE.md: All agents read the same truth file first
- Async Handoff Docs: Write a summary, not the full context
- Summarization Layers: Long discussions get compressed to bullet points
- Archival: Completed sub-tasks move to read-only archives
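The async-handoff idea reduces to rendering a few structured lists instead of shipping the whole transcript. A sketch (the section names mirror the handoff protocol above; the function itself is illustrative):

```python
def write_handoff(done: list[str], remaining: list[str],
                  decisions: list[str], files: list[str]) -> str:
    """Render a compact handoff.md body instead of passing full context."""
    def section(title: str, items: list[str]) -> str:
        return f"## {title}\n" + "\n".join(f"- {item}" for item in items)
    return "\n\n".join([
        "# Handoff",
        section("Done", done),
        section("Remaining", remaining),
        section("Key Decisions", decisions),
        section("Files Changed", files),
    ])
```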
Auto Research — The Karpathy Method
The Research Loop
Hypothesis
Form a testable hypothesis about what might improve your metric.
Execute
Apply the change. One variable at a time. Keep everything else constant.
Assess
Measure the result. Did the metric improve? By how much?
Log
Record everything in CLAUDE.md. Feed forward to next cycle.
Time per cycle: ~2 minutes. Daily capacity: 240+ experiments in 8 hours.
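The loop above can be sketched as a greedy search that changes one variable per cycle. The metric and config here are toy stand-ins (a real run would measure latency, pass rate, etc.):

```python
def auto_research(measure, apply_change, candidates, baseline_config):
    """Hypothesis → execute → assess → log, one variable at a time."""
    log = []
    best_config, best_metric = baseline_config, measure(baseline_config)
    for change in candidates:                    # each candidate is one hypothesis
        config = apply_change(best_config, change)
        metric = measure(config)                 # assess: did the metric improve?
        log.append({"change": change, "metric": metric,
                    "improved": metric > best_metric})
        if metric > best_metric:                 # feed forward to the next cycle
            best_config, best_metric = config, metric
    return best_config, log

# Toy experiment: a metric that peaks at 4 threads
measure = lambda cfg: -abs(cfg["threads"] - 4)
apply_change = lambda cfg, t: {**cfg, "threads": t}
```

In practice the `log` entries are what you copy into CLAUDE.md so the next session starts from the best known configuration.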
Three Requirements
1. Metric
Quantifiable measure of success. Not "feels better" — measurable values like latency, pass rate, accuracy.
2. Change Method
Systematic variation. Algorithm A → B → C. Thread count: 1 → 2 → 4 → 8. One variable at a time.
Real-World Results
Shopify Liquid Engine
Result: 53% faster execution, 61% fewer allocations after 50+ hypotheses over 2 weeks.
LLM Inference Optimization
Result: 40% latency reduction via batch size, quantization, and caching exploration in 3 days.
Browser & Internet Automation
The Three-Level Spectrum
| Level | Technology | Speed | Generality |
|---|---|---|---|
| Level 1: HTTP | Direct API calls, curl | Milliseconds | Limited (API only) |
| Level 2: Browser | Selenium, Playwright, Chrome DevTools MCP | 1-2 seconds/action | High (UI, JS, dynamic) |
| Level 3: Computer Use | Claude sees screen, clicks, types | 10-30 seconds/action | Universal (anything on screen) |
Optimization Path
Step 1: Prototype with Browser Use — fast iteration, see results immediately.
Step 2: Measure where time is spent. If it's an API behind the UI, extract the API call.
Step 3: Call the API directly. A 10-second browser interaction becomes a 100ms HTTP request, roughly a 100x speedup.
Performance Fluctuations & Diversification
Three Diversification Strategies
Model Diversity
Use Claude Opus + Sonnet + external models. Route complex reasoning to Opus, simple tasks to Sonnet, specialized tasks to alternatives.
Approach Diversity
Chains + Agents + Direct inference. Different problem types need different approaches. Try multiple in parallel.
The 70/30 Rule
70% of your workload goes to Claude (your core). 30% explores alternatives to prevent lock-in. This gives you fallback options and comparative benchmarks.
Performance Monitoring
| Metric | How to Measure | Alert When |
|---|---|---|
| Accuracy per model | % matching expected quality | Drops below 85% |
| Latency per model | Time to first token, total time | Exceeds 5s |
| Cost per task type | Input + output tokens × price | 30% over budget |
| Error rate | Failures / total | Exceeds 5% |
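The alert conditions in the table reduce to simple threshold checks. A sketch (threshold values copied from the table; metric names are illustrative):

```python
THRESHOLDS = {  # from the monitoring table; tune per deployment
    "accuracy":         ("min", 0.85),  # alert when it drops below 85%
    "latency_s":        ("max", 5.0),   # alert when total time exceeds 5s
    "cost_over_budget": ("max", 0.30),  # alert when 30% over budget
    "error_rate":       ("max", 0.05),  # alert when failures exceed 5%
}

def check_alerts(metrics: dict) -> list[str]:
    """Return alert messages for any metric outside its threshold."""
    alerts = []
    for name, value in metrics.items():
        kind, limit = THRESHOLDS[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            alerts.append(f"{name}={value} breaches {kind} threshold {limit}")
    return alerts
```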
Workspace Organization
Business Workspace Structure
Key Principles
- CLAUDE.md at each level: Global → Team → Client → Project hierarchy
- Isolation: Each client in their own folder
- Shared tools: Common scripts in /internal/tools, not duplicated
- Active vs. Archive: Completed projects move to archive
Cleanup Routine
Weekly (15 min)
- Delete temp files
- Archive completed tasks
- Update READMEs
Monthly (1 hour)
- Review active projects
- Consolidate duplicates
- Update CLAUDE.md
Security for AI-Powered Projects
Five Key Vulnerabilities
Security Audit Checklist
- No API keys in git history
- .env file in .gitignore
- All external dependencies audited
- Database has row-level security enabled
- Password hashing: bcrypt or Argon2
- HTTPS enforced
- CORS configured properly
- Rate limiting in place
- Input validation on all user inputs
- Payments use Stripe/PayPal, not stored cards
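Several checklist items (rate limiting, cost limits, kill switches) come down to the same primitive. A minimal token-bucket sketch, one instance per client or per API key:

```python
import time

class TokenBucket:
    """Simple rate limiter: allow `rate` requests per `per` seconds."""

    def __init__(self, rate: int, per: float):
        self.capacity = rate
        self.tokens = float(rate)
        self.fill_rate = rate / per       # tokens regained per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The same shape works as an LLM cost limiter if tokens are dollars instead of requests.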
The Future of Claude Code & Work
"The future is here, it's just not evenly distributed." — William GibsonEarly adopters of advanced Claude Code techniques will have a 10x advantage over those still using generic prompts.
The Decreasing Human Involvement Spectrum
| Era | Human Role | Example |
|---|---|---|
| 2024 | Heavy guidance | Write detailed prompts, review every output |
| 2025 | Spot checks | Set direction, verify key outputs |
| 2026 | Supervision | Define goals, monitor agent performance |
| 2027+ | Passive oversight | Set strategy, review summaries |
Economic Moat Shift
Pre-AI: Competitive advantage = build features competitors can't.
AI Era: Features copied in weeks. Advantage shifts to distribution, brand, network effects, and speed-to-market.
The 1% Advantage
In a world where everyone has access to the same AI models, the 1% advantage comes from:
Prompt Craft
Knowing how to ask questions that yield extraordinary responses.
Tool Mastery
Deep understanding of harness design, parallelization, specialization.
Context Management
CLAUDE.md maintenance, knowledge compression, token efficiency.
Feedback Loops
Measure, iterate, improve. Most people never close the loop.
Your Competitive Edge
- You understand harnesses (most people just use the defaults)
- You parallelize (most people run single agents)
- You use auto-research (most people prompt once and move on)
- You build a second brain (most people lose knowledge between sessions)
- You maintain CLAUDE.md (most people have chaotic context)
- You think in feedback loops (most people don't measure)