Curriculum

Learn

Move through a structured curriculum with clear tracks, progress signals, and lesson cards that tell you what to do next.

From zero to practical

Learn prompt and context engineering with real examples.

Start with how models read instructions, then learn how to shape context, choose techniques, control output, and evaluate results. This is built for people who want to understand the work, not memorize prompt tricks.

1

Open one lesson

Each lesson starts with the problem it solves, when to use it, and a practical example prompt.

2

Try the pattern

Use the Analyzer or Rewrite tool when you want to test the idea against your own prompt.

3

Track locally

Progress is saved in your browser, so this stays lightweight without accounts or sign-ins.

Start here

The core idea before any technique

Prompt engineering

Design the instruction so the model understands the job, audience, constraints, and desired output.

You are a senior support lead.
Summarize this customer ticket for an engineering handoff.
Return: issue, affected user, urgency, likely cause, missing information.

Context engineering

Decide what information enters the context window: documents, examples, user history, tool results, memory, and rules.

Use only the policy excerpts and ticket history below.
Ignore unrelated notes.
Cite the exact policy section behind the recommendation.
Zero-shot

Use when the task is simple and the format is obvious.

Few-shot

Use when examples clarify labels, tone, or output shape better than explanation.

Structured output

Use when the answer feeds a workflow, UI, spreadsheet, or automation.

Reasoning models

Use for hard multi-step work. Give clear goals and checks instead of forcing visible chain-of-thought.

Modules

Choose a track, then open the lessons inside it.

Module 1

Foundations

Core mental models for prompt and context engineering.

1 lesson
01
Core Track

Prompt Engineering Became Context Engineering

Not started

The 2025-2026 shift is from isolated prompts to whole systems that supply the right instructions, retrieval, memory, and tools.

Use when

When you are designing agents, retrieval workflows, or multi-step prompt systems.

Avoid when

When a short direct prompt already solves the task cleanly.

Best for

All frontier models, especially long-context and reasoning models.

Module 2

Controls

Parameters, structure, and format controls that change how models behave.

2 lessons
01
Builder Track

Sampling Controls: Temperature, Top-P, and Top-K

Not started

Use low randomness for extraction and code, balanced settings for assistants, and broader sampling only when creative range actually helps.

Use when

When you need to tune consistency versus creativity in the builder.

Avoid when

When you have not yet verified whether the prompt itself is the real issue.

Best for

OpenAI, Claude, Gemini, Mistral, and OpenRouter chat models.

02
Core Track

Formatting: Markdown, XML, JSON, and Position Engineering

Not started

The format you choose changes how the model parses the task, and instruction placement changes whether the model notices the important parts.

Use when

When you need more reliable execution or more structured outputs.

Avoid when

When the task is simple enough that extra formatting only adds tokens without adding clarity.

Best for

Markdown for OpenAI and Gemini, XML-like structure for Claude, JSON for app outputs.

Module 3

Core Techniques

The prompt patterns worth learning before you layer on advanced systems work.

3 lessons
01
Core Track

Zero-Shot, Few-Shot, and Role Prompting

Not started

Use zero-shot first on strong models, add few-shot only when examples clarify the task, and use role prompting mainly for tone or framing.

Use when

When you need a clear baseline technique before trying advanced prompting patterns.

Avoid when

When you are already working inside a tool-using or retrieval-driven system.

Best for

All strong chat models; few-shot is most helpful for style transfer and label boundaries.

02
Advanced Track

Chain-of-Thought, Self-Consistency, and Tree of Thoughts

Not started

These reasoning patterns still matter, but mainly for the right model class and the right task complexity.

Use when

When solving hard multi-step tasks on models that benefit from explicit reasoning scaffolds.

Avoid when

When using reasoning-native models or when the task is simple enough that extra tokens only slow things down.

Best for

Non-reasoning frontier models and open-source models on hard reasoning tasks.

03
Core Track

What Is Outdated: Universal CoT, Magic Phrases, and Prompt Lore

Not started

What used to work everywhere no longer does. The best modern prompting is model-aware, evaluation-driven, and systems-oriented.

Use when

When you are modernizing old prompt practices or teaching newer prompting standards.

Avoid when

Do not treat old techniques as dead forever; some still matter on the right model class.

Best for

All frontier models when you want current best practice.

Module 4

Reasoning Models

How prompting shifts on modern reasoning-native model families.

1 lesson
01
Builder Track

Reasoning Models: Use Cleaner, Shorter Prompts

Not started

Modern reasoning models already think internally, so the prompt should focus on clear tasks, output constraints, and success criteria.

Use when

When prompting GPT-5-style, Claude thinking, and similar reasoning-first models.

Avoid when

When using smaller or non-reasoning models that still benefit from explicit structure examples.

Best for

Reasoning-native OpenAI, Claude, and Gemini flows.

Module 5

Execution

Operational workflows for shipping, evaluating, and scaling prompt systems.

5 lessons
01
Systems Track

Context Compression, RAG, and Memory

Not started

Modern systems win by sending less junk, retrieving the right evidence, and preserving the right continuity across sessions.

Use when

When prompts are becoming long, retrieval quality matters, or the system needs continuity over time.

Avoid when

When a short standalone task already performs well without extra orchestration.

Best for

All long-context models and any production workflow with retrieval.

02
Systems Track

ReAct, Prompt Chaining, DSPy, and Agentic Workflows

Not started

Modern builders increasingly design flows rather than single prompts: reason, act, retrieve, validate, and continue.

Use when

When one prompt is not enough and the work naturally breaks into steps or tool calls.

Avoid when

When you can still solve the task clearly with one response and a strong output contract.

Best for

Tool-using models, agent SDKs, and orchestration frameworks.

03
Security Track

Prompt Injection and System Prompt Safety

Not started

Prompt injection is still the top practical risk in LLM apps, so prompts, tool rules, and untrusted inputs must be separated deliberately.

Use when

When the model consumes user content, web content, files, or tool results.

Avoid when

Never ignore this in production flows that touch untrusted data.

Best for

All providers and all apps that mix trusted and untrusted context.

04
Quality Track

Hallucination Reduction and Evaluation Loops

Not started

The strongest prompt systems are measured, verified, and tuned against real tasks rather than vibes.

Use when

When accuracy matters or you need to compare prompt versions responsibly.

Avoid when

When the task is pure brainstorming and factual correctness is not the main bar.

Best for

All models, especially high-stakes research and operational workflows.

05
Everyday Track

Everyday Refinement Loops for Chat Users

Not started

Most practical gains come from iterative prompting: ask, inspect, tighten, and continue with better constraints.

Use when

When you are using chat products directly for writing, planning, analysis, or coding support.

Avoid when

When you have already automated the workflow and manual refinement is no longer the bottleneck.

Best for

ChatGPT, Claude, Gemini, and similar assistant products.