Getting Started
Welcome to the pi-coding-agent tutorial -- a progressive, hands-on guide to building AI agents from scratch. By the end of these eight chapters, you will go from a single prompt-response script to a fully featured CLI agent complete with streaming output, custom tools, session persistence, user confirmations, and multi-session management.
Why pi-coding-agent?
Most AI tutorials stop at "call the API and print the response." Real-world agents, however, need much more: they stream responses in real time, invoke external tools, remember previous conversations, ask the user for permission before taking dangerous actions, and manage multiple concurrent sessions. pi-coding-agent is a framework that handles all of this plumbing so you can focus on the behavior of your agent rather than the infrastructure.
Think of pi-coding-agent as the "web framework" for AI agents. Just as Express or Fastify gives you routing, middleware, and request handling so you don't rewrite HTTP parsing from scratch, pi-coding-agent gives you session management, tool execution, event streaming, and resource loading so you don't reinvent the agent lifecycle from scratch.
What You'll Build
Across the eight chapters, you'll progressively construct a CLI agent that can:
- Talk -- Send a prompt and receive a response (Chapter 01)
- Stream -- Display responses in real time, typewriter-style (Chapter 02)
- Use tools -- Call external functions like weather APIs and calculators (Chapter 03)
- Remember -- Persist conversations to disk and resume them later (Chapter 04)
- Ask permission -- Pause and ask the user before executing dangerous operations (Chapter 05)
- Customize personality -- Load system prompts and skills dynamically (Chapter 06)
- Juggle conversations -- Manage multiple sessions in parallel (Chapter 07)
- Put it all together -- A production-quality CLI agent combining every technique (Chapter 08)
Here is a preview of the final Chapter 08 agent in action:
Each chapter is self-contained and runnable on its own, so you can jump to any topic that interests you -- though working through them in order gives the best learning experience.
Prerequisites
Before you begin, make sure you have:
- Bun runtime (v1.0 or later) -- This project uses Bun as its package manager and script runner. Bun is fast and supports TypeScript out of the box.
- An API key for at least one supported LLM provider: Anthropic, OpenAI, Google, or DeepSeek.
If you don't have Bun installed, run curl -fsSL https://bun.sh/install | bash on macOS/Linux. On Windows, use powershell -c "irm bun.sh/install.ps1 | iex".
Installation
Clone the repository and install dependencies:
You must use bun install, not npm install or pnpm install. The project's lockfile and scripts are configured for Bun. Using another package manager may result in dependency resolution errors or missing binaries.
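The clone-and-install step sketched below uses a placeholder for the repository URL (the actual URL isn't shown in this section) and an assumed checkout directory name:

```shell
git clone <repository-url>   # substitute the actual repository URL
cd pi-coding-agent           # assumed directory name
bun install                  # must be bun, not npm or pnpm
```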
Configuration
Copy the example environment file and fill in your API key:
Edit .env to set your provider and key:
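A minimal `.env` might look like the following. The `PROVIDER` variable name is an assumption here (check `.env.example` for the exact names); the key variable names match those used in the Troubleshooting section:

```
# Choose one provider and set its key.
PROVIDER=anthropic
ANTHROPIC_API_KEY=your-key-here

# For OpenAI or DeepSeek instead:
# PROVIDER=openai
# OPENAI_API_KEY=your-key-here
```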
The shared/model.ts helper reads these environment variables at startup and constructs a Model object that every chapter uses. This means you configure your provider once and every chapter picks it up automatically.
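To make the pattern concrete, here is a minimal sketch of what a helper like shared/model.ts might do. This is not the real file's API -- the function name, the `PROVIDER` variable, and the `GOOGLE_API_KEY` name are assumptions; only the Anthropic/OpenAI key names and the DeepSeek key-sharing behavior come from this document:

```typescript
// Hypothetical sketch of a shared/model.ts-style helper.
type Provider = "anthropic" | "openai" | "google" | "deepseek";

// Map each provider to the environment variable holding its key.
// DeepSeek shares OPENAI_API_KEY because it speaks the OpenAI API format.
const KEY_VARS: Record<Provider, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  openai: "OPENAI_API_KEY",
  google: "GOOGLE_API_KEY", // assumed name
  deepseek: "OPENAI_API_KEY",
};

function loadModelConfig(env: Record<string, string | undefined>) {
  const provider = env.PROVIDER as Provider | undefined;
  if (!provider || !(provider in KEY_VARS)) {
    throw new Error("Set PROVIDER to anthropic, openai, google, or deepseek");
  }
  const keyVar = KEY_VARS[provider];
  const apiKey = env[keyVar];
  if (!apiKey) {
    // Mirrors the "Missing ANTHROPIC_API_KEY" error described below.
    throw new Error(`Missing ${keyVar}`);
  }
  return { provider, apiKey };
}
```

Reading the environment once at startup and failing fast on a missing key is what lets every chapter share one configuration without repeating provider logic.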
Supported Providers
DeepSeek uses the OpenAI-compatible API format, which is why it shares the OPENAI_API_KEY environment variable. If you want to use both OpenAI and DeepSeek, you'll need to swap the OPENAI_API_KEY value whenever you switch between them.
Run Any Chapter
Each chapter is independently runnable. You don't need to complete earlier chapters to run later ones:
Under the hood, each bun run chXX command invokes tsx chapters/XX-name/index.ts, which runs the TypeScript file directly without a separate compilation step.
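The scripts section of package.json might look roughly like this sketch; the chNN script names follow the pattern described above, but the chapter directory names are assumptions for illustration:

```json
{
  "scripts": {
    "ch01": "tsx chapters/01-hello-agent/index.ts",
    "ch02": "tsx chapters/02-streaming/index.ts"
  }
}
```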
Project Structure
Each chapter directory contains an index.ts entry point and occasionally additional files (like tools.ts for tool definitions). The shared/ directory holds utilities used across all chapters.
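The layout described above might look roughly like this; apart from index.ts, tools.ts, and shared/model.ts, the names are assumptions:

```
chapters/
  01-hello-agent/    # directory names assumed
    index.ts         # chapter entry point
  03-tools/
    index.ts
    tools.ts         # tool definitions
shared/
  model.ts           # builds the Model from env vars
.env.example
package.json
```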
Key Dependencies
You don't need to understand all of these packages before starting. Each chapter introduces the relevant packages as it needs them.
Troubleshooting
"Missing ANTHROPIC_API_KEY" error
The shared/model.ts helper checks for the presence of your API key at startup. If you see this error:
- Make sure you've created a .env file (not .env.example)
- Verify the key name matches your provider -- e.g., ANTHROPIC_API_KEY for Anthropic, OPENAI_API_KEY for OpenAI
- Ensure there are no extra spaces or quotes around the key value
"Command not found: bun"
You need to install the Bun runtime. See bun.sh for installation instructions.
"Cannot find module '@mariozechner/pi-coding-agent'"
Run bun install from the project root. If you previously used npm install, delete node_modules/ first and re-run with Bun.
Chapters run but produce no output
Make sure your API key is valid and has available credits. You can test connectivity by running bun run ch01 first, which sends a single prompt and prints the response.
Next Steps
Start with Chapter 01: Hello Agent to create your first agent. It takes less than 30 lines of code to go from zero to a working AI agent.