Aider is a command-line AI pair-programming tool launched by Paul Gauthier in May 2023. It connects to OpenAI, Anthropic, or local models, and lets the developer edit a real codebase by chatting with the LLM. Aider's distinctive engineering contribution is its careful study of edit formats: the wire formats the model uses to express file modifications.
The workflow
```
$ aider main.py utils.py
aider> Refactor utils.py to use pathlib instead of os.path
```

Aider edits `utils.py`, runs the tests, and commits with the message:

```
refactor: replace os.path with pathlib in utils.py
```
Aider:
- Loads the repo into context.
- Builds a repo map (a tree-sitter-derived summary of every file).
- Sends the user request + relevant files + repo map to the LLM.
- Parses the response in one of several edit formats.
- Applies edits, runs lints/tests, and commits.
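The steps above can be sketched as an apply-test-retry loop. This is a minimal illustration, not aider's real internals: the function names, the `{path: text}` file representation, and the toy LLM are all invented for the example, and the real tool also commits to git and consults the repo map at each step.

```python
def edit_loop(files, request, llm, run_tests, max_retries=3):
    """Sketch of aider's core loop: ask the model for edits, apply them,
    run the tests, and feed failures back until the tests pass."""
    for _ in range(max_retries):
        edits = llm(request, files)   # model returns {path: new_text}
        files.update(edits)           # apply the edits
        ok, output = run_tests(files) # lint/test step
        if ok:
            return files              # real aider would git-commit here
        request = f"Tests failed:\n{output}\nPlease fix."
    return files

# Toy usage: a fake "LLM" that fixes a bug on its second try.
calls = {"n": 0}

def fake_llm(request, files):
    calls["n"] += 1
    if calls["n"] > 1:
        return {"util.py": "def add(a, b):\n    return a + b\n"}
    return {}

def fake_tests(files):
    ok = "a + b" in files.get("util.py", "")
    return ok, "" if ok else "AssertionError: add(1, 2) != 3"

result = edit_loop(
    {"util.py": "def add(a, b):\n    return a - b\n"},
    "fix add", fake_llm, fake_tests,
)
print("a + b" in result["util.py"])  # True
```

The key design point is that test output re-enters the prompt, so the model sees its own failures rather than editing blind.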
Edit formats
Gauthier's blog posts measure which edit format yields the highest success rate on SWE-bench-style benchmarks. Formats include:
| Format | Description | Best for |
|---|---|---|
| whole | Model rewrites the entire file | Small files, weak models |
| diff | Unified diff blocks | Standard, widely supported |
| search-replace | `<<<<<<< SEARCH ... ======= ... >>>>>>> REPLACE` blocks | Most reliable on Claude, GPT-4 |
| udiff | Strict unified diff syntax | Strong models only |
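To make the search-replace format concrete, here is a simplified sketch of how such a block might be parsed and applied. The regex and error handling are illustrative assumptions, not aider's actual parser, which handles fences, partial matches, and multiple blocks per reply.

```python
import re

# Matches one SEARCH/REPLACE block: old text, divider, new text.
SEARCH_REPLACE = re.compile(
    r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE",
    re.DOTALL,
)

def apply_search_replace(text, block):
    """Apply a single search/replace edit block to a file's contents."""
    m = SEARCH_REPLACE.search(block)
    if m is None:
        raise ValueError("not a valid SEARCH/REPLACE block")
    search, replace = m.group(1), m.group(2)
    if search not in text:
        raise ValueError("SEARCH text not found in file")
    return text.replace(search, replace, 1)

edit = """<<<<<<< SEARCH
import os.path
=======
from pathlib import Path
>>>>>>> REPLACE"""

src = "import os.path\n\nprint(os.path.join('a', 'b'))\n"
patched = apply_search_replace(src, edit)
print(patched.splitlines()[0])  # from pathlib import Path
```

The format's appeal is visible here: the model only has to reproduce a short snippet of existing code verbatim, which is far easier than emitting line-number-accurate unified diffs.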
Aider also publishes a regularly updated leaderboard ranking LLMs' code-editing skill on the same benchmark. This empirical rigour is highly unusual in the coding-agent space.
Distinctive features
- Git-native: every change is a commit, so rollback is trivial.
- Auto-test loop: `--auto-test` runs your test command and feeds failures back to the model.
- Repo map with tree-sitter: AST-derived summaries of all files, so the model knows what's there even without loading every file.
- Lint integration: the model is shown lint errors and asked to fix them.
- Chat mode vs apply mode: `/architect` mode separates planning from editing.
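The repo-map idea can be sketched in a few lines. Aider uses tree-sitter so it works across languages; this Python-only stand-in uses the stdlib `ast` module instead, and the output layout is an assumption for illustration: signatures without bodies.

```python
import ast

def file_summary(path, source):
    """Summarize a Python file as top-level class/function signatures,
    omitting all bodies, so it fits cheaply into the model's context."""
    tree = ast.parse(source)
    lines = [path + ":"]
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"    def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"    class {node.name}")
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    args = ", ".join(a.arg for a in item.args.args)
                    lines.append(f"        def {item.name}({args})")
    return "\n".join(lines)

summary = file_summary(
    "utils.py",
    "class Cache:\n    def get(self, key):\n        return None\n\n"
    "def main():\n    pass\n",
)
print(summary)
```

A map like this for every file costs a few dozen tokens each, which is why the model can "see" a whole repository without any file being fully loaded.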
Modern relevance
Aider was the first open-source tool to demonstrate that LLMs could maintain a real codebase rather than write toy snippets. By 2025 it is the canonical citation in coding-agent research and shapes the design of OpenHands, Devin, OpenAI Codex CLI (2025), and Anthropic's Claude Code. Its leaderboard remains one of the most-trusted real-world evaluations of model coding ability.
Citation
Gauthier, P. (2023). Aider. https://aider.chat/.
Related terms: OpenHands, Devin / AI Software Engineer, OpenAI Codex (2025 generation), SWE-Bench
Discussed in:
- Chapter 15: Modern AI