Glossary

AutoGen

AutoGen (Wu et al., Microsoft Research, 2023) is one of the earliest and most influential multi-agent orchestration frameworks. Its central abstraction is the ConversableAgent, an entity with an inbox, an outbox, optional tools, and an LLM brain.

Core abstractions

# AutoGen 0.2-style API
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager

llm_config = {"model": "gpt-4"}

coder = AssistantAgent(
    name="coder",
    system_message="You write Python.",
    llm_config=llm_config,
)

critic = AssistantAgent(
    name="critic",
    system_message="You review code for bugs.",
    llm_config=llm_config,
)

# The user proxy executes any Python code blocks it receives in work_dir.
user = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",  # fully automated loop; omit to keep a human in it
    code_execution_config={"work_dir": "tmp"},
)

chat = GroupChat(agents=[coder, critic, user], max_round=10)
# The manager needs an LLM config of its own to pick the next speaker.
manager = GroupChatManager(groupchat=chat, llm_config=llm_config)
user.initiate_chat(manager, message="Write a quick-sort function.")

Agents take turns speaking; a GroupChatManager picks the next speaker either round-robin, by LLM vote, or by a developer-supplied rule.
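The turn-taking logic can be sketched without the framework. The following is a framework-free toy model of the manager's speaker-selection step; the function names and the `history` structure are illustrative, not AutoGen's actual API.

```python
# Toy sketch of a GroupChatManager's speaker-selection step.
# `agents` is an ordered list of names; `history` records who spoke when.

def round_robin(agents, history):
    """Pick the next speaker by cycling through the agent list."""
    if not history:
        return agents[0]
    last = history[-1]["speaker"]
    return agents[(agents.index(last) + 1) % len(agents)]

def rule_based(agents, history):
    """Developer-supplied rule: after the coder speaks, the critic reviews."""
    if history and history[-1]["speaker"] == "coder":
        return "critic"
    return round_robin(agents, history)

agents = ["coder", "critic", "user"]
history = [{"speaker": "user"}, {"speaker": "coder"}]
print(round_robin(agents, history))  # critic
print(rule_based(agents, history))   # critic
```

An LLM-vote selector would replace the rule with a model call that is shown the transcript and the agent descriptions and asked to name the next speaker.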

Distinctive features

  1. Code execution: a UserProxyAgent can execute Python code blocks in a sandbox, closing the agent loop without manual intervention.
  2. Human-in-the-loop: the same UserProxyAgent can prompt a real human at configurable points.
  3. Auto-reply: agents register reply functions that fire when an incoming message matches a trigger.
  4. Teachability: a long-term memory plug-in that vector-stores feedback.
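The auto-reply mechanism above can be modelled in a few lines. This is a simplified stand-alone sketch of the idea (an agent holds an ordered list of trigger/reply pairs, and the first match fires); the class and method names are illustrative, not the real autogen API.

```python
# Simplified model of auto-reply: registered (pattern, fn) pairs are
# checked in order against each incoming message.
import re

class TinyAgent:
    def __init__(self, name):
        self.name = name
        self.reply_funcs = []  # list of (compiled pattern, reply fn)

    def register_reply(self, pattern, fn):
        self.reply_funcs.append((re.compile(pattern), fn))

    def receive(self, message):
        for pattern, fn in self.reply_funcs:
            if pattern.search(message):
                return fn(message)
        return None  # no registered reply fired

critic = TinyAgent("critic")
critic.register_reply(r"```python", lambda m: "Reviewing the code block...")

print(critic.receive("Here you go:\n```python\nprint(1)\n```"))
# Reviewing the code block...
```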

Patterns enabled

  • Two-agent chat: assistant + user-proxy with code execution; the simplest agentic loop.
  • Group chat: N agents; a manager picks the next speaker.
  • Nested chats: an agent's reply triggers a sub-conversation.
  • Sequential chats: a pipeline of focused conversations sharing state.
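The sequential-chat pattern reduces to a pipeline in which each stage's summary is carried into the next stage's context. The sketch below is a toy model of that flow; the stage functions are illustrative stand-ins for LLM conversations, not AutoGen's `initiate_chats` API.

```python
# Toy sequential-chat pipeline: each stage is one focused "conversation"
# whose summary is appended to the shared context before the next stage runs.

def run_pipeline(stages, task):
    context = task
    transcript = []
    for name, stage in stages:
        summary = stage(context)                      # one focused conversation
        transcript.append((name, summary))
        context = f"{context}\n[{name}] {summary}"    # shared state flows onward
    return transcript

stages = [
    ("spec",   lambda ctx: "Spec: sort a list with quick-sort."),
    ("code",   lambda ctx: "Code drafted against the spec above."),
    ("review", lambda ctx: "Review: looks correct; add tests."),
]
for name, summary in run_pipeline(stages, "Write a quick-sort function."):
    print(name, "->", summary)
```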

Versions

  • AutoGen 0.2: the original Python framework, conversable-agent centric.
  • AutoGen 0.4 (2024): an async, event-driven rewrite with a cleaner extension model.
  • AG2 (2024 community fork): a continuation of the 0.2 line.

Modern relevance

AutoGen's group-chat metaphor has been hugely influential: CrewAI, LangGraph, and OpenAI Swarm all owe it conceptual debts. However, by 2025 the field has cooled on N-agent group chats in favour of a single agent with good tool use (see the multi-agent orchestration note on diminishing returns). AutoGen remains a strong choice for research prototyping and for tasks that genuinely need multiple LLM voices (e.g. debate).

Citation

Wu, Q. et al. (2023). AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation. arXiv:2308.08155.

Related terms: Multi-Agent Orchestration, CrewAI, MetaGPT, LangChain
