Quick start guide to get Microsoft AutoGen running.
Using pip:
```bash
# Install AgentChat + OpenAI support
pip install -U "autogen-agentchat" "autogen-ext[openai]"
```
Using uv:
```bash
uv pip install -U "autogen-agentchat" "autogen-ext[openai]"
```
Legacy v0.2.x (if needed):
```bash
pip install pyautogen~=0.2.0
```
Set your API key:

```bash
export OPENAI_API_KEY="sk-your-openai-api-key"
```
Or for Azure OpenAI:
```bash
export AZURE_OPENAI_API_KEY="your-key"
export AZURE_OPENAI_ENDPOINT="https://your-endpoint.openai.azure.com/"
```
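The model client falls back to these environment variables when no key is passed explicitly, so a missing key only surfaces once the first request is made. A small preflight check (plain Python, no AutoGen imports; `check_api_key` is a hypothetical helper, not part of AutoGen) can fail fast instead:

```python
import os
import sys


def check_api_key() -> str:
    """Return the name of the first configured API key variable, or exit."""
    for var in ("OPENAI_API_KEY", "AZURE_OPENAI_API_KEY"):
        if os.environ.get(var):
            return var
    sys.exit("No API key found: set OPENAI_API_KEY or AZURE_OPENAI_API_KEY.")
```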
Create hello.py:
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create model client
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create assistant agent
    agent = AssistantAgent(
        "assistant",
        model_client=model_client,
    )

    # Run the agent
    result = await agent.run(task="Say 'Hello World!'")
    print(result)

    # Close the client
    await model_client.close()


asyncio.run(main())
```
Run it:
```bash
python hello.py
```
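`run()` returns a task result whose `messages` list holds the full conversation; printing the whole object dumps every message. To show only the final reply, index the last message. The sketch below uses stand-in dataclasses (not AutoGen's real classes, which live in `autogen_agentchat`) purely to illustrate the access pattern, assuming each message exposes a `content` attribute as AgentChat's messages do:

```python
from dataclasses import dataclass, field


# Stand-ins for AgentChat's result/message types, for illustration only.
@dataclass
class Message:
    source: str
    content: str


@dataclass
class TaskResult:
    messages: list = field(default_factory=list)


result = TaskResult(messages=[
    Message("user", "Say 'Hello World!'"),
    Message("assistant", "Hello World!"),
])

# Print just the final reply instead of the whole result object.
print(result.messages[-1].content)
```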
Create multi_agent.py:
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent, UserProxyAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create agents
    coder = AssistantAgent(
        "coder",
        model_client=model_client,
        description="A helpful coding assistant",
        system_message="You are a helpful coding assistant. Write code when requested.",
    )
    reviewer = AssistantAgent(
        "reviewer",
        model_client=model_client,
        description="A code reviewer",
        system_message="You are a code reviewer. Review code for quality and best practices.",
    )
    # Note: code_execution_config is a v0.2-only option; in v0.4 the
    # UserProxyAgent simply relays human input (from the console by default).
    user_proxy = UserProxyAgent(
        "user_proxy",
        description="A human user proxy",
    )

    # Create a team with round-robin speaker order
    team = RoundRobinGroupChat(
        [coder, reviewer, user_proxy],
        max_turns=10,
    )

    # Run the team
    result = await team.run(
        task="Create a Python function to calculate fibonacci numbers"
    )
    print(result)

    await model_client.close()


asyncio.run(main())
```
Run it:
```bash
python multi_agent.py
```
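For reference, here is the kind of function the team is asked to produce, written by hand rather than by the agents: a plain iterative Fibonacci.

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number (0-indexed: fibonacci(0) == 0)."""
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


print([fibonacci(i) for i in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

In a real run, the reviewer agent would critique whatever the coder produced, and the user proxy lets you steer or approve the result.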
```bash
# AgentChat + OpenAI
pip install -U "autogen-agentchat" "autogen-ext[openai]"

# Azure OpenAI
pip install -U "autogen-ext[azure]"

# Anthropic
pip install -U "autogen-ext[anthropic]"

# Ollama (local LLM)
pip install -U "autogen-ext[ollama]"

# MCP (Model Context Protocol)
pip install -U "autogen-ext[mcp]"
```
```bash
pip install -U autogenstudio

# Run the studio
autogenstudio ui --port 8080
```
Access at: http://localhost:8080
If you need the older v0.2.x API:
```bash
pip install pyautogen~=0.2.0
```
Example for v0.2.x:
```python
from autogen import AssistantAgent, UserProxyAgent

# Create agents
assistant = AssistantAgent(
    "assistant",
    llm_config={"config_list": [{"model": "gpt-4o"}]},
)
user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding"},
)

# Start conversation
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA stock price change YTD.",
)
```
```bash
dotnet add package Microsoft.AutoGen
```
Example:
```csharp
using Microsoft.AutoGen;

var agent = new AssistantAgent(
    "assistant",
    llmConfig: new LLMConfig { Model = "gpt-4o" }
);

await agent.RunAsync("Say hello!");
```
AutoGen Studio provides a visual interface for building and testing agents:
```bash
# Install
pip install -U autogenstudio

# Run
autogenstudio ui --port 8080 --appdir ./myapp
```
Features: