AutoGen 0.4 is a ground-up rebuild of Microsoft's framework for building multi-agent systems. A new modular architecture, fully asynchronous APIs, and integration with Azure and Semantic Kernel: this is not an ordinary upgrade, it is a complete redesign.
- The new modular architecture of AutoGen 0.4
- Key differences vs AutoGen 0.2
- Creating agents and teams
- Tools and integrations
- Conversation termination conditions
- Different team types (GroupChat)
- Practical examples
Why AutoGen 0.4?
Microsoft rewrote AutoGen from scratch to address the limitations of version 0.2, chiefly its monolithic design, weak observability and debugging, and an architecture that was hard to scale.
The new modular architecture
AutoGen 0.4 is split into three main packages: autogen-core (the event-driven runtime), autogen-agentchat (the high-level agent and team API used throughout this article), and autogen-ext (model clients and other integrations).
Installation

```shell
# Basic installation (OpenAI)
pip install autogen-agentchat "autogen-ext[openai]"

# With Azure OpenAI
pip install autogen-agentchat "autogen-ext[azure]"

# With Anthropic (Claude)
pip install autogen-agentchat "autogen-ext[anthropic]"

# Full installation
pip install autogen-agentchat "autogen-ext[openai,azure,anthropic,docker]"

# Check the installed version
pip show autogen-agentchat
```
Your first agent (0.4 style)
A basic agent in the new API:
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    # 1. Create the model client
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key is picked up automatically from OPENAI_API_KEY
    )

    # 2. Create the agent
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="""You are a helpful AI assistant.
        Be concise and informative in your responses.""",
    )

    # 3. Run a task (it MUST be awaited!)
    response = await agent.run(task="Explain quantum computing in 3 sentences.")

    # 4. Read the last message
    print(response.messages[-1].content)


# Run the async function
asyncio.run(main())
```
AutoGen 0.4 is fully asynchronous: every run() call must be awaited. Forget the await and you get RuntimeWarning: coroutine was never awaited.
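This await requirement is plain asyncio behavior, not anything AutoGen-specific. A framework-free sketch (no AutoGen or API calls, so it runs anywhere) of the right and wrong pattern:

```python
import asyncio


async def run_task(task: str) -> str:
    """Stand-in for an async agent call such as agent.run(...)."""
    await asyncio.sleep(0)  # simulate awaiting I/O (the model call)
    return f"done: {task}"


# Correct: drive the coroutine from sync code with asyncio.run
# (inside async code, or in a Jupyter cell, write: result = await run_task(...))
result = asyncio.run(run_task("explain quantum computing"))
print(result)  # done: explain quantum computing

# Incorrect: calling without await only creates a coroutine object; nothing runs,
# and Python warns "RuntimeWarning: coroutine ... was never awaited" on cleanup
coro = run_task("never awaited")
print(type(coro).__name__)  # coroutine
coro.close()  # closed explicitly here only to keep the demo warning-free
```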
An agent with tools
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Tool definitions (plain Python functions with docstrings)
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression.

    Args:
        expression: Mathematical expression to evaluate (e.g., "2 + 2 * 3")

    Returns:
        Result of the calculation as string
    """
    try:
        result = eval(expression)  # demo only: eval is unsafe on untrusted input
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {str(e)}"


def get_weather(city: str) -> str:
    """Get current weather for a city.

    Args:
        city: Name of the city

    Returns:
        Weather information as string
    """
    # Stub - call a real weather API in production
    return f"Weather in {city}: 22C, sunny"


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # An agent equipped with tools
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[calculate, get_weather],
        system_message="""You are a helpful assistant with access to tools.
        Use the calculate tool for math and get_weather for weather queries.
        Always use tools when appropriate.""",
    )

    # The agent picks the appropriate tools automatically
    result = await agent.run(
        task="What's 15% of 250? Also, what's the weather in Warsaw?"
    )
    for msg in result.messages:
        print(f"{msg.source}: {msg.content}")


asyncio.run(main())
```
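A caveat on the calculate tool: eval executes arbitrary Python, so it is acceptable only in a throwaway demo. A hardened drop-in replacement (safe_calculate is a hypothetical helper, not part of AutoGen) can parse the expression with the stdlib ast module and allow only arithmetic:

```python
import ast
import operator

# Whitelisted operators; anything outside this table is rejected
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.Mod: operator.mod,
    ast.USub: operator.neg, ast.UAdd: operator.pos,
}


def _eval_node(node: ast.AST) -> float:
    """Recursively evaluate a numeric-literal/arithmetic-only AST."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.left), _eval_node(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.operand))
    raise ValueError(f"Unsupported expression: {ast.dump(node)}")


def safe_calculate(expression: str) -> str:
    """Evaluate a purely arithmetic expression without eval()."""
    try:
        result = _eval_node(ast.parse(expression, mode="eval").body)
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {e}"


print(safe_calculate("0.15 * 250"))        # Result: 37.5
print(safe_calculate("__import__('os')"))  # rejected: prints an Error message
```

Because it only has a docstring and typed parameters, safe_calculate can be passed to `tools=[...]` exactly like the eval-based version.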
Agent teams (GroupChat)
AutoGen's real strength is the ability to build teams of agents that collaborate:
RoundRobinGroupChat
Agents take turns in a fixed, cyclic order:
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Researcher - gathers information
    researcher = AssistantAgent(
        name="Researcher",
        model_client=model_client,
        system_message="""You are a research specialist.
        Your job is to find and present factual information.
        Be thorough but concise.""",
    )

    # Critic - analyzes and critiques
    critic = AssistantAgent(
        name="Critic",
        model_client=model_client,
        system_message="""You are a critical analyst.
        Your job is to find flaws, gaps, and suggest improvements.
        Be constructive but thorough.""",
    )

    # Writer - produces the final output
    writer = AssistantAgent(
        name="Writer",
        model_client=model_client,
        system_message="""You are a professional writer.
        Your job is to synthesize information into clear, engaging content.
        When the content is final, end with: TASK_COMPLETE""",
    )

    # Termination condition
    termination = MaxMessageTermination(max_messages=10)

    # The round-robin team
    team = RoundRobinGroupChat(
        participants=[researcher, critic, writer],
        termination_condition=termination,
    )

    # Run the team
    result = await team.run(
        task="Write a brief overview of renewable energy trends in 2025"
    )

    # Print the conversation transcript
    for msg in result.messages:
        print(f"\n[{msg.source}]:\n{msg.content[:300]}...")


asyncio.run(main())
```
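The turn-taking logic itself is simple. A framework-free sketch of what a round-robin team with a message cap boils down to (toy string "agents", no LLM calls; the exact counting in AutoGen may differ, e.g. the task message can count toward the cap):

```python
from itertools import cycle


def run_round_robin(participants: list[str], max_messages: int) -> list[str]:
    """Cycle through participants, recording who speaks, until the cap is hit."""
    transcript: list[str] = []
    for speaker in cycle(participants):
        if len(transcript) >= max_messages:
            break
        transcript.append(speaker)  # a real team would append the agent's reply
    return transcript


order = run_round_robin(["Researcher", "Critic", "Writer"], max_messages=7)
print(order)
# ['Researcher', 'Critic', 'Writer', 'Researcher', 'Critic', 'Writer', 'Researcher']
```

Note how the cap cuts the cycle mid-round: the Researcher speaks a third time while the others do not.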
SelectorGroupChat
A model picks which agent to call next (more dynamic):
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Specialist agents
    coder = AssistantAgent(
        name="Coder",
        model_client=model_client,
        system_message="You write Python code. Only respond to coding requests.",
    )
    reviewer = AssistantAgent(
        name="Reviewer",
        model_client=model_client,
        system_message="You review code for bugs and improvements.",
    )
    tester = AssistantAgent(
        name="Tester",
        model_client=model_client,
        system_message="You write test cases for code.",
    )

    # Selector - the model chooses the next speaker
    team = SelectorGroupChat(
        participants=[coder, reviewer, tester],
        model_client=model_client,  # model used for speaker selection
        termination_condition=TextMentionTermination("APPROVED"),
    )

    result = await team.run(task="Create a function to calculate fibonacci numbers")


asyncio.run(main())
```
Termination conditions
AutoGen 0.4 ships several strategies for ending a conversation:
```python
from autogen_agentchat.conditions import (
    MaxMessageTermination,    # after N messages
    TextMentionTermination,   # when a given text appears
    TokenUsageTermination,    # after N tokens have been used
    TimeoutTermination,       # after a time limit
    HandoffTermination,       # on a handoff
    SourceMatchTermination,   # when a specific agent responds
)

# Example: combining conditions - they compose with | (any fires) and & (all fire)
termination = (
    MaxMessageTermination(max_messages=20)
    | TextMentionTermination("TASK_COMPLETE")
    | TokenUsageTermination(max_total_token=10000)
)

team = RoundRobinGroupChat(
    participants=[agent1, agent2],
    termination_condition=termination,
)
```
Streaming responses

```python
import asyncio


async def stream_team():
    # researcher and writer: AssistantAgent instances as in the earlier example
    team = RoundRobinGroupChat(
        participants=[researcher, writer],
        termination_condition=MaxMessageTermination(6),
    )

    # Streaming - messages arrive as they are produced
    async for message in team.run_stream(task="Write about AI"):
        if hasattr(message, 'content'):
            print(f"\n💬 [{message.source}]:")
            print(message.content)
        elif hasattr(message, 'messages'):
            # TaskResult - the stream is finished
            print(f"\n✅ Task complete! ({len(message.messages)} messages)")


asyncio.run(stream_team())
```
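The consumption pattern above is an ordinary async-generator loop: the intermediate items are messages, and the final item aggregates them. A framework-free sketch of that contract (toy Message/TaskResult types standing in for AutoGen's, no LLM calls):

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class Message:
    source: str
    content: str


@dataclass
class TaskResult:  # stands in for AutoGen's final result object
    messages: list = field(default_factory=list)


async def run_stream(task: str):
    """Yield messages as they are produced, then the aggregate result."""
    produced = []
    for source in ("researcher", "writer"):
        await asyncio.sleep(0)  # simulate waiting on a model call
        msg = Message(source=source, content=f"{source} on: {task}")
        produced.append(msg)
        yield msg
    yield TaskResult(messages=produced)


async def consume() -> int:
    count = 0
    async for item in run_stream("AI"):
        if isinstance(item, Message):
            count += 1
            print(f"[{item.source}]: {item.content}")
        else:
            print(f"Task complete! ({len(item.messages)} messages)")
    return count


asyncio.run(consume())
```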
Different LLM models

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat

# OpenAI
from autogen_ext.models.openai import OpenAIChatCompletionClient

openai_client = OpenAIChatCompletionClient(model="gpt-4o")

# Azure OpenAI
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

azure_client = AzureOpenAIChatCompletionClient(
    model="gpt-4o",
    azure_endpoint="https://your-resource.openai.azure.com",
    api_version="2024-02-15-preview",
)

# Anthropic (Claude)
from autogen_ext.models.anthropic import AnthropicChatCompletionClient

anthropic_client = AnthropicChatCompletionClient(model="claude-sonnet-4-20250514")

# You can mix models within a single team!
gpt_agent = AssistantAgent(name="GPT", model_client=openai_client)
claude_agent = AssistantAgent(name="Claude", model_client=anthropic_client)
team = RoundRobinGroupChat(participants=[gpt_agent, claude_agent])
```
Comparison with other frameworks
AutoGen is the right choice for:
- Enterprise projects on the Microsoft stack (Azure)
- Teams of agents that debate with each other (GroupChat)
- Semantic Kernel integration
- Projects that need Microsoft support