`AgentSettings` gives you a structured, serializable way to define an agent's model, tools, and optional subsystems like the condenser. Use it when you want to store agent configuration in JSON, send it over an API, or rebuild agents from validated settings later.
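Since `AgentSettings` is a Pydantic model, the store-and-rebuild pattern is the standard Pydantic roundtrip. A minimal sketch of that pattern, using a toy stand-in model (hypothetical fields, not the real `AgentSettings` schema):

```python
from pydantic import BaseModel, ValidationError

# Toy stand-in for AgentSettings; field names here are illustrative only.
class ToyAgentSettings(BaseModel):
    model: str
    tools: list[str]
    condenser_enabled: bool = True

settings = ToyAgentSettings(
    model="anthropic/claude-sonnet-4-5-20250929",
    tools=["terminal", "file_editor"],
)

# Serialize to a JSON-safe dict (store it on disk, send it over an API, ...).
payload = settings.model_dump(mode="json")

# Rebuild a validated settings object from the payload later.
restored = ToyAgentSettings.model_validate(payload)
assert restored == settings

# Validation rejects malformed payloads instead of silently accepting them.
raised = False
try:
    ToyAgentSettings.model_validate({"model": "x", "tools": "oops"})
except ValidationError:
    raised = True
assert raised
```

The same `model_dump(mode="json")` / `model_validate(...)` calls appear in the full example below against the real class.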
Once validated, create a working agent directly from the settings object.
```python
agent = settings.create_agent()
```
You can then pass that agent into a `Conversation`, or derive another agent by changing the settings payload. The full example below shows how removing `FileEditorTool` and disabling the condenser produces a different agent configuration without rewriting the rest of the setup.
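The payload-editing idea can be sketched with a toy Pydantic stand-in (hypothetical fields, not the real `AgentSettings` schema): mutate the serialized dict, then revalidate it into a fresh settings object.

```python
from pydantic import BaseModel

# Toy stand-in for AgentSettings; field names here are illustrative only.
class ToySettings(BaseModel):
    model: str
    tools: list[str]
    condenser_enabled: bool = True

base = ToySettings(
    model="anthropic/claude-sonnet-4-5-20250929",
    tools=["terminal", "file_editor"],
)

# Edit the serialized payload rather than the live object...
payload = base.model_dump(mode="json")
payload["tools"].remove("file_editor")
payload["condenser_enabled"] = False

# ...then revalidate to get a derived, still-validated settings object.
derived = ToySettings.model_validate(payload)
assert derived.tools == ["terminal"]
```

Editing the payload keeps every change inside the validated schema, which matters when the payload comes from an API client rather than your own code.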
"""Create, serialize, and deserialize AgentSettings, then build a working agent.Demonstrates:1. Configuring an agent entirely through AgentSettings (LLM, tools, condenser).2. Serializing settings to JSON and restoring them.3. Building an Agent from settings via ``create_agent()``.4. Running a short conversation to prove the settings take effect.5. Changing the tool list and showing the agent's capabilities change."""import jsonimport osfrom pydantic import SecretStrfrom openhands.sdk import LLM, AgentSettings, Conversation, Toolfrom openhands.sdk.settings import CondenserSettingsfrom openhands.tools.file_editor import FileEditorToolfrom openhands.tools.terminal import TerminalTool# ── 1. Build settings ────────────────────────────────────────────────────api_key = os.getenv("LLM_API_KEY")assert api_key is not None, "LLM_API_KEY environment variable is not set."settings = AgentSettings( llm=LLM( model=os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929"), api_key=SecretStr(api_key), base_url=os.getenv("LLM_BASE_URL"), ), tools=[ Tool(name=TerminalTool.name), Tool(name=FileEditorTool.name), ], condenser=CondenserSettings(enabled=True, max_size=50),)# ── 2. Serialize → JSON → deserialize ────────────────────────────────────payload = settings.model_dump(mode="json")print("Serialized settings (JSON):")print(json.dumps(payload, indent=2, default=str)[:800], "…")print()restored = AgentSettings.model_validate(payload)assert restored.condenser.enabled is Trueassert restored.condenser.max_size == 50assert len(restored.tools) == 2print("✓ Roundtrip deserialization successful — all fields preserved")print()# ── 3. 
Create agent from settings and run a task ─────────────────────────agent = settings.create_agent()print(f"Agent created: llm.model={agent.llm.model}")print(f" tools={[t.name for t in agent.tools]}")print(f" condenser={type(agent.condenser).__name__}")print()cwd = os.getcwd()conversation = Conversation(agent=agent, workspace=cwd)conversation.send_message( "Create a file called hello_settings.txt containing " "'Agent settings work!' then confirm the file exists with ls.")conversation.run()# Verify the agent actually wrote the fileassert os.path.exists(os.path.join(cwd, "hello_settings.txt")), ( "Agent should have created hello_settings.txt")print("✓ Agent created hello_settings.txt — settings drove real behavior")print()# ── 4. Different settings → different behavior ───────────────────────────# Now create settings with ONLY the terminal tool and condenser disabled.terminal_only_settings = AgentSettings( llm=settings.llm, tools=[Tool(name=TerminalTool.name)], condenser=CondenserSettings(enabled=False),)terminal_agent = terminal_only_settings.create_agent()print(f"Terminal-only agent tools: {[t.name for t in terminal_agent.tools]}")assert len(terminal_agent.tools) == 1assert terminal_agent.condenser is None # condenser disabled in these settingsprint("✓ Different settings produce different agent configuration")print()# ── Cleanup ──────────────────────────────────────────────────────────────os.remove(os.path.join(cwd, "hello_settings.txt"))# Report costcost = conversation.conversation_stats.get_combined_metrics().accumulated_costprint(f"\nEXAMPLE_COST: {cost}")
You can run the example code as-is.
The model name should follow the LiteLLM convention `provider/model_name` (e.g., `anthropic/claude-sonnet-4-5-20250929`, `openai/gpt-4o`).
The `LLM_API_KEY` should be the API key for your chosen provider.
ChatGPT Plus/Pro subscribers: You can use `LLM.subscription_login()` to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.
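Putting the notes above together, a typical environment setup before running the script might look like the following (all values are placeholders; only `LLM_API_KEY` is required):

```shell
# Placeholder values: substitute your real key and preferred model.
export LLM_API_KEY="sk-your-key-here"
export LLM_MODEL="anthropic/claude-sonnet-4-5-20250929"  # optional; this is the script's default
export LLM_BASE_URL="https://llm-gateway.example.com"    # optional; for proxies or self-hosted endpoints
```

Then run the script with `python <your-script-name>.py` from the directory where you saved it; the agent writes `hello_settings.txt` into the current working directory.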