# Installation
## For Users

```shell
uv add "ace-framework[all]"            # All optional features
uv add "ace-framework[instructor]"     # Structured outputs (Instructor)
uv add "ace-framework[langchain]"      # LangChain integration
uv add "ace-framework[browser-use]"    # Browser automation
uv add "ace-framework[claude-code]"    # Claude Code CLI integration
uv add "ace-framework[claude-sdk]"     # Anthropic SDK integration
uv add "ace-framework[observability]"  # Opik monitoring + cost tracking
uv add "ace-framework[deduplication]"  # Skill deduplication (embeddings)
uv add "ace-framework[transformers]"   # Local model support
```
## For Contributors
## Requirements
- Python 3.12
- An API key for your LLM provider
## Configure Your LLM

The recommended way to set up your API keys and model selection is the interactive `ace setup` wizard.
This interactive wizard validates your API key and model, then saves config to ace.toml (model names, safe to commit) and .env (API keys, gitignored). See Setup for full details.
### Manual alternative
If you prefer not to use the wizard, set environment variables directly:
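The exact variable name depends on your LLM provider; as an example (`OPENAI_API_KEY` is a provider-standard name, not something this page specifies):

```shell
# Replace with the key variable your provider expects
export OPENAI_API_KEY="sk-your-key-here"
```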
Or create a .env file (add to .gitignore):
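A minimal `.env` sketch, again assuming an OpenAI-style key variable:

```
# .env — keep this file out of version control
OPENAI_API_KEY=sk-your-key-here
```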
## Verify Installation

```python
from ace import ACELiteLLM

# Uses ace.toml + .env from `ace setup`
agent = ACELiteLLM.from_setup()
print(agent.ask("Hello!"))
```
Or, without running `ace setup`:
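As a sketch only (the exact constructor signature is an assumption; check the API reference), you can pass a model name directly instead of relying on `ace.toml` and `.env`:

```python
from ace import ACELiteLLM

# Hypothetical: pass a LiteLLM-style model string directly,
# assuming the matching provider API key is already set in your environment
agent = ACELiteLLM(model="gpt-4o-mini")
print(agent.ask("Hello!"))
```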
## Set Up Coding Agent Skills (Optional)
If you use Claude Code, install the Kayba pipeline skill:
This installs the evaluation pipeline skill to .claude/skills/ and prints CLI instructions. See Hosted API for details.
## What to Read Next
- Setup — configure models and API keys
- Quick Start — build your first self-learning agent
- How ACE Works — understand the architecture