ACE Framework Setup Guide¶
Quick setup and configuration guide for ACE Framework.
Requirements¶
- Python 3.12
- API key for your LLM provider (OpenAI, Anthropic, Google, etc.)
Check Python version:
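A minimal sketch (on some systems the command is python rather than python3):

```shell
# Print the interpreter version (should be 3.12 or newer)
python3 --version

# Report whether the interpreter meets the 3.12 requirement
python3 -c 'import sys; print("OK" if sys.version_info >= (3, 12) else "too old")'
```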
Installation¶
For Users¶
# Basic installation
pip install ace-framework
# With optional features
pip install ace-framework[observability] # Opik monitoring + cost tracking
pip install ace-framework[browser-use] # Browser automation
pip install ace-framework[langchain] # LangChain integration
pip install ace-framework[all] # All features
For Contributors¶
git clone https://github.com/kayba-ai/agentic-context-engine
cd agentic-context-engine
uv sync # Installs everything automatically (10-100x faster than pip)
API Key Setup¶
Option 1: Environment Variable (Recommended)¶
# Set in your shell
export OPENAI_API_KEY="sk-..."
# Or create .env file
echo "OPENAI_API_KEY=sk-..." > .env
Load in Python:
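A minimal sketch of reading the key at runtime. If you used export, the key is already in the environment; if you used a .env file, load it first (for example with python-dotenv's load_dotenv()) before reading the variable:

```python
import os

# Returns the key string if set, or None if it is missing
api_key = os.getenv("OPENAI_API_KEY")
print("OPENAI_API_KEY set:", api_key is not None)
```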
Option 2: Direct in Code¶
from ace import LiteLLMClient
client = LiteLLMClient(
model="gpt-4o-mini",
api_key="your-key-here" # Not recommended for production
)
Provider Examples¶
OpenAI¶
- Get API key: platform.openai.com
- Set key: export OPENAI_API_KEY="sk-..."
- Use it:
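With the key exported, the client picks it up from the environment; a minimal sketch (gpt-4o-mini is just an example model name):

```python
from ace import LiteLLMClient

# Reads OPENAI_API_KEY from the environment automatically
client = LiteLLMClient(model="gpt-4o-mini")
```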
Anthropic Claude¶
- Get API key: console.anthropic.com
- Set key: export ANTHROPIC_API_KEY="sk-ant-..."
- Use it:
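A minimal sketch; the model id below is an example, substitute any Claude model that LiteLLM supports:

```python
from ace import LiteLLMClient

# Reads ANTHROPIC_API_KEY from the environment automatically
client = LiteLLMClient(model="claude-3-5-sonnet-20240620")
```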
Google Gemini¶
- Get API key: makersuite.google.com
- Set key: export GOOGLE_API_KEY="AIza..."
- Use it:
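A minimal sketch, assuming LiteLLM's "gemini/" model-name prefix convention (the specific model is an example):

```python
from ace import LiteLLMClient

# Reads GOOGLE_API_KEY from the environment; LiteLLM routes
# "gemini/"-prefixed model names to the Gemini API
client = LiteLLMClient(model="gemini/gemini-1.5-flash")
```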
Local Models (Ollama)¶
- Install Ollama: ollama.ai
- Pull model: ollama pull llama2
- Use it:
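A minimal sketch, assuming LiteLLM's "ollama/" prefix for local models; no API key is needed:

```python
from ace import LiteLLMClient

# "ollama/"-prefixed names route to a local Ollama server
# (default http://localhost:11434)
client = LiteLLMClient(model="ollama/llama2")
```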
Supported Providers: 100+ via LiteLLM (AWS Bedrock, Azure, Cohere, Hugging Face, etc.)
Advanced Configuration¶
Custom LLM Parameters¶
from ace import LiteLLMClient
client = LiteLLMClient(
model="gpt-4o-mini",
temperature=0.7,
max_tokens=2048,
timeout=60 # seconds
)
Production Monitoring (Opik)¶
Opik automatically tracks:
- Token usage per LLM call
- Cost per operation
- Agent/Reflector/SkillManager performance
- Skillbook evolution over time
View dashboard: comet.com/opik
Skillbook Storage¶
from ace import Skillbook
# Save skillbook
skillbook.save_to_file("my_skillbook.json")
# Load skillbook
skillbook = Skillbook.load_from_file("my_skillbook.json")
# For production: Use database storage
# PostgreSQL, SQLite, or vector stores supported
Checkpoint Saving¶
from ace import OfflineACE
adapter = OfflineACE(
skillbook=skillbook,
agent=agent,
reflector=reflector,
skill_manager=skill_manager
)
# Save skillbook every 10 samples during training
results = adapter.run(
samples,
environment,
checkpoint_interval=10,
checkpoint_dir="./checkpoints"
)
Troubleshooting¶
Import Errors¶
# Upgrade to latest version
pip install --upgrade ace-framework
# Check installation
pip show ace-framework
API Key Not Working¶
# Verify key is set
echo $OPENAI_API_KEY
# Test different model
from ace import LiteLLMClient
client = LiteLLMClient(model="gpt-3.5-turbo") # Cheaper for testing
Rate Limits¶
import time
from ace import LiteLLMClient

# Add delays between calls
time.sleep(1) # 1 second between calls
# Or use a cheaper/faster model
client = LiteLLMClient(model="gpt-3.5-turbo")
JSON Parse Failures¶
# Increase max_tokens for SkillManager/Reflector
from ace import LiteLLMClient, Reflector, SkillManager
llm = LiteLLMClient(model="gpt-4o-mini", max_tokens=2048) # Higher limit
skill_manager = SkillManager(llm)
reflector = Reflector(llm)
Need More Help?¶
- GitHub Issues: github.com/kayba-ai/agentic-context-engine/issues
- Discord Community: discord.gg/mqCqH7sTyK
- Documentation: Complete Guide, Quick Start, Integration Guide
Next Steps: Check out the Quick Start Guide to build your first self-learning agent!