# Classifiers
All classifiers satisfy the `Classifier` protocol:

```python
class Classifier(Protocol):
    def classify(self, trajectory: Trajectory, task: str) -> FailureType: ...
```
`classify()` is always `def` (not `async def`). triage runs it via `anyio.to_thread.run_sync()` to keep the event loop unblocked.
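The dispatch pattern can be sketched like this. To stay self-contained, the sketch uses the stdlib `asyncio.to_thread` as a stand-in for `anyio.to_thread.run_sync`, and `SlowClassifier` is a hypothetical classifier, not part of the library:

```python
import asyncio
import time

class SlowClassifier:
    """Hypothetical classifier whose classify() blocks the calling thread."""
    def classify(self, trajectory, task):
        time.sleep(0.05)  # simulate blocking work (regex scans, I/O, etc.)
        return "UNKNOWN"

async def run_classifier(clf, trajectory, task):
    # Dispatch the synchronous classify() to a worker thread so the
    # event loop stays free to serve other tasks in the meantime.
    return await asyncio.to_thread(clf.classify, trajectory, task)

result = asyncio.run(run_classifier(SlowClassifier(), [], "demo task"))
print(result)  # UNKNOWN
```

Because the protocol only requires a plain `def classify()`, classifiers never need to know about the event loop; the thread dispatch is the caller's concern.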
## RulesClassifier
Pattern-based, zero API calls, microseconds per call.
Rules (priority order):

1. `LOOP_DETECTED` — last 3 steps: same `tool_called` + identical canonical `tool_input`
2. `WRONG_TOOL_CALLED` — error matches `tool.{0,30}not found|no tool named`
3. `SCHEMA_MISMATCH` — error matches `validation error|json.*parse|jsondecodeerror`
4. `EXTERNAL_FAULT` — error contains `"429"`, `"500"`, `"502"`, or `"503"`
5. `CONSTRAINT_IGNORED` — any step's `llm_output` contains a constraint string
6. `UNKNOWN` — no rule matched
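To make the first rule concrete, here is a sketch of loop detection over a list of step dicts. The step shape (`tool_called` and `tool_input` keys) is an assumption for illustration, not the library's actual trajectory type:

```python
import json

def is_loop(steps, window=3):
    """True if the last `window` steps call the same tool with
    identical canonicalized inputs."""
    if len(steps) < window:
        return False
    tail = steps[-window:]
    # Canonicalize inputs (sorted keys) so dict key order
    # can't defeat the comparison.
    canon = [(s["tool_called"], json.dumps(s["tool_input"], sort_keys=True))
             for s in tail]
    return len(set(canon)) == 1

steps = [{"tool_called": "search", "tool_input": {"q": "foo", "page": 1}}
         for _ in range(3)]
print(is_loop(steps))  # True: three identical calls in a row

steps[-1]["tool_input"] = {"page": 2, "q": "foo"}
print(is_loop(steps))  # False: the last input differs
```

Canonicalization is the important detail: `{"q": "foo", "page": 1}` and `{"page": 1, "q": "foo"}` must compare equal, which sorting the keys before serialization guarantees.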
## LLMClassifier
Semantic classifier. Calls an LLM to read the trajectory and name the failure type.
```python
# Anthropic (default)
clf = LLMClassifier()
clf = LLMClassifier(api_key="sk-ant-...", model="claude-haiku-4-5-20251001")

# OpenAI-compatible
clf = LLMClassifier(base_url="http://localhost:11434/v1", model="llama3.2")
clf = LLMClassifier(base_url="https://api.groq.com/openai/v1",
                    api_key="gsk_...", model="llama-3.1-8b-instant")
```
Parameters:

| Parameter | Default | Description |
|---|---|---|
| `api_key` | `None` → `TRIAGE_LLM_API_KEY` env var | API key |
| `model` | `claude-haiku-4-5-20251001` (Anthropic) or `llama3.2` (OpenAI-compat) | Model name |
| `max_trajectory_steps` | `10` | Steps included in the prompt |
| `base_url` | `None` → `TRIAGE_LLM_BASE_URL` env var | If set, uses OpenAI-compatible client |
On any error (network, auth, unparseable response), the classifier falls back silently to `UNKNOWN`.
Install:

```shell
pip install "triage-agent[anthropic]"  # Anthropic backend
pip install openai                     # OpenAI-compatible backend
```
## HybridClassifier
Runs `RulesClassifier` first; calls the LLM only when rules return `UNKNOWN`.
Recommended for production: free for structural failures, semantic fallback for ambiguous ones.
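The dispatch logic amounts to a few lines. The sketch below uses stub classifiers in place of the real `RulesClassifier` and `LLMClassifier`, and all class names here are illustrative:

```python
class HybridSketch:
    """Illustrative: try rules first, pay for the LLM only on UNKNOWN."""
    def __init__(self, rules, llm):
        self.rules = rules
        self.llm = llm
        self.llm_calls = 0  # track how often we fell through

    def classify(self, trajectory, task):
        result = self.rules.classify(trajectory, task)
        if result != "UNKNOWN":
            return result  # structural match: free, no API call
        self.llm_calls += 1
        return self.llm.classify(trajectory, task)

class StubRules:
    def classify(self, trajectory, task):
        return "LOOP_DETECTED" if trajectory else "UNKNOWN"

class StubLLM:
    def classify(self, trajectory, task):
        return "CONSTRAINT_IGNORED"

clf = HybridSketch(StubRules(), StubLLM())
print(clf.classify(["step"], "t"))  # LOOP_DETECTED (rules hit, LLM skipped)
print(clf.classify([], "t"))        # CONSTRAINT_IGNORED (fell through to LLM)
print(clf.llm_calls)                # 1
```

The cost profile follows directly: trajectories with structural failures never touch the LLM, so API spend scales with the ambiguous cases only.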
## Custom classifiers
Any class with a synchronous `classify()` method satisfies the protocol:
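For example, a classifier that flags over-budget runs might look like this. This is an illustrative sketch: it assumes each step carries a `duration_s` field and that failure types behave like plain strings, neither of which is guaranteed by the library:

```python
class TimeoutClassifier:
    """Hypothetical: flags trajectories whose total wall time
    exceeds a budget, attributing the overrun to external slowness."""
    def __init__(self, budget_s: float = 60.0):
        self.budget_s = budget_s

    def classify(self, trajectory, task):
        # Assumed step shape: each step is a dict with a duration_s field.
        total = sum(step.get("duration_s", 0.0) for step in trajectory)
        return "EXTERNAL_FAULT" if total > self.budget_s else "UNKNOWN"

clf = TimeoutClassifier(budget_s=1.0)
print(clf.classify([{"duration_s": 0.4}, {"duration_s": 0.9}], "t"))  # EXTERNAL_FAULT
print(clf.classify([{"duration_s": 0.2}], "t"))                       # UNKNOWN
```

No base class or registration is needed: because `Classifier` is a `Protocol`, any object with a matching synchronous `classify()` signature is accepted.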