HoneyHive Python SDK Documentation
==================================

**LLM Observability and Evaluation Platform**

The HoneyHive Python SDK provides comprehensive observability, tracing, and evaluation capabilities for LLM applications, built on OpenTelemetry and a "Bring Your Own Instrumentor" architecture.

.. note::

   **Project Configuration**: The ``project`` parameter is required when initializing the tracer. It identifies which HoneyHive project your traces belong to and must match your project name in the HoneyHive dashboard.

🚀 **Quick Start**

New to HoneyHive? Start with :doc:`tutorials/01-setup-first-tracer`, or follow the Getting Started Path below.

🔄 **Key Features**

**Bring Your Own Instrumentor (BYOI) Architecture**

Avoid dependency conflicts by choosing exactly which LLM libraries to instrument. Supported instrumentor providers:

- OpenInference
- Traceloop
- Custom instrumentors you build yourself
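The BYOI principle (the core SDK never hard-depends on a provider library) can be sketched in plain Python as an optional-dependency check. This is an illustrative pattern, not SDK code; ``load_openai_instrumentor`` is a hypothetical helper:

```python
def load_openai_instrumentor():
    """Return an OpenAI instrumentor when the optional package is installed, else None."""
    try:
        # The provider-specific instrumentor lives in a separate, opt-in package.
        from openinference.instrumentation.openai import OpenAIInstrumentor
    except ImportError:
        # Core SDK keeps working; OpenAI calls simply run untraced.
        return None
    return OpenAIInstrumentor()

instrumentor = load_openai_instrumentor()
print("tracing enabled" if instrumentor else "OpenInference not installed; OpenAI calls run untraced")
```

Because the check happens at import time of the optional package only, installing an extra such as ``honeyhive[openinference-openai]`` later enables tracing without touching application code.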

**Multi-Instance Tracer Support**

Create independent tracer instances for different environments, workflows, or services within the same application.

**Zero Code Changes for LLM Tracing**

Add comprehensive observability to existing LLM provider code without modifying it:

- OpenAI
- Anthropic
- Google AI

**Production-Ready Evaluation**

Built-in and custom evaluators with threading support for high-performance LLM evaluation workflows.
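The threaded-evaluation idea can be sketched in plain Python. This is a conceptual illustration, not the SDK's evaluator API; ``LengthEvaluator`` and ``run_evaluations`` are hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor

class LengthEvaluator:
    """Toy evaluator: scores an LLM output by whether it fits a length budget."""

    def __init__(self, max_chars: int = 200):
        self.max_chars = max_chars

    def score(self, output: str) -> float:
        # 1.0 when within budget, scaled down as the output overruns it.
        return min(1.0, self.max_chars / max(len(output), 1))

def run_evaluations(evaluator, outputs, max_workers: int = 4):
    # Fan scoring out across threads to keep throughput high on large batches.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(evaluator.score, outputs))

scores = run_evaluations(LengthEvaluator(max_chars=10), ["short", "a" * 40])
print(scores)  # [1.0, 0.25]
```

The SDK's evaluators (such as ``QualityScoreEvaluator`` in the Quick Example below) follow the same shape: a scoring callable applied across many outputs concurrently.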

**OpenTelemetry Native**

Built on industry-standard OpenTelemetry for maximum compatibility and future-proofing.

📖 **Getting Started Path**

**👋 New to HoneyHive?**

1. :doc:`tutorials/01-setup-first-tracer` - Set up your first tracer in minutes
2. :doc:`tutorials/02-add-llm-tracing-5min` - Add LLM tracing to existing apps
3. :doc:`tutorials/03-enable-span-enrichment` - Enrich traces with metadata
4. :doc:`tutorials/04-configure-multi-instance` - Configure multiple tracers

**🔧 Solving Specific Problems?**

- :doc:`how-to/index` - Fix common issues (see the Troubleshooting section)
- :doc:`development/index` - SDK testing practices
- :doc:`how-to/deployment/production` - Deploy to production
- :doc:`how-to/integrations/openai` - OpenAI integration patterns
- :doc:`how-to/evaluation/index` - Evaluation and analysis

**📚 Need Technical Details?**

- :doc:`reference/api/tracer` - HoneyHiveTracer API
- :doc:`reference/api/decorators` - @trace and @evaluate decorators
- :doc:`reference/configuration/environment-vars` - Environment variables
- :doc:`explanation/index` - Python & instrumentor compatibility

**🤔 Want to Understand the Design?**

- :doc:`explanation/architecture/byoi-design` - Why "Bring Your Own Instrumentor"
- :doc:`explanation/concepts/llm-observability` - LLM observability concepts
- :doc:`explanation/architecture/overview` - System architecture

🔗 **Main Documentation Sections**

.. toctree::
   :maxdepth: 1

   tutorials/index
   how-to/index
   reference/index
   explanation/index
   changelog
   development/index

📦 **Installation**

.. code-block:: bash

   # Core SDK only (minimal dependencies)
   pip install honeyhive

   # With LLM provider support (recommended); quote extras so shells like zsh
   # do not treat the brackets as glob patterns
   pip install "honeyhive[openinference-openai]"     # OpenAI via OpenInference
   pip install "honeyhive[openinference-anthropic]"  # Anthropic via OpenInference
   pip install "honeyhive[all-openinference]"        # All OpenInference integrations

🔧 **Quick Example**

**Basic tracing:**

.. code-block:: python

   from honeyhive import HoneyHiveTracer, trace
   from openinference.instrumentation.openai import OpenAIInstrumentor
   import openai

   # Initialize with the BYOI architecture
   tracer = HoneyHiveTracer.init(
       api_key="your-api-key",
       project="your-project",
   )

   # Initialize the instrumentor separately (the correct BYOI pattern)
   instrumentor = OpenAIInstrumentor()
   instrumentor.instrument(tracer_provider=tracer.provider)

   # Use @trace for custom functions
   @trace(tracer=tracer)
   def analyze_sentiment(text: str) -> str:
       # OpenAI calls are traced automatically via the instrumentor
       client = openai.OpenAI()
       response = client.chat.completions.create(
           model="gpt-3.5-turbo",
           messages=[{"role": "user", "content": f"Analyze sentiment: {text}"}],
       )
       return response.choices[0].message.content

   # Both the function and the OpenAI call are traced!
   result = analyze_sentiment("I love this new feature!")

**With automatic evaluation:**

.. code-block:: python

   from honeyhive import HoneyHiveTracer, trace, evaluate
   from honeyhive.models import EventType
   from honeyhive.evaluation import QualityScoreEvaluator
   from openinference.instrumentation.openai import OpenAIInstrumentor
   import openai

   tracer = HoneyHiveTracer.init(
       api_key="your-api-key",
       project="your-project",
   )

   # Initialize the instrumentor separately (the correct BYOI pattern)
   instrumentor = OpenAIInstrumentor()
   instrumentor.instrument(tracer_provider=tracer.provider)

   # Add automatic evaluation
   quality_evaluator = QualityScoreEvaluator(criteria=["relevance", "clarity"])

   @trace(tracer=tracer, event_type=EventType.model)
   @evaluate(evaluator=quality_evaluator)
   def handle_customer_query(query: str) -> str:
       client = openai.OpenAI()
       response = client.chat.completions.create(
           model="gpt-4",
           messages=[
               {"role": "system", "content": "You are a helpful customer service agent."},
               {"role": "user", "content": query},
           ],
       )
       return response.choices[0].message.content

   # Automatically traced AND evaluated for quality
   result = handle_customer_query("How do I reset my password?")