1. Set Up Your First Tracer
Problem: You need to integrate HoneyHive tracing into your LLM application quickly to start monitoring calls and performance.
Solution: Initialize a HoneyHive tracer with minimal configuration and verify it’s working in under 5 minutes.
This guide walks you through setting up your first tracer, making a traced LLM call, and verifying the trace appears in your HoneyHive dashboard.
1.1. Prerequisites
Python 3.11+ installed
HoneyHive API key (get one at https://app.honeyhive.ai)
A HoneyHive project created (or we’ll create one for you)
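If you want to confirm the Python requirement before going further, a two-line check is enough (a minimal sketch, nothing HoneyHive-specific):
import sys

# The SDK targets Python 3.11+, per the prerequisites above
assert sys.version_info >= (3, 11), f"Python 3.11+ required, found {sys.version.split()[0]}"
print("Python version OK")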
1.2. Installation
Install the HoneyHive SDK:
pip install honeyhive
For LLM provider integrations, install with the provider extra:
# For OpenAI
pip install honeyhive[openinference-openai]
# For Anthropic
pip install honeyhive[openinference-anthropic]
# For multiple providers
pip install honeyhive[openinference-openai,openinference-anthropic]
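To confirm the install worked, check that the two imports used throughout this guide resolve (a quick sketch; if either import fails, revisit the pip command above):
# Both of these imports are used in the steps below
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

print("HoneyHive SDK and OpenAI instrumentor installed")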
1.3. Step 1: Set Up Environment Variables
Create a .env file in your project root:
# HoneyHive configuration
HH_API_KEY=your-honeyhive-api-key
HH_PROJECT=my-first-project
HH_SOURCE=development
# Your LLM provider API key
OPENAI_API_KEY=your-openai-api-key
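The scripts below load this file with python-dotenv. If you want to fail fast on a missing value, a small pre-flight check works (a sketch, not part of the SDK; it assumes the variable names shown above):
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

# Names match the .env file above
required = ["HH_API_KEY", "HH_PROJECT", "OPENAI_API_KEY"]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set")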
1.4. Step 2: Initialize Your First Tracer
Create a simple Python script to initialize the tracer. The script loads your .env file with python-dotenv, so install it first if you haven't (pip install python-dotenv):
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor
from dotenv import load_dotenv
import openai

# Load HH_API_KEY, HH_PROJECT, and HH_SOURCE from the .env file created in Step 1
load_dotenv()

# Step 1: Initialize HoneyHive tracer (loads config from environment)
tracer = HoneyHiveTracer.init(
    project="my-first-project",  # Or use HH_PROJECT env var
    source="development"         # Or use HH_SOURCE env var
)  # API key loaded from HH_API_KEY
# Step 2: Initialize instrumentor with tracer provider
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
print("✅ Tracer initialized successfully!")
What’s happening here:
HoneyHiveTracer.init() creates a tracer instance configured for your project
OpenAIInstrumentor automatically captures OpenAI SDK calls
instrumentor.instrument(tracer_provider=tracer.provider) connects the instrumentor to HoneyHive
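If you later need to switch tracing off (for example in unit tests), OpenInference instrumentors implement the standard OpenTelemetry BaseInstrumentor interface, so a sketch continuing from the code above looks like this:
# Continuing from the initialization code above:
# uninstrument() is part of the standard BaseInstrumentor API and stops
# the automatic capture of OpenAI SDK calls.
instrumentor.uninstrument()

# Re-enable by instrumenting again with the same tracer provider
instrumentor.instrument(tracer_provider=tracer.provider)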
1.5. Step 3: Make Your First Traced Call
Add a simple LLM call to test tracing:
# Make a traced OpenAI call
client = openai.OpenAI() # Uses OPENAI_API_KEY from environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! This is my first traced call."}
    ]
)
print(f"Response: {response.choices[0].message.content}")
print("✅ Trace sent to HoneyHive!")
Automatic tracing: Because the instrumentor is active, this call is automatically traced without any decorators or manual span creation.
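The instrumentor only covers OpenAI SDK calls. If you also want a span around your own application logic, you can use the plain OpenTelemetry API against the same provider. A sketch continuing from the code above; it assumes tracer.provider is a standard OpenTelemetry TracerProvider, and how such custom spans render in the dashboard depends on HoneyHive:
# Continuing from the code above: create a manual span around app logic.
# get_tracer() and start_as_current_span() are standard OpenTelemetry APIs.
otel_tracer = tracer.provider.get_tracer("first-tracer-demo")

with otel_tracer.start_as_current_span("prepare-and-call-llm") as span:
    span.set_attribute("example.user_id", "demo-user")  # hypothetical attribute
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello again!"}]
    )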
1.6. Step 4: Verify in HoneyHive Dashboard
Go to https://app.honeyhive.ai
Navigate to your project (my-first-project)
Click “Traces” in the left sidebar
You should see your trace with:
- Model: gpt-3.5-turbo
- Input message: “Hello! This is my first traced call.”
- Response from the model
- Timing information
- Token counts
Tip
Traces typically appear within 1-2 seconds. If you don’t see your trace:
Check that HH_API_KEY is set correctly
Verify your project name matches
Look for error messages in your Python output
1.7. Complete Example
Here’s the complete working script:
"""
first_tracer.py - Your first HoneyHive traced application
Run: python first_tracer.py
"""
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor
import openai
import os
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
def main():
    # Initialize tracer
    tracer = HoneyHiveTracer.init(
        project="my-first-project",
        source="development"
    )

    # Initialize instrumentor
    instrumentor = OpenAIInstrumentor()
    instrumentor.instrument(tracer_provider=tracer.provider)
    print("✅ Tracer initialized!")

    # Make traced call
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello! This is my first traced call."}
        ]
    )

    print(f"\n📝 Response: {response.choices[0].message.content}")
    print("\n✅ Trace sent to HoneyHive!")
    print("👉 View at: https://app.honeyhive.ai")

if __name__ == "__main__":
    main()
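Short scripts can exit before all spans have been exported. If the script finishes but no trace arrives, flushing the provider at the end of main() usually helps (a sketch, assuming tracer.provider is an OpenTelemetry SDK TracerProvider):
# Block until pending spans have been exported before the process exits.
# force_flush() is part of the OpenTelemetry SDK TracerProvider API.
tracer.provider.force_flush()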
1.8. Running the Example
# Install dependencies
pip install honeyhive[openinference-openai] python-dotenv
# Run the script
python first_tracer.py
Expected output:
✅ Tracer initialized!
📝 Response: Hello! I'm happy to help you with your first traced call...
✅ Trace sent to HoneyHive!
👉 View at: https://app.honeyhive.ai
1.9. Troubleshooting
Tracer initialization fails:
Verify HH_API_KEY is set correctly (check the .env file)
Ensure you have network connectivity to HoneyHive servers
Check API key is valid at https://app.honeyhive.ai/settings/api-keys
No traces appearing:
Wait 2-3 seconds for traces to process
Check project name matches in code and dashboard
Look for error messages in Python console
Verify instrumentor was initialized correctly
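If spans still don't show up after these checks, you can confirm locally whether any spans are being produced at all by attaching a console exporter (a debugging sketch; it assumes tracer.provider is a standard OpenTelemetry SDK TracerProvider that accepts extra span processors):
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print every finished span to stdout in addition to exporting it.
# No output here means no spans are being created (instrumentor problem);
# spans printing but not reaching the dashboard points to API key or network.
tracer.provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))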
Import errors:
# Install the correct extras
pip install honeyhive[openinference-openai]
# Or install instrumentor directly
pip install honeyhive openinference-instrumentation-openai openai
1.10. Next Steps
Now that your tracer is working:
Add LLM Tracing in 5 Minutes - Add tracing to existing applications
Enable Span Enrichment - Add custom metadata to traces
Integrate with OpenAI - Deep dive into OpenAI integration
Advanced Configuration Guide - Advanced configuration options
Completion time: ~5 minutes from installation to first trace ✨