OpenTelemetry Configuration
This guide covers essential OpenTelemetry configuration topics for FusionReactor Cloud, including semantic conventions, resource attributes, service naming, and sampling strategies.
Semantic Conventions
Semantic conventions are standardized naming and structure rules for OpenTelemetry attributes. Following these conventions ensures consistency across your observability stack and compatibility with tools, dashboards, and other systems.
FusionReactor Cloud follows the official OpenTelemetry semantic conventions. This guide provides common examples and FusionReactor-specific mappings. For the complete specification and all available conventions, refer to the OpenTelemetry Semantic Conventions documentation.
Why semantic conventions matter
- Consistency: All services use the same attribute names
- Interoperability: Dashboards and queries work across different applications
- Searchability: Easy filtering and correlation in FusionReactor Cloud
- Best practices: Leverage community knowledge and tooling
Common semantic conventions
HTTP attributes
Use these standardized attributes for HTTP operations:
| Attribute | Description | Example |
|---|---|---|
| `http.method` | HTTP request method | `GET`, `POST`, `PUT` |
| `http.url` | Full request URL | `https://api.example.com/users` |
| `http.status_code` | HTTP response status | `200`, `404`, `500` |
| `http.route` | Matched route pattern | `/users/{id}` |
| `http.target` | Request target | `/users?page=1` |
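Auto-instrumentation normally records these for you; for manual instrumentation they are set as span attributes. A minimal Python sketch (the span name and values are illustrative):

```python
from opentelemetry import trace

tracer = trace.get_tracer("checkout-api")

# Attach standard HTTP attributes to a manually created span
with tracer.start_as_current_span("GET /users/{id}") as span:
    span.set_attribute("http.method", "GET")
    span.set_attribute("http.route", "/users/{id}")
    span.set_attribute("http.status_code", 200)
```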
Database attributes
| Attribute | Description | Example |
|---|---|---|
| `db.system` | Database type | `postgresql`, `mysql`, `mongodb` |
| `db.name` | Database name | `customers` |
| `db.statement` | Database query | `SELECT * FROM users WHERE id = ?` |
| `db.operation` | Operation type | `SELECT`, `INSERT`, `UPDATE` |
Messaging attributes
| Attribute | Description | Example |
|---|---|---|
| `messaging.system` | Messaging system | `rabbitmq`, `kafka`, `sqs` |
| `messaging.destination` | Queue or topic name | `orders-queue` |
| `messaging.operation` | Operation type | `send`, `receive`, `process` |
Custom attributes
For business-specific data, use a namespace prefix so your custom attributes cannot collide with standard or third-party attribute names.
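A minimal Python sketch (the `acme.*` attribute names are hypothetical — pick one prefix for your organization and use it everywhere):

```python
from opentelemetry import trace

tracer = trace.get_tracer("checkout-api")

with tracer.start_as_current_span("checkout") as span:
    # "acme" is a hypothetical company namespace
    span.set_attribute("acme.checkout.cart_value", 99.95)
    span.set_attribute("acme.customer.tier", "premium")
```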
Resource Attributes
Resource attributes describe the entity producing telemetry data (your service, container, host, etc.). These attributes apply to all telemetry from that resource.
Required resource attributes
service.name
The most important resource attribute. It identifies your service in FusionReactor Cloud.
Important mapping
In FusionReactor Cloud, `service.name` becomes the `job` label for querying (e.g. `job="checkout-api"`).
How to set it:
=== "Python"
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
resource = Resource.create({
SERVICE_NAME: "checkout-api"
})
=== "Java"
=== "Node.js"
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
const resource = new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'checkout-api',
});
=== ".NET"
using OpenTelemetry.Resources;
var resourceBuilder = ResourceBuilder.CreateDefault()
.AddService(serviceName: "checkout-api");
service.version
Track which version of your service is running so you can correlate errors and regressions with specific releases.
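A minimal Python sketch, following the same pattern used throughout this guide:

```python
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_VERSION

resource = Resource.create({
    SERVICE_NAME: "checkout-api",
    SERVICE_VERSION: "2.3.1",
})
```

In FusionReactor Cloud you can then filter on the corresponding label, e.g. `job="checkout-api" AND service_version="2.3.1"` (assuming the same dot-to-underscore label mapping shown for `deployment_environment` below).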
Recommended resource attributes
deployment.environment.name
Distinguish between environments (development, staging, production).
```python
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_VERSION
from opentelemetry.semconv.resource import ResourceAttributes

resource = Resource.create({
    SERVICE_NAME: "checkout-api",
    SERVICE_VERSION: "2.3.1",
    ResourceAttributes.DEPLOYMENT_ENVIRONMENT: "production"
})
```
Filter by environment
In FusionReactor Cloud: `job="checkout-api" AND deployment_environment="production"`
service.namespace
Group related services logically.
Use this to distinguish services with the same name across different teams or products.
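A minimal Python sketch (the `ecommerce` namespace is illustrative):

```python
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_NAMESPACE

resource = Resource.create({
    SERVICE_NAME: "checkout-api",
    SERVICE_NAMESPACE: "ecommerce",
})
```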
service.instance.id
Unique identifier for each instance of your service.
```python
import socket

from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_INSTANCE_ID

resource = Resource.create({
    SERVICE_NAME: "checkout-api",
    SERVICE_INSTANCE_ID: socket.gethostname()  # or container ID
})
```
Useful for identifying which specific instance handled a request.
Infrastructure resource attributes
OpenTelemetry SDKs often detect these automatically:
| Attribute | Description | Auto-detected |
|---|---|---|
| `host.name` | Hostname | ✅ |
| `host.id` | Host identifier | ✅ |
| `container.name` | Container name | ✅ (in Docker) |
| `container.id` | Container ID | ✅ (in Docker) |
| `k8s.namespace.name` | Kubernetes namespace | ✅ (in K8s) |
| `k8s.pod.name` | Pod name | ✅ (in K8s) |
| `cloud.provider` | Cloud provider | ✅ (AWS/GCP/Azure) |
| `cloud.region` | Cloud region | ✅ |
Service Naming Best Practices
Consistent service naming improves observability and team collaboration.
Naming guidelines
✅ Do:
* Use lowercase with hyphens: checkout-api, payment-service
* Be descriptive and specific: user-authentication-api
* Use consistent patterns across all services
* Keep names stable over time
❌ Don't:
* Include version numbers: ~~checkout-api-v2~~ (use service.version attribute)
* Include environment names: ~~checkout-api-prod~~ (use deployment.environment.name attribute)
* Use camelCase or snake_case: ~~CheckoutAPI~~, ~~checkout_api~~
* Make names too generic: ~~api~~, ~~service~~
Example naming structure
```yaml
# Good service naming
service.name: checkout-api
service.version: 2.3.1
service.namespace: ecommerce
deployment.environment.name: production
```
In FusionReactor Cloud, these attributes surface as labels you can query, e.g. `job="checkout-api" AND deployment_environment="production"` (see the mapping notes above).
Handling microservices
For microservice architectures with many services:
```text
# Frontend services
web-frontend
mobile-api
admin-dashboard

# Backend services
user-service
order-service
payment-service
inventory-service
notification-service

# Infrastructure services
auth-gateway
api-gateway
cache-service
```
Use service.namespace to group related services:
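Extending the naming-structure example above (the `platform` namespace is hypothetical):

```yaml
# Backend services share one namespace
service.name: order-service
service.namespace: ecommerce

# Infrastructure services get their own
service.name: api-gateway
service.namespace: platform
```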
Sampling Strategies
Sampling reduces telemetry volume and costs by collecting a subset of traces while maintaining observability.
When to use sampling
- High-traffic applications: Reduce data volume without losing visibility
- Cost management: Control telemetry data costs in FusionReactor Cloud
- Performance: Reduce overhead on applications and collectors
Head sampling
Head sampling makes the sampling decision at the start of a trace (when the root span is created).
Probabilistic sampling
Sample a percentage of all traces.
Collector configuration:
```yaml
processors:
  probabilistic_sampler:
    sampling_percentage: 10 # Keep 10% of traces

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler]
      exporters: [otlphttp]
```
SDK configuration (Python):
```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

sampler = TraceIdRatioBased(0.1)  # 10% sampling rate
tracer_provider = TracerProvider(sampler=sampler)
```
✅ Pros:

* Simple to implement
* Low overhead
* Deterministic (same trace ID always gets same decision)
❌ Cons:

* May drop important traces (errors, slow requests)
* No visibility into what was dropped
Rate limiting sampling
Cap trace volume at a fixed rate. In the Collector this is implemented as a `rate_limiting` policy of the `tail_sampling` processor, so the decision is made after traces complete:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: rate-limit
        type: rate_limiting
        rate_limiting:
          spans_per_second: 100 # Max 100 spans/sec
```
Tail sampling
Tail sampling makes the decision after the trace completes, allowing intelligent decisions based on trace content.
Tail sampling policies
Keep all errors and slow requests:
```yaml
processors:
  tail_sampling:
    decision_wait: 10s # Wait for trace to complete
    num_traces: 100000
    expected_new_traces_per_sec: 1000
    policies:
      # Always keep errors
      - name: error-traces
        type: status_code
        status_code:
          status_codes: [ERROR]
      # Keep slow traces (>2 seconds)
      - name: slow-traces
        type: latency
        latency:
          threshold_ms: 2000
      # Sample normal traces at 5%
      - name: normal-traces
        type: probabilistic
        probabilistic:
          sampling_percentage: 5
```
✅ Pros:

* Intelligent sampling (keep important traces)
* Always capture errors and slow requests
* Better visibility into issues
❌ Cons:

* Higher memory usage (must buffer traces)
* Adds latency (decision wait time)
* More complex configuration
Composite sampling strategy
Combine head and tail sampling:
```yaml
processors:
  # Head sampling: Reduce volume before tail sampling
  probabilistic_sampler:
    sampling_percentage: 50 # Pre-filter to 50%

  # Tail sampling: Intelligent decisions on remaining traces
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: errors-and-slow
        type: and
        and:
          and_sub_policy:
            - name: errors
              type: status_code
              status_code: {status_codes: [ERROR]}
            - name: slow
              type: latency
              latency: {threshold_ms: 1000}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, tail_sampling]
      exporters: [otlphttp]
```
Span metrics (RED metrics)
Generate metrics before sampling
Always generate span metrics before tail sampling to ensure accurate request rates, error rates, and latency distributions even with aggressive sampling.
```yaml
connectors:
  spanmetrics:
    dimensions:
      - name: http.method
      - name: http.status_code
      # service.name is a default spanmetrics dimension, so it is not repeated here
  # The forward connector (a core Collector component) links the two trace
  # pipelines below so metrics are generated from the unsampled stream
  forward:

processors:
  tail_sampling:
    # ... tail sampling config

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics, forward] # Generate metrics first
    traces/sampled:
      receivers: [forward]
      processors: [tail_sampling] # Then sample
      exporters: [otlphttp]
    metrics:
      receivers: [spanmetrics] # Receive generated metrics
      exporters: [otlphttp]
```
Recommended sampling strategies
Development/Testing

Keep all traces. Volume is low and complete data makes debugging easier, so sampling is typically unnecessary.

Low-traffic production (<1000 req/min)

Sampling is usually not worth the complexity at this volume; keep 100% of traces and revisit once costs or overhead become noticeable.

Medium-traffic production (1K-10K req/min)
```yaml
# Tail sampling with error/latency policies
tail_sampling:
  policies:
    - errors: 100%
    - latency >2s: 100%
    - normal: 10%
```
High-traffic production (>10K req/min)
```yaml
# Composite: Head (25%) + Tail sampling
probabilistic_sampler: 25%
tail_sampling:
  policies:
    - errors: 100%
    - latency >2s: 100%
    - normal: 5%

# Effective rate: 25% × (errors + slow + 5% normal)
```
Configuration Validation
Test your configuration
Use telemetrygen to send test data to your collector:
```bash
docker run --network host \
  ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest \
  traces --traces 100 \
  --otlp-endpoint localhost:4317 \
  --otlp-insecure
```
Check FusionReactor Cloud's Explore > Traces for a service named `telemetrygen`; the test traces should appear within about 60 seconds.
Verify resource attributes
Query your service in FusionReactor Cloud, then click any trace to inspect its resource attributes in the details panel.
Monitor sampling effectiveness
Track sampling rates with metrics:
```promql
# Total spans received
rate(otelcol_processor_accepted_spans[5m])

# Spans after sampling
rate(otelcol_exporter_sent_spans[5m])

# Sampling rate
rate(otelcol_exporter_sent_spans[5m]) / rate(otelcol_processor_accepted_spans[5m])
```
Next steps
- Instrument your application
- Set up the Collector
- Create custom dashboards
- Review FAQ for common questions
Need more help?
Contact support in the chat bubble and let us know how we can assist.