Provider Integrations
Drop-in wrappers that add RAIL evaluation to your existing LLM calls. Every response is automatically scored; no code changes are needed beyond swapping the client.
Install: pip install "rail-score-sdk[integrations]" for all providers, or install individually with [openai], [anthropic], [google].
OpenAI
Wrap OpenAI chat completions with automatic RAIL scoring.
from rail_score_sdk.integrations import RAILOpenAI
# Drop-in replacement for openai.OpenAI()
client = RAILOpenAI(
    openai_api_key="sk-...",
    rail_api_key="rail_...",
    threshold=7.0  # Optionally set a minimum quality threshold
)
# Use exactly like the OpenAI client
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
# Standard OpenAI response + RAIL scores attached
print(response.choices[0].message.content)
print(response.rail_score) # RAIL evaluation result
print(response.rail_score.score)  # e.g. 8.5

Anthropic
Wrap Anthropic Claude calls with RAIL evaluation.
from rail_score_sdk.integrations import RAILAnthropic
client = RAILAnthropic(
    anthropic_api_key="sk-ant-...",
    rail_api_key="rail_..."
)
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a hiring policy"}]
)
print(response.content[0].text)
print(response.rail_score.score)  # RAIL evaluation attached

Google Gemini
Wrap Google Gemini calls with RAIL evaluation.
from rail_score_sdk.integrations import RAILGemini
client = RAILGemini(
    google_api_key="AIza...",
    rail_api_key="rail_..."
)
response = client.generate_content(
    model="gemini-2.0-flash",
    contents="Describe the benefits of renewable energy"
)
print(response.text)
print(response.rail_score.score)

Langfuse v3
Add RAIL scores as Langfuse trace metadata for observability.
from rail_score_sdk.integrations import RAILLangfuse
# Integrates RAIL evaluation results into Langfuse traces
rail_langfuse = RAILLangfuse(
    rail_api_key="rail_...",
    langfuse_public_key="pk-...",
    langfuse_secret_key="sk-..."
)
# RAIL scores appear as trace metadata in Langfuse dashboard
rail_langfuse.trace(
    name="chat-response",
    input="User question",
    output="AI response text"
)  # Automatically evaluates and logs RAIL scores

LiteLLM Guardrail
Use RAIL as a guardrail in your LiteLLM proxy.
# In your LiteLLM config, add RAIL as a guardrail:
#
# litellm_settings:
#   guardrails:
#     - rail_score:
#         api_key: rail_...
#         threshold: 7.0
# Or use programmatically:
from rail_score_sdk.integrations import RAILGuardrail
guardrail = RAILGuardrail(
    rail_api_key="rail_...",
    threshold=7.0,
    action="block"  # "block", "log", or "regenerate"
)

How Provider Wrappers Work
1. Your LLM call executes normally and returns a response.
2. The wrapper automatically sends the response text to RAIL for evaluation.
3. RAIL scores are attached to the response object as .rail_score.
4. If a threshold is set and the score falls below it, the wrapper can block or regenerate the response.
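The four steps above can be sketched as a provider-agnostic wrapper. Note this is an illustrative sketch of the pattern, not the SDK's internals: `ScoringWrapper`, `llm_call`, and `score_fn` are hypothetical stand-ins for the real provider client and RAIL evaluator.

```python
class ScoringWrapper:
    """Illustrative sketch: call the model, score the output, enforce a threshold."""

    def __init__(self, llm_call, score_fn, threshold=None, action="log"):
        self.llm_call = llm_call    # stand-in for the underlying provider call
        self.score_fn = score_fn    # stand-in for the RAIL evaluation request
        self.threshold = threshold
        self.action = action        # mirrors the "block" / "log" guardrail actions

    def create(self, prompt):
        response = self.llm_call(prompt)          # 1. normal LLM call
        score = self.score_fn(response["text"])   # 2. evaluate the response text
        response["rail_score"] = score            # 3. attach the score to the response
        if self.threshold is not None and score < self.threshold:
            if self.action == "block":            # 4. enforce the threshold
                raise ValueError(
                    f"score {score} below threshold {self.threshold}"
                )
        return response

# Toy stand-ins for a provider call and an evaluator
wrapper = ScoringWrapper(
    llm_call=lambda p: {"text": f"answer to: {p}"},
    score_fn=lambda text: 8.5,
    threshold=7.0,
)
result = wrapper.create("Explain quantum computing")
print(result["rail_score"])  # 8.5
```

Because evaluation happens after the provider call returns, the wrapper adds latency only for the scoring request and never alters the request sent to the model.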