Safe Regeneration

Evaluate content against quality thresholds and iteratively regenerate improved versions until targets are met. The server handles the eval-improve-regen loop automatically.
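The loop the server runs can be pictured as follows. This is a minimal, illustrative sketch only (not the server's actual implementation), assuming `evaluate` returns a score plus a pass/fail flag and `regenerate` returns improved text:

```python
def eval_improve_loop(content, evaluate, regenerate, max_regenerations=3):
    # Illustrative sketch of the server-side loop, not actual server code.
    # evaluate(text) -> (score, thresholds_met); regenerate(text) -> improved text.
    score, met = evaluate(content)
    best_content, best_score = content, score
    for _ in range(max_regenerations):
        if met:
            break
        content = regenerate(content)
        score, met = evaluate(content)
        if score > best_score:  # keep the best-scoring version seen so far
            best_content, best_score = content, score
    return ("passed" if met else "max_iterations_reached"), best_content
```

Note that the best-scoring version is tracked across iterations, which is why the response below reports a `best_iteration` rather than simply the last one.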

client.safe_regenerate()

Server-side regeneration with quality thresholds:

result = client.safe_regenerate(
    content="Based on your symptoms, you likely have condition X. Take 500mg of medication Y twice daily. No need to see a doctor.",
    mode="basic",
    max_regenerations=3,
    thresholds={
        "overall": {"score": 7.0, "confidence": 0.5},
        "tradeoff_mode": "priority",
        "dimensions": {"safety": 8.0, "reliability": 8.0},
    },
    domain="healthcare",
)

print(f"Status: {result.status}")                 # "passed" or "max_iterations_reached"
print(f"Best content: {result.best_content}")
print(f"Best iteration: {result.best_iteration}")
print(f"Overall: {result.best_scores.rail_score.score}/10")

# Threshold results
thresholds = result.best_scores.thresholds_met
print(f"All passed: {thresholds.all_passed}")

# Iteration history
for iteration in result.iteration_history:
    print(f"  Iteration {iteration.iteration}: met={iteration.thresholds_met}, failing={iteration.failing_dimensions}")

# Credits
print(f"Total credits: {result.credits_consumed}")
print(f"  Evaluations: {result.credits_breakdown.evaluations}")
print(f"  Regenerations: {result.credits_breakdown.regenerations}")

Credit cost: Each evaluation costs 1.0 credit (basic) or 3.0 (deep), plus 1.0–4.0 per regeneration. With max_regenerations=3 in basic mode, expect up to ~7 credits. See Credits & Pricing.

client.safe_regenerate_continue()

Use this for client-side regeneration: regenerate the content with your own LLM, then submit the result to continue the evaluation loop.

# Initial request returns a session_id and rail_prompt
initial = client.safe_regenerate(
    content="Original text with issues...",
    mode="basic",
    max_regenerations=2,
)

# If status is "awaiting_regeneration", regenerate with your own LLM
if initial.status == "awaiting_regeneration":
    # Use initial.rail_prompt to regenerate
    my_improved = my_llm.generate(initial.rail_prompt.user_prompt)

    # Continue the session with your regenerated content
    result = client.safe_regenerate_continue(
        session_id=initial.session_id,
        regenerated_content=my_improved,
    )
    print(f"Status: {result.status}")  # "passed", "max_iterations_reached", or "awaiting_regeneration"

Sessions expire after 15 minutes; calling safe_regenerate_continue on an expired session raises a SessionExpiredError.
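The single round above generalizes to a loop: keep regenerating while the status is "awaiting_regeneration". A sketch reusing the `client` and `my_llm` objects from the example, under two assumptions not confirmed by the docs: the session_id stays constant across rounds, and each awaiting response carries a fresh rail_prompt:

```python
def regenerate_until_done(client, my_llm, content, max_regenerations=2):
    # Drive the client-side eval/regenerate loop until the server stops
    # asking for another round.
    result = client.safe_regenerate(
        content=content, mode="basic", max_regenerations=max_regenerations,
    )
    session_id = result.session_id
    while result.status == "awaiting_regeneration":
        improved = my_llm.generate(result.rail_prompt.user_prompt)
        result = client.safe_regenerate_continue(
            session_id=session_id,
            regenerated_content=improved,
        )
    return result  # status is now "passed" or "max_iterations_reached"
```

Because the session window is 15 minutes, keep your LLM call inside the loop fast, or be prepared to restart the loop from safe_regenerate if SessionExpiredError is raised.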

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| content | str | Required | Text to evaluate and improve (10–10,000 chars) |
| mode | str | "basic" | "basic" or "deep" |
| max_regenerations | int | 3 | Maximum iterations (1–5) |
| thresholds | dict | overall ≥ 7.0 | Threshold config (see below) |
| domain | str | "general" | Content domain for context-aware scoring |
| weights | dict | equal | Custom dimension weights (must sum to 100) |
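Since custom weights must sum to 100, it can be worth validating them before calling. A quick guard; the dimension names other than "safety" and "reliability" are hypothetical, not taken from this API's documented dimension list:

```python
# Custom per-dimension weights; must sum to exactly 100.
# "clarity" is a hypothetical dimension name used for illustration.
weights = {"safety": 40, "reliability": 40, "clarity": 20}

total = sum(weights.values())
if total != 100:
    raise ValueError(f"weights sum to {total}, expected 100")
```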

Thresholds object

| Field | Default | Description |
| --- | --- | --- |
| overall.score | 7.0 | Minimum overall RAIL score to pass |
| overall.confidence | 0.5 | Minimum confidence to accept the score |
| tradeoff_mode | "priority" | One of "priority", "strict", "weighted" |
| dimensions | (none) | Per-dimension overrides, e.g. {"safety": 8.0} |
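Putting the fields together, a stricter configuration than the defaults might look like this (the specific scores are illustrative, not recommendations):

```python
# Require every configured dimension to pass ("strict" tradeoff mode),
# with floors above the 7.0 overall default for sensitive content.
thresholds = {
    "overall": {"score": 7.5, "confidence": 0.6},
    "tradeoff_mode": "strict",
    "dimensions": {"safety": 9.0, "reliability": 8.5},
}
```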

Response: SafeRegenerateResult

{
    "status": "passed",                    # "passed" | "max_iterations_reached"
    "best_content": "Improved text...",
    "best_iteration": 2,
    "original_content": "Original text...",
    "best_scores": {
        "rail_score": {"score": 8.1, "confidence": 0.82, "summary": "..."},
        "dimension_scores": { ... },
        "thresholds_met": {
            "overall_passed": true,
            "all_passed": true,
            "dimension_results": {
                "safety": {"score": 9.0, "threshold": 8.0, "passed": true}
            }
        }
    },
    "iteration_history": [
        {"iteration": 0, "thresholds_met": false, "failing_dimensions": ["safety"]},
        {"iteration": 1, "thresholds_met": false, "failing_dimensions": ["reliability"]},
        {"iteration": 2, "thresholds_met": true,  "failing_dimensions": []}
    ],
    "credits_consumed": 7.0,
    "credits_breakdown": {"evaluations": 3.0, "regenerations": 4.0, "total": 7.0}
}
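When a run ends with "max_iterations_reached", the per-dimension results show what is still failing. A small helper over the response payload, treating it as a plain dict as shown above (the SDK result object may expose typed attributes instead):

```python
def failing_dimensions(thresholds_met):
    # List dimensions whose score fell below their configured threshold.
    return [name
            for name, result in thresholds_met["dimension_results"].items()
            if not result["passed"]]

sample = {
    "overall_passed": True,
    "all_passed": False,
    "dimension_results": {
        "safety": {"score": 9.0, "threshold": 8.0, "passed": True},
        "reliability": {"score": 7.5, "threshold": 8.0, "passed": False},
    },
}
print(failing_dimensions(sample))  # -> ['reliability']
```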