Getting Started with TrustCall

Introduction

TrustCall is a framework that provides secure, structured, and auditable interfaces for LLM calls, whether to internal models, hosted endpoints, or third-party APIs. It enforces consistent usage policies, tracks provenance, validates inputs and outputs, and centralizes logging for all LLM interactions. By wrapping and governing every model invocation, TrustCall gives teams a clear, policy-controlled, and observable layer over language model access, which is crucial for privacy-sensitive, regulated, or mission-critical use cases.
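
To make this concrete, the sketch below shows what a governed call through such a layer might look like. The `trustcall` module, `Client` class, parameter names, and result fields are illustrative assumptions, not TrustCall's actual API:

```python
# Hypothetical usage sketch: the module, class, and parameter names here
# are illustrative assumptions, not TrustCall's actual API.
from trustcall import Client

client = Client(
    caller="search-service",     # identity used for access control
    policy="default-redaction",  # usage policy applied to every call
)

result = client.complete(
    model="gpt-4o",
    prompt="Summarize the attached incident report.",
)

# Every call would return the completion plus governance metadata.
print(result.text)
print(result.metadata)  # e.g. model version, latency, cost, outcome status
```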

Key benefits of using TrustCall include:

  • Structured LLM Invocation Layer: Standardizes how LLM calls are made and logged, including inputs, outputs, metadata, model version, latency, cost, and outcome status.

  • Security and Access Controls: Integrates with internal identity and permissioning systems to restrict model access by user, service, or context.

  • Auditable Trace Logs: Captures full execution traces for each call—including user, prompt, tools used, response content, and fallback behavior—enabling deep forensic analysis.

  • Input/Output Validation and Guardrails: Applies safety checks on prompts and completions (e.g., length, banned content, formatting), with support for auto-redaction and fallback routing (see the first sketch after this list).

  • Multi-Backend Support: Abstracts over multiple LLM providers (e.g., OpenAI, Anthropic, vLLM, internal models) with consistent interfaces and fallback logic for failover or model switching (see the second sketch after this list).

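The validation and guardrail behavior described above can be pictured as a pipeline wrapped around the raw model call. The sketch below illustrates the general pattern in plain Python; the specific checks, limits, and redaction rules are assumptions for illustration, not TrustCall's built-in policies:

```python
import re

# Illustrative guardrail configuration (assumed values, not TrustCall defaults).
BANNED = re.compile(r"(?i)\b(ssn|password)\b")
MAX_PROMPT_CHARS = 8000

def redact(text: str) -> str:
    """Auto-redaction: mask banned terms before the prompt leaves the process."""
    return BANNED.sub("[REDACTED]", text)

def guarded_call(prompt: str, call_model) -> str:
    """Apply input checks, invoke the model, then check the output.

    `call_model` stands in for whatever backend actually runs the prompt.
    """
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds length guardrail")
    completion = call_model(redact(prompt))
    if BANNED.search(completion):
        # Fallback routing: withhold the completion rather than return it.
        return "[response withheld by output guardrail]"
    return completion
```
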
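Multi-backend failover follows a similar shape: try providers in priority order and fall through on errors. Again, this is a sketch of the pattern under assumed names, not TrustCall's real interface:

```python
from typing import Callable, Sequence

def call_with_failover(
    prompt: str,
    backends: Sequence[Callable[[str], str]],
) -> str:
    """Try each backend in order, falling through to the next on failure."""
    last_error = None
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as err:  # e.g. timeout, rate limit, server error
            last_error = err
    raise RuntimeError("all backends failed") from last_error
```
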
TrustCall sits at the heart of all production-grade LLM applications—from chat-based agents and semantic search systems to internal dev tools and RAG pipelines. It integrates with observability tools like LangFuse, Prometheus, and OpenTelemetry, and plays a critical role in ensuring your AI stack is safe, reliable, and accountable by default.
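
For example, a TrustCall-style wrapper could surface each call as an OpenTelemetry span. Only the OpenTelemetry API below is real; the span name, attribute keys, and `call_model` callback are illustrative assumptions:

```python
from opentelemetry import trace

tracer = trace.get_tracer("trustcall")  # instrumentation name is assumed

def traced_call(prompt: str, model: str, call_model) -> str:
    """Wrap a model invocation in an OpenTelemetry span carrying call metadata."""
    with tracer.start_as_current_span("llm.completion") as span:
        # Attribute keys are illustrative, not an official semantic convention.
        span.set_attribute("llm.model", model)
        span.set_attribute("llm.prompt_chars", len(prompt))
        completion = call_model(prompt)
        span.set_attribute("llm.completion_chars", len(completion))
        return completion
```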

By adopting TrustCall, you can ensure your LLM-powered systems are governed, auditable, and production-safe, empowering teams to build with confidence and meet the highest standards of AI safety and compliance.

Important Links

Main Site

Documentation