Verification Telemetry

T.R.U.S.T

Traceable Reasoning, Usage & Source Telemetry

Per-claim verification of generative AI output.

T.R.U.S.T verifies AI-generated content against its source. Every factual claim is traced, scored, and flagged when it cannot be substantiated. Built for environments where AI outputs become regulatory record.

5 verification dimensions · 7 reasoning failure categories · 3 deployment models
t.r.u.s.t · claim verification (example)

Model output under review:
"Treasury yields rose 47 basis points in Q3, exceeding the prior peak of Q1 1994."

Dimension scores: logic 1.1 · sym_scope 6.8 · sym_unit 1.4 · usage 3.9 · arithmetic 1.7

Flagged for review: scope qualifier omitted · sym_scope exceeds threshold
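The flagging behaviour shown in the example above can be sketched as a simple threshold check. This is an illustrative sketch only: the dimension names mirror the demo, but the threshold values and function shape are assumptions, not T.R.U.S.T's actual configuration.

```python
# Illustrative sketch: flag a claim for review when any dimension
# score exceeds its threshold. Threshold values are assumptions.
THRESHOLDS = {"logic": 5.0, "sym_scope": 5.0, "sym_unit": 5.0,
              "usage": 5.0, "arithmetic": 5.0}

def flag_claim(scores: dict[str, float]) -> list[str]:
    """Return the dimensions whose score exceeds the review threshold."""
    return [dim for dim, score in scores.items()
            if score > THRESHOLDS.get(dim, 5.0)]

scores = {"logic": 1.1, "sym_scope": 6.8, "sym_unit": 1.4,
          "usage": 3.9, "arithmetic": 1.7}
flagged = flag_claim(scores)  # ["sym_scope"] -> claim goes to review
```

In this sketch only sym_scope (6.8) crosses its threshold, so the claim is routed to review for the omitted scope qualifier, matching the demo output.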
Verification Dimensions

Five independent verification dimensions

Rather than producing a single confidence score, T.R.U.S.T evaluates model output across five independently trained verification dimensions. Each dimension specialises in a distinct category of reasoning failure, allowing reviewers to focus precisely on the aspect of the output requiring attention.

Dimension 01

Logic

Detects internal inconsistency, invalid inference steps, and reasoning that does not follow from stated premises. Particularly relevant to multi-step analytical workflows.

Detects: contradiction · invalid inference · circular reasoning
Dimension 02

Symbolic Scope

Detects omitted or incorrectly applied qualifiers and modifiers — for example, generalisations from a sample to a population without supporting evidence.

Detects: missing qualifiers · over-generalisation · scope drift
Dimension 03

Symbolic Unit

Detects unit-of-measure errors, currency mismatches, and inconsistencies in dimensional analysis. Critical for financial and scientific applications.

Detects: unit confusion · currency error · scale mismatch
Dimension 04

Source Usage

Detects misattribution and unsupported claims. Verifies that conclusions drawn from cited sources are consistent with the underlying source content.

Detects: misattribution · fabricated citation · source contradiction
Dimension 05

Arithmetic

Detects calculation errors, incorrect aggregation, and statistical misapplication. Particularly relevant to quantitative analysis and numerical reporting.

Detects: calculation error · incorrect aggregation · statistical error
Deployment

Three deployment models

T.R.U.S.T supports three deployment models to accommodate the operational and regulatory constraints of customer environments. Selection between them is typically determined by data residency, network architecture, and audit requirements.

Cloud Service

Managed REST API hosted on Aigos infrastructure. Lowest operational burden, suitable for non-sensitive workloads and proof-of-value engagements.

Operations: Fully managed
Data egress: Required
Update model: Continuous
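A Cloud Service integration would submit each claim and its source to the managed REST API. The request body below is a hedged sketch: the field names, dimension list, and overall payload shape are illustrative assumptions, not the documented T.R.U.S.T API.

```python
import json

# Hypothetical request body for single-claim verification via the
# managed REST API. All field names here are illustrative assumptions;
# consult the actual API reference before integrating.
def build_verification_request(claim: str, source_text: str) -> str:
    """Serialise a single-claim verification request body as JSON."""
    return json.dumps({
        "claim": claim,
        "source": source_text,
        "dimensions": ["logic", "sym_scope", "sym_unit",
                       "usage", "arithmetic"],
    })
```

The dimension list mirrors the five verification dimensions described above; a real integration would also carry authentication and batching concerns not shown here.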

On-Premises

Air-gapped binary distribution for classified, sovereign, and otherwise restricted environments. No outbound network connectivity required at any stage of operation.

Operations: Customer-operated
Data egress: None
Update model: Manual transfer
Industry Applications

Operational use across regulated sectors

T.R.U.S.T is deployed in production at organisations where the consequences of unverified AI output are operationally and legally significant. Representative applications are described below; full case studies are available under non-disclosure to qualified prospects.

Financial Services

Verification of AI-generated equity research, credit memoranda, and regulatory filings prior to analyst review.

Healthcare

Verification of AI-assisted clinical documentation, prior-authorisation correspondence, and patient communication drafts.

Government

Verification of AI-assisted policy analysis, briefing materials, and constituent correspondence prior to publication.

Legal

Verification of AI-generated case research, deposition summaries, and contract analysis prior to attorney review.

In-context verification for analyst workflows

The T.R.U.S.T browser extension presents verification results inline alongside generative model output, eliminating the context-switching that arises from evaluating AI-generated content in a separate review interface. The extension supports ChatGPT, Claude, Gemini, and Perplexity, and integrates with internal LLM deployments via configurable endpoints.

  • Inline per-claim verification overlay
  • Configurable severity thresholds
  • Export of verification logs for audit
  • Compatible with major commercial LLM interfaces
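Pointing the extension at an internal LLM deployment via a configurable endpoint might be expressed as a settings object like the one below. Every key name and value here is an assumption made for illustration; the extension's actual configuration schema may differ.

```python
# Hypothetical extension settings for an internal LLM deployment.
# Key names, the endpoint URL, and the threshold value are
# illustrative assumptions only.
EXTENSION_CONFIG = {
    "endpoint": "https://llm.internal.example/v1/chat",  # placeholder URL
    "severity_threshold": 5.0,   # flag claims scoring above this value
    "export_audit_log": True,    # retain verification logs for audit
    "dimensions": ["logic", "sym_scope", "sym_unit",
                   "usage", "arithmetic"],
}
```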
Request Extension Access
Example overlay (claude.ai/chat):

"In our analysis of the 2023 fiscal year, revenue grew by 23% over the prior period, driven primarily by enterprise contract expansion."
Verified · 5/5 dimensions clear

"This represents the strongest growth in the company's history."
Flagged · sym_scope · claim not supported in source

Evaluate T.R.U.S.T against your workflow

Initial engagements typically include a verification accuracy benchmark against representative output from your existing AI workflow, followed by deployment planning aligned to your regulatory and audit requirements.

Schedule a Briefing · Read the Research