Retab

k-LLMs adds consensus and field-level likelihoods to OpenAI-compatible extraction.

Run K independent generations on the same prompt, align structured outputs, and reconcile every field into one parsed result with measurable uncertainty.

Built by retab

pip install k-llms

How it works

1. Sample K independent outputs: set n > 1 to generate parallel candidates from the same prompt.
2. Align structured outputs: match corresponding list items and nested objects before comparing values.
3. Reconcile per field: apply type-aware consensus and return parsed data plus schema-shaped likelihoods.
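The steps above can be sketched as a minimal consensus loop. This is a hypothetical illustration, not the library's internals: the candidates are hardcoded instead of sampled, and reconciliation is reduced to a plain majority vote with the winner's vote share as the likelihood.

```python
import json
from collections import Counter

# Step 1 (mocked): three candidates as if sampled with n=3 from one prompt.
candidates = [
    {"name": "Science Fair", "date": "Friday", "participants": ["Alice", "Bob"]},
    {"name": "Science Fair", "date": "friday", "participants": ["Alice", "Bob"]},
    {"name": "science fair", "date": "Friday", "participants": ["Bob", "Alice"]},
]

def reconcile(values):
    """Steps 2-3 (simplified): vote over canonicalized values.

    Returns the majority value and its vote share as a likelihood.
    """
    counts = Counter(json.dumps(v, sort_keys=True) for v in values)
    winner, votes = counts.most_common(1)[0]
    return json.loads(winner), votes / len(values)

parsed, likelihoods = {}, {}
for field in candidates[0]:
    parsed[field], likelihoods[field] = reconcile([c[field] for c in candidates])

# Each field resolves to its majority value; likelihoods mirror the schema.
print(parsed)
print(likelihoods)
```

Real reconciliation is type-aware (and list fields go through alignment first), but the shape of the result is the same: one parsed object plus a per-field confidence.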

Basic usage

k-LLMs keeps the OpenAI call shape and adds consensus with n. In Retab extraction APIs, the equivalent control is n_consensus.

Each candidate is parsed to your schema, then reconciled field-by-field. For arrays of objects, alignment runs first so equivalent items are compared even when order differs across model outputs.

Alignment discovers stable keys (or composite keys), applies fuzzy normalization to match near-equivalent keys, runs multi-pass semantic matching on the residuals, and preserves any still-unmatched items as solo rows (zero-loss) before consensus scoring.
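The key-based part of alignment can be sketched in a few lines. This is a simplified stand-in for the real pipeline (no fuzzy normalization or semantic matching), and the "id" field and sample rows are invented for illustration:

```python
# Hypothetical sketch: group list items from each candidate by a stable key
# so equivalent rows are compared even when order differs across outputs.
candidate_lists = [
    [{"id": "A", "qty": 2}, {"id": "B", "qty": 5}],
    [{"id": "B", "qty": 5}, {"id": "A", "qty": 2}],  # same rows, different order
    [{"id": "A", "qty": 3}],                         # one candidate missed row B
]

def align_by_key(lists, key):
    """Bucket items sharing a key value; each bucket is voted on separately."""
    buckets = {}
    for candidate in lists:
        for item in candidate:
            buckets.setdefault(item[key], []).append(item)
    return buckets

aligned = align_by_key(candidate_lists, "id")
# Bucket "A" holds three rows to reconcile; "B" holds two. An item whose key
# appears in only one candidate still gets its own bucket (zero-loss).
```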

from k_llms import KLLMs
from pydantic import BaseModel

client = KLLMs()

class Event(BaseModel):
    name: str
    date: str
    participants: list[str]

response = client.chat.completions.parse(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Extract event info: Alice and Bob are going to a science fair on Friday."}],
    response_format=Event,
    n=4,  # Run 4 independent generations for consensus
)

print("Content:", response.choices[0].message.content)  # Event as a JSON string
print("Parsed:", response.choices[0].message.parsed)  # Event object
print("Likelihoods:", response.likelihoods)  # Schema-shaped confidence scores

for i in range(1, len(response.choices)):
    print(f"Choice {response.choices[i].index}: {response.choices[i].message.content}")

Why k-LLMs?

Make non-deterministic extraction operationally reliable

Single-run outputs hide ambiguity. k-LLMs makes disagreement explicit by returning likelihoods that mirror your schema.

You can route high-likelihood fields automatically and send ambiguous fields to validation or human review policies.
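A routing policy on top of the likelihoods can be as simple as a threshold check. The field names, scores, and the 0.8 cutoff below are illustrative assumptions, not part of the k-LLMs API:

```python
# Hypothetical downstream policy: auto-accept confident fields, queue the
# rest for validation or human review.
likelihoods = {"name": 1.0, "date": 0.5, "participants": 0.75}
THRESHOLD = 0.8  # illustrative cutoff; tune per use case

auto_accepted = {field for field, p in likelihoods.items() if p >= THRESHOLD}
needs_review = {field for field, p in likelihoods.items() if p < THRESHOLD}
```

Because the likelihoods mirror the schema, the same pattern extends to nested objects and lists by walking the structure recursively.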

Background: this approach follows broader K-LLM patterns explored by teams like Palantir.


Watch: consensus and alignment in practice

A short walkthrough of how parallel candidates become one routed output.

Start with one call. Ship with measurable confidence.