Foodgeist Verification Methodology

Public document. Last updated 2026-04-13.

This document describes how Foodgeist verifies every claim in the gastronomic graph before it reaches a user. It is published deliberately: no other food platform makes its verification process auditable, and we believe rigor should be the default — not the exception.


Why publish this

Most food databases, recipe sites, and trend-intelligence platforms treat their data as a black box. Crowd contributions, scraped recipes, AI-generated text, and outdated science all coexist with no marker that lets a reader tell which is which. We think that's a defect, not a feature.

Foodgeist's claim is that every entry in our graph has been adversarially reviewed, that every verifiable claim carries a confidence tier, and that the methodology can be inspected. This page is the inspection surface.


The pipeline, end to end

Every entry — whether a cooking technique, an ingredient, a recipe, or a flavor pairing — passes through six stages before becoming visible on a public page.

Stage 1 — Generation

Entries enter the system from one of three pathways:

  1. Imports from public datasets with permissive licensing. Provenance is preserved on every record (data.source, data.source_url, data.license, data.imported_at).
  2. Trend-driven discovery — the platform watches a wide set of public signals and proposes new entries when coverage is thin in a relevant area.
  3. Generative drafting — for entries where structured data must be synthesized (technique science, sensory profiles, molecular pairings), a generation model produces a draft against a strict schema with all required fields.

Generated entries are flagged with data.provenance = 'generated' and pass through the full validation chain. Imported entries inherit the source's reliability and pass through a lighter audit.
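For concreteness, the provenance fields above can be sketched as a record shape. This is an illustrative Python sketch, not the platform's actual code; only the field names listed in the text come from the document, and `EntryData` / `is_generated` are hypothetical names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EntryData:
    """Provenance fields named in the text; everything else is assumed."""
    source: Optional[str] = None       # e.g. name of the public dataset
    source_url: Optional[str] = None
    license: Optional[str] = None
    imported_at: Optional[str] = None  # ISO-8601 import timestamp
    provenance: str = "imported"       # 'generated' for model-drafted entries

def is_generated(data: EntryData) -> bool:
    # Generated entries take the full validation chain;
    # imported entries take the lighter audit.
    return data.provenance == "generated"
```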

Stage 2 — Structural validation

Every entry must satisfy the schema for its type. Structural validation checks:

  • All required fields are present and non-empty.
  • Numeric fields fall within plausible bounds (temperatures, pH, ratios).
  • Enumerated fields use known values (transformation type, complexity level, dietary flags).
  • References to other entities (techniques mentioned in recipes, compounds mentioned in pairings) resolve to existing rows.

Entries that fail structural validation are rejected at the boundary and never enter the graph. The rejection is logged for the discovery agent to learn from.
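The four checks above can be sketched as a single pass that collects errors. This is a minimal illustration under an assumed schema shape; the document does not describe the real schema machinery.

```python
def validate_structure(entry: dict, schema: dict, known_ids: set) -> list:
    """Return a list of structural errors; an empty list means the entry passes."""
    errors = []
    # 1. Required fields present and non-empty.
    for name in schema.get("required", []):
        if not entry.get(name):
            errors.append(f"missing or empty required field: {name}")
    # 2. Numeric fields within plausible bounds.
    for name, (low, high) in schema.get("bounds", {}).items():
        value = entry.get(name)
        if value is not None and not (low <= value <= high):
            errors.append(f"{name}={value} outside plausible range [{low}, {high}]")
    # 3. Enumerated fields use known values.
    for name, allowed in schema.get("enums", {}).items():
        value = entry.get(name)
        if value is not None and value not in allowed:
            errors.append(f"{name}={value!r} is not a known value")
    # 4. References resolve to existing rows.
    for ref in entry.get("references", []):
        if ref not in known_ids:
            errors.append(f"unresolved reference: {ref}")
    return errors
```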

Stage 3 — Domain-pattern filtering

Beyond structure, generated entries are filtered against known failure modes specific to AI-authored food content:

  • Fake-pattern detection rejects entries whose names or descriptions contain hallmarks of fabricated content (e.g., physically impossible ingredient claims, anachronistic origin dates, made-up brand names dressed as techniques).
  • Temperature plausibility checks claimed temperatures against established culinary ranges from foundational food-science references — entries that claim temperatures outside the realistic range for their reaction class are flagged.
  • Compound plausibility checks compounds named in molecular_pairing.key_compounds_produced against PubChem and ChEBI for existence. Invented compounds are flagged.
  • Civilization plausibility checks origin claims against a curated list of recognized cuisines, civilizations, and historical periods.
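The temperature-plausibility check, for example, reduces to a range lookup per reaction class. The ranges below are illustrative placeholders, not the reference values used in production.

```python
# Illustrative reaction-class temperature ranges in °C. The production
# ranges come from foundational food-science references, not this table.
PLAUSIBLE_RANGES_C = {
    "maillard": (110, 180),
    "caramelization": (110, 200),
    "sous_vide": (45, 95),
}

def temperature_plausible(reaction_class: str, temp_c: float) -> bool:
    """Flag entries claiming temperatures outside their reaction class."""
    low, high = PLAUSIBLE_RANGES_C[reaction_class]
    return low <= temp_c <= high
```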

Stage 4 — Adversarial review

Entries that survive structural and domain filtering go through an adversarial-review stage powered by a high-reasoning model whose role is to find reasons to reject. The reviewer is instructed not to give the entry the benefit of the doubt: if a claim cannot be defended, it is flagged or removed.

The adversarial reviewer outputs a verdict (pass / flag / reject) and a list of issues. Entries with a reject verdict are not added to the graph; entries with a flag verdict are added with reduced confidence and surfaced for human review.
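The verdict handling described above amounts to a three-way route. A hedged sketch with hypothetical names:

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FLAG = "flag"
    REJECT = "reject"

def route_entry(verdict: Verdict, entry: dict):
    """Apply a review verdict: reject drops the entry, flag admits it with
    reduced confidence and queues human review, pass admits it unchanged."""
    if verdict is Verdict.REJECT:
        return None  # never enters the graph
    if verdict is Verdict.FLAG:
        return {**entry, "confidence_reduced": True, "needs_human_review": True}
    return entry
```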

Stage 5 — External verification

For claims that can be cross-checked against authoritative external sources, Foodgeist queries those sources directly:

  • Compound names are verified against PubChem and ChEBI.
  • Cultural origin is cross-referenced with Wikipedia and academic culinary databases where licensing permits.
  • Temperature and reaction claims are weighted against peer-reviewed food-science literature embedded in our reference base.

The external verifier assigns each entry a confidence tier from A to F:

  Tier  Meaning
  A     Confirmed by a peer-reviewed academic source
  B     Documented in a reputable secondary source (Wikipedia, established cookbook, food-science reference)
  C     Cultural / regional knowledge — the technique exists and is practiced but is not deeply documented
  D     Oral tradition — accepted as cultural knowledge but lacks formal documentation
  E     Recent novel invention by a named chef or institution
  F     Under review — contains claims that have not yet been verified

The tier is published per entry and visible on the public page.
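One way to read the tier ladder: the verifier assigns the strongest tier whose evidence is present, falling through to F when nothing has been verified yet. A sketch with made-up evidence labels (the document does not specify how evidence is represented internally):

```python
def best_tier(evidence: set) -> str:
    """Pick the strongest supported tier; the evidence labels are illustrative."""
    ladder = [
        ("peer_reviewed", "A"),      # academic confirmation
        ("secondary_source", "B"),   # reputable documentation
        ("regional_practice", "C"),  # cultural / regional knowledge
        ("oral_tradition", "D"),     # accepted but undocumented
        ("named_inventor", "E"),     # recent novel invention
    ]
    for label, tier in ladder:
        if label in evidence:
            return tier
    return "F"  # under review: nothing verified yet
```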

Stage 6 — Continuous freshness audit

Verification is not a one-time event. A continuously running audit re-evaluates older entries against newly imported reference data. Entries that drift out of step with current evidence are flagged and either re-enriched or sent back through adversarial review.

Each entry carries a freshness_score, a last_audited_at timestamp, and an audit log of every check it has undergone.
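An audit of this shape can be sketched as a staleness predicate over the fields named above. The thresholds are invented for illustration, and the sketch assumes last_audited_at is stored as epoch seconds:

```python
import time

def needs_reaudit(entry: dict, max_age_days: int = 90,
                  min_freshness: float = 0.6) -> bool:
    """Flag an entry for re-audit when it is stale or its freshness score
    has drifted below threshold. Field names follow the document; the
    default thresholds are illustrative assumptions."""
    age_days = (time.time() - entry["last_audited_at"]) / 86400
    return age_days > max_age_days or entry["freshness_score"] < min_freshness
```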


What this gets us — and what it doesn't

This pipeline catches a meaningful share of common failure modes: hallucinated compounds, impossible temperatures, fabricated cultural attributions, schema violations, drift over time. We measure ourselves against the question: "If a credentialed food scientist read 100 random entries, how many would they accept as accurate?"

What this pipeline does not do:

  • It does not turn opinion into fact. Sensory descriptors ("bright, mineral, slightly bitter") are inherently subjective and survive the pipeline as long as they are coherent.
  • It does not eliminate all errors. Verification is statistical: we drive error rates down, not to zero.
  • It does not replace human judgment for culturally sensitive claims. Origin attributions in particular are an active area of refinement.

We invite anyone who finds an inaccurate claim to flag it directly on the entry. Flagged claims are queued for re-review.


Independent audit

A periodic independent audit is part of the methodology. Audit reports — including any discovered error rates — are published in full on this page. The next audit is scheduled for week 8 of the current scaling push, when our verified-entry sample is statistically meaningful.


Source of truth

This document is checked into the public Foodgeist repository at docs/VERIFICATION_METHODOLOGY.md. Every revision is in git; if anything changes, the change is auditable.

The pipeline implementation lives in src/lib/cerebrus/ and nexus-service/. Routes that perform validation, adversarial review, and external verification are public — their endpoints can be inspected via the platform's standard observability surface.


Foodgeist is the only food platform that publishes its verification methodology. We believe that should change. Until it does, this is the document.
