Position Paper: The Case for Epistemic Calculus and the Codex News Graph

The Problem: Narrative Collapse

In our current information environment, news systems are designed to deliver "the answer." Whether through social media algorithms or traditional headlines, information is smoothed over to create a cohesive story. This process hides uncertainty, deletes contradictions, and obscures the evidence—or lack thereof—behind a claim. When the story changes, we are left with "narrative whiplash," unsure of what changed or why.

The Solution: An Engineering Approach to Knowledge

The Codex News Graph is not a "truth machine." It does not attempt to tell the user what to believe. Instead, it is a piece of infrastructure—a ledger—that treats information as an engineering problem.

By applying Epistemic Calculus, we have moved from treating news as a collection of stories to treating it as a system of measurable "claims." This is a significant shift for three reasons:

1. Measuring the Support, Not the Verdict

Instead of a simple "True/False" label, every claim in the system is assigned an Epistemic Vector. We measure the strength of the direct evidence, the independence of the sources, and how well the claim holds up over time. We aren't judging the truth; we are measuring the quality of the justification.
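
To make the idea a little more concrete, here is a minimal sketch of what such a vector could look like as a data structure. The field names, the 0-to-1 scale, and the example values are assumptions invented for this illustration, not the actual Codex schema.

```python
# Illustrative sketch only: field names, scale, and weights are invented here,
# not the actual Codex schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicVector:
    """Describes how well a claim is supported, not whether it is true."""
    evidence_strength: float    # 0..1, quality of the direct evidence cited
    source_independence: float  # 0..1, how independent the supporting sources are
    temporal_stability: float   # 0..1, how consistent the claim has stayed over time

    def as_tuple(self) -> tuple[float, float, float]:
        return (self.evidence_strength,
                self.source_independence,
                self.temporal_stability)

# Example: a claim backed by one official statement, widely echoed but not
# independently confirmed, and unchanged since first publication.
claim_support = EpistemicVector(evidence_strength=0.6,
                                source_independence=0.3,
                                temporal_stability=0.9)
print(claim_support.as_tuple())  # (0.6, 0.3, 0.9) -- a profile, not a verdict
```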

2. Preserving Contradiction

In standard systems, a contradiction is a bug to be resolved. In Codex, a contradiction is a first-class piece of data. By explicitly mapping where claims conflict, the system can calculate Epistemic Friction. This identifies "load-bearing" parts of a story—the claims that, if proven wrong, would cause the entire narrative to collapse.
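
As an illustration of what "contradiction as data" can mean in code, the sketch below keeps contradictions as explicit edges alongside support edges and derives a crude friction count from them. The graph layout, the helper names, and the formula are all invented for this example rather than taken from Codex itself.

```python
# A minimal sketch of contradictions kept as first-class edges. The graph shape
# and the friction formula are assumptions for illustration only.
from collections import defaultdict

supports = defaultdict(set)     # claim -> claims whose justification rests on it
contradicts = defaultdict(set)  # claim -> claims it directly conflicts with

def add_support(parent: str, child: str) -> None:
    supports[parent].add(child)

def add_contradiction(a: str, b: str) -> None:
    contradicts[a].add(b)
    contradicts[b].add(a)

def dependents(claim: str) -> set[str]:
    """Every claim that would be weakened if `claim` were overturned."""
    seen, stack = set(), [claim]
    while stack:
        for child in supports[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def friction(claim: str) -> int:
    """Unresolved contradictions touching the claim or anything built on it."""
    affected = {claim} | dependents(claim)
    return sum(len(contradicts[c]) for c in affected)

# "Load-bearing" claims are those with high friction and many dependents.
add_support("official_statement", "casualty_count")
add_support("casualty_count", "headline_framing")
add_contradiction("casualty_count", "eyewitness_report")
print(friction("official_statement"))  # 1 -- the conflict propagates upward
```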

3. Accountability Through an Append-Only Ledger

Everything in this system is recorded in an "append-only" format. Like a financial ledger, you cannot delete the past. If a news outlet changes its story, the system doesn't just update the text; it records the change as an event. This allows us to calculate Epistemic Momentum—tracking how a story's justification grows or decays over time.
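
A minimal sketch of that pattern follows, using an in-memory list as the ledger and a deliberately naive momentum measure (latest support minus first recorded support). The event fields and the formula are illustrative assumptions, not the system's actual definitions.

```python
# Sketch of an append-only claim history and a simple momentum measure over it.
# Event fields and the momentum formula are illustrative assumptions.
import time

ledger: list[dict] = []  # append-only: records are added, never edited or removed

def record_event(claim_id: str, support: float, note: str) -> None:
    """Append a new observation about a claim; earlier entries stay untouched."""
    ledger.append({
        "claim_id": claim_id,
        "support": support,      # structural support at this moment (0..1)
        "note": note,
        "recorded_at": time.time(),
    })

def momentum(claim_id: str) -> float:
    """Net change in support between the first and latest recorded state."""
    history = [e for e in ledger if e["claim_id"] == claim_id]
    if len(history) < 2:
        return 0.0
    return history[-1]["support"] - history[0]["support"]

record_event("casualty_count", 0.4, "single outlet, unnamed source")
record_event("casualty_count", 0.7, "second independent outlet confirms")
record_event("casualty_count", 0.5, "original outlet revises its figure")
print(round(momentum("casualty_count"), 2))  # 0.1: grew, then partly decayed
```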

Why This Matters

This work matters because it restores procedural humility to information. It acknowledges that "we don't know" is a valid and necessary state. By combining the reliability of deterministic code (the ledger) with the flexibility of probabilistic analysis (the calculus), we have built a tool that:

  • Protects the user from being misled by structural gaps in evidence.
  • Exposes incentives and pressures that shape how a story is told.
  • Maintains human authority, providing an "X-ray" of the news so that individuals can make their own informed judgments.

We haven't built a system to fix the news; we've built a system to show us exactly how the news is built, where it's strong, and where it’s carrying a debt of proof it hasn't yet paid.


A Position Paper on the Codex Project

Why This Work Matters (Without Hype)

Summary

The Codex project is an experiment in building software systems that help people understand complex, contested information without pretending to decide what is true for them. It combines probabilistic tools (like AI language models) with deterministic systems (like ledgers and formal rules) in a carefully constrained way, so that each does what it is good at without being allowed to overreach.

This may sound abstract, but the motivation is very practical: modern information systems are extremely good at producing confident answers, rankings, and summaries — and very bad at preserving uncertainty, disagreement, and provenance. Codex is an attempt to fix that failure mode in a concrete, buildable way.


The Problem Being Addressed

Most modern software systems — especially AI-assisted ones — are designed to collapse complexity. They summarize, rank, score, and decide. This is useful for many tasks, but it breaks down badly when:

  • information is incomplete or disputed,
  • sources contradict each other,
  • incentives distort how narratives form,
  • or conclusions are premature.

In those cases, systems often produce outputs that look authoritative but quietly erase uncertainty. This isn’t malicious; it’s a consequence of design choices. But it has real costs, especially in journalism, governance, and decision-making.


What Codex Does Differently

Codex was built around a simple but demanding principle:

Software should preserve uncertainty and disagreement unless a human explicitly resolves it.

To make that work, the system enforces several constraints:

  • Append-only records: Nothing is silently edited or overwritten. The system remembers how understanding changed over time.

  • Separation of roles:

    • Probabilistic systems (like AI) can suggest interpretations.
    • Deterministic systems record facts, claims, and contradictions.
    • Humans retain judgment and authority.

  • No automatic truth scoring: The system can measure how evidence is structured, but it never declares what is true.

  • Explicit contradictions: Disagreements are preserved as first-class objects, not treated as errors to be smoothed over.

  • Careful limits on automation: Convenience is intentionally constrained to avoid subtle loss of control.

These choices make the system slower and more demanding — deliberately so. The friction is part of the safety mechanism.
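
To show how the separation of roles above might be wired together, here is a small sketch in which the probabilistic layer can only propose, a human must explicitly approve, and the deterministic layer records the result. The function names and the stub "model" are hypothetical; the point is the shape of the boundary, not an implementation.

```python
# Hypothetical sketch of the role separation; nothing here is the real Codex
# interface. The model stub proposes, a person approves, the ledger records.

def model_suggest(article_text: str) -> list[str]:
    """Probabilistic layer: may propose candidate claims, nothing more."""
    # A real system would call a language model here; this stub returns
    # no suggestions so the sketch runs without one.
    return []

def human_review(candidates: list[str]) -> list[str]:
    """Human layer: only explicitly approved claims go any further."""
    approved = []
    for claim in candidates:
        answer = input(f"Record this claim? [y/N] {claim} ")
        if answer.strip().lower() == "y":
            approved.append(claim)
    return approved

def append_to_ledger(ledger: list[dict], claim: str, source: str) -> None:
    """Deterministic layer: records what was approved, append-only."""
    ledger.append({"claim": claim, "source": source})

ledger: list[dict] = []
for claim in human_review(model_suggest("...article text...")):
    append_to_ledger(ledger, claim, source="example-outlet")
```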


Why This Was Hard

Technically, this project was not about inventing new algorithms. It was about preventing powerful tools from doing things they are very good at but shouldn’t do automatically.

The hardest part was integrating:

  • probabilistic systems that generate possibilities,
  • deterministic systems that enforce structure and memory,
  • and human judgment that must remain central,

without letting any one of them dominate the others.

Most systems fail here. They either give AI too much authority, or they reduce it to a gimmick. Codex required holding multiple constraints in balance at once, which is conceptually demanding even for experienced engineers.


Why This Matters

Codex is not a product meant to “fix the world,” and it is not a claim about artificial intelligence replacing human judgment. It is a proof of concept that complex, ambiguous information can be handled honestly by software — without false certainty, hidden assumptions, or quiet manipulation.

More broadly, it demonstrates that:

  • advanced tools can be used without surrendering control,
  • humility can be enforced in system design,
  • and powerful technology can be constrained by principle rather than hype.

That combination is rare — and increasingly important.


Closing Thought

The value of this work is not that it is flashy or revolutionary. It is that it shows a disciplined way to think with machines — one that respects human limits, preserves disagreement, and refuses to pretend that complexity can always be resolved.

That, quietly, is the point.


What I’ve Been Building: A Tool for Clear Thinking in a Noisy World

For the past few weeks, I’ve been working on a small but precise piece of software called Codex News Graph. It’s not an app, a startup, or a “solution” to misinformation. It’s a quiet instrument—like a microscope for news—that helps people see how stories actually evolve over time.

Most news tools today show you what happened. This one shows you how we know what happened—and where accounts disagree, change, or lack evidence. It does this by treating every article not as a finished truth, but as raw material to be examined.

Here’s how it works:

  • When news comes in, the system saves it exactly as published—no summarizing, no scoring, no filtering.
  • A person (not an AI) then extracts specific claims: “Official X said Y,” “Video shows Z,” “Outlet A reported a casualty count of 5.”
  • If two outlets say conflicting things, the system doesn’t pick a winner. It records the contradiction explicitly: “Claim A and Claim B disagree on this point, here’s the evidence each cites.”
  • It also notes structural pressures—like legal risk or reputation concerns—that might shape how a story is told, without guessing anyone’s intentions.

All of this is stored in an unchangeable log, like a lab notebook. Nothing is overwritten. If a claim is later updated or retracted, the original stays visible, marked with its history.
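
For readers who prefer shape to metaphor, here is one way such a log could be laid out, assuming a JSON-lines file and invented event names (article_ingested, claim_extracted, and so on). It is a sketch of the idea, not the project's actual file format.

```python
# A sketch of what the "lab notebook" log might look like. The event names and
# field layout are assumptions made for this description, not the real format.
import json, time

def append(log_path: str, event_type: str, **fields) -> None:
    """Write one event to the end of the log; existing lines are never touched."""
    entry = {"type": event_type, "at": time.time(), **fields}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

LOG = "codex_log.jsonl"

# 1. The article is saved exactly as published.
append(LOG, "article_ingested", outlet="Outlet A", text="...verbatim article...")

# 2. A person extracts a specific claim from it.
append(LOG, "claim_extracted", claim_id="c1",
       text="Outlet A reported a casualty count of 5", source="Outlet A")

# 3. A conflicting account is recorded as a contradiction, not resolved.
append(LOG, "contradiction_noted", between=["c1", "c2"],
       note="Outlet B reports a different count; each cites its own source")

# 4. A later retraction is a new event; the original claim stays visible.
append(LOG, "claim_retracted", claim_id="c1", note="Outlet A issued a correction")
```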

Recently, we added a layer of careful measurement—not to judge truth, but to show how well-supported a claim is. For example:

  • Does it rely on a video, an official statement, or a social media post? (Each carries different weight.)
  • Is it repeated by many independent sources, or just echoed?
  • Has it stayed consistent over time, or shifted as new facts emerged?

These numbers aren’t verdicts. They’re more like the “nutrition labels” on food: they describe structure, not value. And crucially, they can always be traced back to the original evidence.
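
As a small worked example of the "independent or just echoed" question, the sketch below counts distinct ultimate origins rather than raw repetitions. The report data and the origin field are made up for illustration; the real measurement is surely richer, but the traceability point is the same: the number keeps a link back to the evidence it came from.

```python
# One way the "independence" figure might be derived, traceably: count distinct
# ultimate origins rather than raw repetitions. The data and the origin field
# are illustrative assumptions, not the system's actual inputs.

reports = [
    {"outlet": "Outlet A", "origin": "official statement"},
    {"outlet": "Outlet B", "origin": "official statement"},   # echo, same origin
    {"outlet": "Outlet C", "origin": "official statement"},   # echo, same origin
    {"outlet": "Outlet D", "origin": "on-the-ground video"},  # independent
]

repetitions = len(reports)                         # 4 outlets repeat the claim
independent = len({r["origin"] for r in reports})  # but only 2 distinct origins

print(f"repeated {repetitions} times, {independent} independent lines of evidence")
# The label shows both numbers and keeps the list of reports they came from,
# so the figure can always be traced back to the underlying evidence.
```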

This matters because modern news often blurs fact, framing, and opinion into a single stream. Codex pulls them apart—not to tell you what to believe, but to make it easier to think for yourself.

It’s built with strict rules:

  • No automation of judgment: Humans decide what gets recorded.
  • Uncertainty is preserved: “We don’t know yet” is a valid, respected state.
  • Everything is auditable: You can always check the source, the claim, and the context.

I’m proud of this work not because it’s flashy, but because it’s honest. In a world that rewards speed and certainty, it chooses patience and clarity instead. It doesn’t fix the news—but it gives people better tools to navigate it.

