
Tools for Democracy

Knowledge infrastructure for governments that want to fix themselves. Built by Cory Weinstein.


Confidence Monitor

Know how sure you should be about what you know.

The problem it solves

Not all information ages the same way. A policy analysis from last month might still be solid. A threat assessment from last year might be dangerously outdated. Most organizations treat all stored knowledge as equally reliable — and make decisions based on information they shouldn't trust anymore.

What it does

  • 01

    Tracks reliability over time — every piece of knowledge gets a confidence score that naturally decays as conditions change.

  • 02

    Flags stale assessments — before someone makes a decision based on outdated analysis, the system warns them.

  • 03

    Identifies blind spots — shows where your knowledge base has gaps, where confidence is concentrated in a single source, or where assessments haven't been refreshed.

  • 04

    Calibrates over time — learns which types of knowledge decay faster and adjusts its scoring accordingly.

  • 05

    Records outcomes to measure accuracy — when predictions resolve, record whether they came true. The system tracks your hit rate and shows where your team is overconfident or underconfident.

How it works

1

Register assessments with initial confidence

Each claim, estimate, or assessment gets a confidence score (0-100%) and a domain that determines how fast it decays.
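The registration step can be sketched as a simple record. This is illustrative only: the field names below are assumptions for the sketch, not Confidence Monitor's actual schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record shape -- field names are assumptions, not the
# tool's real data model.
@dataclass
class Assessment:
    claim: str
    confidence: float     # 0-100% at registration time
    domain: str           # determines the decay half-life
    half_life_days: int   # days until confidence halves without review
    last_verified: date

a = Assessment(
    claim="Southwest border traffic pattern analysis",
    confidence=88.0,
    domain="intelligence/tactical",
    half_life_days=90,
    last_verified=date(2025, 6, 12),
)
```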

2

Confidence decays automatically over time

Intelligence assessments lose half their confidence every 90 days without review; policy knowledge has a half-life closer to 180 days. The system tracks this automatically.
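The half-life behavior described above reduces to standard exponential decay. A minimal sketch, assuming confidence is stored as a 0-100% score and decays continuously between reviews:

```python
def decayed_confidence(initial: float, days_since_review: float,
                       half_life_days: float) -> float:
    """Confidence halves every half_life_days without a review."""
    return initial * 0.5 ** (days_since_review / half_life_days)

# An intelligence assessment registered at 88% (90-day half-life):
decayed_confidence(88.0, 90, 90)    # 44.0 after one half-life
decayed_confidence(88.0, 180, 90)   # 22.0 after two
```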

3

Alerts flag what needs re-evaluation

Before anyone makes a decision based on stale information, the system flags which assessments have decayed below your threshold.
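At its core, the alert pass is a threshold filter over current (decayed) scores. A sketch, assuming assessments arrive as name-to-confidence pairs rather than the tool's real data model:

```python
def below_threshold(assessments: dict[str, float],
                    threshold: float) -> list[str]:
    """Names of assessments whose current confidence fell below threshold."""
    return sorted(name for name, conf in assessments.items()
                  if conf < threshold)

below_threshold({"border traffic": 31.0,
                 "ransomware threat": 44.0,
                 "grid capacity": 76.0}, threshold=50.0)
# -> ['border traffic', 'ransomware threat']
```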

4

Record outcomes as predictions resolve

When an assessment proves correct or incorrect, log the result with cm_outcome. The system calculates a Brier score showing how well-calibrated your team's confidence is — and which domains need recalibration.
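The Brier score is a standard calibration metric: the mean squared gap between stated confidence (as a probability) and the binary outcome. Lower is better — 0.0 is perfect, and always guessing 50% scores 0.25. A sketch of the calculation:

```python
def brier_score(resolved: list[tuple[float, bool]]) -> float:
    """resolved: (stated probability, whether the prediction came true)."""
    return sum((p - (1.0 if hit else 0.0)) ** 2
               for p, hit in resolved) / len(resolved)

# Three resolved predictions: two hits at high confidence, one miss.
brier_score([(0.90, True), (0.80, True), (0.70, False)])  # ~0.18
```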

Sample output

This is what Confidence Monitor actually produces. Real format, sample data.

cm_alerts --threshold 50
3 assessments below 50% confidence threshold

[CRITICAL] "Southwest border traffic pattern analysis"
  Current confidence: 31% (was 88% on 2025-06-12)
  Decay rate: intelligence/tactical (half-life: 90 days)
  Last verified: 247 days ago
  Action: 2 active decisions reference this assessment
  >> Used in: Border Resource Allocation (Decision #1847)
  >> Used in: Staffing Model Q2 2026 (Decision #1903)

[WARNING] "Ransomware threat to municipal water systems"
  Current confidence: 44% (was 92% on 2025-09-01)
  Decay rate: cybersecurity/threat (half-life: 120 days)
  Last verified: 183 days ago
  Action: 1 active decision references this assessment

[WARNING] "Fentanyl supply chain origin analysis"
  Current confidence: 49% (was 85% on 2025-08-15)
  Decay rate: intelligence/strategic (half-life: 180 days)
  Last verified: 199 days ago
  Action: No active decisions — archive candidate

Team Brier score: 0.11 (well-calibrated; lower is better)
  Overconfident on: cybersecurity assessments (+12%)
  Underconfident on: policy stability estimates (-8%)

Who it's for

Intelligence analysts, policy researchers, emergency management teams — anyone who makes decisions based on assessments that may have changed since they were written.

What this tool doesn't do

  • Does not generate assessments — it tracks the reliability of assessments your analysts produce
  • Decay rates are estimates based on domain research, not guarantees — your actual decay may vary
  • Requires discipline to record outcomes — calibration only works if you close the feedback loop
  • Not a real-time monitoring system — it tracks assessment reliability, not live data feeds

How to get started

1

Free demo

90 minutes. Your real data. I show you what Confidence Monitor finds that you didn't know you were missing.

2

Bounded trial

4-6 weeks on one specific problem. Fixed scope, fixed fee. You see results before you commit to anything larger.

3

Annual license

Deploy on your infrastructure. Your data stays yours. Cancel anytime — I earn renewal through value, not lock-in.

Works well with

  • Knowledge Capture
  • Structured Reasoning
  • Policy Analyzer

See Confidence Monitor in action

Bring a real problem. I'll analyze it live — and tell you honestly whether this tool solves it.

Request a Demo