The Anomalousness Index
Every Trump PURSUE UFO file on this site has a 0-to-100 score. This page explains what the score means, how it's computed, and what it deliberately is not.
What this score is not. It is not a "probability of aliens" or a "% chance of extraterrestrial origin." No such number can honestly be computed from these files; any tracker or paper publishing one is selling you something. We refuse to publish one.
What this score is. A measure of the evidentiary weight behind the claim that an encounter remains unexplained after conventional analysis, and a fast-read heuristic useful for triage. It has six weighted components whose weights sum to exactly 1.00; the math is fully open at /data/scoring-rubric.json, so anyone can audit or recompute every score on this site.
The six components and their weights
| Component | Weight | What it captures |
|---|---|---|
| Sensor quality | 0.25 | Multi-sensor military > single-sensor military > civilian aviation > photographic > eyewitness only > sketch only |
| Witness credibility | 0.20 | Astronaut > trained aviator / federal agent > military personnel > law enforcement > credentialed civilian > anonymous civilian |
| Corroboration | 0.20 | Multi-witness multi-instrument > multi-witness single-instrument > single-witness with instrument > multi-witness without instrument > single witness only |
| Kinematic anomaly | 0.15 | Physically impossible for known craft > edge of envelope > unusual but explainable > consistent with known craft |
| Mundane-explanation availability | 0.10 | No plausible mundane > weak mundane candidate > plausible mundane > strong mundane candidate > resolved mundane |
| Official disposition | 0.10 | Open after review > unresolved no review > partial resolution > resolved conventional |
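The weight column above can be sanity-checked in a few lines. The dictionary keys below are illustrative, not canonical; the authoritative identifiers live in /data/scoring-rubric.json:

```python
# Component weights as listed in the table above.
# Key names are illustrative; the canonical identifiers are defined
# in /data/scoring-rubric.json.
WEIGHTS = {
    "sensor_quality": 0.25,
    "witness_credibility": 0.20,
    "corroboration": 0.20,
    "kinematic_anomaly": 0.15,
    "mundane_explanation": 0.10,
    "official_disposition": 0.10,
}

# The six weights must sum to exactly 1.00 (within float tolerance).
assert abs(sum(WEIGHTS.values()) - 1.00) < 1e-9
```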
How the score is computed
For each file, an AI model reads the publicly reported description and selects one enumerated value per component. Each value maps to a 0-100 integer (defined in the rubric JSON). The final score is:
score = sum(component_value × component_weight)
Result is clamped to the 0-100 range. We round to the nearest integer for display.
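The formula and clamping above fit in a short function. This is a sketch, not the site's actual code; the function and key names are ours, and the canonical weights live in the rubric JSON:

```python
def compute_score(values, weights):
    """Weighted sum of 0-100 component values, clamped to [0, 100],
    then rounded to the nearest integer for display."""
    raw = sum(values[name] * weights[name] for name in weights)
    return round(min(100.0, max(0.0, raw)))

# Illustrative key names; the canonical ones are in scoring-rubric.json.
weights = {"sensor_quality": 0.25, "witness_credibility": 0.20,
           "corroboration": 0.20, "kinematic_anomaly": 0.15,
           "mundane_explanation": 0.10, "official_disposition": 0.10}

# Since the weights sum to 1.00, all-100 components yield 100
# and all-0 components yield 0.
print(compute_score({k: 100 for k in weights}, weights))  # 100
print(compute_score({k: 0 for k in weights}, weights))    # 0
```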
Worked example: Greece, January 2024 (live score 66)
- Sensor quality: `single_sensor_military` → 80 × 0.25 = 20.0
- Witness credibility: `military_personnel` → 80 × 0.20 = 16.0
- Corroboration: `multi_witness_single_instrument` → 80 × 0.20 = 16.0
- Kinematic anomaly: `physically_impossible_for_known_craft` → 100 × 0.15 = 15.0
- Mundane explanation: `no_plausible_mundane` → 100 × 0.10 = 10.0
- Official disposition: `open_after_review` → 90 × 0.10 = 9.0

Sum: 86. (Note: this preset comes from the rubric file's `presets` field and is shown here as a worked example. The actual live score for this file is 66 because the operational manifest scores the corroboration and mundane-explanation components more conservatively.)
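The preset arithmetic above can be replayed directly; the value/weight pairs are copied from the worked example:

```python
# Value × weight pairs from the rubric-preset worked example above.
components = [
    (80, 0.25),   # sensor quality: single_sensor_military
    (80, 0.20),   # witness credibility: military_personnel
    (80, 0.20),   # corroboration: multi_witness_single_instrument
    (100, 0.15),  # kinematic anomaly: physically_impossible_for_known_craft
    (100, 0.10),  # mundane explanation: no_plausible_mundane
    (90, 0.10),   # official disposition: open_after_review
]
total = sum(value * weight for value, weight in components)
print(round(total))  # 86 -- the preset sum; the operational manifest scores this file 66
```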
How we use AI (and how we don't)
AI does three things on this site, all human-supervised:
- Rubric application. Claude (Anthropic) reads each file's publicly reported description and selects which rubric value matches each component. The rubric and weights are human-designed.
- Video transcription. OpenAI Whisper generates the .vtt subtitle tracks and full transcripts on every video page.
- PDF text extraction. pdfplumber (open-source, not AI) pulls searchable text out of every PDF for the on-site search index.
We do not use AI to decide whether files prove aliens exist. No model can honestly do that and we refuse to publish such a number. The score is a triage heuristic, not a conclusion.
How to recompute every score yourself
- Download the rubric: https://pursueufotracker.com/data/scoring-rubric.json
- Download the manifest: https://pursueufotracker.com/generated/api/files.json
- For each file, locate the `score.components` object. Each key is a component name; each value is the selected choice (e.g. `"single_sensor_military"`).
- Look up the choice's integer value in the rubric, multiply by the component's weight, and sum across all six components.
- Compare to the published `score.value`. They should match within rounding (±1).
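The steps above can be sketched as a short audit script. The URLs and the `score.components` / `score.value` paths come from this page; the inner JSON layout (a `components` map with `weight` and `values` keys, a top-level `files` array in the manifest) is an assumption, so check the real JSON and adjust the key names if they differ:

```python
import json
from urllib.request import urlopen

RUBRIC_URL = "https://pursueufotracker.com/data/scoring-rubric.json"
MANIFEST_URL = "https://pursueufotracker.com/generated/api/files.json"

def recompute(file_entry, rubric):
    """Recompute one file's score from its score.components choices.

    Assumes rubric["components"][name] holds a "weight" float and a
    "values" mapping from choice string to 0-100 integer; these key
    names are guesses based on the steps above, not the published schema.
    """
    total = 0.0
    for name, choice in file_entry["score"]["components"].items():
        component = rubric["components"][name]
        total += component["values"][choice] * component["weight"]
    return round(min(100.0, max(0.0, total)))

def audit():
    """Fetch both files and flag any score outside the ±1 rounding tolerance."""
    rubric = json.load(urlopen(RUBRIC_URL))
    manifest = json.load(urlopen(MANIFEST_URL))
    for entry in manifest["files"]:
        published = entry["score"]["value"]
        recomputed = recompute(entry, rubric)
        if abs(recomputed - published) > 1:
            print(f"discrepancy: recomputed {recomputed}, published {published}")
```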
If you find a discrepancy, the rubric is the source of truth, not the displayed score. File an issue at github.com/FongShuiLabs/pursueufotracker.
Known limitations
- The score reflects what the released description says, not the underlying classified raw data. If AARO has additional sensor data they haven't released, the actual evidentiary weight could be higher or lower. We can only score the public record.
- The rubric is editorial, not Pentagon-issued. AARO does not publish a public scoring rubric. This rubric is our best attempt to formalize the consensus heuristic that researchers and journalists use, weighted by what we judge predictive of "unresolved after review."
- Tied scores are common. Many files share the same component values (especially the 80+ DoD MISREPs) and therefore the same score. Use the score for triage, not for fine-grained ranking.
- Stub entries (files added by war.gov after the original May 8 release that we have not yet enriched with full descriptions) are scored using heuristic defaults and marked with `pending_verification: true` in the manifest.
For further reading on this site
- The honest verdict - what these files do and do not prove
- Top 10 most anomalous - the highest-scoring files
- War.gov revision log - what changed on May 11, 2026
- FAQ - methodology questions in Q&A form
- Glossary - PURSUE, AARO, MISREP, SWIR, etc. defined