Vibe Econometrics and the Analysis Contract
Why AI-powered analysis hides bad assumptions better than humans do
AI tools that run statistical analyses can make flawed reasoning look polished and credible even when the underlying assumptions are wrong. The problem is not that AI invents new mistakes (economists have always made them) but that it packages weak analysis so convincingly, and distributes it so quickly, that spotting the errors becomes much harder. The author proposes a pre-commitment framework that forces researchers to document their methods and define what would prove them wrong before running the analysis, not after.
As AI tools become standard in policy analysis, business forecasting, and academic research, faulty causal claims spread with unprecedented speed and polish. When a formatted spreadsheet or a polished chart is your only signal of validity, and recognizing problems requires expertise the AI workflow sidesteps, bad analysis can drive real decisions, from business strategy to public policy, before anyone spots the mistake. The proposed Analysis Contract creates an audit trail that forces rigor back into the process.
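To make the idea concrete, a pre-commitment record like the Analysis Contract could be sketched as a small data structure that is filled in and fingerprinted before any results are seen. Everything below (the field names, the hashing step, the example study) is a hypothetical illustration under assumed requirements, not the author's actual template:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AnalysisContract:
    """Pre-commitment record, completed BEFORE the analysis is run."""
    question: str                       # the causal question being asked
    method: str                         # estimator / identification strategy
    key_assumptions: list               # assumptions the estimate relies on
    falsification_criteria: list        # results that would count as failure
    committed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Hash of the committed terms. Storing this hash up front means
        the contract cannot be quietly rewritten after the results arrive,
        which is what gives the audit trail its teeth."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical example: committing to a study design before estimation.
contract = AnalysisContract(
    question="Does the minimum-wage change affect teen employment?",
    method="difference-in-differences on a county panel",
    key_assumptions=["parallel pre-trends between treated and control counties"],
    falsification_criteria=[
        "a significant 'effect' appears in pre-treatment placebo years",
        "the estimate flips sign under the alternative control group",
    ],
)
print(contract.fingerprint())  # record this hash before running the analysis
```

The design choice doing the work here is the order of operations: the falsification criteria are hashed together with the method, so weakening them after an inconvenient result would change the fingerprint and be visible to any reviewer holding the original hash.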