Permitting, Goodhart's Law, and OPEF

February 2026

Permitting has a reputation problem. It is widely seen as slow, expensive, and maddeningly inconsistent. The fixes people argue for are familiar and well-intentioned: clearer rules, firmer timelines, better guidance up front, fewer rounds of review, stricter page limits, more standardized templates, more deference to prior approvals. Some argue for tighter procedural discipline; others for more discretion. But underneath these disagreements sits a shared assumption: permitting is slow because the process is inefficient, and if we tune the process (make it clearer, leaner, more predictable), the outcomes will improve.

Then you watch what actually happens.

A project team spends months producing a document that is technically "complete," yet no one involved feels more certain about the environmental reality it describes. An agency issues a decision that is procedurally defensible, yet still vulnerable in court because the reasoning chain is hard to reconstruct. Comment periods generate thousands of pages of responses that read like a ritual: everyone went through the motions, and almost nothing new was learned. The system looks busy. It does not look wiser.

That's the fracture.

If smart, well-meaning people are generating mountains of effort that don't reliably produce understanding, then the bottleneck isn't processing speed. Something deeper is selecting for the wrong behavior.

The missing lens is Goodhart's Law: when a measure becomes a target, it stops being a good measure.

Permitting runs on proxies. Time-to-permit stands in for efficiency. Page count stands in for rigor. Checklist completion stands in for compliance. Number of comments addressed stands in for public engagement. These measures weren't created because anyone loved bureaucracy. They were created because environmental review is hard to evaluate directly. Ecosystems are complex. Impacts are uncertain. Rules are conditional. People needed signals: handholds on a slippery object.

But once those signals became targets (used to grade performance, allocate resources, manage liability, win contracts), the system adapted. Not consciously. Systemically.

If speed is rewarded, uncertainty becomes something to hide. If completeness is rewarded, verbosity becomes safety. If defensibility is rewarded, the goal shifts from understanding to covering the flank. Each proxy, once targeted, begins to drift away from what it originally represented. You get more of the measurable thing and less of the meaningful thing.

This is why "streamlining" so often disappoints. It makes the proxies easier to hit.

Make the process faster and you don't automatically get better decisions; you often get faster production of performative compliance. Standardize the template and you don't automatically get clearer reasoning; you often get standardized ambiguity. Add automation and you don't automatically get legitimacy; you often get high-throughput paperwork, now with a thin gloss of modernity.

A smart skeptic might say: fine, then pick better metrics. Measure the right things. Measure environmental outcomes. Measure equity. Measure ecological resilience. But that escape hatch closes quickly. Outcomes are delayed, noisy, and confounded by factors far outside any single permit decision. If you tie institutions tightly to outcome metrics they can't control, you get another kind of Goodhart: risk aversion and blame-shifting. People optimize for not being responsible rather than being correct.

So the problem isn't that permitting has the wrong dashboard. The problem is that permitting is trying to govern a knowledge process using output metrics.

Environmental review is not a workflow problem. It is an epistemic problem: a system for turning uncertain facts, competing claims, and evolving rules into decisions that can withstand scrutiny over time.

When you frame it that way, the current failure mode becomes legible. The permitting system stores artifacts (documents, PDFs, comment letters) but does not reliably store the reasoning that connects evidence to conclusions. It has memory, but it is the wrong kind of memory. It remembers that a box was checked. It often cannot replay why it was checked.

And if the system cannot replay its own reasoning, then legitimacy becomes expensive. Every new cycle requires re-deriving context from fragments. Every disagreement turns into a battle of narratives rather than a comparison of claims and evidence. Every regulatory change forces a painful archaeology: what changed, why did we decide this last time, and which assumptions no longer hold?

Under those conditions, a specific outcome is inevitable. Institutions will prefer symbols that travel (standard forms, thick reports, canonical templates) because symbols are easier to transmit than thought. The system will select for paperwork because paperwork is portable, while reasoning, when not captured, dies with the meeting.

That's the constraint. That's the thesis. Not "we should do better," but: as long as permitting is a document system rather than a reasoning system, the process will optimize for appearances, because appearances are what survive.

This is where OPEF enters, not as a promise of speed, but as a change in what gets recorded.

OPEF treats permitting as a system of record for environmental truth: not truth as certainty, but truth as a traceable chain of claims, evidence, rules, and acknowledged uncertainty. The move is simple to state and hard to fake: instead of rewarding outputs, reward inspectability.

In practical terms, inspectability means the system can answer questions that the current process routinely struggles to answer without heroic effort: What exactly changed between yesterday's regulation and today's? Which parts of this decision depend on which assumptions? If a court challenges one link in the chain, what else collapses? If a project portfolio is rerun under a new rule, where do the outcomes diverge, and why?

Those aren't workflow questions. They are reasoning questions. And they are the questions that matter when regulations change, when portfolios are large, when agencies rotate staff, when public trust is fragile, and when litigation is real.
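To make that concrete, here is a minimal sketch of what an inspectable reasoning record could look like. Nothing below is OPEF's actual design; the names (Claim, ReasoningGraph, dependents_of) and the example claims are invented to illustrate what "a traceable chain of claims" and "what else collapses?" would require mechanically.

```python
# Hypothetical sketch only: OPEF's real schema is not specified in this essay.
# The idea: store claims with explicit dependencies, so "what else collapses?"
# becomes a graph query rather than an archaeology project.
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    id: str
    statement: str
    evidence: tuple[str, ...] = ()     # links to source documents
    depends_on: tuple[str, ...] = ()   # claims or assumptions this rests on
    uncertainty: str = "unstated"      # acknowledged, not hidden

class ReasoningGraph:
    def __init__(self, claims):
        self.claims = {c.id: c for c in claims}

    def dependents_of(self, claim_id):
        """Everything downstream of a challenged link in the chain."""
        fallen, frontier = set(), {claim_id}
        while frontier:
            frontier = {c.id for c in self.claims.values()
                        if frontier & set(c.depends_on) and c.id not in fallen}
            fallen |= frontier
        return fallen

# Challenge one baseline assumption and trace what falls with it.
graph = ReasoningGraph([
    Claim("A1", "2023 streamflow data is a representative baseline",
          uncertainty="moderate"),
    Claim("C1", "The project will not degrade downstream habitat",
          evidence=("hydrology-report.pdf",), depends_on=("A1",)),
    Claim("C2", "Permit condition 7 is sufficient mitigation",
          depends_on=("C1",)),
])
print(graph.dependents_of("A1"))   # {'C1', 'C2'}: the whole downstream chain
```

Note what the query buys you: when one link is challenged, the blast radius is computed, not debated.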

Goodhart's Law still applies. It always will. The trick is not to imagine a world without gaming, but to design so that gaming is harder than honesty.

If the easiest way to "win" in a system is to produce more pages, people will produce more pages. If the easiest way to "win" is to hit a timeline, people will compress uncertainty. But if the easiest way to "win" is to make your reasoning legible (assumptions explicit, evidence linked, logic testable), then the system begins to select for a different behavior. Not because everyone suddenly becomes virtuous, but because the incentives change shape.

In a reasoning-first system, hiding uncertainty is a liability. Overstating certainty is a fragility. Hand-waving becomes visible. The safest path becomes the most honest one: show your work.

That is the deeper PR claim hiding inside a technical shift. OPEF isn't "software that speeds up permitting." It's infrastructure for legitimacy under complexity. When a decision can be inspected, replayed, and improved without erasing its history, trust becomes less about deference and more about comprehension.
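What "improved without erasing its history" might mean mechanically: a minimal sketch, assuming an append-only log in which revisions supersede earlier entries rather than overwriting them. The field names and the revise helper are hypothetical, not OPEF's API.

```python
# Hypothetical sketch: revisions are appended and supersede earlier entries;
# nothing is overwritten, so the chain of reasoning stays inspectable.
from datetime import date

history = []   # append-only: the past is never edited in place

def revise(claim_id, statement, reason, supersedes=None):
    history.append({"claim": claim_id, "statement": statement,
                    "reason": reason, "supersedes": supersedes,
                    "on": date.today().isoformat()})
    return len(history) - 1    # index doubles as a stable revision id

r0 = revise("C1", "No significant impact on wetland W3",
            "2023 field survey")
r1 = revise("C1", "Moderate impact on wetland W3; mitigation required",
            "2025 resurvey contradicted the 2023 baseline", supersedes=r0)

# The current view is the latest revision, but the superseded reasoning
# survives and can be inspected rather than reconstructed from memory.
current = history[r1]
print(current["statement"])
print("superseded because:", current["reason"])
```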

And once you see the permitting system through Goodhart's Law, you start noticing the pattern elsewhere. Any institution that governs reality through proxies eventually drifts. It starts measuring itself instead of measuring the world. It produces reports instead of understanding. It rewards performance instead of learning.

The uncomfortable implication is this: if we keep modernizing permitting by optimizing the same proxies (faster PDFs, cleaner templates, higher-throughput checklists), we are not solving the problem. We are mechanizing the illusion.

The alternative is not moralism or blame. It is a different kind of memory.

When a system can store its reasoning, it can adapt without forgetting. It can change regulations without restarting from zero. It can answer "what changed and why?" without mythology. It can move faster without losing contact with reality, because the speed comes from clarity, not compression.
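As a toy illustration of "what changed and why?": assume decisions are pure functions of explicit, versioned rules. Then replaying a project under a new rule version and diffing the conclusions attributes every divergence to a specific change. The rule names and thresholds below are invented for the example.

```python
# Toy illustration: if a decision is a pure function of explicit rules,
# "what changed and why?" becomes a replay-and-diff, not an archaeology dig.

def decide(project, rules):
    return {
        "noise_ok":   project["noise_db"]  <= rules["max_noise_db"],
        "setback_ok": project["setback_m"] >= rules["min_setback_m"],
    }

def diff_decisions(project, old_rules, new_rules):
    before, after = decide(project, old_rules), decide(project, new_rules)
    return {k: (before[k], after[k]) for k in before if before[k] != after[k]}

project  = {"noise_db": 62, "setback_m": 40}
rules_v1 = {"max_noise_db": 65, "min_setback_m": 30}
rules_v2 = {"max_noise_db": 60, "min_setback_m": 30}   # only the noise limit moved

# The divergence is traceable to one rule change, with no mythology required.
print(diff_decisions(project, rules_v1, rules_v2))     # {'noise_ok': (True, False)}
```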

Goodhart's Law tells you what goes wrong when symbols become targets. OPEF is a bet that we can rebuild the interface between symbols and reality by making reasoning the thing we record, the thing we can test, and the thing we can trust.

Once you accept that permitting is a knowledge system, not a workflow, a lot of the old debate dissolves. The fight stops being "faster vs stricter," "development vs environment," "people vs process." It becomes a question of whether we're willing to build institutions that can remember how they think.

And if we do, the strangest part is what becomes impossible to unsee: many of our most stubborn failures aren't failures of effort. They're failures of what we chose to measure and what we failed to preserve.