Approvals are truth
Completion is not enough. The system has to make approval, evidence, fallback paths, and state transitions explicit.
Daniel G. Wilson / Founder / Legion Health
Austin, Texas
I work on AI control planes, agent UX, and operational software for high-stakes environments. Most of my current thinking comes down to one question: how do you make AI reliable enough to ship when the world is messy, regulated, and full of edge cases?
01 / Thesis
Reliable AI is not a model-selection problem. The model matters, but production truth is usually decided by the harness around it and the environment it has to survive in.
The bottleneck is not code generation. The bottleneck is whether you can tell what is correct, what failed, and what should happen next.
Tool surfaces, safe defaults, ceilings, and operator context all change what an agent can reliably do.
I prefer constrained v1 systems with replayable evidence and explicit gates over big replacement stories that collapse in production.
02 / Selected Work
The common thread is not a sector. It is software that has to do real work in the world: auditable systems, sharp interfaces, and products clear enough that people can trust them.

Co-founder
Building AI-native psychiatric operations with staged autonomy, approval rails, and regulator-grade evidence.
2018–2020
Product manager
Learned what coordination debt looks like inside a giant organization and why explicit systems beat alignment theater.
Before and alongside
Builder
Launches, iOS apps, motion work, and creative tools that sharpened product taste and shipping instincts.
03 / Writing
The writing side of the site is becoming a calmer place for doctrine, field notes, and sharper explanations of what actually makes AI systems work.
How approvals, queues, reason codes, and operator surfaces become part of the product.
Why the wrapper around the model usually determines whether an AI system survives contact with production.
What it takes to let agents operate on real workflows without losing traceability, safety, or human control.
The first public essays will focus on model / harness / environment, verifiability, staged autonomy, and agent UX.
Visit /writing
04 / Proof
Enough proof to establish credibility without turning the homepage into a profile dump.