Daniel G. Wilson / Founder / Legion Health

Austin, Texas

I build AI systems that survive contact with reality.

I work on AI control planes, agent UX, and operational software for high-stakes environments. Most of my current thinking comes down to one question: how do you make AI reliable enough to ship when the world is messy, regulated, and full of edge cases?

Legion Health / YC S21 · Princeton ORFE '18 · Former Microsoft PM · Stack Overflow 15K+

Model. Harness. Environment.

Reliable AI is not a model-selection problem. The model matters, but production truth is usually decided by the harness around it and the environment it has to survive in.

01

Approvals are truth

Completion is not enough. The system has to make approval, evidence, fallback paths, and state transitions explicit.

02

Verifiability defines throughput

The bottleneck is not code generation. The bottleneck is whether you can tell what is correct, what failed, and what should happen next.

03

Agent UX is control-plane design

Tool surfaces, safe defaults, ceilings, and operator context all change what an agent can reliably do.

04

Staged autonomy beats narrative theater

I prefer constrained v1 systems with replayable evidence and explicit gates over big replacement stories that collapse in production.
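As a concrete illustration of "explicit gates and replayable evidence," here is a minimal sketch of an approval-gated state machine. This is illustrative only (hypothetical names, not Legion Health's actual code): every transition must be on an allowlist, and every transition records why it happened.

```python
from enum import Enum, auto

class TaskState(Enum):
    PROPOSED = auto()
    APPROVED = auto()
    EXECUTED = auto()
    FALLBACK = auto()

# Legal transitions are explicit; anything else is rejected.
TRANSITIONS = {
    TaskState.PROPOSED: {TaskState.APPROVED, TaskState.FALLBACK},
    TaskState.APPROVED: {TaskState.EXECUTED, TaskState.FALLBACK},
}

class Task:
    def __init__(self, description):
        self.description = description
        self.state = TaskState.PROPOSED
        # Replayable evidence: (from_state, to_state, reason) for every move.
        self.evidence = []

    def transition(self, new_state, reason):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.evidence.append((self.state, new_state, reason))
        self.state = new_state

task = Task("draft follow-up message")
task.transition(TaskState.APPROVED, "operator sign-off")
task.transition(TaskState.EXECUTED, "sent via outbound queue")
```

The point is the shape, not the code: completion alone never moves state; an approval does, and the trail of reasons is what makes the system auditable after the fact.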

Founder-led software, operator-grade systems, and long-tail product taste.

The common thread is not a sector. It is software that has to do real work in the world: auditable systems, sharp interfaces, and products clear enough that people can trust them.

[Image: Legion Health interface preview]
Current system: Legion Health's operational software for modern psychiatric care.

Working notes on reliable AI.

The writing side of the site is becoming a calmer place for doctrine, field notes, and sharper explanations of what actually makes AI systems work.

AI control planes for messy environments

How approvals, queues, reason codes, and operator surfaces become part of the product.

Model, harness, environment

Why the wrapper around the model usually determines whether an AI system survives contact with production.

AI-readable and AI-mutable systems

What it takes to let agents operate on real workflows without losing traceability, safety, or human control.

Publishing in progress

The first public essays will focus on model / harness / environment, verifiability, staged autonomy, and agent UX.

Visit /writing

If you are building serious AI systems, thinking about agent UX, or comparing notes on software that has to hold up in the real world, say hello.