Some challenges are novel, will lock in your architecture for years, and can burn millions on the wrong AI bet. For those, I absorb the domain, find the real constraint, decide what belongs in AI and what doesn't, and build a working proof of concept. I use frontier models, coding agents, rapid research, and principal-level technical judgment to get to truth fast, before you hire a team, fund the roadmap, or commit the budget.
With the right AI-native build stack, I can compress research, explore more architectures, generate and refactor code, stand up agents and workflows, build evaluation harnesses, and test ideas against reality in days or weeks instead of quarters.
The trap is assuming speed replaces judgment. It doesn't. Cheap AI can generate plausible code and plausible nonsense at industrial scale. My job is to use the leverage without inheriting the slop: choose the right models, constrain the system, design guardrails, secure the data, test the failure modes, and build something that survives contact with reality.
Three ways to engage. All start the same way: I learn the domain fast, find the real constraint, decide what should be AI, software, workflow, or human judgment, and build toward proof.
1. 1–2 weeks: you have a foggy opportunity or an expensive technical question. I deliver a working proof and an architecture you can act on.
2. 1–2 days/week: ongoing technical leadership for teams building AI-native products, internal agents, or novel systems.
3. For investors, acquirers, and operators deciding whether an AI story is real, buildable, defensible, and worth the money.
Five steps. No ticket queue. One accountable brain using an AI-native build stack from first question to working proof.
I don't assemble templates. I invent systems.
The AI era made the invention loop faster, not easier.
For 45+ years, people have brought me hard technical problems and asked me to invent the answer. What changed is leverage.
Today I work with an AI-native build stack: frontier models, coding agents, research agents, automated evaluation, and rapid prototyping loops. That lets me absorb new domains faster, explore more design space, and build working proofs in days or weeks that used to take a small team and a quarter.
But AI is not the pitch. Judgment is. Most companies don't need more AI enthusiasm. They need someone who can decide what the system should be: where AI belongs, where it doesn't, how to secure it, how to test it, and how to make it survive real users, real data, and real constraints.
I've been doing that for decades. At Applied Minds, a military planning challenge became Flying Logic. At Blockchain Commons, hard problems in coordination, identity, and cryptography became Multipart URs, LifeHash, and Gordian Envelope. Same pattern today: hard problem in, invention out.
I don't sell AI theater. I build the proof.
I take a small number of clients because this work is judgment-heavy, architecture-heavy, and close to the core. If you want principal-level invention, AI-native execution, and one accountable brain on the problem, you're in the right place.