I’m Adam Hyland. I work on the parts of systems that decide what’s possible while pretending not to exist: numerical standards, learned model behavior, and operating‑system enforcement. Those layers can interact in illegible ways: today’s agentic AI is routinely harnessed by a “sandbox” that was last officially supported four years before OpenAI was founded. I try to make those invisible constraints legible, not by telling a cleaner story, but by grappling with the mess as it actually behaves.

If you’re here because “vibe coding” isn’t cutting it for your problem and turning agentic AI into a factory floor just seems to waste money on compute, you’re in the right place. Tell me: how does it feel to send off two agents for an answer and get back responses that are both right for the wrong reasons? Whatever the compute saved you, the unease that result ought to provoke costs more. If your team wants agentic AI to work inside real, confounding systems, where correctness isn’t a single oracle and progress depends on clean interfaces between people, tools, and ground truth, you can hire me.

I don’t manage agents; I ferret out conditions for success. I build contracts, harnesses, receipts, and interfaces that let autonomy happen without silent drift or pretend control. You can manage agentic AI with stricture if you have the resources and patience of a frontier lab. For everyone else, total operational control is a sign you should write your own code or find harder problems. If you’re moving serious work grounded in the infrastructure of the 20th century through the 21st, I can build and ship the bridge—especially if it’s supposed to be impossible.

Below are some things I am, did, or do. Reach out to adam@adampunk.com if you want to know more.


Software:

I build and maintain the following with/via/because/despite agentic coding. A recurring theme: agents can build reliable structure so long as you embody ruthless nonchalance.

Interpretability and robustness of large language models:

With Ruoxi Shang, I looked at what it means to treat LLMs (GPT-4, Llama, etc.) as trustworthy interfaces to computing. Interpretability is about what counts as an explanation; robustness is about how explanations and behavior degrade under manipulation. One product of that work was Interpreting Robustness, a Spring 2023 course at the University of Washington connecting interpretability and robustness across a literature that had treated them separately. I don’t like to crow about my own work, but students in this class saw 2026 in advance.

Computer arithmetic:

I’m interested in how arithmetic becomes infrastructure: negotiated, standardized, embedded into libraries and interfaces, and then treated as “just how computers work.” I like teaching the point where the math ends and the coordination begins.
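To make that concrete, here is a minimal sketch in Python. Everything it prints is ordinary IEEE 754 binary64 behavior, nothing exotic, and that uniformity is exactly the negotiated infrastructure I mean:

    import math

    # Each result below is fixed by IEEE 754, not by real-number math. The
    # agreement is the infrastructure: the "wrong" answers are the same
    # wrong answers on every conforming machine.
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: float addition is not associative
    print(f"{0.1:.20f}")                           # 0.10000000000000000555...: the nearest binary64 to 0.1
    print(sum([0.1] * 10) == 1.0)                  # False: naive summation accumulates rounding error
    print(math.fsum([0.1] * 10) == 1.0)            # True: fsum returns the correctly rounded sum

The math ends at “these sums differ from the reals”; the coordination begins at “they differ identically everywhere.”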

AI image generation:

I like finding the promise, premise, and limits of machine image generation by taking the systems literally: probing them with aberrant and adversarial prompts until their assumptions become visible. “Glitches” are often just the model telling you what it thinks you asked for.
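A minimal sketch of that kind of probe, assuming the Hugging Face diffusers library and a public Stable Diffusion checkpoint; the prompt and seed count are illustrative, not a fixed protocol:

    import torch
    from diffusers import StableDiffusionPipeline

    # Any text-to-image checkpoint works here; this one is a common public baseline.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A prompt with no literal referent. Whatever comes back is the model's
    # reading of the request, and that reading is the artifact of interest.
    prompt = "a photograph of the sound of the color blue"

    # Fixed seeds make the probe repeatable: variation across seeds separates
    # stable assumptions from sampling noise.
    for seed in range(4):
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
        image.save(f"probe_seed{seed}.png")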


Teaching:

I teach Information Visualization, which is tool-laden in the way modern life is tool-laden. A core premise of my teaching is that students aren’t “bad at tools” so much as they’re encountering concentrated history: interfaces that embody thousands of person-hours, compromises, and hidden assumptions. Good visualization—and a lot of what we call learning—happens when we struggle with that together without pretending it should be effortless.

Courses taught:


Peer reviewed publications:


Other:


*: Don’t believe me(?|,) just watch.