I'm Adam Hyland, a PhD student working in Charlotte Lee's lab in the department of Human Centered Design and Engineering at the University of Washington. I am interested in how communities of practice (often but not always engineering communities) coordinate around standards, standards processes, and things which look a bit like standards but aren’t. I care a lot about making the invisible parts of systems all around us easier to understand. Not just so we become better informed (which is cool) but so we are equipped to play with, break, and transform those systems (which is way cooler).
Areas of work:
Computer Arithmetic: My interest in computer arithmetic is in demonstrating and teaching the dramatic impact and importance of this invisible, negotiated, coordinated part of our world. Arithmetic on computers has moved from a central part of programmer training to being supplied by a vast array of libraries and standardized interfaces. Showing how this embedding works and has changed us as users, programmers, and designers is my goal.
- The "Fast" Inverse (Reciprocal) Square Root: In 2009 I wrote a Wikipedia article about a famously terse bit of code to help people understand it. Now I maintain 0x5f37642f.com, which helps people understand how the spread and use of a strange access to the logarithm in floating point arithmetic.
- Coordinating Arithmetic: A Sloan Foundation funded (Sloan Grant number G-2023-21011) project to document the history of how we standardized computer arithmetic with IEEE 754 through oral history interviews and archival research.
- I am the co-secretary for the upcoming IEEE 754-2029 working group, which sets the standard for binary and decimal floating point arithmetic, excluding the formats covered by P3109, the machine learning floating point standards group.
AI Image Generation: I work to help designers and artists understand the promise and limitations of machine image generation, principally by exploring and experimenting with aberrant and adversarial image prompts. Doing so helps people care about and inspect how these systems function, which is increasingly critical as more and more of our visual culture is generated by them.
- Generative AI Glitch Art: Looking for meaning in all the wrong places, presented at VCU's Workshop on the Workshop, March 14, 2023.
- Grappling with widespread machine image generation: Slides for a talk at Georgia Tech's CS3001 course, July 19, 2023.
- Prompt Surfing: A directed research group at the University of Washington which I co-lead with Ruoxi Shang and Professor Brock Craft to help students with a variety of backgrounds work with and critique generative AI.
- Hands Are Hard: Unlearning How We Talk About Machine Learning in the Arts, listed below, is an attempt to introduce artists to machine image generation by focusing on an area where early generators like Stable Diffusion struggle: human hands. Artists must understand how these systems work; one highly effective way to do so is to understand where they don't.
Interpretability and Robustness of Large Language Models: My work with Ruoxi Shang investigates new challenges with Large Language Models (LLMs) like GPT-4 and Llama as trustworthy interfaces to computing. Making sense of how model output can be understood (interpretability) and how models can be protected from manipulated input (robustness) is crucial for ensuring aligned behavior in high-stakes, high-complexity tasks, more of which are being turned over to LLMs every day.
- Interpreting Robustness is a course offered in Spring 2023 in the University of Washington's Human Centered Design and Engineering department, where students from inside and outside computer science engage with cutting-edge research on the topic. Our aim is to expose students to the deep connections between model interpretability and robustness, connections which are understudied in the literature.
Peer Reviewed Publications:
- Keyes, O. K., & Hyland, A. (2023). Hands Are Hard: Unlearning How We Talk About Machine Learning in the Arts. Tradition Innovations in Arts, Design, and Media Higher Education, 1(1).
- Perkins, K., Ghosh, S., Vera, J., Aragon, C., & Hyland, A. (2022). The Persistence of Safety Silence: How Flight Deck Microcultures Influence the Efficacy of Crew Resource Management. International Journal of Aviation, Aeronautics, and Aerospace, 9(3). DOI: https://doi.org/10.15394/ijaaa.2022.1728
Teaching:
I teach Information Visualization, a complex and tool-laden topic like so much in Human-Computer Interaction. My approach--which I feel helps students remain curious in a space like this--is to show that struggling with tools (many of which represent thousands upon thousands of person-hours of work) is not their fault. Really excellent visualization, and in many ways the kind of personal growth we sometimes call 'learning', can only occur when we productively struggle together.
Courses taught:
- HCDE 411 -- Information Visualization for Undergraduates
- Autumn 2022, Spring 2023 (With Murtaza Ali), Autumn 2023 (With Murtaza Ali), Winter 2024 (With Murtaza Ali)
- HCDE 511 -- Information Visualization for Masters Students
- Winter 2023 (With Murtaza Ali), Summer 2023 (With Dr. Brock Craft)
Other:
- Missing on the net: A small collection of documents which for whatever reason aren't on the web elsewhere.
Contact: achyland @ UW 'dot' edu, or find me on LinkedIn