Hybrid Human-AI Conference: Day 5 of Summer in Munich - Humans in the loop

Sharing pizza with new friends Valerie, Chris, and Frederik.

If someone asked me to describe the most important messages from HHAI 2023, I would tell them that technology design is a deeply social process. We do not get to decide how to use AI tools on our best day – well-rested, when we're in sync with the other department heads, when we're patient and attentive. We make these calls with our current brains. Exhausted, distracted, disconnected, and often totally clueless! 

I fall into this trap repeatedly, assigning activities to some mythical future version of myself that doesn't exist. The brilliant people here at HHAI resist this mental decay by exploring what people are actually like when they make decisions about AI and how specific applications of AI can make those decisions less likely to be terrible.

Three Key Learnings

Calibration

AI models can still be useful even if they're inaccurate sometimes, as long as they can show how likely they are to be wrong. We call this calibration. Next time your team is building a buzzy new prototype, challenge yourself to highlight calibration metrics alongside a golden path example.
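One common calibration metric is expected calibration error (ECE): bin the model's predictions by confidence, then compare each bin's average confidence to its actual accuracy. Here is a minimal sketch with made-up numbers; the function name and the sample predictions are my own illustration, not from any system discussed at the conference.

```python
# Minimal sketch of expected calibration error (ECE): bin predictions by
# confidence, then compare each bin's average confidence to its accuracy.
# The inputs below are illustrative, not from a real model.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Average |confidence - accuracy| across equal-width bins, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece = 0.0
    n = len(confidences)
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# A perfectly calibrated model that says "70% confident" is right 70% of the time.
confs = [0.9, 0.8, 0.75, 0.6, 0.55]
hits  = [1,   1,   0,    1,   0]
print(round(expected_calibration_error(confs, hits), 3))
```

A dashboard that reports a number like this next to the demo's happy-path output is one concrete way to "highlight calibration metrics alongside a golden path example."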

Explainability

Explainable AI has a failure mode similar to advertising: explaining how an AI system arrives at its result helps give users confidence, but that confidence can be misplaced. If users are overly convinced by an explanation, they can rely too much on the model.

Pretzels are a distinctly German phenomenon, and so are butter pretzels, where each one is stuffed with inch-high slabs of cold butter (!?)

Roles that AI can play

In collectives, Abhishek and I are considering what roles and group structures might best facilitate collective intelligence. It's hard to know at what level of abstraction to begin designing solutions (e.g., specifying the exact composition of groups vs. taking a more archetypal approach). Designing collaborative environments can begin with exploring the division of roles: AI checking human work, human checking AI work, or joint ideation and synthesis.

Three Humans of HHAI

During breaks between sessions, we gather in the dining area to debate what we heard (and to eat butter pretzels).

Valerie Krug is a PhD student at Otto-von-Guericke-University Magdeburg, researching explainable AI, specifically how to make deep neural networks less opaque using introspection methods. We love mechanistic interpretability pioneer Chris Olah's work.

Olaf Adan researches how design can improve verbal and non-verbal communication between interacting humans and AIs. His project demo won the HHAI grand prize today!

Postdoctoral researcher Burcu Sayin's work focuses on improving trustworthiness with human-in-the-loop decision-making. Measuring this is very difficult, so she is tackling how (and when) to define new ways of formally describing the value hybrid systems can provide.

Two Sessions I Enjoyed

In "Value-Based Hybrid Intelligence," Burcu presented her and her coauthors' work to define hybrid systems' value in terms of several variables, including the value of rejecting/classifying/misclassifying an item, prediction accuracy, and the proportion of accepted to rejected items. They also devised a clever threshold filter that responds to different calibration levels. This sounds complex, but her work is beautifully simple. How often does something work? What factors drive those results? These are often well-expressed as a series of mathematical relationships.
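To make the flavor of that framing concrete, here is a hypothetical sketch: a hybrid system either accepts an item (the model classifies it, sometimes wrongly) or rejects it (deferring to a human, at some cost). The function name, the formula, and all the payoff values are my own illustration, not the authors' actual definitions.

```python
# Hypothetical sketch of a value function for a hybrid human-AI classifier.
# The variable names and payoffs are illustrative assumptions, not the
# definitions from the "Value-Based Hybrid Intelligence" paper.

def hybrid_value(n_items, accept_rate, accuracy,
                 v_correct=1.0, v_wrong=-5.0, v_reject=-0.5):
    """Expected total value: accepted items split into correct and incorrect
    classifications; rejected items carry a fixed human-deferral cost."""
    accepted = n_items * accept_rate
    rejected = n_items - accepted
    return (accepted * accuracy * v_correct
            + accepted * (1 - accuracy) * v_wrong
            + rejected * v_reject)

# Accepting more items only pays off if the model is accurate enough:
print(hybrid_value(1000, accept_rate=0.8, accuracy=0.95))  # positive value
print(hybrid_value(1000, accept_rate=0.8, accuracy=0.70))  # deferral would beat this
```

Even this toy version shows why a calibration-aware threshold matters: the right acceptance rate depends directly on how accurate (and how honest about its accuracy) the model is.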

HHAI 2023 happened because of lots of hardworking student volunteers (like Mochi here).

AI-based clinical support systems for medical professionals can boost an industry facing labor shortages. In "Design of a Human-in-the-Loop Centered AI-Based Clinical Decision Support System for Professional Care Planning," the authors flagged some promising benefits of these systems alongside obstacles to implementation. We have a strong need for medical care providers to make the best decisions possible but also to protect patient data! Working within digital healthcare infrastructure is difficult, and I'm glad others are stepping up.

Looking forward to tomorrow

Just when I felt satisfied, like I could check HHAI off my list, they hit me with a teaser for HHAI 2024, "Hybrid Human Artificial Intelligence for social good" in Sweden. Debates have been going on about whether to grow HHAI from its current size (100-200 researchers and industry professionals) to something much larger. I'm conflicted. It's hard to follow in-depth talks far outside my research area, but HHAI's chief strength is its interdisciplinarity. You can't talk about human-AI collaboration without going deep into interface design, psychology, computer science, evolutionary biology, and sociology (at least). What are people? How are they built? What do they need from their tools? How might we build that? How might our communities change?

Getting excited for HHAI 2024! 

Well, we don't know. And we don't have a ton of time to find out! The rise of generalized AI models and capabilities has thrown our planet a curveball, and we had better pay attention. Grateful to have spent this time in solidarity with the growing handful of thinkers who are trying, as best we can, to pay attention.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/