Hybrid Human-AI Conference: Day 3 of Summer in Munich - Trust the machines?

HHAI, even more than other conferences I've attended, has a clear sense of self. We are here for one reason - to study how AI can support human goals and human flourishing. That requires interdisciplinary study, especially from psychologists, computer scientists, and HCI professionals, but it leads to a unity of purpose that encourages openness and discourages 80s-style ego posturing. My only regret is that the material is so interesting I won't have time to explore as much of Munich as I'd like!

Munich is home to many designers and has a rich intellectual life, which you can see in its mix of modern and classic architecture

Three Key Learnings

Guesstimations

One of the best tools in our strategic thinking arsenal is guesstimation. Guesstimation tasks involve estimating unknown quantities when the data needed for precise quantitative modeling is unavailable. They're often featured in job interviews, like "How many golf balls could fit in a 747?" In hybrid teams, we should look for ways AI tools can help people make better guesstimations, including when forecasting environmental changes.
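For a feel of what this looks like in practice, here is a minimal Fermi-style sketch of the golf ball question in Python. Every figure in it (cabin volume, packing efficiency) is an assumption rather than a sourced fact, which is the point of the exercise: the estimate is only as good as the assumptions you make explicit.

    # Fermi estimate: golf balls in a 747 (all figures are rough assumptions)
    import math

    cabin_volume_m3 = 1_000      # assumed usable interior volume of a 747
    ball_diameter_m = 0.043      # regulation golf ball diameter (~4.3 cm)
    packing_efficiency = 0.64    # assumed random close packing of spheres

    ball_volume_m3 = (4 / 3) * math.pi * (ball_diameter_m / 2) ** 3
    estimate = cabin_volume_m3 * packing_efficiency / ball_volume_m3
    print(f"Rough estimate: {estimate:,.0f} golf balls")  # on the order of 15 million

Changing any one assumption by a factor of two only moves the answer by a factor of two, which is why Fermi estimates tend to land within an order of magnitude of the truth.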

Intertwining of creative and machine thinking

Using generative tools for music raises tricky questions about what, exactly, counts as creative expression. Ilya Borovik and Vladimir Viro argue in an unreleased paper that expression is emotion augmented by music. AIs can compose music now, but the human emotional component is still "real." For some, separating the athlete from the sport does not lessen the enjoyment. Controversial claims like these put pressure on our unquestioned assumptions about what it means to be human. Is making music human? What about experiencing pleasure or emotion from music?

Trust and machines

How and when do humans trust algorithmic decision-making tools? Well, it depends - on four factors in particular. The context, the likely impact, the level of human involvement, and who is being asked all shape the level of trust. Remember: in every human-AI interaction (especially at the collective level), the goal is trust calibration - avoiding both misuse (trusting the AI too much) and disuse (trusting it too little).
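As a toy illustration (the function and thresholds below are hypothetical, not drawn from any HHAI paper), trust calibration can be thought of as comparing how often a person relies on an AI against how reliable that AI actually is:

    # Toy trust-calibration check (all names and thresholds are hypothetical)
    def calibration(ai_reliability: float, human_reliance: float, tolerance: float = 0.1) -> str:
        """Compare how much a person relies on an AI with how reliable it actually is."""
        gap = human_reliance - ai_reliability
        if gap > tolerance:
            return "misuse: reliance exceeds the AI's actual reliability"
        if gap < -tolerance:
            return "disuse: reliance falls short of the AI's actual reliability"
        return "calibrated: reliance roughly matches reliability"

    print(calibration(ai_reliability=0.70, human_reliance=0.95))  # misuse
    print(calibration(ai_reliability=0.70, human_reliance=0.40))  # disuse

In practice, measuring both quantities is the hard part; reliability varies by context, which is exactly what the four factors above capture.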

Vildan Salikutluk challenges the audience to consider use cases of human-AI teaming

Three Humans of HHAI

Melanie McGrath is a social psychologist and research fellow studying human trust in collaborative intelligence systems. Her work explores the practical and moral dimensions of human-AI teaming.

Patrizia Di Campli San Vito takes the prize for the most inspiring interdisciplinary project at HHAI. In "RadioMe: Supporting Individuals with Dementia in Their Own Home... and Beyond," Patrizia brings together a team of music therapists, music engineers, and more to design a system that detects agitation in dementia patients and delivers, in real time, a customized radio program of sounds that soothe the patient.

Emily and Patrizia catching up over a glass of wine at the HHAI reception event

Vildan Salikutluk shares my interest in forecasting as a tool for making better decisions. She studies models of higher cognition at TU Darmstadt and presented work at HHAI exploring the potential of large language models to assist with "guesstimation" problem-solving.

Two Sessions I Enjoyed

In Constanza Alfieri and Donatella Donati's Ethical Preferences in the Digital World, I was introduced to the idea of an ethical intermediary - an AI assistant given information about your moral beliefs and preferences that could intercede on your behalf in complex situations (e.g., to decipher a software's terms and conditions or settle a dispute between drivers of autonomous vehicles). Intuitively, it feels harder to trust automated support with moral decisions, but all decisions - including decisions about the design of technology - have moral dimensions. Constanza and Donatella's work makes that aspect of human-AI interaction more explicit.

A little piece of Bavarian history: a sign outside the reception venue welcoming participants to the Ludwig Maximilian University of Munich, founded in 1472.

Frederic Gerdon's talk on "Humans vs. machines: who is perceived to decide fairer? An experiment about citizens' attitudes" nicely revisited the themes from my day 1 workshop. Can AIs make fair decisions, or even fairer ones than humans? Yes, of course - humans make terrible decisions. But even a slight mismatch between an AI's idea of fairness and our own in a given situation can let a bad outcome proliferate rapidly, so it's important to study closely when (and why) we trust AI systems.

Looking forward to tomorrow

Three days in, many of us have had the chance to connect with people whose work inspires and challenges our own. On day four, I want to seek out those connections, make sure we share contact information, and begin making plans to stay in touch. Of course, there are still many interesting pieces of work I haven't seen yet!

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/