Hybrid Human-AI Conference: Day 4 of Summer in Munich - Secret Agency

Speaking the truth, plainly and clearly, is hard. Especially in science, where overconfidence in a result leads to disaster and under-confidence delays progress on urgent problems. Across the field(s) of human-AI interaction, this “calibration problem” rears its head. Some of the most important work being done today involves improving how AI tools can explain their behavior - and how we can explain AIs to each other. Frank van Harmelen calls this “meaning negotiation” and told me the field will have to make progress long before we agree on terms.

The way people react to a given problem is hard to distinguish from the problem itself, and the overall fractally-complicated mess of causes and effects depends equally on both of them…if you zoom out, you can notice very broad patterns, but these patterns are protean and never share exactly the same features.
— Scott Alexander, psychiatrist and author

Speaking of: if anyone has a substitute term for AI that does not reference human cognitive abilities & processes (intelligence, learning, agency), please let me know, and I will post a follow-up here.

Three Key Learnings

Agency

One of the most important ideas in human-computer interaction is agency - the ability to act with intention in an environment. Like “intelligence” or “learning,” agency is defined differently depending on the field. For anthropologists, agency is more contextualized within our social environments.

Munich’s design offices are surrounded by landscaping and public art

Intention

How do we (or AIs!) act with intention in our environments? We need a map of the territory. We must be able to predict many things well, from signs of a coming rain to subtle social dynamics. Science is a great quest to make the world more predictable and increase our agency as human beings.

HHAI 2023’s prizewinning demo considering spatial relationships in decision making.

Prediction markets

Prediction markets are fascinating because they allow us to throw our collective intelligence at highly uncertain and important questions - and make progress towards resolving them. Human forecasting techniques have improved significantly over the past ten years. Over the next few years, we’ll see hybrid human-AI prediction markets and other decision-support tools increasingly driving leaders' choices.
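One common way to combine human and AI forecasts into a single market-style estimate is to pool probabilities in log-odds space. This is a minimal illustrative sketch, not the method used by any specific system mentioned here; the weights and numbers are made up for the example.

```python
import math

def logit(p: float) -> float:
    """Map a probability to log-odds."""
    return math.log(p / (1 - p))

def pool_forecasts(probs, weights):
    """Weighted average of forecasts in log-odds space, mapped back to a probability."""
    z = sum(w * logit(p) for p, w in zip(probs, weights)) / sum(weights)
    return 1 / (1 + math.exp(-z))

# Two hypothetical human forecasters and one AI model weigh in on the same question.
human_probs = [0.70, 0.60]
ai_prob = 0.85
pooled = pool_forecasts(human_probs + [ai_prob], weights=[1.0, 1.0, 1.5])
```

Because the pooled log-odds is a weighted mean, the combined probability always lands between the most pessimistic and most optimistic forecast, which is one reason log-odds pooling is a popular baseline for aggregating diverse forecasters.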


Three Humans of HHAI

Tatiana Chakravarti is a second-year Ph.D. student at the University of Pennsylvania working with DARPA to develop applications of hybrid collective intelligence to improve decision-making.

Frank van Harmelen’s keynote this morning was so popular he was practically mobbed by enthusiastic question-askers afterward (I was part of the problem.) He leads the Knowledge Representation and Reasoning group in the CS Department of the VU University Amsterdam. I’m looking forward to sharing the recorded talk with the rest of my team.

Frank closing his talk before the stampede began

Jakob Schoeffer was the first author of this year’s Best Paper, “On the Interdependence of Reliance Behavior and Accuracy in AI-Assisted Decision-Making.” In it, he proposes a novel visual framework that shows the relationship between human reliance on AI and decision accuracy, spotlighting the vital role of interventions such as explanations in helping us better navigate the complex world of AI-assisted decision-making.

Two Sessions I Enjoyed

A prototype hybrid prediction market for estimating the replicability of published work: Tatiana gave a blockbuster talk on her team’s experience building a hybrid prediction market to help solve the replication crisis. They found that hybrid prediction markets are more trusted, and in certain circumstances more accurate, than AI-only or human-only markets. The AI systems often lacked sufficient data to make a valuable forecast, so including human experts helped keep the market viable.

In “A Design-oriented XAI Typology,” Chiara Natali and her team broke down the fuzzy idea of “explanation” into more manageable parts. Within every explanation lie three parts: the explanandum (what is being explained), the explanans (what/who is explaining), and the relationship between them.
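The three-part decomposition above lends itself to a simple data structure. This is my own illustrative sketch, not code from the paper; the field names follow the terms in the typology, and the example values are invented.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    explanandum: str  # what is being explained (e.g., a model's decision)
    explanans: str    # what or who is doing the explaining
    relation: str     # how the explanans accounts for the explanandum

# A hypothetical instance: explaining a credit model's output to a loan officer.
ex = Explanation(
    explanandum="credit model denied applicant #1042",
    explanans="feature-attribution chart shown to the loan officer",
    relation="highlights the income feature that drove the score down",
)
```

Making the three parts explicit like this is useful in design work: it forces you to say not just *what* is being explained, but *to whom* and *through what medium*.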

Looking forward to tomorrow

A man serving piping-hot pizza to entrepreneurs at the German Entrepreneurship event

This evening is a founder match event at German Entrepreneurship, a popular coworking hub for those looking to launch businesses or social enterprises. Spending time with people with great ambition and a desire to connect with others is so encouraging. I am not looking for a founder, but Abhishek and I are always looking for project collaborators and coauthors!

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/