Deciding who decides: AIES 2023 - Day 1

Warming my hands by a digital campfire with wooden-log beanbags at the conference venue.

There’s a running joke in my department about academic conferences, where many talks go something like this: 

We have discovered a problem. It’s a really important problem, and future research should cover how to fix it. To conclude, we should engage diverse stakeholders. 

…But who? How?

Who has the power to determine what we make collective decisions about, and at what point in time?
— Annette Zimmermann

I wrote about this for VentureBeat in February with Abhishek Gupta, Kes Sampanther, and Steven Mills.

These are the sorts of questions that get AIES attendees up in the morning. Shifting the AI governance paradigm from an oligopoly toward something more democratic is how we can all reap the benefits of collective intelligence. We’re in Montreal this week to learn from the researchers chipping away at each piece of this problem.

Three Key Insights

Enough engagement?

AI recommenders function like hospital triage units, determining what (and when) to escalate to a human supervisor. It’s not yet clear how recommendations influence their recipients, and most “supervisors” are disincentivized from painstakingly vetting each of the AI’s outputs.

Computer scientists, ethicists, safety researchers, and industry professionals mingling in Montreal.

Meaningful participation

The most crucial part of democratization is the power to set democratic agendas. Whoever sets the agenda heavily influences what participants pay attention to and constrains the choices available to them. A high concentration of agenda-setting power kneecaps public participation and, with it, the success of the whole endeavor.

The role of philosophy

For parents, PR professionals, and DC staffers, the idea that agenda-setters shape outcomes might not be a surprise. But for many AI industry professionals, the time has come to stand on the shoulders of giants in political philosophy.

Three Faces of AIES

Lingwei Cheng is a Ph.D. student at Carnegie Mellon and a research fellow at Stanford exploring how to improve algorithm-assisted decision-making in public policy, healthcare, and beyond. Her current work focuses on algorithmic fairness.

Getting to meet Harry Jiang, who collaborated with Abhishek Gupta and Timnit Gebru on a recent piece on artists in the age of AI.

Carlos Ignacio Gutierrez researches critical issues in AI governance at the Future of Life Institute, founded by multihyphenate scientist Max Tegmark. For his dissertation, Carlos created a comprehensive literature review of AI governance drivers and regulatory gaps in the US, the first of its kind.

Amanda Leal’s AI governance research has supported Mila, CIDOB, UN-Habitat, UNESCO, and more. She is licensed to practice law in Brazil and brings a rich global perspective to her work.

Two Sessions I Enjoyed

The political philosopher Annette Zimmermann brought the house down with her keynote, “The GenAI Deployment Rush: How to Democratize the Politics of Pace.” (The insight above about democratizing agendas is Annette’s.) Deployment dynamics reveal who holds decision-making power in AI. We must ask: who has the ability to deploy powerful models unilaterally, and how can we use a combination of deployment regulation, industry self-regulation, and novel democratic deliberation forums to make sure those decisions reflect the best interests of the general public?

In “Large Language Models: Hype, Hope, and Harms,” panelists got candid about their personal and professional experiences in the white-hot AI bubble. For the University of Waterloo’s Kate Larson, current AI researchers must learn from the field’s boom-and-bust history and avoid making overstated claims. Panelists Atoosa Kasirzadeh, Roxana Daneshjou, and Gary Marchant agreed that the future is uncertain for students and teachers, and that AI ethics, safety, and governance researchers need to convene deliberate, democratic conversations to guide the development of AI.

Shining light inside and outside AIES at the Palais des Congrès de Montréal.

Looking Forward to Tomorrow

Tomorrow morning, I’ll grab my coffee and race to the early panel, “AI for Society: Developing, Deploying, and Auditing Public-Facing AI.” What pricked up my ears most was the “auditing” piece. Developing a robust set of risk evaluations will be invaluable for building confidence and supporting strong human-AI teams.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/