Deciding who decides: AIES 2023 - Day 1
There’s a running joke in my department about academic conferences, where many talks go something like this:
We have discovered a problem. It’s a really important problem, and future research should cover how to fix it. To conclude, we should engage diverse stakeholders.
…But who? How?
These are the sort of questions that get AIES attendees up in the morning. Shifting the AI governance paradigm from an oligopoly into something more democratic is how we can all reap the benefits of collective intelligence. We’re here in Montreal this week to learn from the researchers chipping away at each piece of this problem.
Three Key Insights
Enough engagement?
AI recommenders function like hospital triage units, determining what (and when) to escalate to a human supervisor. It’s not yet clear how recommendations influence their recipients, and most “supervisors” have little incentive to painstakingly test each of the AI’s outputs.
Meaningful participation
The most crucial part of democratization is the power to set democratic agendas. Whoever sets the agenda heavily influences what participants pay attention to and constrains the choices available to them. A high concentration of agenda-setting power kneecaps public participation and the success of the whole endeavor.
The role of philosophy
For parents, PR professionals, and DC staffers, this idea might not be a surprise. But for many AI industry professionals, the time has come to stand on the shoulders of giants in political philosophy.
Three Faces of AIES
Lingwei Cheng is a Ph.D. student at Carnegie Mellon and a research fellow at Stanford exploring how to improve algorithm-assisted decision-making in public policy, healthcare, and beyond. Her current work focuses on algorithmic fairness.
Carlos Ignacio Gutierrez researches critical issues in AI governance at the Future of Life Institute, founded by multihyphenate scientist Max Tegmark. For his dissertation, Carlos conducted a comprehensive literature review of AI governance drivers and regulatory gaps in the US - the first of its kind.
Amanda Leal’s AI governance research has supported Mila, CIDOB, UN-Habitat, UNESCO, and more. She is accredited to practice law in Brazil and brings rich global thinking to her work.
Two Sessions I Enjoyed
The political philosopher Annette Zimmermann brought the house down with her keynote, “The GenAI Deployment Rush: How to Democratize the Politics of Pace.” (The insight above about democratizing agendas is Annette’s.) Deployment dynamics reveal who holds decision-making power in AI. We must ask who can deploy powerful models unilaterally, and how a combination of deployment regulation, industry self-regulation, and novel democratic deliberation forums can ensure those decisions reflect the best interests of the general public.
In “Large Language Models: Hype, Hope, and Harms,” panelists got candid about their personal and professional experiences in the white-hot AI bubble. For the University of Waterloo’s Kate Larson, current AI researchers must learn from the field’s boom-and-bust history and avoid making overstated claims. Panelists Atoosa Kasirzadeh, Roxana Daneshjou, and Gary Marchant agreed that the future is uncertain for students and teachers, and that AI ethics, safety, and governance researchers need to engage in deliberate, democratic conversation to convene and guide the development of AI.
Looking Forward To Tomorrow
Tomorrow morning, I’ll grab my coffee and race to the early panel, “AI for Society: Developing, Deploying, and Auditing Public-Facing AI.” What pricked up my ears most was the “auditing” piece. Developing a robust set of risk evaluations will be invaluable for building confidence and supporting strong human-AI teams.