Seeing the invisible: AIES 2023 - Day 3

Hot Pot at Happy Lamb with some of the best and brightest thinkers in AI governance.

One way to think of AI is as “invisible work.” By design, it performs tasks that humans would otherwise complete, and the effort behind it fades from view. It becomes easy to stop asking how a system was made, or with whose data. Today at AIES, we’re talking about how it takes a village to raise an AI system, and what that village needs.

Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems.
— Herbie Bradley and coauthors

Three Key Insights

Coping with uncertainty

Science is probably humanity’s greatest attempt to cope with uncertainty, and how we build our AIs to respond to uncertainty determines their usefulness and value. For tools to be truly useful, they need to be interactive and robust to our uncertain or flawed input, taking human error into account.

The staff of AIES kept the event running smoothly with energy, compassion, and joy.

Red teaming

Red teamers are the unsung heroes of human-computer interaction: like minesweepers, they find hazards and warn others nearby. To sweep terrain as complex as a powerful AI model, we need new ways of assembling hybrid human-AI teams in which humans and machines capitalize on each other’s strengths.

Content moderation

Hate speech moderators join those ranks, helping to filter and clean datasets so that the output of AI systems remains somewhat palatable and we’re spared the worst the dark corners of the Internet have to offer.
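To make that work concrete, here is a minimal, hypothetical sketch of how a dataset might be pre-screened before human moderators review the hard cases. The `toxicity_score` function, its flagged-term list, and the 0.8 threshold are illustrative stand-ins, not any system presented at the conference:

```python
# Hypothetical sketch of dataset pre-screening ahead of human review.
# toxicity_score stands in for a trained classifier; the flagged-term
# list and the 0.8 threshold are illustrative, not a real system.

def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier; returns a score in [0, 1]."""
    flagged_terms = {"example_slur", "example_threat"}  # placeholder list
    words = text.lower().split()
    hits = sum(word in flagged_terms for word in words)
    return min(1.0, hits / max(len(words), 1))

def split_for_review(samples: list[str], threshold: float = 0.8):
    """Keep low-scoring samples; route high-scoring ones to moderators."""
    kept, needs_review = [], []
    for text in samples:
        (needs_review if toxicity_score(text) >= threshold else kept).append(text)
    return kept, needs_review

kept, needs_review = split_for_review(["a harmless sentence", "example_slur"])
```

Even in a sketch this simple, the human moderator stays in the loop: automation narrows the haystack, but people still judge the needles.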

Three Faces of AIES

At the University of Osnabrück, Nora Freya Lindemann researches the power dynamics behind creating chatbots. Her lightning talk, “Sealed Knowledges: A Critical Approach to the Usage of LLMs as Search Engines,” asks hard questions about the ethics of algorithmic decision-making and its impact on society.

Cambridge Ph.D. student Herbie Bradley researches large language models at EleutherAI and CarperAI. His recent work argues for a public data trust to regulate access to training data, mitigating the harmful externalities of unrestricted AI development.

We worked hard for our appetite.

Shin-Shin Hua studies the impacts of antitrust/competition policy on AI risk management strategy. She practices tech and competition law with an additional specialty in Public International Law. At AIES, she presented a novel anticipatory governance framework to assess EU competition law under different AI capability progression scenarios, which is my equivalent of a Marvel movie release.

Two Sessions I Enjoyed

Globally recognized AI ethicist Paola Ricaurte delivered a stunning keynote, “AI for/by the majority world: From technologies of dispossession to technologies of radical care,” in which she directly outlined AI’s disastrous potential to disrupt our social and ecological systems. She said that using AI systems to support existing power disparities was like building a “bio-necro-techno-political machine” that could dominate all other life. Paola is a systems thinker and a passionate speaker with rich experience to share.

At the intersection of our natural and built environments.

For an AI project to be self-sustaining, it must be able to endure many trials – to be robust against real-world bumps and bruises. By studying the work of radiologists in Denmark, Kenya, and Thailand, Hubert Zajac seeks to understand how medical data is gathered and used, and where existing examples are limited or nonexistent. He uses participatory design principles to work alongside medical professionals and patients in each location, benefiting from their collective intelligence and witnessing emergent behaviors in context. In “Ground Truth Or Dare: Factors Affecting The Creation Of Medical Datasets For Training AI,” Hubert invited the audience out of their offices and onto the unseen front lines of medical AI applications, where human lives are on the line.

Looking forward to tomorrow

It’s not often sunny in Montreal, which makes the sunsets even more impressive.

It’s time to hit the ground (tarmac?) running! I may be flying home to Atlanta, but I’m eager to return to my research with Abhishek. All the pieces from this summer – how open-source communities self-organize, how capabilities emerge in artificial life, how thoughtfully designed human-computer interfaces can improve collaboration – are coming together into a picture of what augmented collective intelligence can offer organizations.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/