Seeing the invisible: AIES 2023 - Day 3
One way to think of AI is as “invisible work”: by design, it performs tasks that would otherwise require visible human effort. Because that effort is hidden, it becomes easy to stop asking how a system was made, or with whose data. Today at AIES, we’re talking about how it takes a village to raise an AI system, and learning what that village needs.
Three Key Insights
Coping with uncertainty
Science is probably humanity’s greatest attempt to cope with uncertainty; how we develop our AIs to respond to uncertainty determines their usefulness and value. For tools to be truly useful, they need to be interactive and robust to our uncertain or flawed input – to take human error into account.
Red teaming
Red teamers are the unsung heroes of human-computer interaction, like minesweepers who find traps and warn others nearby. To effectively mine-sweep terrain as complex as a powerful AI model, we need new ways of assembling hybrid human-AI teams to capitalize on each other’s strengths.
Content moderation
Hate speech moderators join the ranks above in helping to filter and clean datasets such that the output space of AI systems remains somewhat palatable and we’re spared the worst that the dark corners of the Internet have to offer.
Three Faces of AIES
At the University of Osnabrück, Nora Freya Lindemann researches the power dynamics behind creating chatbots. Her lightning talk, “Sealed Knowledges: A Critical Approach to the Usage of LLMs as Search Engines,” asks hard questions about the ethics of algorithmic decision-making and its impact on society.
Cambridge Ph.D. student Herbie Bradley researches large language models at EleutherAI and CarperAI. His recent work argues for a public data trust to regulate access to training data, mitigating the harmful externalities imposed by unrestricted power to develop AI.
Shin-Shin Hua studies the impacts of antitrust/competition policy on AI risk management strategy. She practices tech and competition law with an additional specialty in Public International Law. At AIES, she presented a novel anticipatory governance framework to assess EU competition law under different AI capability progression scenarios, which is my equivalent of a Marvel movie release.
Two Sessions I Enjoyed
Globally recognized AI ethicist Paola Ricaurte delivered a stunning keynote, “AI for/by the majority world: From technologies of dispossession to technologies of radical care,” in which she directly outlined AI’s disastrous potential to disrupt our social and ecological systems. She said that using AI systems to entrench existing power disparities was like building a “bio-necro-techno-political machine” that could dominate all other life. Paola is a systems thinker and a passionate speaker with rich experience to share.
For an AI project to be self-sustaining, it must be able to endure many trials – to be robust against real-world bumps and bruises. By studying the work of radiologists in Denmark, Kenya, and Thailand, Hubert Zajac seeks to understand how medical data is gathered and used, and where existing examples are limited or nonexistent. He uses participatory design principles to work alongside medical professionals and patients in each location, benefiting from their collective intelligence and witnessing emergent behaviors in context. In “Ground Truth Or Dare: Factors Affecting The Creation Of Medical Datasets For Training AI,” Hubert invited the audience out of their offices and onto the unseen front lines of medical AI applications, where human lives are on the line.
Looking forward to tomorrow
It’s time to hit the ground (tarmac?) running! I may be flying home to Atlanta, but I’m eager to return to my research with Abhishek. All the pieces from this summer – how open-source communities self-organize, how capabilities emerge in artificial life, how thoughtfully designed human-computer interfaces can improve collaboration – are coming together into a picture of what augmented collective intelligence can offer organizations.