Democratizing AI: AIES 2023 - Day 2

Historic Chinatown in Montreal

We think about bias in machine learning the way we think about people: we’re biased or unbiased; we’re corrupt or pure. The beauty and irony of machine learning lie in how difficult it is to make an orderly representation of our will when our will is anything but orderly. Bias mitigation is not a one-stop shop; it’s hard, and sometimes our efforts backfire. Today, we’re looking at the old problems behind our newest technology.

Mitigated models sometimes do more harm than unmitigated models – and we saw that the disadvantaged group experienced negative impacts the majority of the time.
— MacKenzie Jorgensen

Three Key Insights

New tech, old problems.

Every sector in the economy is working to realize the potential of augmented collective intelligence (even if they don’t know the term). The problems that pop up in the AI implementation process directly reflect the sector’s role in society. Take media, which exists to convey information across communities. It shouldn’t be a surprise, then, that misinformation is such a threat. Cars transport us; we fear their sudden or unwanted stops. To understand where AI will empower (or threaten!) collectives most, look at the purpose of your organization.

A nudge from a ski tour company in Montreal - you don’t have to tell me twice!

Don’t nudge me there!

Recommender systems provide digital nudging to encourage particular choices, often in ways too subtle for the user to detect. Like bad pastors, they can lead us down an undesirable path for their own benefit. Two things are required to help collectives protect against predatory algorithms: good explanations to empower users and a design solution to get people to read those explanations. (Terms and conditions, anyone?)

Statements are easy; governance is hard.

Democracies must always balance participation with progress; their inefficiency protects against totalitarianism but becomes a liability when regulation needs to move fast. The US government faces an uphill climb to put almost any AI regulation into practice: 88% of the government agencies required to produce an AI strategy have yet to submit one.

Three Faces of AIES

Christie Lawrence splits her time between Stanford, Cambridge, and Washington, DC. A JD candidate at Stanford and an MPP student at Harvard, Christie works to find and strengthen weak links in the legal protections for America’s AI innovation system.

Eran Tal is a proud Montreal local happy to welcome AIES to his city. He is an Associate Professor and Canada Research Chair at McGill University, Abhishek Gupta’s alma mater. Eran is a philosopher of science who considers how what we measure in AI reflects our collective ethics and values.

A few of the brightest minds in AI governance research, including my collaborator, Elizabeth Seger (second from right).

Aidan Kierans is a Google Policy Fellow at the Center for Democracy and Technology while getting his Ph.D. in Artificial Intelligence at the University of Connecticut. He studies how human collective intelligence (CI) might be safely - or unsafely - applied to AI, including the consequences of open-sourcing large models.

Two Sessions I Enjoyed

In “Democratising AI: Multiple Meanings, Goals, and Methods,” Elizabeth Seger made a compelling argument that democratizing AI is not the same as making an easily accessible AI product. You may democratize its use, but what about its production? Or its governance? Or its profits? Each of these four areas has its own pros and cons, but the core message is that AI must be governed democratically before anyone can truly “democratize AI.” I am excited to be listed as a co-author with Elizabeth and Abhishek on an upcoming paper about balancing the costs and benefits of open-sourcing foundation models.

These posters hold the key to a better future.

Have you ever had trouble getting through to another department? Institutions navigating socio-technical issues in AI struggle with wide gaps in language and understanding across disciplines. In “A multidomain relational framework to guide institutional AI research and adoption,” Vincent Straub explains the conceptual framework his team designed to alleviate this problem, unifying terms under three domains: Operational, Epistemic, and Normative.

Looking forward to tomorrow

If you’re looking for smart people to review your book, AIES is a great place to find them.

Governance and CI seem to have little connection, but they’re highly interdependent. External conditions significantly shape the individual interactions between organisms that make CI possible. To that end, I’m excited to learn more about how approaches to governing AI systems can facilitate more beneficial cooperation and collaboration.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/