Collective Intelligence: Foundations + Radical Ideas - Day 3 at SFI

Our closing celebration included a private party at Meow Wolf, an incredible immersive art experience. Pictured: Emily (left), organizer Caitlin McShea (center), project associate Melissa Miller (left of center), and grateful event attendees.

It was bittersweet to realize the final day of the Collective Intelligence Symposium had arrived. Given the scope of our interests, we had all started conversations we had no hope of finishing. How can we apply insights from the animal kingdom to our work in organizations? What can we know and not know about AI’s likely impact on our work? The only answer is to keep conversing with the bright people we’ve met and puzzle them out together.

I think the most worrisome aspect of AI systems in the short term is that we will give them too much autonomy without being fully aware of their limitations and vulnerabilities. We tend to anthropomorphize AI systems: we impute human qualities to them and end up overestimating the extent to which these systems can actually be fully trusted.
— Melanie Mitchell

Three Key Learnings

Moving from the individual to the individual in the environment

Throughout our discussion of intelligence, the common theme has been to shift focus from the *individual* to the *individual in the environment.* Take IQ, for instance. Recent scholarship has challenged whether IQ, a single variable created to describe human performance on various tasks, is meaningful on its own.

Emergence

In collective intelligence and complexity studies, one of the big topics we discuss is emergence: a step-change in capabilities that appears at each higher level of complexity. In AI, we talk about emergent capabilities, which make it difficult to assess whether models will demonstrate dangerous behavior. Emergence is a form of innovation in which a complex system collectively solves a problem. Keeping a multi-level perspective is key.

Defining risk

The word "risk" is often used synonymously with "threat" or "danger," which creates confusion and slows down problem-solving. Risk is composed of three elements: threat, vulnerability, and impact. Risk is also a consequence of time: if a statement is not explicitly time-bound (for example, "There is an X% risk of wildfires in central California"), you're actually describing a threat, not a risk. This isn't just semantics. Protecting systems requires examining all three elements on a timescale so that people can prioritize properly and clearly define success.
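As a rough illustration of this decomposition, here is a minimal sketch in Python. The field names, numbers, and the multiplicative expected-loss formula are my own simplifying assumptions for exposition, not a model presented at the session:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """Toy decomposition of risk into the three elements above,
    plus the explicit time bound that distinguishes a risk from
    a bare threat. All values are hypothetical."""
    threat: float          # probability the hazard occurs within the horizon
    vulnerability: float   # probability the system is harmed if it occurs
    impact: float          # cost if the system is harmed (arbitrary units)
    horizon_years: float   # the time bound that makes this a risk

    def expected_loss(self) -> float:
        # Simplest possible combination: expected loss over the horizon.
        return self.threat * self.vulnerability * self.impact

# A time-bound statement: wildfire risk to one facility over 5 years.
wildfire = Risk(threat=0.3, vulnerability=0.5, impact=1_000_000,
                horizon_years=5)
print(wildfire.expected_loss())  # 150000.0
```

Even a toy model like this forces the conversation to separate the three elements and state the timescale, which is the point of the distinction.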

Three Humans of SFI

Drew Nelson is a Teaching Fellow at Harvard Extension School, where he teaches “Intro to Mind, Brain, Health and Education.” He applies complex adaptive systems thinking to uncover a transdisciplinary “Science of Learning.” His razor-sharp wit keeps his students and colleagues laughing.

Isaiah Mack is a biochemistry entrepreneur exploring opportunities to improve the resilience of our agricultural systems, and he has been opening my mind to the possibilities of emerging technologies to keep our society fed. 

Norman Lee Johnson has been researching collective intelligence for over 30 years. He served as Deputy Group Leader at Los Alamos National Lab, where he led critical biothreat modeling projects and developed strategies to manage malware threats for DARPA and ONR. He is also a talented photographer!

Two Sessions I Enjoyed

Today was the long-awaited panel discussion on "Challenges for Deriving Measures of Intelligence," moderated by Jessica Flack, featuring Iain Couzin, Nikta Fakhri, Chris Kempes, Maxim Raginsky, and Guy Theraulaz. Measuring collective intelligence is a chicken-and-egg problem because it is an emergent property of many levels (think cells, individuals, cities) working interdependently. Members of the audience shared their best guesses at a definition. While incomplete, my favorite suggestion was that intelligence is "agents' functions to derive answers from incomplete data."

Physicist Geoffrey West thanks filmmaker Godfrey Reggio for his contribution to the event

At the close of the conference, experimental filmmaker and activist Godfrey Reggio shared selections of his work and his personal reflections on it. His work is immersive and haunting – many in the audience were moved. "We are homo technicus," he told us. "We have a language that no longer describes our world."

Norman Lee shares principles of risk management following the existential risk debate

Bonus: I organized a group of 12-15 attendees, including Norman Lee and Ted Chiang, to watch Melanie Mitchell, along with Yann LeCun, debate Max Tegmark and Yoshua Bengio on whether or not AI constitutes a near-term existential risk. After the debate, we had a lively discussion on interdisciplinary approaches to risk identification and management, and how collective intelligence can be applied to both.

Looking forward to tomorrow

The clear skies of the Santa Fe mesa made it possible to bring many interesting conversations outdoors - and made it hard to leave!

At the Collective Intelligence Symposium, we flowed between economics, biology, physics, art, and human behavior. Everyone here shares the ambition that we can and should tackle all of these subjects and their rich overlaps. Tomorrow, I will sit with my notes and dig into what this means for our work – how leaders can unleash CI and team with AI in ways that preserve the integrity of all of our systems.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500s and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master’s degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/