ChatGPT is a single-player experience. That’s about to change.

We're used to human teammates, with all their wonders and surprises. But are we ready for machine teammates?

BORING CHATGPT? One of the most striking things about ChatGPT's meteoric rise (charted below) is that, unlike TikTok and other breakout apps, it offers no native social experience. That means it didn't benefit from the network effects that drove those apps' rapid user acquisition. ChatGPT is a single-player text-based game: it sends you no notifications, doesn't hang out without you, and always looks the same. It should be boring.

[Charts: ChatGPT's record-setting user growth. Sources: Statista; CNBC]

But it isn't. ChatGPT and other products based on large language models (LLMs) are solving our problems, writing our emails, and gobbling up data. Many of us use LLMs daily to increase our work output. Integrating them into our group conversations, though, can be awkward: right now, we treat LLMs like uninvited (and often uncredited) colleagues.


LONELY CHATGPT: Teams work in the first place because of collective intelligence (CI), a problem-solving capability that emerges only in groups and is mediated by members' ability to perceive and respond to each other in real time. CI isn't determined simply by how intelligent individual group members are. And the definition is a practical one: by "intelligence," we simply mean the ability to solve a wide range of problems.

We suspect that the future of work relies on augmented collective intelligence (ACI): forming hybrid human-AI teams where every member can contribute to the best of their abilities.

EXPERIMENT TIME: This brings us back to our lonely 1:1 interactions with ChatGPT. What if it didn't have to be that way? We're trying to find out. Together, we are running a series of experiments designed to test ChatGPT (running GPT-4) as a teammate, not just a tool. We seek to answer the following questions (one way such a setup might be wired is sketched after the list):

  1. What archetypal team roles can the system play?

  2. What roles are out of reach for now?

  3. How will humans respond to the presence of an AI-based teammate?
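
The article doesn't describe how GPT-4 gets wired into a group conversation, so the sketch below is a hypothetical illustration, assuming the OpenAI Python SDK as it existed around GPT-4's 2023 release (the openai.ChatCompletion interface). The system prompt, speaker names, and helper functions are illustrative assumptions, not the experiment's actual protocol.

```python
# Minimal sketch: GPT-4 as a named participant in a shared group chat.
# Assumes the OpenAI Python SDK circa early 2023 (openai.ChatCompletion).
# Prompt wording, names, and helpers are hypothetical, for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, read from an environment variable

SYSTEM_PROMPT = (
    "You are an AI teammate in a five-person group chat. "
    "Participate as a peer: propose options, build on others' ideas, "
    "and say when you disagree and why."
)

# One shared transcript, oldest message first, so the model sees the
# whole conversation rather than an isolated 1:1 exchange.
transcript = [{"role": "system", "content": SYSTEM_PROMPT}]

def add_human_message(speaker: str, text: str) -> None:
    """Append a human teammate's message, tagged with their name."""
    transcript.append({"role": "user", "content": f"{speaker}: {text}"})

def ai_turn() -> str:
    """Ask GPT-4 for the AI teammate's next contribution."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=transcript,
        temperature=0.7,
    )
    reply = response["choices"][0]["message"]["content"]
    transcript.append({"role": "assistant", "content": reply})
    return reply

# Example: the lunch-choice problem from the first experiment.
add_human_message("Person1", "Where should the six of us get lunch today?")
add_human_message("Person2", "Somewhere vegetarian-friendly gets my vote.")
print(ai_turn())
```

The design choice worth noting: tagging every human message with a speaker name inside a single shared transcript lets the model track who said what, a precondition for the perceive-and-respond dynamic that CI depends on.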


Hearing the word "teammate" may make you uneasy. We are not implying that these AIs are sentient, have souls, or should be anthropomorphized (a common failure mode in thinking about AI). We chose "teammate" because these systems can be our peers in ability, and because we find ourselves at a startling moment in history when AI models can match human performance on many tasks.

WHAT'S NEXT: We'll share our first experiment in human-AI teaming, in which a group of five people plus ChatGPT attempts to decide what to eat for lunch. It's the first problem in a series of escalating difficulty and complexity. Read on to discover whether humans and AI can work together, and what happens when they disagree.

GO DEEPER: Here are some references for exploring the ideas in this article further:

  1. Wolpert, D. H., & Tumer, K. (1999). An introduction to collective intelligence. arXiv preprint cs/9908014.

  2. Ha, D., & Tang, Y. (2022). Collective intelligence for deep learning: A survey of recent developments. Collective Intelligence, 1(1).

  3. Nesta. (n.d.). The Collective Intelligence Design Playbook. Retrieved April 18, 2023, from https://media.nesta.org.uk/documents/Nesta_CID_Activities_001_Web.pdf

  4. Kim, Y. J., et al. (2017). What makes a strong team? Using collective intelligence to predict team performance in League of Legends. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing.

Emily Dardaman and Abhishek Gupta

BCG Henderson Institute Ambassador, Augmented Collective Intelligence
