A Research Roadmap for an Augmented World

Five years ago, Jack Ma gave a speech at the World Economic Forum.

He said, "If we don't change the way we teach, 30 years later we'll be in trouble. The things we've taught our kids for the past 200 years are knowledge-based, and we cannot teach our kids to compete with machines - they're smarter. You have to teach something unique so that machines can never catch up with us - values, believing, independent thinking, teamwork, care for others. I think we should teach our kids sports, music, painting and art. Making sure humans should be different - everything we teach should be different than machines. If machines can do it better, we've got something we need to think about."

That was in 2018. I share 2018 Jack Ma's dream, but I don't share his 30-year timeline or his confidence that the social and emotional dimensions he mentions will remain only for humans.

Today, let's take a step back, look at the research that brought us here, and then consider some themes around what it looks like to design a research roadmap - one that helps us understand how our skill sets, as sustainability professionals, can stay relevant in the economy of the future.

Right now, we hold a vision of a smooth transition: how can we make these powerful systems serve our values, rather than the other way around? But as organizations adopt these tools and try to bring about the changes we want, that smooth transition is not happening.

We are in a move-fast-and-break-things phase (when we should be taking a move-fast-and-fix-things approach), and leaders are having a hard time keeping up. That's understandable - we are all having a hard time keeping up. It may be up to a tiny handful of people paying close attention, like those of us here, to stand between us and a future that does not reflect our values.

Some of you remember your first encounter with a modern large language model (LLM). When that happened to me a year or two ago, I realized, "This will change how I work and my career plan." I had to ask what skills I needed and what I would teach.

At a granular level: what skills may be valued in a human-machine economy? We don't have the answers, but today we hope to define the problem, bring in examples, and outline a collaborative research agenda so the community's collective intelligence (CI) can make progress. That way, leaders will have a glimpse ahead and better questions, and the transition will be less bumpy.

It's critical to understand that this pace of change will continue. To understand which skills will be valued, we need to look at bottlenecks - the abilities that are harder to augment. Six months ago, I began playing with a "theory of human distinctiveness": a domain of ability robust to capability increases, one that would remain a competitive human advantage over time. But setting consciousness aside, there is no magical domain of work reserved for humans - not even the ones Jack Ma mentioned: communication, teamwork, and even emotional and social labor.

Instead of searching for human distinctiveness, we should disaggregate abilities and forecast the time remaining before each is successfully automated. Current models are flawed; they try to identify human-AI complementarity based on current capabilities. We need tools to estimate future capabilities. Every organization needs a carefully calibrated forecasting model of AI capability over 6-, 12-, and 18-month horizons.
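To make the idea concrete, here is a minimal sketch of what such a disaggregated forecast might look like in code. The skill names and month figures are invented purely for illustration - a real model would calibrate them against benchmark data - but the structure shows the shift from "what are humans better at today?" to "how long until each ability is automated?":

```python
from dataclasses import dataclass

@dataclass
class SkillForecast:
    """One disaggregated ability, with a hedged estimate of how many
    months remain before AI matches median professional performance."""
    skill: str
    months_to_automation: int  # illustrative guess, not a real forecast

def bottlenecks(forecasts: list[SkillForecast], horizon_months: int) -> list[str]:
    """Return the skills expected to remain human bottlenecks
    beyond a given planning horizon (e.g. 6, 12, or 18 months)."""
    return [f.skill for f in forecasts if f.months_to_automation > horizon_months]

# Hypothetical portfolio for a sustainability team.
portfolio = [
    SkillForecast("drafting sustainability reports", 6),
    SkillForecast("stakeholder negotiation", 24),
    SkillForecast("regulatory horizon scanning", 12),
]

print(bottlenecks(portfolio, horizon_months=12))  # -> ['stakeholder negotiation']
```

The point of the sketch is that the output changes as the horizon changes - an ability that looks like a durable human advantage at 6 months may disappear from the list at 18, which is exactly why static complementarity analyses mislead.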

It's time to challenge the complementarity myth - the belief that humans and AI complete each other, and that there will always be something humans are better at. We need a clear view of the future to make sense of sustainability careers.

I think Jack Ma was right that we would lose a race against machines. Our task is to shift our focus to the leading AI labs: can they load values into their systems? Do they share our sustainability values? This needs concrete dialogue, policy, and certainty before things advance further. As the proverbial curse goes, "May you live in interesting times." Things will only get stranger; as AI progresses, this moment will be the most normal one we have left.

Recently a student asked Abhishek, "Should I change what I study? Will there be jobs in 10-15 years that don't exist now?" He said, "There'll be jobs that don't exist next year." We need tools to assess and forecast future capabilities; those questions form our research roadmap.

Emily Dardaman

Emily Dardaman is a BCG Henderson Institute Ambassador studying augmented collective intelligence alongside Abhishek Gupta. She explores how artificial intelligence can improve team performance and how executives can manage risks from advanced AI systems.

Previously, Emily served BCG BrightHouse as a senior strategist, where she worked to align executive teams of Fortune 500 companies and governing bodies on organizational purpose, mission, vision, and values. Emily holds undergraduate and master's degrees in Emerging Media from the University of Georgia. She lives in Atlanta and enjoys reading, volunteering, and spending time with her two dogs.

https://bcghendersoninstitute.com/contributors/emily-dardaman/