The Future of AI in Quebec: Bridging Gaps to Drive Innovation, Growth and Social Good
Abhishek Gupta

Artificial intelligence (AI) is transforming societies and economies around the world at a rapid pace. However, Quebec risks falling behind in leveraging the opportunities of AI due to several gaps in its ecosystem. In this comprehensive blog post, I analyze the current limitations around AI development, adoption, and governance in Quebec across the public, private, and academic sectors. Based on this diagnosis, I then provide targeted, actionable recommendations on how Quebec can build understanding, expertise, collaboration, and oversight to unlock the full potential of AI as a force for economic and social good. Read on for insights into the seven key areas requiring intervention and over 40 proposed solutions to propel Quebec into a leadership position in the global AI landscape.

Bridging the intention-action gap in the Universal Guidelines on AI
Abhishek Gupta

We are now firmly in a world where organizations are beginning to see returns on their investments in AI adoption. At the same time, they are experiencing growing pains, such as the emergence of shadow AI, which raises cybersecurity concerns. While useful as a North Star, guidelines need accompanying details that help put them into practice. Right now, we have an unmitigated intention-action gap; addressing it can strengthen the UGAI and enhance its impact as organizations adopt it as their de facto set of guidelines.

A Research Roadmap for an Augmented World
Emily Dardaman

At a granular level, what skills might be valued in a human-machine economy? We don't have the answers, but we hope to define the problem today, bring in examples, and outline a collaborative research agenda so the community's collective intelligence (CI) can make progress. That way, leaders will have a glimpse ahead and better questions to ask, making the transition less bumpy.

Good futurism, bad futurism: A global tour of augmented collective intelligence
Emily Dardaman

This fall, Abhishek Gupta and I are rolling our insights into a series of experiments. (If you’d like to be a human volunteer, send me a note!) Our hope is not just to understand the principles underlying ACI but to catch it in action in a hybrid human-AI team exercise. We want our legacy to be a set of stepping stones toward a greater understanding of human-AI teaming, its risks and benefits, and how responsible organizations can implement large language models (LLMs) in their daily work.

Seeing the invisible: AIES 2023 - Day 3
Emily Dardaman

One way to think of AI is as “invisible work.” By design, it invisibly performs tasks that humans would otherwise have to complete, so it becomes easy to stop asking how it was made, or with whose data. Today at AIES, we’re talking about how it takes a village to raise an AI system, and learning what that village needs.

Democratizing AI: AIES 2023 - Day 2
Emily Dardaman

We think about bias in machine learning the way we think about it in people: we’re biased or unbiased, corrupt or pure. The beauty and irony of machine learning lie in how difficult it is to make an orderly representation of our will when our will is anything but orderly. Bias mitigation is not a one-stop shop. It’s hard, and sometimes our efforts backfire. Today, we’re looking at the old problems behind our newest technology.

Deciding who decides: AIES 2023 - Day 1
Emily Dardaman

Shifting the AI governance paradigm from an oligopoly into something more democratic is how we can all reap the benefits of collective intelligence. We’re here in Montreal this week to learn from the researchers chipping away at each piece of this problem.

Poor facsimile: The problem in chatbot conversations with historical figures
Abhishek Gupta

It is important to recognize that AI systems often provide a poor representation and imitation of a person's true identity; by analogy, their output is like a blurry JPEG image, lacking depth and accuracy. AI systems are also limited by the information that has been published and captured in their training datasets: their responses can only be as accurate as the data they have been trained on. Capturing the relevant tone and authentic views of the person being represented requires extensive and detailed data.

Hallucinating and moving fast
Abhishek Gupta

"Move fast and break things" is broken. But we've all said that many times before. Instead, I believe we need to adopt the "Move fast and fix things" approach. Given the rapid pace of innovation and its distributed nature across many diverse actors in the ecosystem building new capabilities, realistically, it is infeasible to hope to course-correct at the same pace. Because course correction is a much harder and slow-yielding activity, this ends up amplifying the magnitude of the impact of negative consequences.

Moving the needle on the voluntary AI commitments to the White House
Abhishek Gupta

The recent voluntary commitments secured by the White House from the core developers of advanced AI systems (OpenAI, Microsoft, Anthropic, Inflection, Amazon, Google, and Meta) present an important first step toward building and using safe, secure, and trustworthy AI. While it is easy to shrug off voluntary commitments as "ethics washing," we believe they are a welcome change.

War Room: Artificial Teammates, Experiment 6
Emily Dardaman and Abhishek Gupta

Effective communication can make or break a crisis response. Can Anthropic’s new model, Claude 2, facilitate better decision-making in crunch time?

Controlling creations - ALIFE 2023 - Day 5
Emily Dardaman

The advent of artificial life will be the most significant historical event since the emergence of human beings…We must take steps now to shape the emergence of artificial organisms; they have the potential to be either the ugliest terrestrial disaster or the most beautiful creation of humanity.

Enabling collective intelligence: FOSSY Day 4
Emily Dardaman

Collective intelligence (CI) runs on contribution, but setting up a system to elicit collective intelligence isn’t easy. Many open-source projects are created and maintained by an individual founder, who sometimes claims the title of “Benevolent Dictator for Life.” The title is half a joke, pointing to the inherent tension between a participatory project and the unilateral actions required to set one up. Nothing is free: no margin, no mission. But figuring out how to distribute power quickly and effectively is essential to generating CI.

Entropy, measurement, and diversity: ALIFE 2023 - Day 4
Emily Dardaman

In AI, something that excites and worries many researchers is “emergent capabilities”: a phenomenon observed in complex systems in which network interactions give rise to entirely new traits and abilities. We build and rely on complex systems in the first place to handle change, but if a system becomes too unpredictable, it stops being useful and may even become dangerous. This is a core problem in the science of complex systems: can we design predictable systems with unpredictable properties? Can we find simple rules and theories that explain complex phenomena without losing their most important parts?

Embodiment and emergence: ALIFE 2023 - Day 3
Emily Dardaman

It might surprise people that artificial life ("Alife") is not a new field – it has deep roots in evolutionary biology, information science, computer engineering, psychology, and art. Alife blends the fanciful with the practical – looming sculptures of wiggling network interactions alongside urgent warnings about how *not* to teach humans to use AIs. To solve the world's toughest problems, it's probably good to start by understanding what makes us 1) intelligent or 2) alive.

Incentives and evolution: ALIFE 2023 - Day 2
Emily Dardaman

Each death from COVID-19 was, in some ways, a failure to apply collective intelligence (CI). Better modeling, participatory policymaking, and community engagement could likely have 1) increased trust in public health agencies, 2) suggested less burdensome and more cost-effective interventions, and 3) supported better preventative policy design years before the world locked down. Today, we explored how CI can be used to design better institutional incentives that lead to better decisions.

An introduction to artificial life: ALIFE 2023 - Day 1
Emily Dardaman

Artificial life is a provocative and profoundly ambitious field. It challenges things most of us take for granted: that humans are unique, that consciousness is inscrutable, and that life itself is distinct from other physical processes. “We really want to create artificial life,” said a slide from ALIFE 2023’s opening remarks today. “Anyone can create it. We want to release the recipes for it and change the world.”

Building communities: FOSSY Day 3
Emily Dardaman

Sometimes, projects fall victim to their own success. An unexpected cash windfall or user influx can strain a collective’s social and technical infrastructure. Anyone trying to shepherd a decentralized, collaborative project might feel overwhelmed sometimes. But help is out there. “When I was a boy,” said Mr. Rogers, memorably, “and I would see scary things in the news, my mother would tell me, ‘Look for the helpers. You will always find people who are helping.’” Today at FOSSY 2023, we’re considering how to design inclusive, sustainable spaces that encourage the diverse contributions that drive CI.

Defining Open: FOSSY Day 2
Emily Dardaman

The internet’s most powerful engine of collective intelligence is at an uncertain point. The free and open-source software (FOSS) community emerged and refined its principles over twenty years of the early internet, co-evolving with regulation and technology. Decentralized movements need time to grow and adapt. But to adapt to advanced AI, FOSS needs 20 years’ worth of evolution in just a few months.

Designed for us: FOSSY Day 1
Emily Dardaman

Within computer science, the free and open-source software (FOSS) community has long provided an oasis for developers looking to protect the public good. The software that powers significant areas of our lives might be designed for us, understood by us, and tailored to our needs. Or… it might not. Open sourcing allows users to create something useful, then reap the benefits of collective intelligence (CI) as a decentralized community comes together to improve it. Bugs are found, features added, and UIs smoothed. Win-win-win.
