Escaping pilot purgatory in Generative AI
Abhishek Gupta


Don’t get trapped in the frothy piloting phase of Generative AI without a clear exit that turns experiments into something tangible. The winners of these investments will deliberately escape pilot purgatory through a disciplined approach.

Read More
Instagram effect in ChatGPT
Abhishek Gupta and Emily Dardaman


We only see the final, picture-perfect outputs from ChatGPT (and other Generative AI systems), which skews our understanding of their real capabilities and limitations.

A lot goes into producing those shareable trophies of taming ChatGPT into doing what you want: tinkering, rejected drafts, invocations of the right spells (I mean prompts!), and lessons gleaned from Twitter threads and Reddit forums. But those early efforts remain hidden, a kind of survivorship bias, and we are lulled into a false sense of confidence that these systems are all-powerful.

Read More
Wikipedia’s Balancing Act: A Tool for Collective Intelligence or Mass Surveillance?
Abhishek Gupta


In this paper from Liu, we explore how Collective Intelligence (CI) might face an untimely death, a “chilling effect,” when co-opted by mass surveillance mechanisms. Contributors to a CI system (and people in general) desire privacy. Policies like the public tracking of edit histories on Wikipedia can feed intelligence analyses conducted by federal agencies like the NSA, intruding on privacy and thus inhibiting participation.

Read More
Be careful with ChatGPT
Abhishek Gupta and Emily Dardaman


Existing Responsible AI approaches leave risks in ChatGPT and other Generative AI systems unmitigated. We need to evolve our approaches and refine our thinking.

Ethical challenges are only being exacerbated by increasing experimentation. We are unearthing issues such as the generation of highly convincing scientific misinformation, biased images and avatars, hate speech, and more. How we embed these systems within human organizations and empower ourselves to take action will be critical in determining whether we get ethical, safe, and inclusive uses out of them.

Let’s dive deeper into these areas and highlight why we must act now.

Read More
What should humans do next?
Emily Dardaman and Abhishek Gupta


It’s perhaps time to reject the idea that humans are social and machines are asocial. As machine capabilities increase and human behaviors evolve, this mode of thinking becomes a vestige of the 2010s. To remain relevant in a humans+machines society, we need a better understanding of the strengths each of us brings and how best to combine them.

Read More