Hybrid Human-AI Conference: Day 5 of Summer in Munich - Humans in the loop
Emily Dardaman

If someone asked me to describe the most important messages from HHAI 2023, I would tell them that technology design is a deeply social process. We do not get to decide how to use AI tools on our best day – well-rested, when we're in sync with the other department heads, when we're patient and attentive. We make these calls with our current brains. Exhausted, distracted, disconnected, and often totally clueless!

Hybrid Human-AI Conference: Day 4 of Summer in Munich - Secret Agency
Emily Dardaman

Speaking the truth, plainly and clearly, is hard. Especially in science, where overconfidence in a result leads to disaster and under-confidence delays progress on urgent problems. Across the field(s) of human-AI interaction, this “calibration problem” rears its head. Some of the most important work being done today involves improving how AI tools can explain their behavior - and how we can explain AIs to each other. Frank van Harmelen calls this “meaning negotiation” and told me the field must keep progressing long before we can agree on terms.

Hybrid Human-AI Conference: Day 3 of Summer in Munich - Trust the machines?
Emily Dardaman

HHAI, even more than other conferences I've attended, has a clear sense of self. We are here for one reason - to study how AI can support human goals and human flourishing. That requires interdisciplinary study, especially from psychologists, computer scientists, and HCI professionals, but it leads to a unity of purpose that encourages openness and discourages 80s-style ego posturing. My only regret is that the material is so interesting I won't have time to explore as much of Munich!

Hybrid Human-AI Conference: Day 2 of Summer in Munich
Emily Dardaman

When I describe Abhishek’s and my work on augmented collective intelligence, people often perceive it as exciting but very niche. On closer inspection, though, our topic contains multitudes – more to study than anyone could cover in a lifetime. How do human teams work together? How do they decide? What is artificial intelligence, and what kinds of capabilities are available? What capabilities might be available soon? And what happens when we form hybrid human-AI teams – to the problem being solved, to our sense of purpose and autonomy, to our organizations, society, and career paths?

Hybrid Human-AI Conference: Day 1 of Summer in Munich
Emily Dardaman

Given all the hype around AI in the past few months, it might be tempting to write off human-AI interaction as a trend – but it's better thought of as bedrock for organizational strategy and individual career planning for the rest of our working lives. The more deeply AI is integrated into our lives, the trickier the questions we must answer about how our values are reflected in these systems!

Collective Intelligence: Foundations + Radical Ideas - Day 3 at SFI
Emily Dardaman

It was bittersweet to realize the final day of the Collective Intelligence Symposium had arrived. Given the scope of our interests, we all had started conversations we had no hope of finishing. How can we apply insights from the animal kingdom to our work in organizations? What can we know and not know about AI’s likely impact on our work? The only answer is to keep conversing with the bright people we’ve met and puzzle them out together.

Collective Intelligence: Foundations + Radical Ideas - Day 1 at SFI
Emily Dardaman

Systems, put simply, are boundaries with constituents. Our world and our bodies are systems of systems, from cities to cells. The reason to study systems is to understand how the levels impact each other, what commonalities exist between them, and how interventions can be designed to target the right level.

On Day 1 with the Santa Fe Institute, I didn't find a solution – but I found a gold mine of insights that have left my head spinning and brilliant partners to push these ideas around with.

Collective Intelligence: Foundations + Radical Ideas - Day 0 at SFI
Emily Dardaman

This week, I've flown to Santa Fe to join two hundred others for a first-of-its-kind symposium. "Collective Intelligence: Foundations + Radical Ideas" is an experimental non-conference that brings together young researchers, industry leaders, and interdisciplinary thinkers to learn and discuss the nature of group intelligence – how it emerges in insect swarms, brain cells, and sports teams – and how we can harness this knowledge to improve organizational decision-making. Artists, entrepreneurs, and futurists are here, too; it's that kind of electric energy.

Bing’s threats are a warning shot
Emily Dardaman and Abhishek Gupta

We don’t understand how large language models (LLMs) work. When they threaten us, we should listen.

Banning ChatGPT Won’t Work Forever
Abhishek Gupta and Emily Dardaman

Banning a tool reduces its use but won't stop it entirely – designing space for safe experimentation is better.

Stop Ignoring Your Stakeholders
Emily Dardaman and Abhishek Gupta

There’s a wide spectrum between ignoring stakeholders and delegating decisions to them. Visualizing levels of engagement as steps on a ladder helps leaders make better choices about who to include and when.

Writing problematic code with AI’s help
Abhishek Gupta

Humans trust AI assistants too much and end up writing more insecure code – while gaining false confidence that the code they have written is well-functioning and secure.
