Hybrid Human-AI Conference: Day 5 of Summer in Munich - Humans in the loop
If someone asked me to describe the most important messages from HHAI 2023, I would tell them that technology design is a deeply social process. We do not get to decide how to use AI tools on our best day – well-rested, when we're in sync with the other department heads, when we're patient and attentive. We make these calls with our current brains. Exhausted, distracted, disconnected, and often totally clueless!
Hybrid Human-AI Conference: Day 4 of Summer in Munich - Secret Agency
Speaking the truth, plainly and clearly, is hard. Especially in science, where overconfidence in a result can lead to disaster and under-confidence can delay progress on urgent problems. Across the field(s) of human-AI interaction, this “calibration problem” rears its head. Some of the most important work being done today involves improving how AI tools can explain their behavior - and how we can explain AIs to each other. Frank van Harmelen calls this “meaning negotiation” and told me the field will have to progress for a long while before we agree on terms.
Hybrid Human-AI Conference: Day 3 of Summer in Munich - Trust the machines?
HHAI, even more than other conferences I've attended, has a clear sense of self. We are here for one reason - to study how AI can support human goals and human flourishing. That requires interdisciplinary study, especially from psychologists, computer scientists, and HCI professionals, but it leads to a unity of purpose that encourages openness and discourages 80s-style ego posturing. My only regret is that the material is so interesting I won't have time to explore as much of Munich!
A Matter of Taste: Artificial Teammates, Experiment 5
We pit ChatGPT and Bard against each other in another experiment – and one shows a major improvement.
Hybrid Human-AI Conference: Day 2 of Summer in Munich
When I describe Abhishek’s and my work on augmented collective intelligence, people often perceive it as exciting but very niche. On closer inspection, though, our topic contains multitudes - more to study than anyone could cover in a lifetime. How do human teams work together? How do they decide? What is artificial intelligence, and what kinds of capabilities are available? What capabilities might be available soon? What happens when we build hybrid human-AI teams - to the problem being solved, to our sense of purpose and autonomy, to our organizations, society, and our career paths?
Hybrid Human-AI Conference: Day 1 of Summer in Munich
Given all the hype about AI in the past few months, it might be tempting to write off human-AI interaction as a trend – but it's better thought of as bedrock for organizational strategy and individual career planning for the foreseeable future. The more deeply AI gets integrated into our lives, the trickier the questions we must answer about how our values are reflected in these systems!
Collective Intelligence: Foundations + Radical Ideas - Day 3 at SFI
It was bittersweet to realize the final day of the Collective Intelligence Symposium had arrived. Given the scope of our interests, we all had started conversations we had no hope of finishing. How can we apply insights from the animal kingdom to our work in organizations? What can we know and not know about AI’s likely impact on our work? The only answer is to keep conversing with the bright people we’ve met and puzzle them out together.
Collective Intelligence: Foundations + Radical Ideas - Day 2 at SFI
I've found that two types of conversation partners are drawn to each other here – those whose work is wildly divergent and those who are solving the very same problems.
Doing My Job: Artificial Teammates, Experiment 4
ChatGPT takes on a consulting case interview!
Hire Stakes: Artificial Teammates, Experiment 3
When choosing whom to hire, ChatGPT can’t help you (nor should you ask it!)
Bard vs. ChatGPT: Artificial Teammates, Experiment 2
Bard struggles to facilitate teamwork in our experiment on a group trying to pick a spot for lunch.
Pleasing Everyone: Artificial Teammates, Experiment 1
As AI systems shift from tools to teammates, we test GPT-4’s ability to guide a group decision on where to get lunch.
ChatGPT is a single-player experience. That’s about to change.
We’re used to human teammates, with all their wonders and surprises. But, are we ready for machine teammates?
Collective Intelligence: Foundations + Radical Ideas - Day 1 at SFI
Systems, put simply, are boundaries with constituents. Our world and our bodies are systems of systems, from cities to cells. The reason to study systems is to understand how the levels impact each other, what commonalities exist between them, and how interventions can be designed to target the right level.
On Day 1 with the Santa Fe Institute, I didn't find a solution – but I found a gold mine of insights that have left my head spinning and brilliant partners to push these ideas around with.
Collective Intelligence: Foundations + Radical Ideas - Day 0 at SFI
This week, I've flown to Santa Fe to join two hundred others for a first-of-its-kind symposium. "Collective Intelligence: Foundations + Radical Ideas" is an experimental non-conference that brings together young researchers, industry leaders, and interdisciplinary thinkers to learn and discuss the nature of group intelligence – how it emerges in insect swarms, brain cells, and sports teams – and how we can harness this knowledge to improve organizational decision-making. Artists, entrepreneurs, and futurists are here, too; it's that kind of electric energy.
Normal accidents, artificial life, and meaningful human control
Lines are blurring between natural and artificial life, and we’re facing hard questions about maintaining meaningful human control (MHC) in an increasingly complex and risky environment.
Bing’s threats are a warning shot
We don’t understand how large language models (LLMs) work. When they threaten us, we should listen.
Banning ChatGPT Won’t Work Forever
Banning a tool reduces its use, but won't stop it entirely - designing space for safe experimentation is better.
Stop Ignoring Your Stakeholders
There’s a wide spectrum between ignoring stakeholders and delegating decisions to them. Visualizing levels of engagement as steps on a ladder helps leaders make better choices about who to include and when.
Writing problematic code with AI’s help
Humans who trust AI assistants too much end up writing less secure code – while gaining false confidence that the code they have written is well-functioning and secure.