How people and AI collaborate

That echo button from the story before — why did it work?

The AI did something impressive: summarizing half an hour of conversation and formulating exactly the right question. That's remarkable. But it didn't work just because of that.

It worked because it came at the right moment. The group was stuck, the energy was dropping, and right then a question arrived that helped them move forward. The AI could ask the question, but the facilitator had to feel when to ask it.

We are social beings. We want to matter, to be heard, to build something together. That feeling you walk away with after a collaboration: that's something only people can create for each other. But AI can help us strengthen that feeling.

The way I see it, it starts there. With the person.


It starts with the person

That might sound obvious. But what I notice in practice: it often goes differently. Someone comes up with a plan, presents it, and asks if it's right. The intention is good. But the ownership sits with the person who came up with it, not with the group. People recognize themselves less in words they didn't say themselves.

In this field guide it keeps coming back, in all kinds of ways. When you're preparing a session: what do you want to achieve, and what experience do you design for that? When you use AI: what do you feed it? The words of the people themselves, not your summary. When you reflect back to the group: do people recognize themselves in what's there? And when you're unsure about the right moment: what does this group need right now?

Even when you're working with AI alone, it starts with you. What are you curious about? Where could you use some help? What did you feel in that session, and how do you make that visible?

All those questions start with the person. Not with the tool, not with the system, not with the prompt.

The previous page was about how AI can amplify what people already bring. That's the other side of the same principle: everything AI can do, it can only do with what people bring. And people only carry what they create together if they recognize themselves in it.

We'll have to do it together. AI can help us with that, amplifying and scaling up. But it starts with us.

That fundamental attitude translates into six working principles.


The six working principles

Every time I use AI in a session, I feel the same thing. There are things AI can do that I can't (recognize patterns at lightning speed, summarize with endless patience). And there are things I need to do that AI can't (give people space to be themselves, read body language, decide which direction we go). I make that choice over and over, and I notice that it changes as AI becomes more intelligent.

After a while I started seeing patterns, and I captured them in six principles. Below I explain them. The same principles weave through the three phases that follow in this field guide, whether you're making your first transcription or analyzing months of conversations.

| The principles | AI can... | The person needs to... |
| --- | --- | --- |
| Ritual vs intention | Make rituals more efficient | Safeguard the intention |
| Your words, your plan | Analyze and summarize | Articulate and decide |
| Ownership through language | Quote literally | Feel the recognition |
| Iteration as dialogue | Generate quickly | Steer and give feedback |
| Timing over perfection | Respond immediately | Feel the right moment |
| Prompt the people first | Process input | Design the experience |

1. Ritual vs intention

In a workshop everyone writes on sticky notes, clusters together, votes. That's a ritual. But why do we actually do that? The intention is that ideas mix, that everyone contributes, that you arrive at something together.

AI can change the ritual. People talk instead of write, and AI clusters in real time. Fine. But the question I ask myself: does the intention stay intact? Do the ideas still mix? Does everyone still contribute?

What I notice is that you can change rituals freely, as long as you don't lose the intention.

Are we changing the ritual or are we changing the intention?

Rituals may change. But who decides whether the intention is preserved?


2. Your words, your plan

In a session about healthcare innovation, AI had created a beautiful synthesis of the shared vision. Everyone was impressed. Then someone asked: "Can't the AI just make the implementation plan?"

My answer: "It absolutely can, but you are the soul of all this. The fact that you're talking about it is what makes you likely to support it."

The point isn't that AI can't make a plan — it absolutely can. The point is that ownership emerges when we put something of ourselves into it. When you say it, it's yours. When AI says it, it's AI's.

Does the dialogue stay central, or are we taking a shortcut to solutions?

Ownership emerges by saying it yourself. But how do you make sure those words truly remain theirs?


3. Ownership through language

The difference between "you're talking to a wall" and "communication problems" seems small. But the first is what someone said: raw, emotional, recognizable. The second is consultant-speak: smooth, professional, impersonal.

When people see their own words reflected back, they recognize themselves: "Yes, that's what we said." That recognition is where ownership emerges. Paraphrasing (however well-intended) breaks that.

I think this is the simplest test: if people think "yes, that's what we said" it works. If they think "that sounds like a consultant" something is off.
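That test can even be made mechanical. A minimal sketch (the phrases and summaries below are invented examples; the list of key phrases is something you would curate from the transcript yourself) that checks whether the raw words survive in the output:

```python
def retained_quotes(summary: str, key_phrases: list[str]) -> list[str]:
    """Return the participants' phrases that survive verbatim in the summary."""
    return [p for p in key_phrases if p.lower() in summary.lower()]

# Invented example: two ways of reflecting the same remark back.
key_phrases = ["you're talking to a wall"]

literal = 'Several of you said it plainly: "you\'re talking to a wall".'
paraphrased = "The team experiences communication problems."

print(retained_quotes(literal, key_phrases))      # the raw words survive
print(retained_quotes(paraphrased, key_phrases))  # consultant-speak loses them
```

If the second list comes back empty, the output has drifted into paraphrase, and the recognition is at risk.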

Do people recognize themselves in the output?

Preserving their literal words and reflecting them back is the goal. But what if the AI output isn't right?


4. Iteration as dialogue

AI is a collaboration partner that tries to understand what you mean. Just like with a colleague: if something isn't right, you explain what you want differently. You don't tinker with their work yourself; you give feedback.

What works better than adjusting it yourself: pause and give AI feedback. "This is 70% what I'm looking for. What's missing: more concrete examples. Try again."

By iterating you learn two things. You discover what AI can do (sometimes things you didn't know yourself). And you learn to formulate more precisely what you're actually trying to achieve. The result is better than what you would have made on your own.

Am I collaborating with AI or am I sitting here adjusting it myself?

Refining together with AI is valuable. But sometimes our feeling is more important than the perfect prompt.


5. Timing over perfection

That echo button from the story: why did it work? Not because the analysis was perfect. It worked because it came at the right moment.

When the energy drops, when the conversation goes in circles, when people get stuck: that's when a reflection helps. Afterwards, as a polished report, it has less impact.

AI is fast. But timing requires human sensing. You have to feel the moment when intervention helps.

But timing is about more than choosing the right moment. It's about sensing what's happening in the room. Who is disengaging? Where is there consensus? When does someone have something to say but doesn't dare?

And ultimately: what feeling do people take home? No AI can answer that question. We are social beings: we want to matter, to be seen, to build something together. Creating that feeling is the real human work.

What is my intuition telling me? What does this group need right now?

Timing is about when. But what you give back depends on what you heard.


6. Prompt the people first

How do you get people to speak from their lived experience, not from abstract opinions?

The way I see it, it starts with a sense of safety. People only truly share when they feel seen. And it depends on the question you ask. Stories connect: lived experience can't be disputed. But a rational summary? You can argue about that.

Compare:

  • "How do you think collaboration in your team is going?": you get opinions, abstractions
  • "Can you describe a moment when you collaborated well with a colleague?": you get stories, experiences

Pay attention to the precise words too. "How can we..." suggests you already know something can work. "How might we..." opens up possibilities we don't know will work. That difference shapes what people think is an acceptable answer.

I call this the deconstructed burger method: you start with the goal, work back to what puzzle pieces (ingredients) you need, and then design questions that draw each puzzle piece out of people.

The facilitation question comes before the prompt question. First design the experience that generates good input. Only then do you think about AI.
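The deconstructed burger method can be sketched as a small data structure. The goal and puzzle pieces below are invented examples, not a template from this guide; only the two example questions come from the comparison above.

```python
# A hypothetical worked example of the deconstructed burger method:
# start from the goal, work back to the puzzle pieces (ingredients)
# you need, then design a story-drawing question for each piece.
session_design = {
    "goal": "a team plan for collaboration that everyone carries",
    "puzzle_pieces": {
        "lived experience": "Can you describe a moment when you "
                            "collaborated well with a colleague?",
        "friction": "Can you describe a moment when collaboration "
                    "got stuck?",
        "possibilities": "How might we create more of those good "
                         "moments?",
    },
}

# Only after the input experience is designed does AI enter the picture:
# it gets fed the literal answers, not your summary of them.
for piece, question in session_design["puzzle_pieces"].items():
    print(f"{piece}: {question}")
```

Reading the structure top to bottom is the design order; running the questions in the room is what produces the input worth prompting with.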

Have I designed the input experience, or am I jumping straight to the AI prompt?


What this makes possible

These principles are not just guidelines. They open something up.

There is now intelligence you didn't have before. All you need is a transcript and ideas about what you want to do with it. What becomes possible after that is virtually endless.

AI makes visible what's already there. Patterns in what people said, connections you missed live, the ownership that sits in their words. Not by adding something new, but by revealing what was already there.

AI makes participation scalable. What previously was only possible with small groups (truly listening, hearing everyone, collectively arriving at something) becomes possible at larger scale. Not by replacing what's human, but by amplifying it.

And the human work remains. AI can analyze, summarize, recognize patterns. But the safety in the room, sensing the moment, reading faces: that stays with us. AI amplifies what we can do, but doesn't replace who we are.

We're at the beginning of something. What else can we discover about how we, despite our differences, make plans together that work for everyone?


The practice

That discovery starts with practice. These principles are the compass: the three phases that follow show how to apply them.

Phase 1 (Start): Raw material. Making your first transcription, capturing the precise words, and discovering how much you can extract from a transcript.

Phase 2 (Deepening): Magnifying glass. Making patterns visible, reflecting back live, and learning how to collaborate with AI by giving feedback.

Phase 3 (Scale): Mirror and bridge. Comparing multiple conversations, understanding group dynamics, and letting wisdom accumulate over time.
