Phase 2: Deepen
Magnifying glass: AI makes visible what was intuitively present, but hadn't been put into words yet.
The foundation is in place. Now we go deeper: from recording to discovering, from transcripts to patterns.
Where are you?
You have transcripts. You've made your first syntheses. But now you want to go further. The conversation was rich, so how do you get more out of it?
Maybe you sense there are patterns in there that you can't name. Maybe you want to go back to the group with a reflection that moves the conversation forward. Maybe you're stuck in an iteration with AI and you're not getting what you're looking for.
This is the phase of deepening: no longer recording what's there, but discovering what's in it.
The story: twelve rounds
You may already know this story from source document style cloning. There it was about the technique: how you incorporate style characteristics into your prompt. Here it's about something else: the process itself. What does iteration with AI actually look like?
A transformation plan for a mental healthcare network. Thirty people had given input. Now that needed to be turned into a document the health insurer would accept.
There was already an approved transformation plan. The question: how do you write new sub-plans in the same style?
This didn't become a one-time prompt. It became twelve rounds.
The twelve rounds, in short:

1. I describe what I want.
2. AI proposes a step-by-step plan.
3. I add the playbook for context.
4. AI adjusts the prompts to the playbook.
5. I ask for three specific prompts.
6. AI delivers three prompts, but the style isn't right yet.
7. I ask for universal versions.
8. AI adjusts, but something's still missing.
9. I correct: "The style needs to be IN the prompt, because AI doesn't have access to the example."
10. AI processes the correction.
11. I clarify the context-window situation.
12. Prompts are ready.
My lesson: a good prompt doesn't happen in one go. It evolves through feedback.
What can you do with AI?
In Phase 1 it was about recording: transcription, preserving language, cloning style. In this phase it's about deepening: recognizing patterns, iterating together with AI, and designing the input that makes rich output possible.
Getting what's in the transcript out
What I keep noticing: a transcript contains more than you initially see. Sometimes it's your intuition finding words: patterns you felt but couldn't name. Sometimes you know there's more in there than you could process in the moment. AI can work as a magnifying glass on what's already there.
Putting intuition in writing
You sense something is going on: a recurring theme, a tension nobody voices, a dynamic you can feel. Sometimes you can name it, sometimes you can't. But even when you can name it, putting it down in writing makes it discussable and referable.
AI can scan the transcript for the patterns you intuitively sense. Not to replace your intuition, but to give it words. The result is often confirmation of what you already knew, but now you can share it, discuss it, and refer back to it later.
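A hunch like this can be handed to AI as a testable question rather than a vague request. A minimal Python sketch; the function name and prompt wording are illustrative assumptions, not a prescribed format:

```python
def intuition_prompt(hunch, transcript):
    """Ask a model to test a facilitator's hunch against the transcript:
    evidence for, evidence against, and shareable words for the pattern.
    (Illustrative wording; adapt to your own tool.)"""
    return (
        f"My hunch after this session: {hunch}\n\n"
        "Go through the transcript below and:\n"
        "1. Quote passages that support this hunch.\n"
        "2. Quote passages that contradict it.\n"
        "3. If it holds, phrase the pattern in one shareable sentence.\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = intuition_prompt(
    "People avoid talking about the budget.",
    "A: Let's park the numbers for now. B: Agreed, later.",
)
```

Asking for counter-evidence as well keeps the model from simply confirming what you already believe: it tests your intuition instead of replacing it.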
This is also a form of democratization: recognizing group dynamics and undercurrents was previously reserved for experienced facilitators with years of training. AI makes this skill more accessible. Similar to how software developers are shifting from writing code to directing the whole: the deep knowledge remains essential, but the application changes.
What I notice myself: I now see things I used to miss. It gives me more confidence to name patterns, because I have evidence. And perhaps most valuable: I learn to recognize new patterns, and ways to work with them next time.
Digging together for what's in there
You've had a session. The conversation was valuable, but you know: there's more in it than what you could name right away. Not because you missed something, but because every rich conversation contains more than one person can process in one moment.
AI can help by digging together: looking for structure, finding hooks for connection, identifying striking quotes. The transcript is raw material: foundation for deepening. And the nice thing is: this doesn't have to be solo work. You can build a prompt together with AI: AI asks questions, you give direction, and the analysis becomes sharper and sharper.
But how do you actually work together with AI to do this? That requires a crucial skill.
Iterating instead of tinkering
When AI output isn't right, the reflex is to go in and adjust it yourself. A sentence here, a word there. But then you miss the chance to collaborate, and to teach AI (and sometimes yourself) what you actually mean.
What works better: give AI feedback and let it try again. "This isn't right because..." or "What I actually mean is..." That takes some getting used to, but it yields better results. And you simultaneously learn how to communicate more effectively with AI.
The story of the twelve rounds above is an example of that. Every round brought new information, new corrections, new insights, until the result was right.
And if you ended up having to adjust a lot yourself? Then that's valuable information. Your adjustments show what was missing in your original prompt. Give your adjusted version back to AI with the question: "What should I have asked differently to get this result directly?" That way, every iteration becomes a lesson for next time.
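The feedback loop described above can be made explicit. A minimal Python sketch; `ask_ai` and `review` are stand-ins for any chat model call and your own judgment, nothing more:

```python
def iterate(ask_ai, review, task, max_rounds=12):
    """Refine output in rounds: feed corrections back to the model
    instead of editing the draft yourself.

    ask_ai: callable taking the conversation so far, returns a draft.
    review: callable taking a draft, returns feedback text, or None when done.
    """
    conversation = [f"Task: {task}"]
    draft = ""
    for round_no in range(1, max_rounds + 1):
        draft = ask_ai("\n".join(conversation))
        feedback = review(draft)
        if feedback is None:                  # good enough: stop here
            return draft, round_no
        # Keep draft AND correction in the history for the next round
        conversation.append(f"Draft {round_no}: {draft}")
        conversation.append(f"Feedback: {feedback}")
    return draft, max_rounds                  # accept what you have

# Stubbed example: this fake 'model' only gets the style right
# after it has seen style feedback in the conversation.
def fake_ai(prompt):
    return "styled draft" if "Feedback" in prompt else "plain draft"

def fake_review(draft):
    return None if "styled" in draft else "The style must be IN the prompt."

result, rounds = iterate(fake_ai, fake_review, "write a sub-plan")
```

Keeping every draft and every correction in the conversation is the point: each round carries the full feedback history, just as each of the twelve rounds in the story built on the corrections before it.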
You've now seen how iteration works. But the best deepening actually starts earlier. How do you make sure there's something valuable in the transcript to begin with?
Prompt the people first
Before you think about what AI does with the output, there's a more important question: how do you make sure the input is rich enough? The quality of what people share determines what AI can do with it. A good question yields richer answers than a bad one. A safe setting yields openness. The structure of your workshop determines what ends up in the transcript.
The quality of AI output depends on the quality of human input. That starts with the questions you ask: in advance when designing your session, and in the moment when the group gets stuck. This is the human work that precedes every AI prompt.
A concrete sub-technique that comes from this: reframing questions. "What do you think about the collaboration?" yields abstract answers. "Can you describe a moment when the collaboration felt good?" yields concrete stories, and directs the energy toward what people want more of.
This is a distinctly human skill: sensing on the spot that a different question is needed, and then asking it. AI can help you develop this skill, for example by asking afterwards: "The group got stuck on this question. What could I have asked differently?" That way you learn techniques you can apply next time.
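One way to structure that afterwards-question is to turn the stuck moment into a debrief prompt. A sketch, with illustrative wording:

```python
def debrief_prompt(stuck_question, transcript_excerpt):
    """Ask the model how a question that stalled the group
    could have been reframed toward concrete stories.
    (Illustrative wording.)"""
    return (
        "During a workshop the group got stuck on this question:\n"
        f'"{stuck_question}"\n\n'
        "Here is what was said around that moment:\n"
        f"{transcript_excerpt}\n\n"
        "Suggest two reframed versions that invite concrete stories "
        "('Describe a moment when...') instead of abstract opinions."
    )

prompt = debrief_prompt(
    "What do you think about the collaboration?",
    "Long silence, then small talk.",
)
```

The technique stays human: you still have to sense, in the room, that a different question is needed. The debrief only grows your repertoire for next time.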
And when you do this well (the people well-prompted, the process well-designed) then something else becomes possible.
Giving live reflection back to the group
Imagine: a group conversation is going in circles. The same points keep coming back, but nobody names the core. AI can analyze the conversation so far and formulate a question that helps the group move forward. Not a summary for later, but an intervention in the moment.
In tools like Dembrane this is called the "echo button": one press, and AI reads the transcript and asks a question that names the tension or the pattern. The effect is sometimes surprising: the group sees itself. It's not a replacement for the facilitator, but a mirror.
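Under the hood, such an "echo" amounts to one prompt over the running transcript. A minimal sketch; the wording is an assumption, not Dembrane's actual implementation:

```python
def echo_prompt(transcript_lines):
    """Turn the conversation so far into one mirroring question,
    not a summary. (Illustrative wording.)"""
    transcript = "\n".join(transcript_lines)
    return (
        "You are listening in on a live group conversation.\n"
        "Transcript so far:\n"
        f"{transcript}\n\n"
        "In one sentence, name the pattern or tension you hear. "
        "Then ask one open question that helps the group move forward. "
        "Do not summarize: mirror."
    )

reflection = echo_prompt([
    "A: We keep coming back to the same point.",
    "B: Because nobody says what it's really about.",
])
```

The instruction "do not summarize: mirror" is the design choice: a summary closes the conversation, a mirror reopens it.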
This is where everything comes together: you've learned how to get depth from transcripts, how to iterate with AI, how to design the input. Now you can apply that while the conversation is still happening.
From conversation to live document
One step beyond live reflection: what if you don't just give back a question, but generate an entire draft document while the session is still in progress?
In strategic sessions or workshops where a plan needs to emerge, you can use AI to generate draft sections during the breaks. People see their words reflected back right away, structured in the format the organization needs. From a day's work to minutes, not for efficiency, but for ownership. Because the feedback loop shortens: what people said is still fresh, they recognize it immediately.
This does require a co-facilitator handling the tech, while you stay with the group. And it requires validation along the way: never present it as if AI has captured the truth.
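In practice, the co-facilitator's break-time job can be as simple as building one drafting prompt per agenda item. A sketch; function names and wording are assumptions:

```python
def section_prompts(agenda, notes_by_item, format_hint):
    """One drafting prompt per agenda item, locked to what was
    actually said, with uncertainties flagged for validation.
    (Illustrative wording.)"""
    prompts = {}
    for item in agenda:
        notes = notes_by_item.get(item, "(no notes yet)")
        prompts[item] = (
            f"Draft the section '{item}' of the plan.\n"
            f"Required format: {format_hint}\n"
            "Use ONLY what participants said below. "
            "Mark anything uncertain with [CHECK] so the group "
            "can validate it after the break.\n\n"
            f"Notes:\n{notes}"
        )
    return prompts

drafts = section_prompts(
    ["Vision"],
    {"Vision": "We want shorter waiting lists and more peer support."},
    "heading plus three bullet points",
)
```

The `[CHECK]` marker operationalizes the validation rule: the draft comes back to the group as material to correct, not as captured truth.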
What stays human?
In this phase, the collaboration with AI becomes more intense. You give feedback, you iterate, you steer. But the more AI can do, the more important your role becomes, and especially your judgment.
The difference between a live reflection that lands and one that falls flat isn't in the analysis (AI can do that just fine). It's in the moment. In sensing that the group is now ready for a mirror, or not. In seeing that someone wants to say something but hesitates. In knowing when silence is productive and when it's stuck.
| AI can... | Human must... |
|---|---|
| Recognize patterns | Validate whether they hold true |
| Name blind spots | Judge whether they should be discussed now |
| Put intuition into words | Sense whether the words are right |
| Iterate quickly | Provide direction and add nuance |
| Suggest input structures | Create the safety to share |
| Reframe questions | Choose the right moment to ask |
What stays most human? A feel for the room. Intuition. Timing. AI can generate the perfect question, but you need to sense when that question helps, and whether the group is ready for it.
Tensions in this phase
**Intervening too early vs. too late.** The impact of a live reflection is in the timing. Too early feels like an interruption: as if you don't trust the conversation. Too late, and the energy is already gone. That timing is human work: sensing when the group is ready for a mirror.

**Reading aloud vs. interpreting.** Sometimes reading AI output verbatim is exactly right: a quote, a short reflection. Sometimes you want to summarize in your own words instead. The question is: what output does the group need right now, and did you specifically ask for that?

**Doing it yourself vs. collaborating.** When AI output isn't right, the reflex is to adjust it yourself. A sentence here, a word there. But then you miss the chance to collaborate, and to learn what you actually mean. Giving feedback takes some getting used to, but it yields better results.

**Pushing through vs. accepting.** Iterating is effective, but perfection can paralyze. Sometimes 80% is good enough. After three rounds without improvement, it's time to choose: accept or try a different approach.

**Patterns vs. noise.** AI always finds something. The question is: is it a real pattern or coincidental overlap? Your experience in the room is the test.

**Introduction: experiment vs. normal part of the process.** How you introduce AI determines how people engage with it. "Let's see what AI makes of this" invites curiosity. "We're going to do an experiment with AI" invites judgment.
That carries through. At first, participants look with the question "is this correct?" instead of "do I recognize myself in this?" What I notice is that this shift happens naturally, but you can speed it up. The first time: explicitly state that you're looking for recognition, not assessment.
Techniques in this phase
| Technique | What it does |
|---|---|
| Live reflection with AI | Real-time reflection back to the group |
| From conversation to plan | Generate a live draft document during a session |
| Intuition in writing | Capture patterns you sense and make them discussable |
| What else was in there | Dig together for depth in your transcript |
| Iteration | Collaborating with AI in rounds |
| Prompt the people first | Design the input experience before the AI prompt |
| ↳ Reframing questions | In-session: casting abstract questions in a different light |