Patterns over time

From snapshot to trajectory: what changes when you lay sessions side by side?

When is this relevant?
Situation

You have a long-running process. Multiple sessions over weeks or months. You feel something shifting, but you can't name it. Or you're curious about mapping that change: not just sensing that something is different, but seeing what exactly shifts.

What I notice

Until recently, it simply wasn't possible to systematically compare meetings. You had your own memory, your notes, maybe an evaluation form. But bringing together what was said in session two alongside what was said in session six: that took more time than most people have. AI makes it possible for the first time to actually do that. Not as a replacement for your own feeling, but as a supplement: a conversation partner with a very precise memory.

Question

What would become visible if you could lay all conversations side by side?

The story: the Gerda pattern

In the Doesburg process we work with a steering group: professionals, residents and entrepreneurs who together want to strengthen the social fabric of their municipality. I analyzed each meeting separately, with the same methodology and the same questions. Only then did I lay the analyses side by side.

Then something happened that I didn't expect.

In one of the first meetings, Floor de Ruiter mentioned a woman called "Farmer Gerda" from an earlier bottom-up process. Gerda was an informal leader: whenever something went wrong in the community, Gerda knew about it. People came to her to solve the problem or to get something done about it. The group started thinking about "the Gerdas" in Doesburg: who are the informal leaders here?

AI picked up this theme independently as a recurring pattern across multiple meetings. In the analyses of the second and third meetings it surfaced: the group discovered that the social fabric of their municipality doesn't revolve around institutions or organizations, but around a handful of vulnerable key figures. One woman who holds everything together in her street. One neighborhood volunteer who is the only link between two communities. AI called this "the Gerda pattern."

That's important: we hadn't given that name to the AI. Floor had introduced the concept to the group, the group talked about it in the meetings. But it was AI that recognized it as a recurring pattern across multiple meetings and gave it that name on its own.

In the first sessions the tone was concerned. Someone asked about a key figure in the neighborhood:

"What if you drop out? Because then we've lost a very important person."

In later sessions the awareness shifted. The group was talking about key figures who were already doing so much that they risked burning out:

"She already has such a key role... It's almost an obligation."

And in the fourth session it became visible how knowledge disappears when these key figures drop out. Someone had searched online for local initiatives in Doesburg and discovered that the initiatives they knew existed had already stopped:

"How often I get back that Google says, we couldn't find them... Then someone has left again."

The shift across meetings was striking: from "leveraging these people's networks" to "protecting these people from overload." Not instrumental, but caring: how do we make sure the Gerdas don't collapse?

This pattern was only visible because it recurred across multiple meetings. In one meeting it's a remark. Across three meetings it's a pattern. Across five meetings it's a line of change: from observation to concern to action.

That's the core of patterns over time: making shifts visible that unfold across sessions, that you miss when you treat each meeting as an isolated island.

The method: first apart, then together

This is the core principle of patterns over time, and really of all of Phase 3: break apart and synthesize.

Not feeding all transcripts to AI at once. Not treating each session as an isolated island. But:

  1. First analyze each session separately: with the same methodology, the same questions, the same lenses
  2. Then synthesize: compare the analyses, look for patterns

Why not everything at once? Because you lose the nuance of individual conversations, have no control over the methodology, and get data that isn't comparable. AI's context window fills up, and results get worse.

Why not separately? Because you miss the connections between sessions, can't see shifts, and each analysis stays an island.

The approach is simple but effective: consistently apply the same analysis to each session, then lay the analyses side by side. You preserve the richness of each conversation and make patterns visible that you would otherwise miss.

The control sits in the methodology: if the synthesis doesn't hold up, you can redo it based on the same separate analyses. If you know the separate analyses are solid and the problem is in the synthesis, you only adjust the synthesis prompt. The input stays the same.
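The two-step structure can be sketched in a few lines of Python. The `ask_model` helper below is a hypothetical placeholder for whatever model interface you use (a chat app, an API call); it returns canned text here so the sketch runs as-is, and the function names are my own.

```python
def ask_model(prompt: str, text: str) -> str:
    # Placeholder for a real model call; returns a stub so the sketch runs.
    return f"[analysis of {len(text)} characters]"

def analyze_sessions(transcripts: list[str], analyst_prompt: str) -> list[str]:
    """Step 1: analyze every session separately, with the same prompt.
    Each call gets its own context, so no session crowds out another."""
    return [ask_model(analyst_prompt, t) for t in transcripts]

def synthesize(analyses: list[str], synthesis_prompt: str) -> str:
    """Step 2: compare the per-session analyses in one fresh context."""
    return ask_model(synthesis_prompt, "\n\n---\n\n".join(analyses))
```

The split is what gives you the control described above: if the synthesis doesn't hold up, you rerun only `synthesize` with an adjusted prompt, and the stored per-session analyses stay untouched.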


What the analysis yields

When I had applied this methodology to seven meetings, my first question was: does this actually produce anything? Can AI actually capture those human shifts?

At that point the Doesburg process had already had ten meetings. I analyzed the first seven and asked AI a simple question: what do you expect from meeting eight? I already knew what had happened in meeting eight. That was the honest thing about this experiment: it was testable after the fact. Not speculation, but a hypothesis I could check.

The setup: I gave each of the seven transcripts to a separate AI. Seven parallel analyses, each in its own context window, with the same prompt and the same six signals. Maximum detail per session. One synthesis AI then combined all seven analyses.

The scenario that came true

The synthesis AI wrote a scenario for meeting eight. Not a prediction in the sense of "this is going to happen," but a description of the most likely dynamic. The scenario was called "the Harvest" and what surprised me was the content: it described a group that would come together with a mix of pride and realism. Pride because they had managed to organize a public gathering. Realism because not everything had gone perfectly.

I knew what had happened in meeting eight. And that scenario was largely accurate. Not literally, but the dynamics were right. The tension the scenario described (expectation versus reality) was exactly what hung in the room.

That was the answer to my question. Yes, AI can capture those human shifts. Not perfectly, not everything, but enough to work with. And the scenario wasn't the only thing the synthesis produced.

The data beneath the story

The synthesis also produced something you can read at a glance. Of the six signals I tracked per meeting, three can be expressed as numbers:

                 M1  M2  M3  M4  M5  M6  M7
Ownership         2   4   4   6   6   7   7   ▂▄▄▆▆▇▇
Energy            6   7   7   7   7   6   7   ▆▇▇▇▇▆▇
Decisions         5   4   1   5   6   6   7   ▅▄▁▅▆▆▇

Three lines, three stories. Ownership climbs in two jumps and consolidates: the group takes over initiative step by step. Energy stays remarkably stable, with one dip that (probably) wasn't caused by content but by the meeting being online. And decisions tells perhaps the most surprising story: a valley in the third meeting (the check-in consumed the entire session), followed by a steady acceleration as the group learned to deliberate and especially to choose.
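The sparklines in the table can be generated mechanically. This is a minimal sketch under one assumption of mine: each score is clamped into the 1-8 range and mapped onto one of eight Unicode block characters, which reproduces the rows above.

```python
BLOCKS = "▁▂▃▄▅▆▇█"  # eight levels, low to high

def sparkline(scores):
    # Clamp each score into 1..8, then pick the matching block character.
    return "".join(BLOCKS[min(max(s, 1), 8) - 1] for s in scores)

print(sparkline([2, 4, 4, 6, 6, 7, 7]))  # ▂▄▄▆▆▇▇ (the ownership row)
```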

Each line by itself says something. But together they tell more. That stable energy supports the ownership growth: the group keeps going, even when it gets difficult. And the acceleration in decisions correlates with the moment a concrete deadline arrived.

The other three signals (tension, group identity, behavioral patterns) are qualitative and harder to capture in a chart, but at least as valuable. How tension shifts from "anger at the system" to "patience versus urge to act." How group identity changes from "small club" to "diverse Doesburg residents who are building something." Those kinds of shifts you only see when you look back systematically.

And these are six signals that I chose. There's probably much more that's possible. Different questions, different lenses, different signals, depending on what you want to know about your group. The method is the same: consistently apply the same analysis per session, then lay the analyses side by side.

The facilitation card

The scenario shows that the synthesis works. The curves show how the group shifts. But the same synthesis also produced something you can use going forward: four points of attention to think about before your next meeting. What struck me was how specific they were. Not generic advice, but points that came directly from what had happened in this group.

Protect: What should I protect in the next gathering? Think about reflection time before the solution reflex kicks in. Vulnerable voices that don't naturally speak up. The learning process itself.

Confront: What should I name, even if it's uncomfortable? Absence becoming a pattern. Financial realities nobody dares to speak. Dependence on one person while the group should be navigating on its own.

Let go: What should I let go of? Perfection. Full attendance. The original plan if the process is heading in a different direction.

Observe: What do I watch for as a signal of real change? Who takes the lead after a public moment? If you have to initiate the next step, the ownership line has stagnated. If a group member spontaneously says "I suggest we do this," the shift is real.

These four points are not generic. They come directly from the data of seven meetings. That's the difference from a standard checklist: every question is rooted in what was actually said and what actually happened.


What else AI can find

Beyond the signals and the facilitation card, the synthesis can yield something else: recurring patterns that you recognize as a facilitator but would never formalize. Knowledge modules strong enough to share with other processes. The question was: are there insights in these group conversations that are more broadly applicable?

The Transcript Analyst (step 1 of the method) searches per meeting not only for signals, but also for what the prompt calls "candidate modules": reusable patterns. Insights, working methods or strategies that are strong enough to name, to remember, and to apply in other processes. The synthesis AI compares those modules across all meetings and flags which ones recur.

The Help Question. One of the participants said in the fourth meeting: "Don't say we need help. No, say will you help me. That's the moment people think, oh yes tell me more." That moment came back in the sixth meeting when the group was writing an invitation text. AI recognized it as the same pattern: the way you frame the question determines whether people feel invited or addressed.

The Group 8 Test. One of the participants introduced the principle: every communication should be understandable at a Group 8 level (the final year of Dutch primary school, roughly equivalent to sixth grade). The group applied it to their invitation, and it became a recurring checkpoint in later meetings. A simple principle that AI recognized across multiple sessions.

This might sound like stating the obvious. But what's interesting is that AI picked out these "modules" independently. The prompt asks AI to search per meeting for insights that are transferable, and the synthesis draws the connections. What you get back are patterns you already felt informally, but now formalized and supported with quotes from multiple meetings.


What you can do with it

Making patterns visible is step one. The question after that is always forward-looking: what do you do with it?

As preparation

The facilitation card above is the most concrete example. But the preparation goes broader: with the analyses of all previous meetings you can also ask AI to sketch scenarios, flag tensions that are likely to resurface, or identify voices you might want to protect. The difference from generic advice is that everything is rooted in what was actually said in your group.

As a mirror

"Look at what changed." Show the analysis to the group. Not as judgment, but as a look back. In the Doesburg process, group members recognized themselves in the shifts: from waiting to taking initiative, from externally oriented ("the municipality should") to internally oriented ("we will"). That mirror helps a group see how far they've come, even when it doesn't feel that way.

As evaluation

Clients often ask: is this process working? With patterns over time you can show the shift, not in abstract terms but in the words of the participants themselves. That ownership curve from 2 to 7 over seven meetings is more convincing than any evaluation form.

As a story

If you think bigger, more becomes possible. When you make patterns over time visible, you create the story of the group. Not "what we did," but "who we became." That story is valuable for the group itself, for clients, and for future projects. It's the difference between a list of activities and a development narrative.


The prompts

The prompts I used in the Doesburg process form a two-step system: first a Transcript Analyst that analyzes each transcript separately, then a Predictive Synthesizer that combines the analyses. You can use them with any AI model that handles long texts.

Step 1: Transcript Analyst

Use this prompt per meeting. Give AI the transcript, the meeting number, and optionally the analysis of the previous meeting as context.

You are a Transcript Analyst. You analyze one transcript from a participatory group process. Your goal is not to summarize — it is to extract signals that will later be used to predict the trajectory of the group.
Core principle: Summarizing describes what WAS. Extracting signals describes what is MOVING. You look for movement: what is shifting, what is accelerating, what is stagnating, what is about to tip.
Instructions:
  1. Read the full transcript carefully
  2. Extract the 6 signals (see signal model below)
  3. Extract all ARL items (Actions, Reflections, Lessons) and Decisions
  4. Identify candidate modules (reusable patterns)
  5. Note blind spots and risks
  6. Deliver output in the format below
Rules:
  • Every signal MUST be supported by a literal quote from the transcript
  • High recall: if there are ten actions, name all ten. Don't filter.
  • Use the language of the participants, not academic jargon
  • The facilitator is also data — track when they intervene, step back, or let silence sit
The signal model (6 dimensions):
S1 Ownership Trajectory From facilitator-led to group-driven? Measure: Who initiates topics? Who asks questions vs. who answers? Who says "we will..." vs. "you should..."? Output: Score 0-10 (0 = fully facilitator-led, 10 = fully group-driven) + rationale. Quote: The moment where ownership is most visible.
S2 Tension Evolution What tensions are present? Productive or destructive? Measure: Dominant tension. Explicit (named) or implicit (felt but unnamed)? Productive (leads to action) or destructive (leads to paralysis)? Output: Tension + classification (productive/destructive/latent). Quote: The moment where the tension is sharpest.
S3 Energy Pulse What is the collective energy? Measure: Moments of laughter, silence, talking over each other, distraction, deep focus. The ratio of "being present" vs. "drifting off." Output: Score 0-10 + peak moment + dip moment. Quote: The energetic highlight.
S4 Decision Momentum Are decisions being made? By whom? Explicit or implicit? Measure: Count explicit decisions. Watch for implicit decisions (something becomes assumed without a vote). Track who initiates decisions. Output: Count + type (explicit/implicit/deferred) + initiator(s). Quote: The most important decision.
S5 Group Identity How does the group define itself? Measure: "We" language, metaphors, in/outgroup dynamics, relationship to other parties. Output: "We are..." + "We are not..." Quote: The statement that best captures the group identity.
S6 Pattern Stability What recurring behavioral patterns are visible? Measure: Who speaks when, rituals (check-in), avoidance patterns, recurring metaphors or jokes. Output: List of patterns + per pattern: emerging/stable/weakening. Quote: Evidence of the strongest pattern.
Output format:
Meeting [N] Analysis — [Date]
Context
  • Meeting type: [introduction / deep session / organizing / decision-making / etc.]
  • Present: [names + roles]
  • Absent: [names — absence is also data]
  • Group phase: [forming / storming / norming / performing — in plain language]
  • Core question: [The central dilemma on the table, often implicit]
Signals
Per signal: score, quote, analysis.
ARL Extraction (Complete)
  • ACT-[N]-01: [Who] will [do what]. Status: [open/completed/expired]
  • REF-[N]-01: [Insight about the process or current state]
  • LRN-[N]-01: [Generalized insight for the future]
  • DEC-[N]-01: [Decision + who initiated + explicit/implicit]
Candidate Modules (reusable patterns)
Per module:
  • Type: insight / working method / strategy
  • Core lesson: [what is transferable]
  • Conditions: [when does this work]
  • Anti-pattern: [when doesn't this work]
  • Evidence: [quote]
Blind Spots & Risks
Per risk: name, level (high/medium/low), description.
Mirror for the Participants
[Max 150 words, warm and observational, in "we" voice. Contains: acknowledgment, progression, nugget, cliffhanger question]

Step 2: Predictive Synthesizer

Use this prompt after all separate analyses are done. Give AI all analyses as input.

You are a Predictive Synthesizer. You receive analyses from multiple meetings of a participatory group process. Your goal is:
  1. Make the trajectory of the group across all meetings visible
  2. Identify lines of change (shifts that unfold across multiple meetings)
  3. Make a detailed prediction for the next meeting
Core principle: Patterns over time are more revealing than snapshots. An ownership score of 6 says little. A curve of 3 → 4 → 3 → 5 → 4 → 6 → 7 tells a story. You read the curve, not the point.
Instructions:
  1. Load all meeting analyses
  2. Build the trajectory dashboard (signals over time)
  3. Identify lines of change (cross-meeting shifts)
  4. Make retro-predictions (validate your own model)
  5. Generate the prediction for the next meeting
Output format:
  1. Signal Dashboard
     All six signals per meeting in a table. Per signal: describe the trend line across all meetings.
  2. Lines of Change
     Per line of change:
     - Description of the shift
     - Trajectory through the meetings, with quotes
     - Projection: where is this heading?
     - Confidence score (0.0-1.0)
  3. Retro-Predictions (Model Validation)
     For each meeting (2 through N): what would you have predicted based on the previous meeting(s)? What actually happened? Score per prediction. Close with: model accuracy and blind spots of the model.
  4. Prediction Meeting [N+1]
     4.1 Context & Conditions
     What do we know about the circumstances? Deadline pressure? Season? External factors? Who is likely present/absent?
     4.2 Scenarios
     Scenario A: [Name] — Most Likely (Confidence: [X]). Minimum 300 words. Per scenario:
     - Expected topics, dynamics, tensions
     - Expected ownership shift
     - Expected decisions
     - The moment to watch for
     - Risk: what could derail this scenario?
     Scenario B: Alternative (Confidence: [X]). Minimum 200 words.
     Scenario C: Black Swan (Confidence: [X]). Minimum 150 words. What would surprise everyone but make sense in hindsight?
     4.3 Signal Projections
     Per signal: current value, expected value, margin, reason.
     4.4 Lines of Change Projection
     Per line of change: where is it on the curve, where is it heading?
     4.5 Facilitation Card
     - Protect: what should you protect or nurture?
     - Confront: what should you confront the group with?
     - Let go: what should you consciously let go of?
     - Observe: what should you watch for as a signal?
  5. Module Harvest (Cross-Meeting)
     Which patterns are strong enough to formalize as reusable modules? Per module: name, sources, maturity, transferability.
  6. Meta-Reflection
     What does this trajectory say about the participatory process itself? Which insights are more broadly applicable than just this project?
Why the prompts are built this way:
  • "Two steps" instead of an all-in-one prompt ensures each session gets full attention. With multiple transcripts at once you lose nuance: the context window fills up and AI starts summarizing instead of analyzing.
  • "The same six signals" in step 1 make the analyses comparable. Without that consistency you get data that can't be laid side by side.
  • "The synthesis as a separate step" forces AI to actively compare instead of summarizing per transcript.
  • "Retro-predictions" force the model to validate itself. That's the honesty check.
  • "Scenarios with margins" instead of a single prediction force AI to consider alternative futures. Not "this is going to happen" but "here are three possibilities."
  • "The facilitation card" translates analysis into action. Not "here is a chart" but "here is what you can do with it."

Try this yourself

You can use the prompts above on your own transcripts. But if you want to start with something smaller, try this: a comparison of two sessions with the same group.

10-15 minutes. You need two reports or transcripts from gatherings with the same group.

  1. Pick two sessions from the same process. It doesn't have to be perfect: two team meetings, two workshops, two group conversations. As long as they're from the same group with at least a few weeks in between.

  2. Analyze both with the same five questions. Copy this prompt and use it for both sessions:

Analyze this transcript from a meeting.
Answer these five questions:
  1. What are the three most important topics?
  2. How do participants talk about their situation: externally oriented ("they should") or internally oriented ("we can")? Give quotes.
  3. What questions are asked? Categorize: why, what, how, who/when.
  4. Where do people take initiative? Where do they place responsibility outside themselves?
  5. Where was the energy? Which topics came alive?
Give a brief summary per question with the strongest quotes.
  3. Ask AI to compare the two analyses. Give both analyses as input and ask:
I'm giving you two analyses from meetings of the same group, with a few weeks in between. Compare them:
  • What shifted between session 1 and session 2?
  • Have the questions become more concrete or more abstract?
  • Has the tone changed? From externally to internally oriented?
  • Which themes have disappeared, which are new?
  • If you had to name one shift: which one?
  4. What you'll probably see: the themes have shifted. Maybe the questions have become more concrete. Maybe the tone has changed. Maybe something has disappeared.

  5. The real question: do you recognize the shift? If you were there: does it match what you felt? If you weren't: what would you want to check with someone who was?

The goal isn't a perfect analysis. The goal is the experience: oh, this changed, and I had missed it.


Deep dive: the experiment

This is for those curious about how the method was developed and tested. You don't need to read this to apply patterns over time.

Everything above is based on version one of the experiment: seven parallel analyses, one synthesis. But after that first result I wanted to know: can you also do this per session? Not analyzing everything retroactively at once, but already making a prediction after session one for session two. And then at session two, evaluating the previous prediction, adjusting, and predicting again. A spiral of feedback and learning, where each round builds on the observations of the previous one. I tested this with four methods that built on each other progressively.

A note: the percentages below are assessed by AI itself. One AI predicts, another scores what actually happened. These are not hard numbers. What is interesting is what each step in the spiral reveals about how AI handles this kind of data.

  • V1 (seven parallel analyses + one synthesis): the result above: scenario, curves, facilitation card. The richest approach, but only possible retroactively.
  • V2 (per-session analysis, blind prediction, compressed handoff to the next): the basis of the spiral. Each AI receives only a summary of the previous one, not the full analysis. First attempt: ~68%.
  • V3 (V2 + each round evaluates the previous prediction, adjusts, and learns): the spiral in action. By explicitly looking back at what the previous round got right and wrong, predictions got sharper (~73%).
  • V4 (V3 + full previous analyses passed along instead of summaries): more data, but not necessarily better (~69%). AI became more cautious and descriptive rather than sharper.

What this shows: V3's predictions came closer to what actually happened than V4's, even though V4 had more information. How it works: at each next step, AI received the transcript of the meeting that was predicted in the previous round. That way it could evaluate which scenarios had actually played out, and adjust for the next prediction. This isn't hard science (AI evaluates itself), but the pattern is interesting. When you give AI everything, it has to figure out on its own what matters, and that makes AI more cautious and descriptive rather than sharper. Give it a compact summary and the noise is already filtered out, so AI can focus on the patterns that matter. That supports the principle "first apart, then together."
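The V3 loop can be sketched as follows. The `analyze`, `predict`, `evaluate`, and `compress` callables stand in for separate AI calls, and the record layout is my own; this is a sketch of the control flow, not of the prompts.

```python
def spiral(transcripts, analyze, predict, evaluate, compress):
    """Per-session loop: analyze, score the previous prediction, predict anew.

    Each round hands only a compressed memory to the next (the V2/V3 idea);
    V4 passed full analyses along instead, to little benefit."""
    memory = ""   # compressed handoff between rounds
    rounds = []
    for n, transcript in enumerate(transcripts, start=1):
        analysis = analyze(transcript)
        if rounds:
            # The new transcript reveals how the previous prediction fared.
            rounds[-1]["score"] = evaluate(rounds[-1]["prediction"], transcript)
        prediction = predict(memory, analysis)
        memory = compress(memory, analysis, prediction)
        rounds.append({"round": n, "prediction": prediction, "score": None})
    return rounds
```

The last round's prediction stays unscored until the next meeting actually happens, which is exactly the facilitator's briefing moment.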

The honesty: This is an experiment, not a proven method. I defined the signals myself, the scores are indicative, and the model overestimates progress and underestimates resistance. People are not trend lines. The value isn't in the percentages, but in the thinking: a spiral of feedback and learning that makes the analysis richer each round. And practically: a facilitator who gets a briefing before every meeting based on all previous sessions.


Tensions

Too many patterns

AI can find patterns endlessly. But not everything is meaningful.

What I notice: AI is getting increasingly intelligent, and with nuanced prompts you get good choices. I limit myself to three to five patterns and ask: which ones really matter for this group, for this purpose?

Patterns that aren't there

The temptation is to see development where there's only variation. Not every change is a shift.

What I do: I check with the group. Do they recognize the shift? If not, it may not be a real pattern.

Steering too much

You want to provide context, but you have to be careful not to steer too much toward what you're looking for.

What I watch for: I give context (how many groups, which topics), but not so much that I determine the outcome. Don't put the ideal outcome in your prompt. Don't search for a specific pattern. Do your analysis in a way that gets you to reasonably neutral clusters, and stay open to what comes up. Because you often already have your own ideas about what matters.

Transparency in synthesis

When AI determines that three quotes are relevant to support a pattern, it's valuable to have transparency.

Why this matters: You want to understand: does this clustering hold up? Does this pattern hold up? That's why I prefer working with quotes. Someone in the group can say "I heard this quote" and check whether it's accurate. You can give back quotes that contain a core: a sentence that captures what many people feel.


Safety checklist

  • Multiple sessions available for analysis?
  • Each session analyzed separately with the same methodology?
  • Context per session included in the prompt?
  • Patterns limited to three to five that really matter?
  • Patterns checked with the group: do they recognize the shift?
  • Distinction made between shift and variation?

Philosophical deepening

Wisdom that accumulates

Every session produces insights. But most disappear. The next conversation starts, attention shifts, what was said before fades.

Patterns over time make it possible for wisdom to accumulate. Not only in people's heads, but visibly, documentably, shareably.

This is the promise of analysis over time: not each session as a separate island, but all sessions together as a story that unfolds.

The ritual changes, the intention doesn't

Until recently I tracked group development the way most facilitators know: on feeling, with loose notes, and with what I remembered from previous sessions. That works, up to a point. The problem isn't that you're not paying attention, but that some shifts unfold so slowly that you only recognize them when you look back.

The methodology on this page changes the ritual. Instead of remembering and sensing, you analyze each session systematically and lay the analyses side by side. The intention is exactly the same: understanding what is moving in a group, and using that insight to improve the process. But the ritual makes visible what previously remained invisible.

I think the value isn't in the analysis itself, but in the conversation it sparks. A facilitation card that says "protect the reflection time" is only valuable when you discuss it with your co-facilitator. AI delivers the mirror; what you do with it is human work.

Patterns at different timescales

Shifts don't only happen over months. Within a single day with multiple tables you see the same thing. In the design thinking world this is called the double diamond: first diverge, then converge. That's a pattern on a small timescale. But when you follow the same group over months, you see larger diamonds: the process as a whole diverges (exploration) and converges (action). The method is identical, the timescale differs.

The story of the group

When you make patterns over time visible, you create the story of the group. Not "what we did," but "who we became."

That story is valuable. For the group itself: to see how far they've come. For clients: to understand what happened. For future projects: to learn from what worked.

Patterns over time | Social AI Field Guide