Safe practices with AI
Privacy, boundaries, and responsible use.
The basic principles
Working with AI and sensitive conversations requires conscious choices. Not because AI is inherently unsafe, but because the barriers are so low that it's easy to forget what you're sharing.
1. Know where your data goes
Cloud vs Local:
- Cloud services (ChatGPT, Claude, Notion.ai) store data on servers
- Local tools (MacWhisper) process everything on your own device
The choice:
| Situation | Recommendation |
|---|---|
| Sensitive conversations (therapeutic, personnel issues) | Local |
| Internal organizational matters | Consider carefully, check policy |
| Public or less sensitive content | Cloud is fine |
| When in doubt | Local |
2. Ask permission
When recording:
"I'm recording this conversation so we can look back at what was said. The recording stays with me and will only be used for [purpose]. Is that okay for everyone?"
Watch for non-verbal signals. Not everyone speaks up when they're uncomfortable.
When sharing with AI: If you're sending transcripts to cloud services, do participants know that? Being explicit is respectful.
3. Anonymize where needed
What to anonymize:
- Names of individuals
- Identifying details (roles, departments, specific situations)
- Anything traceable to individuals
When:
- Always with sensitive content
- Always when sharing outside the direct context
- When in doubt: anonymize
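Before a transcript leaves your machine, a simple replacement map can handle the basic cases. This is a minimal sketch with made-up names; plain string replacement misses misspellings and indirect identifiers, so for genuinely sensitive data a dedicated redaction or NER tool is safer than a hand-maintained list.

```python
# Minimal anonymization sketch: replace known names and identifying
# details with neutral placeholders before sharing a transcript.
# The mapping entries below are illustrative; build your own per session.

def anonymize(transcript: str, mapping: dict[str, str]) -> str:
    """Replace every occurrence of each sensitive term with its placeholder."""
    for term, placeholder in mapping.items():
        transcript = transcript.replace(term, placeholder)
    return transcript

mapping = {
    "Anna de Vries": "Participant A",
    "head of HR": "role X",
}

text = "Anna de Vries said the head of HR was not informed."
print(anonymize(text, mapping))
# → Participant A said the role X was not informed.
```

Always review the result manually: "the person who started in March" identifies someone just as surely as a name does, and no replacement map will catch it.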
The prompt safeguards
With every prompt for sensitive conversations, use these safeguards:
Strictly based on transcript
Base yourself strictly on what's written; do not fabricate.
This prevents AI from adding information that isn't there.
When in doubt, flag it
When in doubt: phrase as "possibly" or "it seems like" rather than definitive statements.
This prevents AI from simulating certainty that doesn't exist.
Preserve language
Use the exact words and phrasings of participants. No paraphrasing into professional language.
This prevents ownership from disappearing through translation.
Mark AI output
Make an explicit distinction between what participants said and what AI observes.
This prevents AI interpretations from being confused with what was actually said.
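The four safeguards can be bundled into a reusable preamble that you prepend to every prompt, so none of them gets forgotten under time pressure. The wording below is one possible phrasing, not a canonical formula; adapt it to your own voice.

```python
# Reusable safeguard preamble for prompts over sensitive transcripts.
# The exact phrasing is a suggestion, not a prescribed formula.

SAFEGUARDS = "\n".join([
    "- Base yourself strictly on the transcript; do not fabricate.",
    "- When in doubt, write 'possibly' or 'it seems like', never certainty.",
    "- Preserve the participants' exact words; do not paraphrase into jargon.",
    "- Mark every observation of your own as [AI observation].",
])

def build_prompt(task: str, transcript: str) -> str:
    """Prepend the task and safeguards to the (anonymized) transcript."""
    return (f"{task}\n\nFollow these safeguards:\n{SAFEGUARDS}"
            f"\n\nTranscript:\n{transcript}")

print(build_prompt("Summarize the main tensions in this conversation.",
                   "(anonymized transcript goes here)"))
```

Keeping the safeguards in one constant also means you update them in one place when your practice evolves.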
Checklist per session
Before the session
- Permission for recording prepared
- Decided: cloud or local processing
- Clarity on who has access to output
After the session
- Transcript/recording stored safely
- Anonymization where needed
- Participants informed about what happens with their input
With every prompt
- "Strictly based on transcript" instruction added?
- "When in doubt: possibly" instruction added?
- Language preservation instruction included?
- AI output clearly marked as AI?
- Privacy checked for the tool you're using?
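The per-prompt part of this checklist can be partly automated with a small pre-flight check that warns when a safeguard phrase is missing. The phrases below are illustrative; they must match whatever wording you actually use in your prompts.

```python
# Pre-flight check: report which safeguards a prompt is missing.
# REQUIRED_PHRASES must mirror the wording you actually use.

REQUIRED_PHRASES = {
    "transcript-only": "strictly on the transcript",
    "hedging": "possibly",
    "language preservation": "exact words",
    "mark AI output": "[AI observation]",
}

def missing_safeguards(prompt: str) -> list[str]:
    """Return the names of safeguards whose phrase is absent from the prompt."""
    return [name for name, phrase in REQUIRED_PHRASES.items()
            if phrase.lower() not in prompt.lower()]

prompt = "Summarize this, based strictly on the transcript. Use exact words."
print(missing_safeguards(prompt))
# → ['hedging', 'mark AI output']
```

A check like this catches forgetfulness, not judgment: it tells you a phrase is absent, not whether the prompt is appropriate for this conversation.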
Tool choices
| Tool | Type | Privacy level |
|---|---|---|
| MacWhisper | Local transcription | High: everything stays on your device |
| Dembrane | Cloud + real-time | Medium-high: built for facilitation, privacy-focused, European servers |
| Notion.ai | Cloud transcription | Lower: data on external servers |
| ChatGPT/Claude | Cloud AI | Lower: data may be used for training* |
| Local LLMs | Local AI | High: everything stays local |
*Check current terms; these change regularly.
When not to use AI
Some conversations don't belong in AI processing, not even locally:
- Crisis interventions: focus must be 100% on the person
- When confidentiality is absolute: some things belong only to the people in the room
- When you feel it doesn't fit: trust that feeling
AI is a tool. Not every situation calls for a tool.
The ethical layer
Respecting ownership
When you generate output based on someone else's words, that output isn't yours. It's a processed form of their input.
Implication: Don't just share it. Check whether people recognize themselves. Ask permission for use.
Transparency
People have the right to know:
- That they're being recorded
- What happens with the recording
- That AI is involved in the processing
- What happens with the output
People remain ultimately responsible
AI generates, you decide. The output is a suggestion, not a conclusion.
Implication: Review what AI creates. Check whether it's accurate. Take responsibility for what you share.
The limits of AI
AI is capable but has limits that are relevant for safe practices:
AI can sound confident but be wrong: A confident tone is no guarantee of truth. Verify everything that matters.
AI misses what wasn't captured in words: It reads what's written, not what was in the air: the look, the sigh, the atmosphere in the room.
AI often finds something; the question is whether it's meaningful: Not everything AI sees is a pattern. Check against your own experience.
AI doesn't automatically weigh ethics, and doesn't bear the consequences: You know the people, you're in relationship with them, you're the one looking them in the eye. That responsibility stays with you.
Summary
The core: Know where your data goes. Ask permission. Anonymize where needed. Use safeguards in your prompts. Mark AI output as AI. Stay responsible.
The attitude: AI as a capable tool, not as an authority. People remain ultimately responsible.
The check: When in doubt, ask yourself: "Would I be comfortable if the people in this conversation knew exactly what I'm doing with this?"