AI in Clinical Practice: Benefits, Risks, and the Ethics of Informed Consent
Where I'm Coming From
I want to be direct about something: most writing about AI in clinical practice is either breathlessly optimistic or reflexively cautious, and almost none of it is written by clinicians who are actually doing it.
I've been building and running AI systems in my own practice for several years. I run private, self-hosted large language models on local servers — meaning client data never touches a commercial cloud. I use voice transcription through a secure tunnel for session notes. I've built EHR systems for healthcare facilities. I use AI with select clients as a therapeutic tool — for homework support, journaling, communication skill-building, and structured reflection.
I'm not writing this from a theoretical position. These are tools I use every day, and the perspective here reflects both what's working and what I've had to think carefully about.
"The question isn't whether AI belongs in clinical practice. It's whether you're using it in a way that serves the client, protects their privacy, and is honest about what it is."
How I'm Actually Using AI in Practice
Self-hosted LLMs for clinical work
For treatment planning, family court preparation, and documentation, I run open-source large language models — including Llama — on private servers in my own infrastructure. Nothing leaves my network. This is meaningfully different from using ChatGPT or Claude's consumer interface, where data passes through third-party servers. Self-hosting requires more technical setup, but it's the only configuration I'm comfortable using for anything involving client information.
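To make "nothing leaves my network" concrete, here is a minimal sketch of what a local-only model call can look like. It assumes an Ollama server hosting a Llama model on the same machine; the tooling, model name, and prompt are illustrative, not a description of my exact stack.

```python
import requests

# All traffic stays on this machine: the model is served at localhost,
# so nothing in the prompt ever crosses the network boundary.
OLLAMA_URL = "http://localhost:11434/api/generate"

def draft_locally(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the self-hosted model and return its response."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Illustrative use: drafting a generic treatment-plan template.
print(draft_locally(
    "Outline a treatment plan template with goals, objectives, "
    "and interventions sections."
))
```

The design point is the URL: if the endpoint is anything other than an address you control, the privacy argument changes entirely.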
Voice transcription for session notes
I use AI-powered voice transcription daily, routed through a Cloudflare tunnel to my local servers. This allows me to dictate session notes immediately after a session while the material is fresh, and have a clean draft within minutes rather than spending evenings on documentation. The transcription never leaves my controlled environment. The time this returns to direct client care is significant.
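For the transcription side, the core of a local speech-to-text pass can be as small as the sketch below. It assumes the open-source Whisper model installed locally (with ffmpeg available); the audio file name is hypothetical, and my production pipeline adds the tunnel routing described above.

```python
import whisper  # pip install openai-whisper; inference runs on local hardware

# Load a local Whisper model once at startup. Larger models trade speed
# for accuracy; "base" is a reasonable starting point for dictation.
model = whisper.load_model("base")

# "dictation.wav" is a hypothetical file: a post-session voice memo
# recorded on a device inside the same controlled network.
result = model.transcribe("dictation.wav")

# The raw text becomes the draft note the clinician reviews and edits.
print(result["text"])
```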
Family court preparation
Court-involved cases — reunification, co-parenting disputes, custody evaluations — generate substantial documentation requirements. I use AI to help structure reports, identify gaps in clinical narratives, and ensure that documentation is organized, thorough, and defensible. A clinician still authors and reviews every word. The AI is a drafting and organizational tool, not a decision-maker.
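As an illustration of what "drafting and organizational tool" means here (these are not my actual prompts, and the section headings are examples only), a report-structuring request to the local model might look like this sketch:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

# Illustrative only: clinician-authored notes go in, a structural draft
# comes out. The clinician reviews and rewrites every word before use.
REPORT_PROMPT = """You are helping organize a clinical report.
Arrange the following clinician notes under these headings:
Referral Question, Sessions Summary, Observations, Recommendations.
Flag any heading with no supporting notes as a gap.

Notes:
{notes}
"""

def structure_report(notes: str, model: str = "llama3") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model,
              "prompt": REPORT_PROMPT.format(notes=notes),
              "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(structure_report("- 6 sessions completed\n- parent engagement improved"))
```

The gap-flagging instruction is the useful part: the model organizes and surfaces holes in the narrative, while the clinician supplies and verifies all clinical content.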
Client-facing therapeutic tools
With select clients and with explicit informed consent, I've integrated AI into the therapeutic work itself. This includes homework support between sessions, the Insight Journal, structured communication exercises for couples, the Insight Game personality framework, and reflection prompts. These are adjunct tools — they supplement the therapeutic relationship, they don't replace it.
The Real Benefits
The efficiency gains are real and they matter clinically — not just operationally. Time returned from documentation is time available for clients. But the benefits go further than that.
For the clinical work:
- More thorough, consistent documentation
- Faster court and treatment planning reports
- Homework support between sessions
- Extended reflection tools for clients
- Structured skill practice (communication, regulation)
- Accessible support outside session hours

For the practice:
- Significant reduction in documentation time
- More consistent treatment records
- Better audit and court-readiness
- Scalable structured group materials
- Reduced cognitive load after difficult sessions
- More time with clients, less with paperwork
For family court work specifically, the ability to produce well-organized, clearly written reports under time pressure — while maintaining clinical accuracy — has been meaningful. Courts respond to documentation that is structured and readable. AI helps with the structure. The clinical substance remains entirely mine.
The Real Risks
I take these seriously; the data and privacy risks below are precisely why I made the infrastructure choices I did. The risks in this space are not abstract.

Data and privacy risks:
- Commercial AI tools may train on input data
- Cloud-based processing creates exposure points
- Unclear data retention policies
- HIPAA doesn't map cleanly onto most AI tools
- Breach liability if using unsecured tools

Clinical risks:
- AI cannot assess risk or read affect
- Confident-sounding errors in documentation
- Client over-reliance on AI support tools
- Erosion of the therapeutic relationship
- Bias embedded in models and outputs
- Scope creep beyond intended use
Of the clinical risks, the one I watch most closely is client-facing AI tools becoming a substitute for human contact rather than a supplement to it. An AI journaling tool that helps a client process between sessions is useful. A client who stops engaging in session because they feel "heard" by an AI is a clinical problem. This requires ongoing attention and clear boundaries in how these tools are framed.
Informed Consent: What Clients Need to Know
If you are using AI in any capacity that involves client information — or offering AI tools to clients as part of treatment — informed consent is not optional. It is an ethical and legal obligation, and current professional guidelines are still catching up to the reality of how these tools are being used.
My own consent process addresses AI explicitly and covers the following:
- Which AI tools are part of the work, and for what purpose
- Where data is processed and stored: on self-hosted infrastructure I control, not commercial cloud services
- What the AI does and does not do, including its limits
- That client-facing AI tools are optional and can be declined
Clients have a right to know when AI is part of their care. Most, in my experience, respond well when it's explained honestly — what it does, what it doesn't do, and why the choice was made. The conversation itself is often clinically useful. Clients who are anxious about AI, or curious about it, or who have strong feelings about it in either direction — those reactions are worth exploring.
"Consent isn't a form. It's a conversation. And with AI, it's a conversation worth having carefully."
AI as a Client-Facing Therapeutic Tool
This is where I think the most interesting clinical territory is, and also where the most caution is warranted.
I use AI with select clients — not all clients, and not early in treatment — as structured homework and reflection tools. The Insight Journal uses guided prompts to help clients track patterns in thinking, emotion, and behavior between sessions. Communication skill tools for couples allow partners to practice responses in a lower-stakes environment before bringing them into session. Personality and attachment frameworks are made interactive in ways that generate material for the therapy itself.
What makes these work clinically is that they feed back into the therapeutic relationship rather than operating parallel to it. A client's journal entries become source material for session. A couple's practice conversation generates something concrete to work with together. The AI isn't doing the therapy. It's creating the conditions for better therapy.
What I watch for — and what I think clinicians using these tools need to be honest about — is when the AI becomes the relationship. Clients who are isolated, who struggle with human connection, who have attachment wounds — for them, the accessibility and consistency of an AI tool can fill a need that the therapy is meant to address. That's a clinical problem. It requires ongoing assessment and frank conversation.
If You're Considering This
A few things I'd tell any clinician thinking about integrating AI into their practice:
- Start with your infrastructure, not your tools. Before you use any AI system for clinical work, understand where the data goes. If you can't answer that question clearly, don't use it with clients.
- Self-hosting is not as hard as it sounds, but it requires commitment. Running local LLMs on private servers is technically feasible for a solo practice or small group, and it's the only configuration I trust for clinical documentation. It requires setup, maintenance, and some technical comfort (see the sanity-check sketch after this list).
- Update your informed consent. Seriously. Today. If you're using any AI tool — even for your own notes — your consent documentation should address it.
- Don't use AI with clients who aren't clinically ready for it. Early treatment, active crisis, fragile attachment, significant technology anxiety — these are contraindications. AI tools as homework work best with clients who have enough stability to use them as intended.
- Keep the relationship central. Every AI tool I use is designed to generate material for the human therapeutic encounter — not to replace it. That's the line I don't cross.
- The field is moving fast and the guidelines are behind. NASW, APA, and AAMFT are all developing guidance, but none of it has caught up to current practice. You'll need to make thoughtful clinical and ethical decisions in advance of formal standards. Document your reasoning.
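On the self-hosting point above, one check is worth scripting before anything else: confirm the model server actually answers on localhost before any clinical workflow touches it. A minimal sketch, again assuming an Ollama server (the endpoint and port are that tool's defaults, not a universal standard):

```python
import requests

# Hypothetical pre-flight check: verify the LLM endpoint is local and
# alive before any clinical workflow is allowed to use it.
LOCAL_ENDPOINT = "http://localhost:11434/api/tags"  # Ollama's model-list route

def verify_local_llm() -> bool:
    """Return True if a self-hosted model server answers on localhost."""
    try:
        resp = requests.get(LOCAL_ENDPOINT, timeout=5)
        resp.raise_for_status()
    except requests.RequestException:
        return False
    models = [m["name"] for m in resp.json().get("models", [])]
    print(f"Local models available: {models or 'none installed'}")
    return bool(models)

if __name__ == "__main__":
    assert verify_local_llm(), "No local model server: do not proceed with client data."
```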
Consulting on AI integration for your practice or organization
I work with clinical teams, group practices, and healthcare organizations on AI implementation — infrastructure, consent frameworks, client-facing tools, and workflow design. If you're navigating this and want a clinician's perspective alongside the technical one, reach out.