Stanford University

Key considerations for incorporating conversational AI in psychotherapy

Miner A.S., Shah N., Bullock K., Arnow B., Bailenson J.N., Hancock J. (2019). Key considerations for incorporating conversational AI in psychotherapy. Frontiers in Psychiatry, doi: 10.3389/fpsyt.2019.00746.

Abstract

Conversational artificial intelligence (AI) is changing the way mental health care is delivered. By gathering diagnostic information, facilitating treatment, and reviewing clinician behavior, conversational AI is poised to impact traditional approaches to delivering psychotherapy. While this transition is not disconnected from existing professional services, specific formulations of clinician-AI collaboration and migration paths between forms remain vague. In this viewpoint, we introduce four approaches to AI-human integration in mental health service delivery. To inform future research and policy, we examine these four approaches along four dimensions of impact: access to care, quality of care, the clinician-patient relationship, and patient self-disclosure and sharing. Although many research questions remain to be investigated, we view safety, trust, and oversight as crucial first steps. If conversational AI isn't safe, it should not be used, and if it isn't trusted, it won't be. Assessing safety, trust, interfaces, procedures, and system-level workflows will require oversight of, and collaboration among, AI systems, patients, clinicians, and administrators.
