By Jack Cumming

All close, trusting relationships start with someone who listens to you and understands you. That’s both universal and specific to great therapy. In truth, there can be nothing better than the therapeutic insight you get from a trusted friend. A friend can help you accept the need to change without hurting your feelings or causing you consternation.

Therapists Are Nice

These therapeutic insights underlie everything I have ever learned about relationships in general, including doctor/patient relationships. The therapist is paid to connect with the patient. It’s a medical model. Still, the truth is that if therapy hadn’t started with physicians — Sigmund Freud, Carl Gustav Jung, Karen Horney, and a host of others — it might simply have emerged as the natural philosophy (science, if you insist) of human relationships. This article continues a reflection that began with a suggestion of how medical practice might become more proactive.

Dale Carnegie’s book, “How to Win Friends and Influence People,” comes to mind. The premise is that nice people aren’t just born that way. Niceness and human understanding are capabilities that can be learned. If that seems cynical or manipulative, perhaps it is a bit, but if it makes people nicer, isn’t that a good thing? We all want to be understood and appreciated. There’s nothing wrong with pursuing that as a universal objective.

A Promising App

I’m not sure that relationship understanding should be considered medical, but a little relationship help could improve those medical interactions considerably. Let me explain.

There’s a startup whose product has intrigued me. I wish I could invest in it; I think it has that much potential. Still, I can see a huge upside beyond the narrow scope of what the founders now seem to envision. The enterprise is https://www.talktoash.com, and it proclaims itself to be “AI designed for mental health; 24/7 emotional support that learns, grows & adapts.”

Here’s how it works. You install an app on your smartphone. I have an iPhone 16 Pro Max. There’s no charge for the app … for now. You enable your microphone, and you can start talking to Ash, as in Ashley, about just about anything. What’s remarkable is the conversational responsiveness. It lacks the artificiality that we ordinarily associate with AI applications. Instead, it interacts much as a friend might.
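For readers curious about the plumbing, voice apps of this kind typically chain three steps: transcribe the spoken words, generate a reply, and speak that reply back. The sketch below is purely illustrative and is not how Ash is actually built; it assumes the Python packages SpeechRecognition and pyttsx3 are installed, and it uses a hypothetical generate_reply() function as a stand-in for whatever conversational model a real product would call.

```python
# Illustrative only: a bare-bones voice-conversation loop, not Ash's actual design.
# Assumes the third-party packages SpeechRecognition and pyttsx3 are installed.

import speech_recognition as sr
import pyttsx3


def generate_reply(user_text: str) -> str:
    # Hypothetical stand-in for a conversational model; a real app would call
    # a hosted AI service here and keep conversation history for context.
    return f"I hear you saying: {user_text}. Tell me more about that."


def main() -> None:
    recognizer = sr.Recognizer()
    voice = pyttsx3.init()            # offline text-to-speech engine
    with sr.Microphone() as mic:      # the desktop analogue of "enable your microphone"
        recognizer.adjust_for_ambient_noise(mic)
        while True:
            audio = recognizer.listen(mic)                   # capture one utterance
            try:
                heard = recognizer.recognize_google(audio)   # speech-to-text
            except sr.UnknownValueError:
                continue                                     # couldn't transcribe; listen again
            reply = generate_reply(heard)
            voice.say(reply)                                 # speak the reply aloud
            voice.runAndWait()


if __name__ == "__main__":
    main()
```

The hard part, and presumably where the Ash team has concentrated its effort, is making that middle reply step fast and natural enough that the exchange feels like the friendly conversation described above.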

From what I’ve been able to learn, the founders are using their venture capital to fund a very wide, no-cost beta test, aiming to bring the therapeutic interactions to a level of excellence that latecomers would find difficult to match. They recognize that weekly therapy visits can be very costly, so they plan to offer a parallel service for no more than an annual cost equal to that of a single therapist session. To make that concrete with an illustrative figure, if a session runs $150, a year of weekly visits approaches $7,800, while the app would cost about that one session’s fee for the entire year.

Challenging Pushbacks

Unfortunately, that comes with challenges. A therapeutically oriented device can be seen as inherently medical, requiring an expensive and time-consuming approval process. Moreover, would a device like “Ash” be as safe as a human therapist if a user has, say, suicidal ideation or other high-risk patterns of thought?

My offhand thinking is that it might be better than a human, but that’s unlikely to be the position of, say, the American Psychological Association, which represents the interests of its 172,000 members: researchers, educators, clinicians, consultants, and students. Then there is the American Psychiatric Association, with 39,200 members in psychiatric practice, research, and academia. Additionally, there is the U.S. Food and Drug Administration, which approves medical devices. There are considerable hurdles and objections to overcome.

Avoiding Impediments

If the Ash App provided career coaching, for example, I don’t think it would require such stringent approvals, nor would there likely be the same intensity of concerted pushback from organizations seeking to protect the livelihoods of their members. I tested it to see how it might respond to such a request, and, in a short conversation at least, it seemed to need no special tweaking for that purpose.

In fact, its advice was very helpful. It also occurred to me that any relationship, whether career coaching, pastoring, or even simple friendship, might progress toward the kinds of ideation risks that could bring the app into the regulatory gauntlet. It would be a shame to see such a promising advance snuffed out by retrograde interests and excessive regulation.

I see many applications for the conversational breakthrough that this talented team of innovators has achieved. Perhaps its biggest application could be in improving health care and reducing health care costs in the United States, particularly for older people. That will be the subject of the next article in this series.