TechArena Podcast Transcript: Lyssn
TechArena Podcast with Lyssn co-founder Zac Imel
Allyson
Welcome to the Tech Arena. My name is Allyson Klein and today I'm delighted to be joined by Zac Imel of Lyssn. Zac, welcome to the program.
Zac
Hi. Thanks for having me.
Allyson
So, Zac, I had you on today to talk about how technology is intersecting with the mental health arena. Why don't you give me a little bit of background on yourself and on Lyssn, the company that you helped found a few years ago.
Zac
Absolutely. Sure. In terms of my background, I'm a psychologist by training. I have a PhD in counseling psychology, worked as a therapist in the VA for just a little bit, and then have been a professor at the University of Utah and director of clinical training for maybe ten or eleven years. And really, ever since I got started in psychology, I've had an interest in data science. What we call data science now used to be statistics, so I've always had a bit of an analytical mind. The way I got interested in psychology was asking, how can we apply whatever the current, most advanced statistical methods are to the questions that are really interesting to people in psychology? And for me, my primary interest was psychotherapy research: how and why does psychotherapy, the conversation that a therapist has with their patient, work?
Allyson
I was fascinated when I learned the story of Lyssn and what you have put together. Most people think about psychotherapy as something that is distinctly not technical: a conversation over time between two people, something where technology really doesn't enter the arena. What was the impetus for looking at how technology could apply? And what areas of treatment were you looking at where you could improve, or help practitioners improve, the way they were treating mental health?
Zac
Yeah, for a long time, they would have been right. Psychotherapy is kind of the ultimate low technology, right? If you think of psychotherapy formally as a conversation with a goal, in order to help change a behavior or change someone's emotions, we've been doing some form of psychotherapy probably forever. I mean, we didn't call it that and didn't have a formal profession around it. When I first started training in psychotherapy, if you asked about technology, we had probably just started having CD-ROMs. You could get, like, cognitive behavioral therapy where you could click on some buttons and read some stuff on a website. Or a trainee might have a bug in their ear while they're in a room talking to a client, with a supervisor behind a one-way mirror listening in, and if the trainee gets stuck, the supervisor is kind of like, say this, right? Or do that, or try this, that sort of thing. So that was technology, really, for quite a long time. The primary thing we identified, and this goes back to the first work we started doing with my colleague Dave Atkins and a bunch of folks in computer science and electrical engineering, is that the way we evaluate quality in psychotherapy is for a human being to listen to the conversation. And that's great. It's a nice way to do it. We've been doing it for a long time. We have measures for that, and they're reliable. Some people find this surprising, though in our field you wouldn't: there are reliable measures for how empathic a therapist is. So I can train someone to listen to a conversation, and we can score the person who's doing the interview, someone like you, for example, on how well they are trying to actively understand the person who is talking to them. Are they asking for clarifications, asking open questions, being supportive, or are they barely paying attention? Certainly at the extremes, we can capture that really well. It just takes a long time. And so that's never used outside of really well-funded clinical research, funded by NIH, where you do it to ensure the internal validity of your interventions and make sure people are doing what you hope they're doing.
So when you do a clinical trial, you know, oh, well, this worked, because people did the intervention we thought they were doing. Psychotherapy is really different from pharmacotherapy, drug therapy, where you give someone a pill and the factory has hopefully ensured that the thing that's supposed to be in the pill is in it. Psychotherapy is a lot more like teaching, right? You synthesize the specific ingredients of psychotherapy in the moment with the person; they aren't there until you have the talk. And so the question is, how can we evaluate that in a way that is scalable at all, and reliable?
This was right around when speech and language technologies were starting to move into a new domain. We didn't have transformer models yet, or some of the stuff that has come out in the last few years in machine learning, but speech recognition had gotten a lot better. And so we had the idea: what if we got some of our engineering colleagues, who were working on these kind of standard language corpora that were pretty well mined, interested in some of the problems we had in mental health care? Like, we're having these really high-stakes conversations, we don't have any way to analyze them at scale, and we're having millions of them. Could we try to train some models that could at least start to replicate human evaluation of those things?
Allyson
So I believe the beginning was looking at ways that you could use this to train psychotherapists and give them feedback on their sessions. Is that correct?
Zac
Yes, it's a little of both. In some ways, training and ongoing quality are somewhat linked. I would say the first stuff we did was all proof of concept: if I had 500 recordings from substance use counselors all over the Pacific Northwest and California, and I had human ratings of those, could we replicate those human ratings? A lot of our early papers were just that, basic proof of concept. Can we do this? And there was a lot of skepticism that we could, especially in the clinical domain, less so, I think, in the computer science domain. After that, the first clinical application was more, okay, we have these models that we think kind of work. We build a platform and take it to folks who don't know much about a particular intervention, maybe they've had an initial training, and they can record a session and get immediate feedback on what they're doing. Typically in a training, and this isn't the best-case scenario, but it's the more typical scenario, you would record a little role play with a fake patient, maybe another colleague, and you'd get a tape or a flash drive and, like, mail it to someone. Maybe, if you were really advanced, you'd put it on Dropbox. And then you'd get feedback on that in a month, or in a couple of months. In the first study we did on this, we gave it to them in minutes. There's a lot of learning theory on why that's much better, right? Feedback should be proximal to the behavior you're trying to shape.
And so we were able to vastly speed up that process and start to give people feedback much more quickly.
Allyson
And from that, you've developed an entire suite of tools in this space for practitioners. Can you talk about the full suite and how it's evolved over time?
Zac
Yeah. At some point, we launched a company out of our university-based research, which you mentioned, and we've built really three lines of products. One of them, which we call always-on quality monitoring, is for digital mental health care or other spaces where we can just be on in the background. Of course, people are aware of and consent to these things, but we can take in the conversations you're having and process them pretty much automatically, running all the time. So you don't have to have people doing the ratings, and usually these are well-trained people whose time you'd rather actually spend seeing patients. We have a dashboard where you can log in and look at your metrics. As an individual practitioner, you can log in and see what happened in your session right away, how your sessions went, and get some summative feedback across not just one session, so it's not just one example of how you did, but across 10, 20, 50 sessions. Then your supervisor, or administrators, can do the same thing across hundreds or thousands of sessions. We've also just now started to release some documentation assistance tools.
One of the big things that providers rightly complain about, and it's probably one of the reasons I'm not a therapist anymore, is that documentation is no fun. Right? You get into the field as a person who is interested in people, and a lot of what you end up doing is treating the computer: writing notes and clicking boxes and all that sort of stuff. We've started to build some things where we can capture the content of the conversation and use some of our models to summarize it automatically, in the style of a clinical note, so that it at least gets drafted for you. You can look at it, edit it, and speed up some of that process; maybe it jogs your memory of the session. Then we have primary training tools, where we take people who have had no background in an intervention at all, give them a basic introduction to cognitive behavioral therapy, for example, which is a well-known intervention in our field, and teach them a little bit about it. And then, instead of just having you stare at slides, which is what you mostly do in online trainings, you just start practicing. We give you a little vignette of a case, you respond to it, and we can score it using our machine learning models right away, give you that feedback, and use it to shape your behavior a little bit. You can do things repeatedly and show some skill growth pretty quickly. We have a couple of different products along those lines as well.
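For readers who want a feel for the note-drafting idea, here is a minimal sketch using an off-the-shelf summarization model from the Hugging Face transformers library. The transcript, model choice, and output handling are all illustrative assumptions; Lyssn's production models and note formats are not public.

```python
# A toy sketch of drafting a note from session dialogue, using a generic
# summarizer rather than Lyssn's proprietary models. Transcript is invented.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = (
    "Therapist: Last week we talked about tracking your anxious thoughts. "
    "Client: I filled out the thought record twice; meetings at work were "
    "the biggest trigger. Therapist: Let's look at what you wrote down."
)

draft = summarizer(transcript, max_length=60, min_length=20)[0]["summary_text"]
print(draft)  # a starting point for the clinician to review and edit
```

The design point Zac makes holds regardless of model: the system only drafts, and the clinician reviews and edits before anything is filed.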
Allyson
Take us under the covers a little bit on the underlying AI that you're utilizing to drive this. I know you have something like 50, I don't remember if it was 56 or 57, parameters that you're scoring against. Can you talk a little bit about that and how you chose those?
Zac
Yeah, so I think what you're referencing is the different analytics we generate. The last count, and it's probably more than this now, was 54 different unique analytics that are broadly related to behaviors the therapist is using or things that were discussed during the session. To me, there are broad classes of approaches I've seen in the field, and in digital mental health now, where you use a machine learning model to pick out words that you think are associated with a particular intervention. One example of that might be homework. In cognitive behavioral therapy, a big part of the intervention is that you're supposed to do homework; the therapist assigns stuff for you to do once you go home, and that's an important part of extending the treatment hour into the client's life. A very, very simplistic version of detecting that would be just to ask, did the therapist say the word homework? That's more of an n-gram type of model, where we're just looking at specific keywords. That's fine, probably better than nothing, but it doesn't map very well onto the human-rated, expert-defined category of, did the therapist actually do a good job talking about this particular thing with the client?
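To make that contrast concrete, here is a toy sketch of the naive keyword approach Zac describes, with invented utterances. A classifier trained on human-coded sessions would instead score the whole exchange; this check illustrates why keywords alone fall short.

```python
# A naive keyword ("n-gram") check: did the therapist say "homework"?
# Both the utterances and the function are invented for illustration.

def keyword_flag(utterance: str) -> bool:
    return "homework" in utterance.lower()

examples = [
    "Let's review the thought record you filled out this week.",  # real homework talk, missed
    "Sorry I'm late, my kid's homework ran long.",                # passing mention, flagged
]

for text in examples:
    print(keyword_flag(text), "-", text)
```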
So what we've done, and this goes back to the university-based research, is pick gold-standard measures from the field. There are existing measures in cognitive behavioral therapy, and in an intervention called motivational interviewing, a substance use disorder treatment, that we have metrics for. We pick those measures, and then we have a human coding team, all people with backgrounds in the field, licensed clinicians, things like that. They rate sessions. We assess internal interrater reliability, and we do it long enough that eventually the machine can replicate those human ratings at some level of agreement with the human raters. We assess percentage agreement with humans and typically target something like 80%, and for a lot of things we beat that and are pretty much indistinguishable from human raters. The analytics we chose are based on the most widely researched, evidence-based practices in the field. Motivational interviewing and cognitive behavioral therapy are the most standard, well-researched, evidence-based interventions in mental health care, and so we started there.
We thought we'd pick the ones that were most well studied and use those tools. So really, what we've iterated on is not so much the human ratings that go into the models as the models themselves. We started doing this work on the university side back in 2008, when, if your listeners know what these are, we were using things like topic models. Now we're using the most cutting-edge large transformer-based models, out of the GPT world, RoBERTa, and things like that: models that are pretrained on large corpora outside of our domain. But then we do extensive secondary training on our own internal data sets to make sure we're really tracking the things we want to be tracking.
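A minimal sketch of that pretrain-then-adapt pattern, using the Hugging Face transformers library with RoBERTa as the base model. The label set and dataset here are hypothetical stand-ins; Lyssn's internal data and training setup are not public.

```python
# Fine-tuning a pretrained transformer on human-coded therapy utterances.
# Labels and dataset are hypothetical; only the general pattern is real.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3)  # e.g. reflection / open question / other

def encode(batch):
    # Turn raw utterance text into the token IDs the model expects.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

# train_data would pair utterances with human codes, e.g. a datasets.Dataset
# with "text" and "label" columns (hypothetical here):
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments(output_dir="checkpoints"),
#     train_dataset=train_data.map(encode, batched=True),
# )
# trainer.train()
```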
Allyson
So you're further honing it for your unique scenarios and then driving a tremendous amount of inference against clinician data, is that correct?
Zac
That's right. Yeah. Our gold standard, for at least the things I was just talking about, is human perception of those things. To make it more concrete: in motivational interviewing, one of the things you want therapists to do is make listening statements. Literally what you just did to me, right? I said a long thing, and then you came back and said, well, so it's this, and I went, yeah, that's right. That little intervention is what we would call a reflection or restatement in interviewing skills, and it's a key therapist behavior. If you can't do that, you're going to really struggle to be a therapist.
We have humans who rate those things repeatedly, thousands and thousands and thousands of times. And then at some point we evaluate whether or not our machines can rate them just as well.
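That evaluation boils down to comparing model labels against human labels on the same utterances. Here is a toy sketch of the simplest version, percentage agreement, with invented labels; published work in this area also reports chance-corrected statistics such as Cohen's kappa.

```python
# Percentage agreement between human codes and model codes (toy data).
human = ["reflection", "question", "reflection", "other", "question"]
model = ["reflection", "question", "other",      "other", "question"]

agreement = sum(h == m for h, m in zip(human, model)) / len(human)
print(f"Agreement: {agreement:.0%}")  # 80% on this invented example
```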
Allyson
Fantastic. Now, if I were a therapist listening to this, I'd be like, first of all, fantastic, you're going to help me, and you're going to help me with my notes. All the therapists I talk to say that's the pain point of the profession. So that's awesome. But then there's a little voice inside my head wondering: Zac, is one of your ambitions to replace the human contact in therapy, and maybe replace it with machine-to-human interactions? Can you talk a little bit about that?
Zac
It's so funny. Well, I guess it's not funny; it makes sense that people wonder about that, given what's out there in the world right now. If I were a therapist, and I guess I sort of still am, I would not be worried about robots replacing me anytime soon. I mean, maybe for folks in grade school, I don't know. Things change; we've solved problems faster in some domains than we thought we would. And there's a lot of bot-based therapy out there now. I don't know how much you've played around with some of those, or talked to people who have. I think they're really cool, and I play with them all the time. They are nowhere near replacing a human conversation. I think some of that is that the reliability of the bot is almost inversely related to how interesting the AI inside it is. These new generative models are really cool, and they're almost creepily good at replicating human conversation and saying human-like things. But they can also say things that you don't expect; they can be unreliable and have all sorts of unintended behaviors. And so that's almost never what you're seeing in these bots that are out in the world, and I won't name any in particular. You're seeing stuff, for the most part, that was written by humans, with some algorithm in the background that might be picking up on, oh, they said the word anxiety, so I should say this versus that. But that's been prewritten, or it's pushing you into a particular worksheet that's already predesigned, something like that.
I think the future of mental health care, at least in the near term, is much more about augmenting therapists than replacing them. How can we make the job of the humans who are doing the work easier? How can we make them more effective at their work? Help them be less burned out? Support them when they're struggling because they're doing so much work? It's one thing to be empathic for an hour. It's another thing to be empathic for 30 hours a week, and then for 50 weeks. Right? How do you support someone who's doing that? To me, that's where the interesting technology is. It also seems like that's been the history of technology: it generally hasn't been, oh, we found this thing, humans are gone now.
Allyson
Right. Thinking about it, you're training algorithms based on human experience, and you're augmenting based on that. And where I see it going is, are there other tools that we can provide that perhaps address some of the high-level issues with people before they actually get into a therapist appointment? One of the things that I've been thinking about is just the lack of mental health professionals versus the demand in communities. And are there other things that we can do to help those folks that are struggling to find mental health care?
Zac
Yeah, I think what you're seeing is that a lot of places are adding either bots or apps as part of the broad menu of services they provide. I don't necessarily think it has to be an either/or. We certainly don't have enough therapists, and so for a lot of people the question isn't, wouldn't it be nice if they had a therapist? They don't have anything, and so can we give them something that's better than nothing? I think the answer is yes; there have been some good studies showing we can give people some kinds of brief interventions, particularly people who are lower in acuity or severity, who aren't actively at risk for high-risk behaviors, things like that. But for someone in my domain, whose background is studying the conversation that therapists have with patients, when I get asked that question, people are often asking, do you think a robot could replace that? And I just don't think we're anywhere close to that in a way that could be scalable.
What I do think could be interesting, and it's stuff we're working on, is, rather than just evaluating, was this session empathic, did they do the things they were supposed to, in particular cases, can we nudge people in different directions in the moment they're engaging with the client? So if we know we want people to be, in certain circumstances, more actively empathic, expressing how hard they are trying to understand the client, can we suggest things they might say before they say them? I think the answer there is that we could, especially if there's still a human in the loop who's making decisions: well, no, I don't want to say that; these are just options.
Allyson
Yeah. So basically you're looking at predictive prompts based on the analysis of what's being said. That's interesting. Now, I know the good news for Lyssn is that there's been an incredible uptick in interest and deployment of this technology, including statewide, right, in Utah. Can you tell me a little bit about where you've seen success and what's coming next as we head into the new year?
Zac
We have had a bunch of success. What we're really seeing is that over the pandemic you saw the beginning of what I've heard called Telehealth 1.0, which was basically asking, can we do some psychotherapy over digital media? And people are starting to think about, well, what's next in terms of augmenting this, beyond just making it easier for people to access care so they don't have to go sit in an office. There are starting to be enough calls in the field for, how do we scale quality, not just access? How do we make sure people aren't just getting anything, but getting something of decent quality? That has shown up in particular in large public provider contracts, where folks have to provide some evidence that they're delivering care at a certain reasonable standard. And once the legislative requirements are there, the burden on providers to meet them is pretty significant, because it's back to the stuff we were talking about before: humans rating this material when they could be doing other things, and when it's hard to hire people. I know from being a professor that when you ask certain people to do tasks repeatedly for hours, for years, they tend to have other opinions, right, and they eventually go do something else.
And so we've had some success signing contracts with state governments where we're helping them meet those requirements, not removing humans from the evaluative loop entirely, but massively augmenting them. We now have contracts with the state of Utah, the state of Wyoming, Washington, D.C., and, I think, a county in Minnesota, and a few others. We've also had good traction with traditional mental health clinics that are trying to scale up quality and figure out how to do better for their therapists, who are getting burned out but also want to provide better care. The training solutions we offer are much more compelling than what's typically out there.
If you're doing postgraduate education for your providers, which is almost always required, you've got to be doing something to maintain your license, and often what that looks like these days is some sort of online education in a particular intervention. There's just not a lot of skill building that can happen in those settings. We can really help people practice and get better at things, and a lot of organizations are pretty interested in that. In the new year, I would expect we're going to end up with five or six, if not more, state-level contracts pretty soon. We're also starting to reach out to other, larger community-based mental health clinics, those sorts of places as well.
Allyson
I just want to commend you. I think this is such an interesting use of artificial intelligence to make an actual, incredible impact on society. So I can't wait to hear more about what you do in the future. One final question for you, Zac, and thank you for your time today: where can folks go to find out more about what you and the Lyssn team are doing and engage with you?
Zac
Our website is lyssn.io. We have links to research papers, case studies, and kind of a blog talking about the different things we're doing. So if you want to reach out to us there, please do.
Allyson
Thanks so much for coming on the show today.