What to Pay Attention to When Assessing the Value of an NLP Model
During this webinar, industry experts discussed NLP in the Enterprise and, more specifically, what to pay attention to when trying to assess the value of an NLP model. We've included a short transcription of the webinar, beginning at the 32:49 mark.
J.T. Wolohan, Booz Allen Hamilton: What do you pay attention to when you’re trying to assess the value of an NLP model?
Wes Barlow, USAA: Well, there are quite a few. One of them is what we covered a little bit earlier: you're essentially automating something that could be done by a human, so there's a cost-benefit analysis you can run there. And depending on the expense of running the model, that's always involved on the cost side. So that's more of a traditional business financial approach to it.
When looking at the value, there's also the quality factor, and NLP is a tricky animal, at least in our world. For theme and sentiment we're looking at employee comments, and that can be a tricky thing when it comes to accuracy and precision. To jump back to my psych background for a bit: even human raters doing the detection, my graduate students, when they've done studies where they cross-validate their ratings, agree about 85% of the time. So humans trained in a technique are going to disagree on themes and sentiment about 15% of the time. You need to understand, when you go to these theme and sentiment detection models, that if you're working with nebulous topics, you're going to miss some. So, on a more practical level, we have a process where humans do some ongoing validation: they'll take a sample, go through it, and see if they agree.
To do that correctly, you need some diversity in who does it. It can't be one person, and especially not the person who built the model, which unfortunately is what happens sometimes, because then you're taking the biases that went into building the model and adding them again when you're doing your rating. There are also expectations to set around those accuracy and precision numbers, so the business understands them in context. If you're getting a validated accuracy of 60 percent at picking the correct theme, that's pretty awesome. But they need to understand that if that's how your algorithm for choosing whether or not to give somebody a loan was performing, that would not be awesome; that would be pretty bad.
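Wes's point about human raters agreeing roughly 85% of the time can be made concrete with a small sketch of how that validation step might be scored. The theme labels and the `percent_agreement`/`cohens_kappa` helpers below are illustrative assumptions, not anything from USAA's actual process; Cohen's kappa is a standard way to correct raw agreement for chance:

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of items where two raters assigned the same label."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Expected chance agreement from each rater's label frequencies
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical: two raters labeling the same 10 employee comments with themes
a = ["pay", "culture", "pay", "hours", "pay", "culture", "hours", "pay", "culture", "pay"]
b = ["pay", "culture", "pay", "pay",   "pay", "culture", "hours", "pay", "hours",   "pay"]
print(percent_agreement(a, b))        # 0.8
print(round(cohens_kappa(a, b), 3))   # 0.667
```

Raw agreement of 80% looks high, but kappa shows a chunk of it is expected by chance, which is why an 85% human agreement ceiling matters when judging a model's validated accuracy.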
J.T. Wolohan, Booz Allen Hamilton: Yeah, and Wes, I think you're alluding to something here that is one of the really interesting areas of NLP. One of the reasons I'm sure all of us are drawn to NLP is this possibility within natural language processing to do things at a scale that people can't: we can do complex cognitive tasks at a scale people can't match, and that offers organizations value propositions they traditionally can't put costs to. They could put a cost to it in terms of people-hours, but you would never even think about how many people-hours such human-intensive tasks would take, because in some instances you might be able to replace an office full of people with an algorithm. Johann, what are some examples of natural language processing that you've seen bring value by expanding the realm of possibility for an organization?
Johann Beukes, Levatas: Our focus working for clients is always to bring value quickly, because we're basically earning our next lunch if we do a good job. Over the past three years, we've actually patented a process on the computer vision side of things, and you can apply it to NLP as well, but we've focused on CV for now. What it addresses is that a model, or AI in general, doesn't really know when it's wrong. If you think about your kids: when they grow up and learn about something, they might come to you and ask when they're not sure, whereas an AI might have a confidence of maybe 40% but will still give you an answer. It doesn't really know when it's not right or not sure about something.
We have a process around that, and part of it is taking that haystack and, I think Wes was saying it, if you have so much data that you have to shrink it and get a human in the loop to help validate it, that's valuable. To add value we try, and this is one of my jobs at our company, to get our data scientists to talk about the implications of a model: if I have true positives, true negatives, false positives, things like that, what is the implication to the business if those values go up and down?
We try to talk about that rather than accuracy and precision. That's internal, how we measure the model's success, but it doesn't explain the value. The value comes from ROI; it comes from the business value the model delivers. And from a lot of what we've talked about so far, I think it's becoming clear that we're not replacing humans. What we're trying to do is automate, and we're shrinking that haystack to try and get people to be more productive, more intelligent, and to make smarter decisions overall.
Learn more and watch the full video on YouTube: https://youtu.be/CNH_yphj0P8.