March 09, 2021
Ahead of his session at the Online Collections Technology Think Tank 2.0, Rick Britt shares his thoughts on the impact of AI and ML on the future of collections.
Late February last year, I was in London at a conference. On the last day, Canary Wharf was shut down and ostensibly evacuated due to this new virus we had heard about, COVID-19. Fast forward a year: knowing what we know now, this seems silly, as if there were a big blob of virus roaming the streets looking for victims. It seemed that if we could just get away from that place, we could be safe. And where did people rush? To the tube, a bus, crowded streets?
I caught an earlier, very full train back to Winchester and was the only person wearing a mask. People stared at me as if I were the virus. In truth, about halfway into the journey I took it off. No virus in Hampshire, right?
Human history is full of surprises, yet we rarely see them coming: from global pandemics, which occur roughly every 30 to 35 years, to a single person facing serious vulnerability issues every day.
AI is all about data. For machine learning algorithms to work, they need lots of relevant data. In the CallMiner Research Lab, the theoretical AI research group at CallMiner, we spend more than half our time ensuring that the data we provide our models will lead to accurate results. When done correctly, the data can provide insights into the ‘why’ of the unexpected. The data can tell us things we did not know, even things hidden in plain sight.
But sadly, in most cases, AI can only predict a tomorrow that looks a lot like today. In the case of a pandemic, we didn’t have relevant data: too few pandemics have occurred within our data-gathering lifetimes to predict them clearly. Could we have predicted the pandemic with the right data? Scientifically, the answer is “probably, yes,” but to understand our hesitance, it is important to differentiate between predicting hidden events and predicting unknown events.
There is a difference between hidden and unknown – like the difference between a game of ‘hide and seek’ and a treasure hunt. Hidden entities are in the data; the task is to find them. Finding a hidden population within a larger one is a perfect game for AI to play.
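As a toy illustration of the ‘hide and seek’ idea, a simple weighted score over a few indicators can surface a hidden subpopulation in a table of records. The field names, weights and threshold below are invented for the example; they are not CallMiner’s actual features or model, which would learn such weights from data rather than hard-code them.

```python
# Hypothetical customer records: the vulnerable group already exists in the
# data; the task is only to surface it. All fields are illustrative.
records = [
    {"id": 1, "missed_payments": 0, "borrowed_more": False, "recent_life_event": False},
    {"id": 2, "missed_payments": 3, "borrowed_more": True,  "recent_life_event": False},
    {"id": 3, "missed_payments": 1, "borrowed_more": True,  "recent_life_event": True},
    {"id": 4, "missed_payments": 0, "borrowed_more": False, "recent_life_event": True},
]

def vulnerability_score(r):
    # Weight a few simple indicators; a real model would learn these weights.
    return (r["missed_payments"] * 1.0
            + (2.0 if r["borrowed_more"] else 0.0)
            + (1.5 if r["recent_life_event"] else 0.0))

# Flag anyone whose indicators cross an illustrative threshold.
flagged = [r["id"] for r in records if vulnerability_score(r) >= 2.0]
print(flagged)
```

The point is the framing, not the arithmetic: because the population is hidden rather than unknown, every signal needed to find it is already present in the records.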
An example of a hidden population is people who are in vulnerable situations. The FCA reported 27.7 million adults in the UK have characteristics of vulnerability, such as poor health, low financial resilience or recent negative life events. Further, UK unemployment is likely to reach 2.6 million by the middle of 2021, 4.1 million people are temporarily away from work, nearly 9 million people had to borrow more money because of the pandemic, and it’s possible nearly £5bn in loans may never be repaid.
Those are staggering numbers. So, how do people stay hidden?
Having worked with the collections industry for a number of years, I have learned that humans can quickly adapt to their current situation, and hide it from the outside world.
So if we can build models to identify people in vulnerable situations with mathematical precision, why can’t we solve the problem? The answer is: because we are human. There are four things that need to occur to help someone move to a less vulnerable situation.
The process of change is the business model of every collections firm in the world. Helping someone resolve a debt is a big dose of change, and it’s how firms go about enabling this change that determines the success of everyone involved.
To learn more about how organizations can identify, support and retain vulnerable customers, watch our on-demand webinar with Vanquis and British Gas.
There is a second, much more difficult problem for AI – predicting the unknown. Unknown entities are different from hidden ones because they may not exist in the data at all. Could we have predicted the current global pandemic? Can we predict the next one? We believe the answer is yes, but we also believe there is value in weighing human logic and intuition above the results of AI-based predictions of the unknown.
So why don’t we predict things with massive social impact? The answer is the same as above: we are human. Borders are not open, politics are real, herd mentality exists, and, worst of all, so does ignorance.
Recently, humans landed a rover on Mars, 35.4 million kilometers away, to within 4 meters of the target point. Talk about unknowns. It took an estimated 400 NASA staff and 300 additional scientists to get that far, and cost the U.S. about £1.8 billion in direct costs, without counting volunteer time. We can predict a pandemic, if we have the data, the time and the treasure. But we don’t have all of that.
The best bet is to prepare for an array of unknowns, and humans can be pretty good at this. We are better off learning from this event to prepare broadly for the next one, whatever it is. The same is true for collections firms: I predict they are about to receive a large batch of pandemic-related debt. Are we preparing?
Even if we have all the data, the talent and the financial resources to predict something, people have to believe it AND take action on it. Belief is the real issue any change agent must deal with, and one scientists face every day. There is a wonderful saying: “In the face of overwhelming fact, beliefs will win every time.”
There are debt collection firms who believe that simply asking for money as fast as possible, in strong language, is the way to succeed. Having worked with numerous collections companies, we can show the facts: the longer the call, the more you talk about anything other than money, and the more willing you are to have a real, open, compassionate conversation, the more successful a firm will be. And still, firms believe it is not true.
Dealing with and resolving the vulnerability issues a person faces will always be more successful than banging on doors demanding money. Belief vs. fact.
But it’s not too late for collections organizations to change. By using AI and ML to find incredibly small but vastly important signals in conversations, like customer vulnerability, collections firms can focus on fact instead of belief.
I’ll be discussing all this and more on Thursday March 18 at 1:05pm GMT as part of the Online Collections Technology Think Tank 2.0, alongside Steve Collender, Director of Customer Operations at RateSetter, and John Stories, Director of Strategy and Transformation at Arvato Financial Solutions.
Register today to join me.