Introduction to Responsible AI: Unpacking bias
When we are young, most of us are taught that being responsible is a choice. But what about the hard, or even impossible, choices? Some situations in life are exceptionally difficult, putting us at odds with social norms or with our own values and morality. These situations may fall deep in the grey zone, or feel like any choice is a lose-lose. We all face decisions where there is no good answer.
Artificial intelligence (AI) is no different. Should a self-driving car steer you into a tree to protect a puppy? Should the likelihood of recidivism as predicted by a model determine jail sentences? Should college entrance exam scores be predicted and used as truth when the exam cannot be taken due to a global pandemic?
While these choices may sound extreme, they are all very real questions that have come up in the AI world, and in global news media, in the last few years. As humans try to simplify their day-to-day lives with machinery, we often allow machines to tackle not only life’s easy questions, but also some of its hardest. In fact, there has already been an abundance of situations where a model predicts an outcome that reflects and perpetuates the many injustices that happen in our world.
Additionally, computer algorithms lack the ability to reason for themselves or consider the many cultural norms and societal contracts that govern our ways of behaving. Models in today’s world have a real, tangible, and sometimes life-changing impact on the lives of real people, and this brings to light an important new side of machine learning and AI.
Whether you call it Responsible AI, ethics in AI, AI misuse, detecting bias in AI, or something else, all of these terms reference the need to ensure that machine learning doesn’t add to or play into the injustices of our world. Many also use these terms to discuss ways in which machine learning and AI can help us counteract and fight these same problems.
At the CallMiner Research Lab, we don’t have all the answers to creating perfectly responsible AI systems, but we do understand exactly how important it is that we think about and actively work towards building tools that are inclusive and transparent. Understanding exactly what that means and how to achieve it is a process that involves learning, open conversation, and constant self-evaluation and change. Responsible AI is not a final state, but rather, a continuous cycle of detection, evaluation, and improvement.
Through our Responsible AI blog series, we will share the details of how the CallMiner Research Lab envisions and implements Responsible AI. Our approach is driven by one relatively simple idea: creating, implementing, and using AI in a responsible way is everyone’s responsibility.
Responsible AI is not simply a checklist that one researcher completes one time throughout the research process. It cannot even be delegated to a single team.
Responsible AI is a company-wide effort that succeeds through the diversity and dedication of those working to achieve it.
It is a process that must be evaluated by a diverse set of minds and backgrounds in order to account for a larger universe of perspectives and experiences. This process must also be repeated throughout the life of an AI-driven tool, from the time the idea is conceived, through its development and deployment, and consistently as users begin to explore and understand it.
Our framework for Responsible AI is built on five foundational ideas.
We hope that through our transparency with our Responsible AI efforts, we can bring increased awareness to how important it is to approach AI and machine learning from this perspective, and add to the discussions that will bring about industry-wide action in the coming years.