
25 examples of responsible AI: How to leverage AI while minimizing risk


The Team at CallMiner

November 30, 2023


In today’s tech-focused world, artificial intelligence (AI) helps with numerous everyday tasks, like narrowing a candidate pool for a job, drafting emails, or creating travel itineraries, and can even help optimize customer experience (CX).

While many organizations plan to invest in AI in the near future, concerns about the risks of AI are also prominent. In the 2023 CallMiner CX Landscape Report, the top three concerns noted by survey respondents include:

  • Exposing the company to security and/or compliance risks (45%)
  • Spreading misinformation (43%)
  • Giving biased, discriminatory, or inappropriate responses to customers (41%)

Responsible AI ensures AI systems and tools are used ethically, legally, and transparently for fair and accurate outcomes. These responsible AI examples prove that AI can be used positively with the proper practices in place.

Whitepaper
CallMiner CX Landscape Report
Discover how practitioners around the globe are using data, analysis, and AI to improve customer experience, based on this landmark survey from CallMiner.

What is responsible AI?

Responsible AI bridges the gap between quickly evolving AI technology and its legal and ethical implications. Generally, responsible AI seeks to use modern technology for useful, beneficial purposes, like improving patient care and preventing gender and racial bias. As AI applications continue to expand in virtually every industry, ensuring responsible adoption and implementation is more important than ever.

We gathered responsible AI examples from organizations and people using AI technology to illustrate how it can be used to drive the best outcomes for customers, employees, and businesses.

25 examples of responsible AI

1. CallMiner helps BPOs improve agent performance to prevent exploitation and improve service. “In the journey toward automated QA, BPOs should strive towards automating step by step — achieving quick wins, measuring success, and expanding from there. In doing so, BPOs can celebrate success and build enthusiasm around the QA program. In addition, teams can use the data from conversation intelligence to give agents support and prevent exploitation from abusive customers.

“Conversation intelligence can be a powerful tool to help BPOs baseline quality through scorecards, which measure compliance and process adherence. From there, supervisors can gain data-driven coaching insights specific to each agent, focused on improving performance and reinforcing positive behaviors. This type of direct feedback leads to measurable improvements.

“Agents can also use conversation intelligence systems to self-coach and self-improve, saving time and improving quality in the process. Real-time alerts can guide agents through complex situations, avoiding escalations and improving KPIs such as average handle times (AHTs).” - How BPOs can use AI to improve quality assurance, CallMiner; X/Twitter: @CallMiner
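
To make the KPI side of this concrete, here is a minimal sketch of how average handle time (AHT) might be computed per agent and how outlier calls might be flagged for coaching. The field names and threshold are hypothetical illustrations, not CallMiner's schema or product behavior.

```python
from statistics import mean

# Hypothetical call records; "handle_time_s" is total talk + hold + wrap-up in seconds.
calls = [
    {"agent": "A101", "handle_time_s": 340},
    {"agent": "A101", "handle_time_s": 610},
    {"agent": "A102", "handle_time_s": 280},
    {"agent": "A102", "handle_time_s": 905},
]

AHT_ALERT_THRESHOLD_S = 600  # illustrative cut-off for coaching review

# Group handle times by agent, then report AHT and calls over the threshold.
by_agent = {}
for call in calls:
    by_agent.setdefault(call["agent"], []).append(call["handle_time_s"])

for agent, times in by_agent.items():
    aht = mean(times)
    flagged = sum(t > AHT_ALERT_THRESHOLD_S for t in times)
    print(f"{agent}: AHT={aht:.0f}s, calls over threshold={flagged}")
```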

2. Google’s Project Respect fosters inclusivity for the LGBTIQ+ community. “We think everyone should be able to express themselves online, so we want to make conversations more inclusive. That’s why we created tools like Perspective, an API that uses machine learning to detect abuse and harassment online. Perspective scores comments based on their similarity to other comments that others have marked as toxic.

“However, sometimes the labels we use to describe ourselves and our loved ones can be used in a negative way to harass people online. And because machine learning models like the one used for Perspective are sensitive to the data sets on which they are trained, that means they might make the mistake of identifying sentences that use words like "gay," "lesbian," or "transgender" in positive ways as negative. (Within the ML community, we talk about this as insufficient diversity in the training data.)

“That’s why we created Project Respect. We’re creating an open dataset that collects diverse statements from the LGBTIQ+ community, such as ‘I'm gay and I'm proud to be out’ or ‘I’m a fit, happy lesbian that has just retired from a wonderful career’ to help reclaim positive identity labels. These statements from the LGBTIQ+ community and their supporters will be made available in an open dataset, which coders, developers and technologists all over the world can use to help teach machine learning models how the LGBTIQ+ community speak about ourselves. The hope is that by expanding the diversity of training data, these models will be able to better parse what’s actually toxic and what’s not.” - Ben Hutchinson, Fairness matters: Promoting pride and respect with AI, Google AI; X/Twitter: @Google
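
For readers curious what scoring a comment with Perspective looks like in practice, here is a minimal sketch of a request to the public Perspective API. The API key is a placeholder, and the exact request and response fields should be checked against Google's current documentation.

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "I'm gay and I'm proud to be out"},
    "requestedAttributes": {"TOXICITY": {}},
}

# The API returns a toxicity probability between 0 and 1; field names per the Perspective docs.
resp = requests.post(URL, json=payload, timeout=10)
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")
```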

3. Meta is improving its AI fairness tools by identifying gaps in race and ethnicity data. “Academic and policy researchers commonly use basic ZIP Code analysis to bluntly assess differences across race and ethnicity when true self-identified race or ethnicity data is not available for analysis. This method can be important to track aggregate trends for populations that live in ZIP Codes with high demographic concentrations. Demographic statistics are published annually from the U.S. Census’s 5-Year American Community Survey, aggregated by ZIP Code Tabulation Area (ZCTA*), and can be helpful to identify directional issues in datasets or algorithms that may correlate to patterns of segregation.

“…We turned to a modification of the standard imputation approach Bayesian Improved Surname Geocoding (BISG), a method for creating probability distributions of a person’s race given their last name and ZIP Code that leverages U.S. Census Bureau’s data on surnames and ZIP Codes classified by self-reported race/ethnicity. BISG is a widely used method in other industries to measure potential racial disparities, such as healthcare and financial services, designed specifically for circumstances when self-provided data is not available. U.S. Government agencies like the Consumer Financial Protection Bureau also use this method.

“Outputs of BISG calculations will be used as a dimension for fairness analyses. The primary use cases are in Meta’s Fairness Flow tools (which enables assessment of whether a machine learning model over- or under-estimates whatever the model is attempting to predict across different groups) as well as for measuring other differences that users in different BISG categories may experience. Both of these methods group users by the BISG category and calculate aggregate predictions or performance by group, not at the individual level.” - How Meta is working to assess fairness in relation to race in the U.S. across its products and systems, Meta; X/Twitter: @aiatmeta
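
At its core, BISG is a straightforward Bayes update: a surname-based race/ethnicity prior is combined with the demographic composition of the person's ZIP Code. The sketch below uses made-up probabilities purely to show the arithmetic; real implementations draw both tables from Census data and may differ in how they weight geography.

```python
# Illustrative BISG-style Bayes update:
#   P(race | surname, geo) ∝ P(race | surname) * P(geo | race)
# Here P(geo | race) is approximated by the ZIP Code's racial composition
# relative to national shares, as common BISG implementations do.

p_race_given_surname = {"white": 0.60, "black": 0.25, "hispanic": 0.10, "asian": 0.05}
zcta_composition     = {"white": 0.20, "black": 0.55, "hispanic": 0.15, "asian": 0.10}
national_share       = {"white": 0.60, "black": 0.13, "hispanic": 0.18, "asian": 0.06}

unnormalized = {
    race: p_race_given_surname[race] * zcta_composition[race] / national_share[race]
    for race in p_race_given_surname
}
total = sum(unnormalized.values())
posterior = {race: value / total for race, value in unnormalized.items()}

for race, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({race} | surname, ZIP) = {prob:.3f}")
```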

4. Microsoft helps schools and organizations filter inappropriate or harmful content. “Microsoft … announced the general availability of Azure AI Content Safety within the Azure AI platform, which uses advanced language and vision models to help detect hate, violence, sexual and self-harm content. When the models detect potentially harmful content, they mark it with an estimated severity score. That allows businesses and organizations to tailor the service to block or flag content based on their policies.” - How Azure AI Content Safety helps protect users from the classroom to the chatroom, Microsoft; X/Twitter: @Microsoft
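
The blocking and flagging described above comes down to comparing per-category severity scores against an organization's own policy thresholds. The sketch below shows that threshold logic in plain Python with invented category names and severities; the actual Azure AI Content Safety SDK calls and response shapes should be taken from Microsoft's documentation.

```python
# Hypothetical severity scores as a content-safety service might return them
# (0 = benign, higher = more severe). Category names are illustrative.
analysis = {"hate": 2, "violence": 0, "sexual": 0, "self_harm": 4}

# Per-category policy: flag for review at the lower threshold, block at the higher one.
policy = {
    "hate":      {"flag": 2, "block": 4},
    "violence":  {"flag": 2, "block": 4},
    "sexual":    {"flag": 1, "block": 3},
    "self_harm": {"flag": 1, "block": 2},
}

def apply_policy(scores: dict, policy: dict) -> str:
    decision = "allow"
    for category, severity in scores.items():
        if severity >= policy[category]["block"]:
            return "block"
        if severity >= policy[category]["flag"]:
            decision = "flag"
    return decision

print(apply_policy(analysis, policy))  # -> "block" (self_harm severity 4 >= block threshold 2)
```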

5. Researchers use AI to detect deadly land mines safely. “A warm wind blows across an empty field on the outskirts of Pawnee, Okla. A small group of researchers struggle against the stiff wind to set up a pop-up tent for some shade. Nearby a young man opens a heavy Pelican case to reveal a pile of explosives. ‘These are inert,’ he says, ‘but we’re lucky to be working at a range that has so many different kinds of munitions.’

“The range is an explosive-ordnance-disposal field laboratory maintained by Oklahoma State University, and the researchers are led by Jasper Baur and Gabriel Steinberg, co-founders of the Demining Research Community, a nonprofit organization bridging academic research and humanitarian demining efforts. They have been in Oklahoma for two weeks, setting up grids of mines and munitions to train a drone-based, machine-learning-powered detection system to find and identify dangerous explosives so humans don’t have to.

“There are many millions of active mines and munitions estimated to be scattered in dozens of countries. Baur says his and his colleagues’ goal is to make their drone-detection system available to demining organizations around the world to aid in efforts to make post-conflict countries safe.” - To Clear Deadly Land Mines, Science Turns to Drones and Machine Learning, Scientific American; X/Twitter: @sciam

6. Boon Logic refrains from human labeling to reduce bias. “‘AI bias’ is perhaps the greatest barrier to achieving responsible AI. One company that provides tools for developing machine learning solutions, Boon Logic, helps to solve the problems of biased data and lack of explainability using its proprietary Boon Nano algorithm. In the words of Grant Goris, the company’s CEO, the algorithm ‘starts with a blank slate and finds its own ‘truth.’’

“‘Given that so much bias is introduced by humans labeling the data, our approach is inherently much less likely to contain bias—unless the unlabeled training data itself contains bias,’ says Goris.

“With unsupervised machine learning algorithms, Boon’s system trains data that’s been collected, without human labeling, directly from sources such as industrial machines, cameras, or internet traffic counters, for example. In this way, according to Goris, data is organized in an unbiased fashion and ultimately presented to a human for optimal interpretation and analysis.” - Kolawole Samuel Adebayo, Executives from leading companies share how to achieve responsible AI, Fast Company; X/Twitter: @FastCompany

7. LinkedIn opts for AI assistance with plenty of customization. “With Profile Writing Suggestions, we are testing generative AI writing suggestions to help members enhance the ‘About’ and ‘Headline’ sections of their profiles. When members opt to use this tool, we leverage existing information on their profile, such as recent work experience, and use it together with generative AI tools to create profile content suggestions to save members time.

“Of course, customization is still important. That's why we encourage members to review and edit the suggested content before adding it to their profile to ensure it is accurate and aligns with their experience.” - Joaquin Quiñonero Candela, Our Responsible AI Principles in Practice, LinkedIn; X/Twitter: @LinkedInEng

8. CallMiner’s AI-powered data collection process automatically redacts sensitive information. “Many contact centers are required to keep audio and text-based communication data, but personal information is at risk due to lack of redaction or manual pause-and-resume recording failures. CallMiner Redact ensures you can meet security and compliance standards by automatically removing sensitive customer data.

“Using AI, Redact accurately identifies and removes sensitive data, covering over 50 out-of-the-box entities. Each redaction type is clearly labeled in the transcript, so users have visibility into which data was redacted.” - Omnichannel redaction for data repositories, CallMiner; X/Twitter: @CallMiner
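
CallMiner Redact itself is a managed product, but a toy version of entity redaction, sketched below, illustrates the idea: find spans that look like sensitive data and replace them with labeled placeholders so the transcript stays readable. The patterns here cover only two hypothetical entity types and are nowhere near production grade.

```python
import re

# Toy patterns for two sensitive entity types; real redaction engines cover many
# more entities and use ML models rather than regexes alone.
PATTERNS = {
    "CREDIT_CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "SSN":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(transcript: str) -> str:
    for label, pattern in PATTERNS.items():
        # Each redaction is labeled so reviewers can see what kind of data was removed.
        transcript = pattern.sub(f"[REDACTED:{label}]", transcript)
    return transcript

text = "Sure, my card number is 4111 1111 1111 1111 and my SSN is 123-45-6789."
print(redact(text))
```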

9. Diagnostic AI tools can positively impact patient care. “Artificial Intelligence (AI) has proven to be good at detecting patterns in data. In healthcare, it is useful for diagnostics and in preventive care.

“Skin Vision, for example, has developed an app to detect skin cancer at an early stage without having to visit a doctor. The user of the app takes a picture with a smartphone and answers a few questions. The app then performs a risk analysis on a piece of skin. If the algorithm assesses the risk of cancer as high, the user is notified of the next steps within 48 hours. A team of dermatologists is involved with the app.

“BedSense is an example of AI in preventive care. This AI application can prevent bedsores. BedSense consists of a sensor under a mattress that monitors a patient's lying behavior and a locker on the wall that sends out signals if a patient lies in the same position for too long.” - Responsible AI in healthcare: the value of examples, Rathenau Instituut; X/Twitter: @rathenaunl

10. Google improves transparency in machine learning models. “Explainable AI is a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models, natively integrated with a number of Google's products and services. With it, you can debug and improve model performance, and help others understand your models' behavior. You can also generate feature attributions for model predictions in AutoML Tables, BigQuery ML and Vertex AI, and visually investigate model behavior using the What-If Tool.” - Explainable AI, Google AI; X/Twitter: @Google
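
Feature attribution does not require Google's tooling to understand. The sketch below uses scikit-learn's permutation importance as a simple, model-agnostic stand-in for the kind of per-feature attributions described above; it illustrates the concept rather than the Vertex AI or AutoML APIs.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a small model, then ask which features its predictions depend on most.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop in score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda x: -x[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.4f}")
```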

11. Amazon SageMaker Clarify helps model developers detect and address biases in their models. “Amazon SageMaker Clarify provides machine learning (ML) developers with purpose-built tools to gain greater insights into their ML training data and models. SageMaker Clarify detects and measures potential bias using a variety of metrics so that ML developers can address potential bias and explain model predictions.

“SageMaker Clarify can detect potential bias during data preparation, after model training, and in your deployed model. For instance, you can check for bias related to age in your dataset or in your trained model and receive a detailed report that quantifies different types of potential bias.

“SageMaker Clarify also includes feature importance scores that help you explain how your model makes predictions and produces explainability reports in bulk or real time through online explainability. You can use these reports to support customer or internal presentations or to identify potential issues with your model.” - Amazon SageMaker Clarify, AWS; X/Twitter: @awscloud
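
One of the simplest bias metrics of the kind Clarify reports is the difference in positive prediction rates between groups, sometimes called the statistical parity difference. The sketch below computes it directly in Python on made-up predictions; it illustrates the metric itself, not the SageMaker Clarify API.

```python
import numpy as np

# Hypothetical model predictions (1 = approved) and a sensitive attribute
# (an age bucket), purely for illustration.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["under_40", "under_40", "under_40", "under_40", "under_40",
                  "over_40", "over_40", "over_40", "over_40", "over_40"])

rate_a = predictions[group == "under_40"].mean()  # positive-prediction rate, group A
rate_b = predictions[group == "over_40"].mean()   # positive-prediction rate, group B

print(f"P(approve | under_40) = {rate_a:.2f}")
print(f"P(approve | over_40)  = {rate_b:.2f}")
print(f"Statistical parity difference = {rate_a - rate_b:+.2f}")
print(f"Disparate impact ratio        = {rate_b / rate_a:.2f}")
```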

12. Wells Fargo reduces the risk of AI hallucination using data mapping and transparency. “Transparency is built into each stage of AI development. For example, knowing how and why AI makes a decision should be clear. Wells Fargo developers use open-sourced platforms with built-in decision-making transparency and data mapping. Because of that, Wells Fargo AI models can’t ‘hallucinate’ — a phenomenon typical of large chatbot language models where AI creates inaccurate, contradictory, or nonsensical information for seemingly no reason.

“‘You never let an AI neural network reach conclusions without knowing why,’ Wells Fargo’s chief information officer and head of Digital Technology & Innovation, Chintan Mehta, said. ‘The data we use is extremely explainable and has a lineage that we can track.’” - How Wells Fargo builds responsible artificial intelligence, Wells Fargo; X/Twitter: @WellsFargo

13. Mind Foundry built an explainable AI platform that largely considers human opinion and insight. “Mind Foundry partnered with the Scottish Government in building an explainable AI that aligns with Scotland’s AI strategy. The system allows people of varying technical experience to understand how and why AI impacts decision-making and further work within the system to make better, more understandable decisions.

“Mind Foundry developed a framework powered by its intelligent decision architecture which allows technical and non-technical users alike to work with the system in understanding how AI was used and impacted results. In addition, the system naturally enables data-driven decision-making in collaboration with AI.

“The transformative partnership potential of human-AI collaboration will allow experts in many fields including education, healthcare and planning to realize the significant power and value of AI automation while retaining control, oversight, and understanding.” - Case study: Responsible AI for government decision making, techUK; X/Twitter: @techUK

14. Microsoft developed a team game to gauge the ethics and fairness of its AI products. “To help cultivate empathy during its product creation process, Microsoft’s Ethics & Society team created the Judgment Call game. The game is an interactive team-based activity that puts Microsoft’s AI principles of fairness, privacy and security, reliability and safety, transparency, inclusiveness and accountability into action.

“During the game, each participant is given a card that assigns them a role as an impacted stakeholder of a digital product (e.g. product manager, engineer, consumer). Each is also given a card that represents one of Microsoft’s AI principles and a card with a number from 1 to 5, representing the stars in a ratings review. Participants are asked to write a review of the digital product from the perspective of their assigned role, principle and rating number.

“Each player is asked to share and discuss their review. The game has a number of benefits:

  • Engineers, product managers, designers and technology executives consider the perspectives of the impacted stakeholders and imagine the potential outcomes of their product on these stakeholders.
  • Although the game does not replace the valuable benefits of interacting directly with stakeholders, it builds empathy, especially early in the product design process.
  • Roles are arbitrarily assigned to participants due to the random distribution of the cards. The game’s dynamics create a safe environment for product team members to discuss potentially sensitive ethical topics.”

- Responsible Use of Technology: The Microsoft Case Study, World Economic Forum; X/Twitter: @wef

15. Credo helped a reinsurance company automate its compliance processes. “One of the leading global reinsurance providers started an internal AI risk and compliance assessment process to address growing regulatory concerns, and demonstrate it was effectively governing its AI and mitigating potentially harmful bias in its models. Their entire process was managed through Excel and was incredibly burdensome on technical development teams, requiring significant hours per report.

“With Credo AI’s Responsible AI (RAI) Platform and Lens, the company found a complete solution that met their needs. The reinsurance company worked with Credo AI to develop a set of custom Policy Packs that operationalized the company’s internal risk and compliance assessment policies within the Responsible AI Platform.

“Any AI use cases in development across the team are now registered for governance, and the governance team can manage and track progress through the risk and compliance assessment process from the Credo AI UI rather than updating them manually through various spreadsheets and documents.” - Standardizing & Streamlining Algorithmic Bias Assessment in the Insurance Industry, Credo AI; X/Twitter: @CredoAI

16. Google is helping AI understand skin tones and prevent biases. “Skin tone plays a key role in how we experience and are treated in the world, and even factors into how we interact with technologies. Studies show that products built using today’s artificial intelligence (AI) and machine learning (ML) technologies can perpetuate unfair biases and not work well for people with darker skin tones.

“Computer vision (CV) is a type of AI that allows computers to ‘see and understand’ images of people and environments, but when present-day systems aren’t designed with everyone in mind, they can fail to ‘see’ and ‘understand’ people with darker skin. Building more inclusive CV systems requires being intentional—from collecting representative datasets for training and evaluation, to developing the right evaluation metrics, to building features that work for all users.

“To improve CV systems’ understanding of skin tones and improve ML fairness evaluation, we’re open-sourcing the Monk Skin Tone (MST) Scale—an alternative scale that is more inclusive than the current tech-industry standard. Developed by Harvard professor, Dr. Ellis Monk, the MST Scale provides a broader spectrum of skin tones that can be leveraged to evaluate datasets and ML models for better representation.

“Google's Research Center for Responsible AI and Human-Centered Technology has partnered with Dr. Monk to openly release the MST Scale for the ML community. By openly releasing the scale to the broader industry, we hope others will incorporate the scale into their development processes and that we can collectively improve this area of AI.” - Improving skin tone evaluation in machine learning, Google AI; X/Twitter: @Google
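
In practice, a scale like MST is used as a grouping variable for disaggregated evaluation: compute the same model metric separately for each skin-tone group and compare. The sketch below shows that pattern; the ten MST categories are real, but the records here are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical evaluation records: each has a ground-truth label, a model
# prediction, and an annotated Monk Skin Tone (MST) category from 1 to 10.
records = [
    {"label": 1, "pred": 1, "mst": 2},
    {"label": 0, "pred": 0, "mst": 2},
    {"label": 1, "pred": 0, "mst": 9},
    {"label": 1, "pred": 1, "mst": 9},
    {"label": 0, "pred": 1, "mst": 9},
]

# Disaggregated accuracy: the same metric, reported per skin-tone group.
per_group = defaultdict(lambda: [0, 0])  # mst -> [correct, total]
for r in records:
    per_group[r["mst"]][0] += int(r["label"] == r["pred"])
    per_group[r["mst"]][1] += 1

for mst, (correct, total) in sorted(per_group.items()):
    print(f"MST {mst}: accuracy = {correct / total:.2f} ({total} examples)")
```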

17. H&M Group developed a working group to ensure its AI systems protect children’s best interests. “H&M Group’s Responsible AI Framework has been designed in alignment with their commitment to human rights and is being updated to focus on children’s rights in the context of AI. While the company is not developing AI products that target children specifically, the Responsible AI Team wanted to better understand and address potential indirect implications for children.

“Their policy is aligned with the European Union’s Ethics Guidelines for Trustworthy AI, which recommends identifying AI's potential effects on under-represented and vulnerable groups, including children and adolescents. The H&M Group has initiated a planned review of their Responsible AI Principles and tools and identified an opportunity to further promote two child-centric requirements that form part of the UNICEF Policy Guidance on AI for Children, namely: prioritizing fairness and non-discrimination, and providing transparency, explainability and accountability for children.

“To proactively identify potential implications for children in its policies, the Responsible AI Team has set up a working group with relevant stakeholders, including their departments on Human Rights and colleagues from relevant business units across the company. Through various workshops, this working group has helped identify possible scenarios where AI products interact with children and then analyze any potential unintended consequences.” - H&M Group, UNICEF; X/Twitter: @UNICEFUSA

18. AI budgeting tools like Cleo help people manage their finances. “Not all budgeting tools are made equally, but AI tools give you an advantage in developing and sticking to a plan. AI-powered budgeting apps go beyond ‘one-size-fits-all’ financial advice.

“They use those personalized insights we mentioned above to help you create a budget that fits your life. Using advanced algorithms and data from your financial past and future, apps like Cleo:

  • Gather and analyze your income and expense data
  • Categorize and visualize your cash flow
  • Suggest changes to your spending and saving routines
  • Provide automated savings tools
  • Help you rearrange, track, and control your spending – AKA, budgeting!

“By consistently spending less than you make, you’ll be able to pay off your debts and build credit over time. And since AI continually learns as your finances evolve, apps can keep you on track effortlessly.” - How to use AI to improve your credit score and tackle debt, Cleo; X/Twitter: @meetcleo
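
Under the hood, the first two bullets above boil down to tagging transactions with categories and summing them. The sketch below shows a minimal version of that categorize-and-summarize step with hypothetical merchant keywords; it illustrates the idea, not Cleo's implementation.

```python
from collections import defaultdict

# Hypothetical merchant-keyword rules; a real app would learn these per user.
CATEGORY_RULES = {
    "grocery":   ["market", "grocer"],
    "transport": ["metro", "fuel"],
    "dining":    ["cafe", "pizza"],
}

transactions = [
    {"merchant": "Corner Market", "amount": -54.20},
    {"merchant": "City Metro",    "amount": -2.75},
    {"merchant": "Luna Cafe",     "amount": -11.40},
    {"merchant": "Employer Inc",  "amount": 2400.00},
]

def categorize(transaction: dict) -> str:
    if transaction["amount"] > 0:
        return "income"
    name = transaction["merchant"].lower()
    for category, keywords in CATEGORY_RULES.items():
        if any(k in name for k in keywords):
            return category
    return "other"

# Summarize cash flow by category -- the "categorize and visualize" step.
totals = defaultdict(float)
for t in transactions:
    totals[categorize(t)] += t["amount"]

for category, total in totals.items():
    print(f"{category}: {total:+.2f}")
```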

19. CarMax and UVeye make used vehicle assessments more accurate and informative. “CarMax, the nation’s largest retailer of used cars and one of the largest wholesalers of used cars, announces it is partnering with UVeye on automated vehicle assessment technology through AI-enhanced condition reports for wholesale buyers of vehicles sold at auction. UVeye is a computer vision tech company that develops automated inspection systems for vehicles, powered by artificial intelligence and proprietary hardware.

“Since CarMax strategically invested in UVeye in 2021, both companies have been working together on innovative inspection solutions for the auction space. CarMax moved its auction sales online in 2020, and in a remote-first world, capturing quality imagery is critical to providing buyers with maximum information regarding each vehicle.

“CarMax has installed the technology in several wholesale locations and uses UVeye to scan the body, tires, and the undercarriage of vehicles to quickly produce an online user-friendly report with high-resolution photos. The system also has the capability to detect issues such as frame damage, missing parts, fluid leaks, brake and exhaust-system issues.” - CarMax Partners with AI Technology Company UVeye on Vehicle Assessment Technology for Wholesale Vehicles, CarMax; X/Twitter: @carmax

20. Conservation Metrics uses AI to monitor wildlife for conservation efforts. “When researchers collect audio recordings of birds, they are usually listening for the animals’ calls. But conservation biologist Marc Travers is interested in the noise produced when a bird collides with a power line. It sounds, he says, ‘very much like the laser sound from Star Wars.’

“In 2011, Travers wanted to know how many of these collisions were occurring on the Hawaiian island of Kauai. His team at the University of Hawaii’s Kauai Endangered Seabird Recovery Project in Hanapepe was concerned specifically about two species: Newell’s shearwaters (Puffinus newelli) and Hawaiian petrels (Pterodroma sandwichensis). To investigate, the team went to the recordings.

“With some 600 hours of audio collected — a full 25 days’ worth — counting the laser blasts manually was impractical. So, Travers sent the audio files (as well as metadata, such as times and locations) to Conservation Metrics, a firm in Santa Cruz, California, that uses artificial intelligence (AI) to assist wildlife monitoring. The company’s software was able to detect the collisions automatically and, over the next several years, Travers’ team increased its data harvest to about 75,000 hours per field season.

“Results suggested that bird deaths as a result of the animals striking power lines numbered in the high hundreds or low thousands, much higher than expected. ‘We know that immediate and large-scale action is required,’ Travers says.

“His team is working with the utility company to test whether shining lasers between power poles reduces collisions, and it seems to be effective. The researchers are also pushing the company to lower wires in high-risk locations and attach blinking LED devices to lines.” - AI empowers conservation biology, Nature; X/Twitter: @Nature
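
Automated detection of a distinctive sound in long recordings typically starts from something like the sketch below: frame the audio, compute per-frame energy, and flag frames that stand out, before a trained classifier confirms the hits. This numpy-only version is a crude stand-in for the kind of system Conservation Metrics runs, not a description of it.

```python
import numpy as np

SAMPLE_RATE = 16_000          # Hz, assumed recording rate
FRAME = SAMPLE_RATE // 10     # 100 ms analysis frames

# Synthetic "recording": quiet background noise with two loud transient events.
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(SAMPLE_RATE * 5)       # 5 seconds of noise
for start in (int(1.2 * SAMPLE_RATE), int(3.7 * SAMPLE_RATE)):
    audio[start:start + FRAME] += 0.5 * np.sin(np.linspace(0, 300 * np.pi, FRAME))

# Per-frame RMS energy; candidate events are frames far above the median level.
frames = audio[: len(audio) // FRAME * FRAME].reshape(-1, FRAME)
rms = np.sqrt((frames ** 2).mean(axis=1))
threshold = 10 * np.median(rms)

for i, energy in enumerate(rms):
    if energy > threshold:
        print(f"Candidate event at {i * FRAME / SAMPLE_RATE:.1f} s (RMS {energy:.3f})")
```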

21. Intenseye updates its lineup and agreements as AI-related privacy concerns increase. “Intenseye ‘provides AI solutions for preventing workplace accidents, using closed-circuit video as the system’s data source’—a scenario that might raise personal privacy concerns at first glance. But cofounder and CEO Sercan Esen says Intenseye recognizes the potential ethical risks associated with the unintended use and further development of its technology, which is why the company is committed to designing it in a way that cannot be repurposed for unjustified surveillance on frontline teams. Intenseye applies a responsible data and AI approach throughout the lifecycle of its various models, Esen says.

“‘We ensure that our system design, end goals, and treatment of individuals subject to the system are ethically justifiable, mitigating potential risks related to privacy and bias through design choices and user agreements. As AI becomes more advanced, it can be challenging for humans to understand how algorithms produce a given result, and that’s why explainable AI is crucial for building trust and confidence when putting models into production,’ he adds.” - Kolawole Samuel Adebayo, Executives from leading companies share how to achieve responsible AI, Fast Company; X/Twitter: @FastCompany

22. Retrain.ai helps close diversity gaps in hiring processes. “Our responsible AI driven software helps you make unbiased recruiting and hiring decisions. The platform supports your DEI goals by breaking down candidate profiles into skills while masking titles, degrees, or other factors that can introduce potential bias. Accurately pair candidates to best-fit positions to cut time to hire and build a skilled, diverse, future-proofed workforce.” - Hire the right people with a talent acquisition platform driven by AI, Retrain.ai; X/Twitter: @RetrainAI

23. Meta attempts to watermark the sources of AI-generated images. “...Meta’s researchers developed a system that leaves a secret binary signature into all images generated by latent diffusion models, like Stable Diffusion – creating a watermark for AI-generated images.

“Developed with France's National Institute for Research in Digital Science and Technology (Inria), the watermark is invisible to the naked eye but can be detected by algorithms. Meta says its marks can even be detected if the image is edited by a human post-generation.

“The Facebook parent said it is exploring ways to incorporate the Stable Signature research into its line of open source generative AI models, like its flagship Llama 2 model. It is also exploring ways to expand the Stable Signature to other modalities, like video.” - Meta Develops Invisible Watermarks to Track AI Image Origins, AI Business; X/Twitter: @business_ai
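
Meta's Stable Signature embeds its mark inside the image generator itself, which is hard to show briefly, but the general idea of an invisible, machine-readable watermark can be illustrated with a much cruder technique: hiding bits in the least significant bits of pixel values. The sketch below is that classic LSB trick for intuition only; it is not Meta's method and would not survive editing the way Stable Signature is designed to.

```python
import numpy as np

def embed(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a bit string in the least significant bit of the first pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0b11111110) | bit   # overwrite only the lowest bit
    return marked

def extract(image: np.ndarray, n_bits: int) -> list[int]:
    """Read the hidden bits back out."""
    return [int(v & 1) for v in image.reshape(-1)[:n_bits]]

signature = [1, 0, 1, 1, 0, 0, 1, 0]                        # an arbitrary 8-bit mark
image = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

watermarked = embed(image, signature)
print("Recovered:", extract(watermarked, len(signature)))    # matches the signature
print("Max pixel change:", int(np.abs(watermarked.astype(int) - image.astype(int)).max()))
```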

24. Intel and Mila partner on AI-driven climate change modeling. “While standard physics-based climate models can help predict the effects of climate change, they are complex and computationally expensive. They often take months to run – even on specialized supercomputing hardware – which reduces the frequency of simulation runs and the ability to provide granular, localized predictions.

“Furthermore, these models are typically unable to explain the reasoning or causal relationships underlying their predictions. Intel and Mila aim to fill this gap by building a new type of climate model emulator based on causal machine learning to identify which variables are predictive among high-dimensional inputs to traditional climate models.

“The project seeks to enable significant advancements in climate science and directly inform policy by enabling thorough and trustworthy predictions of the effects of climate change.” - Intel and Mila Join Forces for Responsible AI, Intel; X/Twitter: @Intel

25. DataKind and Grameen America work to level the playing field for entrepreneurial women. “Women in the US receive only 4% of all small business loans from mainstream financial institutions. Grameen America (GA) is dedicated to helping entrepreneurial women who live in poverty build businesses to enable financial mobility through microloans, financial training, and direct support to members.

“GA measures their performance by the number of active loans and active members in each relationship manager’s portfolio. Currently, each step of GA’s multi-step process for viewing this data equates to hours of people’s time; on average, staff take two hours to complete the process for each report with an average of 10 reports per week.

“DataKind is partnering with GA to develop carefully curated data visualization dashboards to complement and augment current daily, weekly, and monthly reports. Our project aims to automate the creation and visualization of these reports by transforming text-based data into charts, graphs, and maps using business intelligence tools. This can provide managers with the ability to identify trends and clients most in need of support.

“This, coupled with predictive analytics around loan default and attendance, will provide the GA team with the ability to improve retention and loan accountability so that their clients are able to continue their financial and professional growth and build their economic resiliency. This solution also provides GA staff with the time required to further scale their operations beyond their current 150,779 clients with a specific focus on Black women entrepreneurs.” - Advancing Economic Empowerment in Communities Across the U.S.: DataKind Launches Seven New Projects, DataKind; X/Twitter: @DataKind

AI is a powerful tool for businesses, and as AI technology continues to advance, new use cases arise. However, as AI increasingly becomes a part of everyday life, responsible AI use is a serious and pressing concern.

These examples highlight the awareness of the potential risks that come with AI, as well as the efforts major players are making in the AI space to ensure responsible use. When investing in artificial intelligence tools, it’s imperative to select solutions that prioritize responsible AI. CallMiner, for example, is a robust, AI-driven conversation intelligence platform with built-in safeguards, such as automatic redaction of sensitive customer information from audio- and text-based communication data.

Frequently asked questions

What is an example of responsible AI?

One example of responsible AI is CallMiner’s Redact software, which automatically removes sensitive customer information from audio- and text-based conversation data. Redact was built specifically to identify sensitive information that should be removed before agents can view the data while still maintaining the integrity of the original conversation.

What is ethical AI vs. responsible AI?

Ethical AI is a form of responsible AI. Ethical AI refers to the ethical principles surrounding AI, like preventing bias and creating fair results, while responsible AI focuses on the products, strategies, and policies that form ethical AI tools and systems. Responsible AI helps ensure that people and organizations use AI ethically and in ways that will benefit rather than harm others.

What are the components of responsible AI?

Many organizations create their own policies for what responsible AI means at those organizations. Usually, these policies define responsible AI as being transparent, accountable, reliable, inclusive, fair, safe, and private.
