

24 AI Professionals & Ethics Experts Reveal the Most Overlooked Obstacles for Companies When It Comes to AI Ethics/AI Bias (and How to Overcome Them)


The Team at CallMiner

October 06, 2020


The subject of ethics and bias in AI systems is widely discussed, especially as more practical applications for AI are discovered and implemented across all industries. AI systems are already heavily relied on in some critical decision-making processes, such as loan eligibility, hiring and recruiting, and more. And AI is used to detect potential fraud and improve the customer experience, such as in speech analytics solutions and customer service chatbots. In fact, AI has been used in contact centers in some form for many years now, and there are numerous smart implementations of AI and machine learning in data analytics today.

Learn more about how AI can be leveraged to prevent fraud while preserving the customer experience by downloading our white paper, Sitel + CallMiner Survey: Preventing Fraud and Preserving CX with AI.

Due to the growing use of AI in a wide range of industries for decision-making with significant impacts on consumers, populations or the public at large, AI bias is a significant concern. Without taking the proper steps to reduce AI bias and ensure that AI systems are making ethical decisions, companies risk reputation damage that is difficult, if not impossible, to overcome.

If you’re thinking about implementing AI in your organization, there are some critical pitfalls and obstacles you should be aware of to avoid AI ethics issues and reduce AI bias. To find out what common challenges exist and how companies can overcome them, we reached out to a panel of AI professionals and ethics experts and asked them to answer this question:

“What’s the most overlooked obstacle for companies when it comes to AI ethics / AI bias (and how can they overcome it)?”

Read on to learn what our experts had to say about the most overlooked obstacles when it comes to AI ethics and AI bias, and how you can overcome them.

Shahid Hanif

@Shufti_Pro

Shahid Hanif is the CTO & Founder of Shufti Pro.

“The most commonly overlooked obstacle for companies when it comes to AI ethics and AI bias is…”

A lack of resources, namely data and technical staff. Insufficient data fed to an AI model that was trained by only one or a few people creates a barrier to future bias-elimination efforts.

To remove this bias, consider your AI model as an inclusive workplace where you need to add unbiased data and have a diverse group of technical staff train the model. Training AI models with diverse and sufficient data helps to mitigate AI bias.

Jack Zmudzinski

@FutureProcessin

Jack Zmudzinski is a senior associate at Future Processing.

“The biggest obstacle for companies getting on board with AI is data…”

Put simply, AI can only do its thing when it is fed data – not just enough data but the right kind of data. In order for AI to be effective, brands need to have the following data strategies in place:

  • An adequate amount of actionable data that has been properly cataloged and governed. It goes without saying that this data also needs to adhere to current government guidelines covering its collection, storage, and sharing.
  • An assurance that the data held is not biased. This is incredibly important as AI will use this data in order to learn, so if biased data is added to the mix, the entire process will be skewed and inaccurate.

Before even thinking of getting started with AI, brands need to make 100% sure that their data is up to the job, as this will save them a lot of time and money in the long run.

Rafael Ruggiero

@turing_bot

Rafael Ruggiero is an astrophysicist and creator of TuringBot.

“The detection of bias in AI models and datasets can be challenging because the most commonly used models are black boxes…”

But alternatives exist and are gaining popularity. For instance, a special technique called symbolic regression generates models that are explicit mathematical formulas. If an input variable that should not be predictive of anything appears in the resulting formulas, then bias can be said to be taking place.
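To make that idea concrete, here is a minimal sketch using the open-source gplearn library rather than the TuringBot tool the author mentions. The synthetic data, the meaning of each feature, and the check itself are invented purely for illustration.

```python
# Illustrative sketch: fit a symbolic regression model and inspect whether a
# variable that should carry no signal shows up in the learned formula.
# The synthetic data and feature roles are made up for this example.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 2000

income = rng.normal(50, 10, n)        # X0: genuinely predictive
debt_ratio = rng.uniform(0, 1, n)     # X1: genuinely predictive
protected = rng.integers(0, 2, n)     # X2: should be irrelevant to the outcome

# Ground truth depends only on income and debt ratio
y = 0.8 * income - 20 * debt_ratio + rng.normal(0, 1, n)
X = np.column_stack([income, debt_ratio, protected])

model = SymbolicRegressor(population_size=500, generations=5, random_state=0)
model.fit(X, y)

formula = str(model._program)  # gplearn names inputs X0, X1, X2, ...
print("Learned formula:", formula)

# If X2 (the variable that should not be predictive of anything) appears in
# the explicit formula, the model is leaning on it -- a red flag for bias.
if "X2" in formula:
    print("Warning: the supposedly irrelevant variable appears in the model.")
```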

Robb Hecht

Robb Hecht is an Adjunct Professor of Marketing at Baruch College.

“The most overlooked obstacle for companies when it comes to AI ethics is the cleanliness of the data upon which the AI builds its knowledge…”

AI learns from the data it’s provided, so if data capture was skewed in any way on the front end, its learnings and the recommendations it outputs to humans could be unethical.

To overcome this, data integrity councils and professionals are emerging who market, collect, and assess data so that the AI on the back end is provided with data that is, at least in Western economies, fully representative of those affected by its decision recommendations. For example, if the decisions and recommendations coming out of an AI inform policies that affect multiple races, then the data the AI learned from should include fair and balanced racial data.

Joe Tuan

@topflightapps1

Joe Tuan is the CEO at Topflight Apps.

“AI is only as intelligent as you teach it to be…”

And the way you teach AI is by trial and error. That’s where many companies fall short of users’ expectations: they don’t do enough QA to perfect their ML algorithms. As it turns out, even behemoths like HP have failed on that front. I’m sure you heard the outcry from a decade ago, which resurfaced with the Black Lives Matter movement, about HP motion-tracking webcams not recognizing Black people.

If they’d spent more time testing their image recognition software and thinking about diversity, there wouldn’t be a case to discuss. HP would have had plenty of time to teach its AI engine different patterns, accounting for different races. Companies need to spend more time on testing and verifying the results of self-learning ML algorithms.

Andrey Podkolzin

@ZyfraCompany

Andrey Podkolzin is a Data Scientist at Zyfra.

“The astronomical rise in the usage of AI-driven applications across many areas and facets of modern society and the economy has…”

Stirred a furious debate about our increasing reliance on machine intelligence. Because computer algorithms are designed by humans, it’s only natural to assume that, alongside the good aspects of our collective consciousness, AI is destined to be shaped by individual and societal biases as well. The question is, will AI actually be smart enough to recognize and filter out these imperfections, or will it, in fact, proceed to magnify them?

Defining what’s fair and what’s not biased is a challenge in itself. In its most general sense, bias is any form of preference for one thing over another. The fairness aspect, in turn, reflects how often this preference is systematically used against an individual or a group of individuals. Without a doubt, the most prominent biases correspond to attributes such as ethnicity, gender, disability, and sexual orientation. So, what are the gateways of bias, and how does it propagate through AI and ultimately influence our decision-making? Essentially, there are two main avenues.

  1. Through badly designed algorithms

One canonical example of a badly designed AI system is COMPAS, a risk assessment tool used in the U.S. criminal justice system. In essence, COMPAS is designed to aid judges in evaluating whether a defendant should be kept in jail or released while awaiting trial by considering their chances of being rearrested. The problem with COMPAS was that, even though on the surface the characteristics of race and ethnicity were excluded from consideration, the scoring was found to systematically produce more false positives for African American defendants, making them twice as likely to be kept in jail before trial. It turned out that the key problem was that the scoring algorithm did not consider that different groups of defendants are arrested at different rates. In other words, if black defendants are arrested at a higher rate than white defendants in the real world, they will have a higher rate of predicted arrest as well. That means they will also have higher risk scores on average, and a larger percentage of them will be labeled high-risk, both correctly and incorrectly. In this case, the designers essentially failed to recognize inherent disparities between groups of defendants, and thus injected the bias back into the system. (A short numeric sketch after the next item illustrates this arithmetic.)

  2. Through badly collected data

The most common pathway of bias into AI systems is surely via the training data fed to the machines during the learning step. A simple case of badly stratified data (i.e., under- or over-representation of certain groups or attributes) will likely lead to severely skewed results. A seminal example of this problem is the face recognition systems sold by the likes of IBM, Microsoft, and Face++. It was shown that, due to inaccurate sampling of photos of various demographic groups, darker-skinned females were 34.4% more likely to be incorrectly classified by gender than lighter-skinned males. A similar issue is evident in practically any speech recognition system that fails miserably when a non-native speaker with an accent asks a query.
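Returning to the COMPAS example above, here is a minimal numeric sketch of the base-rate arithmetic. The numbers (base rates, true-positive rate, precision) are purely illustrative and are not taken from the COMPAS data; the point is only that a classifier with identical accuracy characteristics for two groups still yields a higher false-positive rate for the group with the higher underlying arrest rate.

```python
# Illustrative arithmetic: same TPR and same precision for both groups,
# yet the group with the higher base rate gets more false positives.
# All numbers are invented for this example.
def false_positive_rate(n, base_rate, tpr, precision):
    positives = n * base_rate                 # people who will actually be rearrested
    negatives = n - positives                 # people who will not
    true_pos = positives * tpr                # correctly flagged as high risk
    false_pos = true_pos * (1 - precision) / precision  # flagged but never rearrested
    return false_pos / negatives

# Same classifier quality (TPR 0.7, precision 0.6), different base rates
for name, base_rate in [("group with 30% rearrest rate", 0.30),
                        ("group with 50% rearrest rate", 0.50)]:
    fpr = false_positive_rate(n=1000, base_rate=base_rate, tpr=0.7, precision=0.6)
    print(f"{name}: false positive rate = {fpr:.2f}")
```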

Mitigation strategies

While it’s somewhat ironic, the best solution for mitigating the impact of unethical AI practices is yet another AI system. To date, there are already many frameworks designed to detect and remove unwanted bias. At a macro level, these mitigation algorithms are represented by three major clusters, corresponding to the pre-processing, in-processing, and post-processing phases of AI system development.

For example, the pre-processing cluster includes (a) straightforward re-weighing schemas that add weights to training examples before classification, and (b) probabilistic transformations that edit the features and labels of the data, guided by group fairness, individual distortion, and fidelity metrics.
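As an illustration of the re-weighing idea, here is a minimal sketch computed directly with pandas (production-ready implementations exist in open-source fairness toolkits such as IBM's AIF360). The column names and toy data are invented for the example.

```python
# Illustrative re-weighing sketch: give each training example a weight so that,
# after weighting, the protected group and the label are statistically independent.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "label": [1, 1, 0, 0, 0, 1, 0, 0],
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)       # P(group)
p_label = df["label"].value_counts(normalize=True)       # P(label)
p_joint = df.groupby(["group", "label"]).size() / n      # P(group, label)

# Expected-vs-observed ratio: weight = P(group) * P(label) / P(group, label)
def weight(row):
    return p_group[row["group"]] * p_label[row["label"]] / p_joint[(row["group"], row["label"])]

df["sample_weight"] = df.apply(weight, axis=1)
print(df)

# These weights can be passed to most scikit-learn estimators, e.g.
# clf.fit(X, y, sample_weight=df["sample_weight"]).
```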

In turn, the in-processing cluster accommodates methods like (a) prejudice remover, integrating a discrimination-aware regularization term to the learning objective; or (b) adversarial de-biasing classification, which trains models to simultaneously predict the target label and prevent a jointly-trained adversary from predicting a protected feature that is prone to discrimination.

Finally, the post-processing cluster encompasses methods and metrics like (a) equalized odds, which checks whether, for any particular label and attribute, a classifier predicts that label equally well for all values of that attribute; and (b) reject option classification, which uses Bayesian methods to determine confidence bands around the decision boundary where uncertainty is highest.
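A minimal sketch of an equalized-odds style check follows: it simply compares true-positive and false-positive rates across groups. The predictions, group labels, and any tolerance you would apply are invented for the example.

```python
# Illustrative post-processing check: compare TPR and FPR across groups.
# Large gaps indicate the classifier does not satisfy equalized odds.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def rates(mask):
    yt, yp = y_true[mask], y_pred[mask]
    tpr = (yp[yt == 1] == 1).mean() if (yt == 1).any() else float("nan")
    fpr = (yp[yt == 0] == 1).mean() if (yt == 0).any() else float("nan")
    return tpr, fpr

for g in np.unique(group):
    tpr, fpr = rates(group == g)
    print(f"group {g}: TPR={tpr:.2f}, FPR={fpr:.2f}")

# Equalized odds asks that TPR and FPR be (approximately) equal across groups;
# a gap beyond a chosen tolerance would trigger further investigation or a
# post-processing adjustment such as reject option classification.
```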

Cat Casey

@csdisco

Cat Casey is the Chief Innovation Officer for DISCO and member of the AI Transparency Working Group for the Sedona Conference.

“Algorithms and AI systems are not free from preconceived notions or prejudice because they are created by humans, with all of the implicit and explicit bias they bring to the table…”

There are methods for validating the absence of human bias in the underlying structure of AI or machine learning systems, but what is harder to adequately account for in many cases is data bias. Even an ethically designed AI tool, applied in a balanced and ethical way, will still yield biased results if the data inputs are affected by inappropriate representation or externalities.

In the case of representation, insufficient data, or sample bias, strictly interrogate your data sources for breadth and inclusion of all the key groups that may be affected by the AI. In the event the primary data source(s) are insufficient, then incorporate external data sources to round out the representativeness of the data you feed into your AI system.

Externalities affecting data integrity are harder to account for. In legal applications of AI, for example, it is especially important to ensure that recidivism rates are not skewed by over-policing in certain areas or by historical disparities in outcomes along socio-economic or racial lines. In any AI deployment, the volume and variety of data sources, combined with an impartial evaluation of both the data itself and the outcomes, are pivotal in mitigating data bias.
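As one small, hypothetical example of the kind of interrogation described above, the sketch below compares each group's share of a training set against a reference distribution (for instance, census or customer-base figures) and flags under-represented groups. All names, counts, and the tolerance are invented.

```python
# Illustrative representation audit: compare each group's share of the training
# data against a reference population and flag under-represented groups.
import pandas as pd

training_counts = pd.Series({"group_a": 7200, "group_b": 1900, "group_c": 900})
reference_share = pd.Series({"group_a": 0.60, "group_b": 0.25, "group_c": 0.15})

observed_share = training_counts / training_counts.sum()
gap = observed_share - reference_share

report = pd.DataFrame({"observed": observed_share, "reference": reference_share, "gap": gap})
print(report.round(3))

# Groups whose observed share falls well below the reference share are candidates
# for supplementing with external data sources, as suggested above.
under_represented = report[report["gap"] < -0.05].index.tolist()
print("Under-represented groups:", under_represented)
```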

Abdul Rehman

@VPNRanks

Abdul Rehman is a cybersecurity editor at VPNRanks.

“The most overlooked obstacle in AI ethics is the representation in the training data sample…”

The bias clearly exists. AI runs on large volumes of data collected from humans, and it makes decisions based on that data. But if the data is taken from humans, and humans are not fair with each other and prejudice exists in the real world, aren’t automated systems going to adopt the same unethical behavior that humans exhibit?

The solution, in my opinion, is gender and racial inclusion in the data used to train algorithms and systems. And to make sure that happens, the workforce should also be diverse and should include people from all races and genders.

Speech-based AI systems commonly struggle to interpret different regional accents. This can amount to racial discrimination and may offend a lot of people, as speech-based AIs mostly speak with an American or British accent.

If you search for ‘Latina or Asian girls,’ half of the image results are semi-nude or pornographic. That may be the result of a lack of diversity in the tech field or in the data, but either way, AI algorithms have the potential to offend a huge number of people in many possible ways.

To err is human, but if machines are copying humans, are we really in a position to bear machines making errors?

Rahul Vij

@WebSpero

Rahul Vij is the CEO at WebSpero Solutions.

“The most overlooked obstacles when it comes to AI ethics and AI bias are…”

  1. Favoritism in Data

It can be called ‘racism’ in technology. There are instances when people are shortlisted or categorized based on their gender, nationality, or other similar factors. These things are common in organizations worldwide, and they largely go unnoticed. They become a bigger problem when promotions go to people with names associated with a favored gender or race while others are neglected.

Another consequence of biased data is that hiring managers do not reach the right candidates, because the AI solution has shortlisted potential candidates according to their gender or race.

Solution:

AI-based solutions are created by people, so the easiest way to get rid of data bias is to remove the elements in a solution that promote racism. While choosing a technical tool for hiring managers, it is necessary to ensure the platform does not shortlist candidates based on their race or gender.

  2. Distribution of AI-based Data

It doesn’t matter how effective your AI-based solutions are at eliminating bias; they do not work if people are unaware of them. When data is not circulated in an organization to promote equality, it affects transparency.

Solution:

One way to make AI-based solutions work is to distribute them over different technology-based channels. From video to blogs, there are plenty of ways to quickly make employees aware of corrected AI-based data.

Petra Odak

@betterproposals

Petra Odak is the Chief Marketing Officer at Better Proposals.

“The most overlooked obstacle with AI for companies that want to implement it is…”

Transparency. In order for customers to trust them, AI algorithms need to be transparent. If you can’t show how the AI works, you can’t correct for any bias it may have, which is a matter that’s very important to customers who know a thing or two about AI. The other problem is that by making their algorithms transparent, most companies today would reveal that their ‘AI’ is nothing but a hoax. I don’t think we’re close to seeing true AI in tech anytime soon, at least in my industry (SaaS).

Dr. Gemma Galdon-Clavell

@EticasConsult

Dr. Gemma Galdon-Clavell is the CEO of Eticas.

“AI-powered algorithmic development prioritizes corporate stakeholders, company goals, profitability or efficiency…”

Rather than the technology’s impact on the individuals it may eventually affect (e.g., people who have applied for a loan or a job). As a result, at first glance, it appears that the algorithm is ‘functioning as expected.’ What’s even more alarming is that algorithms often work in ways that are unknown even to their own developers.

What we’ve seen in conducting algorithmic audits is that engineers create algorithms that have a serious social impact in the real world. Furthermore, we’ve created a technological world where engineers are calling all the shots, making all the decisions without knowing what could go wrong: bias, false positives, discrimination, etc.

Peter Mann

Peter Mann is the Founder of SC Vehicle Hire.

“AI is being used in the vehicle rental industry to match the right cars to customers, optimize pricing, and optimize fleet utilization. The most overlooked obstacle I see for AI ethics and AI bias is that…”

Many companies do not pre-process their data to create representations that exclude sensitive attributes which may introduce bias. Existing data used to train AI can carry the racial or gender bias of our employees, which causes the AI to reproduce the same bias in its decision-making. Many companies have the perception that AI, as an emerging technology, is based on science and fact, so there is no way it could be biased. However, AIs are created and trained by humans. For example, crime data could include more samples from certain neighborhoods that have more policing, which is not reflective of overall crime patterns. An AI used to sort resumes could be trained on data created by recruiters who tended to underweight female or foreign-sounding names, causing it to learn the same bias.

Without pre-processing the data, the AI makes biased decisions, and humans eventually have to intervene to filter out inappropriate decisions. This negates the benefit of AI to process data and make decisions faster than the human mind can.
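A minimal, hypothetical sketch of that pre-processing step is shown below: it drops the explicitly sensitive column before training and then checks whether the remaining features still act as proxies for it. Column names and values are invented for the example.

```python
# Illustrative pre-processing sketch: drop explicit sensitive attributes before
# training, then check whether the remaining features still act as proxies for
# them (e.g., a postcode strongly associated with a protected group).
import pandas as pd

df = pd.DataFrame({
    "age": [23, 45, 31, 52, 37, 29],
    "postcode_risk": [0.9, 0.2, 0.8, 0.1, 0.7, 0.85],
    "gender": ["F", "M", "F", "M", "F", "F"],
    "approved": [0, 1, 0, 1, 0, 0],
})

sensitive = ["gender"]
features = df.drop(columns=sensitive + ["approved"])

# Proxy check: how strongly does each remaining feature separate the sensitive groups?
for col in features.columns:
    by_group = df.groupby("gender")[col].mean()
    print(f"{col}: mean by gender ->\n{by_group}\n")

# If a feature's distribution differs sharply across groups (as postcode_risk does
# here), dropping the sensitive column alone is not enough; the feature may need
# transforming or removing as well.
```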

Alexander De Ridder

@INKcoinc

Alexander De Ridder is the co-founder, CTO, and creator of INK.

“The most overlooked obstacle for overcoming AI bias is…”

Inadequate or limited datasets. Artificial intelligence, like organic intelligence, is only as smart as the information it has access to. If the information used to train an AI is limited or biased, the intelligence will reflect those flaws. Whether for an AI or a person, behavior is partially determined by the expectations we set. What conclusions are we encouraging the AI to come to? What standard do we set for ourselves? What culture do we create? If an AI draws an unfavorable or biased conclusion, condemning AI in general is not the answer. Instead, we need to reevaluate the data used to train it. We need to examine the biases or oversights of the data scientists who designed the dataset or training programs. Opting for ample and diverse datasets and evaluating the potential biases they may contain are critical for overcoming this obstacle.

Ian Kelly

Ian Kelly is the VP of Ops for NuLeaf Naturals.

“The most overlooked roadblocks in the AI industry are…”

Artificial stupidity and racist machines. All machines have a learning period, just like humans have an education. During the learning period, the machine is fed with inputs, and it makes a decision based on a pattern it finds. Many companies, in an effort to become AI-powered before their competition, end up rushing the whole process. The problem is that they end up not providing machines with enough patterns during the learning phase. So, in a real industry scenario, these machines turn out to make mistakes that normal human beings wouldn’t make. This is the most overlooked problem, and most companies don’t realize the mishap until it’s too late. The only way to solve this problem is by having an in-depth learning period. The learning period should be followed up by multiple rigorous testing periods.

Humans are not made racist, but machines can be made racist as the unknown biases of the makers creep in. This problem is seen mostly when companies try to use AI to hire, and it’s not limited to just racial bias. For example, Amazon had to scrap their AI hiring technology because it proved to be gender-biased against women. In such cases, AI plays the disastrous role of multiplying the very thing it was devised to combat. Unfortunately, it’s very hard to detect such problems early on, and companies end up overlooking it. The only solution is adopting AI tech that is tried and tested for years. No company wants the reputation of being gender or racial biased.

Al Marcella, Ph.D., CISA, CISM

Al Marcella, Ph.D., CISA, CISM is the President of Business Automation Consultants (BAC), LLC.

“Cutting my teeth in the IT world as a programmer and eventually landing in the fields of IT risk management and security…”

The most overlooked obstacle for companies when looking to AI solutions (which inherently involve ethics) is failing to ask these questions:

  • Whose ethics?
  • Am I willing to subscribe to or live with that perspective, understanding, or interpretation of ethics?

Given that, for the moment, the knowledge base (the rules) – which ‘fire’ when accessed via the inference engine at the heart of an AI application – has been coded by a human, based upon that human’s perception of ethics or potentially upon a societal perception of what is ethically proper, who is to say that that perception of ethical decision making is correct?

Often you hear people lament that the computer did this wrong or the system failed to do this correctly, forgetting that the ‘computer’ was programmed by a human and is only responding and reacting to the logic encoded in its programming, as written by a human.

Until AI fully possesses the capability to self-design, create, execute, update, and modify its internal application programs, and is given the authority and ability to design the rules used to make decisions, the AI application will be logically linked to a human. As a result of that linkage, the AI will be dependent on that human’s view of what, in this case, is or is not ethical.

Think for a moment about how you teach someone to be ethical. How did you learn to be ethical? Whose framework, logic, biases, behaviors, and experiences were used to teach you those lessons on how to behave ethically? Will the same basis, approach, perspectives, etc., be used to establish the rules by which an AI application will ‘learn’ to be ethical? Once again, the question: ‘Whose ethics?’

Thus, one of the obstacles which will need to be overcome is to understand and accept that AI decisions (for now) are based on logic, not emotions as expressed and felt by humans. Understand that the AI application will decide not based upon what YOU think is ethical, but what has been programmed into the core decision making logic of the application, which provides rules that define examples of ethical decision making.

That core knowledge, those rules for decision making, will come from the human (for now) who wrote the inference engine code, designed and populated the knowledge base, and who may abide by a completely different perspective on what is and what is not ethical.

The true obstacle may be in admitting that, at this moment, society does not have a globally accepted set of operating rules by which it can hold AI accountable. The technology is rapidly outpacing its creator. In the framework of this question on AI and ethics, if AI makes an erroneous or unethical decision (by society’s standard and, again, ‘Whose ethics?’), how does, how will, or how can society hold AI accountable?

There are many concerns related to the evolution of AI, such as rogue AI (becoming self-aware and deciding that human control is no longer wanted or needed), whether AI systems should be allowed to kill (military applications instead of placing humans at risk), and the understanding that AI isn’t perfect. So, what happens if it makes a mistake because it was provided with bad data? Thus, AI bias (e.g., profiling) is a subset of the larger AI and ethics issue.

Given the question raised above, ‘Whose ethics?’, maybe it would be better to start with a simple premise and build from there…carbon-based life is to be preserved, respected, and protected.

Bartosz Baziński

@SentiOne_com

Bartosz Baziński is the Co-founder and COO of SentiOne.

“Chatbots are taking the world by storm: they can assist in customer service queries, book medical appointments, recommend books or movies, and find our missing parcels…”

Chatbots will be only as good as their training dataset. In order for AI to do its job, models need to be trained on data. However, data brings quite a few obstacles to the table.

First of all, how can we ensure that we have well-rounded and well-represented content to create a training dataset? Even if we take data from real-life conversation transcripts and historical messages from a customer service database, we may still underrepresent certain groups. It has been widely reported that the accuracy of several common NLP (natural language processing) tools is dramatically lower for speakers of ‘non-standard’ varieties of English, such as African American Vernacular English, slang, or speech with strong accents.

Second, someone needs to judge the quality of the content for the training dataset. Again, this is crucial for the success of any chatbot, as human bias can easily creep into AI through algorithms and data. Hidden bias is present in both people and data, and often bias is transferred to data because of people. If you do not have enough data or you want well-rounded data, you can go shopping around for data. However, that data may contain a bias that you don’t even know about. There were two famous examples of well-known chatbots that quickly misbehaved, purely due to their training dataset.

  1. Facebook trained their chatbot on Reddit data, and it quickly learned abusive language. Consequently, it ended up being offensive and vulgar.
  2. In 2016, Microsoft’s AI chatbot, Tay, was withdrawn from the market within 24 hours as it started tweeting racist comments. What happened to Tay? It was simply trained on conversations from Twitter and replicated human bias.

Another limitation to AI is that machines often don’t know what they don’t know. While AI is fantastic for interpreting large volumes of information, there is no guarantee that the technology will understand all the data. Again, a flawed chatbot is either a result of skewed data or an algorithm that does not account for skewed data. It is crucial for AI engineers and chatbot designers to be aware of those limitations so they can prevent them, or at least mitigate the risk, at the development stage.

How can we minimize the bias in conversational AI?

Encourage a representative set of users, as well as representative content and a representative training dataset. Create a diverse development team that will keep an eye on each other’s unconscious biases. Establish processes within the organization to mitigate bias in AI, such as additional testing tools or hiring external auditors.
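As a small, hypothetical example of the kind of additional testing tool mentioned above, the sketch below evaluates a chatbot's intent classifier separately on each language variety in a test set, so a drop for one group is caught before release. The data and group labels are invented.

```python
# Illustrative slice test: per-variety accuracy for a chatbot's intent classifier.
import pandas as pd

results = pd.DataFrame({
    "variety": ["standard", "standard", "AAVE", "AAVE", "non-native", "non-native"],
    "correct": [1, 1, 1, 0, 0, 1],
})

by_slice = results.groupby("variety")["correct"].mean()
print(by_slice)

# A markedly lower accuracy on one variety is the signal described above:
# the training data under-represents that group and needs rebalancing
# before the chatbot ships.
```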

Darren Deslatte

Darren Deslatte is the Vulnerability Operations Leader at Entrust Solutions.

“The idea of AI and related intelligence technologies has captured the public imagination for many years…”

But such advanced technologies need to be considered from ethical and moral standpoints when put into practice.

When it comes to AI, many companies and developers overlook the ethical implications of their AI systems. AI systems do not have a built-in moral or ethical code. AI learns to recognize and classify the information it is given based on a series of directives from its developers.

In this way, developers must consider their responsibility in determining the role (and, therefore, the behavior) of the AI system they are creating. Further, companies and developers working to create AI systems must consider questions such as:

  • What are the biases present in my thinking that could negatively impact the development of this system?
  • What are the implications of introducing this system into the market that we are making it for?

Through such questions, AI systems can be created that serve important roles in various enterprises while not actively contributing to information biases present in human societies.

Taylor McCarthy Hansen

@theecommmanager

Taylor McCarthy Hansen is a Co-Founder of The Ecomm Manager.

“I believe cultural sensitivity is a major challenge in AI ethics…”

How can AI respond to vast cultural differences, especially as remote work has enabled cross-cultural collaboration across the globe? People in China, for instance, may be more open to AI-based surveillance implementations in everyday life, but that may not be so with citizens in western countries that place a greater value on privacy and independence.

AI solutions need to be adaptable and programmed to recognize geo- and social-based differences, and this is where humans need to play a continuing role. At the end of the day, AI response is still dependent on human input.

Yaniv Masjedi

@Nextiva

Yaniv Masjedi is the CMO at Nextiva.

“A major flaw in using artificial intelligence is that…”

It lacks the knowledge to make decisions about a problem without the appropriate algorithm and data. Unlike the human brain, AI relies heavily on inputs from developers. The lack of a specific algorithm can affect the decision-making process and produce an inaccurate result. The human mind can quickly overcome this by creatively searching for unique solutions and finding new information. However, the current state of AI technology doesn’t allow AI-powered programs to find new information the way humans can.

David McHugh

@crediful

David McHugh is the CMO of Crediful.

“AI bias originates with its creators, but it doesn’t have to…”

A critical obstacle to overcome, bias can be replaced with equality by educating ourselves. Once we have overcome our own biases or at least learned to see them objectively, we can then translate our knowledge into AI without direct or indirect bias.

The data used to fuel AI creation is sensitive and should be treated as such. Good people harnessing data to propagate AI technology should be aware that this data can also be used maliciously. It’s ethical to protect this data and use it for good.

Michael Yurushkin

Michael Yurushkin is the CTO & Founder of BroutonLab.

“One of the most overlooked obstacles when it comes to AI bias is the conviction that…”

Machine learning algorithms reduce bias and eliminate subjective data interpretation because they are based in black or white mathematical markers.

However, many forget that the neural networks and AI are trained by developers, who are humans. So, it is crucial to consider their cultural and racial backgrounds. Their shared bias is more likely to be used by the AI if their backgrounds are very similar. For example, AI-powered resume search can assign a lesser priority to black-sounding names, regardless of experience or skills.

During the training phase, AI learns how to act according to the input, and it can easily embed human and societal biases. This phase doesn’t cover all data that the system will deal with once deployed in the real world. Many overlook that it is much easier to fool AI than a human. For example, you can introduce a random dataset to make a machine see things that aren’t there.

To overcome this obstacle, we need powerful recognition tools for enforcing fairness. There are three stages to reduce AI bias: pre-processing, in-processing, and post-processing.

I believe that the pre-processing method is the most effective. It allows data scientists to assess the quality of the data and eliminate bias before starting to train a model, which saves time, energy, and effort. First and foremost, gather a diverse team of data scientists to interact with your model in various ways. Then find comprehensive data and experiment with a variety of datasets and metrics. Perform external validity testing and auditing.

Incorporate characteristics like gender and race and address possible social bias coming from particular attributes within the code. You can also improve the learning process and reduce bias in predictions by using techniques that change data labels for similar records.
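One family of label-changing techniques works roughly as sketched below (sometimes called “massaging” the training data): a simple ranker scores every record, and the labels of borderline records are flipped until positive rates are roughly equal across groups. The data, group names, and ranker here are invented for illustration and are not taken from any particular project.

```python
# Illustrative "massaging" sketch: flip the labels of borderline records so the
# positive rate is (roughly) equal across groups before training the final model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "group": rng.choice(["A", "B"], size=n),
})
# Biased historical labels: group B receives fewer positives at the same feature values
df["label"] = ((df["x1"] + rng.normal(scale=0.5, size=n) - (df["group"] == "B") * 0.8) > 0).astype(int)

# Score every record with a simple ranker trained on the biased labels
ranker = LogisticRegression().fit(df[["x1", "x2"]], df["label"])
df["score"] = ranker.predict_proba(df[["x1", "x2"]])[:, 1]

pos_rate = df.groupby("group")["label"].mean()
deficit_group = pos_rate.idxmin()   # group with too few positives
surplus_group = pos_rate.idxmax()   # group with too many positives

n_d = (df["group"] == deficit_group).sum()
n_s = (df["group"] == surplus_group).sum()
p_d = df.loc[df["group"] == deficit_group, "label"].sum()
p_s = df.loc[df["group"] == surplus_group, "label"].sum()

# Flip m labels in each group so that (p_d + m) / n_d == (p_s - m) / n_s
m = int(round((p_s * n_d - p_d * n_s) / (n_d + n_s)))

# Promote the highest-scoring negatives in the under-served group...
promote = df[(df["group"] == deficit_group) & (df["label"] == 0)].nlargest(m, "score").index
# ...and demote the lowest-scoring positives in the over-served group.
demote = df[(df["group"] == surplus_group) & (df["label"] == 1)].nsmallest(m, "score").index
df.loc[promote, "label"] = 1
df.loc[demote, "label"] = 0

print(df.groupby("group")["label"].mean())  # positive rates are now roughly equal
```

In practice, such relabeling is applied cautiously and audited, since it deliberately alters the historical record the model learns from.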

Kavita Ganesan

@opinosis

Kavita Ganesan is the founder of Opinosis Analytics.

“From what I’ve seen from being in the industry for a while now, there are essentially three problems…”

  1. AI developers are primarily focused on getting models developed rather than thinking about the implications of the model or the problems in the underlying data that can be perpetuated, such as racial bias.
  2. Leaders have a limited understanding of how AI or machine learning works and are thus not aware of the potential implications of those models. They think of AI as a silver bullet, which it’s not. They don’t realize that garbage-in equals garbage-out. In the context of AI, this means biased data can result in biased predictions.
  3. Companies don’t often have a data strategy and thus, there is an inherent bias in their data.

The first step to fixing issues related to AI bias and ethics is education at the leadership level. Once leaders understand how AI works and how bias gets introduced through models, they will then push their developers to do the same. Keep in mind that developers are often trying to get solutions out the door and may not necessarily be thinking about the downstream impact on customers.

The responsibility falls on leadership. By being in the know, they’ll be able to enforce appropriate policies prior to releasing any AI applications. For example, they can require sufficient evaluation to ensure that models are tested exhaustively across all types of customers or employees.

The second way potential bias issues can be minimized is by having a big data strategy.

Phillip Gales

@InateAi

Phillip Gales is the Founder of inate.ai.

“The biggest problem in AI ethics is pre-existing human bias in the training data…”

In order for the algorithm to work correctly, it must mimic the inputs and outputs of the training data, and that effectively bakes explicit or implicit bias into the AI from its inception. It’s like the ‘original sin’ of AI.

The Venture Capital (VC) industry has historically been dominated by elite white males and has shown incredible bias towards investing in companies led by similar peers. At inate.ai, we analyze data from growth-stage startups, and we determine the likelihood of VC investment. In order to be accurate in this determination, we need to accurately model a VC’s behavior, and this means we have to account for any biases that exist.

These biases may be explicit or implicit. An example of an explicit bias would be a VC not investing in a founder because they just don’t look or sound right. Often, that’s just a biased opinion based on the founder’s background or ethnicity. Thankfully, the VC industry is increasingly aware of this bias and is working hard to correct it. Implicit biases occur when a VC is looking for particular traits, properties, or indicators in an unbiased way, but those traits are biased themselves. For example, VCs may value education from certain institutes or in certain subjects, such as engineering from Ivy League schools. While their analysis of the founder or company may be unbiased, there may nonetheless exist an underlying bias because of the selection process of that school, the success rate of individuals in that subject, or even the accessibility of that program.

When modeling the likelihood of VC investment, we find ourselves uncovering issues like a lower likelihood of investment for female founders or founders from certain backgrounds. We also uncover more subtle biases, like a preference for founders who studied certain subjects that themselves correlate with low representation of female founders or founders from diverse backgrounds.

Human bias exists in a number of explicit and implicit ways, and the greatest issue for AI ethics is the trade-off between accurately modeling the characteristics an algorithm needs to model and not baking those biases into it in the process. The latter perpetuates systemic bias, because while the algorithm may be soft, the implications are long-lasting and very hard.

Charlie Wright

@ImprimaVDR

Charlie Wright is an IT consultant at Imprima.

“As we all know, AI applications are created to make decisions based on training data…”

However, this data can sometimes act in detrimental ways, affecting human decisions and reflecting inequalities around factors such as gender, race, or age. The patterns that algorithms learn are difficult to overcome, and addressing them is everyone’s responsibility; that is why I identify this as one of the most challenging biases within the industry. Correcting it is a very complex task, although there are many solutions available to tackle these issues, such as pre-processing the data.

 

How does your company ensure AI ethics and reduce bias when implementing AI systems?  
