By Prangya Pandab - February 09, 2022
Over the last couple of years, society has begun to wrestle with the extent to which human prejudices can infiltrate artificial intelligence systems, with potentially disastrous consequences. Being acutely aware of those risks, and striving to eliminate them, is an urgent priority at a time when many businesses are looking to adopt AI systems across their operations.
Today, there are two major issues with machine learning. The first is the “black box” problem: machine learning models can make highly accurate predictions, but they cannot explain the reasoning behind them in a way that is comprehensible to humans. A model typically provides only a prediction and a confidence score for that prediction.
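As a minimal sketch of this “black box” view, consider a toy classifier (the weights and features here are hypothetical, not from any real system): the caller receives a label and a confidence score, but nothing about why the weights are what they are.

```python
import math

# Hypothetical logistic-regression weights, assumed to have been learned
# elsewhere; the model exposes no rationale for these values.
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = -0.1

def predict(features):
    """Return only a class label and a confidence score -- the 'black box' view."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    confidence = 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the score to (0, 1)
    label = 1 if confidence >= 0.5 else 0
    return label, confidence

label, conf = predict([1.0, 0.5, 2.0])  # confidence is roughly 0.75
```

The caller can act on `label` and `conf`, but neither value says which input drove the decision, which is precisely the gap explainability work tries to close.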
Second, a machine learning model is limited by the data it was trained on: if the training data carries historical bias, that bias will, left unchecked, surface in the predictions. Machine learning offers exciting possibilities for both consumers and businesses, but the historical data on which these algorithms are built can encode bias.
The reason for concern is that commercial decision-makers have no reliable way of detecting biased practices embedded in their models. It is therefore critical to figure out what biases are present in the source data. In addition, human-controlled safeguards must be put in place as a check on actions taken from machine learning predictions.
Biased predictions lead to biased decisions, and biased decisions lead to biased actions, which businesses then build on. This starts a cycle that compounds with every prediction. The sooner bias is identified and eradicated, the faster the risk can be managed and businesses can expand into previously untapped markets. Those who do not address bias now risk exposing themselves to a slew of future unknowns in penalties, risks, and lost revenue.
The cost of AI bias in the real world
Machine learning is employed in a range of applications that affect the public. The historical data behind these programs may be biased, and relying on biased data in machine learning models reinforces that bias. Recognizing such bias, however, is the first step toward correcting it.
For example, it was revealed that a popular algorithm used by many large US-based health care systems to screen patients for high-risk care management intervention programs discriminated against Black patients because it was trained on data about the cost of treating patients. The model did not account for racial disparities in access to healthcare, which lead to lower spending on Black patients than on similarly diagnosed white patients. Although cost can seem a reasonable proxy for health need, it is a biased one, and that choice built bias into the algorithm.
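The proxy problem can be made concrete with a toy example (the patients, numbers, and thresholds below are invented for illustration, not drawn from the study): two groups have identical health need, but one group's recorded spending is lower, so screening on cost silently excludes it.

```python
# Hypothetical patients: (group, true_health_need, observed_cost).
# Group "B" has the same need as group "A" but lower recorded spending,
# mirroring reduced access to care.
patients = [
    ("A", 9, 9000), ("A", 7, 7000), ("A", 4, 4000),
    ("B", 9, 6000), ("B", 7, 4500), ("B", 4, 2500),
]

def select_by_cost(patients, cost_threshold):
    """Screen for high-risk care using cost as a proxy for need."""
    return [p for p in patients if p[2] >= cost_threshold]

def select_by_need(patients, need_threshold):
    """Screen directly on health need -- the target the proxy stands in for."""
    return [p for p in patients if p[1] >= need_threshold]

picked_by_cost = [p[0] for p in select_by_cost(patients, 6500)]
picked_by_need = [p[0] for p in select_by_need(patients, 7)]
# Cost-based screening selects only group A; need-based screening
# selects the equally sick members of both groups.
```

The bias here is not in the arithmetic but in the choice of target variable, which is why auditing source data and proxies matters more than auditing the model's math.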
Is there a way to overcome AI bias in models? People should remain in charge of deciding whether to take real-world action in response to a machine learning prediction. People must be able to understand AI and why it makes certain decisions and predictions, so explainability and transparency are essential. By surfacing the reasoning and the factors influencing ML predictions, algorithmic biases can be exposed and decision-making adjusted before they incur costly penalties or harsh social media backlash.
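For a simple linear model, one hedged sketch of such explainability is to decompose the score into per-feature contributions (the feature names and weights below are hypothetical), so a reviewer can see which input pushed the prediction and challenge it if the driver looks like a biased proxy:

```python
# Hypothetical linear scoring model; names and weights are illustrative,
# not taken from any real system.
WEIGHTS = {"income": 0.6, "debt_ratio": -1.1, "years_employed": 0.3}

def explain(features):
    """Break a linear score into per-feature contributions so a human
    can see which inputs drove the prediction."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

score, parts = explain({"income": 2.0, "debt_ratio": 1.5, "years_employed": 4.0})
# The most negative contribution identifies the feature pulling the score down.
top_negative_factor = min(parts, key=parts.get)
```

This kind of attribution is what lets a human reviewer, rather than the model, make the final call described above; for non-linear models the same idea is approximated by more elaborate attribution techniques.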
Explainability and transparency should be a priority for businesses and technologists when it comes to AI.
Regulation and guidance from lawmakers on mitigating biased AI practices is limited but growing. The UK government recently released an Ethics, Transparency, and Accountability Framework for Automated Decision-Making to provide more detailed instructions on using artificial intelligence responsibly in government. This seven-point approach can help government agencies develop algorithmic decision-making systems that are sustainable, safe, and ethical.
Humans must grasp how and why AI bias leads to particular outcomes and what this entails for everyone in order to fully harness the power of automation and create equitable change.
Prangya Pandab is an Associate Editor with OnDot Media. She is a seasoned journalist with almost seven years of experience in the business news sector. Before joining ODM, she was a journalist with CNBC-TV18 for four years. She also had a brief stint with an infrastructure finance company working for their communications and branding vertical.