With AI becoming the centerpiece of many enterprises' future technology initiatives, IT leaders must ensure that these projects scale sustainably and remain free from bias and ethical issues.
AI and machine learning initiatives have humbled even the most sophisticated organizations: teams of skilled data scientists and engineers have built models that failed spectacularly in real-world applications.
Whether it is demand forecasting models upended by COVID-19 uncertainty or HR software that discriminates against job seekers, problems with AI and ML models have become as common as they are dangerous when they are not monitored or caught early. In recognition of this fact, many large organizations now acknowledge their use of AI as a risk factor. Still, many are confidently advancing their initiatives.
Today, almost all industries rely on ML-powered systems to increase productivity and profitability and, in some cases, save lives. Per a 2021 IDC report, the "Worldwide Artificial Intelligence Spending Guide," global enterprise spending on AI will top US$204 billion by 2025. With so much emphasis on AI and ML technologies in shaping the future, enterprises need to come up with strategies to balance the power and potential of AI. In addition, they need to identify ways to maximize the positive outcomes for customers and society at large.
Here are a few things enterprises can do to ensure the sustainability of AI initiatives while scaling them:
Build teams that represent the diversity of customers and society at large
Often, AI and machine learning models leave out the hard truths of individuals' lived experience. Because ML models are trained on historical data, they can amplify any discrimination or unequal power structures present in that data.
Even though most teams firmly believe that this is a serious concern needing immediate attention, most data scientists find it difficult to detect and mitigate every possible fairness issue and retrain their ML models accordingly. This is not because data scientists have any hidden agenda, but because of blind spots formed by a lack of diversity on the team.
To address this challenge, enterprises should focus on diversifying the workforce. They should set explicit hiring goals, with accountability for achieving them. Finally, they should report progress on these efforts transparently to the board or the public.
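Even a diverse team benefits from routine, automated fairness checks. The following is a minimal sketch of one common screen, demographic parity: comparing the rate of positive model outcomes across demographic groups. The function names, sample data, and the 0.8 threshold (the widely cited "four-fifths rule") are illustrative assumptions, not a complete fairness audit.

```python
def selection_rate(predictions, groups, group):
    """Share of positive predictions received by one demographic group."""
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds) if preds else 0.0

def demographic_parity_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 are a common red flag for disparate impact."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0

# Hypothetical hiring model: selects 60% of group "a" but only 20% of "b".
preds  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(round(demographic_parity_ratio(preds, groups), 2))  # 0.33
```

A ratio of 0.33, well below 0.8, would prompt a human review of the model and its training data; a check like this can run on every retraining cycle rather than waiting for a complaint.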
Build a codified ethical and risk governance framework for AI
A siloed, purely technological approach cannot mitigate every risk or make AI ethically responsible on its own. Instead, IT leaders should implement systems that identify both ethical and organizational risk factors across the organization and incentivize staff members to act on them. While technology undoubtedly plays a crucial role in detecting these problems, organizations must empower employees to act on the resulting insights. They should develop an organization-specific plan for operationalizing AI ethics that ensures IT teams adopt procurement frameworks designed with proactive model monitoring and ethics in mind.
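Proactive model monitoring, one piece of such a framework, can start very simply. The sketch below flags when a model's live prediction distribution drifts away from its training-time baseline, a cue for humans to investigate rather than an automated fix. The threshold and the simple mean/standard-deviation comparison are illustrative assumptions; production systems typically use richer statistics such as PSI or KS tests.

```python
import statistics

def drift_alert(baseline, live, threshold=0.25):
    """Return True when the mean of live predictions shifts by more than
    `threshold` baseline standard deviations from the training baseline."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - base_mean) / base_std
    return shift > threshold

# Hypothetical risk scores: training-time baseline vs. recent production traffic.
baseline_scores = [0.4, 0.5, 0.45, 0.55, 0.5, 0.6, 0.48, 0.52]
live_scores     = [0.7, 0.75, 0.8, 0.72, 0.78, 0.74, 0.77, 0.73]
print(drift_alert(baseline_scores, live_scores))  # True
```

The governance value is in what happens after the alert fires: a named owner reviews the drift, decides whether retraining or rollback is needed, and records the decision.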
Grant AI practitioners access to protected data wherever required
Often, in their attempts to root out bias or ethics issues, data scientists and machine learning engineers run into the problem of not having access to protected data. Hence, while advancing toward a responsible AI framework, enterprises should modernize policies around access to protected data wherever AI practitioners deem it necessary. This will equip them to utilize that data across the full ML lifecycle, which is critical to delivering accountability and to ensuring the outputs generated by ML models are not biased or discriminatory.
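The point can be made concrete: many fairness audits, such as comparing false-positive rates across groups, are simply impossible without the protected attribute. The sketch below shows a per-group false-positive-rate check; all names and data are hypothetical, and real policies would pair this access with strict logging and minimization controls.

```python
def false_positive_rate(y_true, y_pred):
    """Share of true negatives that the model incorrectly flags positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

def fpr_by_group(y_true, y_pred, protected):
    """Per-group audit -- only possible when the protected attribute
    is available to the practitioners running the check."""
    return {
        g: false_positive_rate(
            [t for t, pg in zip(y_true, protected) if pg == g],
            [p for p, pg in zip(y_pred, protected) if pg == g],
        )
        for g in set(protected)
    }

y_true    = [0, 0, 1, 0, 0, 0, 1, 0]
y_pred    = [1, 0, 1, 0, 1, 1, 1, 0]
protected = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(fpr_by_group(y_true, y_pred, protected))  # group "y" has a higher FPR
```

Without the `protected` column, the two groups' error rates collapse into one aggregate number and the disparity becomes invisible, which is exactly the accountability gap the policy change is meant to close.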