Gartner has predicted that by 2023, 75% of large organizations will hire AI behavior forensic experts to reduce reputation risk.

Users’ trust in AI and ML solutions is slipping amid increasing incidents of data breaches and misuse. Despite growing regulatory scrutiny to combat these issues, Gartner has predicted that by 2023, 75% of large organizations will hire ‘AI behavior forensic’ experts, as well as privacy and customer trust specialists, to help reduce brand and reputation risk.


Bias based on gender, race, location, and age, and now on the structure of data, has always posed risks in training AI models. In addition, opaque algorithms such as deep learning sometimes incorporate many implicit and highly variable interactions into their predictions, making them difficult to interpret.

Experts believe that new tools and higher skills are required to help organizations identify potential sources of bias, reduce corporate brand and reputation risk, and build more trust in the use of AI models. An increasing number of Chief Data Officers (CDOs) and data and analytics leaders are hiring ML forensic and ethics investigators.

Sectors like technology and finance are testing and deploying various combinations of AI governance and risk management tools to manage reputation and security risks. Tech companies such as Facebook and Google, and other organizations like Bank of America, MassMutual, and NASA, have already appointed or are hiring AI behavior forensic specialists. These experts primarily focus on uncovering undesired bias in AI models before deployment.


They validate models during the development phase and also monitor them after they are released, as unexpected bias can be introduced when real-world data diverges from the training data. The number of these experts is expected to rise sharply over the next five years.
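
For illustration, one simple way to catch that kind of divergence is to compare feature distributions in live traffic against the training set. The sketch below uses a two-sample Kolmogorov-Smirnov test; the feature names and alert threshold are hypothetical, not drawn from any of the organizations mentioned above.

```python
# Minimal sketch of post-deployment drift monitoring: compare the
# distribution of each feature in recent production data against the
# training data. Feature names and the alert threshold are illustrative.
from scipy.stats import ks_2samp

def detect_drift(train_df, live_df, features, p_threshold=0.01):
    """Flag features whose live distribution diverges from the training data."""
    drifted = []
    for feature in features:
        statistic, p_value = ks_2samp(train_df[feature], live_df[feature])
        if p_value < p_threshold:
            drifted.append((feature, statistic, p_value))
    return drifted

# Example usage (assumes two pandas DataFrames with matching columns):
# alerts = detect_drift(train_df, live_df, ["age", "income", "tenure"])
# for name, stat, p in alerts:
#     print(f"Possible drift in '{name}': KS={stat:.3f}, p={p:.4f}")
```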

Service providers will launch new services to ensure that ML models are explainable and meet specific standards, and to audit and certify models before they are moved into production. Some organizations have also launched dedicated AI explainability tools that help their clients and customers identify and fix bias in their algorithms. Open-source technologies like Local Interpretable Model-Agnostic Explanations (LIME) can also look for unintended discrimination before models are put into production.
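
As a rough sketch of how a tool like LIME is typically applied, the snippet below asks its tabular explainer to attribute a single prediction of a classifier to individual features. The toy dataset, model, and feature names are placeholders, not part of any vendor's offering described above.

```python
# Minimal sketch of using LIME to inspect one prediction of a tabular
# classifier. The dataset, model, and feature names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy training data standing in for, e.g., age, income, and tenure.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["age", "income", "tenure"],
    class_names=["reject", "approve"],
    discretize_continuous=True,
)

# Explain one prediction: which features pushed the model toward its output?
explanation = explainer.explain_instance(
    X_train[0], model.predict_proba, num_features=3
)
print(explanation.as_list())  # [(feature condition, weight), ...]
```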

Such tools help ML investigators examine the influence of sensitive variables like gender, age, and race on the other variables in a dataset. They also measure how strongly variables are correlated with one another and whether they are skewing the model and its outcomes.
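
A bare-bones version of such a correlation check, assuming a pandas DataFrame that holds the input features, a hypothetical sensitive column, and the model's scores, might look like this:

```python
# Minimal sketch of a correlation check between a sensitive attribute and
# the other features / model scores. Column names are hypothetical.
import pandas as pd

def sensitive_correlations(df, sensitive_col, score_col):
    """Correlate an encoded sensitive attribute with features and model scores."""
    encoded = df.copy()
    # Encode the sensitive attribute numerically if it is categorical.
    if encoded[sensitive_col].dtype == object:
        encoded[sensitive_col] = encoded[sensitive_col].astype("category").cat.codes
    corr = encoded.corr(numeric_only=True)[sensitive_col].drop(sensitive_col)
    print(f"Correlation of '{sensitive_col}' with other variables:")
    print(corr.sort_values(key=abs, ascending=False))
    print(f"\nCorrelation with model score '{score_col}': {corr[score_col]:.3f}")

# Example usage:
# sensitive_correlations(scored_df, sensitive_col="gender", score_col="model_score")
```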

Experts believe that CDOs and data analytics leaders must make ethics and governance a critical part of AI initiatives to build a culture of trust, transparency, and responsibility.
