More organizations will be hiring AI behavior forensic experts to combat business reputation risk by 2023, a Gartner study claims.
Users’ trust in advanced technologies – including automation, artificial intelligence, and machine learning tools – is plunging even as usage grows. Moreover, incidents of misuse and irresponsible data breaches have been rising during the pandemic.
Despite regulatory scrutiny aimed at curbing such occurrences, Gartner projects that nearly 75% of organizations will hire AI behavior forensic experts by 2023. Business leaders are also planning for privacy and customer trust experts in their organizations who can help reduce reputation and brand risk.
Biases rooted in gender, location, and data structure pose risks when training AI models. Moreover, complex algorithms such as deep learning incorporate many highly variable and implicit interactions into their estimates, making them tricky to interpret.
Given the current market scenario, industry experts believe that new solutions and stronger skills are needed to help companies reduce corporate brand risk, identify potential sources of bias, and build more trust in their AI models.
As a result, several chief data officers (CDOs) and data and analytics leaders across organizations are focused on hiring more ML forensics and ethics investigators. Technology and financial firms are increasingly testing and deploying novel combinations of risk management and AI governance tools to tackle this.
For instance, prominent organizations such as Facebook, Google, Bank of America, and NASA have been appointing AI behavior forensic specialists for years. These professionals focus primarily on exposing undesired bias in AI models before deployment.
The number of such experts is expected to rise sharply over the next five years, especially given the steep rise in business digital transformation journeys. Service providers, in turn, plan to launch new offerings to ensure ML models are explainable and meet specific standards and audit requirements.
Moreover, many organizations have introduced dedicated AI explainability solutions that help their clients and customers identify bias within algorithms and fix it. Open-source technologies such as Local Interpretable Model-Agnostic Explanations (LIME) can help detect unintended discrimination before models are deployed.
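LIME’s core idea – fit a simple, interpretable surrogate to a black-box model’s behavior in the neighborhood of one prediction – can be sketched in a few dozen lines. The snippet below is a minimal illustration of that recipe, not the real `lime` package: the black-box scorer, the income/age features, the kernel width, and the sample count are all invented for the example.

```python
import math
import random

# Hypothetical black-box scorer standing in for an opaque classifier:
# approves when a weighted sum of income and age crosses a threshold.
def black_box(income, age):
    return 1 if 0.7 * income + 0.3 * age > 50 else 0

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def lime_style_explanation(instance, n_samples=500, seed=0):
    """Perturb the instance, query the black box, and fit a
    proximity-weighted linear surrogate (LIME's core recipe)."""
    rng = random.Random(seed)
    x0, y0 = instance
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for _ in range(n_samples):
        x, y = x0 + rng.gauss(0, 10), y0 + rng.gauss(0, 10)
        # Exponential kernel: nearby perturbations matter most.
        w = math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * 25.0 ** 2))
        phi = (1.0, x, y)              # intercept, income, age
        label = black_box(x, y)
        # Accumulate the weighted normal equations A @ coef = b.
        for i in range(3):
            for j in range(3):
                A[i][j] += w * phi[i] * phi[j]
            b[i] += w * phi[i] * label
    return solve3(A, b)                # [intercept, coef_income, coef_age]

intercept, coef_income, coef_age = lime_style_explanation((50, 50))
print("local weight on income:", coef_income)
print("local weight on age:   ", coef_age)
```

Because the hidden rule weighs income more heavily than age, the surrogate’s income coefficient comes out larger – exactly the kind of local signal a forensic investigator would inspect before deployment.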
These advanced tools can help ML investigators examine the influence of sensitive variables – including age, gender, and race – on other variables in the data. They can also measure how strongly variables correlate with one another and determine whether those correlations are skewing the model’s output.
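A minimal version of that correlation check can be done with nothing but the Pearson coefficient. In the sketch below, the audit table, the column names, and the figures are all hypothetical; a real investigation would run this across every sensitive attribute and candidate proxy.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch
    so the example has no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical audit table: a sensitive attribute (age) alongside a
# possible proxy variable and the model's output score.
age           = [23, 31, 45, 52, 60, 28, 39, 47]
years_at_addr = [1, 3, 10, 15, 22, 2, 8, 12]
model_score   = [0.20, 0.30, 0.60, 0.70, 0.90, 0.25, 0.50, 0.65]

print("age vs. years_at_addr:", round(pearson(age, years_at_addr), 2))
print("age vs. model_score:  ", round(pearson(age, model_score), 2))
```

A strong correlation between age and the model’s score – or between age and a proxy such as address tenure – would flag those variables for closer forensic review, even if age itself was excluded from training.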
In this context, Jim Hare, Research VP at Gartner, explained in a company blog post: “Data and analytics leaders must also establish accountability for determining and implementing the levels of trust and transparency of data, algorithms, and output for each use case. It is necessary that they include an assessment of AI explainability features when assessing analytics, business intelligence, data science, and ML platforms.”