By Megana Natarajan - August 17, 2020
CIOs believe that artificial intelligence will become self-explanatory, and that interpretability will boost research on machine intelligence
IT leaders acknowledge that machine learning is the “it” thing in the enterprise world right now. Organizations have eagerly adopted digital transformation, and as a result, automation of tasks has increased significantly. The coverage and volume of available data have grown, and ML has tackled tasks of higher complexity while achieving much better accuracy.
CIOs point out that people tend to mix up machine learning with the broader umbrella of artificial intelligence. ML has its own set of liabilities. It works by feeding real-world historical data into algorithms to train models. The trained models are then given the latest data and produce relevant results based on the historical data used to train them.
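As a minimal sketch of that train-then-predict workflow (scikit-learn and a synthetic dataset are used here purely for illustration; the article does not prescribe any particular library or data):

```python
# Minimal train-then-predict sketch (illustrative only).
# The synthetic dataset stands in for real-world historical data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# "Historical" data used to train the model.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)        # learn patterns from the historical data

# "Latest" data: the model produces predictions based on what it learned.
predictions = model.predict(X_new)
print(predictions[:5])
```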
IT leaders say that ML models have been used in the medical industry to draw insights from CT scans and X-rays. This has proven highly beneficial in the current pandemic situation.
Explainable artificial intelligence
CIOs use the terms interpretable and explainable interchangeably with respect to AI. They define interpretability as the degree to which an employee or human can understand the significance of a decision, or the level to which one can predict an ML model’s possible result. The more interpretable a model is, the easier it is to comprehend why specific decisions or predictions were made.
Real task: application-level evaluation
CIOs call for explaining the product and having it tested by the client or end user. This stems from the assumption that the baseline for an ML model’s explanation should be how a human would explain the same decision.
Simple task: human-level evaluation
IT leaders describe this as a less complex version of application-level evaluation. Such tests are carried out by lay users rather than domain experts. This method is cheaper and allows for larger sample sizes. The end user can be presented with different explanations, and the final decision rests with the user.
Proxy task: function-level evaluation
CIOs point out that this level of testing does not involve humans. Such evaluation works best when the class of models involved has already been evaluated by humans, so that a proxy metric can stand in for explanation quality.
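As an illustration of such a human-free proxy evaluation, the sketch below scores models by decision-tree depth; using tree depth as a stand-in for how easy a model is to explain is an assumption made here for illustration, not something the article prescribes.

```python
# Sketch: a function-level (proxy) evaluation with no humans in the loop.
# Tree depth serves as an assumed proxy for interpretability.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Shallower trees act as a proxy for "easier to explain".
for max_depth in (3, 5, None):
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, y)
    print(f"max_depth={max_depth}: actual depth={tree.get_depth()}, "
          f"training accuracy={tree.score(X, y):.2f}")
```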
IT leaders acknowledge that most interpretability methods are designed for tabular data; separate methods exist for text and image data.
CIOs say that while full global interpretability is unachievable, there is potential to understand a few models at the modular level. Global explanations are approximated by taking a range of instances and treating that group as if it were the complete dataset. IT leaders point out that individual explanation measures can be computed for every instance and then aggregated, or listed, for the complete group.
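A minimal sketch of that aggregation idea, assuming the shap library and a tree-based model (neither is named in the article): per-instance explanations are computed over a sample of data and then averaged into a global, modular-level view.

```python
# Sketch: approximate a global explanation by aggregating per-instance
# explanations over a sample of data (the shap library is an assumption;
# the article does not name a specific tool).
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Train a model on illustrative data.
X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Take a sample of instances and treat it as if it were the whole dataset.
sample = X[:100]

# Per-instance explanations: one SHAP value per feature per instance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(sample)   # shape: (n_instances, n_features)

# Aggregate the individual measures into a global, modular-level view.
global_importance = np.abs(shap_values).mean(axis=0)
for i, score in enumerate(global_importance):
    print(f"feature_{i}: {score:.3f}")
```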
Megana Natarajan is a Global News Correspondent with OnDOt Media. She has experience in content creation and has previously created content for agriculture, travel, fashion, energy and markets. She has 3.9 years’ experience as a SAP consultant and is an Engineering graduate.