How Enterprises can Keep Machine Learning Models on Track with Crucial Guard Rails

By Prangya Pandab - December 30, 2021

As deep neural networks become more common in machine learning, businesses are becoming more reliant on a technology that experts don’t completely comprehend. To create a safe and predictable operating environment, guard rails are essential.

Over the next several years, AI and machine learning (ML) will undoubtedly play an increasingly important role in the development of enterprise technology and the support of a wide range of corporate projects.

According to the latest issue of IDC’s Worldwide Semiannual Artificial Intelligence Tracker, global AI market revenues, comprising hardware, software, and services, are estimated to hit USD 341.8 billion this year and rise at an annual pace of 18.8% to cross the USD 500 billion mark by 2024.

Despite the optimism, the deep neural network (DNN) models driving the boom in ML adoption have a secret: researchers do not fully understand how they work. IT leaders who deploy a technology without first knowing how it works risk a range of negative consequences. The systems can be dangerous because they are unpredictable and biased, and because they produce outcomes that their human operators find difficult to interpret. Adversaries can also exploit the idiosyncrasies of these systems.

When it comes to mission-critical applications, CIOs and their teams must weigh the better results ML can deliver against the risk of disastrous outcomes.

Some machine learning researchers are working toward a deeper understanding of DNNs in the long run, but what should practitioners do in the meantime, especially when harmful outputs can put lives and property at risk?

Guard rails for machine learning

Here are some approaches for improving the safety and predictability of machine learning systems:

Determine the safe range of model outputs

After determining the safe output range, IT leaders can work backwards through the model to identify a set of safe inputs whose outputs will always fall inside the desired envelope. Researchers have demonstrated this analysis for specific types of DNN-based models.
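The working-backwards idea can be illustrated with a minimal sketch. The toy model, search bounds, and function names below are purely hypothetical; real DNN reachability analysis relies on specialized verification tools, not brute-force search.

```python
# Illustrative sketch only: given a safe OUTPUT range, search the input
# domain for the inputs whose outputs stay inside that envelope.
# The "model" here is a hypothetical stand-in for a trained model.

def model(x: float) -> float:
    # Placeholder for a trained model's scalar output.
    return 2.0 * x + 1.0

def safe_input_range(out_lo, out_hi, search_lo=-100.0, search_hi=100.0, steps=100_000):
    """Grid-search the input domain for points whose outputs fall in [out_lo, out_hi]."""
    grid = [search_lo + i * (search_hi - search_lo) / steps for i in range(steps + 1)]
    safe = [x for x in grid if out_lo <= model(x) <= out_hi]
    return (min(safe), max(safe)) if safe else None

lo, hi = safe_input_range(0.0, 11.0)
# For f(x) = 2x + 1, outputs in [0, 11] correspond to inputs in [-0.5, 5.0]
```

The safe input set found this way becomes the envelope that a guard rail enforces at inference time.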

Install software guard rails in front of the model

Once the safe range of inputs has been determined, a software guard rail can be installed in front of the model to ensure that it is never provided inputs that will lead it into an unsafe situation. The guard rails effectively keep the ML system under control. Businesses will know that the outputs are always safe, even if they don’t know how the model arrives at a certain outcome.
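A guard rail of this kind can be a thin wrapper that validates inputs before they ever reach the model. The class, bounds, and toy model below are illustrative assumptions, not a specific product or the article's own implementation:

```python
# Minimal sketch of a software guard rail placed in front of a model:
# inputs outside a pre-computed safe envelope are rejected before the
# model is ever invoked.

class InputGuardRail:
    def __init__(self, model, safe_bounds):
        self.model = model
        self.safe_bounds = safe_bounds  # per-feature (low, high) pairs

    def predict(self, features):
        # Reject any feature that falls outside its safe range.
        for value, (low, high) in zip(features, self.safe_bounds):
            if not (low <= value <= high):
                raise ValueError(f"input {value} outside safe range [{low}, {high}]")
        return self.model(features)

# Usage with a toy model:
toy_model = lambda features: sum(features)
guarded = InputGuardRail(toy_model, [(0, 10), (0, 10)])
result = guarded.predict([3, 4])   # passes the guard
# guarded.predict([3, 42]) would raise ValueError before the model runs
```

The key property is that the check runs outside the opaque model, so its behavior is fully auditable even when the model's internals are not.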

Focus on models that provide predictable outcomes

In addition to keeping outputs within a safe range, it is critical to know that a model's results do not swing wildly from one region of the output space to another. For certain classes of DNNs, it is possible to guarantee that a small change in the input produces a correspondingly small change in the output, rather than an unpredictable jump to a completely different region of the output range.
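This "small input change, small output change" property can be probed empirically. The sketch below, using an assumed toy model and an arbitrary perturbation size, estimates the worst observed output-to-input change ratio (a rough empirical Lipschitz check); formal guarantees require analysis of the model itself, not sampling.

```python
# Sketch of an empirical smoothness check: perturb inputs slightly and
# measure how much the output moves relative to the perturbation.
import random

def max_output_jump(model, inputs, eps=1e-3, trials=100):
    """Largest observed |f(x+d) - f(x)| / |d| over random small perturbations."""
    worst = 0.0
    for _ in range(trials):
        x = random.choice(inputs)
        delta = random.uniform(-eps, eps)
        if delta == 0.0:
            continue
        jump = abs(model(x + delta) - model(x)) / abs(delta)
        worst = max(worst, jump)
    return worst

smooth_model = lambda x: 3.0 * x                    # slope 3 everywhere
probe_points = [i / 10 for i in range(-50, 51)]     # inputs in [-5, 5]
ratio = max_output_jump(smooth_model, probe_points)
# For this linear model the ratio stays near 3, so a bound of, say, 5 holds
```

A model that fails such a check near the boundaries of its safe envelope is a candidate for tighter guard rails or retraining.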

Train models to be safe and predictable

Researchers are working on ways to change the training of DNNs in such a way that they can be subjected to the aforementioned analysis without compromising their pattern recognition abilities.

Maintain agility

In this fast-paced environment, it’s critical to incorporate guardrails into the ML architecture while maintaining the flexibility to evolve and improve them as new techniques become available.

The task at hand for IT leaders is to ensure the ML models they develop and deploy are under control. Establishing guard rails is an important interim step while a better understanding of how DNNs work is attained.



Prangya Pandab

Prangya Pandab is an Associate Editor with OnDot Media. She is a seasoned journalist with almost seven years of experience in the business news sector. Before joining ODM, she was a journalist with CNBC-TV18 for four years. She also had a brief stint with an infrastructure finance company working for their communications and branding vertical.
