Friday, March 29, 2024

Human Intelligence Can Fix AI Shortcomings

AI is in its relative infancy, and humans are still figuring out how to address issues such as algorithmic bias. The biases demonstrated by AI mirror existing human biases, so the data sets used to train machines must be curated fairly to avoid AI prejudice.

The doomsday scenario of Artificial Intelligence (AI) stealing labour-intensive jobs across industries is becoming popular. AI is increasingly cast as the greatest existential threat to humans, largely due to the growing dependence on robots to perform day-to-day chores. But AI comes with its own set of boons and banes, and people are anxious and perplexed about their future in a completely automated world.

The underlying fear that AI is dangerous because it can do things better than humans is a myth; AI is just an extension of human intelligence. It is humans who implement fixes and changes for the shortcomings of any AI or robotic system. The main concerns relate to factors such as algorithmic prejudice, a lack of sufficient oversight, and wrong decisions ultimately made on partial knowledge.

After repeated incidents, AI bias has become an urgent issue that needs correction. To fix it, the creators of each system need to feed the machine data sets free of human prejudice. Examples include biased facial recognition systems such as Amazon’s Rekognition and the money-lending discrimination cases the Federal Reserve deals with. In AI, diversity is a must-have: experts suggest that perspectives from individuals across diverse segments of the industry should contribute to building the database.
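
As a rough illustration of such a data audit, the minimal Python sketch below compares positive-label rates across demographic groups before training; the "group" and "label" column names are illustrative assumptions, not drawn from any specific system. A large gap between groups is a signal to rebalance or re-collect the data.

    # A minimal pre-training bias audit; column names are hypothetical.
    import pandas as pd

    def audit_label_balance(df, group_col, label_col):
        """Report the positive-label rate per demographic group so that
        large gaps can be flagged before the data set is used for training."""
        return df.groupby(group_col)[label_col].mean().rename("positive_rate")

    # Toy data: group B's positive rate is a third of group A's, which
    # would prompt a review of how the data was collected or labelled.
    data = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "label": [1, 1, 1, 0, 0, 1],
    })
    print(audit_label_balance(data, "group", "label"))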

Testing AI algorithms and ML models before deployment is key to ensuring they are capable of carrying out their designated tasks. One commonly noticed modelling error is “overfitting,” where a model aligns too closely to its training data and fails to generalise, throwing up false positives on new inputs. Wise teaching goals and well-crafted tests, supplied with relevant and diverse examples, are necessary to avoid AI bias.
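
The overfitting symptom described above is straightforward to test for by holding out data the model never sees during training. The sketch below is a minimal illustration using scikit-learn and synthetic data, not a production test suite.

    # Detecting overfitting with a held-out test set (scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An unconstrained decision tree tends to memorise its training data.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    train_acc = model.score(X_train, y_train)
    test_acc = model.score(X_test, y_test)
    print(f"train={train_acc:.2f}  test={test_acc:.2f}")

    # Training accuracy near 1.0 with a much lower test accuracy is the
    # classic overfitting signature; such a model should not be deployed.
    if train_acc - test_acc > 0.10:
        print("Likely overfitting: regularise the model or gather more data.")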

To avoid catastrophic failure, a “safety net” needs to be in place in multiple forms. For example, consider a device set to unlock using facial recognition of its owner. If the camera fails to recognise the owner or misidentifies a face, a PIN on the lock serves as the safety net. AI is a reflection of its creators, so systems and checks must be implemented to ensure that those building the machines are responsible and accountable for any AI bias.
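
The pattern is simple to express in code: try the high-confidence AI path first, and fall back to a deterministic check when confidence is low. In the minimal sketch below, the match_score function, the threshold, and the stored PIN are all illustrative assumptions rather than any real device’s API.

    # The "safety net" pattern: face recognition first, PIN as fallback.
    import hmac

    FACE_THRESHOLD = 0.90   # confidence required to unlock by face alone
    STORED_PIN = "4821"     # a real device would store a salted hash

    def match_score(camera_frame):
        """Stand-in for a real face-matching model; returns confidence."""
        return 0.42  # simulate an uncertain (possibly wrong) match

    def unlock(camera_frame, pin_entry=None):
        if match_score(camera_frame) >= FACE_THRESHOLD:
            return True                        # primary path: face recognised
        if pin_entry is not None:              # safety net: fall back to PIN
            return hmac.compare_digest(pin_entry, STORED_PIN)
        return False                           # fail closed, never fail open

    print(unlock(camera_frame=None))                    # False: face uncertain
    print(unlock(camera_frame=None, pin_entry="4821"))  # True: safety net works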

Experts from leading tech companies suggest that training machine-learning systems must involve the expertise of human professionals, regardless of those experts’ knowledge of AI or ability to code. The data sets authored by engineers need approval from analysts and decision-makers, who should check the tests done by statisticians and look for the safety nets put in place by the engineers. It is the human component that ultimately shapes the entire concept of AI. Therefore, it is time to focus on “machine teaching” rather than “machine learning” to resolve the shortcomings AI currently faces.
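
One way to picture this sign-off chain is as a release gate that blocks training until every human role has approved the data set. The sketch below is a hypothetical illustration; the role names and the data set name are assumptions, not a description of any vendor’s workflow.

    # A data set is released for training only after all roles sign off.
    from dataclasses import dataclass, field

    REQUIRED_ROLES = {"engineer", "statistician", "analyst", "decision_maker"}

    @dataclass
    class DatasetRelease:
        name: str
        approvals: set = field(default_factory=set)

        def approve(self, role):
            if role not in REQUIRED_ROLES:
                raise ValueError(f"unknown role: {role}")
            self.approvals.add(role)

        def ready_for_training(self):
            return self.approvals == REQUIRED_ROLES

    release = DatasetRelease("loan-applications-v2")
    for role in ("engineer", "statistician", "analyst"):
        release.approve(role)
    print(release.ready_for_training())  # False: no decision-maker sign-off yet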

Debjani Chaudhury
Debjani Chaudhury works as an Associate Editor with OnDot Media. In this capacity, she contributes editorial articles for two platforms, focusing on the latest global technology and trends. Debjani is a seasoned content developer with three years of experience in the Fashion, IT, and International Marketing industries. She has represented India in international trade forums such as Hannover Messe, Germany.
