Are Regulations On AI Enough To Avoid Bias?

By Meeta Ramnani - May 21, 2019 4 Mins Read

As governments worldwide introduce bills and publish guidelines on the development and use of AI technology, experts argue that top tech companies are influencing these guidelines to put economic profit above ethics.

AI is a tool that humanity is wielding without much thought to the consequences. In the absence of a code of ethics, corporate transparency, laws, government accountability, and the capacity to monitor its use, experts have begun voicing concerns about AI being used in weaponry and even in commercial applications that can cause severe and irreversible harm.

However, as regulations on the use of AI begin to emerge, experts have observed that the foundation of these rules is flawed. In April this year, the European Commission published its ‘Ethics guidelines for trustworthy AI.’ The document is supposed to be a ‘solid foundation based on EU values,’ but one of the 52 experts who participated in drafting the guidelines revealed that this foundation is skewed towards the economic growth of the tech industry.

Some of the experts involved in drafting the guidelines came from the tech industry or were aligned with its interests. Their influence was such that when the list of ‘prohibited uses of AI’ was being drafted, these industry allies prevented any ‘red lines’ from appearing around uses of AI. As a result, the current guidelines cannot be called as robust or influential as the GDPR. The trade group Digital Europe, which had representatives from Facebook and Apple, is said to have driven this influence.

Microsoft, too, appeared to approve of the removal of ‘red lines.’ In public comments on the draft, Cornelia Kutterer, Senior Director, EU Government Affairs, Microsoft, wrote that the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’”

In the absence of regulation, the term ‘AI’ is now being used without any sort of control. According to a survey report from London VC firm MMC, which studied 2,830 AI start-ups across 13 EU countries, 40% of European start-ups classified as AI companies do not use AI technology in a way that is “material” to their businesses.

After the failure of Google’s Advanced Technology External Advisory Council (ATEAC) – a committee meant to guide the ethical development of AI – it is clear that self-regulation will not work in this space. Experts have also criticized the National Science Foundation’s research program on ‘Fairness in Artificial Intelligence,’ as it is co-funded by Amazon. Though the tech giant only helps allocate the grants and does not participate in the peer-review process, experts point to NSF documents that allow the company to ask recipients for updates on their research and grant it a royalty-free license to the intellectual property developed.

Experts believe that tech companies are trying to steer legislators toward AI rules that favor them. At the same time, there is an expectation that policymakers will stand up for communities and consumers and recognize the need for regulation. Experts also point to cases where employees at technology companies have flagged potential dangers in ML development and circulated petitions, but without significant results.

All eyes are now on a bill introduced in the US – the Algorithmic Accountability Act – which would require companies to assess whether their AI systems and training data have built-in biases that could result in discrimination. An expert who participated in discussions during the bill’s drafting recalled that, while talking with lawmakers about racial disparities in face analysis algorithms, it emerged that a few of them had been briefed by tech companies on the positive impacts of AI on society.

Google’s parent company, Alphabet, reportedly spent $22 million on lobbying last year. Earlier this year, Google issued a white paper arguing that self-regulation will be sufficient and that the hazards of AI can be avoided through it alone.

Experts recommend a system that both harnesses the opportunities AI creates across areas like safety, transportation, criminal justice, labor, medicine, and national security, and confronts ethical challenges like the potential for social bias and low transparency. AI has the power to reduce social and economic inequality, but left unregulated, it can also endanger democracy and even capitalism itself.


Meeta Ramnani

Meeta Ramnani is the Senior Editor with OnDot Media. She writes about technologies including AI, IoT, Cloud, Big Data, and Blockchain across various industries, with a focus on Digital Transformation. An avid bike rider, Meeta is a postgraduate of the Indian Institute of Journalism and New Media (IIJNM), Bangalore, where she specialized in Business Journalism. She has four years of experience in mainstream print media, where she worked as a correspondent with The Times Group and Sakal Media Group in Pune.
