By Meeta Ramnani - April 09, 2019
As we understand more about AI, governments and companies need to decide how to regulate the powerful technology.
AI must be transparent, accountable and trustworthy, says the European Commission as it puts its ethical guidelines for AI to the test. The Commission set up the High-Level Expert Group on AI in June last year, asking its 52 independent experts to draft ethics guidelines for AI. As the next step, a pilot project has now been launched to test those guidelines, with results expected by early 2020.
Building on this pilot, the Commission will evaluate the outcome and propose next steps toward global standards for ethical and responsible AI. The European Union's initiative has started a global debate about whether companies should put ethical concerns before business interests. And with governments now involved in the question of ethics in AI, another question arises: will this intervention hamper innovation?
EC’s digital chief, Andrus Ansip, said in a statement, “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.” While AI can help detect cybersecurity threats and improve financial risk management and healthcare, it can also be used to support questionable business practices and authoritarian governments.
The importance of ethics in AI cannot be ignored, as AI algorithms and datasets have the ability not just to reflect or reduce unfair biases, but also to reinforce them. Sundar Pichai, CEO of Google, wrote in a blog post, “We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.” Google has been at the center of a widening public debate over automated systems and how they might disadvantage vulnerable groups or lead to large-scale job losses.
While the search engine giant takes a strong stand on AI ethics, ironically, the Artificial Intelligence Ethics Board the company formed was dissolved within ten days following employee criticism. The short-lived Advanced Technology External Advisory Council (ATEAC) was meant to review the ethics of Google’s machine learning and AI products. Earlier, technology companies including Microsoft, Facebook, and Axon laid out ethical principles to guide their work on AI and set up advisory boards on the issue.
But experts question the credibility of these boards and the power of their members. Last week, Facebook was sued by the US Department of Housing and Urban Development over the way it let advertisers target ads by race, gender, and religion. Though the company announced that it would disable this ability, a study shows that Facebook’s ad-delivery algorithm still produces the same discrimination.
The study, conducted by Northeastern University, found that slight variations in the available budget, headline, text, or image had significant impacts on the audience an ad reached. “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come,” said Sundar Pichai, CEO of Google.
While the issue appears to stem from the way AI learns stereotypes, research points to possible remedies. A joint study by Yale University and the Indian Institute of Technology suggests that it is possible to constrain algorithms in their design to minimize discriminatory behavior, though at the cost of minor losses in platform revenue.
Sander Klous, Data & Analytics Leader, the Netherlands, KPMG International, suggests in his blog on KPMG Insights that auditing AI is not all that different from auditing financial statements, as the principles remain the same. “The same principles and good practices apply – such as the three lines of defense and the impact of potential mistakes (materiality). And just as with financial statements, public interest should be the highest priority of the auditor as well as a far-reaching willingness to be transparent and to cooperate closely with national and international regulatory bodies. As always, the auditor is accountable to the general public, as well as to regulators and the corporate sector.”
When forming boards to oversee AI ethics, experts believe the group should be constituted appropriately, given clear and robust institutional powers, and held to transparent accountability standards. Tech companies should also be willing to share the criteria they use to select board members. The boards should include a core team of actual ethicists and actual domain experts, and should meet far more regularly than the four times a year Google had proposed for its council.
As AI technology matures and becomes woven into everyday life, the decisions made in this phase are what will define its future.
Meeta Ramnani is the Senior Editor with OnDot Media. She writes about technologies including AI, IoT, Cloud, Big Data, and Blockchain across various industries, with a focus on Digital Transformation. An avid bike rider, Meeta is a postgraduate of the Indian Institute of Journalism and New Media (IIJNM), Bangalore, where she specialized in Business Journalism. She has four years of experience in mainstream print media, where she worked as a correspondent with The Times Group and Sakal Media Group in Pune.