The squeaky wheel gets the grease: people tend to address the loudest problem at any given time. But that isn’t the most efficient way to handle a situation. The ethical issues of AI must be addressed today, alongside security concerns.
As artificial intelligence (AI) programs grow more powerful and widespread, organizations that use them are under increasing pressure to build ethical guidelines into the development of AI software. The question is whether ethical AI will become a real priority for businesses, or whether these critical principles will be seen as another roadblock to rapid development and implementation.
The EU General Data Protection Regulation (GDPR) could serve as a cautionary tale. GDPR was enacted with noble intentions and heralded as a big step toward better, more consistent privacy regulation, but it quickly became an albatross for businesses attempting to comply with it. Companies frequently viewed GDPR and the privacy regulations that followed as simply adding to their workload, preventing them from focusing on more important tasks. Organizations that try to address each new regulation in a silo end up adding significant overhead and lose ground to competitors in agility and cost-effectiveness.
Could a focus on ethics in AI follow the same path? Or should businesses be aware of the hazards, as well as their responsibilities, of deploying sophisticated AI applications without first addressing ethical concerns? Is there a better way to handle yet another dimension of quality without adding to the workload?
Addressing AI bias
AI programs are unquestionably intelligent, but they are still programs, and they are only as intelligent as the thought and programming that went into them. Their ability to digest information and form judgments on their own adds layers to the programming that more traditional computing systems, which only need to account for obvious issues, don’t require.
The problem is exacerbated by the fact that AI systems aren’t very adept at explaining how they arrived at a judgment. Whether AI software is detecting disease or simply recommending a restaurant, its “thought” processes remain inscrutable. And this adds to the upfront ethical programming load.
Privacy and ethics together
Continued advancements in AI could have far-reaching implications. According to Algorithmia’s third annual survey, 2021 Enterprise Trends in Machine Learning, 76 percent of companies are prioritizing AI and machine learning in their spending plans.
Along with the ethical questions about AI’s role in decision-making, the issue of privacy is unavoidable. Should an AI that scans social media be able to alert authorities if it detects a pattern suggesting suicide risk? Given the ethical and legal ramifications, it’s only natural that, as businesses consider how to deal with ethics, privacy and ethics are incorporated into the same security process. They should not be treated as independent concerns.
As these and other systems progress, new AI ethics guidelines will inevitably emerge. This will add to the workload for teams attempting to bring new capabilities or products to market, but it also poses challenges that need to be addressed.
How well AI ethics policies are integrated with existing programs will likely determine how successful they are. Businesses’ experience with GDPR can be instructive: some companies that initially regarded the regulation largely as a burden have gained considerable maturity by incorporating it into their security operations and addressing privacy and security as one discipline.
The way forward
Ultimately, programmers should bake in rules and guidelines for how to treat different types of data differently, as well as how to ensure that data segregation does not occur. Integrating these rules into an organization’s entire operations and software development will require its executives to make ethics a priority.
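The idea of baking in rules for handling different types of data can be made concrete. Below is a minimal sketch, not a production implementation; the category names, handling rules, and function names are all hypothetical illustrations of a data-classification policy enforced in code:

```python
# Hypothetical sketch: tag each record with a sensitivity category and
# enforce that category's handling rule before the data is used.
from dataclasses import dataclass

# Assumed categories and rules -- a real policy would be defined by the
# organization's ethics and security teams, not hard-coded like this.
HANDLING_RULES = {
    "public":    {"mask": False, "audit": False},
    "personal":  {"mask": True,  "audit": True},   # e.g. names, emails
    "sensitive": {"mask": True,  "audit": True},   # e.g. health data
}

@dataclass
class Record:
    value: str
    category: str  # must match a key in HANDLING_RULES

def prepare_record(record: Record) -> str:
    """Apply the handling rule for the record's category, or refuse it."""
    rule = HANDLING_RULES.get(record.category)
    if rule is None:
        # Unclassified data is rejected rather than silently passed through.
        raise ValueError(f"unclassified data rejected: {record.category!r}")
    if rule["audit"]:
        print(f"audit: processed a {record.category} record")
    return "***MASKED***" if rule["mask"] else record.value

print(prepare_record(Record("jane@example.com", "personal")))
print(prepare_record(Record("forecast: sunny", "public")))
```

The design point is that the policy lives in one table the whole pipeline shares, so a new regulation means editing the rules, not hunting through every program that touches the data.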
Enterprises should handle ethics and security in tandem, employing the same processes and technologies that they use for security to address ethics. This will ensure that the software development lifecycle is managed effectively.
Given the history of game-changing technologies whose influence was disregarded until legal difficulties arose, the real risk of regret may well lie in failing to take AI ethics seriously and act proactively before it becomes a critical priority.