AI and Ethics Part 1: Impact on Privacy
While the word “robot” is now widely recognized thanks to its prominence in STEM fields and pop culture, few know its dark origin. The term originated almost 100 years ago in a Czech play, Karel Čapek’s R.U.R., describing mechanical slaves created by humans who eventually rebel against the human race. To mark the word’s 100th birthday, I will devote all of my February posts to artificial intelligence and its potential benefits and risks.
One of the most prominent issues surrounding AI is the prospect of deep-learning systems eroding privacy. For AI to be successful and impactful, it must draw on immense amounts of data. Privacy concerns grow as companies feed more and more consumer and vendor data into advanced, AI-fuelled algorithms that derive new pieces of sensitive information, unbeknownst to the consumers and employees affected. In effect, AI can build data profiles of individuals that may later be exploited. When it does, “it’s data that has not been provided with [an individual’s] consent or even with knowledge,” said Chantal Bernier, assistant and interim privacy commissioner in the Office of the Privacy Commissioner of Canada from 2008 to 2014.

Another significant issue at the intersection of AI and privacy is surveillance. Surveillance has always been a part of life, recognized by both the public and policymakers; however, technology has changed dramatically in recent decades while regulation has been slow to respond, resulting in a “certain anarchy that is exploited by the most powerful players.” “The rules are all over the place with a lot of this technology because it’s so new, because it changes so quickly,” writes Jon Fasman in his book detailing the lack of regulation in AI. Nearly all data collection is now digital and connected to a single internet, and AI multiplies the possibilities for intelligent data collection and analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. Moreover, much of this data is traded between agents, usually for a fee, exposing massive amounts of private information.
However, while the problem of AI impinging on privacy is very real, there are several potential solutions. The most direct is stronger regulation. A successful example is the EU’s General Data Protection Regulation, or GDPR. The GDPR took effect in 2018 and raised digital privacy expectations worldwide by ushering in new standards for a person’s right to their own information. The EU’s privacy rules are taken seriously because of the potential fines for violations: organizations found out of compliance can face fines of up to the greater of €20 million ($22 million) or 4% of their annual global turnover. The GDPR focuses on how data is stored, used, and protected, requiring companies to keep tight control over how personal data is collected and processed and to align their policies and practices with that standard. The GDPR has been immensely successful, and similar measures could be implemented in the United States to ensure better data privacy.

Additionally, AI can protect privacy just as well as it can infringe upon it. For example, some companies are beginning to use regulated AI that complies with current privacy law to audit and monitor data, checking for data leakage, inappropriate access, and other compromises. Historically, auditing and monitoring have been manual processes, which are slow and often miss leaks. AI can audit far faster and catch privacy leaks far more reliably. Thus, with the right regulation and use, AI can be an instrumental tool in producing a better future for the world.
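To make the GDPR fine structure mentioned above concrete, here is a minimal sketch of the "greater of €20 million or 4% of annual global turnover" rule. The function name and inputs are assumptions for illustration only, not an official formula or legal advice.

```python
# Illustration of the GDPR's maximum administrative fine rule described
# above: the greater of EUR 20 million or 4% of annual global turnover.
# Hypothetical helper; real fines are set case by case by regulators.

def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Return the upper bound of the fine in euros."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# A company with EUR 1 billion turnover: 4% (EUR 40M) exceeds EUR 20M.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0

# A smaller company with EUR 100 million turnover: the EUR 20M floor applies.
print(max_gdpr_fine(100_000_000))  # 20000000.0
```

The point of the rule, as the example shows, is that the cap scales with company size: for large firms the 4% figure dominates, so the penalty cannot be shrugged off as a fixed cost.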
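The automated auditing described above can be sketched in miniature. Real systems use learned models over rich access logs; here a simple statistical threshold over a hypothetical (user, record) log stands in to show the basic idea of flagging anomalous access patterns. The log format, function name, and threshold are all assumptions.

```python
from collections import Counter

# Minimal sketch of automated access-log monitoring: flag any user whose
# number of record accesses is far above the average across all users.
# A stand-in for the AI-based auditing described in the text.

def flag_unusual_access(log, threshold=3.0):
    """Return users whose access count exceeds `threshold` times the
    per-user average in `log`, a list of (user, record_id) pairs."""
    counts = Counter(user for user, _record in log)
    avg = sum(counts.values()) / len(counts)
    return sorted(user for user, n in counts.items() if n > threshold * avg)

# Three users touch one record each; "mallory" bulk-reads 40 records.
access_log = [("alice", "r1"), ("bob", "r2"), ("carol", "r3")] + \
             [("mallory", f"r{i}") for i in range(40)]
print(flag_unusual_access(access_log))  # ['mallory']
```

Unlike a manual review, a check like this runs continuously over every access, which is why automated auditing catches leaks that human reviewers miss.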