
By: Gabriel Schram
April 16, 2021
Are Rules And Regulations Keeping Up With AI Advancements?

Artificial Intelligence (AI) capabilities are continuously growing beyond the scope of existing laws and legislation. This has raised concern over the need for regulation that keeps pace with the rate of AI development. Developing a solution has proven difficult given the global reach of AI and the conflicting interests surrounding its uses and potential.
AI and its subsets are used across a wide spectrum of industries and infrastructures, including autonomous vehicles, facial recognition, and more accurate medical equipment. Given AI's extensive reach and the vital systems it touches, there should be clearly defined rules and regulations. Even more concerning is the increasing rate at which AI is being used to collect user data. Collected user data has become a valuable commodity because AI systems work more effectively when provided with more information. The correlation between AI capabilities and AI regulation will determine whether enough is being done to protect information systems and their user data.
Artificial Intelligence Capabilities
The extent of AI capabilities has grown to the point that AI is invaluable to modern technological advancement and societal infrastructure. Increasing data storage contributes to how well AI performs: the sheer amount of data on users and machines provides the conditions that power an AI system.
The application of AI to medical diagnosis is often described as high-performance, data-driven medicine. Accumulated electronic medical records provide the data that has led AI algorithms to outperform doctors in some areas of medical examination. Specifically:
- DLAD (Deep-Learning-Based Automatic Detection) is an algorithm developed at the Seoul National University Hospital and College of Medicine to read chest radiographs and detect certain cancers. When compared with physicians' detection, it outperformed 17 of 18 doctors.
- Google AI Healthcare developed LYNA (Lymph Node Assistant) to perform histology slide analysis and detect breast cancer tumors in lymph node biopsies. When provided with two datasets, it was 99% accurate and cut the average slide review time. (Greenfield, 2019)
Concerning cybersecurity, AI has played a key role in detecting and mitigating threats. AI can track and analyze user behavior, identify abnormalities in network traffic, and remediate vulnerabilities. An added benefit is the automation of tasks that would otherwise consume time and resources. Conversely, AI has enabled a higher level of situational awareness in malware, resulting in stealthier malware that adapts to its environment. IBM researchers presented DeepLocker at Black Hat USA in 2018: a proof-of-concept that conceals a ransomware payload in encrypted form and unlocks it only when facial recognition identifies its intended target (Stoecklin, Jang, & Kirat, 2018).
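The idea of identifying abnormalities in network traffic can be illustrated with a far simpler statistical test than the machine-learning models real products use. The sketch below is a hypothetical example, assuming per-minute request counts as the monitored metric and a z-score threshold chosen for illustration:

```python
# Minimal sketch: flag minutes whose request count deviates sharply from
# the mean. Real AI-based detection uses much richer behavioral models;
# the data and threshold here are illustrative assumptions.
from statistics import mean, stdev

def find_anomalies(requests_per_minute, threshold=2.0):
    """Return indices whose value lies more than `threshold` sample
    standard deviations from the mean of the series."""
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    return [i for i, x in enumerate(requests_per_minute)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

traffic = [120, 118, 125, 122, 119, 121, 980, 117]  # spike at index 6
print(find_anomalies(traffic))  # [6]
```

In practice the flagged indices would feed an automated response (alerting, rate limiting), which is the kind of task automation described above.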
AI-based surveillance technologies have carved out a market and are being used by nation-states, law enforcement, and government agencies. AI makes it possible to monitor millions of cameras simultaneously in real time and identify anomalies based on the provided algorithm and data. Beyond video surveillance, AI has been used in business models that collect and analyze consumer data (Nouri, 2020).
AI is continuously being applied to vehicles as well. Autonomous vehicles rely on an AI-based system that utilizes GPS, cameras, and multiple types of sensors to obtain data. AI in vehicles can be applied to:
- Emergency controls and active monitoring of blind spots
- Predictive maintenance
- Driver preferences
To reiterate, AI algorithms work better with more data, and this is most obvious with autonomous vehicles: more data equates to a better-performing vehicle with a lower likelihood of crashing. Furthermore, many companies that produce these vehicles connect them to a shared network, so data collected across the fleet can be pooled and "learned" from.
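One common way a fleet "learns" from shared experience without shipping raw sensor data is federated averaging: each vehicle trains a model locally and only the model parameters are pooled. The sketch below is a hypothetical illustration, with made-up weight vectors standing in for each vehicle's locally trained model:

```python
# Hypothetical sketch of federated averaging: each vehicle trains on its
# own sensor data, and only the resulting model weights (not the data
# itself) are averaged into a shared fleet model.

def federated_average(local_weights):
    """Element-wise average of equal-length weight vectors."""
    n = len(local_weights)
    return [sum(ws) / n for ws in zip(*local_weights)]

# Each inner list stands in for one vehicle's locally trained parameters.
fleet = [
    [2.0, 5.0, 9.0],
    [3.0, 4.0, 11.0],
    [1.0, 6.0, 10.0],
]
print(federated_average(fleet))  # [2.0, 5.0, 10.0]
```

The averaged weights would then be pushed back to every vehicle, so each benefits from miles driven by the others.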
Although not yet actively deployed, Lethal Autonomous Weapon Systems (LAWS) have been a great concern in the realm of AI potential. This has led to talks surrounding LAWS at the UN and pleas from many nations that this route be taken with caution. However, the development of weapons that can choose targets and take action without human involvement is something AI is fully capable of (Congressional Research Service, 2020).
Current AI Legislation
Legislation for AI varies globally with each nation's AI strategy, but major changes in AI affect everyone in the field. In 2018 the United States passed the John S. McCain National Defense Authorization Act, which established the National Security Commission on Artificial Intelligence to provide recommendations on the use of AI for national security. In its most recent press release (October 13, 2020), the commission approved 66 recommendations for the legislative and executive branches. These recommendations relate to training more AI talent, building on US technology advances, and applying AI to national security matters (The National Security Commission on Artificial Intelligence, 2020).
On February 11, 2019, Executive Order 13859 was signed, establishing the American AI Initiative. Among its goals were growing the workforce in the field of AI, engaging internationally on AI, and directing government investment toward AI research across several departments.
More recently introduced is the National Artificial Intelligence Initiative Act of 2020. If passed, this act would allow the US to continue its global leadership in AI research and development and better prepare the workforce for AI integration. This legislation stands apart because it would establish a standard for AI development and a National Artificial Intelligence Initiative Office (Minevich, 2019).
Is More Regulation Needed?
Current regulation and legislation addressing AI is relatively broad, whereas AI's advancement has presented specific opportunities and specific dangers. Added to that is the general lack of regulation overall, which could stem from lawmakers' limited understanding of AI or from poor conveyance of information by those briefing them. Whatever the case may be, AI is advancing faster than legislation is being passed.
The globe appears to be in a type of arms race for AI: nation-states want the most advanced and sophisticated AI systems before their rivals and allies do. That strategy favors as little regulation as possible, and little regulation leaves gray areas in which well-funded private companies have free rein over the development of AI.
While it does not directly regulate AI development, data privacy legislation is imperative because AI has made the collection and analysis of user data easy. Some headway has been made here with legislation such as the California Consumer Privacy Act, the European Union's General Data Protection Regulation, and the Health Insurance Portability and Accountability Act. However, even this is not enough to cover the full range of user data being collected.
While AI has enabled impactful scientific breakthroughs, its potential for misuse could lead to public distrust of AI. To build trust in AI systems, there must be transparency around AI algorithms and their use. This is especially true in the medical field: the FDA is extremely strict about what it certifies for medical use, and a lack of understanding of its approval process slows AI research and development in the field. Legislation concerning information technology, and cybersecurity in particular, tends to be reactionary rather than preventative. The American AI Initiative is a good start, but its propositions are broad; eventually something more concrete and specific to individual industries and technologies will be required.
References
Congressional Research Service. (2020). Defense primer: U.S. policy on lethal autonomous weapon systems. Retrieved from https://fas.org/sgp/crs/natsec/IF11150.pdf
EC-Council. (2019). The role of AI in cybersecurity. Retrieved from https://blog.eccouncil.org/the-role-of-ai-in-cybersecurity/
Greenfield, D. (2019). Artificial intelligence in medicine: Applications, implications, and limitations. Retrieved from http://sitn.hms.harvard.edu/flash/2019/artificial-intelligence-in-medicine-applications-implications-and-limitations/
Minevich, M. (2019, August 20). The American AI initiative: A good first step, of many. Retrieved from https://social.techcrunch.com/2019/08/20/the-american-ai-initiative-a-good-first-step-of-many/
Nouri, S. (2020). Council post: How AI is making an impact on the surveillance world. Retrieved from https://www.forbes.com/sites/forbestechcouncil/2020/12/04/how-ai-is-making-an-impact-on-the-surveillance-world/
Stoecklin, M. P., Jang, J., & Kirat, D. (2018). DeepLocker: How AI can power a stealthy new breed of malware. Retrieved from https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/
The National Security Commission on Artificial Intelligence. (2020). National security commission on artificial intelligence submits 2020 interim report and third quarter recommendations to congress and the president. Retrieved from https://drive.google.com/file/d/1jg9YlNagGI_0rid-HXY-fvJOAejlFIiy/view