Leducate Explains: Developments in Artificial Intelligence Laws
Hint - key terms are defined. Just click on the blue words to see their definitions!
Artificial intelligence is a term that often crops up in discussions of technological advances. There are fears that it will take away human jobs and even harm people. In this article, we look at what artificial intelligence is and what legislation the European Parliament is proposing, while weighing the risks and benefits of AI.
The European Union’s Legislative Proposals on Artificial Intelligence (AI)
In light of AI’s numerous current and near-term applications in our daily lives, the European Parliament wants to develop a framework to govern the development and use of AI. In October 2020, the Legal Affairs Committee of the European Parliament approved three sets of legislative proposals, relating to human oversight, civil liability and intellectual property.
What is 'artificial intelligence'?
Broadly speaking, AI refers to computer software that can get better at tasks on its own, or interact with its environment in an intelligent way. Often, this allows computers to perform tasks that humans are typically very good at. A traditional computer struggles to handle unexpected or imperfect environments and inputs; artificially intelligent software allows computers to adapt to such situations as humans do.
One particularly prominent sub-field of artificial intelligence is ‘machine learning’. This field of AI applies statistical methods to enable computer systems to learn, from data, how to achieve a defined goal. Current uses of machine learning include internet search engines, voice recognition apps and skin cancer detection.
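To make that idea concrete, here is a minimal, purely illustrative sketch in Python (the data points and numbers are invented for this example). The program is never told the rule linking inputs to outputs; it estimates the rule from example data by repeatedly reducing its prediction error.

# A minimal machine learning sketch: learn a hidden rule from examples.
# (Illustrative only: the data, starting guess and learning rate are invented.)

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0)]  # hidden rule: y is roughly 2x

weight = 0.0          # the model's current guess at the rule
learning_rate = 0.01  # how far to adjust the guess at each step

for step in range(1000):
    for x, y in data:
        prediction = weight * x
        error = prediction - y               # how wrong the current guess is
        weight -= learning_rate * error * x  # nudge the guess to shrink the error

print(f"Learned weight: {weight:.2f}")  # settles near 2.0, the hidden rule

The ‘defined goal’ here is minimising prediction error; search engines, voice recognition and skin cancer detection apply the same principle at a vastly larger scale.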
Closely connected to AI and machine learning is the field of robotics. Robots can be the physical, embodied form of AI, or they can be virtual agents, sometimes referred to as ‘bots’. Examples of physical robots include autonomous vehicles, drones, robotic service assistants and autonomous weapons.
The European Parliament Legal Affairs Committee’s Legislative Proposals
Human oversight
The first initiative suggests that “ethical principles” should govern the development, deployment and use of AI, robotics and related technology. The stated aim of the initiative is to ensure that AI is “human-centered” and “human-controlled”. To achieve that, developers and operators should be required to abide by principles such as: safety, transparency and accountability; safeguards against bias and discrimination; the right to redress (to challenge harm done by AI and obtain appropriate justice); social and environmental responsibility; and respect for fundamental rights (as set out in the EU Charter of Fundamental Rights).
Civil Liability
The second initiative suggests a “future-oriented” liability regime that holds operators of AI accountable for death, personal injury and damage to property caused by their use and control of AI systems. The regime rests on a distinction between ‘low-risk’ and ‘high-risk’ AI technology: an individual using high-risk AI should know that there is a greater chance of harming others, and is therefore subject to a harsher regime if their AI does harm.
High-risk AI systems would be those that have “a significant potential to cause harm to one or more person”. The Legal Affairs Committee proposes “strict liability” for operators of high-risk AI. This means that the operator will always be held liable for the harm or damage inflicted, regardless of whether they intended or foresaw it.
To avoid uncertainty, given the strictness of this regime, the Committee proposes that all AI systems defined as high-risk should be listed in an annex, to be reviewed and updated every 6 months.
Intellectual Property
The third initiative notes the increasing number of AI-related patents (a type of IP) being granted, and stresses the importance of ensuring a high level of protection for IP rights. It also proposes clarifying that ownership of IP rights, if any, should be assigned only to people or companies - not to the AI itself. This means, for example, that even though AI software will write new code to do new things as it learns, the copyright in that code as a written work does not belong to the AI.
Bigger questions
How do we monitor AI?
In a fast-growing, dynamic field such as AI, how can we monitor compliance with the law? Theoretically (though easier said than done), anyone with a computer could be developing or using AI technology, so it is difficult to check that they are applying the ethical principles, or to hold them liable for any harm they do. The same is effectively already true of conventional ‘cyber’ crimes. The difficulty is compounded by AI’s ability to learn and self-improve: as AI systems become ‘smarter’ and more complex, humans may not be able to understand or explain the workings of “black box” algorithms. One particular result is that the EU would struggle to exhaustively list all high-risk AI in law.
The first initiative, 'Human Oversight', stresses the need for human oversight of how an AI system operates: operators should ensure that decisions made with the involvement of AI are fully explainable. Furthermore, developers and users of AI must conduct their work in a secure, technically robust, reliable, ethical and lawful manner.
The second initiative, 'Civil Liability', adds a further level of accountability by suggesting that AI-specific liability be created. By making sure that the human operators of AI are liable, the risk of having to make a huge payout should be an incentive for operators to closely monitor the AI systems they control.
AI & Criminal Law
The usual requirement in criminal law that a person intended an act for it to be criminal (known as ‘mens rea’) fits uncomfortably with AI, as AI is incapable of forming its own intentions independently of its operators. And while civil liability is routinely imposed with a lower level of intent on the perpetrator’s part, an AI that is not fully understood by its operator could do something criminal without the operator intending it. It is therefore difficult to argue that the operator, supposedly responsible for the activities of their AI, should face criminal sanctions.
Even where the human operator of an AI that kills a person is held criminally liable for the offence, it is far less likely that the AI itself could be found directly liable. Could ‘mens rea’ be established in an AI? Could an AI be punished meaningfully through conventional criminal law sanctions? Certainly, simply extending the existing criminal law to AI is unlikely to be effective.
AI & Intellectual Property
Giving IP rights to a robot would be an inappropriate use of a copyright system designed to reward human creative spirit and expression. Equally, conferring ‘legal personality’ on AI so that it could “own” its creations would have a negative impact on incentives for human creators.
Conclusion
These legislative proposals are being discussed and prepared ahead of legislation expected in early 2021. The European Commission will aim to bring forward legislation that strikes a good balance between improving humanity’s control of AI on the one hand, and encouraging technological advancement and innovation on the other.
Written by Janet Wong
Glossary box
Artificial Intelligence - Computer software that can get better at tasks on its own, or interact with its environment in an intelligent way.
European Parliament - The directly elected law-making institution of the European Union.
Intellectual Property - The intangible (non-physical) rights that individuals and companies can hold to protect their ideas from being used by others. IP ensures that inventors and creators can exclusively capture the value of their ideas, at least for a period of time. This encourages people to invent and create, adding to general human knowledge.