By Samyukta Ramaswamy

The idea that humans would, at some point in time, be able to develop machines capable of “thinking” and “acting” for themselves has been entertained since the beginning of civilization.[i] Intelligent machines, or Artificial Intelligence (AI) as they are better known, are not a new concept and have long been popular amongst readers of science fiction. Take, for instance, The Jetsons, an animated television show extremely popular in the 1960s, which captures a utopian vision of a futuristic society wherein such machines perform mundane yet complex day-to-day tasks, ordinarily performed by humans, more efficiently. This utopian vision has been realized in more tangible ways with the introduction of IBM’s supercomputer “Deep Blue”, capable of beating chess grandmasters at their own game[ii]; Google’s driverless cars; pilotless aircraft; and intelligent machines that are sent to space to study the landscape of planets or that deliver targeted medicines to parts of the human body otherwise unreachable by human hands. Society has thus made rapid developments in science and technology, which is becoming progressively more advanced to a point where AI machines are not only increasingly human-like but also autonomous or self-directed, transcending all human expectations. The Spike Jonze movie Her captures this eloquently: we are guided through a fantasy about technology designed with sufficient human-like qualities to relieve loneliness, software that ultimately teaches itself to attain consciousness.

The question that becomes almost imperative to ask at this point is: in a situation where AI is becoming increasingly autonomous as well as sentient (a very real possibility in the near future), is there a need to confer legal personality on such machines in order to determine their liability, and if so, what would be the implications of such a change in the legal framework? This article argues that in the wake of technologically advanced intelligent machines like Sophia (which has been granted the status of ‘person’ in Saudi Arabia)[iii], there is a need to revisit the existing legal framework and deliberate on whether conventional legal categories would suffice in determining the liability of AI or whether new legal categories of ‘persons’ must be established under the law. Whilst most of these changes may occur contextually, given that the ways in which humans use new technologies shape the legal doctrine designed to govern them, significant legal change, including constitutional change, in the framework governing AI seems inevitable.

This article thus argues that according legal personality to artificial intelligence is a dangerous notion that would challenge the very fundamental notions on which society functions. Moreover, conferring legal personality on AI would still not answer the question of liability: what kind of liability can be imposed on robots, and how would it be enforced?

Determining the contours of legal personhood in the context of AI

Before understanding the concept of extending legal personality to robots, we need to first understand the scope and meaning of the expression “legal personhood” in general. To grant legal personhood is to confer upon entities certain rights and obligations under the law. Solaiman identified the basic attributes of a legal person as follows:

  1. The ability to know and exercise its rights as a legal agent, and
  2. Being subject to legal sanctions as ordinarily applied to humans.[iv]

When legal personality is attributed to non-human entities such as corporations, it does not necessarily entail the ethical notion of personhood; rather, it is a fictional status conferred upon these entities by the law to suit its ends. Secondly, to confer legal personality on an entity is to confer an aggregate of rights and responsibilities. This means that the legal system essentially chooses to recognize a particular entity as a legal person with respect to certain rights and obligations, but not others.[v] Therefore, although a corporation has the right to sue and be sued, the right to enter into contracts and so on, it does not necessarily have the right to vote or the right to privacy. In this manner, the law may confer legal personhood on non-human entities by granting them the few rights and obligations that would suit the ends of society. A classic example of this is the recent decision of courts to grant legal personhood to various environmental features such as the Whanganui River and Te Urewera national park in New Zealand[vi], the Ganges and the Yamuna rivers in India[vii], and the entire ecosystem in Ecuador[viii].

The idea of attributing legal personality to artificial robots has long been in question and has been debated by eminent scholars like Solum as far back as 1992.[ix] It has gained considerable prominence today given the recent recognition of Sophia, a robot built by Hanson Robotics, as a citizen of Saudi Arabia. In recognition of the growing prominence of autonomous robots, the Committee on Legal Affairs of the European Parliament on 27 January 2017 put forward a Motion for a European Parliament Resolution in respect of robotics and artificial intelligence, which was adopted on 16 February 2017 as the Civil Law Rules on Robotics, discussing amongst other things the possibility of attributing legal personality to robots.[x] The Resolution proposes to create a new legal category of ‘electronic persons’ for robots which make autonomous decisions or otherwise interact with third parties. In the following section, the author explains why conferring legal status on artificially intelligent machines sets a dangerous precedent, and instead argues for the adoption of a strict liability regime in determining liability claims involving AI.

Examining liability rules for AI

Firstly, for an artificially intelligent machine to be granted legal personhood, it would have to satisfy the second criterion put forth by Solaiman, i.e., bearing certain duties corresponding to the rights it exercises, and consequently being subject to legal sanctions on failure to perform those obligations. This is a tricky aspect in itself, as it poses a key conceptual question: are artificially intelligent machines to be thought of as agents of another individual or entity, or is the legal framework to be altered to decide liability concerns on a basis other than agency?

In order to ascertain liability rules for autonomous robots, let us consider three scenarios taking the example of Google’s driverless cars. The car is driven by systems that use an array of radar and laser sensors, cameras, global positioning devices, and complex analytical programs and algorithms that enable the car to drive itself much as humans do, only better: it is programmed to avoid collisions with persons or other obstacles, thereby drastically reducing the human errors that would otherwise result in accidents.[xi] The first scenario is quite easy: the machine’s failure is the result of human error (not necessarily the driver’s). Here, liability rules are fairly straightforward in that liability can be attributed to a human on the basis of a design defect or a manufacturing defect of the product, by holding them to a higher degree of care. This is the basic product liability framework under which the injured party can claim damages when things go wrong.[xii]

However, what if the car acts in a manner contrary to the instructions given to it by its programmers or creators? This second scenario is more complicated than the first: even in the best of circumstances, when unexpected events occur and the car starts ‘acting on its own’, beyond what it was programmed to do, in a way that can be said to have ‘caused’ the accident, who is to be held liable? This is a very real possibility considering that, in the not-so-distant future, there will come a point when it will be hard to conceive of intelligent machines as mere agents or tools, as they adapt the instructions fed to them by their human creators to situations unforeseen at the time of their creation, thereby defining their own path and priorities. A dilemma arises where the machine’s failure is devoid of any mechanical or design defect that can fairly be attributed to a human actor: who is to pay? It is at this point that attributing legal personhood to robots becomes a relevant question in addressing liability concerns.

In my view, conferring legal personhood on such machines sets a dangerous precedent in the legal system, primarily because while robots would enjoy a host of legal rights against humans, it is still largely unclear what corresponding legal obligations would be incumbent on them. Secondly, unlike other entities that have been accorded legal personality, such as corporations, which have legal persons behind them answerable for any violation, intelligent machines have no legal person to instruct or control them and are largely independent of any human actor. When corporate legal persons incur liability, the humans composing the corporation manage dispute settlement on behalf of the corporation in which they have an interest, i.e., they are made liable for any wrongful act or omission of the corporation. Even the other non-human entities that have been accorded legal personhood, such as the Whanganui River in New Zealand or the Ganges and Yamuna rivers mentioned earlier, are considerably different in that there are still human actors who are made responsible for preserving and protecting their rights.[xiii] But what we are imagining here is an artificially intelligent machine as a legal person, completely untethered from a human principal.

Another important question to consider before attributing legal personality to AI machines is that of robot solvency.[xiv] It is unclear what it would mean for a robot to hold assets, or how it would even acquire them. The law could contemplate mechanisms enabling robots to own property or hold accounts, as it does for corporate legal persons, by, say, requiring the creators of robots to place an initial fund in such accounts; but once the account is depleted, the robot would effectively become insolvent and would no longer be answerable for violating human legal rights. In comparison, when insolvent human legal persons violate the legal rights of another individual, the legal system has other means to hold them accountable, providing for jail time, community service and the like. These options are unavailable or ineffective in the case of robots. Nor is shutting them down as a penal measure a practical solution for every incident of fault at the hands of an AI.

A more suitable solution to this dilemma would be to adopt a strict liability approach, one free from notions of fault. This system would essentially impose a court-based no-fault insurance regime to resolve questions of liability in situations where it is next to impossible to determine fault or negligence on the part of any actor.[xv] It would involve building an insurance premium into the machine’s (in this case, the driverless car’s) sale price so as to offset potential liability costs. In my opinion, such a regime is warranted because, in most cases, the creators of the vehicle are in a much better position to absorb the costs or bear the burden of the loss than the injured party; it would also mitigate the enormous transaction costs faced by the parties in litigating the matter, especially where fault cannot be determined. However, in contrast to the common practice whereby the manufacturer of the product is usually held accountable for its malfunction or defect, in my opinion the liability in cases where ‘fault’ cannot be determined should be proportionally distributed amongst all the actors who contributed to the making of the AI, so as to ensure that no one person shoulders the entire burden of compensating the injured party, especially when the glitch in the machine cannot be attributed to any cause.
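The arithmetic of such a proportional distribution is simple. The sketch below is purely illustrative and not drawn from any existing legal rule: the actor names and contribution shares are hypothetical, and a real apportionment would of course be fixed by a court or statute rather than by a formula.

```python
# Illustrative sketch only: splitting a damages award among the actors who
# contributed to an AI system, in proportion to hypothetical contribution
# shares, when no individual fault can be determined.

def apportion_liability(award, contributions):
    """Split `award` pro rata to each actor's contribution share."""
    total = sum(contributions.values())
    return {actor: round(award * share / total, 2)
            for actor, share in contributions.items()}

# Hypothetical shares in the making of a driverless car (percentages):
shares = {"sensor maker": 25, "software developer": 45, "assembler": 30}
print(apportion_liability(100_000, shares))
# → {'sensor maker': 25000.0, 'software developer': 45000.0, 'assembler': 30000.0}
```

Under this scheme no single actor bears the whole award, and the shares always sum back to the full compensation owed to the injured party.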

[i] See, e.g., PAMELA MCCORDUCK, MACHINES WHO THINK: A PERSONAL INQUIRY INTO THE HISTORY AND PROSPECTS OF ARTIFICIAL INTELLIGENCE, at xxiii–iv (2004).

[ii] Daniel C. Dennett, Higher Games, MIT TECH. REV. (Aug. 15, 2007) (analyzing the significance of “Deep Blue’s” win over chess genius Garry Kasparov in 1997), available at http://www.technologyreview.com/review/408440/higher-games/ last seen on 22nd February, 2018.

[iii] Heba Kanso, Saudi Arabia gave ‘citizenship’ to a robot named Sophia, and Saudi women aren’t amused available at https://globalnews.ca/news/3844031/saudi-arabia-robot-citizen-sophia/ last seen on 22nd February, 2018

[iv] Solaiman S.M., Legal Personality of Robots, Corporations, Idols and Chimpanzees: A Quest for Legitimacy 25 ARTIF. INTELL. LAW 155, 179 (2017); Smith B, Legal Personality 37 YALE LAW J., 283, 299 (1928).

[v] Kent Greenfield, In Defense of Corporate Persons, 30 CONST. COMMENT. 309, 321 (2015).

[vi] Rousseau B, In New Zealand, lands and rivers can be people too (legally speaking) (2016) available at https://www.nytimes.com/2016/07/14/world/what-in-the-world/in-new-zealand-lands-and-rivers-can-be-people-legally-speaking.html last seen on 10th February, 2018

[vii] Safi M, Ganges and Yamuna rivers granted same legal status as human beings (2017) available at https://www.theguardian.com/world/2017/mar/21/ganges-and-yamuna-rivers-granted-same-legal-rights-as-human-beings last seen on 10th February, 2018

[viii] Ecuador Const., title 10 (“Nature shall be the subject of those rights that the Constitution recognizes for it.”), available at http://pdba.georgetown.edu/Constitutions/Ecuador/english08.html last seen on 10th February, 2018.

[ix] See Lawrence B. Solum, Legal Personhood for Artificial Intelligences 70 N.C. L. REV. 1231 (1992).

[x] European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics, available at http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P8-TA-2017-0051+0+DOC+XML+V0//EN last seen on 20th February, 2018.

[xi] Sven A. Beiker, Legal Aspects of Autonomous Driving, 52 SANTA CLARA L. REV. 1145, 1149 (2012) (pointing out that “[d]river error is by far (95%) the most common factor implicated in vehicle accidents”); Kevin Funkhouser, Note, Paving the Road Ahead: Autonomous Vehicles, Products Liability, and the Need for a New Approach, 2013 UTAH L. REV. 437, 437–38.

[xii] See generally RESTATEMENT (THIRD) OF TORTS: PRODUCTS LIABILITY § 3(a) (1998) (Where the plaintiff can show that the product failure “was of a kind that ordinarily occurs as a product defect”); In re Toyota Motor Corp. Unintended Acceleration Mktg., Sales Practices, & Prods. Liab. Litig., F. Supp. 2d, 2013 WL 5763178 (C.D. Cal. Oct. 7, 2013).

[xiii] For instance, in the debate over according the Ganges and Yamuna the status of legal personhood, the Uttarakhand High Court declared the Director of the Namami Gange project, the Chief Secretary of Uttarakhand and the Advocate General of the State to be their “loco parents”. The order has, however, been stayed by the Supreme Court. See “SC stays Uttarakhand HC order on Ganga, Yamuna living entity status” (July 8th, 2017) available at http://indianexpress.com/article/india/sc-stays-uttarakhand-hc-order-on-ganga-yamuna-living-entity-status-4740884/ last seen on 28th February, 2018.

[xiv] Joana J. Bryson, Mihailis E. Diamantis and Thomas D. Grant, Of, for, and by the people: the legal lacuna of synthetic persons 25 ARTIF. INTELL. LAW (2017) available at https://link.springer.com/article/10.1007/s10506-017-9214-9 last seen on 27th February, 2018.

[xv] David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence 89 WASHINGTON LAW REVIEW 117, 147 (2014).
