Putting Asimov’s Laws Into Practice

by Mircea Băduț

Addendum to the Laws of Robotics

Preamble

Listening to the radio this morning – during a short segment on robots and artificial intelligence – I heard Asimov’s famous laws recited, and I immediately said to myself: “Those who have to write the methodological rules for applying them really have their work cut out for them!”

Of course, I was once fascinated by the stories that Isaac Asimov embroidered on the “infrastructure” of the laws of robotics, which became not only legendary but also, behold, a reference point for humanity’s concerns about the advance of automation and computer science, and about the disturbing prospect of a potential Singularity[1]. But at the time I did not know that a law is a concise statement, and that – in order to function in social, administrative, economic and judicial practice – it must often be supplemented with detailed provisions on its concrete application, so-called ‘implementing regulations’ (a common feature of European Union legislation, and of that of its member states, such as my native Romania).

Let us call to mind the three laws of robotics:

1st Law: “A robot may not injure a human being, or, through inaction, allow a human being to come to harm.”

2nd Law: “A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.”

3rd Law: “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.”

Therefore, the challenge arises to reflect on (and even to imagine) the content of ‘the implementing regulations for the laws of robotics’, whether these regulations are meant for lawyers (legislators, courts, judges, attorneys) or for the entities involved (robot builders and programmers; owners of future robots; conscious and legally responsible robots; etc.). [2]
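To make the exercise concrete, here is a minimal sketch – in Python, with entirely hypothetical names and predicates; an illustration of the idea, not anyone’s actual implementation – of how the three laws might be encoded as a prioritized rule check:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would this action injure a human being?
    neglects_human: bool    # would choosing it let a human come to harm through inaction?
    ordered_by_human: bool  # is it required by an order given by a human?
    endangers_self: bool    # does it put the robot's own existence at risk?

def permitted(action: Action) -> bool:
    """Evaluate an action against the Three Laws, in priority order."""
    # 1st Law: never injure a human, nor allow harm through inaction.
    if action.harms_human or action.neglects_human:
        return False
    # 2nd Law: obey human orders (conflicting ones were already vetoed above).
    if action.ordered_by_human:
        return True
    # 3rd Law: otherwise, protect the robot's own existence.
    return not action.endangers_self
```

The sketch immediately exposes the problem such a regulation would face: every boolean field hides an entire body of norms – what counts as ‘injury’, as ‘inaction’, as a valid ‘order’ is precisely what the concise law leaves undefined.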

Intermezzo

As a basis for deliberation, we can accept that MORALS consist of the rules of coexistence (or ‘social rules’, if you will). And from the perspective of how individuals are raised in society, it can be said that MORALS reach us in three tranches: (1) through intra-species biological reflexes (primary instincts of socialization, as we see in animals); (2) through education (the example provided by parents and others, and explicit learning); (3) through written laws (formally defined by society’s institutions). Here we will be primarily concerned with this third level, but from the vantage point of the ‘artificial intelligence’ that is supposed to animate robots.

In search of rules and regulations

First of all, it is worth acknowledging that – in view of the possible conflict between humans and robots, or, rather, between humans and autonomous technology (an alternative and more comprehensive phrase that I propose) – the laws formulated by Isaac Asimov are admirable if we consider the year of their publication: 1942. That is only two decades after Karel Čapek launched the term ‘robot’ through his fictional writings. [3]

But today such a tersely expressed legislative approach would strike us as pseudo-ethical, or even playful. Yes, looked at in detail, the text of those laws is dated, and as regards applicability they are downright obsolete. On the other hand, an equally concise reformulation, with comparable impact, is unlikely to arise now. Society’s mind has changed too much since then, and so has the context.

Lately, we have all witnessed several “emanations” of popular artificial intelligence (see the web applications Google Maps and especially Google Translate, not to mention the latest wave of generative systems), and we have been able to get a taste of what ‘machine learning’ means as a premise for a possible future autonomy – an epistemic autonomy that could not have been anticipated in 1942. But this is only part of the changed perspective.

Now, armed merely with the life experience of an ordinary 21st-century person – so not necessarily cleaving to the standards of jurisprudence – I propose that we dissect a little the texts of the three original laws of robotics (and perhaps even look at them with possible ‘implementing regulations’ in mind).

1st Law: “A robot may not injure a human being”

This first and essential part of the Asimovian law looks manageable from the perspective of application, owing to its similarity to the classic laws of human morals, for which there are both customary and written norms in civil and criminal law. We set aside the exclusions the statement implies (i.e. the speculation that “yes, the robot cannot harm any human, but it would be free to harm another robot”), and we observe – by extrapolating the idea of similarity with human laws – that a number of questions arise.

Such as: Could it be that the anthropomorphic robot (literally but also figuratively, i.e. the robot destined to coexist with humans) is first of all subject to the laws of humans, in their entirety, to which Asimov only formulated a ‘codicil’? In other words, shouldn’t we consider that the laws of robotics function as a legislative subset, designed to complement civil law where necessary?

Or here is another question: How autonomous and responsible (morally and civilly) can a robot be when it is manufactured and programmed by others? To what extent is the legal responsibility for the robot’s deeds shared with the humans or robots that created it? Even more: is it possible to incriminate a complex algorithm in which the participation of its creators – humans or robots – was highly dispersed? And how much dispersion can responsibility bear before it becomes… lapsed?

We have seen that, for the time being, civil liability for criminal or misdemeanor incidents caused by existing machines (such as Google’s or Tesla’s autonomous cars) is considered to belong to their creators and/or owners. (And if the damage is merely material, it can be covered through the insurance system.) But things get complicated in situations where those robots end up operating in unforeseen contexts or circumstances, which can no longer be attributed to the creators or owners.

Probably in ‘early robotic jurisprudence’ the concept of INTENT – a fundamental concept in the judicial documentation of crimes – will be fairly simple to operate and detect (and will likely often be preceded or replaced by the concept of NEGLIGENCE), but in the more distant future it will not be easy to establish, because an exponential and independent development of artificial intelligence may take the “thinking” of robots away from human morality. (That is, it may become difficult for us to discern the motives or intentions behind the decisions and actions of super-intelligences.)

And one more question! Where is the boundary between the autonomously evolving automaton, fully responsible in civil terms, and the one incapable of moral discernment? What do we call those that are not fully legally mature? Limited-liability robots? Minor androids?

We return to the text of Asimov’s first law, namely the second part of the statement: “… or, through inaction, allow a human being to come to harm.” Here things are rather uncertain. Yes, a methodological implementing regulation could fix the laconic wording, clarifying that it refers to a robot that is witnessing an injury. (In parentheses, we note that Asimov’s perspective is juridically incomplete: he refers only to violent acts having the human being as their direct object, effectively ignoring the multitude of deeds that can harm humans indirectly: theft, slander, smuggling, corruption, lying, perjury, fraud, pollution, etc.) But even assuming such a clarifying norm, debatable aspects remain, such as:

(1) an advanced robot, having powerful or multiple connections to the information network (data, sensors, video-surveillance cameras), could theoretically witness crimes over a much larger geo-spatial area than a human can cover, which could easily bring it into a state of saturation, of judicial inoperability;

(2) human law imposes no general obligation to intervene in an ongoing crime; asking robots to do so could therefore prove ‘politically incorrect’. In fact, here shines through the mid-twentieth-century vision of the robot as ‘slave of man’, a vision explicitly embodied in the text of the…

2nd Law: “A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.”

Yes, most people imagine robots – industrial, domestic, counter clerks, software applications, toys, nurses, companion robots, and so on – as destined to serve people, because they truly are machines built for this purpose. But in the future, when (or if) their autonomy expands – through increased capabilities of storage, processing and communication – the outlook could change.

There is already a great deal of technical-scientific research, and there are practical applications, showing that building in self-development skills (that is, adding independence) can be a way to solve more difficult and more complex problems. (One might even draw an epistemological parallel here with the transition from the von Neumann computer to the quantum one.) Self-development can take two forms: (1) the accumulation of new knowledge (growing the database through self-learning) and (2) the modification and optimization of the algorithms for information processing and decision-making (which again brings us to the question of legal liability). (We open another parenthesis to note that in modern software programming, from Object-Oriented Programming (OOP) onwards, the boundary between data and algorithm is no longer a strict one – a blurring the sketch below tries to illustrate. And over time, the paradigm could shift even further.)

In addition to the aforementioned machine-learning (ML) model, we have other related concepts: machine-to-machine (M2M), Internet of Things (IoT), neural networks (NN), artificial intelligence (AI). But it must be said that such phrases and acronyms often amount to a frivolous fashion (catalyzed by our contemporary thirst for hype), an emphasis that conveys hope but also hides naivety and ignorance. And it often conveys anxiety (unjustified, for now): the fact that we have many automatons versed in ML, M2M, AI, NN and IoT does not mean at all that they will soon develop to the point of “weaning” themselves and bringing about the Singularity that human civilization fears.
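Here is that promised sketch – a purely hypothetical illustration, with names invented for the occasion – of both senses of self-development at once: an agent that accumulates experience and re-tunes its own decision rule, so that the rule (the “algorithm”) is itself a piece of learned data:

```python
class SelfDevelopingAgent:
    def __init__(self):
        self.knowledge = []        # (1) grows through self-learning
        self.risk_threshold = 0.5  # (2) the decision rule itself, stored as data

    def observe(self, event, harm_occurred):
        """Accumulate experience -- self-development in sense (1)."""
        self.knowledge.append((event, harm_occurred))

    def adapt(self):
        """Re-tune the decision rule from experience -- sense (2)."""
        if self.knowledge:
            harm_rate = sum(1 for _, h in self.knowledge if h) / len(self.knowledge)
            # Grow more cautious in a world where harm proves frequent.
            self.risk_threshold = max(0.1, 0.5 - harm_rate / 2)

    def will_act(self, estimated_risk):
        """Decide using the current -- possibly self-modified -- rule."""
        return estimated_risk < self.risk_threshold
```

Once risk_threshold has drifted far from the value its programmer shipped, whose negligence – whose intent – does a resulting accident express? That is exactly the liability question raised above.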

Towards the end, a few words about the 3rd Law: “A robot must protect its own existence.”

Although the EU Charter of Fundamental Rights states that “Everyone has the right to respect for his or her physical and mental integrity”, nowhere does it say that suicide is illegal. In other words, for people, their own existence is a right, not an obligation. Why would it be different for robots? Is it because they are material goods, carrying purchasing and manufacturing costs? Would that not imply a purely economic view of the law?

But there is one more questionable aspect: in order to apply this law, those robots would have to be aware of themselves (either through initial programming or through self-development). So what does ‘self-aware’ mean? Here, too, we can identify at least three levels: (1) the stored knowledge (or the robot’s own sensors) can inform it about the extent of its freedom of movement; (2) consciousness: reflective and assumed knowledge of one’s own abilities to interact with and change the surrounding world; (3) the intuition of uniqueness, and possibly the intuition of perishability. (We open a parenthesis for a necessary remark: the perishability of the evolved robot can mean both awareness of physical vulnerability and awareness of its finiteness in time – its mortality, as a human attribute. And we recall, also from Isaac Asimov, two illustrative examples: ‘I, Robot’ and ‘The Bicentennial Man’ [1].) These three levels of self-awareness – each able to correspond to definable levels of civil/legal responsibility, and each more-or-less implementable by algorithms – can also be found in animals, from the many simple ones (small herbivores or carnivores) to the mentally evolved (such as elephants, primates or dolphins).
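Since each level could, as noted, correspond to a definable degree of responsibility, here is a deliberately naive sketch of that correspondence – the mapping itself is pure speculation, not legal doctrine, and every name is invented for the illustration:

```python
from enum import IntEnum

class SelfAwareness(IntEnum):
    BODY_SCHEMA = 1    # (1) knows the extent of its own freedom of movement
    CONSCIOUSNESS = 2  # (2) reflective knowledge of its power to change the world
    PERISHABILITY = 3  # (3) intuition of uniqueness and of its own finiteness

def liability_regime(level: SelfAwareness) -> str:
    """Map an awareness level to a (speculative) liability regime."""
    return {
        SelfAwareness.BODY_SCHEMA: "liability rests entirely with maker and owner",
        SelfAwareness.CONSCIOUSNESS: "shared or limited liability",
        SelfAwareness.PERISHABILITY: "full civil and legal responsibility",
    }[level]
```

Even this toy mapping begs the essay’s recurring question: who certifies which level a given machine has reached?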

However, we end this series of questions and dilemmas with a somewhat transgressive observation: in terms of legislation, human civilization has at least two thousand years of experience, so we can assume that the difficulty does not lie in defining rules. The real test will be to define what the robot is.

#

Bibliography

[1] Asimov, Isaac, ‘The Bicentennial Man’, Ballantine Books, 1976

[2] Băduț, Mircea, ‘DonQuijotisme AntropoLexice’, Europress Publishing House, București, 2017

[3] Čapek, Karel, ‘R.U.R. (Rossum’s Universal Robots)’ (theatre play), 1920


[1] The assumption of an imminent future in which artificial intelligence will merge with, or even surpass, human intelligence, eventually taking control of the world.

~

Bio:

Mircea Băduț is a Romanian writer and engineer. He has written eleven books on informatics and six books of fiction and essays, as well as over 500 articles and essays for various magazines and publications in Romania and around the world.
