Published: 19.9.2016

Public discussion about robotics leads very quickly to philosophical depths. Consider the nuanced parliamentary motion concerning a Europe-wide legislative framework for robotics and AI. It made headlines for suggesting, for the purposes of legislation on liability and responsibility, a new legal status of “electronic persons, with specific rights and obligations, including that of making good any damage they may cause”.


The motion rightly notes that a new legal category might be needed, and that legislation in EU countries needs to be updated concerning issues of liability. But is the category of electronic persons fitting or needed here? What would it mean to think that, in addition to human manufacturers, programmers, trainers, owners or users and so on, the robot itself would have rights and obligations?

Robots, as we currently know them, are not moral persons. They do not meet all the conditions of personhood. While they can master tasks that require intelligence, there is nothing that it feels like to be a robot. Robots cannot feel pain – or anything else for that matter – and they do not lead lives or have “stakes” in the world: they cannot literally care about anything, including their own survival, but can only simulate caring.

Could things change in the future? What would it take to build a robot that would have sentience or consciousness, or that would have a stake in the world? Arguably, that would require a certain type of hardware, and not just software.

Consciousness seems to be associated with certain physical systems and not others. The key is not whether a system is carbon-based or silicon-based, but how it runs. It is not fully known why, say, the cerebral cortex produces consciousness while the cerebellum does not, or what underlies the differences between wakefulness, sleep, coma and anaesthesia. The best attempts at an answer, such as the integrated information theory, remain controversial and even speculative.

Further, robots do not lead lives; that is, they do not engage in far-from-equilibrium dynamic processes whose end amounts to the thing ceasing to exist, dying. A candle flame is a self-organizing, self-maintaining process, and bacteria (and other living beings) are recursively self-maintaining processes. They have a normative stake in the world: they must maintain the process on pain of ceasing to exist. That is not so with robots.

Until there are robots that are sentient and that engage in recursively self-maintaining, far-from-equilibrium processes (such as life), they merely simulate being concerned or behaving in a normatively guided way. It does not seem to make sense to think we owe them anything, such as making their lives go better rather than worse, as they do not even have lives that could go better or worse. In short, talk of rights and responsibilities is not fitting. But what robots can do is serve the purposes that we make them serve. We owe it to each other to make AI systems serve better rather than worse purposes.

Supposing, then, that the category of electronic persons is misleading, how should we categorize robots? One counter-proposal is that robots should be slaves, or less polemically, e-servants instead of e-persons. Indeed, all the points the EU motion makes about legal responsibility and strict liability (including compulsory insurance schemes, compensation funds and public registration numbers, as with cars) can be made without the category of an electronic person.

What, then, are the purposes that we should design robots to serve, in transportation, service robotics, healthcare, education, low-resource communities, public safety and security, employment and the workplace, and in entertainment (to mention the areas that the Stanford University AI100 report of 2016 focuses on)? The question is fundamentally the same as asking what goals we should design our institutions, such as schools and hospitals, to serve, or ultimately, how we should design human societies. The answer is to promote well-being, equality, autonomy, justice and so on – the very principles that political philosophy has always debated. And which valid ethical principles should robot design be allowed to break? None, of course. But what we need to know is which ethical principles are valid (the motion mentions the principles of beneficence, non-maleficence and autonomy, as well as fundamental rights, such as human dignity and human rights, equality, justice and equity, non-discrimination and non-stigmatisation, autonomy and individual responsibility, informed consent, privacy and social responsibility). On these questions, philosophical reflection on the principles of ethics and the political goals of society will complement practical knowledge in the field of robotics.

Arto Laitinen, Tampere University of Technology

Contact information

Consortium director

Ville Kyrki

ville.kyrki@aalto.fi

Project manager

Timo Brander

timo.brander@aalto.fi