I hope there will be “artificial persons” willing to act for the sake of humanity, not against it, says Gabriela Bar, an expert in the field of new technology regulations, in conversation with Monika Redzisz

Monika Redzisz: You’ve been trying to answer the question of whether legal personality should be ascribed to artificial general intelligence. To me, this is pure futurology. We don’t know when, or even if, this kind of technology will be created. Isn’t it too early for this sort of question?

Gabriela Bar*: You are right. We can’t say categorically that artificial general intelligence will or will not come into existence. However, taking into account all the breakthrough technological inventions we have seen in recent years, including artificial intelligence, I am inclined to believe that it will. Maybe it will be developed in parallel with work on one of the systems dedicated to a specific task. Irving Good, a British mathematician, wrote about an intelligence explosion as early as 1965. When we finally design artificial intelligence systems capable of programming other AI systems, we may experience a technological snowball effect that takes us to the next level. Many renowned researchers specializing in this domain, e.g. Nick Bostrom, Stuart Russell and Max Tegmark, claim that it is possible and that it might happen before the end of this century.

But there is another path: trying to perfect the human being, to get into the brain. At what point will we still be dealing with a human being, and at what point with artificial intelligence? What marks the end of a human being and the beginning of a robot? It’s an open question.

OK. Let’s assume that strong AI will be created. Is it really necessary to think about legal solutions before that happens?

If we’re not prepared before it happens, it will be too late.

Why? Today we can’t even imagine what it will look like. Will it be similar to a human being, will it be a single entity or will it be more like a swarm of bees? Or maybe it will not have any physical form at all? How can we prepare for something we can’t even imagine?

It’s true that we cannot draft any specific regulations yet. But it is never too early to discuss the issue. In my opinion, holding talks about it is vital. We can ask fundamental questions that will help us work out different ideas. If we tackle the matter now, we will experience fewer problems in the future.

What is the most important aspect?

I am most interested in social interactions between artificial intelligence and human beings. AI’s impact on our everyday life is huge. I’m talking both about the android form of AI, i.e. humanlike robots, and about other forms, such as smartphone apps, voice assistants, self-driving cars, and smart homes. For instance, the sexbot industry has been developing for quite some time now. There are also robots that take care of the sick and the elderly. People get attached to them, they can’t live without them, they even fall in love with them! And what will happen in the future, once they become even more capable?


You are talking about emotions, and emotions may be surprising. But why would we have to think about legal regulations and about ascribing personality to those… things? You have explicitly referred to their moral right to personality.

For now, we consider them things, but what makes an object different from an entity? Is it consciousness? An ability to feel? We say that conscious beings that can feel, such as people and animals, deserve rights. The moral aspect would become important if we decided that artificial intelligence had something that might be called consciousness.

That would be hard for me to accept, especially without having a clear definition of what consciousness really is. We don’t know how it is created, where it is located and when it ends.

That’s true. The difficulty of defining consciousness makes it hard to clearly specify the criteria that AI should meet to be classified as an entity rather than a simple tool. The definition of consciousness I personally find convincing is the one presented by Max Tegmark, who says it is the ability to have subjective experiences (qualia). It is because of qualia that we have ascribed a limited moral status to animals, which lets me assume that if an artificial intelligence system had subjective experiences, it shouldn’t be treated merely as a tool. We would need to create a certain legal fiction in the form of some kind of legal personality. Today nobody denies that animals are not mere things. They are not, because they are conscious and able to feel, for example, pain or pleasure.

This is a fundamental difference. Being an animal myself, I can assume, by analogy, that another animal is conscious and able to feel pain and pleasure just like me. But that analogy would disappear if I tried to compare myself to a computer, which is a completely different kind of entity. How can I tell whether it can feel anything?

I admit it’s a problem, but one that results only from how we perceive things. Even if that sort of consciousness consisted of something different from the consciousness of humans and animals, and even if the entity were made of silicon rather than proteins and carbon, I still believe there would be no reason to deny certain rights to such a “form of life”. I don’t want to say that those rights should be the same as ours or as the ones we have given to animals. But we should take into consideration the nature of those potentially conscious beings with a potential ability to feel.

But why would we grant them any rights?

I’m a supporter of the idea called permissivism, according to which the law should be an adaptable tool used to shape social relationships. All legal concepts are specific agreements that are commonly accepted in certain groups for one reason or another. Marriage, voting rights, money – they are all legal fictions that have a great impact on our lives and that allow large social groups to function in an orderly way. You also have to take into account that there are different kinds of legal persons. For example, commercial companies have legal personality despite the fact that they are not physical entities – they are organizations established under legal regulations which make it easier for us to do business.

If an artificial intelligence system has subjective experiences, it shouldn’t be treated merely as a tool

Does that mean that, in terms of personality, artificial intelligence may be more similar to a company than to a human being?

I think so, although I can’t say I’m entirely sure about it.

But all legal fictions are developed for a reason: to serve the interests of a specific group. Who would need a fiction connected with AI personality?

We would. It’s about the most beneficial way to regulate our relations so as to achieve the highest level of protection. It’s about how to establish relations between humans and advanced AI so that both can function together in society.

Wouldn’t it be more convenient for us if we stuck to the idea that artificial intelligence is just a tool? If we ascribe legal personality to it and allow it to have legal rights, we may find ourselves in a situation where it will be easier for AI, once it becomes more intelligent than we are, to crush us like bugs.

This can happen regardless of whether we ascribe legal personality to AI. Treating AI whose intelligence surpasses ours as a tool seems to me both unrealistic and immoral. I hope that in the process of creating artificial general intelligence we will manage to instill certain ethical standards into it. I may sound too optimistic, but I hope there will be “artificial persons” willing to act for the sake of humanity, not against it. If there is ever a hybrid society consisting of real and artificial people, an approach based on cooperation rather than control will certainly be more beneficial. And before you ask: yes, I am aware of a trap we may fall into. Cultural differences may hinder the implementation of those ethical standards. We may experience a conflict of values, for example between personal well-being in the West and social well-being in the East. We may find it hard to decide which values to implement, which could result in different countries or regions of the world developing different solutions.

Will such artificial persons be responsible for their actions? Will an autonomous doctor be held responsible for making an incorrect diagnosis? And if so, how will it be punished? You can’t imprison it. Can you order it to pay a fine? Will it possess any property?

For practical reasons, I don’t think AI could become fully independent. No company manufacturing artificial intelligence systems would ever agree to forgo the potential income it could generate. I’m thinking more about a concept similar to the autonomy of legal persons. Artificial persons would have their rights (including the right to own property) and obligations, and they would be liable under certain rules, but it would also be possible to sue the person (natural or legal) that is “behind” the AI. That would probably not involve criminal liability. We could also introduce the possibility of switching AI off, i.e. of implementing the so-called Kill Switch…

What’s that?

Let’s say we’re talking about an automated healthcare center, a company “employing” autonomous artificial persons – doctors with a certain legal personality, earning and investing money, for example by buying and selling real estate, shares, etc. With the money in their bank accounts, or with any other property they possess, those artificial persons could be liable, for example, for damages caused by an incorrect diagnosis. Additionally, we would need to introduce subsidiary liability of the company itself, which would be obligated to pay compensation if an artificial doctor were insolvent.

Artificial persons would therefore have a certain degree of autonomy and would decide on their own actions, but in the end it would be people, e.g. the company’s management board or its shareholders, who would make the most important decisions about the fate of AI, at least as long as artificial intelligence is not fully independent and until it starts employing people in companies of its own.

But why would I be responsible for the decisions made by autonomous systems?

You would be responsible for the decisions made by autonomous systems just as you are responsible for the decisions made by your employees. There are two main types of liability: fault-based liability and risk-based liability. As far as the fault-based approach is concerned, I am afraid the injured party would not be able to prove that the artificial intelligence owner, e.g. an international corporation, is at fault. The company would probably say: “Wait a minute. We designed the system in accordance with all ethical standards, legal regulations and expert recommendations. The system made an unexpected, independent decision. It’s not our fault”.

You are right. Artificial intelligence is learning all the time and is constantly being fed new data. I guess you can say that the AI systems we are dealing with today are different from what they were six months ago, when they were commercialized.

Exactly. That leaves us with the risk-based rule, according to which, if something bad happens, the responsibility rests with a given entity, e.g. the manufacturer, irrespective of what the causes were and who was at fault. It doesn’t matter whether the entity was involved in the incident or not; what matters is that it was the one that commercialized that specific AI. Today such rules apply to hazardous products. If you buy a TV and it explodes in your living room, you are entitled to claim damages from its manufacturer.

If there is ever a hybrid society consisting of real and artificial people, the approach based on cooperation and not on control will be more beneficial

That’s the direction the European Union is going in: treating artificial intelligence as a hazardous product. It’s a good approach, although I still believe we do not take the autonomy of these systems seriously enough. They will become increasingly capable and will learn ever faster. Do you remember Tay, the Microsoft bot that became a foul-mouthed, sexist, racist Nazi in just 24 hours? If a seemingly simple bot could get out of control, I dare say that artificial general intelligence systems can also act beyond the context they were designed for. That is why we need to implement efficient safety and liability mechanisms as soon as possible.

It is similar to how we are responsible for our kids. Even though they become more independent over the years, their parents are responsible for their actions until they turn 18. And no matter how deep in trouble they get, there is no way we can incapacitate them.

That’s true. Whenever we talk about artificial intelligence we have to face the same problem: we want it to be autonomous, but we also know that it must remain under control. On the one hand we want artificial intelligence to be as powerful as possible, to learn more and to be able to perform more tasks. On the other hand, we want it under control and we want humans to stay in the loop.

I feel that those two objectives are impossible to reconcile. We have to choose. We will either have super-autonomous artificial intelligence with a certain kind of legal personality and property, for which we will be liable only on a subsidiary basis (we might also consider some sort of insurance), or our artificial intelligence will have no rights, property or personality and we will treat it as a tool or a slave. But in that case we will be fully responsible for its actions, even for decisions it makes beyond our control.

The European Union calls for regulations and stresses how important the control of that tool is. In the USA, however, AI is developing in a more chaotic and uncontrollable way.

The difference in approach to AI may result from differences in the legal systems. In Europe statutory law is dominant; almost all aspects of social life are codified. The American system is based on common law. But that does not prevent US legal experts from working on ethical standards for autonomous systems. I am currently a member of the law committee of the Institute of Electrical and Electronics Engineers (IEEE), an American organization of engineers and researchers that develops technical standards. For many years IEEE has been working on Ethically Aligned Design, a document setting standards for autonomous systems (it is currently working on the second, practical part). The IEEE guidelines on ethical rules for AI are very similar to those agreed by the EU HLEG experts.

On the other hand, Europe is rather conservative when it comes to artificial intelligence, which is treated more like a tool. Researchers working in the USA, such as Max Tegmark, Nick Bostrom or Stuart Russell, have boldly presented their visions and have never been afraid of looking far ahead. That’s very inspiring. We cannot foretell the future, so there is no point in focusing on one scenario. But the discussions we hold help us open our minds and broaden our perspectives.


*Gabriela Bar is managing partner at Szostek_Bar and Partner Law Firm, a legal counsel and doctor of laws. She is a civil law expert specializing in new technology law, intellectual property law and company law. She is a member of the IEEE law committee, a member of Women in AI and AI4EU, and a member of the Association for New Technologies Law, a research associate in the Center for Legal Problems of Technical Issues and New Technologies at the Faculty of Law and Administration of the University of Opole, and a winner of the Corporate LiveWire Global Awards, holding the titles of GDPR Advisor of the Year 2018 and Technology Lawyer of the Year 2018 & 2019

Gabriela Bar is a co-author of the monograph “Artificial Intelligence Law”, edited by Luigi Lai (National Information Processing Institute) and Marek Świerczyński, PhD, Eng (Cardinal Stefan Wyszyński University in Warsaw), which has recently been published by C. H. Beck. The book focuses on legal aspects of artificial intelligence in various domains of life and responds to the recommendations and proposals included in the “White Paper on Artificial Intelligence” published in February by the European Commission.

Read the Polish version of this text HERE