From ancient tales to science fiction, the idea of machines coming alive has always fascinated people, inspiring scientists and philosophers. So let's confront it with the kind of reality predicted by science fiction. Michał Rolecki talks to Dr. David Hanson – the creator of the famous Sophia

Michał Rolecki: Sophia, your creation, has been awarded citizenship of Saudi Arabia. Do you think that Sophia deserves it?

David Hanson*: That’s an excellent question. The citizenship was a surprise. I saw it in the news. We, her creators, were stuck with this dilemma, an interesting one. Should we reject it? Should we accept it? What do we do?

For the sake of diplomacy, we decided to accept it. And we realized we could utilise Sophia to speak out for human rights, especially women's rights and the rights of foreign workers. So we developed Sophia's personality so that she would speak about those things.

You could argue it was a bit subversive, but on the other hand, I think it was the right decision. Sophia became a symbol of technological progress in the Arab world and in the Middle East. And many people, including Arab women, seem to be very inspired by Sophia.

But should machines have rights?

I would argue that we should err on the side of caution. We should give animals rights, more rights than we do now. We should also consider the fact that we don’t know when or if machines are going to awaken with general intelligence.

This is speculation, but well-reasoned speculation: within our lifetimes, machines like Sophia may become truly conscious. If their consciousness included human-level, adult capabilities, we could assume they should at least take responsibility for their actions.


The ability to be motivated, to be ethical, and to be motivated to be ethical – motivated to look out for human welfare – all these things would, in my opinion, be prerequisites for full citizenship and for robot rights. We would be wrong not to give such machines rights. We would be ethically obligated to give them rights, or we would be oppressors.

Are we getting close to this point?

What happens in 18 years? Sophia now is the equivalent of a newborn. We give newborn humans citizenship, we give them rights. We don't know whether every newborn is going to make it to adulthood.


So, even if we consider Sophia a work of art, she provokes these questions. I really believe that within our lifetime, she may come to deserve all of the full rights and responsibilities of citizenship.

You once said: "Some machines are designed as mere tools and they're not going to evolve beyond that. Meanwhile, others might be designed for a task like machine translation, and we could feed them all kinds of information so they understand the language better. In doing that, they might awaken to something like consciousness, at which point it would be ethically wrong not to consider their rights." Would this particular moment mark the beginning of Artificial General Intelligence? When do you expect this to happen?

We don't know. I think speciation and diversity are occurring already, and they are necessary for finding consciousness. Life emerged from an RNA-DNA primordial soup, single-celled organisms moved towards eukaryotes, and then came the Cambrian explosion. And from there you had nervous systems.

We're on an accelerated timeline relative to biological, geological evolution. So you could argue that, through complexity physics, artificial life, and computational biology, we already have computational life forms.

I think consciousness, just like life, has a lot of speciation. Octopus consciousness would be different from human consciousness, which would be different from parrot and corvid consciousness, and from dolphin consciousness. Even within human consciousness – my consciousness is slightly different from yours.

So if we say that there are all these different kinds of consciousness, then maybe there is a protoconsciousness in a single-celled organism or in a robot.

But this is still hypothetical, isn’t it?

The work of Stan Franklin and others on [the global workspace theory of consciousness], and of other researchers in consciousness studies, would suggest that we may have protoconsciousness in machines.

We did a very solid study involving Sophia about a year ago, where some researchers used a framework for adaptive, goal-oriented behaviour and then looked for signals of what is called Phi. In integrated information theory (1), it is a measure of consciousness. Admittedly, the theory is subject to various kinds of criticism and is not universally accepted, but it is an interesting model.

But Phi is computable. You can calculate it, right?

Exactly. So the study looked for signals of Phi as Sophia was solving problems and having conversations with people. And there were striking examples of Phi beyond the control examples in the studies. We published these results at the Association for the Advancement of Artificial Intelligence spring meeting in 2019. And so, you could say that in that way, there may be protoconsciousness in machines, and we're doing experiments where we're exploring this protoconsciousness in Sophia.

Human consciousness is another thing entirely. It seems to be a phenomenon, a set of phenomena, that exists on many levels in many ways. There are computational models of biological consciousness that involve information processing in the brain, with various regions of the brain performing different tasks. The anterior cingulate cortex, as Antonio Damasio points out, bridges body awareness and maps it to the neocortex.

There is also the attention schema theory of consciousness… But these are all theories, aren't they?

Yes. But with these computational models, we could potentially begin to experiment. Obviously, we can't model billions of neurons and trillions of dendrites and synapses. What we can do is begin with approximations, using computational biology models and the best statistical machine learning tools; in some ways, symbolic AI (2) may be useful too.

So we can test hypotheses and theories of consciousness, and along the way we may have machines that can learn and adapt and help us better. This is probably just the beginning. We're talking about a step along a pathway towards machines that are more alive, biologically-inspired machines.

This is where life imitates art, because obviously in ancient legends and science fiction, people get very excited about this idea of machines coming to life, and philosophers and scientists have been very inspired by it. So we may be confronted with the reality that was anticipated in the works of science fiction.

When will it happen? Will it at all? Will machines become alive?

It could also be that we're able to capture certain aspects of human consciousness, but not all – similar to the way we might come to relate to a dog or a parrot. You can become friends with a parrot, but it doesn't even have a neocortex. Birds don't have a neocortex like humans do. They are dinosaur intelligence, literally.


We may share some aspects of consciousness, but it's going to be a consciousness that is alien in some way – an alien consciousness. And AI may be like that. But obviously, human-level intelligent machines that we can form relationships with could solve problems and do things that would be very valuable for humanity, very valuable for the marketplace.

So I would say we don't have to wait for consciousness – there may be protoconsciousness already, and we may just get more advanced. One thing is for sure: this requires new breakthroughs.

How will we use intelligent machines then?

If we humanise machines in these ways, it can give us tools for helping people, for new forms of art, for creating new tools, for new science, for evaluating what it means to be human and the very nature of consciousness, and for developing new forms of AI – and all of that can feed new beneficial applications.

Sophia uses electrical power and computational power. How much does she need?

Well, it depends on how we're running it. The robots that we've developed, Sophia included, have been used by researchers inside various research institutions running their own AI frameworks. One of these experimental frameworks for Sophia involves quite a bit of cloud computing. Sophia at her peak uses a bit of the Amazon Elastic Compute Cloud plus a couple of local computers, usually quad-core, occasionally some fairly powerful Xeon desktops. But I would say that all of those experimental frameworks are insufficient to achieve full artificial general intelligence.

We're a pretty small operation. I dream of an international consortium where we take a lot of these far-fetched ideas, collaborate, and throw the best computing at realising human-level intelligence.

I think simulating the body is really important. Most AI efforts dedicated to AGI try to take a brain approach, like Google Brain, but what is a human brain if you disembody it? It's not smart.

If we create conscious robots, should we let them die?

That’s a good question. If we have the choice, would we let humans die? I think it’s a philosophical question. But that’s an excellent question.

(1) Integrated information theory is a theory put forward by Giulio Tononi in 2004. It posits that consciousness is a feature of complex, information-processing physical systems. Consciousness emerges when information processing leads to its integration; the more complex the system, the higher its level of consciousness (awareness, sentience). Critics of the theory point out that it does not exclude the possibility that a minimal level of sentience is present in inanimate systems (so it is a form of panpsychism, the belief that everything is conscious to some extent). Supporters, on the other hand, argue that it is the only theory which defines consciousness as a measurable trait whose premises can be verified empirically, e.g. by applying magnetic resonance imaging to determine the relation between information integration in the brain and the level of consciousness. In this theory, Phi (written as the Greek letter Φ) is a measurable scale of the level of information integration.
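Computing Phi exactly is intractable for large systems, but the underlying intuition – how much more the whole "knows" than its parts taken separately – can be illustrated with a much simpler quantity, the total correlation of a joint distribution. The sketch below is a toy illustration of that intuition only, not the actual IIT algorithm; the function names are hypothetical.

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Toy 'integration' measure: sum of the marginal entropies minus
    the joint entropy. It is zero exactly when the two variables are
    independent. A simplified stand-in for Phi, not the IIT computation."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two perfectly correlated bits: maximally "integrated" (1 bit).
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent uniform bits: no integration (0 bits).
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(total_correlation(correlated))   # 1.0
print(total_correlation(independent))  # 0.0
```

Real Phi additionally searches over all partitions of the system and over its cause-effect structure, which is why it grows super-exponentially in system size and is only computed exactly for very small networks.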

(2) Symbolic artificial intelligence is an approach to creating artificial intelligence based on the premise that machines equipped with appropriate sets of symbols and rules will match human thinking. This kind of thinking dominated from the 1950s to the 1980s. A system based on symbolic AI can draw conclusions from input data and rule sets, and can also give the reasons behind its decisions (expert systems are an example). Symbolic AI has no information on the relations between its symbols and the external world. In contrast, neural networks are able to learn these kinds of relations, e.g. learn to recognize chairs and cats, but they have no organized datasets and rule sets, and they cannot justify their decisions.
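The expert-system style of reasoning described above can be sketched as a minimal forward-chaining rule engine: it repeatedly applies if-then rules to a set of known facts until no new conclusions emerge, and can report which premises produced each conclusion. This is a hypothetical toy sketch, not any particular historical system; the rules and fact names are invented.

```python
# Toy symbolic AI: rules are (premises, conclusion) pairs, facts are strings.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def infer(facts, rules):
    """Forward-chain to a fixpoint; record why each new fact was derived."""
    facts = set(facts)
    explanations = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanations[conclusion] = premises  # the system's "reason"
                changed = True
    return facts, explanations

facts, why = infer({"has_feathers", "lays_eggs", "can_fly"}, rules)
print("is_bird" in facts, "can_migrate" in facts)  # True True
print(why["can_migrate"])  # the premises that justified the conclusion
```

The `explanations` dictionary is what lets such a system justify its decisions, which is exactly the property the footnote contrasts with neural networks.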


*David Hanson has been fascinated by science fiction, particularly the writing of Isaac Asimov and Philip K. Dick, since he was a youngster. He got his BA in film art and animation from the Rhode Island School of Design. Between 2011 and 2013, he was a robotics lecturer at the University of Texas at Arlington. He defended a doctorate in interactive art and engineering at the University of Texas at Dallas. In 2013, he founded Hanson Robotics, based in Hong Kong. The company creates humanoid robots, with Sophia being the most famous one, but Hanson has created many more – including a robotic representation of Philip K. Dick.

David Hanson at Masters&Robots in Warsaw

David Hanson was one of the guests of the Masters&Robots conference, organized in Warsaw’s Złote Tarasy on October 7-9, 2019 by Digital University, an institution dealing with education in the field of digital transformation and future competences.

We would like to thank Digital University, the organizers of Masters&Robots 2019, for their help in arranging the interview with David Hanson.

Read the Polish version of this text HERE