MAX TEGMARK: Of all the words I know, there isn't one that makes many of my colleagues more emotional and prone to foaming at the mouth than the one I'm about to say: consciousness. Many scholars dismiss it as complete BS and totally irrelevant, and many others think it's the main thing, that you have to worry about machine consciousness and so on. What's my opinion? I think consciousness is both irrelevant and very important. Let me explain why.
First of all, if you're being chased by a heat-seeking missile, it's completely irrelevant to you whether that missile is conscious, whether it has a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. This shows that believing you're safe from future AI as long as it isn't conscious is completely mistaken. It's the AI's behavior that you want to make sure stays aligned with your goals.
On the other hand, there is a way in which consciousness is very important, I feel, and also a way in which consciousness is very wonderful. Go back 400 years or so to Galileo: he could tell you that if you threw an apple and a hazelnut, they would move in exactly the shape of a parabola, and he could give you the equations for that, but he had no idea why apples are red and hazelnuts brown, or why apples are soft and hazelnuts hard. That seemed to him beyond science, and 400 years ago science could only say meaningful things about a very limited domain of phenomena, those related to motion. Then came Maxwell's equations, which told us all about light and colors, and that became the domain of science. Then we got quantum mechanics, which told us why apples are softer than hazelnuts, along with all the other properties of matter, and science gradually conquered more and more natural phenomena. If you ask now what science can say something about, it's much quicker to list the few things science cannot yet meaningfully talk about. I think the final frontier is actually consciousness. People mean many different things by that word; I simply mean subjective experience: the experience of colors, sounds, emotions and so on. It feels like something to be me, and that is completely separate from my behavior, which I could in principle have even if I were a zombie who didn't experience anything at all.
Why should you care about that? I care, first of all, because it's the most basic thing we know about the world: my experiences. And I'd like to understand scientifically why that is, not just leave it to the philosophers. Second, it's also very important for purpose and meaning. There is nothing about meaning in the laws of physics, no equation for it, and I feel we shouldn't look to our universe to give meaning to us; rather, we are the ones who give meaning to our universe, because we are conscious and experience things. Our universe used not to be conscious. It was just a bunch of stuff moving around, and gradually these incredibly complex patterns got arranged in our brains, and we woke up, and now our universe is aware of itself. We have incredibly beautiful galaxies, and why are they beautiful? Because we are consciously aware of them; we see them in our telescopes. If we screw up with technology in the future and all life goes extinct, our universe will go back to being meaningless, just a gigantic waste of space, as far as I'm concerned. And when a colleague tells me that they think consciousness is BS, I challenge them to tell me what's wrong with rape and torture, and I ask them to explain it to me without using the word consciousness or the word experience. Because if they can't, then the whole thing they say is so bad is just a bunch of electrons and quarks moving in one particular way instead of some other particular way, and what's so bad about that?
I feel that the only way we can actually get any logical, scientific grounding for ethics, morality, purpose and meaning is precisely in terms of experience, in terms of consciousness. And that makes it really important, as we prepare for our future, that we understand what this is. I think it's actually something we can eventually understand scientifically. I don't think the difference between a live bug and a dead bug is that the live bug has some kind of secret life force; I think of bugs as mechanisms, and a dead bug is just a broken mechanism. Likewise, I think what makes my brain conscious, while the food I ate, which got rearranged into my brain, was not conscious, isn't that they're made of different kinds of stuff. It's the same quarks, just rearranged, right? It's the pattern in which they are arranged. And I think it's a scientific question: what properties must that pattern of information processing have in order for there to be subjective experience? You can imagine building a brain scanner (we actually have very good hardware at MIT, where I work) together with software that implements whatever theory of consciousness you have and makes predictions you can test. If you're sitting in that machine and the computer screen says, "I now see information processing in your brain corresponding to you being consciously aware of the idea of an apple," you're impressed: yes, that's right. Then it says, "I see information about your heartbeat in your brain, and you're consciously aware of it," and you go: no, I wasn't aware of that. Now I've falsified the theory implemented in the software, so it's falsifiable, which makes it a scientific theory.
If one day we can find a theory like this (there are some candidates on the market, such as Giulio Tononi's integrated information theory), a theory that keeps passing tests like these so that we start taking it seriously and can use it to build a consciousness detector, that would be really useful. In the first place, physicians in the emergency room would love to take an unresponsive patient, put them in a consciousness scanner, and find out whether they have locked-in syndrome and are conscious but unable to communicate, or whether nobody is home. It would also let us figure out whether the future AI systems we're building are conscious, and whether or not we should feel guilty about switching them off. Some people might prefer that the robot helping them around the house in the future be an unconscious zombie, so they don't have to feel guilty about giving it boring chores or turning it off. Others might prefer it to be conscious, so that there can be positive experience in there, and so they don't feel creeped out that the machine is just faking it, pretending to be conscious when it's really a zombie. And most importantly for the long-term future: if, far from now, life spreads from Earth to other galaxies and the whole universe comes alive and does amazing things, and that life is a descendant of humanity, wouldn't it suck if it all turned out to be unconscious zombies, and everything we felt before we died was just a play performed for empty benches? I feel we really have to tackle this ultimate frontier of our scientific ignorance, the problem of consciousness, and figure these things out, so that we can shape a really great future: not just one where great things seem to be happening from the outside, but one where there is actually someone home to experience it all.