What Studying Consciousness Can Reveal about AI and the Metaverse (with Anil Seth)
The workings of the brain have long puzzled scientists and philosophers, but the last twenty years have been a golden age for consciousness research.
Cognitive and computational neuroscience professor Anil Seth is at the cutting edge of that work. He and Azeem Azhar discuss theories on how and why our brains work the way they do and explore how learning more about those questions could lead to new discoveries in medicine, AI, and virtual reality.
They also address:
- How is it possible to scientifically study a subjective experience?
- What is the difference between consciousness and intelligence, and can you have one without the other?
- If a computer becomes conscious, but it doesn’t want to share its self-awareness, would we ever know?
‘What Is It Like to Be a Bat?’ – Thomas Nagel (1974)
Being You, Anil Seth – Penguin Random House
AZEEM AZHAR: Welcome to the Exponential View podcast. I’m your host, Azeem Azhar. Every week I speak to the people who are shaping the near future. So far in this series we’ve had experts in everything from fusion and quantum computing to cryptocurrency and the future of the car. Now this week’s episode is a little different. Bear with us. It is just as mind-expanding. My guest today is Anil Seth, a professor of cognitive and computational neuroscience at the University of Sussex. He is a friend of mine and the author of a recent book, Being You: A New Science of Consciousness. In it, Anil posits that what we think of as reality is a series of controlled hallucinations. We construct our version of the world according to our preconceptions and best guesses. Both the science and the philosophy of consciousness are fascinating, and they have gripped me for more than 30 years. Recent developments hint at a range of real-world applications that could change the way we live. From clinical uses to applications in virtual reality and artificial intelligence, the science of consciousness touches on so many exciting areas, and no one is better placed to explain why than today’s guest. Anil Seth, welcome to Exponential View.
ANIL SETH: Hi Azeem, it’s really great to be here. I’m glad we’re able to talk now.
AZEEM AZHAR: And I am glad that you have summoned up the energy to be here. And I think this is going to be the first time I feel I’ll be able to keep pace with you, only because you are slightly under the weather while I’m feeling perfectly fine. So, thank you for giving me that slight handicap advantage.
ANIL SETH: Let’s see how that goes.
AZEEM AZHAR: Well, we met several years ago when there was a social media meme that went crazy. It was about a dress: whether it was black and blue, or white and gold. And we were both asked to go on television to talk about it. I had to talk about why Kim Kardashian was tweeting it, and you got to talk about why we perceive things the way we do, which is really the heart of your work and your professional career over the past decades. Now, consciousness has occupied thinkers for millennia. We think about Descartes, or Thomas Nagel’s paper “What Is It Like to Be a Bat?” And in the ’90s there was a lot of emphasis on new ideas, perhaps relying on new instrumentation like MRIs, and other kinds of experiments that we could use to understand what consciousness is. Take us through your view and how you got there.
ANIL SETH: My approach to this question actually touches on a couple of the things you mentioned. The first thing is you’ve got to start with a definition. What do we mean by consciousness? There are all sorts of definitions out there. But I mean something very specific, very biological, very personal. It is any kind of subjective experience. And this is what the philosopher Tom Nagel said, of course. He said, “For a conscious organism, there is something it is like to be that organism.” It feels like something to be me. It feels like something to be you, right? But it doesn’t necessarily feel like anything to be a table or a chair or an iPhone. There’s what David Chalmers called the hard problem. You have this world made of physical stuff, made of material, atoms or quarks or whatever it might be. And somehow out of this world of physical interactions, the magic of consciousness emerges or arises. And it’s called the hard problem because it seems almost impossible to solve, as if no explanation in terms of physical goings-on could ever explain why it feels like anything to be a physical system. But we are existence proofs that it does. So instead of addressing that hard problem head on, my approach, and it’s not only my approach, it builds on a history of similar approaches, is to accept that consciousness exists, and instead of trying to explain how it’s magicked out of mere mechanism, to break it up into its different parts and explain the properties of those different parts. And in that way, the idea or the hope is that this hard problem of consciousness, instead of being solved outright, will be dissolved, in much the same way that we’ve come to understand life, not through identifying a spark of life, but through explaining its properties as part of this overall big concept of what it is to be a living system.
AZEEM AZHAR: The hard problem that Chalmers talks about, I guess back in the mid-nineties, perhaps when you were an undergraduate, is a really, really tricky one. But even the easy problems of consciousness, how the mechanisms function, were pretty difficult. Your approach tackles neither the easy problems nor the hard problem; you call it the real problem. Why do you say it’s the real problem?
ANIL SETH: Well, partly to wind up David Chalmers. I mean, he’s been a fantastic influence on the field, of course, but dividing the terrain between the hard problem and the easy problems, I think, forces people to ignore consciousness entirely. If you focus on the easy problems you’re studying all the things that brains are capable of that you can think about without needing to think about consciousness. These are challenging problems, but they’re not conceptually difficult in the same way that the hard problem is. And so if you divide it this way, you’re either sweeping consciousness under the carpet, or you are facing this apparently unsolvable mystery. So I call it the real problem, simply to emphasize that yes, we have conscious experiences, and importantly, consciousness is not one single big, scary mystery. It can be addressed from different angles. We can think about what’s happening when you lose consciousness under anesthesia or in sleep. We can think about perception. Why do some people see a white and gold dress, and why do other people see a blue and black dress? And then for me, the most interesting aspect is that we can think about the self. Now, the self is not a sort of essence of you that sits somewhere inside the skull, doing the perceiving. The self is a kind of perceptual experience too, and it has many properties. The experience of being a body, the experience of free will, all these things are aspects of selfhood. And I think we’ll make a lot more progress by addressing these aspects of consciousness somewhat separately. We can take the approach of trying to explain what makes them distinctive and get a lot further in understanding why our conscious experiences are the way they are. And as we do that, what’s happening, certainly for me, is that this hard problem seems to lose its luster of mystery a bit. We’re doing what science always does, which is we’re able to explain, predict and control the properties of a system.
And there’s no reason we can’t do that when it comes to consciousness. That’s the real problem of consciousness.
AZEEM AZHAR: One of the things that we can do, and this comes back to our own experience and to the Nagel paper, is recognize that there is this quality of being a thing, and this sense of self, and this sense that we have of consciousness. But let’s take a step back. If we know that consciousness is there, why do we have it?
ANIL SETH: I don’t think there needs to be any single reason why consciousness is part of the universe. We don’t know how far it extends, either. But for all creatures that are conscious, I think there’s a good hint about function when we think about what we can call the phenomenology of consciousness, what our experiences are actually like. If you think about your conscious experience at any particular time, it brings together a vast amount of information about the world in a way that’s not reflecting the world as it is, but is reflecting the world in a way that’s useful to guide your behavior. You experience all of these things in a unified scene, together with the experience of being a self, together with the experience of emotion, things feeling good or bad, and with the opportunities that you have to act in the world. So there’s this incredibly useful unified format for conscious experiences that provides a very efficient way for the organism to guide its decision making and its behavior in ways that are best suited, basically, to keeping the organism alive over time. And actually that’s how I ground my whole ideas about consciousness. They’re fundamentally rooted in this basic biological imperative to stay alive.
AZEEM AZHAR: So that is evolution all the way down. And we have evolved this capability because it helps us make sense of all of our experiences, all the stimulation that we get from the external world, and take it into ourselves so that we can experience it in ways that allow us to survive and potentially thrive, take the right kinds of actions, that sort of thing.
ANIL SETH: That’s right. But it’s also worth emphasizing that the self is not the recipient of all these experiences. The self is part of that experience. It’s all part of the same thing. And this is one of the more difficult intuitions to wrap one’s head around. When thinking about consciousness, I always use this heuristic: I remind myself that how things seem is not necessarily how they are. So it seems as though we are perceiving the world as it really is. That colors, like the color of the dress, exist objectively out there in the world. Now, stuff does exist out there in the world, but the way we experience it, especially for something like color, depends on the mind and the brain too. And it seems as though the self is the thing that’s receiving all these perceptions, but that again is not how things are. The self is also a kind of perception. And the fact that it’s all integrated into a unified conscious experience, where we experience the self in relation to the world, that I think points to the function of consciousness: that it’s useful for guiding the behavior of the organism.
AZEEM AZHAR: This key idea. You have this sentence: the purpose of perception is to guide action and behavior to promote the organism’s prospects of survival. We perceive the world not as it is, but as it is useful for us. So this is the rationale for why consciousness exists. And you then connect it to the notion of a controlled hallucination, capturing the idea that in a way consciousness is directing what we… I hesitate to use the word choice, but what we choose to access from the real physical world, this mechanism of controlled hallucination.
ANIL SETH: It’s a bit of a tricky term for thinking about perceptual experience, because there’s a lot of baggage to a word like hallucination. The reason I use controlled hallucination to describe perceptual experience is to emphasize that all of our experiences are generated from within. We don’t just receive the world through the transparent windows of the senses. What we perceive is the brain making a best guess, an inference, about the causes of its sensory signals. And the sensory signals that come into the eyes and the ears and all the senses, they’re not just read out by the self inside the brain. No. The sensory signals are there to calibrate these perceptual predictions, to update these perceptual predictions, again according to criteria of utility, not necessarily according to criteria of accuracy. So the control is just as important as the hallucination here. I’m not saying that our perceptions are all arbitrary or that the mind makes up reality. No. Experiences are always constructed, but they’re tied to the world in very, very important and, as we’ve just said, evolutionarily sculpted ways, so that the way we experience the world is in general useful for the organism. So what we might think of as hallucination colloquially, when I have a visual experience that nobody else does and there’s nothing in the world that relates to it, you can think of that as an uncontrolled perception, when this process of brain-based best guessing becomes untethered from causes in the world.
AZEEM AZHAR: There are a few words that you used in your last answer: inference and prediction and utility. And these are all words that we might use when we’re talking about artificial intelligence. So thank you for putting those words out there, because when we talk about AI later in this discussion, we will come back to some of them. But let’s go back to this notion of consciousness having a purpose: it helps an organism’s prospects for survival, and there is this notion of a kind of controlled hallucination, given all of these signals that are coming toward us. Now, for this to be a scientific theory, we have to be able to test it. We have to be able to run experiments on aspects of these assertions. So once you’ve made an assertion like that, what are the kinds of experiments that you can run to demonstrate parts of this theory?
ANIL SETH: This is a really good question, because, of course, theories need to be testable in order to have any traction and to have a future. The idea of the brain as a prediction machine does have a long history. And you can take that idea and generate a lot of testable hypotheses from it. For instance, a whole range of work, some of it from my own lab, some from other labs, asks how our perceptual experience changes based on the expectations that our brain explicitly or implicitly has. If this controlled hallucination view is right, then perceptual content should be determined not just by the sensory signals, but by the brain’s top-down predictions. So we can test this in the lab, in what we would call psychophysical experiments, where we carefully control the stimuli people are exposed to.
AZEEM AZHAR: You sort of prime someone in advance, right? With one cue they might interpret their experience one way, and if you’ve primed them a different way, they’ll interpret it in a different way.
ANIL SETH: Right. This is a very blunt, very simplistic way to get at this. You can, for instance, tell people that seeing a face is more likely than seeing a house. And then you give them a situation which we’ve set up experimentally so that there’s an ambiguous image. And they’re more likely to see what they expect than what they don’t expect, or they’ll see what they expect more accurately and more quickly than what they don’t expect to see. So that’s a very simple kind of prediction that you can make. It by no means validates or proves the whole theory. We need to do brain imaging studies as well. And these are beginning to happen in our lab and in other labs across the world too, where we find that indeed we can read out what people are perceiving by looking at these top-down flows of information in the brain. Certainly in vision.
AZEEM AZHAR: Are these like the experiments we’ve seen recently, where you put someone into an MRI machine that’s looking at their brain and get them to think about a dog, and then you’re able to look at the output and have a system that predicts they were thinking about a dog and recreates what they were thinking about? Is it that sort of thing that we’re talking about here?
ANIL SETH: It’s based on the same sort of idea. So that’s this emerging technology of brain reading, right? Can you decode what someone is looking at or thinking simply by basically chucking a load of brain imaging data into a machine learning classification algorithm? And you can. And there’s a lot of debate in the field about whether this is telling us anything about the brain, or just telling us that machine learning classification algorithms are quite good. But you can do this in a way that’s more constrained by the anatomy of the brain. For instance, you show people an image with a quadrant of it missing, and it turns out a machine learning algorithm can still decode the content of the image from brain imaging data from the part of the visual cortex where there was no stimulation, and indeed from a layer of that visual cortex that receives top-down input. The fact that you can do that is telling you there’s information in this top-down signaling that at least partly determines, or is relevant to, the content of what someone is experiencing. So experiments that build on this kind of approach are helping us disentangle not just which regions are implicated in perception. I mean, neuroimaging has this history and starting point of focusing on hotspots: does this region light up, does that region light up? These days we are moving beyond that, to think about networks and mechanisms and processes, rather than just this area or that area.
AZEEM AZHAR: There is a relationship between scientific theory and the tools that we have to run an experiment. And sometimes the two get somewhat out of sync. I think one of my favorite examples is when Einstein came up with the general theory of relativity in 1916, and he had these ideas of gravitational waves. It took us a century, until the LIGO device was available to actually experimentally prove that theory. When you look at the progress in your field and the types of experiments that have happened, certainly over the last 20 years, do you think that you’ve got the science of consciousness on a path that is more in sync with the tools that we have to do the tests, or is this going to end up being a little bit like general relativity where we have to sort of rely on it and then wait a hundred years before we can prove it?
ANIL SETH: Neuroscience, and especially the neuroscience of consciousness, faces three specific challenges. One is brain imaging. We don’t yet have a single brain imaging technology that can record with high time resolution, with high spatial resolution, that is, from many, many different small parts of the brain at once, and with wide coverage. We can get any two out of three, maybe, or one out of three, but we can’t visualize the activity of a brain in the detail that we would ideally have. That’s one challenge. So developing new technologies that can manage that is not necessarily going to be critical, I think, but it would certainly be helpful. The second challenge is specific to consciousness, and it is that the data by which we test theories of consciousness are of a different kind. There’s subjective data. It’s not the sort of data that we can get from LIGO or the James Webb telescope and all agree about. It’s subjective data. Now, some people say this means you can’t do a science of consciousness at all, because you are dealing with data that is intrinsically private and subjective. I don’t think that’s quite true. I think it just adds a layer of difficulty. There’s a whole tradition in philosophy called phenomenology, which is about how to describe, how to report, what’s actually happening in the space of conscious experience. And there are methods now in psychology and in psychophysics where we can try to remove various biases in how people report what they experienced. So it adds complication, but it’s not a deal breaker. The third thing, and this is something that’s actually going on now, is that there’s a movement to come up with experiments that disambiguate between competing theories of consciousness. Over the last 10 or 15 years in consciousness science, a number of different theories have been refined and proposed, among them this idea of the prediction machine.
But there are other ideas too: that consciousness is to do with integrated information in the brain, or that it’s to do with the broadcasting of information around the brain. And the challenge is to come up with experiments that distinguish between these theories, rather than just trying to be aligned with any particular one. And these experiments are now beginning to happen, which I think is very promising for the field.
AZEEM AZHAR: I then start to think about what the real-world applications of all of this might be and what it might be telling us in practice. I think of roughly three areas. I think about what’s happening within medicine, within neurological and psychological conditions. I think about what’s happening within artificial intelligence and the sort of work that’s happening there. And also what’s happening in the field of virtual reality, because I can see that virtual reality presents us with a whole set of sensory experiences that we may want to have sort of controlled hallucinations around. So I’d love to explore those three areas, perhaps starting with that first one, medical applications. I mean, what are we learning about psychiatric conditions or psychological conditions or neurological ones that is being illuminated by this kind of work?
ANIL SETH: If you take an example from neurology, people who suffer severe brain trauma often go into a coma, where they unambiguously lose consciousness, and then they may recover partially to something called the persistent vegetative state. And this is a state, when you diagnose it from the outside as a neurologist, in which the patients go through sleep-wake cycles, but there really doesn’t seem to be anyone at home. There’s no voluntary action. There’s no response to commands or to questions. It seems like no consciousness is there, and people are often treated that way. It becomes a diagnosis of sort of wakefulness without awareness. But what the science of consciousness is allowing clinicians to do now is to not just rely on external signs of consciousness, but to look inside the brain. And there’s a great example of this. It’s now about 10 years old, but it’s a way of measuring the complexity of brain activity, by basically disturbing the brain with a very strong, very brief electromagnetic pulse and then listening to the echo, listening to how this pulse bounces around the circuits of the brain. And this measure turns out to be quite a good approximate measure of how conscious somebody is, and has been validated under anesthesia and in sleep and so on.
AZEEM AZHAR: So it’s like a consciousness meter.
ANIL SETH: It’s like the start of a consciousness meter. And I wouldn’t want to make that analogy too tight, because I don’t think consciousness does lie along a single dimension, but I think in these clinical cases it can be usefully approximated that way. And indeed it is being used in certain clinics now. This measure, this consciousness meter measure, is called the perturbation complexity index; it was developed by Marcello Massimini and Giulio Tononi and colleagues. It gives quite a good indication of whether somebody is in fact conscious, even though they can’t express it outwardly, or will recover at least some conscious awareness.
ANIL SETH: Because if you track the trajectory of patients over time, you’ll find that people who score high on this perturbation complexity index tend to be the ones who do better over time. And this is a direct clinical application of focusing on the brain basis of consciousness. Alongside that, there are of course many applications in psychiatry too, because the primary symptom of most psychiatric conditions is a disturbance in experience. The world seems different. People have actual hallucinations. People experience their body in different ways. People have delusional beliefs. And so now there’s this whole field of computational psychiatry, which is trying to understand the mechanisms that give rise to the symptoms that appear at the level of conscious experience. Because once we understand the mechanisms, we can start to think about really targeted interventions and bring psychiatry up into the 21st century, where it should be for medicine these days.
AZEEM AZHAR: Is consciousness to be found in a single place in the brain, or is it emergent? I mean, do we know what the minimal physiological requirements for consciousness are?
ANIL SETH: Certainly consciousness is not generated in any single area. There’s no seat of the soul, whether it’s the pineal gland that Descartes identified or anywhere else. Consciousness emerges in some way from activity patterns that span multiple areas of the brain. But do we know the minimal neural correlate for conscious experience in a human brain? The answer is still no, but there are some who argue that a very basic form of consciousness can emerge just from the brain stem, that it doesn’t require any cortex at all. That’s sort of one extreme, and I don’t think there’s strong evidence for it. Then there’s a very lively debate in the field at the moment about whether consciousness depends more on the front of the brain or on the back of the brain. Different theories predict different involvement of the frontal parts of the brain. Some theories say they’re absolutely essential; other theories say they’re not. And so by designing experiments that can test the contribution of the frontal parts of the brain, we can begin to distinguish between different theories too.
AZEEM AZHAR: Now I’m interested in the interaction between consciousness and machines as well. I go back to one of the ways in which you describe consciousness. You say, “The purpose of perception is to guide action and behavior to promote an organism’s prospect of survival.” It reminds me of the definition of intelligence that is often used in the artificial intelligence field, within computer science, where people say an agent is intelligent if it can perceive its environment and act rationally to achieve its goals. So there seems to be a parallel across these different disciplines between the definition that you use for consciousness and the definition that some artificial intelligence researchers use for intelligence. They’re not really the same thing at all, but I’m curious about those parallels.
ANIL SETH: Right. There are parallels, but I think there are also important distinctions, just in the specifics of the definitions you have. There’s a lot of work being done by the word rational in that definition of intelligence from the AI community. But consciousness should not be defined that way. Consciousness, back to our very beginning, is any kind of subjective experience whatsoever. Instead of just being sad when something bad happens, we can be disappointed. We can experience regret. We can even regret things we haven’t yet done: anticipatory regret. But to conflate consciousness and intelligence, I think, is to underestimate what consciousness really is about. And making this distinction, I think, has a lot of consequences. For one thing, it means that consciousness is not likely to just emerge as AI systems become smarter and smarter, which they are doing. There’s a common assumption that there’s this threshold, and it might be the threshold that people talk about as general AI, when an AI acquires the functional abilities characteristic of a human: oh, well, that’s when consciousness happens, that’s when the light comes on for that AI system. And I just don’t see any particular reason, apart from our human tendency to see ourselves at the center of everything and at the top of every pyramid, to think that’s going to be true. I think we can have AI systems that do smart things without needing to be conscious in order to do them.
AZEEM AZHAR: You call this idea pernicious anthropocentrism, the idea that we have to be at the center of all of this. But when we think about what happens with engineered machines, as opposed to biological organisms, why are we saying this particular set of qualities that we call consciousness is present within biological living organisms, but can’t be present in engineered, built ones?
ANIL SETH: I think there’s just this big open question about whether consciousness depends on being made out of a particular kind of stuff. We are made out of carbon and neurons and wetware. Computers are made out of silicon, mostly, at least most modern-day computers. Now, some people would say that it really doesn’t matter what a system is made out of; it just matters what it does, how it transforms inputs into outputs. This may be true. It may be that consciousness is the sort of thing that if you simulate it, you instantiate it. Playing chess is like this. If you have a computer that plays chess, it actually plays chess. But then there are other things in the world for which functionalism is not true, where the substrate, what it’s made out of, actually matters. Think about a really detailed simulation of the weather. This can be as detailed as you like, but it never actually gets wet or windy inside that simulation. Rain is not substrate-independent. So there’s an open question here: is consciousness dependent on our biology? It’s very hard to come up with a convincing reason why it must be, but it’s equally hard to come up with a knockdown argument that it has to be independent of that substrate. And that’s why I’m agnostic. But I do tend a little bit more towards the biological naturalism position. And that’s primarily because when we think about a living creature and we talk about the substrate, like what is the wetware that the mindware is running on? Well, in a computer, there’s generally quite a sharp distinction you can make between the hardware and the software. But in a living organism, there’s no sharp distinction between mindware and wetware. And if you can’t draw a line between these, then it almost becomes an unanswerable question whether consciousness is independent of the substrate or not. Added to that, the only examples of things that we know are conscious are biological systems.
So that should be a kind of default starting point until proven otherwise.
AZEEM AZHAR: If we did get to a stage where, because you haven’t ruled this out, a computer became conscious, how could we know it was if it chose not to tell us?
ANIL SETH: This is a big problem. And bear in mind that being conscious doesn’t necessarily bring with it the ability to report. The system might not even be able to. Again, brain-damaged patients can’t report things, even though they are conscious. I think the real danger in this area of artificial consciousness is that even though we don’t know what it would take to build a conscious machine, we also don’t know what it wouldn’t take. We don’t know enough to rule it out. So it might in fact even happen by accident. And then indeed, how would we know? The only way to answer that question is to discover more about the nature of consciousness in those examples we know have it, which will allow us to make more informed judgements. I actually think a more short-term danger is that we will develop systems that give the strong appearance of being conscious, even if we have no good reason to believe that they actually are. I mean, we’re almost already there, right? We have combinations of things like language generation algorithms, like GPT-3, or GPT-4 shortly, and deepfakes, which can animate virtual human expressions very, very convincingly. You couple these things together, and apart from the actual physical instantiation, we’re already in a kind of pseudo-Westworld environment where we’re interacting with such agents.
AZEEM AZHAR: And you’ve also identified this challenge through some of your experiments with the idea of priming: that you can take something ambiguous and you can prime me, and I might hear the description of a lovely meal while someone else might hear the description of a political position. And so there’s perhaps a vulnerability in the conscious system towards things that look and walk and talk as if they’re conscious.
ANIL SETH: Absolutely. And I think this is something we need to keep very much front of mind as AI develops. Which is that we have a lot of cognitive vulnerabilities, and our cognitive vulnerabilities are already being exploited by social media algorithms and the like. AI systems that give the appearance of being conscious will be able to exploit these vulnerabilities even more. So, there’s a project I’m working on with some colleagues in Canada, Yoshua Bengio and Blake Richards and others, where what we’re trying to do is figure out how implementing some of the functions associated with consciousness can actually enhance AI, overcome some of its bottlenecks, like its ability to generalize quickly to novel situations, choose the data that it learns from, all these sorts of things which we can do, and which are closely associated with consciousness in us. That’s without having the goal of actually building a conscious machine: we want to adopt some of the functional benefits, but also do so in a way that can help mitigate some of these dangers. For instance, an AI system that is actually able to recognize its own biases and correct for them might be a very useful change in the direction of where AI is currently going.
AZEEM AZHAR: So, there’s another technology theme that people are getting really excited about in 2022, which is the idea of the metaverse. And I guess the metaverse is the 2020s’ version of virtual reality: creating environments that will be increasingly sensorially rich and immersive. To what extent would those appear to be real experiences to organisms that exhibit consciousness?
ANIL SETH: I have quite a problem with the overall objective of something like the metaverse. And it’s a very basic problem, which is that I think in the society in which we live at the moment, we should be doing everything we can to reconnect ourselves with the world as it is, and with nature as it is, rather than trying to escape into some commercially driven virtual universe, however glittering it might be. But I also think there are important lessons here, or an important role that understanding consciousness has to play. When we experience a visual scene, we’re engaging with it all the time. We don’t just passively experience a scene and sit there like a brain in a jar. We’re interacting with it all the time, and we want to understand how these interactions shape our experience. These are the sorts of experiments for which VR is very useful. And of course the flip side is that when we understand the role of interactions in shaping experiences, we can design VR environments to be more engaging, to be less frustrating, to perhaps be more useful, to the extent that they can be. And of course there are many very valuable applications as well. I just want to tell you about one experiment that we’ve been doing in the lab for a while, which I think is super interesting in this domain. It really gets at what you said about whether VR will reach the point that it’s indistinguishable from real experience, setting aside whether we actually want to get there or not. It’s an interesting question, right? So one of our experiments, led by Keisuke Suzuki and Alberto Mariola, is developing something we call substitutional reality. Here’s the idea: instead of using computer-generated graphics, we use real-world video of, let’s say, my lab. And we replay that real-world video through a head-mounted display so that as people look around, they can see the part of the room that they would see anyway. And in fact, that’s what we do.
We invite them in, they wear a headset, and it has a camera on the front. So to begin with, they are indeed experiencing their environment through the camera, projected into the headset, but then we can flip the feed and run the prerecorded video instead. And if you do it in the right way, people don’t notice. So here’s a situation, and I think it’s really the first situation, where people are fully convinced that what they’re experiencing is real, in a way that you never get in standard VR or in a cinema, however good the movie is. People really have the conviction that what they’re experiencing is real, and yet it isn’t. And this is a platform we can use to figure out, okay, now what happens if we mess with this movie in various ways? What happens to the person’s perception when their high-level prediction of what’s going on is that this is indeed the real world? And that’s a set of experiments that we’re working on right now.
AZEEM AZHAR: But that speaks to the potency, or the potential potency, of that set of technologies: that it could really deliver real experiences, right? Experiences that, based on the idea of the controlled hallucination, the human is conscious of and believes they are experiencing, and may make decisions based on those experiences.
ANIL SETH: Yeah, potentially. I mean, at the moment, this is obviously only possible in a very restricted circumstance. People have to come and sit in exactly the same place we recorded the footage from, and so on. But these are technological constraints; there’s no in-principle objection to extending that kind of technology. And there’s another benefit of doing this, and this gets back to the first set of applications. There is a range of psychiatric conditions which are generally characterized not by people having positive hallucinations, like seeing things that other people don’t, or hearing things, but rather by reality seeming drained of its quality of realness: their perceptions start to feel unreal, and their self can start to feel as if it’s not really there. These kinds of conditions, which we might call dissociative conditions, are very, very tricky to deal with, because they don’t present with obvious positive symptoms. And so this general line of research asks: what does it take for our brains to endow our perceptions with the quality of being real? Understanding that, I think, will refract back onto some of these applications in psychiatry as well, where that quality of being real is attenuated or even abolished.
AZEEM AZHAR: I mean, I’m curious about where this might go. Science helps us get to settled understandings. It helped us get to a settled understanding of the relationship between the earth and the sun. It took Darwin to come along, and then many years of arguing and the discovery of DNA, until we got a settled understanding of how new species come to be and how they develop. When do you think science will come to a settled understanding of what consciousness is?
ANIL SETH: Oh, I hate that question so much. But it’s an important question to ask. One of the strange things I often hear when people talk about consciousness science and philosophy is that we still know nothing about how the brain generates consciousness, or about how consciousness happens; that it’s still this complete mystery. But if I think back to what people were saying and thinking 20, 30 years ago, when I was just getting going, there’s been a massive increase in understanding, not only of the brain networks that are involved, but also of the kinds of questions that people ask. To throw in something very controversial right at the end: there’s this question about free will. Do we have it? Do we not have it? Does it matter? Yes, it matters, because it influences all sorts of things like jury processes in law, when we hold people responsible, and so on. But the questions are starting to change. It’s become not a question of whether or not we have free will, but more a question of why experiences of voluntary action feel the way they do. How are they constructed, and what role do they play in guiding our behavior? They’ve become more sophisticated questions. And I think that is going to be part of the evolution of consciousness science, just as much as finding new answers. The questions will start to change, just like happened in the science of life. We’ll go beyond looking for the spark of life, the élan vital, and we’ll come up with a richer picture of what consciousness actually is and what the right sorts of questions are to be asking about it. So the process of settling, I think, is going to be quite slow. I don’t think it’s going to be a mystery that’s solved in any one eureka moment. But the progress really is heartening. And the last thing I’d say about it is that it’s very useful even to gain a partial understanding of consciousness. That’s useful for developing applications in technology and society and medicine.
And fundamentally, it’s useful for us. Because, besides all these applications, I think most of us, at some point in our lives, certainly when we were kids, have asked ourselves these questions. Who am I? What does it mean to be me? Why am I me and not you? What happens after I die? Understanding how experiences of the self and the world are constructed can help each of us understand our relationship with the rest of the world, with each other, and with nature much, much better, at a deeper level. And I think that’s sufficient reward, and that reward is just going to keep on coming as we progress our understanding of the biology of consciousness.
AZEEM AZHAR: I know you cover many of these ideas in your new book, Being You, which is doing very well and is a great read. And of course, so much more to come. Thank you so much for your time today.
ANIL SETH: Thank you, Azeem. It’s a real pleasure. Thanks for having me on. I’ve really enjoyed the conversation.
AZEEM AZHAR: Well, thanks for listening to this podcast. If you want to learn more about the cutting edge of AI, enjoy a previous discussion I had with Nathan Benaich and Ian Hogarth, authors of the annual State of AI Report. And if you want to know more about how the science of consciousness and philosophy of mind interacts with virtual reality, watch this space. We’ve got a great guest coming on to discuss what the metaverse might mean for us through the lens of consciousness. To become a premium subscriber of my weekly newsletter, go to www.exponentialview.co/listener. You’ll find a 20% off discount there. And stay in touch. Follow me on Twitter. I’m @azeem, A-Z-E-E-M. This podcast was produced by Mischa Frankl-Duval, Fred Casella, and Marija Gavrilov. Bojan Sabioncello is the sound editor.