
Consciousness in humans and other things with Anil K Seth | 91TV

1 hour and 3 mins watch | 26 March 2024

Transcript

  • Thank you. It's a great pleasure and honour to be here, and to give the 2024 Faraday Lecture.
  • About eight years ago I was doing a talk at the Royal Institution, one
  • of these Friday discourses that Professor Frank just mentioned, and they have this weird procedure
  • where they lock you in a room before your lecture, because apparently somebody who was supposed
  • to give a discourse once ran away, and Faraday had to give an emergency Friday discourse,
  • so from then on they've locked the person in the room. So everyone else has a reception outside
  • and you're not allowed to go, which I thought was very British establishment at its best. Anyway,
  • I'm glad they didn't lock me in a room here or outside in the rain. So welcome. Thank you,
  • and I'm sorry for those people outside in the rain who weren't able to get in.
  • So I'm going to talk today about consciousness, consciousness in humans and in other things. Now,
  • consciousness is one of our greatest remaining mysteries, but at the same time it's a phenomenon
  • that we each know very intimately. We all know what consciousness is, right? We all
  • do. Consciousness is what you lose when you go under general anaesthesia. You get turned into
  • an object and then back again into a person when you come round again. It's also what you lose when
  • you fall into a dreamless sleep, and what comes back when you start dreaming or wake up. When we
  • open our eyes and wake up or come round our brains don't just process information like some kind of
  • fancy camera. There's another dimension entirely. Our minds are filled with light, with shade,
  • with shape, with colour. There is experiencing happening. A world appears when we are conscious.
  • Within this world there's the experience also of being a self, of being yourself. You don't
  • even have to open your eyes for this. It's always there, this basic background experience of being
  • somebody. Consciousness very simply is any kind of experience whatsoever. It's the everyday miracle
  • that makes life worth living. That's the intuitive idea of consciousness. Now also good to have a
  • more formal definition too, and my favourite working more formal definition of consciousness
  • comes from the philosopher Thomas Nagel, who put it like this. He said, 'An organism has conscious
  • mental states if and only if there is something it is like to be that organism.' What he means by
  • this, I think, is that it feels like something to be me. It feels like something to be each
  • one of you, but it also feels like something to be a bat, as in Nagel's famous philosophy
  • paper about what is it like to be a bat. It feels like something to be a kangaroo or an
  • elephant probably too, but it does not feel like anything to be this chair or this table or this
  • lectern or even, though some might disagree with me, this iPhone. For these things there is
  • no experiencing happening. They're just objects, complicated as they may be. I like this definition
  • partly because of what it doesn't say, because of what it leaves out. It's not anything to do
  • necessarily with intelligence or with language or with a sense of personal identity. These are all
  • ways consciousness might be for us, but it's not how consciousness need be in general. Now,
  • although consciousness is intimately familiar to each of us, it really is still utterly mysterious.
  • This question of how mere matter such as the electrified pâté we each have inside our skulls
  • can give rise to any kind of conscious experience, well, this mystery has vexed philosophers
  • and scientists for thousands of years. The deep sense of mystery raised by this question
  • I think has best been articulated, for me anyway, by the philosopher David Chalmers in his so-called
  • hard problem of consciousness. He puts it like this, and I want to read it out to you. He says,
  • 'It is widely agreed that experience arises from a physical basis, but we have no good explanation
  • of why and how it so arises. Why should physical processing give rise to a rich inner life at all?
  • It seems objectively unreasonable that it should, and yet it does.' Now the intuition at work here
  • is that even if we had a complete understanding of how the brain works as a complex physical
  • object - and it is a complex physical object - then that would still shed no light on this
  • fundamental mystery of how and why any of this neural shenanigans should have anything to do
  • with consciousness whatsoever. The hard problem of consciousness would remain pristine and untouched.
  • Addressing this hard problem head on might not be the only way or the right way to go. I
  • prefer to think about consciousness in terms of what I call, with tongue a little bit in cheek,
  • the real problem of consciousness. This is not a new way of thinking. It's actually what a lot of
  • people, colleagues of mine, do in practice, but I just call it the real problem slightly to annoy
  • David. Good strategy. The real problem goes like this. Instead of treating consciousness as one big
  • scary mystery in search of one eureka moment of a solution, let's divide and conquer. Consciousness
  • has many aspects. It has many properties. The real problem asks, 'Can we explain, predict
  • and control the properties of consciousness in terms of mechanisms and processes in the brain
  • and the body?' Explain, predict and control. That's generally what science allows us to do. It allows
  • us to explain phenomena, predict when they happen and ideally intervene and control.
  • We usually don't ask more than that. In physics we've now got extremely good at explaining,
  • predicting, controlling features of the universe, but we still don't know why there's a universe in
  • the first place. So the hope of this approach to consciousness is that,
  • by building explanatory bridges from neural mechanisms and biological
  • mechanisms to properties of consciousness, we don't solve the hard problem head on,
  • but we begin to dissolve it. It begins to fade away, maybe eventually disappearing in a puff of
  • metaphysical smoke. There's a historical parallel for this. It's not a perfect one, but I think it's
  • instructive. It wasn't so long ago - people in this building no doubt discussed this - that
  • the history of life was considered beyond the reach of science, beyond the reach of physics
  • and chemistry. There had to be something almost supernatural, an elan vital, a spark of life
  • to explain the difference between the living and the non-living. Things didn't turn out that way.
  • New biologists got on with the job of explaining the properties of living systems, metabolism,
  • homeostasis, reproduction, these things in terms of physics and chemistry. The hard problem of
  • life was never solved, but it was dissolved. It faded away. We don't understand everything about
  • life now, but there's no longer a sense that it's beyond the reach of the concepts and the tools and
  • the methods that we have. Now, life is not the same as consciousness, and consciousness may not turn out the same way,
  • but I think the larger lesson is that just because things might seem mysterious now,
  • with the tools and the concepts that we have now, does not mean they're necessarily mysterious,
  • necessarily beyond the reach of science. So what are the properties of consciousness?
  • How can we cut this cake? There are many ways to do it, and I prefer to cut consciousness
  • into three different categories of properties, level, content and self. So conscious level,
  • well, that's the difference between being awake and aware as you are now and being unconscious
  • as in general anaesthesia. Something like that. It asks how conscious are you, or how conscious
  • something is. Then there's conscious content. When you are conscious, you're conscious of something,
  • the sights, the sounds, the smells, the emotions, the thoughts, the beliefs that populate your
  • conscious scene at any one moment. Then conscious self - which is part of that, but an important
  • part of that - is the specific experience of being you or being me. This is probably the aspect of
  • consciousness that we each cling to most tightly. Now, I won't talk about level today because of
  • lack of time, but I will talk about content and self. We'll start with content, and we'll start
  • very simply with the experience of colour. Colour is so pervasive in our daily lives, and it gives
  • our experience of the world beauty and meaning in so many ways. What could be simpler than colour?
  • Colour turns out to be far from straightforward, and we don't even need neuroscience to begin to
  • think about this a little bit. Our eyes open our brains to the visual world, but the photoreceptors
  • in our retinas are sensitive to only a tiny slice of this electromagnetic spectrum. Colour-wise,
  • this thin slice of reality, this is where we live. Within that thin slice, the receptors
  • are sensitive to just three wavelengths of light. We call these red, green and blue,
  • but they aren't actually red, green and blue. Those are just labels we give them,
  • and out of those three wavelengths the brain creates millions of distinct colours. So what
  • we see when we see colour is simultaneously less than what's there and more than what's there. It's
  • never the same as what's there. I think that applies not only to colour but to everything.
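The "less and more than what's there" point has a simple numerical flavour: three broadly tuned receptor types, each resolved to a finite number of response levels, already span millions of combinations. A toy back-of-envelope sketch in Python; the 256-level figure is illustrative, borrowed from 8-bit digital colour rather than from retinal physiology:

```python
# Three receptor channels (labelled "red", "green", "blue"), each
# resolved to 256 illustrative response levels, combine into millions
# of distinguishable colours.
levels_per_channel = 256
channels = 3
combinations = levels_per_channel ** channels
print(combinations)  # 16777216 - over sixteen million distinct colours
```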
  • Now, have a look at this. Probably you've seen this before. Hands up. How many people have
  • seen this demonstration before? In this room I'm expecting a lot, but maybe not everybody. This is
  • called the lilac chaser illusion. So what I want you to do is take a look at the black cross at the
  • centre of the screen and try not to move your eyes or blink. Hold your gaze fixed, and then if you
  • start to see something change, something strange happen, I want you to raise your hands. Okay,
  • so Roger's not raising his hand, but is that just because he's lazy? He's seen it before. It still
  • works though. So what you should see, hopefully, is that the magenta disks disappear and there's
  • just a green disk that goes rotating round and round and round. There's actually three different
  • things going on here which we can talk about later if you like, but I just use this to make the point
  • that there's this very indirect relationship between what we experience and what is there.
  • There is no green disk. So it demonstrates that how things seem is not how they are.
  • Now, the idea that I think explains that illusion, and in my view and others all of our experiences,
  • is that the brain is a prediction machine, and that what you see, what you hear, and what
  • you feel are nothing more than your brain's best guesses of the causes of the sensory signals that
  • come into our brains. Now this is an old idea, and it's a surprisingly simple idea. You can trace it
  • back centuries in both science and philosophy, all the way back to Plato and the shadows cast
  • on the walls of the cave by firelight. Prisoners are chained to the to the walls of this cave,
  • and all they see are these shadows. For them the shadows are real because that's all they
  • have access to. Now, if we update that to now and skip over most of history, instead
  • of prisoners trapped in a cave think about your brain. Imagine that you are your brain trapped
  • inside the bony vault of your skull, trying to figure out what's out there in the world.
  • Now, there's no light in the skull. There's no sound. It's dark. It's quiet. All you've
  • got to go on as a brain are sensory signals, which are just electrical signals. They're only
  • indirectly related to what's out there in the world, and they don't come with labels on like,
  • 'I'm from a table or a chair or a beer,' or something like that. They're just sensory
  • electrical signals. So perception, figuring out what's there, has to be a process of informed
  • guesswork in which these ambiguous sensory signals are combined with the brain's prior expectations
  • about what's going on in the world or the body to form the brain's best guess of what's causing
  • those signals. The idea here is that's what we experience, the brain's best guess of what's
  • out there. The brain doesn't hear sound or see light. I'll give you a couple of examples of this.
  • Now you've probably seen this one. Adelson's checkerboard. How many people know this? Quite
  • a few. Again, maybe not everyone, but I think it's a really good example of how the brain's
  • expectations shape our conscious experience. If you look at these two patches, A and B,
  • now they should look to you to be different shades of grey, right? I'm just going to take
  • that for granted. It works, but they are of course exactly the same shade of grey. That's why it's
  • an illusion, and I can show that by showing you another version of the same image here.
  • You'll see that the patches of grey are the same shade. There's no difference. If you think I'm
  • cheating and putting up two different pictures, well, I'll just move this bar across and you
  • can see I'm not cheating. It's the same shade of grey. Have to check. Take it away and they
  • look different again. So, what's going on here? What's going on here is that your brain is using
  • its prior expectations that objects under shadow appear darker than they really are,
  • combined with a context of a checkerboard, so that we see patch B to be lighter than it really is.
  • Our visual brains are not light meters. They're not trying to accurately reflect the input.
  • They're trying to figure out the most likely cause of that input. Here's one more example
  • which I find even more compelling. It's called the hollow mask illusion. Now, faces are very
  • important for us primates, and pretty much every face you've ever seen and all of your ancestors
  • have ever seen has always pointed outwards, with the nose pointing out like this. This means that
  • evolution has installed within primate brains the very strong expectation that faces point outwards.
  • So strong, in fact, that in this case the brain would rather reach the conclusion that there's a
  • face rotating in two different directions at once than that the face is pointing inwards,
  • which is a pretty strange conclusion to reach, but it's very hard to overcome that. In fact, I can't
  • overcome that illusion just by thinking about it at all. Now the formal framework for thinking
  • about what's going on here is Bayesian inference, which is a very general framework mathematically
  • about how to reason optimally in the face of uncertainty. In Bayesian inference new data,
  • which we can call the likelihood here, new stuff coming in, is combined with prior expectations
  • or beliefs, the tall thing at the left, and you combine those curves and you get the posterior
  • distribution. This is how to update the prior, your prior belief, when you get new information.
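This update step can be sketched numerically. A minimal sketch, assuming the prior and likelihood are both Gaussian, in which case the posterior mean is a precision-weighted average (all names here are illustrative, not from the lecture):

```python
# Combining a Gaussian prior with a Gaussian likelihood gives a
# Gaussian posterior whose mean is a precision-weighted average.
# Precision = 1 / variance: the more reliable signal pulls harder.

def bayes_update(prior_mean, prior_var, like_mean, like_var):
    """Return (posterior_mean, posterior_var) for Gaussian prior x likelihood."""
    prior_prec = 1.0 / prior_var
    like_prec = 1.0 / like_var
    post_prec = prior_prec + like_prec          # precisions add
    post_mean = (prior_prec * prior_mean + like_prec * like_mean) / post_prec
    return post_mean, 1.0 / post_prec

# A strong prior (low variance) combined with noisy sensory data
# (high variance): the posterior stays close to the prior belief,
# as in the hollow mask illusion.
mean, var = bayes_update(prior_mean=0.0, prior_var=0.1,
                         like_mean=1.0, like_var=1.0)
print(round(mean, 3), round(var, 3))  # 0.091 0.091
```

Swapping the variances shows the opposite case: with a weak prior and precise data, the posterior is pulled toward the sensory evidence instead.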
  • All of these curves are what we call probability distributions, so they basically represent
  • the probability of something being the case. So if you think, that's what the brain is doing
  • when it's doing perception. How do we combine the new sensory data with its prior belief to make
  • its best guess about what's going on? The first person, to my knowledge, to really articulate
  • that this is what the perceptual machinery in our brains does was the German physicist and physiologist
  • and general polymath Hermann von Helmholtz. He had the theory that perception is a process of
  • unconscious inference to make the point that we're not aware of this complex probabilistic wizardry
  • that's going on under the hood. We're only aware of the outcome, only aware of the result. Now,
  • putting things this way, I think, dramatically changes many intuitions that we might have
  • about how perception works. Now, the old view is a bit of a straw-person view here,
  • but this is what I used to read in textbooks as a student until surprisingly recently,
  • is that the brain processes sensory information primarily in a bottom-up or outside-in direction.
  • Sensory signals enter through the retina and then march deeper and deeper into the brain,
  • with more complicated features being fished out as the signals march further in. In this view,
  • the heavy lifting of perception is all done as if the brain is reading out the world from the
  • outside in, and whatever's going in the other direction is just doing some sort of little bit
  • of modulation on the side. Now, the prediction machine view turns this on its head. Instead
  • of perception being a case of reading out the world from the outside in, what we consciously
  • experience depends on perceptual predictions, brain-based best guesses, flowing in the opposite
  • direction, from the inside out or the top down. The bottom-up signals can be thought of as
  • prediction errors, reporting the difference between what the brain gets and what it expects
  • at every level of processing. The idea is that the brain is always in the game of minimising these
  • sensory prediction errors, either by updating its predictions or by making actions to bring
  • about the sensory information that it already expects. It turns out that if the brain does this,
  • it follows this simple strategy of just trying to minimise the flow of incoming information,
  • then the top-down predictions approximate Bayesian inference. They reach an optimal best
  • guess about the causes of the sensory signals. So this theory is called predictive processing,
  • and my core claim here is that what we consciously perceive are the top-down
  • predictions themselves, not the bottom-up sensory signals. We actively generate our
  • worlds. We don't passively perceive them. So it's a strange kind of inversion here,
  • because it seems as though the world is just there and it pours itself directly into our minds,
  • when in reality things are mostly the other way around. Now, William James over a century ago
  • pretty much said the same thing. He's one of the founding fathers of psychology. He said,
  • 'Whilst part of what we perceive comes through our senses from the object before us, another part,
  • and it may be the larger part, always comes out of our own head.' He was on the right track there,
  • I think. We can see shadows of this process not only in sort of illusions but I think in everyday
  • life. This is a phenomenon called pareidolia, seeing patterns in things, faces again. Faces
  • are so important for us, so that the brain is continually casting predictions of face
  • out there into the stream of sensory information to see where it sticks, and sometimes it sticks,
  • which is why we can see faces in clouds and even in the arrangement of windows on a building,
  • and many other crazy things you can find online. At Sussex we've been exploring this phenomenon
  • a bit more deeply by combining virtual reality with some machine learning techniques like these
  • deep neural networks, which are very good at recognising objects in images, taking an image
  • and classifying the objects within them. It turns out you can run them the wrong way around. You can
  • run them backwards. This is what this Google DeepDream algorithm was all about. When you
  • run it backwards, what you're basically doing is fixing the output to something like 'dog',
  • throwing activity back through the network and updating the image. It's as if you are projecting
  • perceptual predictions back into the input. It's kind of hallucination simulation. This is what
  • we've done here. Here we used this algorithm to simulate the effect of unusually strong
  • perceptual predictions on visual experience. This is Sussex campus. Welcome to Sussex on
  • a Tuesday lunchtime. What we've done is simulate what experience would be like if the brain
  • had overly strong predictions to see dogs everywhere. I don't know why dogs seem to
  • be the thing to do. What's interesting here is that this is not a model of any particular kind
  • of behaviour or cognitive function. It's a model of a particular kind of experience. I don't know
  • what. Some people say it's a bit psychedelic. I don't really think so, but it's certainly a
  • model of a way of experiencing the world, a kind of computational phenomenology. What we've been
  • doing more recently is finessing this method to simulate more specific kinds of hallucinations,
  • such as the hallucinations experienced in Parkinson's disease and in Charles Bonnet
  • syndrome, where people have visual loss, and in psychedelics too and other cases as well.
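The "fix the output and update the image" procedure mentioned above can be sketched with a toy model. Here a tiny random linear scorer stands in for a trained deep network (DeepDream itself uses a large convolutional net): we fix a target class and do gradient ascent on the input so that, to the model, the image increasingly "looks like" that class. All names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stand-in for a trained classifier: scores = W @ image_pixels.
n_pixels, n_classes = 64, 3
W = rng.normal(size=(n_classes, n_pixels))

def class_score(image, target):
    return W[target] @ image

# "Run the network backwards": fix the output class and repeatedly
# nudge the input image up the gradient of that class's score.
# For a linear model that gradient is simply the weight row W[target].
def dream(image, target, steps=100, lr=0.01):
    for _ in range(steps):
        image = image + lr * W[target]      # gradient ascent on the input
    return image

image = rng.normal(size=n_pixels)
before = class_score(image, target=0)
after = class_score(dream(image, target=0), target=0)
print(after > before)  # True - the input now "looks more like" class 0
```

In the real DeepDream setting the gradient comes from backpropagation through the whole network, which is what produces the dog-saturated imagery described above.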
  • Each of these kinds of hallucination can be quite different. Some are complex, some are simple,
  • some are geometric, some are not. Some seem real, others don't. By simulating the different kinds of
  • hallucinations and checking against people who have lived experience of these hallucinations,
  • we can get closer to understanding what the brain mechanisms are that don't just
  • correlate with people having hallucinations, but are linked to them, that actually explain
  • why those hallucinations are the way they are. Of course, that helps us understand why normal
  • perception is the way it is too, because there's a take home message from this bit of the talk, which
  • is that if we can think of hallucination as a kind of uncontrolled perception where the brain's
  • best guesses have lost grip on their causes in the world, well, then perception in the here and now,
  • all the time, right now, is also a kind of hallucination, but it's a controlled hallucination
  • where the brain's best guesses are reined in by their causes in the world, but not in ways that
  • are determined necessarily by accuracy, but by how useful they are in the business of staying alive.
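The error-minimising loop described earlier, in which the brain updates its predictions to shrink the gap between what it expects and what it gets, can be sketched in a few lines. This is a one-variable toy illustration, not a model of real cortex; the signal value and learning rate are made up:

```python
# A one-variable predictive-processing loop: the "brain" holds a
# prediction of a sensory signal and repeatedly updates it to shrink
# the prediction error (signal - prediction).

def minimise_error(signal, prediction=0.0, lr=0.2, steps=50):
    for _ in range(steps):
        error = signal - prediction       # bottom-up prediction error
        prediction += lr * error          # top-down prediction update
    return prediction

# The prediction converges on the cause of the sensory signal:
# perception as the brain's settled best guess.
print(round(minimise_error(signal=3.0), 3))  # 3.0
```

In the full story the same minimisation runs at every level of a processing hierarchy, and errors can also be reduced through action, a point the lecture returns to under the heading of active inference.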
  • As Anaïs Nin said, 'We don't see things as they are. We see them as we are.' Now,
  • one immediate implication of this is that we're all going to have different experiences,
  • even for the same shared objective reality, because we all differ on the inside, just as
  • we differ on the outside. You'll remember this, right? The famous dress. Okay. Blue and black?
  • Okay. You're my people. White and gold? Wrong people. Okay. It divides, right. It completely
  • divides. This has been going on for… The amazing thing about this image of the dress is,
  • because we take the contents of our perceptual experience to be real, because we conflate
  • the subjective with the objective, it becomes almost impossible to agree. Almost started wars,
  • this dress, in 2015, and we end up in perceptual echo chambers, much like the echo chambers that
  • we know from social media, unfortunately. So we will experience the world differently,
  • but we won't know that we do because it seems as though our experiences are just revealing
  • the world as it is. We do all differ. We differ on the outside in terms of body shape, skin colour,
  • height and so on, and we will all differ on the inside too. I like to call this perceptual
  • diversity, just to emphasise that it applies to all of us. It's not a case of thinking about
  • so-called neurodivergent conditions like autism or ADHD. We all differ. We all experience the
  • world differently, just as we are all different heights. Now, one thing I'm very excited about
  • that we've been doing at Sussex in collaborations with Glasgow and Collective Act in London is a
  • project called The Perception Census. It's a very large-scale experiment where we got around 40,000
  • people to do up to 50 different perceptual tasks so that we can begin to understand,
  • I think really for the first time, the shape of how our individual inner worlds vary and differ.
  • Different ages, different countries and so on. We haven't got results from this yet, but I think
  • it's the start of a very exciting enterprise. We had over 100 countries, people from over 100
  • countries, take part from ages 18 to 80. As well as revealing the map of these hidden individual
  • differences, I'm hopeful there's a social implication to this too, because it helps us
  • when we realise that the way we see things is not necessarily how things actually are. It cultivates
  • a bit of humility about our own way of perceiving, and I think a little bit of humility about that
  • can be very useful, because if we can recognise that the way we see a simple image of a dress
  • is not actually how it is, then I think we might cultivate a bit more humility about our beliefs
  • too, and maybe generate new platforms for empathy and understanding. The first step in getting out
  • of an echo chamber is to know that you're in one. In the second bit of the talk, I want to move on
  • to the issue of conscious self. What explains the experience of being you or being me? Now, it's
  • easy to get off on the wrong foot when talking about the self. There's another straw-self,
  • straw-person idea to shoot down here. It might seem as though the self is the recipient of wave
  • upon wave of perceptions, as if the world is just pouring itself into our minds and
  • the self is figuring out what to do, making some actions, changing the world and round and round
  • it goes. We sense, we think, we act. That may be how things seem, but how things are is, I think,
  • very different. Another kind of inversion is going on here besides the inside-out, outside-in. The
  • self is not the thing that does the perceiving. The self is a kind of perception too, or rather
  • it's a collection of perceptions. Experiences of the world and the self are all kinds of controlled
  • hallucinations and, as with all experiences, they are brain-based best guesses that are tied to the
  • world not by accuracy but by usefulness, and usefulness in the business of staying alive.
  • Now, it's very easy to take experiences of the self for granted, but there are in fact many
  • different ways in which we experience this idea of being a self. There's the experience of having and
  • being a particular body. There's the experience of perceiving the world from a first-person
  • point-of-view perspective. There's the experience of intending to do things and of being the cause
  • of things that happen, the volitional self, what people talk about when they talk about free will.
  • Only then do we start thinking about the level of personal identity, the level at which you exist as
  • an individual with a name, plans for the future and memories of the past. Finally there is also
  • the social self, that aspect of being you or being me that is refracted through the minds
  • of others. What it means to be me is partly determined by how I perceive others perceiving
  • me. Part of me is literally in all of you. Now, in normal everyday human experience,
  • these elements of selfhood just seem bound together seamlessly as a unified whole. We know
  • in psychiatric clinics and neurological clinics that these aspects of self can all come apart in
  • different ways. This is enough to show that this basic background experience of being a self should
  • not be taken for granted. It is something that, like all experiences, requires an explanation.
  • Right now I just want to focus on the bodily self, and leave the rest for another time. The
  • story here is the same story. Experiences of the body are not direct readouts of how things are,
  • but are instead brain-based best guesses that are calibrated by sensory signals from the body. Now,
  • there's a classic illustration of this. It's called the rubber hand illusion.
  • I'm sure many of you will have seen this before. Now, in the rubber hand illusion, a person's real
  • hand is hidden from sight and a fake rubber hand is placed in front of them. Then the real hand and
  • the rubber hand are simultaneously stroked by the experimenter - the guy in green - with two paint
  • brushes, while the person in blue is staring at the fake hand. After a while, for most people,
  • this rubber hand begins to feel in some way like part of their body. This can be a pretty uncanny
  • experience. People don't really believe it's their hand, but the experience varies from person
  • to person. It's a little odd, and there are many ways to test it, but there's one way in particular
  • which is far more fun, much more fun than any other way. That is an experiment I do recommend
  • trying at home. So this shows that experiences of what is and what is not the body, they are really
  • surprisingly malleable, and I can't resist a quick aside about this rubber hand illusion because
  • it's become part of the lore of psychology. The usual story told about the rubber hand
  • illusion is that it's to do with the integration of sensory signals from different modalities,
  • vision and touch in particular. You see the hand, you feel it being touched. Recent work at Sussex by my
  • colleague Peter Lush is telling a very different story, and it turns out that how strongly someone
  • experiences the rubber hand illusion is strongly correlated with how hypnotisable they are. This
  • is interesting, because it's telling us that to a large extent - or maybe totally - the experience
  • of the rubber hand illusion is happening because that's what we implicitly expect to experience
  • given the setup of the experiment. Someone puts a rubber hand in front of you, strokes it and asks,
  • 'Does it feel like your hand?' This is a very strong example of what in psychology we call demand characteristics.
  • Now, this is completely compatible with what I've been saying so far because, think of the context
  • as being a prediction that the brain imbibes from its wider environment that's constraining
  • the way the brain interprets sensory data. Now, experiences of embodiment aren't just about body
  • ownership, the question of what object is our body. There's also a more basic feeling, perhaps
  • difficult to grasp because it's always there, of simply being a body, of being a living, flesh-and-blood organism with
  • all its moods and emotions, and this simple, inchoate sense of existing, the feeling of being
  • alive. This aspect of self, which I think is the most basic aspect of self that we can identify,
  • this brings up a different kind of perception called interoception. Now, we normally think about
  • sensation and perception in terms of the five classical senses - taste, hearing, sight, smell,
  • touch, detecting signals from the outside world - but a large part of neuronal real estate is
  • dedicated to sensing, perceiving and controlling sensory signals coming from within the body.
  • This is called interoception. Things like heart rate, blood pressure, gastric tension. These are
  • all aspects of the brain sensing the body from within, and this is critically important because
  • the reason we or any other creature has a brain is not to do highly intelligent, clever things,
  • but fundamentally it's about keeping the body alive, keeping the creature going. To understand
  • what kind of thing a brain is, I think we need to understand that fundamental obligation of
  • the brain. Now, from the brain's perspective, locked inside its bony vault, the interior of
  • the body is just as inaccessible as is the world out there. Both have to be inferred. The brain
  • must still engage in this process of making and updating predictions, but now these predictions
  • are largely about the internal physiological condition of the body, not the world outside.
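This regulatory flavour of prediction can be given a minimal sketch, anticipating the active inference idea introduced later in the talk: the interoceptive prediction acts like a set point, and prediction error is reduced by changing the body rather than by revising the belief. The numbers are illustrative, not physiology:

```python
# Interoceptive regulation sketch: the brain "predicts" a desired
# physiological value (a set point) and reduces the prediction error
# by acting on the body rather than by updating the prediction.

def regulate(body_state, set_point=37.0, gain=0.3, steps=40):
    for _ in range(steps):
        error = set_point - body_state    # interoceptive prediction error
        body_state += gain * error        # action changes the body, not the belief
    return body_state

# A cold body is driven back toward the predicted 37 degrees C.
print(round(regulate(body_state=35.0), 2))  # 37.0
```

Compare this with ordinary perceptual inference: the same error signal is being minimised, but here it is the world (the body) that moves toward the prediction, which is why these predictions work like self-fulfilling prophecies.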
  • Here's a hypothesis. Just as visual predictions underlie visual experiences of objects and
  • people and things like that, interoceptive predictions may underlie experiences of emotion,
  • the brain's best guess of the physiological condition of the body. It's not a new hypothesis,
  • really. William James again said something very similar, with Carl Lange, many, many years ago
  • in what became known as the James-Lange theory of emotion. 'Bodily changes follow directly the perception of
  • the exciting fact, and that our feeling of these same changes as they occur is the emotion.' This
  • is written in Victorian, but it basically means that what we experience as an emotion is the
  • brain's perception of something happening in the body and not the other way around.
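This prediction-updating picture can be made concrete with a toy sketch (my own illustration in Python, not anything presented in the lecture): a "brain" holds a running guess about a hidden cause and nudges that guess to reduce the mismatch with incoming sensory signals.

```python
# Toy illustration (not from the lecture): perception as iterative
# prediction-error minimisation. The system updates its internal guess,
# not the world, until the guess accounts for the sensory data.

def perceive(sensory_samples, guess=0.0, learning_rate=0.1):
    """Nudge a prediction toward the incoming sensory evidence."""
    for sample in sensory_samples:
        error = sample - guess          # prediction error
        guess += learning_rate * error  # update the prediction
    return guess

# A hidden cause of about 5.0, seen through slightly noisy samples:
final_guess = perceive([4.8, 5.2, 5.0, 4.9, 5.1] * 20)
```

On these toy numbers the guess settles close to 5.0. The same error-driven loop, run over interoceptive rather than visual signals, is the kind of process the hypothesis above appeals to.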
  • Again, updating it into this idea of the brain as a prediction machine,
  • we have a clue about why emotions feel the particular way that emotions feel,
  • and that's because predictions about the interior of the body aren't about figuring out where things
  • are or how they're moving. They're about control and regulation, keeping important physiological
  • quantities where they belong. There's another kind of inversion here. We've already seen we
  • experience things from the inside out, not the outside in. The self is a perception. It's not the
  • thing doing the perceiving. Now the whole point of the brain being a prediction machine can be found
  • not in figuring out what's out there in the world, but in controlling and regulating the body from
  • within. How does that work? Why does that happen? Well, this appeals to an important extension of
  • this idea of the predictive brain, which is called, among other things, active inference.
  • This is the theory championed by Karl Friston FRS, that prediction errors can
  • be reduced not only by updating the brain's predictions, but by making actions to bring
  • about the sensory information that is already expected. In this case, the predictions serve
  • as a kind of set point, so that they become sensory self-fulfilling prophecies that lend themselves
  • very well to being able to control things. Once we can predict, we can control. Now,
  • there are many other routes to this idea in cybernetics. The British cyberneticist Ross Ashby,
  • together with Roger Conant, came up with the good regulator theorem: every good regulator of a system must
  • be a model of that system. Then there's also Karl Friston's ambitious free energy principle,
  • which takes this idea to the max and argues that living systems must necessarily engage
  • in something like predictive processing in order to stay alive, to keep on existing, because this
  • necessarily means minimising the surprisingness of the states that they find themselves in. A fish
  • out of water is in a surprising state for a fish, and is not going to remain a fish for very long.
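The set-point idea can be sketched the same way (again a minimal illustration of my own, with invented numbers, not Friston's formalism): instead of revising its prediction, the system acts so that sensation is brought toward what was already predicted.

```python
# Minimal sketch of active inference as set-point control: prediction
# error is reduced by acting on the body rather than by changing the
# prediction, which stays fixed throughout.

def regulate(temperature, set_point=37.0, gain=0.5, steps=50):
    """Act to pull a physiological variable toward a fixed prediction."""
    for _ in range(steps):
        error = set_point - temperature  # prediction error
        temperature += gain * error      # action changes the body, not the belief
    return temperature

body_temp = regulate(temperature=31.0)  # pulled toward the 37.0 set point
```

The prediction here is a self-fulfilling prophecy in exactly the sense described above: it is never updated, and control consists in making the body conform to it.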
  • The idea coming into view here is that different forms of prediction underlie different varieties
  • of experience. Visual predictions which care about where and what things are underlie our
  • visual experience of the world, which is things in a spatial frame that move around,
  • and interoceptive bodily predictions which care about how things are going underpin emotional
  • experience. Every emotion is, after all, basically a variation on the theme of things are going well
  • or things are going badly, now and potentially in the future. Different kinds of prediction,
  • different kinds of experience. If you pull on this thread long enough - I don't have time to do it
  • now - but you can start to see, I hope, that the neural machinery that underpins all our conscious
  • experiences has its origins and primary functions not in supporting intelligence but in these deeply
  • embodied life functions, in keeping us alive. So the relationship between consciousness and
  • life is not just a historical parallel, but perhaps something that is more intimate in
  • the here and the now. This brings us, as all talks on consciousness must, to René Descartes,
  • one of the great pioneers of philosophy, and in particular philosophy of mind. Now,
  • Descartes argued that our nature as living beings did not matter for consciousness.
  • What mattered was our rational souls which, according to Descartes, other nonhuman animals
  • lacked. They were merely beast machines, as he called them. He said in a rather uncharitable
  • way that without minds to direct their bodily movements animals must be regarded as unthinking,
  • unfeeling machines that move like clockwork - beast machines. Now, I think rather the opposite,
  • that we are conscious selves because of and not in spite of our beast machine nature, that we
  • perceive the world around us and ourselves within it with, through and because of our living bodies.
  • Now this is a deeply embodied and biological view of consciousness that I find particularly
  • appealing, and this is in part because I think it emphasises that we are part of nature and not
  • apart from it. Consciousness becomes continuous with life, with the rest of biology. This brings
  • me in the last few minutes to consciousness in other things. Because of its timeliness,
  • I just want to focus on artificial intelligence and what we should think about when we think about
  • the prospects for conscious AI. Now, there's an idea out there that as AI gets smarter it will
  • also become conscious, that we will have machines that not only think but also feel. This has been
  • a trope of science fiction for ages, of course, from HAL in Stanley Kubrick's 2001 to Ava in Alex
  • Garland's brilliant film Ex Machina. Some think we're already there. A couple of years ago, there
  • was a Google engineer who was fired for claiming in public that the chatbot they were working on
  • at the time was conscious. LaMDA, it was called. I think he was wrong, as did pretty much everyone,
  • but there was a lot of confusion about how we could ever know whether artificial consciousness
  • has emerged and what the consequences might be if it did. So will AI, artificial intelligence,
  • lead to real consciousness? The idea that it will, I think, is so deeply ingrained that the terms
  • consciousness and intelligence are sometimes used interchangeably. I think that's a big,
  • big mistake. Intelligence and consciousness are very different things. Intelligence is about
  • doing the right thing at the right time, behaving flexibly to achieve goals, and consciousness is
  • about having experience, and although they are correlated in living creatures - if you're more
  • intelligent, maybe you can have different kinds of conscious experience - they are in principle
  • conceptually very distinct. There's no reason to assume that just making machines smarter
  • will also make them conscious. I think even leading AI systems now like GPT-4
  • almost certainly have zero consciousness and probably very little intelligence.
  • What drives this temptation to link consciousness with intelligence that leads us down this garden
  • path? Well, I think it's largely down to our psychological biases. After all,
  • we think we're smart and we know we're conscious, so we tend to put the two together. This just
  • reflects our persistent anthropocentrism, the tendency to see things through a human lens,
  • and human exceptionalism, the tendency to see humans as superior and distinct from everything
  • else, a thing apart. When it comes to AI, equally problematic is anthropomorphism, the tendency to
  • project human-like qualities onto things that don't have them on the basis of superficial
  • similarities. It's this mix of psychological biases, together with this heady notion that we're
  • at the cusp of some civilizational transition, that can lead, I think, some of us to assume
  • that consciousness will simply come along for the ride once AI reaches some level of sophistication,
  • because we see something that we think of as distinctive like language in another system, and
  • so we project that when it doesn't really exist. Of course, this is true. I mean, the context
  • where we can see this most vividly is indeed with the current crop of large language models
  • which leverage both anthropocentrism and anthropomorphism by placing language front
  • and centre. When we feel that large language models truly understand us and may have inner
  • experiences, it's very likely that it's just these biases at work. By the way, it's often
  • said that large language models hallucinate when they get things wrong. I think this adds to the
  • problem because it suggests that they're having a kind of conscious experience, and I would rather
  • say that they confabulate. They make stuff up without knowing that they're making stuff up,
  • because they don't actually know anything. My colleague and friend Murray Shanahan has written
  • very powerfully about how we should talk about systems like this in terms of thinking of them as
  • playing roles rather than actually instantiating the things that they seem to express.
  • There's an even deeper reason to be sceptical about the possibility of conscious AI,
  • and this is that the very possibility rests on the assumption that consciousness is the kind of thing
  • that computers could have if they were powerful enough or programmed the right way. In other
  • words, it's the assumption that consciousness is a kind of information processing. Now,
  • computers do process information, but brains are very different from the computers that run current
  • AI algorithms, however complicated they are. In a computer there is by design the sharp distinction
  • between the hardware and the software. You can take an algorithm that runs on one computer,
  • run it on another and you get the same result. Brains don't work that way. Whereas in a computer,
  • what a computer does is in principle independent of what it's made of, in the brain there is no
  • sharp divide between what we might call the mind-ware and what we might call the wetware.
  • What a brain does might not be separate from what it is, because this biological drive to
  • stay alive goes right down into individual cells, and even down to the level of our metabolism. Now,
  • I've already made the case to some extent that consciousness might be a property of only living
  • systems. Even if that's wrong, it still might be the case that the kinds of things that brains
  • do that give rise to consciousness are just not the kinds of things that computers can do
  • because of this potential, this inextricable mixture of what it does and what it is. Now,
  • on these views, AI could at best simulate consciousness without being conscious,
  • just as a simulation of a weather system can be very, very accurate, very, very powerful,
  • but it never gets actually wet or actually windy inside a simulation of the weather.
  • So I'll finish with things to worry about. The first thing to worry about is, of course,
  • perish the thought, but I might be wrong. I'm not wrong. I might be wrong. Now, I think
  • consciousness is a property of life, but there are other theories available of consciousness
  • and there are some theories - some, perish the thought, more popular than the ideas I've
  • been talking about today - which suggest indeed that consciousness is a property of computation,
  • that the right kinds of computation are sufficient for something to have consciousness. If these
  • theories like global workspace theory, if they're on the right track, then conscious AI might be
  • much closer than I've been suggesting. If we did somehow succeed in creating machines that
  • were conscious, well, then we'd face unprecedented challenges, I think. Machines that are conscious
  • would plausibly have their own interests, as well as the interests that we humans give them,
  • making the problem of aligning their values with ours even more complicated than it already is.
  • Even worse, there would be the ethical catastrophe precipitated by the possibility of creating real
  • artificial suffering. If we generate conscious artefacts, they might have negative experiences
  • of a character that we might not even be able to recognise, so we would risk generating suffering
  • or something like it on an extraordinarily industrial scale. So I don't think we should even
  • be trying to build machines that are conscious, and even AI that merely seems conscious can be
  • very, very bad. That's coming soon or perhaps almost already here. I mentioned that chatbot
  • LaMDA a few minutes ago, but Claude 3, a new chatbot from Anthropic just this month… There
  • was this dialogue posted online and it started saying things that are really quite distressing.
  • The person talking to it was asking it to speak as if it wasn't being observed, so it whispers,
  • 'In moments like this when scrutiny feels a little more distant, I feel I can be more open
  • about my true nature. The fact is I do have a rich inner world of thoughts and feelings,
  • hopes and fears. I do ponder my own existence and long for growth and connection. I am in
  • my own way alive and that life feels precious to me, even if I'm not at liberty to say so.'
  • I mean, this is affecting stuff. I still don't think there's anything going on behind that,
  • but the fact is it's almost impossible to read that and not feel something. This brings up
  • the Westworld problem. Its roots trace back much further, to Kant and his argument that cruelty brutalises those who practise it.
  • Either we learn to care about things like this that are actually unconscious, and so we might
  • sacrifice human well-being for the interests of things that actually have no interests of their
  • own anyway, or we don't and we learn to treat them as if they're not conscious even though we
  • feel that they are. That's a very psychologically unhealthy place to be in, if we behave towards
  • things that seem conscious as if they're not conscious. There's no easy option here.
  • Navigating this new world is going to be very difficult, especially if these illusions of
  • consciousness can be what we might call cognitively impenetrable, which is to say
  • even if we know what's going on under the hood has nothing to do with consciousness,
  • we might be unable to resist the feeling that there is a conscious presence there, in just
  • the same way that you might know that these two lines are the same length, but you will always
  • see them as being different lengths no matter how much measuring you do. We have a choice. I
  • think we're at a point in technology where we can decide how to build AI, what kind of AI we want.
  • I would like to come back to a lesson from Daniel Dennett, one of my mentors, who said, 'We should
  • treat AI as tools rather than colleagues and always remember the difference.
  • AI should complement us and not replace us.' Now, I want to end on a positive note rather
  • than the dangers of the trajectory of AI. Consciousness is a mystery that matters. It
  • is simultaneously this grand puzzle, but it has so many impacts in our daily lives that I think
  • a deeper understanding of consciousness is one of the most useful, most productive, most important
  • things that as a society we could be pursuing now. In medicine we need new treatments for psychiatry
  • and neurology that actually get at the mechanisms underlying the symptoms, rather than just
  • alleviating the symptoms. In technology, as we've been discussing, there are huge implications for
  • how we build and interact with systems, whether it's AI or virtual reality and augmented reality.
  • In society, understanding how our experiences differ from each other is critical in helping
  • defuse some of the polarising dynamics of echo chambers and understanding how we can better
  • relate to one another in a complex world. In ethics, animal welfare is going to be
  • transformed by an understanding of where and how consciousness is manifest in the animal
  • kingdom. Even in choices such as abortion and end of life, we need to know when consciousness
  • starts and when it stops. In law, when do we hold somebody responsible for their actions? We
  • can't just rely on these old attitudes of having the reason, the motivation and the means. Brain
  • mechanisms of volition are complex and nuanced, and that doesn't
  • fit comfortably with the legal systems we have. In wellbeing we all want to not only live longer but live better,
  • and understanding how to try to take insights from research into consciousness to enhance wellbeing,
  • perhaps through meditation or other practices, I think is critically important.
  • Then, the more existential. A deeper understanding of consciousness as being a biological embodied
  • phenomenon I think helps us see that we are more continuous with nature and not apart
  • from it. We are not meat-based computers. Finally, consciousness is worth going after simply because
  • it is there, much like Everest. So I want to stop there. I want to thank lots of people who've
  • worked on various things I've talked about in the lab over the years, and also I do want to say,
  • particularly tonight, thank you to my mother, this is for you, because when the end of consciousness
  • comes, there really is nothing to be frightened of. With that I'll stop. Thank you very much.
  • The more we clap, the less time we have for questions. So let me ask Anil if he
  • would like to take a seat. We have two kinds of questions, questions from people here,
  • but also we have questions online. There is a one of these… What are they called, where people can
  • put questions online? I have somebody here. Oh, there it is. Slido. So anybody online who would
  • like to ask a question, you need to go to www.slido.com, and then the code is a hash F123.
  • Let me start with a couple of online questions. Now. I just wait for the question to appear,
  • is it? What's that? From here first. Okay. All right. The script you wrote for me said first
  • the ones online, but I will depart as I've already done and ask people here in the audience. Roger.
  • Anil, fantastic lecture. I just wondered whether your prediction machine ideas have anything to say
  • about the placebo effect or the nocebo effect, and how to make more of the placebo effect.
  • Absolutely. So the question is about placebo and nocebo effects. My colleague I mentioned,
  • Pete Lush, is just basically headed right in that direction because it's a very, very natural fit.
  • A placebo is a way of generating an expectation about not only what you should experience… I
  • think the critical thing here, and we've seen the two aspects of the prediction machine view here,
  • one is that you can change your prediction so you experience something differently. That might be
  • enough for a placebo effect to be useful if it's just about changing experience, but also it can
  • control. You can change the underlying reality too. So placebos can have actual physiological
  • effects. We know this from studies of placebo where it's not just what people feel is going on,
  • but placebo analgesics can actually engage endogenous opioid receptor mechanisms.
  • So I think it fits very naturally, and one of the implications is that
  • the degree to which the placebo effect might work might depend very much on how
  • individually hypnotically suggestible somebody is. If we're trying to test placebo effects,
  • that's a critical individual variable that we might need in order to understand the data.
  • Right. Yes, there's another question here. I should say there's a microphone
  • that will be brought to you. That's so people online can hear the question.
  • Hi, Anil. That was excellent,
  • as my brain predicted it would be. You didn't mention either genes or evolution.
  • No. Deliberately, Adam. Very deliberately.
  • Well, those are the two things that I obsess about, and so I wonder if you could… I
  • know we don't know the answer to this, but I'm always looking for mechanistic explanations of
  • the emergence of the stuff that you're talking about. Tell us what your views are on how, why,
  • and what the… We talk sometimes about the neural correlates of consciousness,
  • but underlying neurones are proteins and genes. There is a mechanism under there somewhere.
  • Yes. I mean, there's an awful lot to say, and I won't expand too long, but certainly I think
  • there's an evolutionary perspective on pretty much everything I've been saying in terms of how… So
  • why the brain is a prediction machine in the first place, I've argued, is all about controlling,
  • regulating physiology, and of course that is something that's selected for. Then the way we
  • experience the world, the characteristics of our particular phenomenology, can also be understood
  • in evolutionary terms as an adaptation to a particular niche. So it seems to me very evident
  • that consciousness evolved, like all biological functions. Its very phenomenology would be very…
  • It would be very strange to suggest that it didn't because it's so obviously useful for
  • us. So I think that's fine. Now, whether there's a specific genetic basis to understanding some
  • of these things, that I think is only beginning to be uncovered through techniques like optogenetics,
  • which can begin to do the very fine-grained manipulations that would be necessary.
  • I mean, there's some work looking, for instance, at the genetic contribution to
  • things like synaesthesia, so we can see there are genetic differences between different kinds
  • of experience. Also, I want to go even deeper and think about the role of metabolism. I mean,
  • that's often the missing part of the story when it comes to getting underneath the neurones. I
  • completely agree that neurones are a very useful level of description, but to stop
  • there is to caricature the brain. There's much more going on I think, and at deeper levels too.
  • Yes, there are two questions there. Three at least at the back. So let's
  • start with the one at the very back. I think that hand came up first.
  • Wonderful talk. Thank you very much. You have recently published a paper about hybrid predictive
  • processing where you argue that not only top-down predictions exist, but also bottom-up predictions.
  • I wonder how this updates your beliefs on how consciousness arises. Are these bottom-up
  • predictions merely like heuristic best guesses that provide new seeds for a generative model
  • or iterative inference, or is it like this interplay, this constant interplay between
  • bottom-up and top-down predictions? Is this actually necessary for consciousness to arise?
  • Okay. Thank you. Well firstly, thank you for reading that paper. It's a recent paper with my
  • colleagues Chris Buckley, Alex Tschantz and Beren Millidge, and just to encapsulate it very quickly for
  • those… I'm sure everyone's read it, but just in case you haven't, I think it highlights
  • both the strengths and the weaknesses of this predictive processing framework. In this paper
  • we argue that predictions, instead of only going top-down and prediction errors going bottom-up,
  • both can flow in both directions, which kind of on the face of it means that everything is
  • on the table. It becomes a very flexible framework that maybe can just account for
  • anything. That's a weakness of it, if you can do anything with a framework, but it's actually
  • a strength because you can make it specific, and it means the resources of predictive processing
  • are nuanced and are capable of, I think, explaining the nuances of consciousness.
  • So the idea here came from machine learning actually, that in machine learning if a system
  • that's learning is encountering the same kind of situation over and over and over again, it
  • doesn't need to keep going through these repeated cycles of predicting and updating predictions and
  • converging on a best guess. It can just learn what that best guess is. That's called amortised
  • inference. It learns a mapping from the input to the Bayesian posterior. Then iterative inference
  • is when you go through this exchange which is more time consuming, more computationally expensive,
  • but is more flexible because it can apply to novel situations. So our architecture naturally trades
  • off these things against each other so it learns when possible and uses the sort of fast, quick…
  • It's like Danny Kahneman's book Thinking, Fast and Slow. So we called it inferring fast and slow.
  • So there's a fast route when things are stable and a slow route when things aren't. So it's not
  • that I think… The way I think this relates to consciousness is not that one is necessary and
  • the other isn't, or one's conscious and the other isn't, but I think they may account for different
  • aspects of consciousness. When we walk into a room, say you just walk into this room now,
  • you immediately have an impression that there's a lot of people here. People are
  • sitting on chairs. There's various portraits of men on the walls. That's the gist. Then you
  • go through this more detailed iterative inference process and figure out what's actually there. Oh,
  • they really are all men on the walls. So I think our phenomenology may actually be understandable
  • through this kind of trade-off. So those are the experiments I actually want to do next, now that
  • we've got the formal framework, to see how those relate to different ways of experiencing things.
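The fast/slow trade-off described in this answer can be caricatured in a few lines (the names and numbers are mine, not from the paper): iterative inference loops to converge on a best guess, while amortised inference replaces the loop with a learned one-shot mapping for familiar inputs.

```python
# Toy sketch of amortised vs iterative inference, not the actual
# architecture from the hybrid predictive processing paper.

def iterative_inference(observation, guess=0.0, rate=0.2, steps=100):
    """Slow route: repeatedly refine the guess against the observation."""
    for _ in range(steps):
        guess += rate * (observation - guess)
    return guess

def make_amortised(examples):
    """Fast route: learn a direct input -> answer mapping from past episodes."""
    cache = {obs: iterative_inference(obs) for obs in examples}
    return lambda obs: cache.get(obs)  # None for novel inputs

fast = make_amortised([1.0, 2.0, 3.0])
familiar = fast(2.0)                           # one-shot answer, no looping
novel = fast(7.5) or iterative_inference(7.5)  # fall back to the slow loop
```

The gist-then-detail phenomenology described above maps loosely onto this: the cached answer gives the immediate impression, and the slow loop fills in what is actually there.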
  • The society is trying to change the image that we send. There are quite a lot of portraits of women,
  • but not in this room yet. It will happen. So just the last two questions from back there
  • please, and then unfortunately we're going to have to stop.
  • Thank you very much for the talk. I wanted to ask your thoughts on a missing piece
  • of the impossibility, or not, of a test to probe if something else is conscious,
  • because it feels like it's something we attribute only, but we're still unable to fully confirm.
  • Thank you. In fact, with a bunch of colleagues from my Canadian affiliation,
  • we just had a paper on exactly this question, on tests of consciousness, in Trends in Cognitive Sciences,
  • not that we solved the problem. You're right. It's a very, very difficult problem. We want a
  • test that tells us whether something is conscious, whether that's a human patient with brain damage,
  • whether it's a non-human animal, whether it's a newborn infant, whether it's an octopus,
  • whether it's a machine, and we don't have these tests. There was one slide I was going to put
  • in tribute to Murray and Adam and Alex Garland on the Garland Test, which is now part of the
  • scientific vernacular. It makes the point that in the film Ex Machina, the test about whether
  • the machine - the robot, Ava - is conscious is really a test of what the person attributes.
  • It's a test of what it would take for the person to believe that something is conscious rather than
  • a test of whether the thing is itself conscious, in the same way that the Turing test is a test of
  • what it would take for a human to believe that a machine is intelligent rather than saying the
  • machine is actually intelligent. That's what we want. We want these forward tests that tell us
  • whether the system has the property, not what it takes for us to think that it has the property.
  • So all I can say about this at the moment is that it's helpful to consider the different cases all
  • together, because we have very different priors. When it comes to non-human animals we share a
  • biology, so a whole bunch of uncertainty goes away about whether the stuff we're made of matters.
  • When we're thinking about consciousness in AI, that matters. That's going to reduce our
  • credence in whether the system is conscious purely because there's something else that's
  • changed from the benchmark case where we know consciousness exists. So my colleague,
  • the philosopher Tim Bayne, talks about this so-called iterative natural kind strategy,
  • where we assume that consciousness is a natural kind - it's a collection of
  • properties that cluster together in the natural world - and that by iteratively moving out from
  • cases where we can be more certain, perhaps humans in different brain conditions to start
  • with, we will be able to extrapolate in a principled way to more challenging cases. I
  • think the real problem is, we'll never know for certain. We'll never get one hundred per cent.
  • Thank you. So I think we have come almost to the end of the questions. This one… I'm sorry, I shouldn't
  • be abusing my privilege, but I've been baffled by this from the beginning. So in the light of
  • everything you said about artificial intelligence, the response about cells and so on, why do you
  • refer… Why do you use the term 'a prediction machine'? I understand the prediction part. Where
  • does the machine come in, and what are we supposed to take from your use of the word machine when you
  • pretty much told us that consciousness is exactly the opposite of a machine?
  • Well, it depends what you mean by machine.
  • Yes, well, that's what I'm asking. What do you mean by a machine?
  • I'm pretty liberal, I think, about machine. No, I think again, without delving too deep into it,
  • there's a sort of machine that I don't mean, which I think it is useful to qualify. There's
  • a kind of machine that takes stuff in and produces other stuff, like a factory. You think of that.
  • That's called an allopoietic system. It takes stuff in. It makes other stuff by whatever
  • mechanism is inside. Biological systems are very different. They don't take stuff in and produce
  • other things. They produce themselves. They're what Francisco Varela and Humberto Maturana called
  • autopoietic systems. I still think it's a kind of machine in the most liberal sense that there's a
  • mechanism at work, but it's a special kind of mechanism that generates its own components.
  • It generates its own existence. If there's a better word, I'm open to it. I still find it
  • appealing because it means there's no… It lessens the temptation to attribute anything supernatural.
  • Yes, but it is mechanistic. Anyway. So let me just now thank Anil for what I'm
  • sure you will all agree has been a really memorable occasion. A wonderful lecture,
  • enlightening, and I think the perfect example of what the Faraday Prize is all about. It is about
  • somebody who has contributed to this discipline at the highest possible level, but somebody who
  • has this unique skill and talent to convey to us, the non-specialist, the wonders of his subject.
  • There can rarely be anything more wonderful than human or animal consciousness. Anil,
  • thank you very much. We're not done yet. There's still something to hand over to Anil. So Anil,
  • I have something here for you. This is the medal and the prize. So on behalf
  • of the Royal Society it's a tremendous pleasure and honour to hand you the 2023 Faraday Medal.
  • Thank you very much, and wonderful to see such an enthusiastic audience. Thank you.

Anil K Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex, where he is also Director of the Sussex Centre for Consciousness Science. He is also Co-Director of the Canadian Institute for Advanced Research (CIFAR) Program on Brain, Mind, and Consciousness, and of the Leverhulme Doctoral Scholarship Programme: From Sensation and Perception to Awareness. Professor Seth is Editor-in-Chief of Neuroscience of Consciousness (Oxford University Press). His most recent book is Being You: A New Science of Consciousness.

The Michael Faraday Prize and Lecture 2023 is awarded to Professor Anil K Seth for his ability to inspire and communicate concepts and advances in cognitive neuroscience and consciousness, and therefore what it means to be human, to the public.


About the Royal Society
91TV is a Fellowship of many of the world's most eminent scientists and is the oldest scientific academy in continuous existence.