How the human brain thinks about itself | 91TV

1 hour and 3 mins watch 23 January 2025

Transcript

  • So thank you all for coming. Thanks for everyone who is watching online. It's a real honour to be
  • giving this lecture in this amazing venue. I'm also delighted to have family and friends and
  • colleagues in the audience tonight. So I'm going to be talking about how the human brain thinks
  • about itself. To start us off, I'm going to tell you a story. This is Stanislav Petrov,
  • who was a lieutenant colonel in the Soviet Air Defence Forces at the height of the Cold
  • War. He was on duty monitoring early-warning satellites for incoming missile attacks
  • one morning in September 1983, at the height of the nuclear tensions between the US and Russia,
  • and on that fateful morning, the alarm system started to go off,
  • signalling that missiles were on their way from the US. Under the doctrine at the time,
  • he was instructed to pass that information immediately upwards, so that a retaliatory
  • strike could be launched within a matter of minutes, but instead of doing so, he calmly
  • reflected on what the systems were telling him and decided that instead this was a false alarm.
  • Now, of course, that was the case, and by calmly deciding not to act on that day,
  • he arguably did more than anyone else has done in human history to avert nuclear disaster. So
  • Petrov was seeing the world in shades of grey rather than black or white. He was willing to
  • entertain uncertainty about what his senses and the systems were telling him. This allowed him
  • to wonder whether what he was being told was illusory rather than real. He could wonder
  • whether what he was being told was perhaps false rather than true. Doing this is not easy, because
  • appearances don't always match up with reality. Plato famously imagined people trapped in a cave,
  • mistaking shadows on the wall for reality, and like them, we often misinterpret what we perceive.
  • So I'm a psychologist, and one of my heroes of experimental psychology is Richard Gregory,
  • who did a lot to illuminate how the mind sees the world. He used fiendishly-clever experiments
  • to showcase this using behaviour alone. This is one of his famous demonstrations,
  • the hollow-mask illusion. So this is a mask of a human face rotating, but most of us will see
  • the inner side of the mask, the concave side, as popping out towards us in three dimensions,
  • convex, and so this leads to this weird phenomenon where we're actually more likely to see two convex
  • faces rotating opposite to each other. One explanation for why this happens is that
  • we're so used to seeing faces as being convex and three dimensional and pointing towards us,
  • that this overrides any cues to depth and shading that tell us about the true concave
  • side of the mask. More generally, Gregory and others have argued that the brain is building
  • a model of what is out there based on limited or noisy sensory data from the outside world.
  • Our perception of the world is a mix of this incoming data and opportunistic guesswork.
  • Now, this is a bit like how a scientist would build a model of their data. Imagine a climate
  • scientist sitting down to build a computer model of how the earth's weather systems work,
  • and these models, hopefully, turn out to be useful in terms of predicting things in reality,
  • but they're not faithful reflections of reality, they're approximations. This feature of scientific
  • modelling led to this infamous quote by the statistician George Box, who said, 'All models
  • are wrong, but some are useful.' This is similar to what the brain is doing. The brain is building
  • useful models of the world. If they weren't useful, then we'd always be making mistakes
  • in our behaviour. If I perceived the water glass here as twice as large as it really is, then I
  • would knock it over the minute I tried to pick it up. Even though the models are mostly useful,
  • they can sometimes, as we saw in the case of the hollow face illusion, be misleading.
  • Now, these errors we make in how we see the world are usually benign, just side effects
  • of the brain's attempt to understand what's going on, but if the distortions become too extreme,
  • then psychiatric symptoms such as hallucinations and delusions might result, and this has been
  • elegantly and beautifully described in Elyn Saks' memoir of the experience of having psychosis.
  • Even in otherwise-healthy brains, when we try and remember our past, we might do so in a way
  • that causes errors in what we recall. The American psychologist, Elizabeth Loftus, has shown us that
  • people can be readily led to believe that events occurred in their childhood that never really did,
  • such as this hot-air balloon flight. So this tells us that memory is not like a videotape
  • just reading out the past. Instead, it's the brain actively trying to reconstruct what might
  • have happened, and this sometimes leads to errors. More generally, we have this voracious appetite
  • for seeing patterns in the data that are coming in, and this can lead us to build beliefs that
  • go beyond the facts. It can lead to things like conspiracy theories. So all of these different
  • examples show us that the brain is building models of reality that are mostly useful,
  • sometimes misleading, and that leads us naturally to wonder how can we do what Petrov did that
  • morning? How can we realise that we might be wrong in what we're seeing? How can we distinguish
  • appearances from reality? Now, I hope that understanding how the brain is able to reflect
  • on and think about itself in this way will help us better navigate complex worlds, and also help
  • us recognise when we might be trapped by our own illusions, but this notion of the brain engaging
  • in self-reflection and self-knowledge has been controversial. The French philosopher, Auguste
  • Comte, suggested that true self-knowledge wasn't possible. It's somehow paradoxical. It would
  • require the thinking individual to cut himself in two. The organ observed and the organ observing
  • are identical. So how can any observation be made? He's suggesting that the brain thinking about
  • itself is somehow nonsensical, but he exchanged letters on this issue with the British
  • philosopher, John Stuart Mill, and Mill responded that perhaps self-reflection is possible,
  • not by this thinking-about-thinking paradox, but instead, because we can use the regular way our
  • mind works to take our own thoughts and actions as the basis for future processing. He says here,
  • 'We think about what we've done once the act is over, but with the impression still vivid
  • in our memory.' This simple fact shoots down Mr Comte's entire argument. More recently,
  • this idea has recurred in the writings of Daniel Dennett and Doug Hofstadter, who have
  • argued that the human mind is perhaps unique in being able to think about its own actions
  • and thoughts and feelings in a recursive fashion. So this is what psychologists call metacognition,
  • or thinking about our own actions and behaviours, skills and abilities,
  • but what is metacognition? How does it work? How can we study it in the lab to probe how it might
  • be operating in the human brain? More generally, how does the brain think about itself in this way?
  • These are the questions that are going to occupy us for the rest of this lecture. So I'm going to
  • tell you about two test cases that we've been focusing on in the lab, to try and illuminate
  • answers to these questions. The first is how we monitor our decision making. How do we know,
  • without being told, whether we are making good or bad decisions? The second is how and whether we're
  • able to distinguish reality from imagination, and recognise that the world might not always
  • be quite as it seems. So this work is part of the broader science of human metacognition, which is
  • in turn part of an even broader field known as cognitive neuroscience. That's the scientific
  • study of the biological and computational processes underlying how the mind works.
  • Since my PhD in the late 2000s, I've been lucky to be pursuing cognitive neuroscience in what
  • many would say is its spiritual home, in Queen Square in London. Queen Square is home to this
  • amazing array of research institutes, which are all focusing on different aspects of how
  • the brain works. There's also the National Hospital for Neurology there. One of those
  • institutes is the Functional Imaging Lab, or the FIL, which is developing ever more ingenious
  • ways of imaging the human brain, and this is where many of the studies that I will tell you
  • about this evening were carried out. So when we focus in on metacognition about our decisions,
  • I'm going to cover three areas, which have been the focus of our work for the past few years
  • now. So the first is just how do we solve this problem of measuring metacognition in the lab?
  • The second is how we can then apply these tools to illuminate a brain basis for self-monitoring,
  • and the third is how we can use these insights from the neuroscience to start explaining
  • cases where metacognition begins to fail. I just want to get something cleared up to start
  • with. When you think of making a decision, when we talk about decision-making, we probably think of
  • making a weighty decision, such as which house to buy or which job to take, but as neuroscientists
  • studying decision-making in the lab, we often find it useful to give people much simpler decisions
  • about perceptual attributes, such as whether this patch of lines is tilted one way or the other,
  • whether a face is embedded in this noise or not, whether this video of moving dots
  • is going one way or the other. Even though these stimuli are contrived and simple, they allow us
  • tight experimental control over how difficult those decisions are for each person who comes
  • into the lab. So to use these stimuli to study metacognition, we can bring you in, we can get you
  • to make judgements about the world - we call these type one judgements, for instance saying which way
  • the dots are moving - but we can also then ask you to make what we call type two judgements.
  • These are decisions about my decisions about the world. How confident am I that I got the answer
  • right? Do I think I might have made an error? When we have lots of these judgements over time,
  • we can begin to build up a picture of how good your self-monitoring is. Intuitively,
  • when you have more confidence when you're correct, and less confidence when you're wrong, then we can
  • say that you have a high degree of metacognitive efficiency. This is good introspection about your
  • performance. So here are some real data from a single participant that we collected a few years
  • ago now, in collaboration with Rimona Weil and Geraint Rees, where we looked at this task where
  • people were asked to decide whether the first or the second presentation had a slightly brighter
  • patch in it, and then they were asked to rate how confident they felt in that decision. Now,
  • on the top row here, you can see the accuracy over time of that person. A one is when they
  • were correct on that decision, a zero is when they were incorrect, and the bottom row shows
  • their confidence fluctuating from trial to trial. What I find quite striking, just looking at this
  • data, is that even in this very simple task where the stimulus remains pretty similar from trial
  • to trial, people are fluctuating wildly in their confidence. Sometimes they're confident that they
  • got the answer right, and other times they're not confident at all. So this leads us then to
  • this question, which we can start answering with data like this. Do people actually know
  • when they are right and when they are wrong? Now we can answer this by starting to fit statistical
  • models to these data that summarise the overlap between the distributions of confidence you
  • get when you're right and the distributions of confidence you get when you're wrong. This gives
  • us individual parameters or numbers that we can use to describe each person's data. In this case,
  • we can get a parameter that we call d prime, which is your performance on the primary task,
  • how good you are at discriminating these patches, and we also get a number we call
  • meta d prime which is now how effectively your confidence is tracking primary task performance.
  • Then, finally, we can take the ratio of these two parameters, meta d prime over d prime,
  • and that tells us something about your metacognition relative to your task performance,
  • what we call metacognitive efficiency. This ratio should be around one if
  • your self-monitoring is intact. Now, here's a plot I made when I was preparing for this talk,
  • which aggregates data from a number of studies we've run now using this task. Now, plotting
  • metacognitive efficiency on the x axis here, and I'm actually plotting a log of the efficiency
  • score, the ratio. because that helps create a more evenly distributed plot. Essentially,
  • what you can see is that there's variability in people's metacognitive efficiency. On
  • average in the population, metacognition is good. The ratio is indeed around one,
  • or the log of the ratio is around zero, but some people on this side have acute awareness of making
  • good or bad decisions. Other people on this side have more limited insight into their performance.
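The logic of this measurement can be illustrated with a toy simulation. To be clear, this is only a sketch: it uses a simplified type-2 AUROC as a stand-in for the full meta-d'/d' fitting procedure, and every parameter value here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_observer(n_trials=2000, dprime=1.5, meta_noise=0.0):
    """Signal-detection observer who then rates confidence.

    meta_noise corrupts the read-out used for confidence, so the
    observer's metacognition can be worse than their perception."""
    stim = rng.integers(0, 2, n_trials)                # 0 = left, 1 = right
    evidence = rng.normal(dprime * (stim - 0.5), 1.0)  # type 1 evidence
    choice = (evidence > 0).astype(int)
    correct = (choice == stim)
    # Type 2 stage: confidence is read out from a noisier copy of the evidence
    confidence = np.abs(evidence + rng.normal(0.0, meta_noise, n_trials))
    return correct, confidence

def type2_auroc(correct, confidence):
    """P(confidence on a correct trial > confidence on an error trial):
    0.5 = no insight into one's own accuracy, 1.0 = perfect insight."""
    c, e = confidence[correct], confidence[~correct]
    greater = (c[:, None] > e[None, :]).mean()
    ties = (c[:, None] == e[None, :]).mean()
    return greater + 0.5 * ties

good = type2_auroc(*simulate_observer(meta_noise=0.0))
poor = type2_auroc(*simulate_observer(meta_noise=2.0))
print(f"good metacognition: {good:.2f}, poor metacognition: {poor:.2f}")
```

Both simulated observers have identical first-order sensitivity (d' = 1.5), but the added metacognitive noise pushes the second observer's type-2 AUROC towards 0.5, the analogue of a low meta-d'/d' ratio.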
  • Now, a natural question that arises here is, how broad is this capacity we're tapping into?
  • We can start answering that question by looking at the covariance in metacognition
  • across different tasks. We can do this by measuring your metacognition on one task,
  • such as deciding which of two patches contains more dots, and then asking whether that predicts
  • your metacognition on a very different task, such as deciding which of two countries has a
  • higher GDP. This is recently-published work together with Micah Allen's lab in Denmark,
  • and we indeed saw that controlling for covariance in performance, we do see systematic covariance
  • in metacognitive efficiency. If people have good metacognition on one task,
  • they also tend to have good metacognition on a very different task. We've also looked at how
  • this relates to what we call first-order cognitive abilities, and that includes traditional measures
  • of intellectual function like IQ. What we find is that we can usually
  • distinguish a dimension in our data which maps onto metacognition, but not cognitive ability,
  • and so that suggests that a human, or perhaps even a machine, might be smart, but have limited
  • self-awareness of how it's doing, and vice versa. I'll return to this point towards the end when I
  • talk about metacognition in AI. Okay, so how does the brain achieve this feat of metacognition,
  • and why does it vary across individuals? So one powerful idea here, a theoretical idea,
  • is that we can identify two different modes of processing in the brain that are relevant for
  • thinking about metacognition. The first is how we understand the world. We saw that earlier in
  • considering how we perceive the outside world based on limited data, but we can also engage
  • in this second stage of self-directed processing, where we can use our own hypothetical actions on
  • the world as inputs. We can think about how confident I am that these lines are indeed
  • tilted left rather than right, or how confident I am that I'll be
  • able to score a goal by kicking the football. More generally, estimating our confidence in
  • a hypothetical decision or action that we might make, unlocks a rich capacity to doubt ourselves.
  • One of the first laboratory studies to try and distinguish these two types of processing was
  • conducted by Pat Rabbitt in Oxford in the 1960s. So this study is another beautiful example of the
  • depth of insight you can get from behavioural data alone, if the experiment is designed
  • correctly. So this required subjects to just say whether a number that was briefly flashed on the
  • screen was either higher or lower than five, and they were asked to do this really quickly,
  • so they sometimes made errors, but they were then also given another button to press that
  • they could use to signal to the experimenter that they'd made an error on that trial. What
  • Rabbitt found was that the latencies for using this error correction button, how fast you were
  • to press that error-correction button, were faster than the fastest response to an external stimulus
  • that he could measure. So what that shows you is that you can detect and correct errors efficiently
  • without being given any external signal. It relies on some kind of internal brain process.
  • A little bit later on, work using EEG recordings linked this error-correction process to a brain
  • potential that occurs only tens of milliseconds after people make an error. Now, officially,
  • psychologists refer to this as the error-related negativity, but unofficially, sometimes people
  • call it the oh-shit response. This signal has been linked to activity in the medial frontal
  • lobe at the border of the anterior cingulate and the pre-supplementary motor area. Now,
  • when I was doing my PhD at UCL, we were building on this work to try and probe the
  • brain systems that might be involved in making confidence judgements about your decisions,
  • so we set up an experiment that would compare two different conditions. In both conditions,
  • you had to make a type one decision about whether this noisy patch contained a face,
  • and we could titrate how much noise was in the image to make that a difficult decision for you,
  • and in the metacognitive judgement condition, people were asked to rate how confident they
  • felt in that decision. Then we also set up a control condition, where now people just
  • had to slide the slider on the confidence scale to a predetermined point. They weren't required
  • to actively reflect on their choices. When we contrasted those two conditions,
  • we found that there was elevated activity in a set of regions that we thought was relatively
  • specifically involved in the metacognitive judgement, because in both cases, we're asking
  • people to still make this type one decision. So we're keeping that part of the task constant.
  • We found two sets of regions that were increased in their activity. The first one was this dorsal
  • anterior cingulate region, that we know from the previous work on error correction is involved in
  • error monitoring, but we also saw changes in activity in the lateral prefrontal cortex,
  • in a region at the border between area 46 and lateral area 10, and in both of these regions we
  • saw that activity here was not just elevated when you were making a judgement about your decision,
  • it also predicted how confident you would go on to feel in that decision.
  • Now, knowing where things are happening in the brain is a good start, but really what we want
  • to know is what are these regions doing to enable metacognition. This is something I began thinking
  • about at New York University, when I went there for my postdoc, and with Nathaniel Daw there,
  • we created a simple twist on the classic perceptual decision-making task. So we
  • asked people to make this initial judgement about whether a patch of moving dots was going to the
  • left or to the right, but rather than just getting them to rate their confidence in that decision,
  • we then inserted some new information after they made their decision, but before they rated their
  • confidence. This is useful for us, because it then provides us an experimental control
  • over new information that might inform their eventual metacognitive judgement. So you have
  • this world-directed phase, and you're making a judgement about the outside world, and then
  • you have this self-directed phase where you can use this new information to update how confident
  • you feel that that original decision is correct. We can also write down a mathematical model of
  • what should happen here. So this model is quite minimal. It's a toy model, but it has these two
  • parts in it. It has a world-directed phase, where you're accumulating evidence about which way the
  • dots are going, and then it has a self-directed part, where you're comparing that against
  • the choice that you actually made, to figure out how confident you should feel in that response.
  • The nice thing about theoretical models like this is that we can use them to simulate what we should
  • see in the brain data, if there are regions supporting these kinds of transformations. I
  • just want to highlight one prediction here that comes from this self-directed part of the model,
  • and that says that, as the post-decision information gets stronger, more significant,
  • then you should increase your confidence if you're originally correct in your decision,
  • because that information is now supporting your decision, but you should decrease your confidence,
  • in red here, if you were originally incorrect in that decision, because that new information
  • is going against the decision you made. So we can use this signature, these diverging
  • lines, as qualitative motifs that we can then go searching in our brain data for. When we did that,
  • we found that activity in the posterior medial frontal cortex expressed this interaction pattern,
  • this signature of collecting this new evidence and using it to figure out whether you were
  • right or wrong. We can also look at how that new evidence is getting used to inform your
  • metacognitive judgements about yourself. Now, this prediction is a little bit more complex,
  • but essentially, it involves looking for brain areas which show a monotonic relationship between
  • this model-predicted quantity here, which is telling you how much the new information
  • should update your belief, and what the confidence value that you actually express is at the end of
  • the trial. When we look for those signatures, we found that these areas of the frontopolar cortex
  • on the lateral surface, and also in lateral area 46, were showing this signature. So that
  • suggests to us that the frontopolar cortex is somehow involved in perhaps updating your
  • metacognitive judgement about yourself. Now, fMRI - functional MRI - which produces
  • these images, is a wonderful tool to probe the anatomical details of neural computation,
  • but the signals we acquire from these scanners are relatively slow. Another important imaging
  • technique that we can use to complement fMRI is known as magnetoencephalography, or MEG,
  • and this is used to measure small fluctuations in the magnetic field around the head that are linked
  • to ongoing neural activity. When we measured MEG activity in this task - this was work done
  • by Max Rollwage when he was a PhD student in my group - we found that we could decode in real time
  • both this world and self-directed component of the model. So we can decode whether your brain
  • activity suggests that the dots are going to the left or the right - that's the world-focussed
  • part - but we can also decode how confident you feel about your choice about the dots
  • going left or right. These seemed to emerge in parallel from the start of the trial, but what
  • we found very interesting was that it's only the confidence component of these data that goes on to
  • predict whether you're going to change your mind at the end of the trial. So that suggests that
  • this metacognitive aspect of processing is really crucial for guiding future behaviour.
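The world-directed and self-directed stages of this model can be sketched in a few lines of simulation. This is a minimal toy version written for illustration, not the published model; the evidence strengths, trial counts and logistic confidence read-out are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(post_strength, motion=1, pre_strength=0.1, n=20):
    """One trial: world-directed evidence, a choice, then self-directed
    confidence that also folds in post-decision evidence."""
    pre = rng.normal(motion * pre_strength, 1.0, n).sum()    # type 1 evidence
    choice = 1 if pre > 0 else -1                            # left/right decision
    post = rng.normal(motion * post_strength, 1.0, n).sum()  # post-decision evidence
    # Self-directed stage: compare the total evidence against the choice made
    confidence = 1.0 / (1.0 + np.exp(-0.05 * choice * (pre + post)))
    return choice == motion, confidence

def mean_confidence(post_strength, n_trials=4000):
    by_accuracy = {True: [], False: []}
    for _ in range(n_trials):
        correct, conf = trial(post_strength)
        by_accuracy[correct].append(conf)
    return np.mean(by_accuracy[True]), np.mean(by_accuracy[False])

weak_c, weak_e = mean_confidence(post_strength=0.1)
strong_c, strong_e = mean_confidence(post_strength=0.5)
print(f"correct trials: {weak_c:.2f} -> {strong_c:.2f}, "
      f"error trials: {weak_e:.2f} -> {strong_e:.2f}")
```

Running this reproduces the diverging-lines signature: stronger post-decision evidence pushes confidence up on correct trials and down on errors.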
  • So we have a picture that's something like this. We have initial perceptual and cognitive
  • brain processes involved in figuring out what's out there in the world, but then in parallel,
  • we have a second stage of processing that kicks in to figure out how confident I should feel about
  • the decisions I'm making about the world. It's this second stage of processing that allows us
  • to effortlessly engage in metacognition about our decisions as we go. So far,
  • I've been talking about these relatively contrived perceptual stimuli, but we've also been interested
  • to build out some more naturalistic decisions, and it turns out similar brain processes are involved.
  • So with my friend and UCL colleague, Benedetto De Martino, we've been studying how people reflect
  • on making choices about what snack to eat, and it turns out, very similar brain processes in
  • the prefrontal cortex are engaged when people do this. With a former lab member, Dan Bang, we've
  • been studying how the social context influences your metacognitive judgements about yourself,
  • and we find that activity in this frontopolar region represents not only information about my
  • choice, but also about the social context and how that affects how we think about ourselves.
  • So these kinds of data can also start helping us explain the variation between individuals
  • in metacognition that we started with. We found that in healthy brains the structure
  • and connectivity of these prefrontal areas are correlated with your metacognitive ability,
  • and in patients who have damage to the frontopolar areas of the brain, we can find that metacognition
  • is subtly impaired, even though other aspects of language and cognition remain intact.
  • So just to summarise this part, then, on metacognition, we found that metacognition
  • can be quantified by asking how closely your confidence judgements track your
  • performance. We can use those tools to study the neural basis of self-monitoring,
  • how we form confidence in our decisions, and engage in both world and self-directed brain
  • processes. We can start explaining metacognitive dysfunction by probing how disruption to these
  • self-directed processes, supported by prefrontal cortical networks, can impair self-awareness. So
  • that's metacognition about decision making. Let's now turn to our second test case,
  • how we distinguish reality from imagination. So we saw earlier that our perception of the
  • outside world is a construction based on both the incoming data, but also our prior knowledge,
  • our background beliefs, about what's out there in the world. It turns out when you're just engaging
  • in imagining the outside world, even with your eyes closed, you're using similar brain circuits
  • as when you're actually perceiving it for real. So given this overlap, a natural question arises,
  • can we tell the difference? How does the brain know whether its sensory activity reflects reality
  • or imagination? This is a question that's been tackled over the past few years by a brilliant
  • postdoc in my group, Nadine Dijkstra, who's now setting up her own lab at UCL. With Nadine,
  • we've been setting up experiments in the lab that allow us to investigate imagination in
  • a controlled setting, so we can show people dynamic noise like this, a bit like watching
  • static on a TV screen, and then ask people to imagine particular stimuli in the noise,
  • such as these tilted lines here known as gratings. So we might ask someone, 'Okay, imagine this left
  • tilted grating appearing within the noise as vividly as possible, as if it was really
  • there,' and afterwards, we can then ask them to reflect on whether anything they saw was imagined,
  • or whether it was actually real. So I want us to try this together now. I'm going to
  • show you a patch of noise like this. I want you to try and imagine left tilted
  • lines in that noise on the screen as vividly as you can. Everyone ready? Okay, here we go.
  • So we can ask you, was there actually a stimulus presented on the screen there, or was anything you
  • saw only in your imagination? Hands up for just imagination. Oh, okay, a few people. Hands up for
  • stimulus. Okay, so there was actually a stimulus there that was fading in, and this was relatively
  • high contrast, but the ones we gave people in the actual experiment were more noisy. Hopefully this
  • just gives you some idea of how difficult it can be to make these judgements about whether
  • what we're seeing is real or imagined. Now, the actual experiment that Nadine
  • ran looked like this. We gave people an initial few trials, where they imagined the stimulus in
  • noise, and then on a final trial, without them expecting it, we then faded in a real stimulus,
  • and that real stimulus could either be the same as the thing they're trying to imagine, so
  • congruent with their imagination, or it could be of an opposite orientation, so incongruent with
  • their imagination. We then asked, at the very end of this final trial, was there a grating presented
  • on the screen, or was what you saw just in your imagination? Now, we can then separate our data
  • from this final question into whether they were imagining the same stimulus as what we presented,
  • or whether they were imagining a different stimulus. What's fascinating here is that when
  • you're imagining the same stimulus as we present to you, you're more likely to think that there was
  • actually a real stimulus out there. It's almost as if your imagination is giving you a bit of a boost
  • to your decision that there's something real. When we looked at how vivid your imagery was,
  • how vivid your experience of imagination was, then what we found was that when you judge
  • something was real - so this is the solid bars here - then you were more likely to experience
  • vivid imagination on that trial. So it seems like there's an intermixing of the signals
  • from imagination and perception that's driving your judgement of whether something is real or
  • not. Thinking about this a bit more, we thought that one explanation of this pattern is perhaps
  • that the brain is actually using the vividness of experience to figure out whether something is real
  • or imagined, because one of the most striking differences between reality and imagination
  • is its amount of detail or vividness. Perception is usually vivid and detailed,
  • whereas imagery is usually less so. This idea had already been pointed out by David Hume in his
  • Treatise of Human Nature, in which he wrote that the idea, or imagination, of red that we form in the
  • dark differs only in degree of intensity, not in nature, from the impression, or perception,
  • of red that strikes our eyes in the sunshine. So the hypothesis, then, is that the difference
  • in sensory activity is used by the brain to figure out whether what we're seeing is real or imagined,
  • and we refer to this idea as the reality threshold model. We can simulate this by assuming there's
  • some common neural population that gets inputs both from perception and imagery, and the inputs
  • in perception are a bit stronger than the ones in imagery. Then, this summed level of
  • activity is compared to some internal threshold. If the summed activity goes above the threshold,
  • then the brain concludes that it's real. If it's below, it concludes it's just imagined. Again,
  • the useful thing about writing down a theoretical model like this is that we
  • can use it to simulate what we would expect to happen in our experiment, and then compare it
  • to what we actually find. So if you remember that these were the actual data from our experiment,
  • and this is what our model predicts we should see in the data, a really nice qualitative match.
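The reality-threshold model itself can be sketched as a short simulation. Again, this is a toy illustration under assumed numbers (stimulus strength, vividness distribution, noise level, threshold), not the model that was actually fitted to the data.

```python
import numpy as np

rng = np.random.default_rng(2)

THRESHOLD = 1.0

def judged_real(stim_strength, vividness):
    """Perception and congruent imagery sum in a shared sensory
    population; activity above an internal threshold is judged real."""
    activity = stim_strength + vividness + rng.normal(0.0, 0.3)
    return activity > THRESHOLD

n = 10_000
vividness = rng.gamma(2.0, 0.25, n)   # trial-to-trial imagery vividness

# A weak real stimulus fades in. Congruent imagery adds to the same
# population; incongruent imagery contributes nothing to it.
congruent = np.array([judged_real(0.6, v) for v in vividness])
incongruent = np.array([judged_real(0.6, 0.0) for _ in range(n)])
print(f"P('real'): congruent {congruent.mean():.2f}, "
      f"incongruent {incongruent.mean():.2f}")

# With no stimulus at all, trials that cross threshold anyway
# ('false alarms') should be the ones with the most vivid imagery.
no_stim = np.array([judged_real(0.0, v) for v in vividness])
print(f"vividness when judged real: {vividness[no_stim].mean():.2f}, "
      f"when judged imagined: {vividness[~no_stim].mean():.2f}")
```

With these assumed parameters, congruent imagery boosts 'real' judgements relative to incongruent imagery, and on no-stimulus trials it is the most vivid imagery that crosses the threshold.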
  • Now, perhaps this is not so surprising, because we actually developed our model to match this
  • pattern, but we can develop a more stringent test of the model by running a second experiment,
  • and this one involved presenting no stimulus at the end of that sequence of trials,
  • and yet we found that people would sometimes just spontaneously hallucinate or false alarm
  • a real stimulus in the noise. What the model predicts is that on those trials,
  • they should have particularly vivid imagery, the solid bar there, because it's the vividness
  • of their imagination that's getting them over the threshold to say that something's really
  • out there, and that's exactly what we saw in the data when we ran this experiment. We can
  • also then go on to look at the neural basis of this internal signal that we're using to make
  • reality judgements, and to do this, we combined a variant of this task with functional MRI in
  • collaboration with a master's student, Thomas von Ryan, and my colleague at the FIL, Peter Kok.
  • Now, we can use the model to again simulate what we should see in a brain area that is tracking
  • this level of vividness that we're using to make reality judgements. I won't go through all the
  • predictions here, but basically there are quite a rich set of patterns we should see from our model
  • simulations that should be recapitulated in a brain area that is representing this reality
  • signal. Strikingly, in the fusiform gyrus, a high-level visual area, we saw that all of these
  • predictions were obtained. So that suggests that perhaps imagination and perception are
  • coming together in the high-level visual areas, predicting people's judgements about
  • reality. We can also look for brain areas in which the activity is better explained
  • by a reality threshold, rather than this continuous reality signal, and to do that,
  • we looked for activity that was better explained by a binary judgement of reality rather than a
  • continuous judgement of imagery vividness. This contrast isolated regions in the prefrontal
  • cortex that we know from other studies are involved in metacognition and self-monitoring,
  • and these are active when people are making these binary judgements. So to put it all together,
  • then, one idea for how reality monitoring works is that these higher-order regions
  • of prefrontal cortex are monitoring the strength or reliability of our sensory activity, and if
  • it proves strong enough and it gets over this threshold, then the brain decides that what it's
  • seeing is likely to be real. Now, this is only one factor, of course, and in Nadine's new lab,
  • she's pursuing lots of other factors, such as how much control we have over our imagery, and so on,
  • that might inform this judgement about reality. So to come back to the start then I think both
  • Comte and Mill were right in different ways. Comte was right, in that the way the brain thinks about
  • itself is not with another little brain in the head or doing some paradoxical recursion,
  • but Mill was also right that metacognition can be achieved with relatively simple,
  • clever psychological tricks. We can engage in metacognition about our decisions, knowing we've
  • made a suboptimal decision, by engaging in post-decisional processing in this self-directed frame
  • of reference, where we ourselves become input to the machinery of self-reflection. We can
  • distinguish reality from imagination by tracking cues such as sensory strength, which usually
  • predicts whether we're perceiving or imagining. I find it intriguing and also exciting that these
  • two mechanisms might share a deeper commonality, because we can think of reality monitoring
  • as tracking how confident we are now, not in our actions, but in our models of the world.
  • So this is really just the beginning, I think, of this research programme,
  • and in ongoing work we're extending it in a number of different directions. So far today,
  • I've talked about metacognition in a single moment in time, about individual decisions or percepts,
  • but in work with Marion Rouault and others, we're also looking at how this is built out to broader
  • global models of our skills and abilities, and even personalities, so really investigating how
  • the brain might carry around a model of ourselves as individuals. We're also increasingly interested
  • in how our awareness of bodily processes might inform and shape these metacognitive models.
  • So why does any of this matter? Well, on the one hand, I think, or I hope, I've convinced you that
  • investigating self-awareness is fascinating for its own sake. It's a remarkable aspect of
  • how the human mind works, and it's one that we've only relatively recently had the tools
  • to probe in the lab, but this understanding also has more widespread implications.
  • In psychiatry and neurology, understanding how metacognition works helps us explain why some
  • patients might lack insight into symptoms, such as hallucinations and delusions. In education,
  • it's long been recognised by psychologists and teachers that encouraging children
  • to develop metacognition about what they don't know, as well as what they do know,
  • can help them in self-regulated learning, and for society in general, being able to
  • collaborate with others and resist polarisation or dogmatic thinking, requires maintaining an
  • open mind and realising when our view of the world might be wrong. Finally, developing an
  • understanding of human metacognition might help us inform how we build metacognitive AI systems,
  • and in the last couple of minutes, I just want to dig into this last point in a little more detail.
  • So we're all becoming familiar with the challenge of figuring out when and when
  • not to trust the outputs of AI systems. It's well known that systems such as large language models
  • hallucinate and confabulate. This is actually an article that I took from today's issue of
  • Nature. It came out online just a few days ago. I quite enjoyed a quote from one of the academics,
  • a computer scientist in the States, within this article, who said that large language models sound
  • like politicians. They tend to make stuff up and be totally confident no matter what. Now, solving
  • this kind of problem is only going to become more critical as these AI and robotics systems move
  • into safety-critical arenas such as self-driving cars. Now we can think of metacognition in these
  • interactions as being important on both sides of the human-AI collaboration,
  • both for humans to be able to ask themselves what they do and do not know, and when they might need
  • to seek help from AI, but also for AI systems to effectively communicate and signal their
  • confidence and uncertainty back to human partners. More generally, as the first order abilities of
  • these systems become stronger and approach human levels of function,
  • the need for effective metacognition is going to become more important. I think we're going to need
  • to invest in developing artificial metacognition, or AM, as well as invest in developing AI.
  • An issue here is that actually many of these systems that are already out there have internal
  • estimates of confidence or uncertainty, but they're just not often on public display.
  • In work from my lab by Clara Colombatto, we found that without these explicit metacognitive signals,
  • humans often think that AI systems are more confident than they really are. This leads to
  • this calibration gap - this is a plot from Mark Steyvers' lab - so that human confidence in AI
  • knowledge is just uncorrelated, essentially, with the internal accuracy of the model.
  • This call was echoed last summer by Bill Gates, who said in this quote, 'The big frontier in AI
  • is,' what he calls, 'metacognition. Understanding how to think about a problem in a broad sense
  • and say, "Okay, how important is this answer? How could I check my answer?" It will be human-like in
  • terms of knowing to work hard on certain problems and having a sense of confidence and ways of
  • checking what it's done.' So more generally, what I think these developments are tapping into is a
  • growing recognition that we need to be cultivating not only human and machine intelligence, but also
  • a sense of wisdom, a sense of self-awareness about what we do and do not know. Now, this is not a new
  • idea, of course. This goes all the way back to the ancient Greeks and their recognition that knowing
  • what we don't know, knowing ourselves, is possibly the highest form of human knowledge. So that's it.
  • I hope that I've given you a sense of how we might be able to start to understand this remarkable
  • aspect of how the human mind works. If you're interested in exploring more about this topic,
  • we have two books which I think are useful places to start. The first is aimed at a more academic
  • audience and covers many of the key methods underpinning the cognitive neuroscience of
  • metacognition, and the second, Know Thyself, is aimed at a general audience. Now, none of the
  • work I've told you about today would be possible without the many wonderful students and postdocs
  • who have passed through my lab over the past few years, and one of the wonderful things
  • about running a lab is that you then get to see people who have been in your lab go on to become
  • PIs and research leaders in their own right. The people on this slide, Dan Bang, Marion Rouault,
  • Marco Wittmann, Matan Mazor, Clara Colombatto and Nadine Dijkstra, are all now blazing trails
  • by setting up their own research groups in areas that I'm excited to say are either
  • within or adjacent to metacognitive neuroscience. Finally, I need to acknowledge a debt of gratitude
  • myself to mentors that I have had along the way. Starting all the way back as an undergraduate with
  • Paul Azzopardi, who in classes, as a third-year psychology student, first introduced me to the
  • science of metacognition. Then, going on as a PhD student, I was taken on for my PhD by Ray
  • Dolan and Chris Frith at the FIL, and they've been kind and supportive and wise mentors ever
  • since. Also my postdoc advisor, Nathaniel Daw at NYU, who showed me how to do proper computational
  • neuroscience in this area of metacognition research. Then, finally, I need to acknowledge,
  • of course, the generous support of funding agencies without whom this work wouldn't be
  • possible, and that includes the Royal Society, who together with Wellcome, supported my Henry
  • Dale Fellowship, which a few years ago now was instrumental in kick-starting the lab. So that's
  • it. I very much look forward to the discussion and questions, and thanks so much for your attention.
  • Stephen, thank you for a wonderful lecture. We now have time for a discussion. There
  • are two microphones in the hall, so if you have a question, please wait until
  • a microphone arrives with you. There will also be questions online as well, and we'll
  • deal with those a bit later on. So let's start over on the right-hand side, with the person with glasses. Yes?
  • Hello. Thank you very much. It was a fascinating talk. I'm a researcher in AI, so I have a lot of
  • interest in the last part. I'm really curious about where the training signal for the brain's
  • metacognitive circuit comes from, because, of course, for the world model, as you said, if
  • the glass appears twice as big, that's the error the model would use to adjust its own perception. For
  • metacognition, is the signal also just, was I right, was I wrong, and if so, how does that very
  • complex machinery of uncertainty estimation arise from such a simple signal? Thank you.
  • Yes, thanks so much for the question. I mean, it's a rich question because the
  • details of that answer would depend on the kind of task we're trying to model, but in general,
  • people have thought about two key inputs, if you like, to this metacognitive machinery. One is
  • uncertainty over some world-directed quantity, but that can also be uncertainty over some variable
  • you're holding in memory, for instance. That's important, because how uncertain we are about the
  • information itself is going to constrain how good we can be about decisions about that information.
  • So knowing that, and tracking that uncertainty itself is important, and there are people working
  • on how you, for instance, read out the uncertainty of a neural population and use that to inform
  • metacognitive judgements. Another key aspect, as I picked up on in the first part of the talk,
  • is monitoring your own actions, and that's really exposed by these error-correction studies,
  • where actually what you need to do - the task, for instance, saying whether the number is higher
  • or lower than five is simple. If you had all the time in the world, you would never make errors,
  • but when you're put under time pressure, it's almost like you're
  • watching yourself make the wrong response, right? So your action that you're making in the world is
  • then used as input, and so when you have those two pieces, that can then be used as input to
  • this extra stage of processing. I think the first one is going to be really difficult for artificial
  • systems to do, when you have a really high dimensional space of what you're representing.
  • So there's a question behind you. So if you could pass the microphone back to...
  • Hello. Thank you very much. My question is not going to be as complex as that. If we're
  • thinking about applying it and its practical applications, we know that some views have more
  • authority than others. Have you thought about the extent to which information, depending on
  • if you've told them where it's come from, will be used to update the decision? Does that make sense?
  • It does, yes. So you're asking about the trust we might have of the source of information.
  • Yes, exactly. Thank you.
  • We haven't. Although, with a student in the lab who's sitting down at the front here, we've been
  • talking about running exactly that experiment. So using this post-decision processing setup,
  • but then introducing an additional set of conditions which vary how trustworthy that
  • new information is. The prediction from the model would be that you would somehow be able
  • to track that extra source of uncertainty, that's now not coming from the stimulus,
  • but it's coming from your inference about the source of that stimulus,
  • and if you can do that, then you should down-weight its influence on updating
  • your metacognitive judgement. Absolutely, but we haven't actually run that experiment yet.
  • A question very close to the microphone just there.
  • Thank you so much. That was absolutely fascinating. I recently became aware of
  • a condition called aphantasia, which is when you don't have a visual memory. So I was really
  • interested in how visual memory might play into the reality monitoring system that you describe,
  • and what does that mean more broadly for the role of memory accuracy and recall
  • in that ability to distinguish between what's false and real?
  • Yes, that's very interesting. So I think one thing to point out is that when we're
  • talking about reality monitoring here, we're using the visual system as a model,
  • but we think many of these principles should hold across other perceptual modalities. There's also a
  • body of work that's looking at this in the case of memory. So in the example I gave towards the start
  • of the talk of false memory, people have looked at whether you're able to have this additional
  • stage of processing where you might be able to realise that a memory you hold is indeed false,
  • or perhaps the source of that memory was different to what you realised what it was. But in the case
  • of the visual system, the model here would suggest that yes, in order to be able to do
  • this reality monitoring in the first place, you do need to have a representational system. You
  • need to have a system that's representing, or trying to infer, the outside world. Yes,
  • so it'd be very interesting to look at reality monitoring in neurological conditions such as
  • aphantasia, and Nadine might know more than I do whether that's already been done.
  • Question about halfway back in the middle here.
  • Hello. About a week ago I was sitting in the classroom and I wasn't understanding what was
  • being said, and I felt very uncomfortable, and I began to fall back in my chair and I ended up flat
  • on the ground. Then I woke up. It was a dream, but to me, at the time, it was utterly real. Of
  • course, it took me a minute or two, and I realised it harked back to when I was at school and I
  • didn't understand what was going on. Of course I didn't fall back on the carpet, but in the dream I
  • did and it was utterly real. Should I worry about my cognitive, my metacognitive...? Am I a danger?
  • So, no, thanks to this question. Dreaming is a fascinating case, because those are cases
  • where we are having often quite vivid perceptual experiences, but without much insight into the
  • fact that they are illusory. Now, there are cases of lucid dreaming where you gain this extra layer
  • of awareness that, okay, I'm in a dream now, but that's usually the exception rather than the rule.
  • So what you're describing there is that you've, essentially, had a failure of reality monitoring,
  • but it happens to us all when we dream. One explanation for that is that, actually, these
  • prefrontal systems that I showed you in the second half of the talk, during dreaming seem to get
  • suppressed, seem to get inhibited. So if reality monitoring goes offline, then the experiences
  • we are internally generating might simply be treated, as a matter of fact, as vivid and real.
  • One here, and then we'll go further back.
  • Thanks, Steve. That was brilliant. Actually, I'm following up quite smoothly from the previous
  • question, which is, these lovely experiments of judging whether the Gabor patch was there or not,
  • beautifully illustrate the combination of what you're expecting in the stimulus,
  • but I just wonder whether reality monitoring is carrying a bit more than those experiments
  • can hold. So the question is, how general is this? I mean, dreaming, yes, it could be that
  • the prefrontal things are off, so everything is real, but there are many other variations
  • where experientially this distinction seems to manifest. So various kinds of hallucinations,
  • schizophrenia versus Parkinson's versus Charles Bonnet syndrome, different kinds of synaesthesia
  • where people experience their synaesthetic colours as out there versus in here. Do
  • you all think there's this core principle going on, or do we need to actually think about this,
  • that there may be many different phenomena that contribute, but it's not just that mechanism?
  • Thank you. I think that, in general, as I was briefly mentioning in the summary slide, the
  • work that Nadine has been leading over the past few years is really pinpointing one particular
  • and quite well-circumscribed mechanism that contributes to these judgements about whether a
  • stimulus is really out there or whether it's just in our imagination. That seems to be the strength
  • or reliability of these sensory signals, but clearly, that's not the entire explanation of how
  • you might make a judgement in more naturalistic settings here. So one feature of imagination
  • is that it usually moves around. We have control over it. I can imagine the elephant to be over
  • here rather than over here. So if I gain control over it in that way, I might use the
  • fact I'm controlling it as a signal to say, okay, that's probably me rather than the outside world.
  • I think maybe to speak to the broader aspects of these phenomena, one thing
  • we were intrigued about in the imaging data here, is that even though you're monitoring
  • relatively low-level stimuli that we think are mainly represented in early areas of
  • visual cortex, the signals that are doing the work for predicting whether something is
  • real or imagined, are actually in higher-level areas of the visual cortex. So one speculative
  • interpretation of that is that the signal that matters is not about the individual stimulus,
  • but it's somehow about the broader perceptual model. So at what level of the system are we
  • monitoring the precision or confidence we have in order to make these judgements? I think
  • that's an open question. For now we've just looked at this relatively low-level stimulus.
  • I think we've got time for two more questions. So one there and one right at the back there.
  • So thanks for the amazing talk. Regarding the first part,
  • I was finding the connection between uncertainty and metacognition to be a little reductionist,
  • in the sense that any system that can do Bayesian inference would come with its
  • own uncertainty quantification. You would not need an additional mechanism to be implemented
  • for that rather than just plain inference. So I was thinking, what is the connection,
  • what is the cue that makes you really think that confidence estimation is somehow a good
  • representative of metacognition? This is the first question. Second, for the second part,
  • with this Bayesian inference kind of perspective, I would actually say that
  • it's not the vividness. Maybe it's not how vivid the imagined experience is, but the question may lie
  • in the probability that what I generate aligns with the world, right? So I
  • have an observation, I have a likelihood term, and then, as a top-down inference machinery,
  • my brain generates some kind of scene. How likely that scene aligns with the observation, I would
  • say that is the probability of that confidence that is what I'm imagining is actually real.
  • Okay, so there's a lot there, and thank you for the question. The first part,
  • I think we're in agreement that one starting point for thinking about models of how this
  • works are Bayesian models. Those are the kind of models that I was briefly describing on some of
  • the slides here. The question I think is whether having a Bayesian inference is a simple thing to
  • explain for a brain that is flexibly engaged in acting in the world. I don't think it is
  • a simple thing to explain, because you need to take into account high dimensional sensory data,
  • you need to take into account all possible actions that we could perform. It seems like
  • one way the brain has started to try solving that is by building this hierarchy, which extends into
  • prefrontal areas that might contain higher levels of this hierarchical Bayesian model, if that's one
  • way you want to think about it, that are doing this more metacognitive level of inference.
  • So I think you can have Bayes as a starting point, but I don't think it gets you an explanation of
  • how a real system is working. Second part, I think it's useful to distinguish these implicit
  • priors that affect how we see the world, and that was the kind of thing I was talking about at the
  • very start with the rotating face. From the fact we can internally generate a conscious experience,
  • a conscious experience of imagining, and we don't usually do that. We don't usually
  • engage in internal imagination when we're doing inference on the world using these more implicit
  • priors. So we still have this problem, if you like. If I just close my eyes and imagine
  • something, I'm activating similar representations as when I'm perceiving something for real. So
  • there's still a problem that arises in trying to distinguish those two situations, if you like.
  • So final question at the far back.
  • Thank you. I believe my question will be simpler. Does your research presuppose that the subjects
  • seek the truth, that they will use additional stimuli, additional information to move towards
  • something that is more accurate, objectively, and how do you filter out the fact that some
  • subjects may want to stick to their original hypothesis and not modify it with new information?
  • Thank you. I mean, so in these more minimal tasks, then using the new information is helpful,
  • because it is helping you get to the truth in terms of the objectively correct
  • generative model that the experimenter was using. That information is helping you,
  • but as a previous question suggested, that might not always be the case. You could play
  • with the trust that someone should have in that new information. I think also
  • an important thing your question is getting to is this notion that there might be other
  • motivations for beliefs that are not about the accuracy of the world, and absolutely, I think
  • those are major influences on how we form beliefs about the world and how we make decisions. One way
  • of thinking about that is, again, you can start developing models of that. One way of thinking
  • about it is that certain beliefs we hold might not be scientifically accurate,
  • but they might be helpful, they might be useful for us, because they help us socially integrate.
  • Holding a particular belief about a political party, or some belief about how the world works,
  • about vaccinations, etc., might help you integrate with your social group, even though that's not a
  • scientifically accurate belief to hold. So I think these are really interesting
  • issues to probe, about what are the normative inputs into belief formation. I absolutely agree,
  • they're not always going to be about perceptual or scientific accuracy.
  • Well, unfortunately, I think we have to close at this point. It's been a tremendously stimulating
  • lecture, and I'm sure the discussion could have gone on for much longer. It remains to me now
  • to present Professor Stephen Fleming with the Francis Crick Medal and Lecture for 2024,
  • for tackling foundational questions about the neurobiology of conscious experience,
  • and advancing our understanding of the neural and computational basis
  • of metacognition. Stephen, thank you very much and congratulations.
  • Thank you, David. Thank you.
  • Thank you very much. Okay. So I think now the photographer would...

Join us for the Royal Society Francis Crick Prize Lecture given by 2024 winner Professor Stephen Fleming.

The human brain has a remarkable ability to monitor and evaluate its own thinking, known as metacognition. Metacognition is crucial to success, enabling us to recognise gaps in our knowledge and collaborate effectively. Problems with metacognition are linked to maladaptive behaviours, such as endorsing false beliefs or being unaware of our own limitations. Professor Stephen Fleming will discuss how his group is developing the tools to isolate how this extraordinary capacity for self-reflection and self-awareness is supported by the functions of the human brain. By combining mathematical models of human behaviour with cutting-edge brain imaging techniques, the team is discovering the building blocks of metacognition, and asking how these pieces come together to support a rich awareness of our own skills and capabilities. This work is uncovering the neurobiology of a core aspect of what makes us human, with wide-ranging implications for mental health, education and AI.


About the Royal Society
The Royal Society is a Fellowship of many of the world's most eminent scientists and is the oldest scientific academy in continuous existence.
