Can AI help us think for ourselves? | 91TV
Transcript
- Yvonne Rogers: Thank you very much for that introduction and that welcome. It's a pleasure
- and a joy to see so many faces out there, not just my colleagues, but my friends, my bridge
- players! I know my family are watching online. So I'm really pleased that you've managed to come
- out in the rain, or you're tucked up somewhere at home. So I would like to start my lecture with a
- picture here of the great Robin Milner, and I had the pleasure of meeting him a few years before he
- died at a workshop that we were both attending on ubiquitous computing. As you can see, Robin had
- a twinkle in his eye, and he was very endearing and very approachable. Even though our areas
- of computer science are worlds apart, as you heard from Marta and Abby, his work being in theoretical
- computing and mine in the human aspects, he was interested in how our minds could meet,
- and to think about ubiquitous computing from the user experience, but also from the theoretical
- aspects, and the design. So we had a number of conversations about this, and so I'm very lucky
- to have met him. In honour of Robin I'm wearing an orange shirt like he is! So my lecture today,
- I'm going to do it in three parts, first I'm going to talk a bit about the early stages of my career,
- where I very much helped to inform the field of ubiquitous computing, and in particular looking
- at how we can inspire children to learn. Then I'm going to talk about some research that I've been
- doing in the last two or three years, where we've been trying to design software to help people
- think more systematically. Finally I'm going to finish by looking into the future about how
- we might develop new AI tools to facilitate creativity. So at the start of my career,
- I was very much interested in how we could design new technology for children, and this is what we
- were confronted with, rows of children sitting in front of PCs, by themselves, following tasks,
- and trying to get something finished, it was very dull and drill based. I thought,
- surely we can do better than that, there's this amazing technology that we're just discovering,
- that's taking off the desktop. This is where I got involved in an area called technology enhanced
- learning. What we tried to do was to think about how we might move computers out of the classroom,
- and into the wild, and the reason for this is that kids get excited when they're outdoors. We
- wanted to encourage them to be more self-initiated in their thinking, to talk to each other,
- and to think about scientific enquiry in a much more engaged way. Also to inspire
- curiosity. So one of the projects I’m most well-known for is called the Ambient Wood,
- which we did 20 years ago now, which was really a field trip with a difference. This was work that
- was done with partners on the Equator IRC project, a consortium of eight UK universities of like-minded
- researchers daring to think differently, one of whom, Tom Rodden, is in the audience here.
- Part of this project was that we worked with people from different disciplines, from design,
- from developmental psychology, from engineering, and from computer science. We suddenly
- felt like we were in a sweet shop, having been given all these new technologies
- to experiment with. So we designed all manner of innovative technologies to help children really
- get inspired by using technology to think. So we developed what were called probing devices,
- where they could collect readings from the environment on moisture and light. We also
- made our very own handheld devices, this was before mobile phones were around, by which
- the students could get feedback and information whilst they were walking around outdoors in the
- woodland. One of my colleagues, Dannielle Wilde, created this… She was at the RCA at the time,
- she created this periscope device down the bottom here, where the idea was that the children would
- come across it, and a video would be played in the woodland, rather than watching something in the
- classroom and then going out. So this was a David Attenborough video about the bluebell lifecycle,
- or something like that, and then they could see immediately, and that would whet their curiosity.
- So what we tried to do then, is to encourage learning through exploring and discovering,
- we didn't tell them explicitly what they had to do, we just gave them these tools that we'd built,
- and said, 'Go forth and experiment, and see what you can find.' One of the things we were
- experimenting with at the time, is what could we do with these new ubiquitous technologies? So we
- thought about allowing them to see the invisible, and the inaudible, whereby a physical action,
- this might be you walk past a flower, causes a digital event to occur. This device down here
- looks pretty weird, it was what was called an Ambient Horn, and it would honk, playing a sound, when
- the child carrying it went past something. So I'm just going to play you the sound that these two
- girls are listening to, and I want you to guess what it is. It's a butterfly drinking nectar!
- Now you know. Things that you take for granted, or you never really think about, we also had
- sounds for photosynthesis, what that might sound like. This was again to try and get them curious,
- to think about these things. We found, when we let the children, aged ten to 12, go out into
- this woodland, that much self-initiated learning took place. We got them to go out in pairs,
- so that they could talk to each other and collaborate. With this probing device, they
- probed everything, the air, the ground, the trees, and foliage, and what we discovered is kids being
- kids, they like to find the most extreme, the wettest, the lightest, the darkest. They of course
- tested different parts of their own body, to see whether they were the wettest, the lightest,
- and the darkest, and it was just a pleasure to see them enjoying that. An important part of being
- in pairs is that one of the children would go and probe, on the device, but then wouldn't have the
- reading immediately, they would then have to join the other child to have a look on the display,
- and talk about what they'd found. The displays, or visualisations, that we used were really simple,
- so they could see relative levels, and then that would trigger them to think where they might go
- and probe next. Here are two girls using the devices together, and you can see how one does
- the probing, and then the other reads out what the reading is, and then on the basis of that,
- they hypothesise where to go next, which would be even drier, or even wetter.
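As an aside for the technically minded, the mapping from raw probe readings to the simple relative displays described above can be sketched in a few lines of code. This is a toy illustration only, not the original Ambient Wood software; the band names and calibration range are invented:

```python
# Toy sketch (not the original Ambient Wood software): turn a raw moisture
# reading into one of a few coarse, child-friendly relative bands.

def relative_level(reading, low, high,
                   bands=("very dry", "dry", "damp", "wet", "very wet")):
    """Clamp the reading into the calibrated range, then pick a band."""
    clamped = max(low, min(high, reading))
    index = int((clamped - low) / (high - low) * (len(bands) - 1))
    return bands[index]

print(relative_level(12, low=0, high=100))   # a probe near the path
print(relative_level(87, low=0, high=100))   # a probe under the ferns
```

A display this coarse is deliberate: the children only need relative comparisons, wetter here than there, to decide where to probe next.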
- So we couldn't stop them from going around testing things, and thinking about why something was dark,
- and light, and whether something was even darker. Little did they know that
- once they'd been experimenting and exploring the woodland, they had to come back to the classroom,
- but the classroom wasn't back at school, we made a pop-up classroom, and you can see it
- on the left here, this rather, yes, stripy looking tent. We got the pairs of children to come back,
- and share their experiences, and what they didn't realise is that every reading that they'd taken,
- every probe, we recorded, and then one of our software designers represented those on
- a bird's eye view of the woodland, and where they'd been. They could click on these dots,
- and the reading that they'd taken would show, and it would show the relative light or moisture
- level. They were absolutely fascinated by this, and were trying to predict which of the readings
- that they'd taken were the moisture levels, or the light levels. This triggered
- a natural conversation between the children, and they were comparing the different places
- in which they had been exploring. This led them to hypothesise about the ecosystems in the woodland,
- for example which plants grew in the wetter areas, and why, and what creatures and insects thrived in
- different parts of the woodland. That just shows you how different that type of learning is,
- compared to what we were up against in the classroom. So I think I can be confident
- enough to say that at the time, we pioneered a new way of designing technology, daring children to
- think differently, and it was hard work, as you can see here after a hard day's work. It brought us
- all together. Also we brought indoor and outdoor learning together in a novel way. The children
- talked freely and excitedly with others, not just whilst they were there in the Ambient Wood,
- but also on the bus back. Believe it or not, ten or 15 years later, we came across some of these children,
- by then nice young adults, and they remembered their day in the Ambient Wood as one of the best days
- at school. They loved doing the scientific enquiry like this in the wild. They were also fascinated
- by the underlying technology, so at the time this was a woodland of one of my colleagues, where his
- wife used to do yoga, and so we had to wire it up, literally, and make our own Wi-Fi with aerials,
- and put laptops in trees for this, and they would go around just to try and find that technology as
- well. To try and work out how it was possible that things were pinging, and noises were being made.
- So that got them interested in the ubiquitous technology, as well as trying to understand more
- about the ecosystems. So Marta's already said how I contributed to the field of ubiquitous
- computing, and doing this type of work led me to think that the field of ubiquitous computing
- should be much more exciting, provocative, stimulating, and engaging. Many people thought it
- should be following on from Mark Weiser's view, which is that it should make our lives efficient,
- calm, and easier by doing things on our behalf. The trouble with that is that we just get lazy,
- we just expect the technology to do it. I think really technology's there to exercise our minds.
- In particular we should be designing engaging user experiences. So for most of my career that's
- what I've been doing, is trying to think about the different technologies that are out there,
- to encourage us to be more active, be more reflective, in our learning,
- but also in our living, and work. To facilitate creativity. So that's part one of my talk,
- and how it's inspired me throughout. Twenty years on, since the beginning of ubiquitous computing,
- there's been a lot of technology that's been around and developed, that we can experiment with.
- So we've had PCs for a long time, but now we've got tablets, and various mobile devices, we've
- got what's called the Internet of Things, which basically is putting sensors into the environment,
- into objects, and connecting them to the internet, so that they can talk to each other. We've also
- got what are called tangibles, and physical computing, where the computation is in some
- artefact, and this allows us to think, what do we want to do with the digital world, in respect to
- the objects that are out there. Then there's augmented reality, and virtual reality, and
- wearables, and speech interfaces, and robots, and chatbots, and of course artificial intelligence,
- and machine learning. The question is, which of these technologies do we design for and how?
- This gets me onto thinking about thinking: there are many kinds of thinking that we can use those
- different technologies to augment. So we're all involved in different aspects throughout
- our lives, whether it's planning, deciding what to do, choosing between alternatives,
- reasoning about things, making sense, reflecting on what's happening, contemplating, and solving
- problems. So how the hell do we match this up? How do we know which of these various technologies
- to use to support these? We could possibly use PCs or tablets for supporting problem solving, but
- then again we might want to use them for planning. We could use artificial intelligence to support
- decision making, we could use augmented reality to support reflecting, we could use tangibles to
- support planning and so on and so forth. There really isn't any systematic research or guidance
- out there as to how to make those decisions. What we do in human computer interaction is a bit of
- trial and error, and a bit of experimentation, but also we go to theory, particularly in psychology,
- to inspire us. One theory I'm just going to mention, though there are many theories that
- I have looked at and been inspired by to think about how you design technologies, is Daniel Kahneman's
- book Thinking, Fast and Slow. How many of you have bought this book? I'm not going to ask if
- you've read it! I'd say 90 per cent of you. It is a bestseller, and it's a really great title.
- Basically in the book he argues that there are two types of thinking systems, one which is intuitive,
- fast, it's no effort, it's instinctive, automatic. The other one is more effortful, it's slower,
- it's more orderly, and deliberate. He argues that system one is what routinely guides our
- thoughts and actions, and is often right, but is prone to making errors, and particularly
- those of judgement in decision making. Whereas system two is meant to be the voice of reason,
- and that he argues that we should employ this more when we detect a bias in our thinking.
- Now, it's rather an oversimplification of how thinking works, but when you're modelling
- you do try and simplify, so we can think of these two systems as alternating,
- with our thinking sometimes somewhere in-between. It's a useful heuristic, a usable theory
- for us in human computer interaction, to think about how we could stop or reduce people's
- bad thinking, or their biased thinking, and how we might promote what's called system two thinking.
- Having been inspired by reading this theory, what we do next is develop our own concepts
- to inform the design of the technology. So this is where, in collaboration with
- my students, in particular Leon Reicherts (are you here, Leon? Yes, you're there), I've come up with
- a notion of scaffolded thinking. The idea here, just like scaffolding, is that you somehow use
- the technology to guide people and maybe stop them, and slow them down, to reflect more on
- their decisions. I'm going to give a case study where we think we can design technology to support
- scaffolded thinking. Then the second one I'm going to talk about is what I'm calling integrated
- thinking. This is designing technology to help people to externalise their thoughts, to be more
- systematic, when problem solving. So the first one, scaffolded thinking, and we think that we can
- use this concept to help us design technology to support people who invest in stocks. I don't know
- about you, or what you did during the pandemic, but apparently there was an astronomical uptake of
- trading apps. Do any of you dabble in trading apps? There's a few of you who admit to it. Apparently
- over 130 million people have used them in 2021, and the most popular is Robinhood. These are
- designed for the novice person who doesn't have much expertise. Many people new to investing have
- made costly mistakes, and what we wanted to do is to think about how we might design technology to
- slow down their thinking, and to help them, so that they don't make these mistakes.
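The idea of slowing a trader's thinking down can be made concrete with a small sketch. This is purely hypothetical code, not the system described in this lecture: it interrupts an order with reflective questions, modelled on the ones expert traders ask themselves, and records the answers alongside it:

```python
# Hypothetical sketch of a "slow down and reflect" intervention: before an
# order goes through, ask reflective questions and attach the answers to it.

REFLECTIVE_PROMPTS = [
    "How did you come to this conclusion?",
    "Has your investment hypothesis changed? If so, what changed it?",
    "Overall, is this the best alternative available to you?",
]

def confirm_trade(order, answer_fn, prompts=REFLECTIVE_PROMPTS):
    """Collect an answer to each prompt, then return the annotated order."""
    reflections = [(q, answer_fn(q)) for q in prompts]
    return dict(order, reflections=reflections)

# In a real interface answer_fn would prompt the user; here it is canned.
placed = confirm_trade({"ticker": "ABC", "action": "sell", "qty": 10},
                       answer_fn=lambda q: "recent news")
```

The point is not the answers themselves: simply having to produce them forces a pause before the impulsive sell.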
- So what happens when you've just invested in a stock, and it goes up, and then it goes stable,
- and then you see this on your phone. You panic, you get really emotional, you get all sweaty, the
- palms of your hands go like this, you don't know what to do, and you think well if I just leave
- it like that I'm going to lose all my money. So you sell, but you often sell too much, too fast,
- and then you regret it later. The problem with novice traders is they don't have a good strategy
- to deal with this situation. This is where we come in, is to think how can we help novice traders to
- learn to think more methodically. This is one of my PhD students here, Ava, who volunteered
- to be a model! As you can see, you get the stress, and it's really difficult to think under stress.
- Professional traders develop the voice of reason, and they will have a set of questions and criteria
- they use before finalising their trading decisions, and unless they've had a couple of
- glasses of wine, they will think through whether this is the best thing to do. Beginner traders don't
- have this, and they make rash decisions. So we wanted to help them to become more
- experienced and expert, by thinking through having this voice of reason, and to scaffold it,
- so that it stops them acting impulsively. Also to think about what they're doing, and why,
- and if you see over here to the right, this is the interior monologue, or the questions, that we'd
- like them to be thinking through, in the way many expert traders do. So, rather
- than 'I need to act fast', it's 'How did I come to this conclusion?', 'Have you considered criterion X?',
- 'Overall this seems to be the best alternative.' So you have this conversation with yourself, and then
- you can decide whether it's good to stick, or to sell. So we decided to choose a chatbot for our
- technology intervention, and for those of you who are not familiar with chatbots, though I suspect most
- of you are: if you go onto British Gas, or any of those banking sites, they now all have chatbots.
- It's essentially a virtual agent that a person has a conversation with, so it can be for customer
- services, marketing, sales, or travel, and this one is travel, where the user types a question in
- on the right, and the chatbot will answer, and then the user will ask another question, or answer
- it. So it's having a simple conversation. We designed our chatbot in this context. The idea here was to
- probe traders about their intentions, and to help externalise their hunches that aren't necessarily
- well thought through. So our chatbot is called a ProberBot, because essentially it's probing
- the user, asking them questions. We designed it such that it would be embedded in the software,
- so that as the user is looking at how well their stocks are doing, and whether to sell or to buy,
- the ProberBot will pop up at an opportune time and ask them questions. Here it says, 'If your
- investment hypothesis has changed, what made you change it?' The user types in the blue box,
- 'Recent news.' So I'm going to show you how it works in action. We developed a
- software simulation for trading, and as you can see here, these on the left are the stock lists
- that the person has, and on the right it shows the information, and then this will pop up if
- they want to trade, and that's the point when the ProberBot pops up and says, 'What's your
- investment hypothesis?' This is to slow them down and get them to think, and they'll type in an
- answer, and that may be enough for them to think about is this what I wanted to do. So that's how
- it works. Just to recap the design rationale: it pops up at key moments, when the user's about
- to make a trade, it promotes short conversations with the user to stop, think and reflect. It's
- embedded in the trading tool so that it dovetails the task execution and the thinking. So how
- effective is our chatbot? Well my student, Leon Reicherts, ran a pilot study with six traders,
- and presented three scenarios to them, where they had to make investment decisions, whether
- to buy or to sell a stock. Then we used the HCI method of think aloud, and in-depth interviews,
- I'm not going to go into the details, there is a paper out there if you're interested. The idea was:
- would it make them stop and think? From the think aloud, this was very much the case,
- it did, though very occasionally it was annoying for it to pop up, and that's something that you have
- to design against, so that it doesn't become Clippy, popping up too much. It encouraged reflection
- on decision making by helping in the moment, when it matters. Also they said that it would
- help reduce impulsive actions. So this suggests that this approach, by having this type of chatbot
- appear, can make an investor's thinking more systematic. Now I've talked about novice traders,
- what about expert traders, who have all too much information, too much knowledge,
- and can be tempted to be naughty? This is the second case study that I want to present:
- financial institutions are responsible for detecting this naughtiness, which is essentially
- market abuse in trading, where someone who's got confidential information uses it to their
- advantage, so here's one that was in the news recently, where an investor accuses Rocket's Dan
- Gilbert of insider trading, claiming that they had pocketed 500 million. A lot of this happens. As
- I said financial institutions try and stop it, or try and detect it. They employ compliance officers
- to do this, and their job is to detect it by conducting investigations, and
- curating and collating data from several sources, by which to build up a picture and see whether it's
- true. It's an awful lot of work that's involved in being a compliance officer. This is a hierarchical
- task analysis, which I'm not going to go through the different steps, I'm just going to show you
- that there are many steps involved in doing this. There's a huge amount of cognitive work, much
- multitasking, they have to scan through thousands of alerts, sift through millions of emails, and
- check lots of news feeds, and there's huge demands on their attention, constantly having to switch
- between various resources. Much of it is done inside the expert's head, and occasionally they
- might jot down their notes, and their thoughts. What if they were given a new software toolkit,
- that could help them with this work, and support more integrated thinking? This is where I was
- working with a behavioural science team at Nasdaq a couple of years ago, Wendy Jephson who I think
- is in the audience somewhere, and Anna Leslie; they have both left now, and are co-founders of a
- start-up called, Let's Think. What they did was to think about how you can develop what's called an
- investigative canvas, which is a set of software tools, where disparate information can be brought
- together in one place. Rather than having to go in and out of all of these different software tools,
- to have them there side-by-side, and to help them to make and discover new connections. So there
- were lots of tools that they came up with, with the canvas in the middle, surrounded by the alerter,
- the network, the historian, the checker, and so on. The way in which this was designed is that the
- compliance officer can decide which tools to bring together, so they start off with a blank canvas,
- called the investigative canvas. Then at the top there is the case builder, where they can start
- to build up their case, they can populate it with information that they've found, with potential
- alerts, then they might want to bring up what's called the people profile, and this, by the way,
- is an early design. Here you can see what's going on between two people: is there any strange
- communication, or lots of communication that's happening. So they can start to see there.
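The shape of such a canvas can be suggested with a minimal data model. This is an illustrative sketch only, not Nasdaq's or Let's Think's actual design, and all the names in it are invented:

```python
# Minimal sketch of the composable-canvas idea: a blank canvas the officer
# fills with tools, a scratchpad for fragments, and a case being built up.

class Canvas:
    def __init__(self):
        self.tools = []        # e.g. "people profile", "trading information"
        self.scratchpad = []   # fragments pulled across from any tool
        self.case = []         # evidence accumulated in the case builder

    def add_tool(self, name):
        self.tools.append(name)

    def note(self, fragment):
        self.scratchpad.append(fragment)

    def promote(self, fragment):
        """Move a fragment from the scratchpad into the case builder."""
        self.scratchpad.remove(fragment)
        self.case.append(fragment)

canvas = Canvas()
canvas.add_tool("people profile")
canvas.note("unusual call volume before the trade")
canvas.promote("unusual call volume before the trade")
```

The design choice worth noticing is that everything lives in one shared structure, which is what lets findings be moved around, compared side by side, and picked up again later.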
- Then they might want to build up further, and add another tool; this tool here is trading information,
- which will be very useful for them when building up this picture. Over here is what's called a scratchpad,
- where you can bring information from the different tools, and have it in place ready to add to the
- case builder. So I've given you an example of a few of the tools, but there are many others that
- were being developed. How effective is this approach? Well I think our initial evaluation
- showed how it could be used by compliance officers, to externalise their thoughts,
- but also project their sometimes random internal thoughts, and make them more systematic. Also
- they could share it with other team members, rather than just jotting it down on a notebook,
- enabling them to understand each other's lines of thinking, and maybe collaborate. One of my PhD
- students described it as a whiteboard on steroids. Because it's allowing you to do much more, you can
- discover new connections through having this set of side-by-side tools, and you can move the pieces
- of information, and all the tools, around, a bit like you do in Scrabble, and test hypotheses that come
- to mind. Also it maybe enables you to think about something you may not have otherwise. If you get distracted,
- you can pick up where you left off, because it's out there. So how generalisable is this
- approach? Well we've been noticing how others are now developing what are called orchestration
- platforms, where they take siloed data from multiple storage locations, and organise it in
- a way that data analysts have it ready to hand. So there's a lot of interest in this new approach to
- thinking about integrated thinking. The start-up company, which I am part of, as the CTO, is called
- letsthink.com, and if any of you are interested, or have areas where you think this approach
- would be useful, do get in touch. We're trying to develop other kinds of canvas tools in education,
- in finance, and our strapline is, Enabling People To Think Brilliantly. So I want to recap,
- I've talked through two approaches by which we've used theory, and HCI, to think about designing
- tools to empower us. I've shown how we can try to slow down people's thinking,
- and we can scaffold and integrate their thoughts, we can externalise their cognition, and Marta
- mentioned one of the theories I've developed, called external cognition, which I
- won't be going through today. That's been one of my contributions: to think about how we can do
- that, and what the design principles can be. We can also see new connections, and it can help us
- reason and reflect, and in some cases reduce the biases, in our decision making. Also as we saw
- there's the potential for supporting multitasking. So matching technology type, to thinking type,
- still is very much an artform, rather than a science. In some ways it doesn't really matter;
- I think the key thing is the theory that you use to inform which of these to choose, and why. Just to
- recap, we turned that theory, and it can be from psychology, it can be from behavioural economics,
- into concepts to inform the design of an interface. So I have come up with numerous
- design principles throughout my career, and one I'm just going to mention is dynalinking, which
- is where you link representations on different displays, so that if you make a change to this
- one, it's reflected, or it's changed in that one. So that you can see by looking around, what
- the effects are of making a change, it might be a simulation, or you might be building something up.
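Under the hood, dynalinking is essentially the observer pattern: one shared value, several attached views, and every change propagated to all of them. Here is a bare-bones sketch, illustrative only, with invented names:

```python
# Bare-bones sketch of dynalinking: two linked views of one value, so a
# change made anywhere is immediately reflected in every display.

class LinkedValue:
    def __init__(self, value):
        self._value = value
        self._views = []

    def attach(self, view):
        self._views.append(view)
        view.refresh(self._value)    # show the current value straight away

    def set(self, value):
        self._value = value
        for view in self._views:     # propagate the change to every display
            view.refresh(value)

class View:
    def __init__(self, label):
        self.label, self.shown = label, None

    def refresh(self, value):
        self.shown = value

moisture = LinkedValue(40)
chart, map_overlay = View("chart"), View("map")
moisture.attach(chart)
moisture.attach(map_overlay)
moisture.set(75)   # both displays now show 75
```

Because the views never talk to each other directly, new representations can be attached without touching the existing ones.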
- This type of dynalinking is really important when you're thinking about designing complex
- interfaces. We also look at all sorts of questions about the specifics of the interface, so should we
- use voice, or text, what type of conversation, should it be open, what type of feedback, where
- to place information on the display. What kind of interactivity, and how much. So there are lots of
- questions that automatically come to us when we start thinking about new areas by which to design
- these technologies. I'm going to finish though with thinking about the future. We've heard about
- how Abby has changed her mind about how AI can make a difference, and is very useful. I believe
- very strongly that there's a huge potential, and we're just seeing it, in AI changing how we think,
- I'm not going to go into the discussion about whether it's going to take over our jobs,
- I'll leave that for someone else. I think it can support creative thinking, and in particular in
- the arts and design. So what is creative thinking? Well it involves us looking at things differently,
- finding novel designs, and solutions. Essentially, in a nutshell, it's making something
- new, so it could be a poem, could be a picture, could be a design, could be a piece of music,
- a recipe, a dance, an app. Some of us find it quite difficult to be creative. So wouldn't it be
- wonderful if we could design AI tools to help us, to discover new ways of being creative, and it is
- happening; this summer I was amazed at just how many people were talking about these new AI
- tools. I suspect some of you out there have tried them. They have emerged to support creativity,
- and in particular OpenAI have developed tools based on a process known as diffusion, which turns text into images,
- so the user types in some words, for those of you who haven't tried it, into the box here,
- and then the AI tool generates images to match them. So Dall-E 2 is perhaps the best known one,
- but there are others called Craiyon, Midjourney and Stable Diffusion. So here you can see my
- first attempt was I typed in, 'Blue sky, sloths, rainforest, melting clocks,' and that's what it
- came back with, I could never have come up with that. If you want to try one of these tools,
- there's a big waiting list for many of them, but this one you can get onto straightaway, called
- Craiyon. It says, 'What do you want to see?' and I first typed in, 'Cat sat on a mat,' thinking, you
- can put any sentence in there, and this is what it came back with. They're quite cute, some of them
- have got a bit squiffy eyes, and one of them looks like it's had its nose in the jam, but they're all
- sitting on a mat, and they look like they are thinking, so it's quite clever at how it does
- that matching up. Perhaps the one that's the most advanced, though, is Dall-E 2. I first typed in, 'A
- modern painting of a professor giving a lecture, and being nervous,' and it came back with four
- male professors, so I thought, that's not good! So I typed in, 'A modern painting of Yvonne Rogers,
- giving a lecture, and being nervous.' This is what it came back with, they don't really look like me,
- but well at least I don't think they look like me, but it certainly does look like someone who's
- giving a lecture. The one on the right looks like they're quite angry, rather than nervous. It's
- like you then want to write another sentence, and you can't stop yourself using these. There's mass
- appeal: whoever I talk to is just really excited by these, whether they're artists,
- computer scientists, architects, designers, and the general public, have all gone crazy using
- them, why? It's good to have a look at what one of the engineers who developed this, Aditya Ramesh, said:
- 'Dall-E is a very useful assistant that amplifies what a person can normally do, but it is really
- dependent on the creativity of the person using it,' and I couldn't agree more, and I've highlighted in
- red this word 'amplifies'. It's not replacing, it's getting you to think again: well, what can
- I write now, could it come up with something else? Some people might say, well is this creativity? I
- would argue, yes, so when I typed in, 'Is Dall-E too creative?' it came back with this lovely
- design there, on the left, but each time you write a sentence or a few words, the AI tool makes us
- think of a new idea. It enables us to dare to think differently. Some say, 'Well is it really
- an artform?' and I was having a discussion in my lab this week, and we were saying,
- 'Well, just as photography became a new form of creativity, so will this new breed of AI apps;
- they've only just started to come out, and in the next year or two we'll see many more.' Another debate,
- and it's not one for me to talk about here, but just to mention it, is it stealing the work of
- artists it uses in its training data, and can we find a way of compensating or paying them?
- I want to finish really by saying that, in the future, successful AI tools will be those that
- help humans in their work. Just like the ProberBot which I talked about, the chatbot,
- the most effective AI tools will be those that are embedded in other software tools, so that
- you use them whilst you're doing your task, or your work. Just like the investigative canvas,
- I think the most powerful AI tools will be those that facilitate integrated thinking, enabling us
- to think and use more and more resources. I want to end, given that Microsoft are funding this,
- with a new Microsoft tool that I think is really exciting, and it's called Microsoft Designer.
- Unfortunately there is a waiting list to get this, but hopefully not too long for me to get my hands
- on. What it does, is it uses Dall-E 2, so you can type in here, with a description like, 'Kitten
- adoption day,' and it will come up with some designs, and then you can use those designs in
- whatever it is that you're creating. So it might be a website, it might be a poster, it might be a
- newsletter, it might be social media. Here again it's embedding the tool in what you are doing.
- You can start by just adding or removing content, so here they're creating a newsletter,
- and it's this idea that it makes it really easy for anyone to use, and opens up many possibilities
- by which to think about new designs. So I'm really excited by this tool, and I think there'll be many
- more that are coming out, that actually match what we as human beings are doing, rather than
- replacing us. So to conclude. I think there is a diversity of technology tools to think with,
- and I've just described a couple of them, and my field, human computer interaction has helped
- to design and shape those. The most empowering tools are those that are embedded in ongoing user
- tasks and activities, especially those with a canvas, that enable people to put things,
- move them around, and to discover, and to explore and investigate. Those that enable professionals,
- and the general public, to extend how they create work. I think the future is very much human AI
- thinking, rather than AI replacing thinking, and I've always thought that, and I think the best
- tools will be there to empower us, and to engage us, and to excite us. I'd like to thank Microsoft,
- the Royal Society, and the late Robin Milner, for this award, and also the many, many researchers
- who I've collaborated with, and I've only really mentioned those in the universities I've worked
- at, and on Equator. There are many others in other universities throughout the world,
- without them I wouldn't be here today. So thank you very much, if your name's not on here,
- and you're in the audience, and you think it should be, let me know! Thank you.
- F1: Thank you. What a fascinating lecture, we have time for questions,
- and I wanted to remind those watching us online,
- that you can ask questions on Slido. Do we have any questions? Yes, I can see Julie.
- Julie: Thank you, Yvonne, that was a marvellous talk,
- and it looks to me that probably your best work is ahead of you. So I have a question,
- when we were both starting out, I was with Weiser that computers should be hidden,
- and you were more that the computer should vie for your attention. Do you think the
- computer now does vie for our attention, and do you think that's a positive thing?
- Yvonne Rogers: That's a very good question, for those of you who didn't hear it, I worked
- with Julie on a project funded by Intel, and I was very much for making technology be visible,
- and engage us, and she was very much for the Mark Weiser view of making it invisible,
- and hidden. Now she's saying, has it not gone too far, it's taking too much of our attention.
- I would agree, I think there are some people who have got really addicted to using their mobile phones,
- far too much, and there are some very clever people out there who've designed some apps,
- and games, and so on and so forth, which are difficult to put down. I think the way to overcome
- that, like any addictive activities, whether it's taking drugs, or alcohol, or eating too much food,
- or all of the others, is we have to find ways to help people who find it really difficult.
- There are various software tools out there, or attempts to try and get people to stop,
- and sometimes they're quite blunt instruments, and I think there's a lot of opportunity to help
- people to try and wean themselves off, or simply to throw their phone away. Yes,
- I agree, I think I'm probably guilty myself sometimes of using it too much.
- F1: Okay, I'll go to someone online, you will be next, but yes,
- please think up some more questions. So this is a question from Warren Park,
- thank you very much for the great lecture, during the lecture you have shown two case studies,
- which make use of a chatbot and visualisation respectively, to make people think. What
- are your views on what kind of contexts each method could help people to think the best?
- Yvonne Rogers: Oh, that's a difficult one. I think chatbots can be used in a wide range of contexts,
- and for different activities, so I mentioned one or two types that are used in commercial domains,
- but also we were trying to think about how it can probe. They've been used for other applications,
- and contexts, for example there is a chatbot called Replika, which has been designed to help
- people in their wellbeing, and get them to think, and interact with it. So they've been used for
- different contexts. In terms of visualisations, again I think there are many application areas
- where you can use visualisations. We have seen that in data analysis, and some of the work
- I'm currently engaged in is thinking about what types of data visualisations might we create for
- lifelong narratives, for people, and how they can reflect on different aspects of their life. Not
- just coming up with graphs, but to think of what other kinds of visualisations might there be. So
- I think as I showed in the slides, there are no hard and fast rules as to which type of technology
- you should use for which type of activity. It's obviously easier and cheaper to design for a
- mobile phone, and one of my colleagues who can't be here tonight, wrote a book called There's Not a
- Mobile App for That, or something along those lines. Everyone just goes straight to designing apps,
- because it's easier to do. I actually think that we can design technologies,
- a whole range of technologies, rather than just going for one that's easier and cheaper.
- F1: Thank you. So can we now…
- Suyush: Hi, my name is [?Suyush 0:43:22:8], I'm a student at Goldsmiths College here,
- and I'm studying game design. I'm looking to make specially
- educational games, games that can teach kids about different subjects, and different topics.
- The slide that you showed about system one, system two thinking was really interesting,
- because in games a lot of it is about really fast reflexes, shooting, going around system one, and
- then with educational games you want to ask them some question or make them think or learn, which
- engages the system two brain. It makes the game less fun, and entertaining, and it makes it kind
- of boring. So how can one solve such a challenge, of mixing learning while having fun?
- Yvonne Rogers: I don't think system two is always boring, but maybe children see it as such. I think
- it's meant to be a metaphor, these system one and system two, and I think the key is to be able to
- alternate between those. So at certain points let them be fast, and just react, and at other times
- you might want to slow them down, so that they can be more strategic and get to think about,
- is this the best way to race, or whatever else you're wanting to do in the educational
- game. I think some early educational software was designed to try and combine different approaches,
- and different strategies. So use it just as a heuristic, rather than how can I get more
- system two. To think, well how can we alternate, they've been playing really fast for a long time,
- maybe it's a good time to give them an activity that will slow them down,
- or get a chatbot that will say, 'Is this the best thing, maybe you should think of doing this?' To
- encourage what's called metacognition, which is thinking about your thinking,
- rather than just constant reacting. So I think that would be my suggestion.
- F1: Okay, can we go to there, and I've seen, okay, Anthony, and there was one more.
- F2: Thank you so much, Yvonne, for a really amazing, eye-opening talk. I have
- a question about the chatbot, again the system one and system two thinking, I find that whole
- concept extremely interesting. So did you consider how to engage the users more with the chatbot,
- for example, that system one thinking is presumably when the person is very overwhelmed
- with their own emotions, so they're either angry, or very scared, or very sad, something really is
- happening. Then having something pop up in that moment, knowing that it's a
- robot, the person might not even consider engaging with the chatbot. So
- did you consider what the chatbot could be doing, like could the chatbot maybe affect the emotions of
- the person, maybe creating a shock scenario for example, where they show what could
- happen if they make this bad decision. Or maybe using trust, or just a question of what were
- the different ideas of how, or why, the person would engage with the chatbot in that moment?
- Yvonne Rogers: That's a really good question, and I think our research in this area has only just
- begun. First of all, we looked at how we could facilitate
- teams of clinicians working together, sense making with data that they didn't
- necessarily fully understand, and where they didn't know what was causing the different trends.
- So we designed our chatbots to trigger more conversation between the teams,
- and it was very much thinking about how you can get more conversations going.
- Whereas the next tranche of research was looking at individual users,
- and how you might get them to stop and think. So I think there's a huge scope for using chatbots,
- that model, but maybe understanding the types of human emotions, and tapping into them. The key thing
- is you need to find a sweet spot, because you can just annoy people, and they'll switch them off,
- if they become annoying or frustrating. So that's where doing some good user testing can come in
- to help: is this too much? Our first ProberBot perhaps was a bit too in the face of the trader,
- so we reduced the number of interventions and the length of the conversation. So the key is how long
- should the conversation be, so that they can get back to the task. If it's just that they want to
- explore their mental health, or their emotions, you might have a much longer conversation,
- which is what Replika does. So I think there's a huge scope there for doing much more in this area,
- and getting beyond the Q&A model that underpins much commercial use of chatbots.
- F1: I think there was a question
- from Anthony.
- Anthony: Thank you, Yvonne, for a really excellent, and informative lecture. I
- am now beginning to wonder whether this question follows neatly from the preceding
- question. So one of the ways we scaffold thinking, in society, is through debates,
- challenge, criticism, a quite scratchy way of engaging with people. So I'm interested
- in where that fits within a model, and how you can do that whilst remaining engaging?
- Yvonne Rogers: That's a tricky one, because even humans themselves can find it hard to
- be all of those things, particularly in marriages: understanding when it's best to say
- certain things, when it's good to be scratchy, and when it's good to be blunt and open-ended. I think
- there's a lot of scope for us moving, there's been a lot of work in AI and natural language
- processing for quite simple conversations, but I think in the last year or two there's been more
- understanding of the nature of conversations, the nature of these discussions that might go
- on. So I don't have an answer to that, other than more research into these things. Also
- for us to understand a bit better, what goes on in these types of discussions, and scratchiness
- as you call it. Do we have good understandings and theories about what goes on in human conversations
- of this nature, and if so can we borrow from that, and design these types of chatbots,
- and other interventions to improve them. Certainly some of these very large government meetings,
- it would be very powerful, and useful, to have maybe some of these chatbots to help!
- F1: Okay, looking at the time, I think one, I will take these last two questions, so let's go there.
- Edwin Gregory: Yeah, hi, Edwin Gregory, I'm a business leader,
- so I'm coming at it from that aspect. So really interesting presentation and
- lecture. I was just wondering though on your financial services example, it was about AI to
- slow down the thinking and scaffold the thinking, what about access to experts' opinion,
- and then what did that open up in terms of regulatory requirements as well. So
- I'd imagine that was quite a tricky subject but I'm really interested in your thoughts on that.
- Yvonne Rogers: I think there were two case studies, one is we were focusing very much on
- novice traders, where we were trying to get them to develop new strategies, and to think about the
- criteria. In the financial world, I'm not an expert on the regulatory matters there as to
- how… Sorry, I wasn't sure what the second part of your question was?
- Edwin Gregory: Yes, so a lot of the AI you were talking about was around decision process,
- part of the decision process is access to expert opinion. Now in financial services,
- expert opinion is regulated, so it opens up, I imagine it's a complete minefield,
- because what I would want to do as an investor, okay, what do the experts think about this,
- and how does that affect my decision making, how do I get access to it. Through the technology,
- I mean the regulation, it'd be quite stifling in terms of how to get access to that,
- so I just wondered what your thoughts were, and whether you came across that in your example?
- Yvonne Rogers: I think we steered clear of that, for those very reasons:
- if we have information that others, competitors, might find very useful, and we just give that
- out freely in our chatbot. We weren't trying to tap into that expertise,
- in order to let people interact with expert chatbots. It was more getting them to develop
- their own thinking strategies. That's a very different area of work, I think,
- and we steered clear of it, because it is a minefield!
- F1: I think we've got the last question there.
- F3: This is probably a broad question, but I was wondering how such designs of
- decision support tools might be applied to more time constrained settings like in healthcare,
- where they're making high stakes decisions, and they're already cognitively overloaded.
- They may not have the cognitive resources at their disposal to make such systematic, system
- two type of decisions. Yes, just coming from a PhD student studying decision making and DTs.
- Yvonne Rogers: I think artificial intelligence has come a long way towards helping people under those
- types of conditions, in decision making, and in particular in diagnosis. So they will
- continue to be developed, to help, but the key is not to let them, in my view, take over completely,
- but for people in these situations to know when to trust them, and when to use them, and what they
- can do themselves, or what they would like to do themselves. That I think is what we call human AI,
- which is very much one of the research areas that's happening at the moment: thinking about
- where AI can replace certain activities that are time consuming, or that can be unreliable,
- so that doctors and other clinicians can use those. Also to give them the new tools that
- can empower them to be creative, in ways which they couldn't. So I think there are two things,
- one is as you said, for people who are overworked, or how can we help them,
- but also those who are trying to think about the future of medicine, how can we
- help them with these new types of creativity tools, so I think there's a place for both.
- F1: Okay. So thank you for the very stimulating questions, and thanks,
- Yvonne, for the great lecture. Now I think I have the pleasure of actually giving the award.
- Yvonne Rogers: Thank you.
- There it is. Thank you very much, oh, and another scroll, wow, look at that,
- thank you. Well I will hang this somehow around my neck, but not for the time being,
- but it's really, I'm really, words fail me, I'm just touched by how so many of you are here, and
- also for you to think that I am worthy of this award, so thank you very much.
AI is designed to do the thinking for us, but can it actually help humans to think as well?
The Royal Society Milner Award and Lecture 2022 given by Professor Yvonne Rogers FRS.
Digital technologies have much potential for helping us think: enhancing how we perceive, attend to, notice, analyse and remember events, people, data and other information. But how do we make it happen - especially against the backdrop of AI which aims to do the thinking for us? Professor Rogers’ research is concerned with designing innovative interfaces that can extend how we think when we learn, work and play. Her approach is to make interfaces interactive and empowering; steering, scaffolding and challenging people to think differently and creatively.
In Professor Rogers’ lecture, she will describe how we can open up people’s minds more through designing technology with them in mind.
Supported by Microsoft Research