Lucy Cheke — University of Cambridge
In our lab we’ve been moving recently to both online and gamified ways of testing cognition in children. It’s early days, but in this talk I will share progress so far using two platforms: Gorilla Game Builder and the Animal AI Environment. I will share some preliminary and proof-of-concept data from these games.
Full Transcript:
Lucy Cheke 0:00
So I’m going to talk about a couple of research projects we’ve been doing over the last couple of years, trying to introduce some gamification into cognitive research. I’m going to tell you about two studies we’ve got running using online games of quite different sorts, using quite different approaches to gamification: one using Gorilla Game Builder, which is new and shiny, and one using the Animal AI Environment, which is a sort of home-built thing that I’ll tell you a little bit more about. Just a caveat before I start: I am not an expert on online research or gamification by any means, I’m just sharing some current research that we’re doing. But I’m hoping that not only might there be some useful things here for other people, but also that maybe we can get some help from some of you.
So the thing that I started off thinking about when trying to gamify is that, when we’re talking about doing research with kids, we need to meet kids where they already are. Kids would play computer games all day, every day if they were allowed to get away with it. If we want kids to engage with our research tasks, we can learn a lot from what they are spontaneously choosing to do with their time already. So the first subset I’m going to talk about is what I’ve called tablet tots, that is to say very small children. We’re used to researching with very small children in the lab, with physical, handleable tasks, because up until very recently this was not a group, we’re talking about one to five year olds, that we could usually do computer tasks with easily. But with the advent of tablets, proper tablets that they have at home all the time, not the old-fashioned kind, we now have an opportunity to really exploit the kind of experience that young children now have. And I’m speaking here as a parent of small children. These are my two small children in that picture, aged two and four, playing on their tablets, who would, if allowed, play on tablets all day long. And what we can learn as parents from just watching this is that there are a few things, particularly for the younger kids, that are quite reassuring for us as would-be games developers.
Firstly, these games are not sophisticated. They don’t need a clever backstory; they don’t need anything really fancy. Young children, two, three, four year olds, have very simple tastes. Repetitive is fine, repetitive is good, they like repetitive. They can’t read, yet so many games on tablets, let alone so many experiments, have text instructions; you need to have audio instructions. They will click on the most obvious thing on the screen. And they don’t know gaming and internet conventions: if you have a symbol that universally means something, unless it’s literally the play symbol with a triangle, they don’t know what it means.
And dexterity, for the young kids particularly, isn’t great. So dragging and dropping is fine from about three and a half up, but my two year old quite struggles with it. So I wanted to create a game that might allow us, particularly during the pandemic, to start testing really little kids online. And I came to Jo, and Gorilla generally, with my needs: I cannot code, I have very little time, and I have very few resources. And Jo said, well, we’ve got this new thing, and it’s Gorilla Game Builder. And I promise you, they’re not paying me, but this was quite a find. With Gorilla Game Builder, I’m sure you’re going to be taken through it in more detail, but basically you get a sort of screen like this, where you can
3:50
pick ‘create with Game Builder’ here, and then you build a screen with objects on it; you can move them around and make them clickable and so on. And then you have lots of different pages in the experiment, but you can also animate, to change what’s on the screen as people go through the task. So I wanted to start very simple, because I didn’t know what I was doing, and I created a very simple visual search task.
So this task, the pirate game, is a very simple visual search task. Essentially, you get given a target item: you get the pirate, and he says please can you find this, and this is a silver coin. After that you get an array. This is an example of a pop-out array, that is to say the target is notably different from the distractors, so the idea is that it sort of pops out at you. Or you can have what I have referred to as a camo array, which is where the items don’t stand out terribly well from the background and are quite similar to one another, and this involves much more search effort. So what we varied here was set size, i.e. the number of distractors, the obviousness of the target, and the position. I’m just going to show you a quick little video of what this task looks like. If someone could tell me whether or not the sound works on this, that would be really great.
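Just to make the design concrete, here is a minimal sketch, in Python, of how those three factors could be crossed into a trial list. The specific set sizes and position labels are illustrative assumptions, not the exact values used in the pirate game.

```python
# Sketch of crossing the pirate-game factors into a trial list.
# The factor levels below are illustrative assumptions, not the study's values.
import itertools
import random

set_sizes = [4, 8, 16]                    # number of distractors (assumed levels)
target_types = ["popout", "camo"]         # obvious vs. camouflaged target
positions = ["left", "centre", "right"]   # coarse target position on screen

trials = [
    {"set_size": n, "target_type": t, "position": p}
    for n, t, p in itertools.product(set_sizes, target_types, positions)
]

random.shuffle(trials)  # randomise trial order for each child
print(trials[0])        # e.g. {'set_size': 8, 'target_type': 'camo', 'position': 'left'}
```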
5:29
Could you hear that?
Jo Evershed
The sound isn’t coming through to us.
Lucy Cheke 5:33
Okay, the only thing with the sound is that all of the instructions are also read out as well as being written.
Jo Evershed
Lovely, that’s such a nice thing to do for young kids.
Lucy Cheke
So this is kind of fun, and then you have an array, and the child has to go and tap or click on the target. So these are the practice trials, and these are nice, big, obvious targets.
6:04
That’s the end of the practice session, and then we have more trials, but with much more varied stimuli. So these are the slightly more obvious examples, and then you have these ones that are the really difficult ones.
Okay. So that’s the game; it’s incredibly simple. And actually the gamification isn’t great either, in terms of my design, because we didn’t do much in the way of cool animations that happen when you get the right answer and so forth. That was mostly because I was so short of time: I designed a whole animation but didn’t have time to build it. But it’s totally doable with the system. So I’m just going to show you a little bit about the quality of the data we’ve got so far. This is work in progress; so far we’ve only got nine four to seven year olds and sixteen eight to twelve year olds. This is, as Jo mentioned, a study on cognition in kids with long COVID, where we’re testing those who have had COVID and those who have not. If you have a child aged four to twelve, please feel free to sign up, we’re still recruiting; you can just follow that QR code to do so, and it’ll be at the end of the talk as well. So, bearing in mind that this is work in progress, I’m not going to show you the long COVID data because the sample is too small for that so far. But what I wanted to check was whether or not this was showing the right sorts of patterns that we’d expect to see with a visual search task.
So you can see here you’ve got children in different age groups, and their performance in terms of reaction time and accuracy across these different levels of difficulty. Here we can see that, as you might expect, the more obvious the target, the quicker and more accurate kids are. With the older kids, mean reaction time remained pretty steady regardless of how many distractors there were; with the younger children, it went up quite significantly. And this is what we were really looking for: the interaction between set size and pop-out, which is something you really expect to see with a visual search task. So here we can see that when the target is obvious, in the pop-out condition, you see this flat reaction time, whereas in the camo condition, when the item is quite subtly different from the distractors, you get an increase in reaction time as the set size increases. That suggests we’re picking up on the two kinds of visual search: the pop-out effect, where you just see it, and the serial effect, where you have to look at every single item in turn. And that’s really reassuring.
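One way to quantify that interaction, if you have a trial-level data table, is to fit a reaction-time-by-set-size slope separately for the pop-out and camo conditions: a near-zero slope suggests parallel, pop-out search, while a positive slope suggests serial search. A minimal sketch, assuming hypothetical column names rather than the actual Gorilla output format:

```python
# Minimal sketch: estimate ms-per-item search slopes per condition.
# Column names (condition, set_size, rt_ms) are hypothetical.
import numpy as np
import pandas as pd

def search_slopes(df: pd.DataFrame) -> pd.Series:
    """Slope of reaction time against set size, per condition."""
    return df.groupby("condition")[["set_size", "rt_ms"]].apply(
        lambda g: np.polyfit(g["set_size"], g["rt_ms"], deg=1)[0]
    )

# Toy data shaped the way the real per-trial output might be:
df = pd.DataFrame({
    "condition": ["popout"] * 3 + ["camo"] * 3,
    "set_size":  [4, 8, 16, 4, 8, 16],
    "rt_ms":     [820, 830, 845, 900, 1150, 1600],
})
print(search_slopes(df))  # flat-ish slope for pop-out, steep slope for camo
```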
The other thing I was looking for was a left-hand bias. We tend to have a left-hand bias in visual search, mostly because in English we read from left to right, and also because of the left-brain/right-brain issue and the fact that we’re mostly right-handed. And again, because I was able to dictate exactly where on the screen each of these coins was, and they each had a location, I can then map that. And I found that I’ve got this nice central-to-left-hand-side bias. So that was just an example of the first go I’ve had doing things in Game Builder. I made a functional visual search task in about two weeks with no coding. We can put audio instructions on, so little kids don’t have to read. It’s playable on tablets, laptops, computers and phones, but I opted out of that. And it was by far the most popular of all the tasks in the larger study that it was part of. I’ll definitely be making more tasks this way, and with more time I think I can make something a lot better. But that’s what I could make in less than two weeks, in the spare time I had, which I think says a lot.
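Because each coin had a known screen location, the bias check itself is straightforward. A rough sketch of how it could be done, again with assumed column names and screen width rather than the study’s actual ones:

```python
# Sketch: summarise speed and accuracy by horizontal target position.
# Column names (target_x, rt_ms, correct) and the screen width are assumptions.
import pandas as pd

SCREEN_WIDTH = 1280  # assumed logical screen width in pixels

def horizontal_bias(df: pd.DataFrame) -> pd.DataFrame:
    """Bin each trial's target x-coordinate into left/centre/right thirds
    of the screen and summarise mean reaction time and accuracy per bin."""
    bins = [0, SCREEN_WIDTH / 3, 2 * SCREEN_WIDTH / 3, SCREEN_WIDTH]
    labels = ["left", "centre", "right"]
    binned = df.assign(region=pd.cut(df["target_x"], bins=bins, labels=labels))
    return binned.groupby("region", observed=True).agg(
        mean_rt_ms=("rt_ms", "mean"),
        accuracy=("correct", "mean"),
        n_trials=("rt_ms", "size"),
    )
```

A central-to-left bias would then show up as faster, more accurate responses in the left and centre bins than in the right bin.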
9:51
The other thing I’m going to talk you through is the other set of research that I’ve been doing over the last couple of years, and this is potentially for slightly older kids. Some cognitive abilities don’t really lend themselves to being tested with the sort of 2D, tablet-style game that I’ve just shown you. These are better tested with games using some form of embodiment, where you are a first-person individual exploring an environment and interacting with objects. And these are really popular with kids, as we can see from the popularity of games such as Minecraft and Fortnite. They’re normally first-person shooters, but there’s no law that says you have to have a gun in the game.
So I want to introduce you to the Animal AI Environment. This is an environment created by a research team that I work with, built in Unity and originally designed to assess cognition in AI using tasks that are designed for testing cognition in animals, and that can be, or could have been, physically implemented with animals and small children in real life. So, to see a little of what it looks like: the aims of the game are very simple. Retrieve food, which is the green or yellow balls; avoid poison, which is the red balls; and avoid lava, which is a red or orange floor. We did a proof-of-concept study comparing deep reinforcement learning AIs and six to ten year old children on a subset of the tasks that we presented in an AI competition a couple of years ago. There were 900 tasks in the original competition; kids can’t do all 900 tasks. So we chose 40 tasks from 10 domains that span the simplest common-sense cognitive tasks that people do with animals. And this is Kozzy Voudouris, my PhD student, who did this work.
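To give a feel for the structure, here is a simplified, Gym-style sketch of one episode in an arena like that. This is not the real Animal AI Python API; env and policy are hypothetical placeholders standing in for the Unity environment and whatever is controlling the agent (a trained AI, or conceptually a child at the keyboard).

```python
# Simplified, Gym-style sketch of one episode; NOT the actual Animal AI API.
# `env` and `policy` are hypothetical placeholders used to show the loop.

def run_episode(env, policy, max_steps=500):
    """Roll out one episode: observe the arena, act, accumulate reward.
    Green/yellow balls add reward, red balls subtract it, and touching
    lava (the red/orange floor) ends the episode."""
    observation = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(observation)                 # e.g. move forward, turn left/right
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:                                     # food collected, lava hit, or timed out
            break
    return total_reward
```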
So how do AIs compare to six to ten year olds? Well, a summary of the data is: not well. Children outperformed AIs on everything. There was no significant difference between the age groups, but interestingly, experience with similar computer games did make a difference to the results. So that’s just a thing to note: if you are gamifying in a way that matches something kids might already be doing, then they will have some expertise that will differentiate between different children. And Kozzy did a cluster analysis showing that you could identify which individuals were AIs and which were children, regardless of the age of the child or the flavour of the AI. One thing we looked at in particular, for example, was object permanence. This is something that kids pass at the age of about 12 months in real-life scenarios. With the agents, we could not only look at the score but watch what they did. And if you look back, not only do we know that, for example, the top agent only scored 25%, we know that it got that score by using the rule ‘always go left’. So actually, if we compare the children to even the best AIs, children outperformed them on every single thing.
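As an illustration of what that sort of check could look like, here is a sketch using k-means as a stand-in for whichever clustering method was actually used. The feature matrix, its columns, and the example values are invented for illustration, not the study’s data.

```python
# Sketch: do AIs and children separate into distinct behavioural clusters?
# Features and values below are invented for illustration only.
import numpy as np
from sklearn.cluster import KMeans

# Rows = individual players (children or AIs); columns = behavioural features,
# e.g. overall score, proportion of leftward choices, path straightness.
features = np.array([
    [0.80, 0.30, 0.7],   # child
    [0.75, 0.35, 0.6],   # child
    [0.25, 0.95, 0.2],   # AI that learned an "always go left" heuristic
    [0.20, 0.90, 0.1],   # AI
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # same cluster = similar behaviour, regardless of age or AI flavour
```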
There was one set of things that the children didn’t do so well on. This is an example of a tool-use task, and this is a nine year old who passed it. We realised afterwards, and this is one of the important things about doing this proof-of-concept and preliminary work, that these were just way too difficult. It sounds obvious to pull a hook to get a reward, but the physical manipulation of the hook was actually very difficult for children and AIs both. What was interesting is that the difference between the children and the AIs was completely observable even when both failed. So you can see here on the left an AI basically just going around in circles, whereas the child takes a really considered approach; we can watch back exactly what the child did. And then, through different ways of coding the data, we can start to get maps of not just performance, but patterns of performance and types of behaviour.
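For the kind of coding described here, one simple approach is to turn each replayed trajectory into a handful of behavioural descriptors that distinguish circling from considered, goal-directed movement. A sketch under that assumption, with feature choices that are mine rather than the study’s:

```python
# Sketch: turn a replayed trajectory into simple behavioural descriptors.
# The choice of features (path length, turning, coverage) is an assumption.
import numpy as np

def trajectory_features(xy: np.ndarray) -> dict:
    """xy: array of shape (timesteps, 2) giving an agent's arena positions."""
    steps = np.diff(xy, axis=0)
    step_lengths = np.linalg.norm(steps, axis=1)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turning = np.abs(np.diff(np.unwrap(headings)))
    return {
        "path_length": float(step_lengths.sum()),                     # distance travelled
        "mean_turning": float(turning.mean()),                        # circling vs. direct movement
        "area_covered": float(np.ptp(xy[:, 0]) * np.ptp(xy[:, 1])),   # bounding-box area explored
    }
```

An agent circling on the spot would show high turning and low area covered, whereas a considered, goal-directed child should show the opposite pattern.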
We’re taking this forward with the Animal AI Environment. Firstly, I’m working on some particularly large batteries: Kozzy’s working on an object permanence battery, and this is Denia, who’s working on a space and objecthood battery. We’re also working on new features. We’re building in reward dispensers, which are these ripening and decaying rewards, and new playable characters, to make it cuter and funner. So we’re basically trying to make it more versatile and attractive. It is currently available for other researchers to use online, but it’s really tricky and fiddly at the moment, so we’re also working on making it more accessible. To this end, we will be hiring someone, so please do watch this space, and if you’ve got game dev experience, particularly with Unity and putting things online, get in touch.
14:30
And the point of this, I realise I’m running out of time, but the point of this particular approach is that we’re really trying to build it for translational potential. Because the Animal AI tasks are set in this kind of physical environment that could be real, it could be a room in a lab, they are playable as computer games, yes, but they are directly related to tasks that can be, and are, used physically in the lab, both with animals and children. So this facilitates translation between animal models; lab work with babies, toddlers and children too young for computers; online games for older kids and adults; and computational models, because remember, this was designed for, and is being used for, assessing AI models. So we can make computational models of cognition, of impairment, of development, and directly test theoretical accounts, all within the same environment. To summarise, Einstein apparently said that having fun is the best way to learn. Having fun is, I think, also the best way to test what has been learned. And I’m certainly going to be moving more towards gamification in everything I do from now on. So just a quick thank you to the many, many people that have been involved in both of those research projects, and thank you to Gorilla and to Jo for inviting me to speak today. If you want to take part, or to try out the pirate game on your own kid, you can follow this link down here at the bottom, or scan this QR code, and it’ll take you to it. Thank you very much.
Jo Evershed 15:58
Thank you, Lucy, that was amazing. I’m going to ask the audience in the chat to put any questions you have for Lucy into the Q&A now, so I can come back to those in a minute. I’m just trying to think what I want to ask you; I just find that all so fascinating. What I don’t understand is how you have been able to do all of this. You’re like a mum, you’ve got two kids about the same age as mine.
Lucy Cheke 16:31
Thank you. Okay, so I did the visual search task, and that was mostly because, with the COVID project, it’s funny, I have two hats on. I am a researcher in the psychology department studying the effects of health on memory and cognition, and that basically has no funding and is running on volunteers and students and me trying to do stuff myself. And then I also work in an AI institute called the Centre for the Future of Intelligence, where I’m the director of the Kinds of Intelligence projects, and we’ve got multiple, incredibly talented postdocs doing amazing things, and PhD students as well. So that side is well resourced; this is not. So on this side I did the Game Builder thing myself, because I didn’t think it was fair to expect anyone else to work out this new tool. There is a moth on my face. And the Animal AI thing has been developed with, it’s mostly funded by, people who are very interested in understanding the cognitive abilities of AIs at the moment, which is a big deal for economic policy, security and so forth. But I’m keeping it so that it can still test kids, so that we can bring it further into psychology. And I’d really love to build a community around people using this to test kids, because I think it has loads of potential. But yeah, there’s a big team.
Jo Evershed 17:51
So that’s amazing. I totally agree with you: I think the idea that AIs are going to reach adult-level cognition without first developing childlike cognition is unlikely. So we’re going to have to work our way back and layer those abilities in. And it was so interesting to see how the AIs were performing against, what was it, six to ten year olds, and still very basic. But if we can get that data from younger children, that’s going to help that journey. Of course, at the moment you can’t get younger children to do this stuff online; the tasks are too complicated. But if we can simplify them, then we might be able to start getting to a point where the AI can outperform a very young child, if we’re confident they’re using the controls. So I think that’s tremendously interesting. A question has come in, thank you: do you ever allow adults to play with their kids, e.g. as a fun activity to do together? Because then adults could try to encourage kids to beat them.
Lucy Cheke 18:48
That’s a good point, it’s a good idea, and we haven’t done that. With the pirate game, I’m not sure how; I mean, there are practice trials and so on. But with Animal AI, certainly, we had practice trials, and it’s very open, which is why it actually takes a lot of skill and developer time, so it’s currently tricky. But in the study that we ran, we had children requesting to be able to come back and just keep playing and messing around with it. So we had to have a sort of two-tier system: ‘we’re collecting data from you’ versus ‘you’ve come back for the sixteenth time and are trying to beat your own record’. So definitely, there was lots of scope for that. The tasks we were doing were boring for adults, but that’s because it was really ‘here’s a reward, can you get it?’. They don’t need to be boring for adults, though, and we’ve made some really hard ones now, and we are planning on testing adults as well. So I think that’s a really great idea and a way of engaging kids.
Jo Evershed 19:53
Yeah, and if you were using Game Builder, there are two ways you could do that. You could have ‘now the grown-up has a go, and now the child has a go’, so you could interleave them; that could work really well. Or, we do have multiplayer coming to Gorilla, so you could set it up so that a parent and a child could in fact play together and respond to the same task. I guess the bit that’s always tricky, though, is that you don’t want the parent coaching the child, because you actually want to genuinely capture the child’s behaviour. That’s always the tricky bit to compensate for. I have tweeted your link saying that you need more participants. If anybody on the call today has got young children, please do consider reaching out to Lucy.