Nick Hodges - Gorilla
In this talk, Nick will give an introduction to Gorilla’s new generation of tools, Task Builder 2. He’ll talk about how it works, what prompted the Gorilla team to develop it, and some examples of the tasks you can create.
Jo Evershed 0:00
Next up is Nick Hodges from Gorilla, who's going to share some of the new features that are available in Task Builder 2, the new offering from Gorilla. And then after that talk, we're going to have a break. So stick around to discover how the researchers Neil, Saloni, and Myles have created their studies, and how in the future it's going to be even easier to create these studies and deploy them online. Saloni, Neil, Myles, thank you, you've all been absolutely brilliant. Do stick around.
Nick Hodges 0:29
Oh, thank you, right. Let me just show this. Can you see my slides? And hear me? Cool. Wonderful, thank you. So six years ago, a researcher, probably just like many of you watching, had been trying for months to get her staircasing task to work, and as you can see here, had quite clearly got to the end of her tether. I'm sure many of you have felt like this at times. Fast forward six years, and she's built the same task in Gorilla in around two hours. Jade is now a colleague of ours here at Gorilla, and it was at the point where she managed to put the task together so quickly that we knew we were onto something with our designs.
So hello, everyone. Good morning, I'm Nick, and I'm going to show you the next generation of our tools, Task Builder 2. Now, our vision here at Gorilla is to give you a single platform with a common visual language for questionnaires, tasks, human-computer interaction, games, multiplayer, shops, and so on. And the big idea that we're running with at the moment is that everything is nicely integrated, and every piece is aware of all the different components it's working with.
Now, that all sounds very high level, but fundamentally, I want to turn you from this poor person on the right into this much more relaxed person on the left; that's what we're shooting for. So the first thing with Task Builder 2 that we really want to do is make it that much faster and easier to iterate on your task and add in different ideas. So picture this scenario; it doesn't matter whether you're a PhD student, a postdoc, or a PI, I'm sure you're all familiar with it. Imagine you're building a task. It's a simple little puzzle; I'm sure some of you have seen this example before. It's a little three-by-three grid with a missing picture in the bottom right, and you've got to say which option is the right one. So you've got some instructions, and then we go through some trials. So I think that one's probably the cat. Not quite sure, let's have a look. Let's go for the lamp, perhaps. And this one could be the burger.
So you build instructions, you put some trials together, and then you go to review it with your colleagues, whether that's your PI, your supervisor, or your review board, and someone will probably say, oh, we should probably add some practice trials that tell participants whether they're correct or not, so we can see whether people are getting it. Great. That's nice and simple, we can do that. Let's go ahead and add it straight in. So we're going to add an object in here, we'll grab the feedback, we'll put it straight in, move it around a bit, done.
We can actually preview our trial straight in here, so we can see if this is working. So if we click on the cat, that's the right answer, and we can see that it's marked correct in the stream down the right. And the snail is coming through correct too. Great. So you go back with the feedback in place, and someone says, oh, this is really cool, but I think we should add a screen after the practice trials so they can do them again if they want. Okay, let's do that. So here is the spreadsheet that's driving the task: you've got instructions, you've got the three practice trials, and I've introduced a little practice-complete screen at the end. Here it is, just a simple bit of text with a yes/no response. And all I'm going to add in here is the ability to jump to a different row of the spreadsheet. So at the end we say, well, if they respond yes, then we want to jump back to the row where our practice trials start, which is still in our spreadsheet.
So if we go ahead and look at that, we can see it happening. There we go, here's our instruction screen again. We go through our practice trials, and we can see the feedback coming through now. And then, oh, we made a bit of a hash of that, so yes, let's do them again. So we go back and do them again. Oh, I see, there we go. And there's a chandelier. And this one must be the red burger. Okay, great. And then we say no, don't do that again, and it goes on through the main trials.
You go back to your supervisor, who says, oh, that's really good, but actually, thinking about this, let's not give them a choice: if they get anything wrong, they just have to do it again. Again, we've made this nice and easy to do. The first thing we need to do is make sure we keep track of any mistakes, so we're going to add this save-accuracy component in here and say we're going to store the number of incorrect responses in a field called mistakes. And then all we do is change our criteria. So instead of jumping when they say yes, we're going to say, well, whenever that mistakes field hits a certain criterion, in this case whenever it's greater than zero, then we'll jump back. Now, I don't know about you, but I can't afford to spend my time doing this in code for every task; doing it in the UI like this is so much faster. And I think the key point for me is that these are key, important scientific decisions that you want to be able to make about your participants' experience. You don't want implementing them to be burdensome; you want them to be quick and easy. You just want to turn it on and away you go.
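For readers who do think in code, the jump-back criterion above boils down to a tiny piece of logic. Here is a hedged sketch of it in JavaScript; the names (`nextRow`, `mistakes`, `PRACTICE_START_ROW`) are assumptions for illustration only, not Gorilla's API, since in Gorilla this is all configured visually:

```javascript
// A minimal sketch of the jump-back-on-mistakes logic described above.
const PRACTICE_START_ROW = 2; // spreadsheet row where the practice block begins

function nextRow(store, currentRow) {
  // Criterion: any incorrect responses recorded by the save-accuracy component?
  if (store.mistakes > 0) {
    store.mistakes = 0;        // reset the counter before the retry
    return PRACTICE_START_ROW; // jump back to the practice trials
  }
  return currentRow + 1;       // otherwise carry on to the next row
}
```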
Now, we're going to hear more later today about improving the experience of your participants and how that gets you much better quality data. But the key thing for me is that it just needs to be really easy to add all these things in. I was quite inspired watching Myles' talk just now, all of it really, but certainly the bits where he's saying, well, now that he's found these findings, he wants to try all this other stuff, synchronising with the ballerina and all these other kinds of things. That's the kind of thing I'm talking about, where you can have ideas and get them into your task really quickly.
So that's an example of how we have reimagined your workflow in our new tooling. I want to show you some of those ideas in more detail, and how they actually interoperate. The first one is live preview. And the first point to make is that the preview panel in the centre, with the four response options and the bit at the top, is the actual live task engine running in real time right there in the tooling. So when you're moving stuff around, you can lay stuff out visually and see exactly what the person is going to see in that window. Everything is reactive and visual. You can drag things around, you can resize them, we've got a grid to help you lay stuff out nicely, you can change the scale of that grid if you want a bit more fine-grained control, and if you want to be really, really exact, you can turn the grid off completely and it will switch to a completely pixel-based coordinate system. And you can always switch back if you change your mind and readjust your stimuli to put them where you want them.
As well as that, you can see how this looks on different sized screens. You can just flick through these previews here and dynamically scale that panel to those screen resolutions, so it's really easy to see how it's going to shake out on different screens. On the debug panel over here, you can actually look at individual rows of your spreadsheet, so you can jump into specific trials to see how they work, and this makes it really easy to troubleshoot things. And finally, you can play the trials for real yourself, with no need to go into a separate preview. And on the right we're seeing the actual logic of what's going on, so if I choose this now, you can see it's been marked as incorrect as we go through.
The next place this applies is spreadsheet randomisation. I'm sure many of you are familiar with using the spreadsheet to randomise your tasks. So here is a spreadsheet for my task, where I've got my instructions and debrief, and I've got six trials going down there with their puzzle and their response options all configured up. Now, if I want to randomise this, I've added a column here so that all the trials I want to randomise have got a one in them, and then I can turn on randomisation, add randomised trials, and specify the column I want to randomise on. I can click the little randomisation preview button, the little dice there, and I can just click it over and over again to see the randomisation logic running for real. And this gives me a preview of what's going to happen when my participants take part and how it might randomise for them.
If I wanted to do more complex randomisation: the first one just randomised that set of trials, but here I'm going to do it based on the same field, except I just want to take a random subset, say three of those trials, and then randomise them. So we're going to take three of that section and do them in a random order. And again, we can see that happening, and we can see those trials coming through as a random subset in a random order. So all of this is live, all of this is reactive, and all of this is showing you exactly what's going to happen, completely within the tooling.
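To make the subset-then-shuffle behaviour concrete, here is a rough sketch of what that randomisation logic amounts to. It is not Gorilla's internal code; `shuffle` and `randomSubset` are illustrative names under that assumption:

```javascript
// Fisher-Yates shuffle: returns a new array in random order
function shuffle(rows) {
  const out = rows.slice();
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Take n rows at random from the marked trials, in a random order,
// mirroring the "random subset of three" example above
function randomSubset(trials, n) {
  return shuffle(trials).slice(0, n);
}
```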
The second big idea here is what we call the component system. In a traditional system, each element on the screen is just a single thing: an image, or a button, or something like that. The problem with that approach is that you quickly end up with a sort of combinatorial explosion of behaviours: you need an image, a clickable image, a draggable image, a clickable text, a draggable text, and so on. And that ends up being limited and somewhat brittle. So the solution to this is components. Instead of each object being one thing, each object is made up of separate components, and you can connect them together like Lego bricks. In the simplest case, you might want something that is an image, so we add an image component. If we want it to generate a response when you click on it, we add a click-response component. If we want you to be able to drag it, we add a draggable component. If you want it to start out invisible and then appear after 2000 milliseconds, you add a trigger-visible component and set up one trigger on the start of the screen, where we set the visibility to invisible, and then a second trigger on time elapsed, where after two seconds we set it to be visible again.
And for all of this behaviour, if later on you change your mind and decide, sorry, this isn't going to be an image, it's going to be a video, all you need to do is swap out the image for a video and all the other stuff behaves exactly the same way. All these other components don't really care what the main thing you're playing with is, and they all interoperate really, really seamlessly. So we can see this in action now. Here I've got an image with a stimulus on it, and let's go through that trigger-visible example. You can see that all of the UI here reacts to the settings you're putting in; it's all nicely aware, it only shows you the settings that you need, and it makes it really easy to configure all these things visually. You haven't had to touch any code to do any of this. And there we go: after the two-second delay, my image appears.
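As a way of picturing the component model, here is a small sketch of the idea in JavaScript: an object is just a collection of components, and a behaviour component like trigger-visible doesn't care whether the content is an image or a video. All names here are assumptions for illustration, not Gorilla's scripting API:

```javascript
// An object is just a bag of components plus some shared state
function makeObject(...components) {
  return { components, visible: true };
}

// A trigger-visible behaviour: hide on screen start, show after a delay
function triggerVisible(delayMs) {
  return {
    onScreenStart(obj) { obj.visible = false; },
    onTimeElapsed(obj, elapsedMs) {
      if (elapsedMs >= delayMs) obj.visible = true;
    },
  };
}

// Dispatch an event to every component that handles it
function fireEvent(obj, event, ...args) {
  for (const c of obj.components) {
    if (typeof c[event] === 'function') c[event](obj, ...args);
  }
}
```

Swapping the content component (say `{ src: 'stimulus.png' }` for a video component) leaves the trigger-visible behaviour untouched, which is the interoperability point being made above.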
Now, the next thing I want to talk about is binding. Binding is a key concept in Gorilla; we had it in Task Builder 1, but we've extended it and made it more powerful and easier to use. The basic concept of binding is this: when you have your trial, which in this case has a fixation, a stimulus, and a response, well, the fixation is the same every time, and maybe the response items are the same, but we want to change the stimulus with each trial. So what we want to do is inject the value from our spreadsheet into that stimulus box where we define what our stimulus is going to be. Now, it could come from your spreadsheet, but it can come from other places too; you can also do it in the experiment tree. So if you're randomising people to two conditions, you might want to specify a time limit here at the manipulation level: it's bound to a manipulation, and in one condition they get 10 seconds and in the other they get 20. And this is all done in here. In this case, we're binding the stimulus to the spreadsheet: you can add it as a fixed, static value, or you click this little binding button, the little chain links, and then you can choose the column that you want to bind it to. So there's no having to type in magic strings; it's already aware of all the columns that you have, and if you specify a new one, it appears. The same thing goes for manipulations, and finally the store. The store is the new version of embedded data, and you can do the same kind of thing: when you're saving stuff out to embedded data to use later, that all happens through this UI, and the same goes when you want to bind it back in again to use it.
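The binding idea can be sketched as a setting that is either a static value or a reference into a named source (spreadsheet row, manipulation, or store). This is an assumed shape for illustration, not how Gorilla represents bindings internally:

```javascript
// Resolve a setting against the current trial context: a bound setting
// pulls its value from a source (spreadsheet, manipulation, store);
// a static setting is used as-is
function resolve(setting, context) {
  if (setting.bind) {
    const { source, key } = setting.bind;
    return context[source][key]; // e.g. context.spreadsheet.stimulus
  }
  return setting.value;
}
```

So a stimulus bound to a spreadsheet column changes with every row, while a static fixation image stays the same on every trial.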
You've also now got much more control over responses; the response processing has taken a big step up. In most trials, one way or another, we present something on the screen, wait for the participant to do something, and record their response. By default, you wait for a response from the participant and advance on the first one, but that's not always what we want. The simplest thing you might want to do is score it, so you can add a score to your screen and mark the response correct if it matches a particular answer. You might want to advance only on the first correct response, not just any response, so they can get it wrong as many times as they like and you only advance when they finally get it right. Or you might want to advance only on a specific response.
Getting more complex, you might want to force them to put in a certain amount of effort; they might have to make 20 responses in order to get through to the next screen. And this is a way in which you can essentially take these individual responses and combine them into one bigger response that you then process later. You can do the same thing with compound responses, and I'm going to show you an example of this in a moment, where we essentially want to take a bunch of responses and join them together, as if they're spelling out a word or something like that. But again, the important point here is that all of these things are decoupled and modular, so you can combine them together in a way that works for you.
So here's a quick example of a digit span; this is the screen where you actually enter the numbers at the end. We've shown the participant a sequence of three numbers, and they've got to remember them. You can see if I just hit 1, 2, 3, the response builds up in that panel on the right, and they can use the backspace there if they want to delete an answer; we can see that coming through in the panel on the right too. Now, each of those individual buttons is just a regular button with a response on it, and it's this compound-responses piece that is doing the joining together. What the compound-response component is doing is saying, okay, we're going to wait until we've got at least three responses, and only then do we pass it on through the pipeline to the score, which can then mark it against our correct answer. You can use the same thing to do things like trail making. This is just a simple example, but you can imagine it could have a map or something behind it, where they're clicking the things in order to build a trail. And again, you can see the responses coming through on the side, and it's the same compound-responses component joining all those things together, amalgamating all those individual clicks into one canonical response.
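The buffering-then-joining behaviour of the compound-response component can be sketched like this. The function names and shape are assumptions for illustration; in Gorilla this is a ready-made component, not something you write:

```javascript
// Buffer individual responses and pass them downstream only once
// enough have been collected, joined into one canonical response
function compoundResponse(minResponses, downstream) {
  const buffer = [];
  return {
    respond(value) {
      buffer.push(value);
      if (buffer.length >= minResponses) {
        downstream(buffer.join('')); // e.g. '1' + '2' + '3' -> '123'
        buffer.length = 0;           // clear for the next trial
      }
    },
    backspace() { buffer.pop(); },   // delete the last response
  };
}
```

Here `downstream` plays the role of the score component, which then marks the joined string against the correct answer.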
With all this together, there's a huge amount of other things you can do here. You can do difficulty ratcheting, where you have multiple spreadsheets of different difficulty: you might have a spreadsheet of easy trials, one of medium trials, and one of hard trials. And again, we can do all of this really easily in the UI: you simply store the count of correct and incorrect answers, then you add criteria for moving between the spreadsheets, and that lets you do difficulty ratcheting.
You can do staircasing, and this is the one that Jade was building earlier. We do the same trick of counting their correct and incorrect answers, and then we add criteria for stepping our target value, in this case how long they get to see a stimulus for. We step that target value up and down: they get less time to see it the better they do. You can do within-task branching, jumping between rows and spreadsheets; based on the criteria you want, you can jump to a different row, which is how we did the going back to the practice trials again. And you can branch between different screens with the same criteria, saying, well, based on what they say here, or what happens here, we want to jump to this other screen.
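As a concrete picture of the staircasing step, here is a minimal one-up/one-down sketch that adjusts how long the stimulus is shown. It is illustrative only; in Gorilla you set this up through criteria in the UI, and the step size and floor here are made-up values for the example:

```javascript
// One-up/one-down staircase on stimulus display time:
// correct answers make the task harder (less viewing time),
// incorrect answers make it easier (more viewing time)
function staircaseStep(displayMs, wasCorrect, stepMs = 50, minMs = 100) {
  if (wasCorrect) {
    return Math.max(minMs, displayMs - stepMs); // harder, but never below the floor
  }
  return displayMs + stepMs;                    // easier
}
```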
And so that's where we are at the moment. This is live now on Gorilla, and you can all go and have a go with it. Coming soon is scripting. The main thing we want to add here is, first of all, the ability for you to build your own components. The crucial thing about this is that when you build a component yourself, you can expose the settings that it needs as UI elements in the actual tooling. So when you've built your component, you'll be able to add a little configuration menu, just like all the ones that you've seen, so people can configure how they want your component to work. And you've got access to all of the binding tech and everything else that's in there. So other people using your code don't need to copy in your script and mess around with it; it will come up for them just like any other piece in the tools.
And all of the things that I've shown you, whether they're visual things like images or videos, anything to do with behaviour, whether they show or hide or trigger, any of the response-processing things, and indeed any of the randomisation things: they're all just components, and you script them all in the same kind of way with the same set of interfaces. So it's all really consistent and easy to get into.
And then finally, the other bit that's coming soon is our new evolution of open materials. At the moment, in open materials, you can share experiments and tasks, but the evolution we want to put in there is that you can also post components. So whenever you build something that's really useful and reusable, you can post it to open materials, and it'll actually come through in the UI for people to put into their own tasks.
I can see we're running a bit over time, but there's one other thing I wanted to show you quickly. This is a very simple demo we made for doing UI and UX research. Here I've got the Gorilla website mounted in a tablet frame, and what I've done is add little hotspots, essentially clickable regions of the screen over the top of the image, to see how people navigate through this UI. What I'm interested in is whether they go through the signup flow or whether they go and look at the tools. And again, this is done really simply. You can see in the tooling here the landing page, and that checkerboard is where the hotspot is. If you go into the signup screen, we've got two more there: one if they click on the signup flow, and one if they click on the header. And on the screen there's my branching, which basically listens for a response and then jumps to a particular screen based on where they've clicked.
So all of that lets you do things like UX research; what we're really trying to do is expand out into all these other different kinds of areas of research that you can do in these tools. And we are on time, so, multiplayer I think we've shown before, and we can come back to that later. The last thing I wanted to share with everyone is that obviously Task Builder 2 is out now, and we're seeing lots and lots of uptake, which is really exciting; it's great to see what people are building with it. And finally, I just wanted to tease that we are also building Questionnaire Builder 2. I know there's been a lot of appetite for this amongst Gorilla users, so all the goodies that you want and expect from a powerful questionnaire builder are going to be there: optional or required responses, conditional questions, branching, scoring, being able to accept URL parameters in from other sources, and indeed the same kind of scripting support that I've talked about for Task Builder 2 is going to be in Questionnaire Builder 2 as well. That is all coming soon too. And so that is everything that we've been building at the moment, and we're really excited to see where you can take it.
Jo Evershed 18:55
Fabulous, thank you, Nick, for sharing the improvements to the Gorilla task builder, and where all of that work is going. That was absolutely brilliant.