Alex Anwyl-Irvine, Cambridge University
@AlexanderIrvine
Full Transcript:
Alex Anwyl-Irvine:
Thanks for joining, everyone. Welcome to the specialist seminar on our MouseView.js library. I’m Alex Anwyl-Irvine; I’m just coming to the end of my PhD at the University of Cambridge, I currently work as an R&D intern at Cambridge Cognition, and I was also a part-time developer at Gorilla.
Alex Anwyl-Irvine:
Tom, Edwin and I made this library, and most of the software development has been done by me, which is why I’m giving this talk about the library today. Much of what we’ll discuss is also detailed in our pre-print on PsyArXiv, which you can find via our website.
Alex Anwyl-Irvine:
So just to give a brief outline, this talk will be in three main parts. The first will be a bit about why we thought this tool would be the answer to some of our research challenges. The second will be about how data gathered using MouseView compares to a typical free-viewing eye tracking task. And thirdly, I’ll talk about how you actually add this to an experiment, using the Gorilla Zone as an example.
Alex Anwyl-Irvine:
So firstly, why did we decide to make this? As the pandemic closed labs this year and last year, eye tracking became difficult to do in the typical way, and the obvious alternative was to use webcams, which generally speaking is a great option. There are plenty of tools out there, like WebGazer, which has been built into platforms such as Gorilla and jsPsych, among others. However, we found that solutions like this tend to amplify the issues you experience with eye tracking in the lab: things like losing the eyes, trouble finding landmarks on the face, and contrast problems with the camera and eyewear. On top of that, you get a lot of attrition, which we have to combat with large samples.
Alex Anwyl-Irvine:
And then even after that, we end up with relatively low-resolution data, which for certain research questions is still interesting, but for us and our collaborators it was quite difficult to work with. So we thought, wouldn’t it be great if we could improve upon these accuracy issues with a different tool? And wouldn’t it be even better if we could avoid the awkward calibration stages and the errors they introduce, since these are the source of much of the attrition you can experience when running these types of experiments online?
Alex Anwyl-Irvine:
And the solution we came up with is mouse tracking with an occluding layer. It’s a pretty simple idea, and it has none of the fiddly technical issues you get with webcam eye tracking. It works in the following main steps. First, we create an overlay on top of any website, and this can be either a solid or a partial cover.
Alex Anwyl-Irvine:
Second, we detect the mouse position, or the touch position on a touch screen, and record those coordinates. Third, an aperture is cut in the overlay at that position, which reveals the content underneath. We then iterate between steps two and three every time the screen refreshes, which moves the aperture to the mouse or finger location and forces the user to move the mouse to any area they’re interested in. Over time we can build up an attentional map, analogous to those we make with eye trackers. And to drive this point home, here’s an example of maps we produced using eye trackers on the top, and then the equivalent map just using MouseView.js; even though these are collapsed over time, you can see they’re pretty similar topographically.
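[To make those steps concrete, here is a minimal sketch of the same overlay-plus-aperture loop in plain JavaScript. It is not the MouseView.js source, just an illustration of the mechanism described above; cutting the aperture with a CSS radial-gradient mask is one of several possible ways to achieve the effect, and the names here are purely illustrative.]

```javascript
// Conceptual sketch of the overlay-plus-aperture loop (not the MouseView.js source).
// A full-screen overlay hides the page; a circular aperture is cut at the cursor
// position with a CSS radial-gradient mask, updated on every animation frame.

const overlay = document.createElement('div');
Object.assign(overlay.style, {
  position: 'fixed',
  inset: '0',
  background: 'rgba(0, 0, 0, 0.99)', // step 1: solid (or partial) cover
  pointerEvents: 'none',             // clicks still reach the page underneath
  zIndex: '9999',
});
document.body.appendChild(overlay);

const apertureRadius = 100;          // px, illustrative value
const samples = [];                  // recorded {x, y, t} coordinates
let cursorX = window.innerWidth / 2;
let cursorY = window.innerHeight / 2;

// Step 2: detect the mouse position (or touch position on a touch screen).
window.addEventListener('mousemove', (e) => { cursorX = e.clientX; cursorY = e.clientY; });
window.addEventListener('touchmove', (e) => {
  cursorX = e.touches[0].clientX;
  cursorY = e.touches[0].clientY;
});

// Steps 2-3, iterated: on every screen refresh, record the coordinates and
// move the aperture to the cursor, revealing the content underneath it.
function update(timestamp) {
  samples.push({ x: cursorX, y: cursorY, t: timestamp });
  const mask = `radial-gradient(circle ${apertureRadius}px at ${cursorX}px ${cursorY}px, ` +
               'transparent 0%, transparent 60%, black 100%)';
  overlay.style.webkitMaskImage = mask;
  overlay.style.maskImage = mask;
  requestAnimationFrame(update);
}
requestAnimationFrame(update);
```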
Alex Anwyl-Irvine:
And this is what MouseView looks like in practice. Here’s the end result: you can see the user exploring the picture with the aperture, revealing the face of the dog and the cat. And in a second, you’ll see a heat map which reveals the attended locations of the image. So that’s essentially how it works.
Alex Anwyl-Irvine:
In addition to this base functionality, MouseView gives you lots of configuration options. Firstly, you can vary the appearance of the overlay that covers the screen. You can alter the transparency, so here it’s going from about 50% to 0% transparent. You can change the color to anything you want. And a big strength is the ability to blur anything underneath that overlay, even when the transparency is zero. This allows participants to see the low-level features of the whole scene and use that to guide their attention, and it simulates something like the non-foveated area of our visual field.
Alex Anwyl-Irvine:
However, this is an experimental feature and can take one to four seconds to initiate, so it isn’t appropriate for things like videos. We do provide a callback function, which allows you to hide the contents of the website or experiment from view whilst the blur generates, but platforms like Gorilla already do this for you, so you don’t really need to worry. The second part of the configuration is the aperture: you can alter its size from very small to very big, and you can also express it as a percentage of the page width and height. Importantly, you can also apply a Gaussian blur to its edge, which softens the aperture and, again, simulates some elements of foveated vision, like the soft edge of the foveal area.
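[To make those options concrete, here is a rough sketch of how the same visual effects can be achieved with standard CSS. This is not the library’s own implementation (the talk notes MouseView.js takes a few seconds to prepare its blur, whereas this quick approximation leans on the CSS backdrop-filter property), and the function and parameter names are my own, purely for illustration.]

```javascript
// Illustrative only: how the configurable effects map onto standard CSS.
// Not MouseView.js internals; `overlay` is any full-screen element like the
// one created in the earlier sketch.

// Overlay color and transparency: any CSS color, with alpha controlling how
// much of the underlying page shows through the covered region.
function styleOverlay(overlay, { color = 'rgba(0, 0, 0, 0.9)', backgroundBlurPx = 0 } = {}) {
  overlay.style.background = color;
  if (backgroundBlurPx > 0) {
    // Blur the content underneath the overlay so only low-level features remain.
    overlay.style.backdropFilter = `blur(${backgroundBlurPx}px)`;
    overlay.style.webkitBackdropFilter = `blur(${backgroundBlurPx}px)`;
  }
}

// Aperture size as a percentage of the page width, with a soft (Gaussian-like)
// edge controlled by how early the gradient starts to become opaque.
function apertureMask(xPx, yPx, sizePercent = 5, edgeSoftness = 0.4) {
  const radius = (sizePercent / 100) * window.innerWidth;
  const clearUpTo = Math.round((1 - edgeSoftness) * 100); // % of radius left fully clear
  return `radial-gradient(circle ${radius}px at ${xPx}px ${yPx}px, ` +
         `transparent 0%, transparent ${clearUpTo}%, black 100%)`;
}

// Example: a mid-grey, mostly opaque overlay that blurs the page underneath,
// with an aperture roughly 5% of the page width and a softened edge.
// styleOverlay(overlay, { color: 'rgba(127, 127, 127, 0.8)', backgroundBlurPx: 10 });
// overlay.style.maskImage = apertureMask(cursorX, cursorY, 5, 0.4);
// overlay.style.webkitMaskImage = overlay.style.maskImage;
```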
Alex Anwyl-Irvine:
So it’s all well and good me telling you about this tool and its settings, but we wanted to find out how well it could replicate eye tracking results. I’m only going to give a brief outline here, because others are going to talk more about this later, but I’ll show you the basis of our validation paper.
Alex Anwyl-Irvine:
So our empirical question was based on a paper published by Tom and Edwin previously, which used a passive-viewing eye tracking task with an in-lab eye tracker. In this task, people viewed a series of pairs of images for 10 to 20 seconds, and these were either a neutral image paired with a pleasant stimulus or a neutral image paired with a disgusting stimulus. I’ve used emojis here to avoid having to show you the images, but I promise you’re not missing out on anything, because they’re quite disgusting, but you might see some of them later, so that’s exciting.
Alex Anwyl-Irvine:
So when we look at the lab eye tracking results for these two types of trials, we find this pattern: an early, involuntary look towards the evocative picture, regardless of what it is, and then a later, voluntary bias. That bias goes in the direction of the pleasant image, so people are looking at the thing they’re interested in, which is quite nice, and it flips for the unpleasant or disgusting images.
Alex Anwyl-Irvine:
And when we used MouseView to replicate this experiment, we found this pattern. We found no involuntary look, but we suspect this may be because mouse tracking only captures top-down attentional control, and I’m sure Tom will touch on this later. We did replicate the later component, the voluntary bias: people dwelt on the pleasant image but avoided the disgusting image quite strongly, and this avoidance strengthened over time, which was also the case in the lab eye tracking study. We found that this element was reliable and valid, but you can read more about that in the pre-print.
Alex Anwyl-Irvine:
Okay, so like I said, we’re going to hear more about what people have done with this tool later, but I’m also here to talk about how to get experiments working yourself. Thanks to the open-source and general-purpose nature of this tool, there are many options available to you, ranging from full-on app development to no programming at all, which is what this slide represents. The [inaudible 00:07:45] JS version can be included in any website; we have a node package manager repository and a GitHub set up, and I’ve tried to make this as straightforward for developers to use as I can, but you do need some coding experience. In the middle, between coding and no coding, you’ve got experiment builders like PsychoJS, PsychoPy and jsPsych, although you still need to do a bit of programming to work with these. Gorilla is more towards the no-coding end, so it’s appropriate for a small tutorial whenever you’ve got 15 minutes.
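[For the developer route, usage of the standalone library looks roughly like the sketch below. The function and parameter names are assumptions based on my reading of the project’s documentation rather than a guaranteed API, so check mouseview.org and the GitHub examples before relying on them.]

```javascript
// Hedged sketch of using the standalone library on a web page.
// NOTE: the function and parameter names below are assumptions -- confirm them
// against the documentation at mouseview.org before use.

// 1. Load the library (via a <script> tag or the npm package) so that a
//    global `mouseview` object is available on the page.

// 2. Configure the overlay and aperture before initializing it
//    (parameter names assumed; the docs list the real ones).
mouseview.params.apertureSize = 5;       // assumed: aperture size as % of the page
mouseview.params.overlayColour = '#000000';
mouseview.params.overlayAlpha = 0.9;     // assumed: overlay transparency
mouseview.params.overlayGaussian = 20;   // assumed: background blur SD in pixels

// 3. Draw the overlay and start recording cursor/touch coordinates.
mouseview.init();
mouseview.startTracking();

// 4. At the end of a trial (20 s here), stop tracking and retrieve the logged
//    coordinates for saving (retrieval method assumed; check the docs).
setTimeout(() => {
  mouseview.stopTracking();
  console.log(mouseview.datalogger);     // assumed data store of {x, y, time} samples
}, 20000);
```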
Alex Anwyl-Irvine:
So in Gorilla, you can use the code implementation, which allows you to customize things a bit more, or you can use their Task Builder, where they’ve recently created a drag-and-drop zone that involves absolutely no coding at all. And that’s what I’m going to show you how to use today.
Alex Anwyl-Irvine:
So, on to the Gorilla Zone. This is an example of a simple trial, identical to the type of free-viewing experiment I spoke about earlier, and these are the Gorilla screens in the task. You have three screens: a fixation cross with a timer, to give a delay between each trial; then a screen with a button, which is very important because it ensures that when participants press it, the cursor or finger is at the center of the screen and they’re not seeing one image more than the other; and then finally the stimuli themselves, side by side, showing for 20 seconds. It’s this last screen that we’re going to put our Gorilla Zone into.
Alex Anwyl-Irvine:
You select this viewing screen, and that brings up this layout, which shows the left image, the right image and a little timing widget. You press the edit layout button and it shows the layout of the screen with these gray boxes. We want to add a new zone, so we press that button, which creates a new zone; we then select it with the mouse, which brings up this menu, and we go to the advanced tab. If you’re on the beta program, you’ll see that little MouseView box there, and that’s what we press to add it in, and then it’s added.
Alex Anwyl-Irvine:
And you can see it here in the center. It will have been added with our default settings, which you can change by finding the correct section in the configuration settings. So if you scroll down, here are the mouse tracking settings. Specifically, in the mode setting, ‘record’, which I’ll use for this purpose, just records the mouse coordinates and applies the MouseView overlay on this screen. You can instead use ‘upload’, which in combination with this setting here allows you to apply MouseView to your whole experiment, so you’re continuously tracking over multiple screens.
Alex Anwyl-Irvine:
This then has a dropdown box with the MouseView-specific settings, and in it you have the things I spoke about earlier: the overlay color, the alpha, which deals with transparency, the Gaussian blur standard deviation in pixels, and the aperture size; these are all configurable. Once we’ve done that, we’re ready to run the example experiment. So, playing this: when you preview the task and launch it, participants receive the instructions, and when they press next, the button appears; they click on it, and here’s our blur with the aperture underneath.
Alex Anwyl-Irvine:
And it really is as simple as that. The data are being recorded, and you’ll be able to download them in a spreadsheet that has all of the coordinates. I don’t have time to tell you how to analyze that, but hopefully we’ll be releasing some code to help people do that on our website.
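[For anyone who wants a starting point before that code is released, here is a minimal, unofficial sketch of one common analysis: the proportion of samples dwelling on each side of the screen, overall and over time. It assumes the downloaded coordinates have already been parsed into an array of {x, y, t} samples and that the two images occupy the left and right halves of a viewport of known width; the function names are my own.]

```javascript
// Minimal, unofficial dwell-bias sketch. Assumes `samples` is an array of
// {x, y, t} objects parsed from the downloaded spreadsheet, and that the two
// images occupy the left and right halves of a viewport of known width.

function dwellProportions(samples, viewportWidth) {
  let left = 0;
  let right = 0;
  for (const s of samples) {
    if (s.x < viewportWidth / 2) left += 1;
    else right += 1;
  }
  const total = left + right || 1; // guard against empty input
  return { left: left / total, right: right / total };
}

// Time course: split the trial into bins (e.g. 2-second windows) to see how
// the bias towards or away from an image develops over the viewing period.
function dwellTimeCourse(samples, viewportWidth, binMs = 2000) {
  const start = samples[0].t;
  const bins = new Map();
  for (const s of samples) {
    const bin = Math.floor((s.t - start) / binMs);
    if (!bins.has(bin)) bins.set(bin, []);
    bins.get(bin).push(s);
  }
  return [...bins.entries()]
    .sort((a, b) => a[0] - b[0])
    .map(([bin, binSamples]) => ({ bin, ...dwellProportions(binSamples, viewportWidth) }));
}

// Example: dwellTimeCourse(trialSamples, 1920) returns per-bin left/right dwell
// proportions, which can then be averaged across trials by condition.
```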
Alex Anwyl-Irvine:
Okay, I think that’s about it, so thanks for listening. You can visit mouseview.org, which has detailed documentation, links to our pre-print and examples of the various levels of implementation I spoke about. The Zone on Gorilla is in closed beta, so you can get it added to your account by contacting them using a form. I’d suggest doing this if you just want to play around with different experiment designs; it’s free to make an experiment, so it’s a pretty cool way of working out what you can do. For updates in the future, follow me on Twitter, which is a shameless promotion, and also follow everyone in this seminar, because I think we’ll all be doing things in the future.
Alex Anwyl-Irvine:
I’d like to thank Edwin and Tom, who have been great at making this project happen, and also Will at Gorilla, who led the programming of the drag-and-drop zone, and Thomas Pronk and Rebecca Hirst at PsychoPy for implementing it there and making our examples. Okay, thanks.