Alex Anwyl-Irvine, Cambridge University
@AlexanderIrvine
Full Transcript:
Alex Anwyl-Irvine:
Thanks for joining everyone. Welcome to the specialist seminar on our MouseView.js library. I'm Alex Anwyl-Irvine, I'm just coming to the end of my PhD at the University of Cambridge, I currently work as an R&D intern at Cambridge Cognition, and I was also a part-time developer at Gorilla.
Alex Anwyl-Irvine:
Myself, Tom and Edwin made this library, and most of the software development has been done by myself, which is why I'm giving this talk about the library today. Much of what we'll discuss is also detailed in our preprint on PsyArXiv, which you can check out on our website.
Alex Anwyl-Irvine:
So just to give a brief outline, this talk will be in three main parts. The first will be a bit about why we thought this tool would be the answer to some of our research challenges. The second will be about how the data gathered using MouseView compares to a typical free-viewing eye-tracking task. And then thirdly, I'll talk about how you actually add this to an experiment, using the Gorilla Zone as an example.
Alex Anwyl-Irvine:
So firstly, why did we decide to make this? As the pandemic closed multiple labs this year and last year, eye tracking became difficult to do in the typical way, and the obvious alternative was to use webcams for it, which generally speaking is a great option. There are plenty of tools out there like WebGazer, which has been built into platforms such as Gorilla, jsPsych and others. However, we found solutions like this tend to amplify the issues that you experience with eye tracking in the lab: things like losing the eyes, issues with finding landmarks on the face, or contrast problems with the camera and eyewear. Further to that list, you get lots of attrition, which we have to combat with large samples.
Alex Anwyl-Irvine:
And then even after that, we end up with relatively low-resolution data, which for certain research questions is still interesting, but for us and our collaborators this was quite difficult to work with. So we thought, wouldn't it be great if we could improve on these accuracy issues with a different tool? And wouldn't it be even better if we could avoid the awkward calibration stages and those errors too? These are the source of much of the attrition you can experience when running these types of experiments online.
Alex Anwyl-Irvine:
The solution we came up with is mouse tracking with an occluding layer. This is a pretty simple idea, and it has none of the fiddly technical issues that you have with webcam eye tracking. It works in the following main steps. First, we create an overlay atop any website, which can be either a solid or a partial cover.
Alex Anwyl-Irvine:
Second, we detect the mouse position, or touch position on a touchscreen, and record those coordinates. Third, at the same time, an aperture is cut in the overlay, which reveals the content underneath. We then iterate between steps two and three, moving the aperture to the mouse or finger location every time the screen refreshes, which forces the user to move the mouse to any area they're interested in. Over time we can build up an attentional map, analogous to those we make with eye trackers. And to drive this point home, here's an example of maps we produced using eye trackers on the top, and then the equivalent map using just MouseView.js; even though this is collapsed over time, you can see that they're pretty similar topologically.
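The record-and-iterate loop described above ultimately yields a stream of cursor samples, and the attentional map is just those samples binned over the screen. Here is a minimal sketch of that binning step; the function name and sample format are my own for illustration, not part of the MouseView.js API:

```javascript
// Bin a stream of recorded cursor samples into a coarse attention map.
// Each grid cell counts how many samples fell inside it; dividing by the
// total gives dwell proportions, analogous to an eye-tracking heat map.
function accumulateAttentionMap(samples, screenW, screenH, gridW, gridH) {
  const map = Array.from({ length: gridH }, () => new Array(gridW).fill(0));
  for (const { x, y } of samples) {
    // Skip samples recorded while the cursor was off-screen.
    if (x < 0 || y < 0 || x >= screenW || y >= screenH) continue;
    const col = Math.floor((x / screenW) * gridW);
    const row = Math.floor((y / screenH) * gridH);
    map[row][col] += 1;
  }
  return map;
}

// Example: three samples on a 100x100 screen, binned into a 2x2 grid.
const samples = [{ x: 10, y: 10 }, { x: 20, y: 5 }, { x: 90, y: 90 }];
const map = accumulateAttentionMap(samples, 100, 100, 2, 2);
// map is [[2, 0], [0, 1]]: two samples top-left, one bottom-right.
```

In practice you would also weight each cell by inter-sample time, and smooth the grid with a Gaussian kernel to get the heat-map look shown in the talk.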
Alex Anwyl-Irvine:
And this is what MouseView looks like in practice. So the end result is here: you can see the user exploring the picture with an aperture, revealing the face of the dog and the cat. And in a second, you'll see a heat map which reveals the attended locations of this image. So that's essentially how it works.
Alex Anwyl-Irvine:
In addition to this base functionality, MouseView gives you lots of configuration options. Firstly, you can vary aspects of the overlay that covers the screen. You can alter the transparency; here it's going from about 50% to 0% transparent. You can change the color to anything you want. And a big strength is the ability to blur anything underneath that overlay, even at a transparency of zero. This allows participants to see the low-level features of the whole scene and use them to guide their attention, and it simulates something like the non-foveated area of our visual field.
Alex Anwyl-Irvine:
However, this is an experimental feature and can take one to four seconds to initiate, so it isn't appropriate for things like videos. We do provide a callback function which allows you to hide the contents of the website or experiment from view while it generates, but platforms like the Gorilla Zone actually already do this for you, so you don't really need to worry. The second part of the configuration is the aperture: you can alter its size from very small to very big, and you can also express it as a percentage of the page width and height. And importantly, you can also apply a Gaussian blur to the edge, which softens the aperture and again simulates some elements of foveated vision, the edge of that foveated blur.
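Pulling those options together, a settings sketch might look like the following. The property names here are assumptions made for illustration, not the exact MouseView.js API; check the documentation at mouseview.org for the real parameter names:

```javascript
// Illustrative settings object mirroring the options discussed above.
// NOTE: these property names are assumptions for this sketch, not the
// verified MouseView.js API -- consult the docs at mouseview.org.
const mouseviewSettings = {
  overlayColor: '#000000',  // overlay color (any CSS color value)
  overlayAlpha: 0.9,        // 0 = fully transparent, 1 = fully opaque
  overlayBlur: 20,          // blur SD in px; experimental, 1-4 s to initiate
  apertureSize: '5%',       // aperture size as a percentage of page width
  apertureEdgeBlur: 10,     // Gaussian blur on the aperture edge, in px
};

// Applying them would then look something like this (hypothetical calls):
// Object.assign(mouseview.params, mouseviewSettings);
// mouseview.init();
// mouseview.startTracking();
```

The point is simply that every visual property mentioned in the talk (color, alpha, under-overlay blur, aperture size, edge softening) is a single configurable value rather than something you build yourself.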
Alex Anwyl-Irvine:
So it's all well and good me telling you about this tool and its settings, but we wanted to find out how well it could replicate eye-tracking results. I'm going to give a brief outline here, because others are going to talk more about this later, but I'll show you the basis of our validation paper.
Alex Anwyl-Irvine:
So our empirical question was based on a paper published by Tom and Edwin previously, which was a passive-viewing eye-tracking task using an in-lab eye tracker. In this task, people viewed a series of pairs of images for 10 to 20 seconds, and these were either a neutral image paired with a pleasant stimulus or a neutral image paired with a disgusting stimulus. I've used emojis here to avoid having to show you the images, but I promise you're not missing out on anything because they're quite disgusting. You might see some of them later, though, so that's exciting.
Alex Anwyl-Irvine:
So when we look at the lab eye-tracking results for these two types of trials, we find this pattern: an early involuntary look towards the evocative picture, regardless of what it is, and then a later, voluntary bias. This bias goes in the direction of the pleasant image, so people are looking at the thing they're interested in, which is quite nice, and it's flipped for the unpleasant or disgusting images.
Alex Anwyl-Irvine:
And when we used MouseView to replicate this experiment, we found this pattern. We found no involuntary look, but we suspect this may be because mouse tracking only captures top-down attentional control, and I'm sure Tom will touch on this later. We did replicate the later component, the voluntary bias: people dwelt on the pleasant image but avoided the disgusting image quite strongly, and the avoidance strengthened over time, which was also the case in the lab eye-tracking study. We found that this element was reliable and valid, but you can read more about that in the preprint.
Alex Anwyl-Irvine:
Okay, so like I said, we're going to get more information on what people have done with this tool later, but I'm also here to talk about how to get experiments working yourself. Thanks to the open-source and general-purpose nature of this tool, there are many options available to you, ranging from full-on app development to no programming at all, which is what this represents. The [inaudible 00:07:45] JS version can be included in any website, and we have a Node package manager repository and a GitHub set up; I've tried to make this as straightforward for developers to use as I can, but to do that you do need some coding experience. In the middle, between coding and no coding, you've got experiment builders like PsychoJS, PsychoPy and jsPsych; you still need to do a bit of programming to understand these. Gorilla is more towards the no-coding end, so it's appropriate for a small tutorial, whenever you've got 15 minutes.
Alex Anwyl-Irvine:
So in Gorilla, you can use the code implementation, which allows you to customize things a bit more, or you can use their Task Builder, and they've recently created a drag-and-drop Zone that involves absolutely no coding at all. That's what I'm going to show you how to use today.
Alex Anwyl-Irvine:
So we have a Gorilla Zone. This is an example of a simple trial, identical to the type of free-viewing experiment I spoke about earlier, and these are the Gorilla screens in the task. You have three screens: a fixation cross with a timer, to give a delay between each trial; then a screen with a button, which is very important because it ensures that when they press it, the cursor or finger is at the center of the screen and they're not seeing one image more than the other; and finally the stimuli themselves, side by side, showing for 20 seconds. It's this last screen that we're going to put our Gorilla Zone into.
Alex Anwyl-Irvine:
You select this viewing screen, and it brings up this layout, which shows the left image, the right image, and this little timing widget. You press the edit layout button and it shows the layout of the screen with these gray boxes. We want to add a new zone, so we press that button, which creates one. We then select it with the mouse, which brings up this menu, and we go to the advanced tab; if you're on the beta program, you'll see that little MouseView box there. That's what we press to add it in, and then it's added.
Alex Anwyl-Irvine:
And you can see it here in the center. This will have been added with our default settings, which you can change by finding the correct section in the configuration settings. So you scroll down; here are the mouse-tracking settings. Specifically, in the mode setting, 'record', which I'll use for this purpose, just records the mouse coordinates and applies the MouseView overlay on this screen. You can use 'upload', which in combination with this setting here allows you to apply MouseView to your whole experiment, so you're continuously tracking over multiple screens.
Alex Anwyl-Irvine:
This then has a dropdown box with the MouseView-specific settings, containing the things I spoke about earlier: the overlay color; the alpha, which deals with transparency; the Gaussian blur standard deviation in pixels; and the aperture size, and these are all configurable. Now, once we've done that, we're ready to run the example experiment. So play this. When you preview the task and launch it, you receive the instructions; when you press next, the button appears; you click on it, and here's our blur with the aperture underneath.
Alex Anwyl-Irvine:
And it really is as simple as that. This data is being recorded and you'll be able to download it in a spreadsheet that has all of the coordinates. I don't have time to tell you how to analyze that, but hopefully we'll be releasing some code on our website to help people do that.
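As a flavour of what that analysis could look like, a crude left/right dwell bias, like the voluntary bias in the validation study, can be computed from the downloaded coordinates along these lines. This is my own sketch, not the analysis code promised above, and the row format is a hypothetical simplification of the spreadsheet:

```javascript
// Compute a left/right dwell bias from rows of { x } cursor samples.
// Returns a score from -1 (all samples on the left half of the screen)
// to +1 (all on the right half); 0 means no preference.
function dwellBias(samples, screenW) {
  let left = 0, right = 0;
  for (const { x } of samples) {
    if (x < screenW / 2) left += 1;
    else right += 1;
  }
  const total = left + right;
  return total === 0 ? 0 : (right - left) / total;
}

const rows = [{ x: 100 }, { x: 120 }, { x: 900 }, { x: 950 }, { x: 980 }];
// On a 1024px-wide screen: 2 samples left, 3 right -> bias = (3 - 2) / 5 = 0.2
```

A fuller analysis would weight samples by the time between them rather than counting them, and restrict the bias to the two image regions rather than screen halves.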
Alex Anwyl-Irvine:
Okay, so I think that's about it, and thanks for listening to me. You can visit mouseview.org, which has detailed documentation, links to our preprint, and examples of the various different levels of implementation I spoke about. The Zone on Gorilla is in closed beta, so you can get it added to your account by contacting them using a form. I'd suggest doing this if you just want to play around with different experiment designs; it's free to make an experiment, so it's a pretty cool way of working out what you can do. And for updates in the future, follow me on Twitter, that's a shameless promotion, and also follow anyone in this seminar, because I think we'll all be doing things in the future.
Alex Anwyl-Irvine:
I'd like to thank Edwin and Tom, who have been great at making this project happen, but also Will at Gorilla, who led programming of the drag-and-drop zone, and also Thomas Pronk and Rebecca Hirst at PsychoPy, for implementing it there and making our examples. Okay. Thanks.


