Ryan Morehead, University of Leeds
@rmhead
Full Transcript:
I’m Ryan Morehead. I’m a co-director of the Immersive Cognition Lab here at the University of Leeds. I’m not sure how choppy this video is for you guys, but I wanted to put it on here because it’s somebody playing a first-person shooter aim trainer on their personal computer at home. It kind of represents the best you can get in terms of skilled motor behavior with a mouse and keyboard and a home PC. This is something that, over time, I think we want to try to get to with browser-based experiments. I come from a background of human computational motor control, and JP just talked a little bit about the kind of equipment that we use. In particular, we use equipment that has high spatial and temporal fidelity. So it’s often sampling the position of your limb at 200 Hertz, or even up to a thousand Hertz, and you’re getting visual feedback as fast as 240 Hertz.
The situation on the internet is a little bit different. Now, in the ICON lab here at Leeds we are using Unity WebGL, hosting it on Amazon Web Services, and using other software such as C# and JavaScript to run the experiments that we’re doing. But we have the fundamental limitation that all internet-based research has, which is that we don’t know the equipment people are using, and it can actually be pretty poor equipment. Often that means the display is at best refreshing at 60 Hertz, though occasionally up to 240 Hertz. And the hand tracking that we use, whether mouse movements or keyboard presses, is maybe 60 Hertz and potentially up to a thousand Hertz. That means two things: one is that this is often less precise than what we’re used to, but also that it’s highly variable from person to person.
If you’re focusing just on keyboard tasks, which is something that a postdoc in our lab, Emily Williams, is doing, then this primarily just affects the sampling interval at which you can detect a key press. So 60 Hertz means you’re going to get a data point about every 16.67 milliseconds, and that’s also when you can present information back to the participant. Another element of keyboards online is n-key rollover. You can actually detect whether somebody has a high-performance gaming keyboard that registers six or eight or however many simultaneous key presses, and so in some cases you can detect really relevant key presses that are errors people are making.
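The 60 Hz figure above can be sketched numerically. This is a minimal illustration, not the lab's actual code; the model simply assumes a key event only becomes observable at the next polling boundary:

```python
import math

def sample_interval_ms(rate_hz: float) -> float:
    """Time between observable samples at a given polling/refresh rate."""
    return 1000.0 / rate_hz

def observed_press_ms(true_time_ms: float, rate_hz: float) -> float:
    """Earliest polling boundary at or after the true press time --
    the best-case timestamp a 60 Hz browser loop could report."""
    dt = sample_interval_ms(rate_hz)
    return math.ceil(true_time_ms / dt) * dt
```

At 60 Hz the interval is roughly 16.67 ms, so a press at 110 ms is not seen until about 116.67 ms, and any presentation of feedback is quantized to the same grid.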
That’s pretty much the case for keyboards, but for mouse movements, it’s a little bit more complicated, and I’ll talk about several different elements of this during the talk. For your sort of vanilla experiments, just reaching from a start location to a target, such as the kind that Matthew Warburton, a PhD student in the lab, is doing, this means that participants might be using an optical mouse, or they might be using a trackpad, and I think it’s important to keep in mind that these are different biomechanically and also in terms of the mechanisms of how they work. While we were preparing for Matthew’s experiment, we got some data from Anisha Chandy and Jonathan Tsay at the Ivry Lab at UC Berkeley and analyzed the variability in reach direction and also the accuracy. What I’m showing you here is the success for different target directions, for 24 different targets around 360 degrees, both when you have visual feedback of where you’re moving and when you don’t have visual feedback, shown in red. You can clearly see that it’s different across directions, but also different across the two devices.
So that’s something you want to keep in mind when you’re running experiments like this: you may find differences based on the equipment the participant is using. For the rest of the talk I’m going to focus on a specific task, which is an interceptive timing task that John Pickavance, a PhD student, is doing. The task he had at the beginning of quarantines and lockdowns was to translate this lab-based experiment, with a high refresh rate on both the input and output of his equipment, into an online task. Typically in interceptive timing tasks, there’s a little target moving across the screen, and you’re trying to intercept it with a cursor that you can only move in one dimension, which you do by sliding a handle along a rail.
John developed this task for use with children in schools, so he kind of gamified it: instead of just a black block, he used an unidentified fruit object that moves across the screen on a fixed path. Then there’s a fruit bat down in a cave that you have to fly out to try to intercept the fruit, and you want to do this within 100 to 300 milliseconds. An important thing to point out here is that you can move your mouse laterally, but the bat itself will only move vertically, so it’s only moving along the vertical path for this task. And hopefully, let’s see, I think I have to play this one. Hopefully you guys can see this video; let it loop through a few times. This is what the task looks like. You just move the bat out and try to hit the target, and if you do hit it, you get this kind of gaudy splat popped up on the screen and a tone plays.
An important thing to keep in mind during this, actually, sorry, we collect this data on Amazon and Prolific. But an important thing to keep in mind is that the equipment we’re using to record this data in the first place has limits. JP was trying to see, if you have people making movements, and one person wants to make a big movement and another person makes a small movement, or they have differences in gain on their mouse, or differences in the quality of their mouse, how is that going to affect things? So he used some equipment to measure whether, if you make mouse movements of increasing velocities, your mouse will be able to track them. And it turns out that for all mice, including very high-end gaming mice, if you start to move a little bit faster than a meter per second, they start to bug out and give you really bad data.
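The roughly one-meter-per-second tracking ceiling can be checked against recorded traces with a quick sketch like this. The DPI value and the sample format are assumptions for illustration, not details of JP's actual measurement setup:

```python
INCHES_TO_M = 0.0254

def speeds_m_per_s(counts, times_s, dpi=800):
    """Convert successive 1-D mouse positions (in sensor counts) into
    physical speeds, given the sensor's counts-per-inch (dpi assumed)."""
    speeds = []
    for i in range(1, len(counts)):
        metres = abs(counts[i] - counts[i - 1]) / dpi * INCHES_TO_M
        speeds.append(metres / (times_s[i] - times_s[i - 1]))
    return speeds

def exceeds_tracking_limit(counts, times_s, dpi=800, limit_m_s=1.0):
    """Flag a movement whose speed passes the ~1 m/s range where
    optical sensors start to produce bad data."""
    return any(v > limit_m_s for v in speeds_m_per_s(counts, times_s, dpi))
```

For example, 800 counts in 10 ms on an 800-DPI sensor is one inch in 10 ms, about 2.54 m/s, which would be flagged as untrackable.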
What this led us to do is actually introduce a screen at the beginning of his task, where we have people try to move the little bat from a start position to a finish in one little movement, indicated by a GIF on the screen showing what they’re trying to do, whether they’re using a trackpad or an optical mouse. This allows us to kind of standardize and ameliorate the differences across participants’ computers. Now, what JP, or John Pickavance, wanted to look at with this task is not just interceptive timing, but actually stopping yourself from moving in the context of an interception task. So he had a subset of trials where the screen’s background changed color, indicating that it was dawn. When it’s dawn, the screen may stay like this, stay orange, and if it stays orange then you’re free to move out and intercept the target the same way you would if it’s nighttime. But on a subset of trials, the screen will actually change color.
And hopefully it’ll not be too choppy for you guys. The screen will actually change color, and if you move the bat outside of the cave when it’s daytime, you’ll actually get sunburned and lose the trial. So these are stop trials, or no-go trials: you don’t actually want to move on them. The fact that the screen changes color indicates whether this is a trial where you’re certain you can move freely and intercept the target, or a trial where you don’t know whether it’s going to be safe to go or not. These are allocated in the experiment so that 50% are certain trials and 50% are uncertain, randomly interleaved, and out of every block of uncertain trials, 5 of the 15 are stop trials. So 33% of your uncertain trials are stop trials, which is actually really important for these designs. We have eight blocks of trials, and for most of the data I’m going to show you, we’re showing 52 people that we collected off of Prolific.
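That allocation (50/50 certain/uncertain, 5 stop trials per 15 uncertain, randomly interleaved, eight blocks) can be sketched as a schedule generator. The count of 15 certain trials per block is an assumption to make the 50/50 split concrete; the talk only states the ratios:

```python
import random

def make_block(n_certain=15, n_uncertain=15, n_stop=5, rng=None):
    """One block: certain go trials plus uncertain trials, 5 of every
    15 uncertain trials being stop trials, randomly interleaved."""
    rng = rng or random.Random()
    trials = (["certain"] * n_certain
              + ["uncertain-stop"] * n_stop
              + ["uncertain-go"] * (n_uncertain - n_stop))
    rng.shuffle(trials)
    return trials

def make_schedule(n_blocks=8, seed=0):
    """Eight blocks, as in the design described in the talk."""
    rng = random.Random(seed)
    return [make_block(rng=rng) for _ in range(n_blocks)]
```

Seeding the generator makes the interleaving reproducible across participants while keeping the within-block order random.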
Importantly, for this task to work as a Stop Signal Reaction Time task, there are some basic things we need to make sure are going on, namely that this screen change I’m introducing, which tells you to stop moving and not move out of the cave, is actually challenging for you. So what we do is, during the trial, the target appears and starts to move. There’s a time when we think you should start moving, and initially we present the stop signal right about when we think you want to start moving. Then we staircase the stop signal back and forth to determine a time where it’s presented before you would need to start moving, such that you can only stop yourself on 50% of those trials.
So we’re always staircasing this for each individual: on 50% of their trials they’re stopping in time when the stop signal is presented, and on 50% they’re failing. Then what we do is find the actual reaction time: how long before you were going to start moving are you able to stop yourself? What we find in this task is that it’s about 200 milliseconds, which is consistent with other stop signal tasks. So that’s good; we’re kind of meeting the criteria there. However, what we’re really interested in here is proactive stopping, measured by whether, when you know it’s an uncertain trial, you’re actually doing something different on those trials than what you would do on the certain trials. We have a few different measures of this. Importantly, we’re only looking at trials where you actually made a movement.
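The staircasing and the ~200 ms estimate follow standard stop-signal logic, which can be sketched like this. The step size is a hypothetical value, and the "mean method" shown is the textbook SSRT estimator rather than necessarily the exact procedure John used:

```python
def next_stop_signal_delay(ssd_s, stopped, step_s=0.025):
    """1-up-1-down staircase: after a successful stop, present the next
    stop signal later (harder); after a failed stop, earlier (easier).
    This converges on the delay where people stop on ~50% of trials."""
    return ssd_s + step_s if stopped else ssd_s - step_s

def ssrt_mean_method(go_rts_s, ssds_s):
    """Stop Signal Reaction Time via the mean method:
    mean go reaction time minus mean converged stop-signal delay."""
    return sum(go_rts_s) / len(go_rts_s) - sum(ssds_s) / len(ssds_s)
```

With go reaction times averaging 0.5 s and staircased delays averaging 0.3 s, this estimator would give an SSRT of about 0.2 s, in line with the roughly 200 ms figure above.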
So we’re comparing uncertain trials where you moved to certain trials where you moved. One of the measures here is Movement Time, and what you can see is that there’s a clear difference between the certain and uncertain Movement Times: people are moving faster on the uncertain trials. Also the Initiation Time, when people start to move, is faster on uncertain trials, and for both of these, they get faster over the blocks. Then for another measure called Timing Error, we see a difference here as well, where people are actually later on the uncertain trials. To make this Timing Error a little bit more intuitive or palpable for you guys, it’s kind of tantamount to where you’re hitting the target with the bat. So here, on these graphs at the bottom, I’m plotting where people hit on the actual fruit UFO on certain trials, and you can see that they mostly hit the front right corner, which is actually the ideal spot to hit to maximize your success.
On the uncertain trials, they start out hitting there initially and sort of creep back over time as the experiment goes on, which you can see over here. For both of these measures, and sorry, these are all significant, for both Initiation Time and Timing Error, we were interested in whether people were doing this consciously or whether it was just something that emerged out of the task. So what we did is put a task at the end of the experiment where we had people watch a video of the UFO going by, and we told them to note when they would have tried to start moving during the actual task. Afterwards we let them position the UFO at that location, and you can see that they’re clearly putting the UFO on the certain background earlier than on the uncertain one.
So they’re aware that they’re initiating later. We also had them click where they were trying to hit the UFO with the bat on certain versus uncertain trials, and we found a similar thing: they indicated further back on the UFO for where they were trying to hit. We think this is important because it means these proactive measures are not just something implicitly emerging through some unconscious learning process; this is something they actively know they’re doing. There’s one final point I want to make here, which is actually a methodological point, and it has to do with lag, because we know there are differences across people’s computers. What I’m plotting on these plots is the intended position of the target, so where the target is supposed to be, given the amount of time that’s elapsed in the trial, versus where we actually record the target to be.
For some trials, there’s very little difference between these two things. But on some people’s computers, their computer chugs a little bit, drops a frame, and they have some trouble with the target appearing where it’s supposed to be. Those are the laggy trials highlighted here, and this is something that actually affects some of our measures. Here on the left is the proportion of times they hit the target on certain trials, and you can see there’s about a 10% reduction in performance when people have a lot of laggy trials, and they also have longer Stop Signal Reaction Times due to the same thing. John noticed that this seemed to be disproportionately affecting people who were using a trackpad, so he added a little screen to the beginning of the task: if someone indicated that they had a trackpad, it told them to go plug in their laptop before they started the task.
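The intended-versus-recorded comparison can be reduced to a per-trial lag score along these lines; the threshold is a hypothetical cut-off for illustration, not the criterion John actually used:

```python
def mean_position_error(intended, recorded):
    """Mean absolute gap between where the target should be, given the
    elapsed time in the trial, and where it was actually drawn
    (both position traces in the same screen units)."""
    if len(intended) != len(recorded):
        raise ValueError("traces must be the same length")
    return sum(abs(a - b) for a, b in zip(intended, recorded)) / len(intended)

def is_laggy_trial(intended, recorded, threshold=5.0):
    """Flag trials where dropped frames pushed the drawn target far
    from its intended path (threshold is an assumed cut-off)."""
    return mean_position_error(intended, recorded) > threshold
```

Scoring each trial this way lets you count laggy trials per participant and relate that count to hit rate and SSRT, as in the plots described above.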
In an initial pilot with this, he had to throw out over half the participants, 11 of which were for low hit percentage. After adding in this little screen, he has to throw out far fewer for bad performance. However, you still see that people using a trackpad have more laggy trials than people using an optical mouse; in general, laptops aren’t as high-performing as a desktop PC. So for a quick summary here: when you’re getting data from either an optical mouse or a trackpad, you need to take into consideration that these are biomechanically different and may result in different success rates for reaches in different directions.
There are upper bounds on how fast you can move a mouse, and that can affect the movements you can record, so you should try to ameliorate that if you can. And lag itself, because of variable equipment across participants, can affect the performance you see in these tasks. However, we can still design tasks that get the standard effects we see in the field, and also find some interesting new findings with these techniques. So my general thought here is that online experiments are cool. I’d like to thank everybody in the lab who contributed to this. So thank you.


