Ryan Morehead, University of Leeds
@rmhead
Full Transcript:
I’m Ryan Morehead. I’m a co-director of the Immersive Cognition Lab here at the University of Leeds. I’m not sure how choppy this video is for you guys, but I wanted to put it on here because it’s somebody playing a first-person shooter aim trainer on their personal computer at home. It kind of represents the best that you can get in terms of skilled motor behavior with a mouse and keyboard and a home PC. This is something that, over time, I think we want to try to get to with browser-based experiments. So I come from a background in human computational motor control, and JP just talked a little bit about the kind of equipment that we use. In particular, we use equipment that has high spatial and temporal fidelity, so it’s often sampling the position of your limb at 200 Hertz, or even up to a thousand Hertz, and you’re getting visual feedback as fast as 240 Hertz.
The situation on the internet is a little bit different. Now, in the ICON lab here at Leeds we are using Unity WebGL, hosting it on Amazon Web Services, and using other software such as C# and JavaScript to run our experiments. But we have the fundamental limitation that all internet-based research has, which is that we don’t know the equipment people are using, and it can actually be pretty poor equipment. Often that means the display is at best refreshing at 60 Hertz, though occasionally up to 240 Hertz. And the hand tracking that we use, whether mouse movements or keyboard presses, is maybe 60 Hertz and potentially up to a thousand Hertz. So that means two things: one is that this is often less precise than what we’re used to, but also that it’s highly variable from person to person.
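Because we can’t know the refresh rate in advance, one practical step is to measure it. Here’s a minimal sketch of how you might estimate a participant’s effective frame interval inside a Unity WebGL build; the class and member names are my own, not taken from our actual codebase.

```csharp
using UnityEngine;

// Hypothetical sketch: estimate the participant's effective frame interval
// by averaging Time.deltaTime over a warm-up period. On a 60 Hz display
// this converges near 16.67 ms; dropped frames inflate the estimate.
public class FrameIntervalProbe : MonoBehaviour
{
    private float _sum;
    private int _frames;

    void Update()
    {
        _sum += Time.deltaTime;
        _frames++;
    }

    // Mean frame interval in milliseconds so far.
    public float MeanIntervalMs => _frames > 0 ? 1000f * _sum / _frames : 0f;
}
```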
If you’re focusing just on keyboarding tasks, which is something that a postdoc in our lab, Emily Williams, is doing, then this primarily just affects the sampling interval at which you can detect a key press. So 60 Hertz means you’re going to get a data point about every 16.67 milliseconds, and that’s also when you can present information back to the participant. Another element of keyboards online is n-key rollover. You can actually tell if somebody has a high-performance gaming keyboard that can detect six or eight or however many simultaneous key presses, and so in some cases you can detect really relevant key presses, such as errors that people are making.
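To make that concrete, here’s a hedged sketch of per-frame key logging in Unity’s legacy input system: because input is polled once per frame, two presses within the same ~16.67 ms frame share a timestamp, and whether you see them all depends on the keyboard’s rollover. The class name is hypothetical.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: log every key-down with a timestamp, once per frame.
// Presses landing inside the same frame arrive with the same timestamp;
// keyboards with n-key rollover report them all, cheaper keyboards may
// silently drop some simultaneous presses.
public class KeyLogger : MonoBehaviour
{
    private readonly List<(KeyCode key, float t)> _log = new List<(KeyCode, float)>();

    void Update()
    {
        foreach (KeyCode key in System.Enum.GetValues(typeof(KeyCode)))
        {
            if (Input.GetKeyDown(key))
                _log.Add((key, Time.unscaledTime)); // same t for same-frame presses
        }
    }
}
```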
That’s pretty much the case for keyboards, but for mouse movements it’s a little bit more complicated, and I’ll talk about several different elements of this during the talk. For your sort of vanilla reaching experiments, moving from a start location to a target, such as the kind that Matthew Warburton, a PhD student in the lab, is doing, this means that participants might be using an optical mouse or they might be using a trackpad, and I think it’s important to keep in mind that these are different biomechanically and also in terms of the mechanisms of how they work. While we were preparing for Matthew’s experiment, we got some data from Anisha Chandy and Jonathan Tsay at the Ivry Lab at UC Berkeley and analyzed the variability in reach direction and also the accuracy. What I’m showing you here is the success for 24 different target directions around 360 degrees, both when you have visual feedback of where you’re moving to, and when you don’t have visual feedback, shown in red. And you can clearly see that it’s different across directions, but also different across the two devices.
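As a rough illustration of the analysis just described, the sketch below tallies success rate per target direction separately for trackpad and optical mouse users; the data structure and field names are assumptions of mine, not the lab’s actual pipeline.

```csharp
using System.Collections.Generic;

// Hypothetical analysis sketch: success rate per target direction
// (24 targets spaced 15 degrees apart), split by pointing device.
public struct Reach
{
    public int TargetIndex;    // 0..23, one per 15-degree direction
    public bool Hit;           // did the reach land on the target?
    public bool UsedTrackpad;  // trackpad vs optical mouse
}

public static class ReachStats
{
    public static double[] SuccessByDirection(IEnumerable<Reach> reaches, bool trackpad)
    {
        var hits = new int[24];
        var counts = new int[24];
        foreach (var r in reaches)
        {
            if (r.UsedTrackpad != trackpad) continue;
            counts[r.TargetIndex]++;
            if (r.Hit) hits[r.TargetIndex]++;
        }
        var rate = new double[24];
        for (int i = 0; i < 24; i++)
            rate[i] = counts[i] > 0 ? (double)hits[i] / counts[i] : double.NaN;
        return rate;
    }
}
```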
And so that’s something you want to keep in mind when you’re running experiments like this: you may find differences based on the equipment the participant is using. For the rest of the talk I’m going to focus on a specific task, an interceptive timing task that John Pickavance, a PhD student, is doing. The task he had at the beginning of the quarantines and lockdowns was to translate a lab-based experiment, with a high refresh rate on both the input and output of his equipment, into an online task. Typically in interceptive timing tasks there’s a little target moving across the screen, and you’re trying to intercept it with a cursor that you can only move in one dimension, which you do by sliding a handle along a rail.
So John developed this task for use with children in schools, and he kind of gamified it: instead of just a black block, he used an unidentified fruit object, a fruit UFO, that moves across the screen on a fixed path. Then there’s a fruit bat down in a cave that you have to fly out to try to intercept the fruit, and you want to try to do this within 100 to 300 milliseconds. An important thing to point out here is that you can move your mouse laterally, but the bat itself will only move vertically; it’s only moving along the vertical path for this task. And hopefully, let’s see, I think I have to play this one. Hopefully you guys can see this video; let it loop through a few times. This is what the task looks like. You just move the bat out, try to hit the target, and if you do hit it, you get this kind of gaudy splat popped up on the screen and a tone plays.
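A minimal sketch of that one-dimensional mapping in Unity might look like the following, assuming the legacy input system: only the vertical component of the mouse movement drives the bat, and the names and gain handling here are my own assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch of the one-dimensional mapping: the mouse can move in
// any direction, but only its vertical displacement drives the bat, which
// slides along a fixed vertical rail. Lateral mouse motion is ignored.
public class RailCursor : MonoBehaviour
{
    public float gain = 1f;   // scaling from mouse units to world units
    public float minY, maxY;  // ends of the rail (cave mouth to screen top)

    void Update()
    {
        float dy = Input.GetAxisRaw("Mouse Y") * gain; // "Mouse X" never read
        float y = Mathf.Clamp(transform.position.y + dy, minY, maxY);
        transform.position = new Vector3(transform.position.x, y, transform.position.z);
    }
}
```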
An important thing to keep in mind, and sorry, we collect this data on Amazon Mechanical Turk and Prolific, is that the equipment we’re using to record this data in the first place has limits. So JP was trying to see, across different people, where one person wants to make a big movement and another person makes a small movement, or they have differences in gain on their mouse, or differences in quality of mouse, how is that going to affect things? He used some equipment to measure whether, if you make mouse movements of increasing velocities, your mouse will be able to track them. And it turns out that for all mice, including very high-end gaming mice, if you start to move a little bit faster than a meter per second, they start to bug out and give you really bad data.
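One thing you could do with that knowledge is flag frames where the implied hand speed exceeds a plausibility ceiling. The sketch below assumes a counts-per-metre conversion based on a guessed mouse DPI and Unity’s default axis sensitivity, neither of which you can know in the browser, so treat the constants as placeholders rather than our actual values.

```csharp
using UnityEngine;

// Hypothetical sketch: flag frames where implied hand speed exceeds ~1 m/s,
// past which even good optical sensors start to report bad data. The
// counts-per-metre value assumes a ~1000 DPI mouse and default mouse-axis
// sensitivity, both of which are unknowable for a remote participant.
public class VelocityFlagger : MonoBehaviour
{
    public float countsPerMetre = 40000f;   // assumed calibration constant
    public float ceilingMetresPerSec = 1f;  // malfunction threshold from the talk

    void Update()
    {
        float counts = new Vector2(Input.GetAxisRaw("Mouse X"),
                                   Input.GetAxisRaw("Mouse Y")).magnitude;
        float speed = counts / countsPerMetre / Time.deltaTime;
        if (speed > ceilingMetresPerSec)
            Debug.LogWarning($"Possible sensor malfunction: {speed:F2} m/s");
    }
}
```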
What this led us to do is introduce a screen at the beginning of the task, where we have people try to move the little bat from a start position to a finish with one little movement, indicated by a GIF on the screen showing what they’re trying to do, whether they’re using a trackpad or an optical mouse. This allows us to kind of standardize and ameliorate the differences across participants’ computers. So what JP, or John Pickavance, wanted to look at with this task is not just interceptive timing, but actually stopping yourself from moving in the context of an interception task. He had a subset of trials where the screen’s background changed colors, indicating that it was dawn. When it’s dawn, the screen may stay orange, and if it stays orange then you’re free to move out and intercept the target the same way you would at nighttime. But on a subset of trials, the screen will actually change color.
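Here’s a hedged sketch of how that calibration movement could be turned into a per-participant gain: accumulate the mouse units the standard movement took, then scale so the same physical movement covers the same screen distance for everyone. The names and the exact scheme are my assumptions, not John’s implementation.

```csharp
using UnityEngine;

// Hypothetical sketch of the calibration screen: the participant makes one
// practice movement from a start marker to a finish marker, we accumulate
// how many mouse units that took, and scale the task's gain accordingly,
// whether they use a trackpad or an optical mouse.
public class GainCalibration : MonoBehaviour
{
    public float targetScreenDistance = 5f; // world units between start and finish
    private float _accumulated;

    void Update()
    {
        _accumulated += Mathf.Abs(Input.GetAxisRaw("Mouse Y"));
    }

    // Call when the practice movement ends; use the result as the task gain.
    public float ComputeGain() => targetScreenDistance / Mathf.Max(_accumulated, 1e-6f);
}
```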
And hopefully it’ll not be too choppy for you guys. The screen will actually change color, and if you move the bat outside of the cave when it’s daytime, you’ll actually get sunburned and lose the trial. So these are stop trials, or no-go trials: you don’t actually want to move on them. The fact that the screen changes color indicates whether this is a trial where you’re certain that you can move freely and intercept the target, or a trial where you don’t know whether it’s going to be safe to go or not. These are randomly interleaved in the experiment: 50% are certain trials and 50% are uncertain, and out of the 15 uncertain trials in every block, 5 are stop trials. So 33% of your uncertain trials are stop trials, which is actually really important for these designs. Then we have eight blocks of trials. And for all the data I’m going to show you, for the most part, we’re showing 52 people that we collected off of Prolific.
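Laid out as code, one block of that design might be generated like this; the counts follow the numbers above (15 certain, 10 uncertain-go, 5 uncertain-stop, randomly interleaved across the block), while the naming is mine.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of one block's trial schedule: 15 certain trials and
// 15 uncertain trials, 5 of which are stop trials, randomly interleaved.
// Eight such blocks make up the experiment.
public enum TrialType { Certain, UncertainGo, UncertainStop }

public static class Schedule
{
    public static List<TrialType> BuildBlock(Random rng)
    {
        var block = new List<TrialType>();
        for (int i = 0; i < 15; i++) block.Add(TrialType.Certain);
        for (int i = 0; i < 10; i++) block.Add(TrialType.UncertainGo);
        for (int i = 0; i < 5; i++) block.Add(TrialType.UncertainStop);

        // Fisher-Yates shuffle to interleave trial types randomly.
        for (int i = block.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            (block[i], block[j]) = (block[j], block[i]);
        }
        return block;
    }
}
```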
Importantly, for this to work as a Stop Signal Reaction Time task, there are some basic things we need to make sure are going on, namely that this screen change telling you to stop moving and not move out of the cave is actually challenging for you. So what we do is, during the trial, the target appears and starts to move, and there’s a time when we think you should start moving. We initially present the stop signal right about when we think you want to start moving, and then we staircase the stop signal back and forth to find a time, before you would need to start moving, at which you can only stop yourself on 50% of those trials.
So we’re always staircasing this for each individual: on 50% of their trials they’re making it when the stop signal is presented, and on 50% they’re failing. Then what we do is find the actual Stop Signal Reaction Time: how long before you were going to start moving are you able to stop yourself? What we find in this task is about 200 milliseconds, which is consistent with other stop signal tasks, so that’s good; we’re kind of meeting the criteria there. However, what we’re really interested in here is proactive stopping: measures where you know it’s an uncertain trial and you’re actually doing something different on these uncertain trials than what you would do on the certain trials. We have a few different measures of this. Importantly, we’re only looking at trials where you actually made a movement.
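For reference, a standard one-up-one-down staircase on the stop-signal delay, plus the usual mean-method SSRT estimate, can be sketched like this; the step size and estimation convention are common choices in the stop-signal literature, not necessarily the exact ones John used.

```csharp
// Hypothetical sketch of the staircasing and SSRT estimate, using the
// standard one-up-one-down rule; step size and names are my assumptions.
public class StopSignalStaircase
{
    public float SsdMs = 0f;   // stop-signal delay relative to expected go time
    public float StepMs = 25f; // assumed staircase step

    public void Record(bool stoppedSuccessfully)
    {
        // Successful stop: make it harder by presenting the signal later;
        // failed stop: make it easier by presenting it earlier.
        SsdMs += stoppedSuccessfully ? StepMs : -StepMs;
    }

    // Mean method: SSRT = mean go reaction time minus the stop-signal delay
    // that yields 50% successful stopping (what the staircase converges to).
    public static float EstimateSsrtMs(float meanGoRtMs, float convergedSsdMs)
        => meanGoRtMs - convergedSsdMs;
}
```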
So we’re comparing uncertain trials where you moved to certain trials where you moved. One of the measures here is Movement Time, and what you can see is that there’s a clear difference between the certain and uncertain Movement Times: people are moving faster on the uncertain trials. The Initiation Time, when people start to move, is also faster on uncertain trials, and both of these get faster over the blocks. Then for another measure, Timing Error, we see a difference here as well, where people are actually later on the uncertain trials. To make this Timing Error a little more intuitive or palpable for you guys, it’s kind of tantamount to where you’re hitting the target with the bat. So here, on these graphs at the bottom, I’m plotting where people hit on the actual fruit UFO on certain trials, shown at the top, and you can see that they mostly hit the front right corner, which is actually the ideal spot to hit to maximize your success.
On the uncertain trials they start out there initially, and sort of creep back over time as the experiment goes on, which you can see over here. And so for both of these measures, sorry, these are all significant, but for both of these measures, Initiation Time and Timing Error, we were interested in whether people were doing this consciously or whether it was just something that emerged out of the task. So what we did is put a task at the end of the experiment where we had people watch a video of the UFO going by, and we told them to note when they would have tried to start moving during the actual task. Afterwards we let them position the UFO at that location, and you can see that they’re clearly putting the UFO on a certain background earlier than on an uncertain one.
So they’re aware that they’re initiating later. We also had them click where they were trying to hit the UFO with the bat on certain versus uncertain trials, and we found a similar thing: they indicated a point further back on the UFO for where they were trying to hit. We think this is important because these proactive measures are not just something that implicitly emerges through some unconscious learning process; they’re something people actively know they’re doing. There’s one final point I want to make here, and it’s a methodological point, to do with lag, because we know that there are differences across people’s computers. What I’m plotting on these plots is the intended position of the target, so where the target is supposed to be given the amount of time that’s elapsed in the trial, versus where we actually record the target to be.
For some trials, there’s very little difference between these two things. But on some people’s computers, their computer chugs a little bit, drops a frame, and they have trouble with the target appearing where it’s supposed to be; those are the trials highlighted here. And this is something that actually affects some of our measures. Here on the left is the proportion of times that they hit the target on certain trials, and you see that there’s about a 10% reduction in performance when people have a lot of laggy trials, and they also have longer Stop Signal Reaction Times due to the same thing. John noticed that this seemed to be disproportionately affecting people who were using a trackpad, so he added a little screen to the beginning of the task that, if someone indicated they had a trackpad, told them to go plug in their laptop before they started the task.
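The lag check itself can be sketched as a per-frame comparison between where the target should be on its fixed path and where it was actually rendered; the tolerance threshold and names below are my assumptions, not the values from John’s task.

```csharp
using UnityEngine;

// Hypothetical sketch of the lag check: each frame, compare where the
// target should be, given elapsed trial time and its fixed velocity,
// against where it was actually rendered. Frames with a large discrepancy
// mark the trial as laggy.
public class LagMonitor : MonoBehaviour
{
    public Transform target;
    public Vector3 startPos;
    public Vector3 velocity;            // fixed path: startPos + velocity * t
    public float toleranceUnits = 0.1f; // assumed discrepancy threshold

    private float _trialStart;
    public int LaggyFrames { get; private set; }

    public void BeginTrial() { _trialStart = Time.time; LaggyFrames = 0; }

    void Update()
    {
        float t = Time.time - _trialStart;
        Vector3 intended = startPos + velocity * t;
        if (Vector3.Distance(intended, target.position) > toleranceUnits)
            LaggyFrames++; // the computer "chugged" and dropped a frame
    }
}
```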
In an initial pilot with this, he had to throw out over half the participants, 11 of them for low hit percentage. After adding in this little screen, he has to throw out far fewer for bad performance. However, you still see that people using a trackpad have more laggy trials than people using an optical mouse; in general, laptops aren’t as high-performing as desktop PCs. So for a quick summary: when you’re getting data from either an optical mouse or a trackpad, you need to take into consideration that these are biomechanically different and may result in different success rates for reaches in different directions.
There are upper bounds on how fast you can move a mouse, and that can affect the movements you can record, so you should try to ameliorate that if you can. And lag itself, arising from the variable equipment across participants, can affect the performance that you see in these tasks. However, we can still design tasks that get the standard effects we see in the field, and also find some interesting new findings with these techniques. So my general thought here is that online experiments are cool. And I’d like to thank everybody in the lab that contributed to this. So thank you.