Olivia Kim, Princeton
@oliviaakim
Full Transcript:
Olivia:
Okay.
Speaker 2:
I see it.
Olivia:
Cool, thanks. So my name is Olivia and I’m a postdoc at Princeton working in Jordan Taylor’s lab. But today I’m going to show you some data that I collected in collaboration with Alex Forrence and Sam McDougle at Yale, where we remotely quantified visuomotor learning and tested whether movement is really a requirement for this kind of adaptation. But first, I’m going to take a second to discuss how studies of motor learning have often been tightly controlled, which JT and Ryan both touched on. So here again is a robotic arm. In the lab, we maintain precise control over how people are moving: we can keep the movement in a single plane and apply forces to it. But we can also control exactly what they’re seeing: we can conceal vision of the hand and present feedback via a monitor and a mirror, displayed at a specific frame rate.
Olivia:
Additionally, during these experiments the experimenter is usually present, so we provide instructions to the participant, and if there’s any confusion, or if we notice that the person is doing something wrong, we can clarify and correct to make sure the task proceeds as we anticipate. And one big question that we had going into research this year, especially considering COVID, is whether we can observe canonical motor adaptation online, where we have less control.
Olivia:
So we don’t have control over how participants are doing the task. Like Ryan said, they might be using a trackpad or a mouse, and we didn’t want to impose a requirement to use either, in case people misreported which device they were actually using. Additionally, people might be sitting up or lying down; they could be in any configuration, which could affect the biomechanics of their movement. And people are at home, so they might be distracted by surprising noises, or some kind of interruption from a pet or a family member, or simply the temptation to look at your phone, since it’s always there and there’s nobody to ask you not to.
Olivia:
So in addition to that higher-level methodological question, we had the research question of whether movement is actually necessary for implicit motor adaptation. In the standard trials of the reaching task I’m going to use today, the target is presented along with some cue to start the movement, and people move their hand toward the target and receive visual feedback in the form of a visually displayed cursor. To induce learning, we rotate that cursor feedback. Instructing people to aim directly at the target and ignore the movement of the cursor restricts this error to a sensory prediction error: the deviation between your hand position and the cursor position is a very salient signal for brain regions like the cerebellum, and it actually causes learning. In this figure here, from one of Ryan’s papers in 2017, the y-axis shows hand angle, or the change in hand angle, which is the measure of movement in this task.
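To make that geometry concrete, here is a minimal TypeScript sketch of how rotated cursor feedback and hand angle could be computed in a task like this. It is an illustrative reconstruction under my own assumptions, not the study’s actual code.

```typescript
// Illustrative sketch: visuomotor rotation feedback and hand angle.
// The displayed cursor is the hand position rotated about the start
// location by a fixed angle (e.g. 15 degrees).

type Point = { x: number; y: number };

function rotateAbout(p: Point, center: Point, deg: number): Point {
  const rad = (deg * Math.PI) / 180;
  const dx = p.x - center.x;
  const dy = p.y - center.y;
  return {
    x: center.x + dx * Math.cos(rad) - dy * Math.sin(rad),
    y: center.y + dx * Math.sin(rad) + dy * Math.cos(rad),
  };
}

// Hand angle: direction of the hand relative to the start, in degrees.
function handAngle(hand: Point, start: Point): number {
  return (Math.atan2(hand.y - start.y, hand.x - start.x) * 180) / Math.PI;
}

const start: Point = { x: 0, y: 0 };
const hand: Point = { x: 0, y: 100 };        // a reach straight to 90 deg
const cursor = rotateAbout(hand, start, 15); // feedback rotated by +15 deg
console.log(handAngle(hand, start));         // 90
console.log(handAngle(cursor, start));       // 105: the visual error
```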
Olivia:
And when people are instructed to ignore the movement of the cursor and all they get is this sensory prediction error, you can see here that there is a gradual change that counteracts the rotation. And people are unaware of this happening, so we call it implicit adaptation. On trials without movement, what we did is present the cue for participants to reach toward the target, but then change it to magenta to indicate that they should withhold their reach, and we then played back a simulated cursor movement showing a visual error that missed the target. So on both kinds of trials, visual errors are displayed regardless of whether participants actually moved, and we asked whether implicit adaptation occurs under both conditions. Is all that’s needed some kind of representation of the goal plus some error, or do we actually need to move in order to learn to update that movement in the future? We employed a single-trial learning design in order to minimize the effects of distractions.
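As a rough illustration of those two probe types, here is a hypothetical sketch; the function names and the single-sample feedback loop are assumptions for illustration, not the actual task logic.

```typescript
// Illustrative sketch of the two probe-trial types; not the actual task code.

type Point = { x: number; y: number };

interface ProbeTrial {
  movement: boolean;   // true: reach to the target; false: withhold (magenta cue)
  rotationDeg: number; // visual error on this trial: 0 or +/-15 degrees
}

// On movement trials, the displayed cursor tracks the live hand (with the
// rotation applied upstream, as in the earlier sketch). On no-movement
// trials, a pre-recorded cursor path that misses the target by rotationDeg
// is played back, so the visual error appears without any reach.
function probeFeedback(
  trial: ProbeTrial,
  liveHand: () => Point,
  simulatedPath: Point[],
  showCursor: (p: Point) => void
): void {
  if (trial.movement) {
    showCursor(liveHand());                       // one sample of live feedback
  } else {
    for (const p of simulatedPath) showCursor(p); // replay the simulated error
  }
}
```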
Olivia:
So in a traditional block design, sort of like what I showed you from Ryan’s paper, we measure cumulative effects over hundreds of trials: there’s some baseline period and then hundreds of trials of manipulation. And when there’s a break in the study, as shown in this paper from Hyosub Kim, there is some effect on task performance, and it’s unpredictable how breaks taken by self-guided participants at any given time could introduce noise into this cumulative learning signal. So we employed a single-trial learning design to measure effects across a triplet of trials. In this particular design, we measured movement on trial 1, introduced some perturbation (the rotation, or visual error, from the last slide) on trial 2, and measured movement again on trial 3. The difference between movements on these two trials is called the “learn,” the amount of learning from that perturbation, and this limits the effects of distractions to the triplets on which they occur.
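Here is what that learning measure could look like in code: a hypothetical TypeScript computation of single-trial learning for one triplet, with field names that are my own, not the authors’.

```typescript
// Illustrative computation of single-trial learning for one triplet.
// "Learn" is the change in hand angle from trial 1 to trial 3, signed so
// that adaptation opposing the rotation comes out positive.

interface Triplet {
  angleTrial1: number; // hand angle on the first trial of the triplet (deg)
  angleTrial3: number; // hand angle on the last trial of the triplet (deg)
  rotationDeg: number; // perturbation on the middle probe trial: 0 or +/-15
}

function singleTrialLearning(t: Triplet): number {
  const delta = t.angleTrial3 - t.angleTrial1;
  // Flip the sign for positive rotations so learning is comparable across
  // both error directions; zero-rotation triplets keep the raw change.
  return t.rotationDeg === 0 ? delta : -Math.sign(t.rotationDeg) * delta;
}

// Example: a +15 deg rotation that shifts the hand by -3 deg counts as
// 3 deg of learning.
console.log(
  singleTrialLearning({ angleTrial1: 0, angleTrial3: -3, rotationDeg: 15 })
); // 3
```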
Olivia:
And it allows us to easily exclude trials with, say, poor reaction times or long inter-trial intervals, with minimal data loss. Additionally, the traditional block design is hours long and maybe somewhat repetitive, and without an experimenter there to suggest that people stay engaged in the task, doing exactly the same thing hundreds of times in a row might be quite boring and promote distraction. Whereas in the single-trial learning design, we have a variety of things happening on different trials, so perhaps with this variability we encourage people to stay more interested in what’s going on and perhaps look away less often. Also, in this particular study we presented a variety of kinds of triplets, but broadly we had triplets with movement on the middle probe trial and triplets without movement, and the rotation applied could be either zero degrees or plus or minus 15 degrees.
Olivia:
So we can measure adaptation in response to both directions of error and, hopefully, baseline adaptation, with no change in movement, for the zero-degree rotation. And these flanking trials contained no feedback, so we can get a pure measurement of the change in movement. And again, that change in movement was measured across trials 1 and 3.
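Putting the design together, a hypothetical schedule generator might look like this; the condition counts and shuffling are assumptions for illustration.

```typescript
// Illustrative generator for a shuffled triplet schedule: each triplet is a
// no-feedback trial, a perturbed probe trial, and another no-feedback trial.

type Rotation = -15 | 0 | 15;

interface TripletSpec {
  probeMovement: boolean; // movement probe vs. withheld-movement probe
  rotationDeg: Rotation;  // rotation applied on the middle trial
}

function makeSchedule(repsPerCondition: number): TripletSpec[] {
  const rotations: Rotation[] = [-15, 0, 15];
  const schedule: TripletSpec[] = [];
  for (const probeMovement of [true, false]) {
    for (const rotationDeg of rotations) {
      for (let i = 0; i < repsPerCondition; i++) {
        schedule.push({ probeMovement, rotationDeg });
      }
    }
  }
  // Fisher-Yates shuffle so conditions are interleaved across the session.
  for (let i = schedule.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [schedule[i], schedule[j]] = [schedule[j], schedule[i]];
  }
  return schedule;
}
```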
Olivia:
We took some additional steps to streamline the remote participant experience, presuming that happy participants who understand what’s going on will promote good data collection, whereas people who are confused or frustrated will take our money and produce data that we can’t really use later. So Alex Forrence put some effort, using Phaser, a free online HTML5 game framework, into making our instructions very legible and visually appealing. As you can see, the text scrolls across the screen, drawing your visual attention and hopefully encouraging people to read it. Additionally, if people made an error that we could detect programmatically, we gave them a reminder. So here is a trial where somebody moved when they were supposed to withhold their movement, and they see this message again, to ensure that the instructions are actually received in case they clicked through quickly the first time they were shown.
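For flavor, here is a minimal Phaser 3 sketch of scrolling instruction text. This is my own illustrative reconstruction, not the study’s actual implementation, and the text, positions, and timings are made up.

```typescript
import Phaser from 'phaser';

// Illustrative Phaser 3 scene: tween instruction text across the screen
// to draw the participant's visual attention to it.
class InstructionScene extends Phaser.Scene {
  create(): void {
    const text = this.add.text(800, 300, 'Move your cursor to the target.', {
      fontSize: '28px',
      color: '#ffffff',
    });
    this.tweens.add({
      targets: text,
      x: 100,         // scroll from x=800 to x=100
      duration: 3000, // over three seconds
      ease: 'Linear',
    });
  }
}
```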
Olivia:
And in order to verify that the instructions were understood, we had some controls built into the experimental design. So, for instance, we can detect people who were moving the mouse when they were instructed not to and exclude those trials or participants. But we also presented a brief multiple-choice quiz after the study.
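That movement check might be as simple as the following hypothetical flagging function; the tolerance value is an assumption.

```typescript
// Illustrative check for movement on withhold trials: flag the trial if the
// cursor strayed beyond a small tolerance from the start position.

type Point = { x: number; y: number };

function movedDuringWithhold(
  samples: Point[], // cursor samples recorded during the withhold period
  start: Point,
  tolerancePx = 10  // assumed tolerance; the real criterion may differ
): boolean {
  return samples.some(
    (p) => Math.hypot(p.x - start.x, p.y - start.y) > tolerancePx
  );
}

// Flagged trials can then be excluded, or the reminder message re-shown.
```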
Olivia:
There were three questions with three answer options each, so if participants were purely guessing, only about 4% should get all three correct ((1/3)³ ≈ 3.7%). And we found that providing an incentive to attend to the quiz actually seemed to improve our ability to gauge whether people understood what was going on. Without an incentive, about 48% of Prolific participants answered all questions correctly, which was a little disappointing, and kind of confusing, since the data from people who answered the attention checks correctly were very similar to the data from people who did not. But providing a $0.50 bonus for getting all of the questions correct increased that number to about 74% of Prolific participants, and revealed bigger differences between people who appeared to understand the instructions and people who did not. So this is a little more expensive, but it provides some peace of mind about data quality, or at least about whether we’re effectively communicating the instructions to people.
Olivia:
And one last thing that we did to streamline the remote participant experience: in the lab, this part is a little easier because we have control over the absolute starting location. The cursor is tied to the hand position, with the starting position shown as this white circle here. We can conceal the hand in between trials and only show the distance between the hand and the start location via the radius of a green circle that appears on the screen. When people move closer to the center, that circle gets smaller, but it doesn’t reveal the x and y coordinates of their hand. This is much easier in person because you have some kind of proprioceptive information, but a lot of people struggled with this online. So what Alex did is set up a system where, when the cursor moved past the target, it would automatically reappear close to the center here. So you can see that happen again.
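In code, that distance-only feedback and automatic reset might look something like this hypothetical sketch; the reset offset and the overshoot test are assumptions.

```typescript
// Illustrative sketch of distance-only return feedback: between trials, only
// the distance from the start is revealed, as the radius of a green circle,
// while the hand's x/y position stays hidden.

type Point = { x: number; y: number };

function returnCircleRadius(hand: Point, start: Point): number {
  return Math.hypot(hand.x - start.x, hand.y - start.y);
}

// If the hidden cursor overshoots past the target distance, silently reset
// it near the start so participants do not get stuck searching for it.
function maybeResetCursor(
  hand: Point,
  start: Point,
  targetDistance: number
): Point {
  if (returnCircleRadius(hand, start) > targetDistance) {
    return { x: start.x + 5, y: start.y + 5 }; // reappear close to the center
  }
  return hand;
}
```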
Olivia:
And this reduced the occurrence of unusually long search times, which were sometimes over 12 or even 30 seconds, and it reduced the average inter-trial interval, allowing us to fit more trials into the same amount of time while reducing participant complaints about the difficulty of the task, without really affecting the degree of learning that we saw. So if I just show you the data that we collected from this study, you can see that visual errors were sufficient to drive motor adaptation, or in other words, movement was not necessary. On triplets with movement on the perturbation trial, you can see that when no rotation was applied, there was no adaptation, but when a 15-degree rotation was applied, there was about three degrees of adaptation across the triplet. Similarly, on triplets without movement, we saw about two degrees of adaptation in the presence of a 15-degree rotation.
Olivia:
And, though I don’t know if I showed it, there was a significant main effect of rotation, indicating that adaptation occurs under both conditions in this study. Similarly, the amount of single-trial learning that we observed here is consistent with previous in-lab studies. So this is a figure compiled by Hyosub Kim showing the learning rate in a variety of reaching studies over the years, and if we superimpose our data, you can see that both trials with movement and trials without movement fall within the range of what’s been observed before, which is encouraging and suggests that we’re tapping into the same process that was tapped into by these in-lab studies.
Olivia:
So as an initial summary, we’ve shown that motor adaptation can be measured online, similar to what JT talked about in the introduction. And this occurs despite our lack of control over the absolute hand position, the type of mouse participants are using, and the environment they’re doing the task in, suggesting that implicit adaptation, at least at the level of single trials, may be a higher-level feature of motor control: it’s not dependent on the specific features of movements across subjects. That’s encouraging and suggests that previous findings from the lab should generalize outside the lab and to many more situations.
Olivia:
And regarding our research questions, we’ve demonstrated that movement is not really required for implicit motor adaptation, as adaptation is similar whether participants move or remain still when they view an error, and movements themselves don’t need to be tied to errors to be the basis for motor adaptation. And I’d like to take a second to thank you for your time, and to thank my coauthors, my colleagues, and our funding sources for everything that they’ve contributed. There’s more data, if you want to talk about that.
Speaker 2:
Great, Olivia, thank you very much.