Remotely Quantifying Visuomotor Learning with and without Movement

Olivia Kim, Princeton
@oliviaakim


Full Transcript:

Olivia:
Okay.

Speaker 2:
I see it.

Olivia:
Cool, thanks. So my name is Olivia and I'm a postdoc at Princeton working in Jordan Taylor's lab. Today I'm going to show you some data that I collected in collaboration with Alex Forrence and Sam McDougle at Yale, where we remotely quantified visuomotor learning and tested whether movement is really a requirement for this kind of adaptation. But first, I'm going to take a second to discuss how studies of motor learning have often been tightly controlled, which JT and Ryan both touched on. So here again is a robot arm. In the lab, we maintain precise control over how people are moving: we can keep that movement in a single plane and apply forces to it. But we can also control exactly what they're seeing. We can conceal vision of the hand and present feedback via a monitor and a mirror displaying data at a specific frame rate.

Olivia:
Additionally, during these experiments the experimenter is usually present. We provide instructions to the participant, and if there is any confusion, or if we notice that the person is doing something wrong, we can clarify and correct to make sure that the task goes on as we anticipate. And one big question that we had going into research this year, especially considering COVID, is whether or not we can observe canonical motor adaptation online, where we have less control.

Olivia:
So we don't have control over how participants are doing the task. Like Ryan said, they might be using a trackpad or a mouse, and we didn't want to place a requirement to use either, in case people lied when they reported the devices that they were using. Additionally, people might be sitting up, they might be lying down, they could be in any configuration, which could affect the biomechanics of their movement. People are also at home, so they might be distracted by surprising noises, by some kind of interruption from a pet or a family member, or just by the temptation to look at their phone, since it's always there and there's nobody to ask them not to.

Olivia:
So in addition to that higher level methodological question, we had the research question of whether movement was actually necessary for implicit motor adaptation. In standard trials in reaching tasks, which is what I'm going to use today, usually the target is presented, there's some cue to start the movement, and people move their hand towards the target and receive visual feedback in the form of a visually displayed cursor. To induce learning, we present this rotation, and instructing people to aim directly at the target and ignore the movement of the cursor restricts this error to a sensory prediction error. The deviation between your hand position and the cursor position is a very salient signal for brain regions like the cerebellum, and that actually causes learning. So in this figure here, from one of Ryan's papers in 2017, on the Y axis we have hand angle, or change in hand angle, which is the measure of movement in this task.
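
As an illustration of the kind of perturbation described here, the sketch below rotates the cursor feedback around the start position by a fixed angle; the helper name rotatedCursor and the 15 degree value are assumptions for the example, not code from the actual study.

```typescript
// Rotate the cursor feedback around the start position by a fixed angle,
// so the cursor no longer tracks the hand direction exactly.
// (Illustrative sketch only; names and values are assumptions.)

interface Point {
  x: number;
  y: number;
}

function rotatedCursor(hand: Point, start: Point, rotationDeg: number): Point {
  const theta = (rotationDeg * Math.PI) / 180;
  const dx = hand.x - start.x;
  const dy = hand.y - start.y;
  return {
    x: start.x + dx * Math.cos(theta) - dy * Math.sin(theta),
    y: start.y + dx * Math.sin(theta) + dy * Math.cos(theta),
  };
}

// Example: a 15 degree rotation applied to a straight-ahead reach of 100 units.
// The hand moved straight toward the target, but the cursor lands about 15 degrees away,
// producing the sensory prediction error that drives implicit adaptation.
const cursor = rotatedCursor({ x: 0, y: 100 }, { x: 0, y: 0 }, 15);
console.log(cursor); // roughly { x: -25.9, y: 96.6 }
```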

Olivia:
And when people are instructed to ignore the movement of the cursor and all they get is this sensory prediction error, you can see here that there is a gradual change to counterbalance that rotation. And people are unaware of this happening, so we call it implicit adaptation. On trials without movement, what we did is we presented this cue for participants to reach towards the target, but then changed it to magenta to indicate that they should withhold their reach. And then we played a simulated cursor movement showing a visual error of missing the target. So on both kinds of trials, visual errors are displayed regardless of whether participants actually moved, and we asked whether implicit adaptation occurs under both conditions. Is all that's needed some kind of representation of the goal plus some error, or do we actually need to move in order to learn to update that movement in the future? We employed a single trial learning design in order to minimize the effects of distractions.
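
For the no-movement trials described above, one way such a simulated miss could be played back is sketched below; the function simulateMiss, its timing, and the 15 degree error are assumptions for illustration rather than the study's implementation.

```typescript
// Play back a simulated cursor that travels from the start position toward a
// point offset from the target by a fixed angular error (e.g. 15 degrees),
// so participants see a miss even though they withheld their reach.
// (Illustrative sketch; not the experiment's actual implementation.)

function simulateMiss(
  start: { x: number; y: number },
  target: { x: number; y: number },
  errorDeg: number,
  durationMs: number,
  drawCursor: (x: number, y: number) => void,
): void {
  const theta = (errorDeg * Math.PI) / 180;
  const dx = target.x - start.x;
  const dy = target.y - start.y;
  // End point: the target location rotated about the start by the error angle.
  const end = {
    x: start.x + dx * Math.cos(theta) - dy * Math.sin(theta),
    y: start.y + dx * Math.sin(theta) + dy * Math.cos(theta),
  };
  const t0 = performance.now();
  const step = (now: number) => {
    const t = Math.min((now - t0) / durationMs, 1);
    drawCursor(start.x + (end.x - start.x) * t, start.y + (end.y - start.y) * t);
    if (t < 1) requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
```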

Olivia:
So in a traditional block design, sort of like what I showed you in Ryan's paper, we measure cumulative effects over hundreds of trials. There's some baseline period and then hundreds of trials of manipulation. And when there's a break in the study, as shown in this paper from Hyosub Kim, there is some effect on task performance, and breaks taken by self-guided participants at any given time could unpredictably introduce noise into this cumulative learning signal. So we employed a single trial learning design to measure effects across a triplet of trials. In this particular design, we measured movement on trial 1, introduced some perturbation, the rotation or visual error from the last slide, on trial 2, and measured movement again on trial 3. The difference between movements on these two trials was taken as the learning, the amount of learning from the perturbation on that trial, and this limits the effects of distractions to the triplets on which they occur.
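
A minimal sketch of how learning might be scored per triplet, assuming hand angles (in degrees) have already been extracted for trials 1 and 3; the Triplet structure, the field names, and the sign convention are illustrative assumptions, not the study's analysis code.

```typescript
// Score single-trial learning for each triplet: the change in hand angle from
// trial 1 to trial 3, signed so that adaptation opposing the rotation is positive.
// (Illustrative sketch; field names and sign convention are assumptions.)

interface Triplet {
  rotationDeg: number;      // perturbation on trial 2: -15, 0, or +15
  handAngleTrial1: number;  // baseline hand angle (degrees)
  handAngleTrial3: number;  // post-perturbation hand angle (degrees)
}

function singleTrialLearning(t: Triplet): number {
  const change = t.handAngleTrial3 - t.handAngleTrial1;
  // Flip the sign for negative rotations so counter-rotation is always positive;
  // for zero-degree triplets, just report the raw change (baseline drift).
  return t.rotationDeg === 0 ? change : change * -Math.sign(t.rotationDeg);
}

function meanLearning(triplets: Triplet[]): number {
  const scores = triplets.map(singleTrialLearning);
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}
```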

Olivia:
And it allows us to easily exclude trials with, perhaps, poor reaction times or long inter-trial intervals, with minimal data loss. Additionally, the traditional block design is hours long and maybe somewhat repetitive, and without an experimenter there to suggest that people should stay engaged in the task, doing exactly the same thing hundreds of times in a row might be quite boring and promote distraction. Whereas in the single trial learning design, we have a variety of things happening on different trials, so perhaps with this variability we encourage people to stay more interested in what's going on and perhaps look away less often. Also, in this particular study we presented a variety of kinds of triplets, but broadly we had triplets with movement on this middle probe trial and triplets without movement, and the rotation applied could have been either zero degrees or plus or minus 15 degrees.

Olivia:
So we can measure adaptation in response to both directions of error and, hopefully, a baseline with no change in movement for the zero degree rotation. And these flanking trials contained no feedback, so we can get a pure measurement of the change in movement. And again, that change in movement was measured across trials 1 and 3.
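
One plausible way to generate and shuffle such a set of triplet conditions (movement or no movement on the probe trial, crossed with rotations of -15, 0, and +15 degrees) is sketched below; the names and the balanced-cell structure are assumptions for illustration, not the actual trial schedule.

```typescript
// Build a balanced, shuffled list of triplet conditions:
// {with movement, without movement} x {-15, 0, +15} degrees,
// repeated some number of times per cell. (Illustrative sketch.)

interface TripletCondition {
  probeMovement: boolean;   // does the participant reach on the middle trial?
  rotationDeg: number;      // rotation / simulated error on the middle trial
}

function buildSchedule(repsPerCell: number): TripletCondition[] {
  const schedule: TripletCondition[] = [];
  for (const probeMovement of [true, false]) {
    for (const rotationDeg of [-15, 0, 15]) {
      for (let i = 0; i < repsPerCell; i++) {
        schedule.push({ probeMovement, rotationDeg });
      }
    }
  }
  // Fisher-Yates shuffle so conditions are interleaved unpredictably.
  for (let i = schedule.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [schedule[i], schedule[j]] = [schedule[j], schedule[i]];
  }
  return schedule;
}
```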

Olivia:
We took some additional efforts to streamline the remote participant experience, presuming that happy participants who understand what's going on will promote good data collection, whereas people who are confused or frustrated will take our money and produce data that we can't really use later. So Alex Forrence put some effort into using Phaser, which is a free HTML5 game framework, to make our instructions very legible and visually appealing. As you can see, the text scrolls across the screen, drawing your visual attention and hopefully encouraging people to read it. Additionally, if people made an error that we could detect programmatically, we give them a reminder. So here, somebody moved when they were supposed to withhold their movement, and they see this message again, to ensure that the instructions are actually received in case they clicked through quickly the first time they were shown.
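
As a rough idea of how scrolling instruction text can be set up in Phaser, here is a minimal sketch; the scene, the text content, and the tween settings are illustrative assumptions rather than the experiment's actual instruction code.

```typescript
import Phaser from 'phaser';

// A minimal Phaser 3 scene that scrolls an instruction line across the screen.
// (Illustrative sketch; not the experiment's actual instruction code.)
class InstructionScene extends Phaser.Scene {
  create(): void {
    const text = this.add.text(800, 300, 'Move your cursor straight to the target.', {
      fontSize: '28px',
      color: '#ffffff',
    });
    // Tween the text from the right edge to off-screen left, drawing visual attention.
    this.tweens.add({
      targets: text,
      x: -text.width,
      duration: 8000,
      ease: 'Linear',
    });
  }
}

new Phaser.Game({
  type: Phaser.AUTO,
  width: 800,
  height: 600,
  scene: InstructionScene,
});
```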

Olivia:
And in order to verify that instructions were understood, we had some controls built into the experimental design. So, for instance, we can detect people who were moving the mouse when they were instructed not to and exclude those trials or participants. But we also presented a brief multiple choice quiz after the study.
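
For the programmatic check mentioned here, one simple approach is to flag probe trials where the recorded cursor path moves more than a small threshold while the reach should have been withheld; the threshold and the data layout below are assumptions for illustration.

```typescript
// Flag no-movement probe trials where the recorded cursor path exceeds a small
// displacement threshold, so those trials (or participants) can be excluded.
// (Illustrative sketch; threshold and data layout are assumptions.)

interface Sample {
  x: number;
  y: number;
}

function movedDuringWithhold(samples: Sample[], thresholdPx = 10): boolean {
  if (samples.length === 0) return false;
  const start = samples[0];
  return samples.some(
    (s) => Math.hypot(s.x - start.x, s.y - start.y) > thresholdPx,
  );
}
```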

Olivia:
There were three questions with three answers each, so if people were purely guessing, only about 4% should answer all three questions correctly (one in three cubed, or roughly one in twenty-seven). And we found that providing some incentive to attend to the quiz actually seemed to improve our ability to gauge whether people understood what was going on. Without an incentive, about 48% of Prolific participants answered all questions correctly, which was a little bit disappointing and kind of confusing, since the data from people who answered the attention checks correctly were very similar to the data from people who did not. But providing a $0.50 bonus for getting all of the questions correct increased that number to about 74% of Prolific participants and revealed bigger differences between people who appeared to understand the instructions and people who did not. So this is a little bit more expensive, but it provides some peace of mind about the data quality, and at least about whether or not we're effectively communicating the instructions to people.

Olivia:
And one last thing that we did to streamline the remote participant experience was to make things a little bit easier between trials. In the lab, we have control over the absolute starting location, and the cursor is tied to the hand position, with the starting position shown in this white circle here. We can conceal the hand in between trials and only show the distance between the hand and the center location via the radius of a green circle that appears on the screen. When people move closer to the center, that circle gets smaller, but it doesn't reveal the X and Y coordinates of their hand. This is much easier in person because you have some kind of proprioceptive information, but a lot of people struggled with this online. So what Alex did is he set up a system where, when the cursor moved past the target, it would automatically reappear close to the center here. So you can see that happen again.
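
The two ideas in this passage, a return indicator whose radius encodes only the distance to the center and an automatic reset once the hidden cursor strays too far, might look roughly like the sketch below; the names, the reset rule, and the distances are assumptions, not the actual implementation.

```typescript
// Between trials the hand position is hidden; only a circle whose radius equals
// the distance from the (hidden) cursor to the start position is shown, and the
// cursor is snapped back near the center if it strays past the target distance.
// (Illustrative sketch; names, distances, and the reset rule are assumptions.)

interface Point {
  x: number;
  y: number;
}

function returnIndicatorRadius(cursor: Point, center: Point): number {
  // Reveals how far the hand is from center, but not its direction (no x/y).
  return Math.hypot(cursor.x - center.x, cursor.y - center.y);
}

function maybeResetCursor(cursor: Point, center: Point, targetDistance: number): Point {
  const dist = returnIndicatorRadius(cursor, center);
  if (dist > targetDistance) {
    // Snap the hidden cursor back to a spot near the center to avoid long searches.
    return { x: center.x, y: center.y + 20 };
  }
  return cursor;
}
```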

Olivia:
And this reduced the occurrence of unusually long search times, which were sometimes over 12 seconds or even over 30 seconds. It also reduced the average inter-trial interval, allowing us to fit more trials into the same amount of time, while reducing participant complaints about the difficulty of the task and not really affecting the degree of learning that we saw. So if I just show you the data that we collected from this study, you can see that visual errors were sufficient to drive motor adaptation, or in other words, movement was not necessary. On the triplets with movement on the perturbation trial, you can see that when no rotation was applied there was no adaptation, but when a 15 degree rotation was applied, there was about three degrees of adaptation across the triplet. Similarly, when trials without movement were presented, we saw about two degrees of adaptation in the presence of a 15 degree rotation.

Olivia:
And I don't know if I showed it, but there was a significant main effect of rotation, indicating that adaptation occurs under both conditions in this study. Similarly, the amount of single trial learning that we observed here is consistent with previous in-lab studies. This is a figure compiled by Hyosub Kim showing the learning rate in a variety of reaching studies over the years, and if we superimpose our data, you can see that both trials with movement and trials without movement fall within the range of what's been observed before, which is encouraging and suggests that we're tapping into the same process that was tapped into by these studies in the lab.

Olivia:
So as an initial summary, we've shown that implicit motor adaptation can be measured online, similar to what JT talked about in the introduction. And this occurs despite our lack of control over the absolute hand position, the type of mouse that participants are using, and the environment that they're doing the task in, suggesting that implicit adaptation, at least at the level of single trials, may be a higher level feature of motor control: it's not dependent on the specific features of movements across subjects. Which is encouraging, and suggests that previous findings in the lab should be generalizable outside of the lab and in many more situations.

Olivia:
And regarding our research questions, we've demonstrated that movement is not really required for implicit motor adaptation, as adaptation is similar whether participants move or remain still when they view an error, and movements themselves don't need to be tied to errors to be the basis for motor adaptation. I'd like to take a second to thank you for your time and to thank my coauthors for everything they've contributed, as well as my colleagues and our funding sources. There's more data if you want to talk about that.

Speaker 2:
Great, Olivia, thank you very much.

 
