Exploring Motor Imagery and Action Prediction through Juggling and Baseball Tasks in Online Environments


Zach Besler — University of British Columbia

Full Transcript:

Zach Besler 0:00
Thanks so much. Well, thanks so much, Jo, for the opportunity to present today. It's been fantastic to hear all the other talks. I'm interested in learning and predictions; I've already learned a lot, and this has far exceeded my predictions, so we're off to a great start today. Before we get started, I wanted to acknowledge that where I do my research, the University of British Columbia, is situated on the traditional, ancestral, and unceded territories of the Musqueam, Tsleil-Waututh, and Squamish First Nations peoples. This land has been a land of learning for thousands of years, so we're very excited to engage with that space and use that space to learn more.

So what am I interested in? Well, the big overview for our research in the Motor Skills Lab is how our own motor experiences impact how we learn from, and make predictions about, other people. An overall process that might explain why this happens is called motor simulation: if we're watching somebody else's actions, either to learn or to try and predict what someone's going to do next, we can rely on our own motor system while we're watching. Through the shared pathways in the brain between observation (watching) and action (doing), somehow, some way, our brains are prepared to execute those same actions. So we can use signals from our own body to provide insight into how other people might have accomplished those movement goals, and how we might accomplish those same movement goals moving forward. These are really interesting topics for us, and this gives us that arc from "imagine this" to "what was that?", and why we explore juggling and baseball and all kinds of cool things online. So, not everyone knows how to juggle.

As Jo was mentioning earlier, we had designed an online experiment to teach people how to juggle using a couple of different techniques. But we're also interested in skilled performance and being able to predict the actions of other people, and then also some eye tracking and dynamic visual acuity tasks. I'll start with a bit of a disclaimer: although we have been collecting data for a couple of years, the data are still unpublished. So I'll mostly be framing this presentation around some of the challenges of conducting online research, some of the things I've learned along the way to help anyone else who's starting online research for the first time, and some of the tools we used to make this process just a little bit easier. We can look at this as the overview for this topic, from fundamental vision to applied sport. And I am Canadian, so I'll be talking about the Canadian perspective as well. For each of these three tasks: we had juggling to assess action observation and motor imagery, we used a baseball pitch recognition task for action prediction, and we used the Landolt C task for dynamic visual acuity. I'll talk about how we created those stimuli to make them engaging and sustain really active participation during our online studies, and then some of the cool online task features in Gorilla that we used in each of these studies to really maximise our effects.

3:42
Awesome. Sweet. So this is a picture of our Motor Skills Lab, and lab life in the before times was pretty nice, because we have this really nice projector screen that we can show videos on. Everything is nice and neatly controlled, as everybody else has been saying. But it does have its drawbacks, in that it's hard for us to recruit a large sample.

When we didn't have the option to use the lab, we had to adapt our research for a changing world, and we definitely went online. We basically had to start doing online research from scratch; it wasn't anything that any of us in our department had really explored before, so I'm here to talk about my learning process with that. First off: watching and imagining the actions of others. We used a juggling task here, and essentially we tried to teach people how to juggle online. This was during the early parts of the pandemic as well, when lots of people had time on their hands and wanted to learn new things, so we thought that might be a nice fit.

From the research perspective, we were trying to find the effects of motor imagery on confidence and learning. If we imagine what somebody else's movements will feel like, how does that influence how well we think we would do at that task as well? What's really neat about this is you could almost call it the "watching the Olympics" effect. We've all watched the Olympics; we can see some of those swimmers in the pool making it look so easy, just effortlessly gliding through the water. When we watch the Olympics on TV we're enthralled, fascinated, and we also think, "Yeah, I could do that too." And it's not until we get about 14 metres down the lane in our recreational swimming pool that we start to seriously reevaluate that initial prediction of our own abilities. That's what we were trying to get at here with juggling: we had one group of participants just watch juggling actions, and the other group watched the action and then immediately imagined what it would feel like to do that same action themselves, to see if there were any differences in their self-perceived ratings of confidence. And then after the learning, we looked to see who could actually juggle after training, which has made for some interesting trends so far.

But I'll be talking more specifically about how we made this in the first place. What we used was a head-mounted GoPro together with a concurrent tripod setup, so we had first- and third-person perspective video that we could use in different types of trials, because motor imagery effects can differ depending on the physical perspective that we take. What was really neat is that when you're watching video from that head-mounted GoPro, it's a very immersive experience: you can see the balls being juggled right in front of you. And that really helped keep all of our participants engaged in that online environment.

Then you get your second problem, which is: okay, we're trying to juggle online, but how do you measure imagination online? What we did was assess the duration of imagery using a spacebar press-and-hold function in Gorilla. Basically, the group that was engaging in imagery would press and hold the spacebar when they began to imagine, and then release the spacebar when they were done imagining the task. As a key-press control, we had the people who were not engaging in motor imagery hold the spacebar down for a set period of time that we told them to hold for. I have some notes here on how we made that keyboard hold-and-release response happen, and there's a great QR code here for the Gorilla support page on keyboard hold-release. I might have been in the top 1% of users for time spent on that support page; it was very helpful, and we made it work. A rough sketch of this kind of press-and-hold timing follows below.
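To make that measurement concrete, here is a minimal browser-side sketch of press-and-hold timing. This is only a generic illustration of the idea; Gorilla's keyboard hold-release zone provides this without any code.

```typescript
// Minimal sketch of spacebar press-and-hold timing in the browser.
// Illustrative only; not Gorilla's actual hold-release implementation.
let holdStart: number | null = null;

window.addEventListener("keydown", (e: KeyboardEvent) => {
  if (e.code !== "Space" || e.repeat) return; // ignore key auto-repeat
  holdStart = performance.now();              // imagery begins
});

window.addEventListener("keyup", (e: KeyboardEvent) => {
  if (e.code !== "Space" || holdStart === null) return;
  const imageryMs = performance.now() - holdStart; // imagery ends
  holdStart = null;
  console.log(`Imagined for ${imageryMs.toFixed(0)} ms`);
});
```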

8:22
Sweet. We'll get into our second task here, which was the sport-specific action prediction task with baseball players. And this was really fun, because we got to do some video collection in the wild and really try to take the game to the people. When we're talking about sport-specific research, we're taking a bit of a pivot away from novices learning new tasks like juggling, and we start getting into a very specific population of people who have a lot of experience and expertise with a certain task. It's really important for us as researchers to understand how different sports teams and sport clubs value the same types of questions that we might be asking.

So I have this outline of how to set up sport-specific research, in our case action prediction research, that coaches and athletes actually want to do. The first thing to do is find a progressive sport club, and there's a growing movement of embracing technology and new ways of thinking in sports, which is fantastic. Then, seek first to understand and then to be understood: what skills are valuable to coaches, if we're interested in skill acquisition research? And what processes and experiences might we want to look at as researchers? Then take a step back and ask, now that we've got this dialogue between the researchers and the coaches, are we using different words but maybe talking about the same things? Once you've got that down, you want to take the best videos, and this is cool because it adds on to the AI conversations we've been having; some of the technological advances allow us to ask really cool questions. And lastly, we need to be accessible, and this is where online research platforms really come in and are very helpful.

Coming back to seeking first to understand and then to be understood: I'm talking about action prediction, but what is that, really, and where did it come from? Action prediction is our ability to watch and anticipate, or create an expectation for, what the outcome of another person's action is going to be. In a baseball context specifically, for those of you who aren't as familiar with baseball, there's a pitcher who throws the ball, and then the hitter must hit the ball. Those two populations are relatively different people; there isn't a lot of overlap. So we get two subpopulations: one with motor experience and one with visual experience. From a researcher's perspective, we were interested to see whether people with motor experience were engaging in motor simulation and using that to guide their action predictions, and what the strategies of people with visual experience were. That sounds great in the research realm, but the way we designed the experiment also allowed coaches to answer a question they were interested in, which was: who is good at action prediction, who can pick up what the pitcher is trying to throw? Coaches value that ability to extract information early to make good sport-specific decisions. So we were able to feed two birds with one stone, as it were.

Just talking about how to take the best videos and what our process was: what's really exciting about being in 2022 is that the future is now. As we've been seeing in some of the other talks about the possibilities of AI, AI plus humans are incredible. Here are some clips of how we did video collection in the wild, where again we tried to take the research onto the field and make it relevant for the athletes we're studying. We have here our model pitcher, who will be throwing the pitches. For this we used an AI-based markerless biomechanics tracker called ProPILOT AI, so we were able to pick up on the actual kinematics of the body as he was throwing different pitches. Then we could cross-reference: if he was deceptive with his movements, how was he able to deliver those pitches, and who is best attuned to the differences between those kinematics?
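For a flavour of what markerless tracking output makes possible (a generic illustration, not the tracker or pipeline used in the study): pose trackers typically return per-frame joint keypoints, and quantities such as joint angles can be derived from them with simple vector arithmetic.

```typescript
// Generic sketch: a joint angle (e.g., the elbow) from three 2D keypoints,
// as a markerless pose tracker might output for one video frame.
// Illustrative only; not the ProPILOT AI pipeline described in the talk.
interface Point2D { x: number; y: number; }

function jointAngleDeg(a: Point2D, joint: Point2D, b: Point2D): number {
  // Vectors from the joint to its two neighbouring keypoints
  const v1 = { x: a.x - joint.x, y: a.y - joint.y };
  const v2 = { x: b.x - joint.x, y: b.y - joint.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const mags = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  return (Math.acos(dot / mags) * 180) / Math.PI;
}

// Hypothetical shoulder, elbow, and wrist keypoints from one frame
console.log(jointAngleDeg({ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 1, y: 1 })); // 90
```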

12:56
But having that partnership with the UBC baseball sports club, we were able to set up quite a few cameras. They have four AI-enabled cameras that surround their entire stadium; we had two cameras immediately on-field, in the perspective of a hitter, someone who would be facing this pitcher; we had our markerless AI biomechanics on the left here; and we had other video on the right as well. This allowed us to collect the highest-quality video possible and become experts on exactly what was going on. We also had a ball tracker, so we knew exactly how fast the ball was travelling, how much it was spinning, and where it ended up. And the AI-enabled cameras all around the stadium were placed by AI. Being able to use these different sources of video allows us to ask even more questions. We were focused on action prediction, but the future directions of collaborating with sports clubs are endless, from contextual knowledge to being able to see whether we were in the right defensive system or not; there are really a lot of opportunities there.

And lastly, being accessible: we used the online research platform Gorilla, which allowed us to collect data from a lot of different athletes all at the same time. This is a clip of what it actually looks like in Gorilla. You're going to see a pitcher throw a type of pitch, and if you are an expert in the sport, you'll be able to classify it as one of three different pitches. So here we go. You can see it's quite abrupt, it gets on you, but we have to be able to make these predictions quickly to respond to them in an effective way.

I'm going to touch on one more task that we created online, and that was the dynamic visual acuity test using the Landolt C. For those of you who maybe aren't as familiar with the Landolt C, it's a task for measuring dynamic visual acuity. Static visual acuity is kind of like those eye charts in the optometrist's office, but with dynamic visual acuity, we want to see how well you can resolve different gap widths when objects are moving, which can be helpful in sport as well. What we did here was we had different rings with gaps, and the gaps can be oriented in different positions; whoever is doing the task has to indicate which direction the opening is. Now, there is code that has been written in MATLAB to create these tasks, and there are a lot of in-lab opportunities to do this research. But the challenge here was: how do I create a no-code, no-download, universally compatible task? With the other tasks, creating those videos and uploading them online was challenging in its own right, but this was the part where, as a researcher, I felt like I was stuck on a desert island and all I had was a rock. Because I was thinking, how on earth am I going to get a little ring with the right gap size and just have it show up across the screen? How am I going to come up with that? And I thought about it, and then realised: well, if PowerPoint is my rock, I might actually be able to get off this island. So what I'm showing you here is a bit of a labour of love, where I was able to create this task using only PowerPoint

16:57
and Gorilla. To start, we used the screen calibration feature, which was in closed beta when we started, so it was really exciting to be one of the first people to use it. We needed this to know what the size of that gap would be in Gorilla versus what it looked like on my own screen. Here's the support page for the screen calibration test, and I strongly recommend that those of you doing this type of research check that one out. Again, I needed to find out what size my stimuli were showing up at in Gorilla, and what size I was creating in PowerPoint at different calibrations. So we had to do some cross-referencing; I had physical rulers on my laptop screen. Once we figured out that relationship, it was time to do some pixel-to-centimetre conversions (a sketch of that conversion arithmetic appears below). Then we were able to create our different ring sizes, which is important because, as you can see in these last two pictures, they have the same size of ring, but the gap size for the right one is just a little bit bigger, and that's what we're really interested in. So, down to 0.1 of a centimetre matters, and we were able to fine-tune that using the screen calibration in Gorilla.

I'll just finish off today by talking about some of the considerations specific to Canada with online research, and then I'll wrap it up, because I know we're pressed for time. In terms of the Canadian research process, typically we would recruit either online or using posters, but especially at the beginning of the pandemic everyone was at home, so we were recruiting online. That recruitment process, as I mentioned, can start earlier by engaging with specific sport clubs and organisations, because you'll have a population of people who are really interested in doing the research you want to do. Then the participant sends an email to us as the research lab, indicating that they're interested in participating in the research, and we send them a participant demographics tool. This was another challenge, because under Canadian privacy legislation all participant demographic information has to be stored in Canada, which for us meant it had to be stored in Qualtrics. We would provide participants with a participant code, and the participants would use that code to ultimately complete their experiment in Gorilla. There are many steps along this path where you can lose participants, but the approach we took is that it starts with a quality project: if you have high-quality projects and designs that are interesting and engaging for your participants, and they can see the evidence of that, you can carry participants through these steps a little bit better.

So I'll just wrap up today with my last slides. Again, recruitment and buy-in start much earlier in the process, particularly when we're engaging in sport-specific research; try to find those mutually valuable questions. The second point I would make is to embrace modern technology: AI is here, and it can be very powerful, especially when we have that AI-human interaction.
And then lastly, to drive that sustained engagement during your online study, try to immerse your participants and make it as real as possible, either through gamified effects or by literally taking the game to the athletes. So that's all I have for you today. Thanks so much. I'm looking forward to your questions and feedback.
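As context for the calibration arithmetic mentioned above: once a screen-calibration step has established how many pixels correspond to one centimetre on a given participant's display, sizing the ring and its gap is a single scaling step. A minimal sketch under that assumption; the calibration value and sizes below are hypothetical.

```typescript
// Minimal sketch of the pixel/centimetre scaling behind a Landolt C stimulus.
// `pxPerCm` would come from a screen-calibration step (e.g., the participant
// matches an on-screen bar to a real object); the value here is hypothetical.
function cmToPx(sizeCm: number, pxPerCm: number): number {
  return Math.round(sizeCm * pxPerCm);
}

const pxPerCm = 37.8;                        // example calibration result
const ringDiameterPx = cmToPx(2.0, pxPerCm); // a 2.0 cm ring
const gapPx = cmToPx(0.4, pxPerCm);          // a 0.4 cm gap; 0.1 cm steps matter
console.log({ ringDiameterPx, gapPx });      // { ringDiameterPx: 76, gapPx: 15 }
```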

21:08
Zach, that was lovely, thank you. I still find the research that you do utterly astonishing. Everybody else in the chat: if you've got questions for Zach, please put them in the Q&A. If you've just got comments and want to tell Zach what you learned from his research today, please put that in the chat; it's nice for him to hear what you've taken away. I learned so many things. I love the messages you have about collaborating with the participant, to make it interesting and valuable to them as well as interesting and valuable to you; that's a true partnership, so thank you for that message. And I also love the cleverness of finding situations where there are genuine behavioural asymmetries, where you know how to do something but you don't know what it looks like, or you know what something looks like but you don't know how to do it. I think those are such interesting real-life situations to exploit for our research. We see it in language study as well, when you're looking at bilinguals and non-bilinguals, or specialists with very niche languages; I remember reading a study about magicians, who use very specific words to mean a different thing from what the rest of us use a word to mean. So thank you for bringing all of those ideas together. In terms of a specific question for you... oh, I don't know. How do you handle handedness in this? Because of course some people are right-handed and some people are left-handed. Does that change how they process these videos?

22:40
Yeah, that's a great question. We tried to address handedness in a couple of different ways. The first was that we had the two video cameras from two different perspectives, so whether you hit one way or the other, all the athletes would be shown video from their specific physical perspective. That was the first thing we did for the hitters. The second was with the pitchers. You know how I was talking about how, with motor simulation, we engage our motor system? What can be interesting is that we all have more experience with one hand than the other, and pitchers all throw with one arm. So what we were interested in was: would right-handed pitchers make better predictions when watching someone throwing with the right hand, and would left-handed pitchers be better at making predictions when watching someone who threw with their left hand? What we did was take video of a right-handed pitcher (actually, I have an extra slide here) and, again using PowerPoint, we just flipped the video. So it's the exact same pitcher with the exact same physical kinematics and the same movement timings, but now it appears that the athlete is throwing with the opposite hand. And this was really cool because of our pitchers, the people with movement experience. It's still unpublished, but we were also able to run this in person, so we ran it online and in person, and we had an overall N of almost 100. We showed a very interesting pattern where right-handed people picked up the model better when it appeared right-handed, and left-handed pitchers picked up the model better when it was left-handed, depending on the amount of information they had. So that was a really interesting finding for us. That is brilliant.
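Incidentally, the mirror flip described above, done in PowerPoint during stimulus creation, is a one-line operation on the web too. A sketch, assuming a hypothetical video element with id "pitchClip":

```typescript
// Mirror a pitch video horizontally: identical kinematics and timing,
// reversed apparent handedness. The study did this flip in PowerPoint;
// this is just the same idea in a browser.
const video = document.querySelector<HTMLVideoElement>("#pitchClip");
if (video) {
  video.style.transform = "scaleX(-1)"; // right-hander now appears left-handed
}
```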

24:53
I hadn't thought that you could do that, and yet it is so easy, and so perfectly controlled; it's the obvious solution. I love it. Thank you so much. There are a couple of questions in the Q&A from Ashley: how do you think some of your prediction work can translate to other sports and domains?

25:17
Absolutely. I think that's one of the really exciting things about action prediction: coaches and commentators will talk about "look at that great read that athlete made"; they just have such great sport vision, they're able to know where that teammate is going to be, and they make those no-look passes. That can apply in pretty much every different sport context. If you're in football, or soccer, as in the UK for example, there's actually been quite a bit of action prediction research with goalkeepers in penalty kicks. You can imagine that sometimes the World Cup is on the line, and that goalkeeper needs to predict which way the ball is going to go and which way they need to dive. Using that same approach, we can create those same sorts of tasks to see whether these goalkeepers are better able to predict and jump in the right direction, one way or the other. That's just one example, but yeah, it can apply to a lot of different sports situations.

26:25
But then it would also apply to driving, right? I drive every day, and I'm constantly having to predict where people are going to move their cars, or where bicycles or pedestrians are going to go.

26:36
Absolutely, yeah. That's one of the most exciting things about action prediction: once you start to notice it, you realise just how much of it is around us. If you're driving and you start to notice a car drifting into your lane, but there's no signal, you're immediately thinking, oh, this person might not be that good a driver, and they're also going to try and cut me off. Those are thoughts you already have, and then we change our behaviour, maybe putting on the brakes a bit and giving that driver more space. And we picked up on that from maybe a three-foot deviation in a driving path. So yeah, the possibilities for action prediction are many, and online platforms allow us to explore this much more efficiently. It can be really hard otherwise: we did take 95 people through this action prediction task in person, and the amount of time that took, versus doing that same process online, is not comparable.

27:44
I'm glad being able to take research online has been helpful, and that it has enriched the datasets for the greater good of the science. Fantastic. There was another question from Ashley: have you or anyone else looked at using WebGL or Unity? I don't think you've used WebGL or Unity, have you?

28:03
I haven’t yet.

28:05
No. And actually, the Gorilla game builder that we were presenting earlier (I don't know if you were here then) uses WebGL. So I think the animations that Zach had to create in PowerPoint for this study would now be really easy to do, because you just put the image in and do the animation live in Gorilla fairly straightforwardly. We have also hosted Unity games in Gorilla; we're working on another one at the moment, in fact. For richer, more complicated games it's then a bit harder, because you don't get all the behavioural measures automatically, so you have to layer on the different tech depending on exactly the outcome that you're trying to achieve. Ashley, thank you for the questions. Zach, thank you so much for your time today. That was fascinating.
