Zach Besler — University of British Columbia
Full Transcript:
Zach Besler 0:00
Thanks so much. Well, thanks so much, Jo, for the opportunity to present today. And it's been fantastic to hear about all the other talks. I'm interested in learning and predictions; I've already learned a lot, and this has far exceeded my predictions, so we're off to a great start today. Before we get started, I wanted to acknowledge that where I do my research, at the University of British Columbia, is situated on the traditional, ancestral and unceded territories of the Musqueam, Tsleil-Waututh and Squamish First Nations peoples. And the land has been a land of learning for thousands of years, so we're very excited to engage with that space and use that space to learn more.
So what am I interested in? Well, the big overview for our research in my lab, the Motor Skills Lab, is how our own motor experiences impact how we learn from and make predictions about other people. An overall process that might explain why this happens is called motor simulation, where if we're watching somebody else's actions, either to learn or to try and predict what someone's going to do next, we can rely on our own motor system while we're watching. Through the shared pathways in the brain between the observation of watching and the action of doing, somehow, some way, our brains are prepared to execute those same actions. And so we can use signals from our own body to provide insight into how other people might have accomplished those movement goals, and how we might accomplish those same movement goals moving forward. So these are really interesting topics for us. And this gives us kind of that arc from "imagine this" to "what was that?", and why we explore juggling and baseball and all kinds of cool things online. Not everyone knows how to juggle.
So, as Jo was mentioning earlier, we had designed an online experiment to teach people how to juggle using a couple of different techniques. But we're also interested in skilled performance and being able to predict the actions of other people, and then also some eye tracking and dynamic visual acuity tasks. I will start with a bit of a disclaimer here that, although we have been collecting data for a couple of years, the data are still unpublished. So what I'll be talking about, and mostly framing this presentation around, are some of the challenges with conducting online research, some of the things that I've learned along the way to help anyone else who's starting online research for the first time, and some of the tools we used to make this process just a little bit easier. So we can look at this as the overview for this topic, from fundamental vision to applied sport. And I am Canadian, so I'll be talking about the Canadian perspective as well. For each of these three tasks: we had juggling to assess action observation and motor imagery, we used a baseball pitch recognition task for action prediction, and we used the Landolt C task for dynamic visual acuity. I'll talk about how we created those stimuli to make them engaging and sustain really active participation during our online studies, and then some of the cool online task features in Gorilla that we used in each of these studies to really maximise our effects.
3:42
Awesome. Sweet. So this is a picture of our motor skills lab, and lab life in the before times was pretty nice, because we have this really nice projector screen that we can show videos on. Everything is nice and neatly controlled, as everybody else has been talking about. But it does have its drawbacks, in that it's hard for us to recruit a large sample.
And when we didn't have the option to use the lab, we had to adapt our research for a changing world. We definitely went online, and so we had to basically start doing online research from scratch. It wasn't anything that any of us in our department had really explored before, and so I'm here to talk about my learning process with that. So first off, watching and imagining the actions of others. We used a juggling task here, and essentially we tried to teach people how to juggle online. This was during the early parts of the pandemic as well; lots of people had time on their hands and wanted to learn new things, so we thought that might be a nice task flow.
So from the research perspective, we were trying to find out what the effects of motor imagery are on confidence and learning. If we imagine what somebody else's movements will feel like, how does that influence how well we think we would do at that task as well? And what's really neat about this is you can almost call it the watching-the-Olympics effect. We've all watched the Olympics; we can see some of those swimmers in the pool just making it look so easy, effortlessly gliding through the water. And when we watch the Olympics on TV, we're enthralled, fascinated, and we also think, yeah, I could do that too. And it's not until we get about 14 metres down the lane in our recreational swimming pool that we start to seriously re-evaluate that initial prediction of our own abilities. And so that's kind of what we were trying to get at here with juggling, where we had one group of participants just watch juggling actions, and the other group watched the action and then imagined, immediately after, what it would feel like to do that same action themselves, to try and see if there are any differences in their own self-perceived ratings of confidence. And then after the learning, we looked to see who could actually juggle after training, which made for some interesting trends so far.
But I'll be talking more specifically about how we tried to make this in the first place. What we used was a head-mounted GoPro, together with a concurrent tripod setup at the same time, so we had first- and third-person perspective video that we could use in different types of trials, because motor imagery effects can be different depending on the physical perspective that we take. And what was really neat is that when you're watching videos from that head-mounted GoPro, it's a very immersive experience; you can see the balls being juggled right in front of you, and that really helps keep all of our participants engaged in that online environment.
So then you get your second problem, which is, okay, we're trying to juggle online, but how do you measure imagination online? What we did was assess the duration of imagery using a spacebar press-and-hold function in Gorilla. So basically, the group that was engaging in imagery would press and hold the spacebar when they were beginning to imagine, and then they released the spacebar when they were done imagining the task. And for a key-press control, we had the people who were not engaging in motor imagery hold the spacebar down for a set period of time that we would tell them to hold for. I have some notes here on how we made that keyboard hold-release response happen, and there's a great QR code here, which is the Gorilla support page for keyboard hold-release. I might have been in the top 1% of users for time spent on that support page; it was very helpful, and we made it work. Sweet. We'll get into our second task here, which was the sport-specific action prediction task with baseball players.
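The logic behind that press-and-hold measure is simply the time between the spacebar going down and coming back up. As a minimal illustrative sketch (this is not Gorilla's actual implementation; the event format here is assumed):

```python
def imagery_duration(events):
    """Return the imagined-movement duration in seconds from a list of
    (timestamp_s, event_type) tuples for the spacebar, where event_type
    is "keydown" or "keyup". Uses the first complete press-and-hold."""
    down = next(t for t, e in events if e == "keydown")
    up = next(t for t, e in events if e == "keyup" and t > down)
    return up - down

# Example: participant begins imagining at 2.10 s, finishes at 5.85 s
events = [(2.10, "keydown"), (5.85, "keyup")]
print(imagery_duration(events))  # 3.75 s of imagery
```

The same duration can then be compared against the set hold time given to the key-press control group.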
8:22
And this was really fun, because we got to do some video collection in the wild and really try and take the game to the people. So when we're talking about sport-specific research, now we're taking a bit of a pivot away from novice people learning new tasks like juggling, and we start getting into a very specific population of people who have a lot of experience and expertise with a certain task. And it's really important for us as researchers to understand how different sports teams and sport clubs value the same types of questions that we might be asking.
So I have this kind of outline of how to set up sport-specific research, in our case, action prediction research that coaches and athletes actually want to do. The first thing to do is find a progressive sport club, and there's a growing movement of embracing technology and new ways of thinking in sports, which is fantastic. Then it's seek first to understand and then to be understood. So what skills are valuable to coaches, if we're interested in skill acquisition research? And what processes and experiences might we value and want to look at as researchers? Then take a step back and ask, in this dialogue between the researchers and the coaches, are we using different words but maybe talking about the same things? Once you've got that down, you want to take the best videos, and this is cool, because it adds on to the AI conversations that we've been having, some of the technological advances that allow us to ask really cool questions. And then lastly, we need to be accessible, and this is where online research platforms really come in and are very helpful.
Coming back to seek first to understand and then to be understood: I'm talking about action prediction, but what is that, really, and where did that come from? Action prediction is our ability to watch and anticipate, or create an expectation for, what the outcome of another person's action is going to be. In a baseball context specifically, for those of you who aren't as familiar with baseball, there's a pitcher who throws the ball, and then the hitter must hit the ball. And those two populations are relatively different people; there isn't a lot of overlap. So we get two subpopulations, one with motor experience and one with visual experience. From a researcher's perspective, we were interested to see if maybe people with motor experience were engaging in motor simulation and using that to guide their action predictions, and what the strategies of people with visual experience were. And that sounds great in the research realm, but the way that we designed the experiment also allowed coaches to answer a question that they were interested in, which was: who is good at action prediction, who can pick up what the pitcher is trying to throw? Because the coaches value that ability to extract information early, to make good sport-specific decisions. So we were able to feed two birds with one stone, as it were.
So just talking about how to take the best videos and what our process was for these. What's really exciting about being in 2022 is that the future is now; just as we've been seeing with some of the other talks, the possibilities of AI, and of AI plus humans, are incredible. So here are some clips of how we did video collection in the wild, and again tried to take the research onto the field and make it relevant for the athletes that we're studying. We have here our model pitcher, who will be throwing the pitches. And for this we used an AI-based markerless biomechanics tracker called ProPILOT AI. So we were able to pick up on the actual kinematics of the body as he was throwing different pitches, and then we could cross-reference: if he was deceptive with his movements, how was he able to deliver those pitches, and who is best attuned to the differences between those kinematics?
12:56
But having that partnership with the UBC Baseball sports club, we were able to set up quite a few cameras. They have four AI-enabled cameras that surround their entire stadium, and we had two cameras immediately on field, to be in the perspective of a hitter, someone who would be facing this pitcher. We had our markerless AI biomechanics on the left here, and we had other video on the right as well. This allowed us to collect the highest quality video possible and become experts on exactly what was going on. We also had a ball tracker, so we knew exactly how fast the ball was travelling, how much it was spinning, and where it ended up. So with the AI-enabled cameras placed all around the stadium, being able to use these different sources of video allows us to ask even more questions. We were focused on action prediction, but the future directions of collaborating with sports clubs are endless, from contextual knowledge, being able to see whether we were in the right defensive system or not, there's really a lot of opportunity there. And then lastly, being accessible and using the online research platform of Gorilla, which allowed us to collect data from a lot of different athletes all at the same time. So this is a clip of what it actually looks like in Gorilla. You're going to see a pitcher throw a type of pitch, and if you are an expert in the sport, you'll be able to classify it as one of three different pitches. So here we go. You can see it's quite abrupt, it gets on you, but we have to be able to make these predictions quickly to be able to respond to them in an effective way. I'm going to touch on one more task that we created online, and that was the dynamic visual acuity test using the Landolt C. For those of you who maybe aren't as familiar with the Landolt C, it's a task for us to measure dynamic visual acuity.
Whenever we're talking about visual acuity: static visual acuity is kind of like those eye charts in the optometrist's office. But with dynamic visual acuity, we want to see how well you can resolve different gap widths when objects are moving, which can be helpful in sport as well. What we did here was we have different rings with gaps, and the gaps can be oriented in different positions, and then whoever is doing the task has to indicate which direction the opening is. Now, there is code that has been written in MATLAB to create these tasks, and there are a lot of in-lab opportunities to do this research. But the challenge here was: how do I create a no-code, no-download, universally compatible task? As with the other tasks, creating those videos and uploading them online was challenging in its own right, but this was the part as a researcher where I felt like I was stuck on a desert island, and all I had was a rock. Because I was thinking, how on earth am I going to get a little ring with the right gap size and just have it show up across the screen? How am I going to come up with that? And I thought about it and then realised, well, if PowerPoint is my rock, I might actually be able to get off this island. So what I'm showing you here is a bit of a labour of love, where I was able to create this task using only PowerPoint
16:57
and Gorilla. To start, we used the screen calibration feature, which was in closed beta when we started, so it was really exciting to be one of the first people to use this. We needed this to know what the size of that gap would be in Gorilla versus what it would look like on my own screen. Here's the support page for the screen calibration tests, and I strongly recommend that those of you doing this type of research check that one out. So again, I needed to find out what size my stimuli were showing up at in Gorilla, and what size I was creating in PowerPoint at different calibrations. We had to do some cross-referencing; I had physical rulers on my laptop screen, and then once we figured out that relationship, it was time to do some pixel-to-size conversions. So we used centimetre conversions, and then we were able to create our different ring sizes, which is important because, as you can see here, these last two pictures have the same size of ring, but the gap size for the right one is just a little bit bigger, and that's what we're really interested in. So down to the 0.1 centimetre, it matters, and we were able to fine-tune that using the screen calibration in Gorilla. I'll just finish off today by talking about some of the considerations specific to Canada with online research, and then I'll wrap it up, I know that we're going long here. In terms of the Canadian research process, typically we would recruit either online or using posters, but especially at the beginning of the pandemic, everyone was at home, so we were recruiting online.
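Coming back to the Landolt C sizing for a moment, the arithmetic behind those centimetre conversions can be sketched in a few lines. This is an illustrative reconstruction; the viewing distance and pixels-per-centimetre calibration factor below are hypothetical, not the study's actual values:

```python
import math

def gap_cm_from_arcmin(arcmin, viewing_distance_cm):
    """Physical gap size in cm that subtends `arcmin` minutes of
    visual angle at the given viewing distance."""
    theta = math.radians(arcmin / 60.0)
    return 2 * viewing_distance_cm * math.tan(theta / 2)

def gap_px(gap_cm, px_per_cm):
    """Convert a gap size in cm to whole on-screen pixels, using a
    pixels-per-cm factor obtained from screen calibration."""
    return round(gap_cm * px_per_cm)

# Hypothetical setup: 57 cm viewing distance, 38 px per cm display
cm = gap_cm_from_arcmin(5, 57)   # ~0.083 cm for a 5-arcmin gap
print(gap_px(cm, 38))
```

A 0.1 cm difference in gap size then translates directly into a few pixels, which is why the calibration step matters so much for this task.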
But that recruitment process, again, as I mentioned, can start earlier by engaging with specific sport clubs and organisations, because you'll have a population of people who are really interested in doing the research you want to do. Then what has to happen is the participant sends an email to us as the research lab, indicating that they're interested in participating in the research. Then we can send them a participant demographics tool. But this was another challenge, because with Canadian privacy legislation, all participant demographic information has to be stored in Canada, which means that it had to be stored in Qualtrics. So we would provide participants with a participant code, and then we would have the participants use that code to ultimately complete their experiment in Gorilla. And there are many steps along this path where you can lose participants. But the approach that we took is that it starts with a quality project: if you have high-quality projects and designs that are interesting and engaging for your participants, and they can see the evidence of that, we can carry participants through these steps a little bit better. So I'll just wrap up today with my last slides to say, again, that recruitment and buy-in start much earlier in the process; particularly when we're engaging in sport specifically, try to find those mutually valuable questions. The second point I would make is to embrace modern technology: AI is here, and it can be very powerful, especially when we have that AI-human interaction. And then lastly, to drive that sustained engagement during your online study, try to immerse your participants and make it as real as possible, either through gamified effects or by literally taking the sport game to the athletes. So that's all I have for you today. Thanks so much.
I'm looking forward to your questions and feedback.
21:08
Zach, that was lovely. Thank you. I still find the research that you do utterly astonishing. Everybody else in the chat, if you've got questions for Zach, please put them in the Q&A. If you've just got comments and want to say to Zach what it is that you learned from his research today, please put that in the chat; it's nice for him to hear what you've taken away. I learned so many things. I love the messages you have about collaborating with the participant, to make it interesting and valuable to them as well as interesting and valuable to you; that's a true partnership. So thank you for that message. And I also love the cleverness of finding situations where there are genuine behavioural asymmetries, where either you know how to do something but you don't know what it looks like, or you know what something looks like but you don't know how to do it. I think those are such interesting real-life situations to exploit for our research. And we see it in language study as well, when you're looking at bilinguals and non-bilinguals, or specialists with very niche languages. I remember reading a study about magicians who use very specific words to mean a different thing from what the rest of us use those words to mean. So thank you for bringing all of those ideas together. In terms of a specific question for you: oh, I don't know, how do you handle handedness in this? Because of course, some people are right-handed and some people are left-handed, and does that change how they process these videos?
22:40
Yeah, so that's a great question. We tried to address handedness in a couple of different ways. The first thing was we had the two video cameras from two different perspectives, so whether you hit one way or the other way, all the athletes would be shown video from their specific physical perspective. That was the first one we did, if they were hitters. The second was with the pitchers. So, you know how I was talking about, with motor simulation, we engage our motor system? What can be interesting is that we all have more experience with one hand than the other, and pitchers all throw with one arm. So what we were interested in was whether right-handed pitchers would make better predictions when they were watching someone throwing with a right hand, and whether left-handed pitchers would be better at making predictions when they were watching someone who threw with their left hand. What we did was take video of a right-handed pitcher, here, actually, I have an extra slide, and so, again using PowerPoint, we just flipped the video. It's the exact same pitcher with the exact same physical kinematics, same movement timings, but now it appears that the athlete is throwing with the opposite hand. And this was really cool because of our pitchers, so the people with movement experience. It is still unpublished, but we were also able to run this in person, so we ran it online and in person, and we had an overall N of almost 100. And we showed a very interesting pattern where right-handed people picked up the model better when it appeared right-handed, and the left-handed pitchers picked up the model better when it was left-handed, depending on the amount of information they had. So that was a really interesting finding for us. That is brilliant.
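The flipping manipulation described here amounts to a horizontal mirror of every video frame: the kinematics and timing are untouched, only left and right are swapped. A minimal sketch of the idea, with a toy "frame" of pixel labels standing in for real video data:

```python
def mirror_frame(frame):
    """Horizontally mirror one video frame (a list of pixel rows), so a
    right-handed pitcher appears left-handed while movement timing and
    kinematics stay identical."""
    return [row[::-1] for row in frame]

# Toy 2x3 frame: columns labelled Left / Middle / Right
frame = [["L", "M", "R"],
         ["l", "m", "r"]]
print(mirror_frame(frame))  # [['R', 'M', 'L'], ['r', 'm', 'l']]
```

Applying this to every frame of a clip gives a perfectly matched opposite-handed stimulus, which is what makes the comparison between right- and left-handed observers so well controlled.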
24:53
I hadn't thought that you could do that, and yet it is so easy and so perfectly controlled; it is the obvious solution. I love it. Thank you so much. There are a couple of questions in the Q&A from Ashley: how do you think some of your prediction work can translate to other sports and domains?
25:17
Absolutely. So I think that's one of the really exciting things about action prediction: coaches and commentators will talk about, look at that great read that athlete made, they just have such great sport vision, they're able to know where that teammate is going to be, and they make those no-look passes. That can apply in pretty much every different sport context. If you're in football, or soccer in the UK, for example, there's actually been quite a bit of action prediction research with goalkeepers in penalty kicks. You can imagine, sometimes the World Cup is on the line, and that goalkeeper needs to predict which way the ball is going to go, which way do I need to dive? And so, using that same approach, we can create those same sorts of tasks to see whether these goalkeepers are better able to predict and jump in the right direction, one way or the other. That's just one example, but yeah, it can apply to a lot of different sports situations.
26:25
But then it would also apply to driving, right? Like, I drive every day, and I'm constantly having to predict where people are going to move their cars, or where bicycles or pedestrians are going to go.
26:36
Absolutely, yeah. That's one of the most exciting things about action prediction: once you start to realise just how much of it is around us. If you're driving and you start to notice a car drifting into your lane, but there's no signal, you're immediately thinking, oh, this person might not be that good of a driver, and they're also going to try and cut me off. Those are already thoughts that you have, and then we can change our behaviour by maybe putting on the brakes a bit, giving that driver more space. And we picked up on that from maybe a three-foot deviation in a driving path. So yeah, the possibilities for action prediction are many, and online platforms allow us to really explore this more efficiently. It can be really hard; we did take 95 people through this action prediction task in person, and the amount of time that took versus us doing that same process online is not comparable.
27:44
I'm glad being able to take research online has been helpful and has enriched the datasets, for the greater good of the science. Fantastic. There was another question from Ashley: have you or anyone else looked at using WebGL or Unity? I don't think you've used WebGL or Unity, have you?
28:03
I haven’t yet.
28:05
No. And actually the Gorilla game builder that we were presenting earlier, I don't know if you were here then, uses WebGL. So I think the animations that Zach had to create in PowerPoint for this study would now just be really easy to do, because you just put the image in and do the animation live in Gorilla fairly straightforwardly. And we have also hosted Unity games in Gorilla; we're working on another one at the moment, in fact. For richer, more complicated games, it's then a bit harder, because you don't get all the behavioural measures automatically, so you have to layer on the different tech depending on exactly the outcome that you're trying to achieve. Ashley, thank you for the questions. Zach, thank you so much for your time today. That was fascinating.

