Zach Besler — University of British Columbia
Zach Besler 0:00
Thanks so much. Well, thanks so much, Jo, for the opportunity to present today. It’s been fantastic to hear all the other talks. I’m interested in learning and predictions; I’ve already learned a lot, and this has far exceeded my predictions, so we’re off to a great start today. Before we get started, I wanted to acknowledge that where I do my research, the University of British Columbia, is situated on the traditional, ancestral, and unceded territories of the Musqueam, Tsleil-Waututh, and Squamish First Nations peoples. This land has been a land of learning for thousands of years, so we’re very excited to engage with that space and use that space to learn more.
So what am I interested in? Well, the big overview for our research in my lab, the Motor Skills Lab, is how our own motor experiences impact how we learn from and make predictions about other people. An overall process that might explain why this happens is called motor simulation: if we’re watching somebody else’s actions, either to learn, or to try and predict what someone’s going to do next, we can rely on our own motor system while we’re watching. Through the shared pathways in the brain between observation (watching) and action (doing), somehow, some way, our brains are prepared to execute those same actions. So we can use signals from our own body to provide insight into how other people might have accomplished those movement goals, and how we might accomplish those same movement goals moving forward. These are really interesting topics for us, and they give us that arc from "imagine this" to "what was that", and why we explore juggling and baseball and all kinds of cool things online. Not everyone knows how to juggle.
So, as Jo was mentioning earlier, we designed an online experiment to teach people how to juggle using a couple of different techniques. But we’re also interested in skilled performance and being able to predict the actions of other people, and then also some eye tracking and dynamic visual acuity tasks. I’ll start with a bit of a disclaimer here: although we have been collecting data for a couple of years, the data are still unpublished. So what I’ll be talking about, and mostly framing this presentation around, are some of the challenges with conducting online research, some of the things that I’ve learned along the way to help anyone else who’s starting online research for the first time, and some of the tools we used to make this process just a little bit easier. We can look at this as the overview for this topic, from fundamental vision to applied sport. And I am Canadian, so I’ll be talking about the Canadian perspective as well. For each of these three tasks: we had juggling, to assess action observation and motor imagery; we used a baseball pitch recognition task for action prediction; and we used the Landolt C task for dynamic visual acuity. So I’ll talk about how we created those stimuli to make them engaging and sustain really active participation during our online studies, and then some of the cool online task features in Gorilla that we used in each one of these studies to really maximise our effects.
Awesome, sweet. So this is a picture of our Motor Skills Lab, and lab life in the before times was pretty nice, because we have this really nice projector screen that we can show videos on. Everything is nice and neatly controlled, as everybody else has been talking about. But it does have its drawbacks, in that it’s hard for us to recruit a large sample.
And when we didn’t have the option to use the lab, we had to adapt our research for a changing world, and we definitely went online. We had to basically start doing online research from scratch; it wasn’t anything that any of us in our department had really explored before, and so I’m here to talk about my learning process with that. So first off, watching and imagining the actions of others. We used a juggling task here; essentially, we tried to teach people how to juggle online. This was during the early parts of the pandemic as well, when lots of people had time on their hands and wanted to learn new things, so we thought that might be a nice task flow.
So from the research perspective, we were trying to find out the effects of motor imagery on confidence and learning. If we imagine what somebody else’s movements will feel like, how does that influence how well we think we would do at that task as well? What’s really neat about this is you can almost call it the "watching the Olympics" effect. We’ve all watched the Olympics; we can see some of those swimmers in the pool just making it look so easy, effortlessly gliding through the water. When we watch the Olympics on TV we’re enthralled, or fascinated, and we also think, yeah, I could do that too. And it’s not until we get about 14 metres down the lane in our recreational swimming pool that we start to seriously reevaluate that initial prediction of our own abilities. That’s kind of what we were trying to get at here with juggling, where we had one group of participants just watch juggling actions, and the other group watched the action and then imagined, immediately after, what it would feel like to do that same action themselves, to try and see if there were any differences in their own self-perceived ratings of confidence. And then after the learning, we looked to see who could actually juggle after training, which has made for some interesting trends so far.
But I’ll be talking more specifically about how we made this in the first place. What we used was a head-mounted GoPro, and then a concurrent tripod setup at the same time, so we had first- and third-person perspective video that we could use in different types of trials, because motor imagery effects can differ depending on the physical perspective that we take. And what was really neat is that when you’re watching video from that head-mounted GoPro, it’s a very immersive experience; you can see the balls being juggled right in front of you, and that really helps keep all of our participants engaged in that online environment.
So then you get your second problem, which is: okay, we’re trying to teach juggling online, but how do you measure imagination online? What we did was assess the duration of imagery, using a spacebar press-and-hold function in Gorilla. Basically, the group that was engaging in imagery would press and hold the spacebar when they were beginning to imagine, and then release the spacebar when they were done imagining the task. And as a key-press control, we had the people who were not engaging in motor imagery hold the spacebar down for a set period of time that we would tell them to hold for. I have some notes here on how we made that keyboard hold-release response happen, and there’s a great QR code here, which is the Gorilla Support page for keyboard hold-release. I might have been in the top 1% of users for time spent on that support page; it was very helpful, and we made it work. Sweet. We’ll get into our second task here, which was the sport-specific action prediction task with baseball players.
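As a sketch of the analysis side of that press-and-hold measure (this is illustrative Python, not Gorilla’s actual data format), the imagery duration is just the time between each spacebar keydown and the matching keyup in the response log:

```python
# Minimal sketch: computing motor imagery durations from spacebar
# keydown/keyup timestamps, as a keyboard hold-release response might
# log them. The event format here is a hypothetical simplification.

def imagery_durations(events):
    """events: list of (timestamp_ms, event_type) tuples, where
    event_type is 'keydown' or 'keyup' for the spacebar.
    Returns a list of hold durations in milliseconds."""
    durations = []
    press_start = None
    for t, kind in events:
        if kind == 'keydown' and press_start is None:
            press_start = t                      # imagery begins
        elif kind == 'keyup' and press_start is not None:
            durations.append(t - press_start)    # imagery ends
            press_start = None
    return durations

log = [(1000, 'keydown'), (4500, 'keyup'), (6000, 'keydown'), (9200, 'keyup')]
print(imagery_durations(log))  # [3500, 3200]
```

The same calculation works for the control group, where the duration should simply match the instructed hold time.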
And this was really fun, because we got to do some video collection in the wild and really try to take the game to the people. When we’re talking about sport-specific research, we’re taking a bit of a pivot away from novice people learning new tasks like juggling, and we start getting into a very specific population of people who have a lot of experience and expertise with a certain task. And it’s really important for us as researchers to understand how different sports teams and sport clubs value the same types of questions that we might be asking.
So I have this kind of outline of how to set up sport-specific research, in our case action prediction research, that coaches and athletes actually want to do. The first thing to do is find a progressive sport club, and there’s a growing movement of embracing technology and new ways of thinking in sports, which is fantastic. Then it’s "seek first to understand, and then to be understood". What skills are valuable to coaches, if we’re interested in skill acquisition research? And what processes and experiences might we value and want to look at as researchers? Then take a step back and ask: if we’ve got this dialogue between the researchers and the coaches, are we using different words but maybe talking about the same things? Once you’ve got that down, you want to take the best videos, and this is cool, because we’ll kind of add on to the AI conversations that we’ve been having and some of the technological advances that allow us to ask really cool questions. And then lastly, we need to be accessible, and this is where online research platforms really come in and are very helpful.
So coming back to "seek to understand, and then to be understood": I’m talking about action prediction, but what is that, really, and where did it come from? Action prediction is our ability to watch and anticipate, or create an expectation for, what the outcome of another person’s action is going to be. In a baseball context specifically, for those of you who aren’t as familiar with baseball, there’s a pitcher, who throws the ball, and then the hitter must hit the ball. And those are two relatively different populations of people; there isn’t a lot of overlap. So we get two subpopulations, one with motor experience and one with visual experience. From a researcher’s perspective, we were interested to see if maybe people with motor experience were engaging in motor simulation and using that to guide their action predictions, and what the strategies of people with visual experience were. That sounds great in the research realm, but the way that we designed the experiment also allowed coaches to answer a question that they were interested in, which was: who is good at action prediction, who can pick up what the pitcher is trying to throw? Because the coaches value that ability to extract information early to make good sport-specific decisions. So we were able to feed two birds with one stone, as it were.
So just talking about how to take the best videos and what our process was for these. What’s really exciting about being in 2022 is that the future is now, just as we’ve been seeing with some of the other talks on the possibilities of AI, and AI plus humans are incredible. So here are some clips of how we did video collection in the wild and, again, tried to take the research onto the field and make it relevant for the athletes that we’re studying. We have here our model pitcher, who will be throwing the pitches. And for this we used an AI-based markerless biomechanics tracker called ProPILOT AI, so we were able to pick up on the actual kinematics of the body as he was throwing different pitches. Then we could cross-reference: if he was deceptive with his movements, how was he able to deliver those pitches, and who is best attuned to the differences between those kinematics?
Having that partnership with the UBC Baseball sport club, we were able to set up quite a few cameras. They have four AI-enabled cameras that surround their entire stadium; we had two cameras immediately on field, in the perspective of a hitter, so someone who would be facing this pitcher; we had our markerless AI biomechanics on the left here; and we had other video on the right as well. This allowed us to collect the highest quality video possible and become experts on exactly what was going on. We also had a ball tracker, so we knew exactly how fast the ball was travelling, how much it was spinning, and where it ended up. And the AI-enabled cameras all around the stadium were already in place. Being able to use these different sources of video allows us to ask even more questions. We were focused on action prediction, but the future directions of collaborating with sport clubs are endless, from contextual knowledge to being able to see whether we were in the right defensive system or not; there are really a lot of opportunities there. And then lastly, being accessible and using the online research platform of Gorilla allowed us to collect data from a lot of different athletes all at the same time. So this is a clip of what it actually looks like in Gorilla. You’re going to see a pitcher throw a type of pitch, and if you are an expert in the sport, you’ll be able to classify it as one of three different pitches. So here we go. You can see it’s quite abrupt, it gets on you, but we have to be able to make these predictions quickly to be able to respond to them in an effective way. I’m going to touch on one more task that we created online, and that was the dynamic visual acuity test using the Landolt C. For those of you who maybe aren’t as familiar with the Landolt C, it’s a task for us to measure dynamic visual acuity.
Static visual acuity is kind of like those eye charts in the optometrist’s office, with the big E’s and letters. But with dynamic visual acuity, we want to see how well you can resolve different gap widths when objects are moving, which can be helpful in sport as well. So what we did here was we have different rings with gaps, and the gaps can be oriented in different positions, and then whoever is doing the task has to indicate which direction the opening is. Now, there is code that has been written in MATLAB to create these tasks, and there are a lot of in-lab opportunities to do this research. But the challenge here was: how do I create a no-code, no-download, universally compatible task? With the other tasks, creating those videos and uploading them online was challenging in its own right. But this was the part as a researcher where I felt like I was stuck on a desert island, and all I had was a rock. Because I was thinking, how on earth am I going to get a little ring with the right gap size and just have it show up across the screen? How am I going to come up with that? And I thought about it and then realised, well, if PowerPoint is my rock, I might actually be able to get off this island. So what I’m showing you here is a bit of a labour of love, where I was able to create this task using only PowerPoint and Gorilla.
So to start, we used the screen calibration feature, which was in closed beta when we started, so it was really exciting to be one of the first people to use it. We needed this to know what the size of that gap would be in Gorilla versus what it would look like on my own screen. Here’s the Support page for the screen calibration test, and I strongly recommend that those of you doing this type of research check that one out. So again, I needed to find out what size my stimuli were showing up at in Gorilla, and then what size I was creating in PowerPoint at different calibrations. We had to do some cross-referencing; I had some physical rulers on my laptop screen. And then once we figured out that relationship, it was time to do some pixel-to-size, centimetre conversions. Then we were able to create our different ring sizes, which is important because, as you can see here in these last two pictures, they have the same size of ring, but for the right one the gap size is just a little bit bigger, and that’s what we’re really interested in. So down to 0.1 centimetres, it matters, and we were able to fine-tune that using the screen calibration in Gorilla. I’ll just finish off today by talking about some of the considerations specific to Canada with online research, and then I’ll wrap up, as I know we’re running long here. In terms of the Canadian research process, typically we would recruit either online or using posters, but especially at the beginning of the pandemic, everyone was at home, so we were recruiting online.
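To make that cross-referencing concrete, here is an illustrative sketch (the numbers are hypothetical, not our actual calibration data) of converting a target Landolt C gap size in centimetres into on-screen pixels, once screen calibration has given you a pixels-per-centimetre factor for a participant’s display:

```python
# Illustrative sketch: converting a target gap size (cm) into pixels,
# given a pixels-per-centimetre factor from screen calibration.
# The calibration value below is a made-up example, not real data.

def gap_in_pixels(gap_cm, pixels_per_cm):
    """Round to the nearest whole pixel, the smallest unit we can draw."""
    return round(gap_cm * pixels_per_cm)

pixels_per_cm = 37.8  # hypothetical value for one participant's screen
for gap_cm in (0.1, 0.2, 0.3):
    print(f"{gap_cm:.1f} cm gap -> {gap_in_pixels(gap_cm, pixels_per_cm)} px")
```

The rounding step is why the calibration matters: at typical screen resolutions, a 0.1 cm difference in gap size is only a few pixels, so an uncalibrated screen can easily wash out the difference between two acuity levels.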
But that recruitment process, as I mentioned, can start earlier by engaging with specific sport clubs and organisations, because you’ll have a population of people who are really interested in doing the research you want to do. Then the participant sends an email to us as the research lab, indicating that they’re interested in participating in the research, so we can send them a participant demographics tool. But this was another challenge, because under Canadian privacy legislation, all participant demographic information has to be stored in Canada, which means it had to be stored in Qualtrics. We would then provide participants with a participant code, and the participants would use that code to ultimately complete their experiment in Gorilla. There are many steps along this path where you can lose participants, but the approach that we took is that it starts with a quality project: if you have high quality projects and designs that are interesting and engaging for your participants, and they can see the evidence of that, we can carry participants through these steps a little bit better. So I’ll just wrap up today with my last slides to say, again, recruitment and buy-in start much earlier in the process, particularly when we’re engaging in sport specifically; try to find those mutually valuable questions. The second point I would make is to embrace modern technology: AI is here, and it can be very powerful, especially when we have that AI-human interaction. And lastly, to drive sustained engagement during your online study, try to immerse your participants and make it as real as possible, either through gamified effects or by literally taking the sport, the game, to the athletes. So that’s all I have for you today. Thanks so much. I’m looking forward to your questions and feedback.
Zach, that was lovely. Thank you. I still find the research that you do utterly astonishing. Everybody else in the chat: if you’ve got questions for Zach, please put them in the Q&A. If you’ve just got comments and want to tell Zach what it is that you learned from his research today, please put that in the chat; it’s nice for him to hear what you’ve taken away. I learned so many things. I love the messages you have about collaborating with the participant, to make it interesting and valuable to them as well as interesting and valuable to you; that’s a true partnership. So thank you for that message. And I also love the cleverness of finding situations where there are genuine behavioural asymmetries, where you know how to do something but you don’t know what it looks like, or you know what something looks like but you don’t know how to do it. I think those are such interesting real-life situations to exploit for our research. We see it in language study as well, when you’re looking at bilinguals and non-bilinguals, or specialists with very niche languages. I remember reading a study about magicians who use very specific words to mean a different thing from what the rest of us use a word to mean. So thank you for bringing all of those ideas together. In terms of a specific question for you, oh, I don’t know: how do you handle handedness in this? Because of course, some people are right-handed and some people are left-handed. Does that change how they process these videos?
Yeah, so that’s a great question. We tried to address handedness in a couple of different ways. The first thing was we had the two video cameras from two different perspectives, so whether you hit one way or the other, all the athletes would be shown video from their specific physical perspective. That was the first one we did, if they were hitters. The second one was with the pitchers. You know how I was talking about, with motor simulation, we engage our motor system? What can be interesting is we all have more experience with one hand than the other, and pitchers all throw with one arm. So what we were interested in was: would right-handed pitchers make better predictions when they were watching someone throwing with a right hand, and would left-handed pitchers be better at making predictions when they were watching someone who threw with their left hand? What we did was we took video of a right-handed pitcher, and, here, actually, I have an extra slide, again using PowerPoint, we just flipped the video. So it’s the exact same pitcher with the exact same physical kinematics, the same movement timings, but now it appears that the athlete is throwing with the opposite hand. And this was really cool because of our pitchers, so the people with movement experience. It is still unpublished, but we were also able to run this in person, so we ran it online and in person, and we had an overall N of almost 100. And we found a very interesting pattern, where right-handed people picked up the model better when the model appeared right-handed, and the left-handed pitchers picked up the model better when it was left-handed, depending on the amount of information they had. So that was a really interesting finding for us. That is brilliant.
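The flip itself was done in PowerPoint, but as a dependency-free sketch of the same idea, mirroring each video frame horizontally makes a right-handed pitcher appear left-handed without changing the kinematics or timing. Here frames are plain nested lists of pixel values, a deliberate simplification:

```python
# Sketch of the mirroring idea behind the flipped-video stimuli
# (the authors used PowerPoint; this is just the equivalent operation).
# A frame is a list of rows; reversing each row flips left-to-right.

def mirror_frame(frame):
    """Return the frame flipped horizontally (left-right mirror)."""
    return [row[::-1] for row in frame]

frame = [[1, 2, 3],
         [4, 5, 6]]
print(mirror_frame(frame))  # [[3, 2, 1], [6, 5, 4]]
```

Applied to every frame of a video, this yields a stimulus that is perfectly matched to the original in timing and movement, differing only in apparent handedness.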
I hadn’t thought that you could do that, and yet it is so easy and so perfectly controlled; it is the obvious solution. I love it. Thank you so much. There are a couple of questions in the Q&A from Ashley: how do you think some of your prediction work can translate to other sports and domains?
Absolutely. I think that’s one of the really exciting things about action prediction: coaches and commentators will talk about, look at that great read that athlete made, they just have such great sport vision, they’re able to know where that teammate is going to be, and they make those no-look passes. That can apply in pretty much every different sport context. If you’re in football, or soccer, in the UK for example, there’s actually been quite a bit of action prediction research with goalkeepers in penalty kicks. You can imagine, sometimes the World Cup is on the line, and that goalkeeper needs to predict: which way is the ball going to go, which way do I need to dive? Using kind of that same approach, we can create those same sorts of tasks to see whether these goalkeepers are better able to predict and jump in the right direction, one way or the other. That’s just one example, but yeah, it can apply to a lot of different sports situations.
But then it would also apply to driving, right? I drive every day, and I’m constantly having to predict where people are going to move their cars, or where bicycles or pedestrians are going to go.
Absolutely, yeah. That’s one of the most exciting things about action prediction: once you start to notice it, you realise just how much of it is around us. If you’re driving and you start to notice a car drifting into your lane, but there’s no signal, you’re immediately thinking, oh, this person might not be that good of a driver, and they’re also going to try and cut me off. Those are thoughts that you already have, and then we can change our behaviour, maybe by putting on the brakes a bit and giving that driver more space. And we picked up on that from maybe a three-foot deviation in a driving path. So yeah, the possibilities for action prediction are many, and online platforms allow us to really explore this more efficiently. It can be really hard: we did take 95 people through this action prediction task in person, and the amount of time that took versus doing that same process online is not comparable.
I’m glad being able to take research online has been helpful and has enriched the datasets for the greater good of the science. Fantastic. There was another question from Ashley: have you, or anyone else, looked at using WebGL or Unity? I don’t think you’ve used WebGL or Unity, have you?
I haven’t yet.
No. And actually the Gorilla Game Builder that we were presenting earlier, I don’t know if you were here then, uses WebGL. So I think the animations that Zachary had to create in PowerPoint for this study would now just be really easy to do, because you just put the image in and do the animation live in Gorilla fairly straightforwardly. We have also hosted Unity games in Gorilla; we’re working on another one at the moment, in fact. For richer, more complicated games, it’s then a bit harder because you don’t get all the behavioural measures automatically, so you have to layer on the different tech depending on exactly the outcome that you’re trying to achieve. Ashley, thank you for the questions. Zach, thank you so much for your time today. That was fascinating.