Casey L. Roark, PhD — University of Pittsburgh, Department of Communication Science & Disorders, Center for the Neural Basis of Cognition
Accurate and precise measurement of behavior is critical for understanding human cognition. In experimental contexts, we collect behavioral measures from participants such as reaction times of button presses. My work investigates how we learn to group complex objects in the sensory world into different categories. I leverage computational models of behavior that capitalize on reaction time information to reveal psychologically meaningful and distinct cognitive processes. Critically, these computational models rely on accurate and precise measurements of reaction time. Over the past two years, my lab has leveraged the Gorilla Experiment Builder and online recruitment strategies to better understand individual differences in learning and decision making. In this talk, I will discuss how online research has enabled examination of the mechanisms of time-sensitive learning and decision making in a diverse, global population.
Full Transcript:
Casey Roark 0:00
Okay. Oh, I’m sure…there we go. Okay. So all right, so I’m going to talk to you a little bit about my research on learning and decision making, and how I have leveraged online methods to study that. So first, I’m going to give a brief overview of the type of work that I do, given this broad audience that we have here. And then I’m going to give you details about a specific study that I first conducted in person, and then used online research methods to conduct a replication in a wider sample.
Right. So my work focuses on how we learn about the sensory world, and particularly how we organise that world into categories. So your knowledge of the category dog enables you to quickly identify this creature here as a dog. And similarly, you know that this creature is also a dog, even though it has this fancy little zebra print coat on it. All right, so what about this creature? What is this? It might have taken you just a split second longer to recognise that this is actually a horse and not a zebra, because even though it has this pattern on its coat, it is in fact a horse underneath there. And finally, you can quickly identify this last creature as a zebra, based on its coat and other features.
So you’re able to leverage your existing category knowledge to generalise to things that you’ve probably never seen before, like this dog or a horse in a zebra print jacket. And this seems maybe slightly trivial, but a computer trying to solve this problem, or a young child or infant, might have trouble telling us that this is a horse or a dog instead of a zebra, because of their other visual similarities. And so as humans, we can achieve these remarkable feats of generalisation that machines, for example, find very difficult.
And categorization is also not limited to only the visual modality; we use categories for sounds as well. So categorization allows you to listen to my voice, even if you’ve never heard me speak before. And when I say the words bear and pair, you can recognise these as different words that map onto these different meanings, even though they really only differ in this first sound, the buh versus puh sound. And we’re able to do this remarkably flexibly across different speakers, across different contexts. And so categorization is really at the heart of these fundamental processes like object recognition in the visual modality, and speech perception in the auditory modality.
And I’m particularly interested in how we learn about new categories. So for example, if you wanted to take up a bird watching hobby, you might need to learn to distinguish between these two different species of birds, which are a house finch and a purple finch. Similarly, if you’re learning a new language, you need to learn about the sounds of that language, which might be different from your own. So for instance, native speakers of non-tonal languages like English would need to learn to distinguish between tonal pitch patterns to distinguish words in tonal languages, like Mandarin Chinese. So for example, in Mandarin Chinese, you have the same syllable here, I’m showing /ma/, mapped with four different tone patterns, which completely changes the meaning of that underlying word. And just to give you an example of what these sound like, and I hope the sound is coming through now: here is an example of this first tone, it’s just a high and stable tone over time, ma. And then the second tone is a rising tone over time, ma.
Alright, so how do we study this in an experimental context? So in a very pared down version of this kind of interest, it’s not as gamified as some other tasks we’ve heard about today: we would play, in an auditory task for example, a sound from a particular kind of category. So I use these alien-like sounds that are kind of interesting for people to hear, and again, I’m hoping the sound is coming through like that. And people make these overt choices about what category they think that belongs to. So in this case, deciding: is this category one or two? And then they get some kind of feedback about the response, so correct or incorrect. And people might also do this in a visual kind of task, which I’ll talk about more today. So seeing an image like this kind of arbitrary image I’m showing you on the screen, that varies in the width and orientation of these lines. And then they’re making these overt decisions and getting that feedback.
4:29
Alright, so then what we can do is look at people’s ability to learn categories in these contexts. And so they’re learning to make more accurate decisions, given the feedback that they’re getting. So here I’m showing you the proportion correct, or accuracy, across blocks of a training task, where we train people on these auditory and visual categories. So the auditory is in red and visual in blue. You can see that overall, on average, in the darker line here, people are able to learn these categories. And then in the lighter lines, what I’m showing you is individual participant performance. So you can see there’s lots of variability in how well people are able to learn, with some people up here at really high levels of performance, and others around this dashed line, which reflects chance levels of performance.
And so we can also look at other aspects of their behaviour to understand the psychological processing that is going on as people are learning. So one of these is their reaction time, or how fast they respond. So this is measured in milliseconds here. And it’s just the time it takes them to actually push the button to identify what category they think that either sound or image belongs to. And so we can see here that our participants were slightly slower in the visual task in these kind of early blocks. So the blue line here is higher than the red line. But these kind of converge over time as the learning task goes on.
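[Editor's note: the two block-wise measures described here, proportion correct and mean reaction time per training block, can be sketched in code. This is a minimal illustration only; the trial records, field layout, and values below are made up and are not the study's data.]

```python
from statistics import mean

# Hypothetical trial records: (training block, correct?, reaction time in ms).
# These values are invented for illustration only.
trials = [
    (1, True, 910), (1, False, 1040), (1, True, 980),
    (2, True, 760), (2, True, 820), (2, False, 890),
]

def summarize_by_block(trials):
    """Return {block: (proportion_correct, mean_rt_ms)} per training block."""
    by_block = {}
    for block, correct, rt in trials:
        by_block.setdefault(block, []).append((correct, rt))
    return {
        block: (mean(c for c, _ in rows), mean(rt for _, rt in rows))
        for block, rows in by_block.items()
    }

summary = summarize_by_block(trials)
print(summary)
```

Plotting the per-block proportion correct and mean RT for each participant (the lighter lines) against the group average (the darker line) reproduces the kind of figure described in the talk.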
Okay, so what we really want to understand is what this information about people’s choices, and the reaction times, can tell us about what’s going on psychologically in learners’ minds as they’re doing these tasks. So to understand this, we leverage computational models called drift diffusion models that take into account both how accurate decisions are, and also how fast these decisions are, to estimate separable psychological processes in decision making. So I’ll give you a sort of toy example here to explain the logic of these models. So when you saw this creature earlier, you made, again, this probably split second decision about whether this was a horse or a zebra, but it was still a decision that you had to make. And so we can think of this as: as soon as you see this image, this decision process starts unfolding across time. So we start accumulating evidence towards deciding whether this is a horse or a zebra.
So let’s say you probably start a little closer to making the zebra sort of decision, because I just showed you the dog in the zebra print jacket, so maybe I primed you slightly. But then as you get more and more information from this image, seeing, okay, maybe it’s just got this weird flap going on here, this is not a real zebra, this has to be a horse, you’re going to shoot up and accumulate the evidence towards making that decision: this is definitely a horse, not a zebra. And so we see this process through these models as the accumulation of evidence towards these contrasting choices. So here, horse or zebra; in the categorization context, category one or category two. And then you make a decision when you cross a threshold of evidence that you need to accumulate. So once you get enough information that this was a horse, that’s when you make your decision.
So again, just being really explicit about how this works in our more arbitrary tasks, where we either play a sound or show an image and people are deciding the category, we see this process unfolding across time. So they’re accumulating evidence towards a particular decision, let’s say category one in this case, at a particular rate. So basically, how fast they’re getting information from that stimulus represents how easy it is for them to get information to inform their decision. And then again, they’re going to try to reach this decision threshold. And once they reach that threshold in this evidence accumulation process, that’s when they’re actually going to initiate their response and start the process of pressing the button, which is reflected in this dashed line here.
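[Editor's note: the evidence accumulation process described above can be illustrated with a small simulation. This is a generic sketch of drift diffusion dynamics, not the model or parameters used in the study; the drift rate, threshold, noise, and non-decision time below are arbitrary illustrative values.]

```python
import random

def simulate_ddm_trial(drift=0.3, threshold=1.0, start=0.0,
                       noise=1.0, non_decision=0.3, dt=0.001):
    """Simulate one drift diffusion trial.

    Evidence accumulates from `start` toward +threshold (category 1)
    or -threshold (category 2); once a boundary is crossed, the
    response is initiated. Returns (choice, reaction_time_seconds).
    """
    evidence = start
    t = 0.0
    while abs(evidence) < threshold:
        # each small time step adds the mean drift plus Gaussian noise
        evidence += drift * dt + noise * random.gauss(0, 1) * dt ** 0.5
        t += dt
    choice = 1 if evidence > 0 else 2
    # reaction time = decision time plus non-decision (encoding/motor) time
    return choice, t + non_decision

random.seed(1)
results = [simulate_ddm_trial() for _ in range(500)]
prop_cat1 = sum(choice == 1 for choice, _ in results) / len(results)
mean_rt = sum(rt for _, rt in results) / len(results)
print(f"proportion category 1: {prop_cat1:.2f}, mean RT: {mean_rt:.2f} s")
```

A higher drift rate makes evidence accumulation more efficient (faster, more accurate choices), while a higher threshold makes the simulated decider more cautious (slower but more reliable), which is exactly the pair of parameters examined in the next part of the talk. Fitting such a model to real data is the inverse problem and is typically done with dedicated estimation software rather than this forward simulation.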
So this is that underlying process that we are trying to estimate using these modelling approaches. And we’re going to look at how participants learn this. And to distinguish between two different categories, we can estimate these kinds of parameters here at the individual subject level, and also longitudinally across blocks as they are learning. Right, so then, let me show you what we found here for this auditory and visual task in the lab. So here at first, I’ll show you this parameter of evidence accumulation rate, again, how fast they’re able to get the information they need to make the decision about that stimulus. And so higher values here are representing more efficient evidence accumulation. So you’re getting information a lot more quickly and efficiently as a process. And then here’s what that looks like for the auditory and visual tasks. So we see here sort of this crossover, where initially in our visual task, participants are less efficient at getting information than they are in the auditory task, but this crosses over across time. And by the end of training, they’re more efficient in the visual domain than the auditory domain.
9:24
We can also look at this other parameter we’ve talked about, this decision threshold. And so here we can define these parameters based on whether they were more cautious or less cautious in their responses. So higher values here are reflecting times where participants are waiting to gather enough information. So for example, they’re looking at that horse longer and longer to make sure that they really have it right, that it’s a horse and not a zebra. So here again, we’re seeing this sort of crossover between the two modalities, where initially participants are more cautious in the auditory modality, and they show this sort of steep decline in how cautious they are about that process as their accuracy increases across these different blocks.
All right, so all of this is really about online research, right? And I’ve just talked to you about in-person research. So I want to show you now how we have leveraged online data collection through Gorilla to rapidly and efficiently collect data to replicate this in-person study in a wider online sample. So in person, we ran this study on 30 participants, and online, we were able to run nearly 100 participants. And just to give you a sense of how much time this took us: in person, with a dedicated person there to run the study, it took about a month to collect this data, versus data collected via Gorilla in under 48 hours, so extremely, extremely fast. And in the in-person study, we were limited to our local population in Pittsburgh, Pennsylvania in the US, whereas in our online replication, we were able to get a global population, through Prolific specifically.
And then finally, in the lab, in person, we ran participants on our controlled laboratory computers and professional-level headphones, in environments we could ensure were extremely quiet, whereas online, we ran participants on their own computers, using their own headphones, which obviously have a wider variety of quality compared to our in-person study. Right, then we can talk about what actually happened in this online replication. So as a reminder, this is what our in-person study looked like with accuracy, and our individual differences across participants. And then this is what the online study looked like. So you can see here, there’s more participants and more of these lighter lines here. But generally, we’re seeing the same kind of pattern of accuracy, we don’t see a lot of differences between modalities, and people generally are able to learn.
Then we can also look at this measure of reaction time that we looked at. So this is our in-person study. And then this is what it looks like online. So immediately, I’ll note that the scale here has changed for reaction time. So whereas we’re in the sub-second sort of range in our in-person study, online we have some folks who are getting up above one second, and even in this case above two seconds to respond, on average on a trial. And so we’re seeing a lot more variability in how the reaction times look over time, though we still have plenty of folks here who are responding very quickly.
All right, then what do our decision processes, assessed by these drift diffusion models, look like? So again, we’re looking at our evidence accumulation rate in our in-person sample. And this is what our online sample looks like. So we’re seeing still that crossover across modalities, and really a very similar pattern across in person and online. And then we have our decision threshold: again, in our in-person sample we’re seeing a different sort of pattern of crossover here between the modalities. And then this is what we see online.
So we effectively perfectly replicated these results. And this is really exciting and meaningful, because these folks were different from our in-person sample. Again, this is a global population, people were using their own machines, their own headphones, and we saw that overall they were slower in a lot of cases. But they learned just as well as people who were seated in a kind of quiet, controlled environment. And yet, we’re still seeing the same patterns of the psychological processes through these drift diffusion models. And this also really tells us that using Gorilla to collect these reaction time measures is capturing information about the psychological processes that we see inside the lab as well, using just different software than we’ve used across the years.
So I’ll just briefly summarise the benefits of collecting data online that we saw both in this experiment and in my research in general. So first, as we’ve talked about, I’ve seen this ability to replicate in samples outside of psychology subject pools, or the otherwise homogeneous samples that we often see inside the lab, where we have a limited ability to collect data across a broad sample population. And so this is both in the study that I’ve discussed in detail today, but also another study looking at incidental category learning in multiple experiments, in this other citation that I have here.
14:27
Online data collection has also really enabled seamless collaboration across the world. So I have colleagues in Hong Kong who were able to access both the stimulus materials and also the data extremely simply, to be able to collaborate really easily rather than sending files back and forth or sharing them in some other way, which does get a bit clunky. Here, we can just work on it in the same platform.
And then finally, this gives us really the ability to recruit samples with more diverse experiences. So obviously I’ve mentioned just the ability to look at the global population. But something else that we particularly looked at in this specific study that I’ve listed here is looking at people with a diverse array of music experiences. So we just looked at a sample, not really specifically sampling for music experience, but seeing what happens when you look at just a broad sample of individuals with music experience. And that’s really something you’re only able to do with online research, because you get a more diverse sample that way. All right. And this is, again, just all really important so that we can examine things like learning and decision making efficiently and using these more general populations. And with that, I really want to thank you for your time, and also the resources that have supported this work. I put my contact information there on the screen, and also the information about my collaborators who were involved with this specific project that I’ve talked about in detail today. And I’d be happy to answer any questions.
Jo Evershed 15:56
Casey, that was absolutely brilliant. Thank you. Anybody who’s got any questions for Casey, can you start putting them in the Q&A now? Hopefully, we get round to them. I’m still processing your talk, doing this in real time. It’s getting towards the end of the day, I’m really sorry. But I did have a question. You’ve actually been really generous and positive about online research, generally. But there must have been some challenges getting this to work across so many people at scale, across so many research groups. What? Yeah, what were the challenges? And what person did you have to become in order to resolve them?
Casey Roark 16:33
Yeah, I really love the phrasing of that question. And it looks like someone has asked that in the chat as well. Yeah, so there are definitely, of course, drawbacks. And I think that Gloria talked about this a lot in her talk earlier, looking specifically at this question of involving sound in these experiments online. So it’s important to have checks of whether or not people are wearing headphones, making sure you have, you know, checks throughout an experiment to make sure people are continually attending to your sounds, and not just throwing the headphones off to the side and continually pressing buttons. So those are some real drawbacks. In general, just thinking about how you can see variability in behaviour, for example, I showed a lot of folks who weren’t really able to learn the categories, or were performing at chance levels. And there’s a question always in the back of my mind: are they actually just struggling to learn and performing at chance? Or did they just completely check out and they’re not interested in learning? And this is something we have to solve both in person and online. But I think it becomes especially hard when you can’t just follow up with them after, in person, with a little bit of, you know, demand and say, hey, did you really try in this experiment? Yes, it’s a challenge.
Jo Evershed 17:43
Definitely a challenge. Have you ever considered using additional incentives? So I think Prolific allows you to pay a bonus when people perform well, just to incentivize people not to do that, and at least see if the data is different when they do?
Casey Roark 17:59
Yeah, that’s a great question. So incentives and rewards are really important in learning, as was discussed in some of the marketing research today. And I think it is important to look at whether these things are different when you offer incentives or not. But I’m really also curious about learning when people are learning, and what’s happening when people are struggling to learn. So this is something I’ve done in my in-person studies before: saying, hey, you’re going to get a bonus, but then offering the bonus regardless of how they really perform, so that it’s more fair across different participants, but you are still trying to encourage that effort. So that’s definitely something that can be an incentive, though it brings up questions of fairness that I just want to highlight as well.
Jo Evershed 18:44
Yeah, the fairness question really gets us as researchers, doesn’t it? It also makes me think of something that I presented earlier, that Jenny Rodd said: it’s really hard to tell the difference between participants who just suck at your task, versus those that aren’t trying, because recognising those four /ma/ tones is actually hard. To the Western ear, it’s really not an easy task.
Casey Roark 19:08
Yes, it’s very challenging. And so often I give people a variety of different tasks, trying to understand: if you do really well on one task, but not well in another that’s pretty similar, it might just be an attention-level thing, where you’re just fatigued and tired and you don’t want to do the task anymore. So giving people these multiple ways to measure their behaviour over time, and different tasks, could also be a solution to that.
Jo Evershed 19:34
Fantastic

