Eva D. Poort, Max Planck Institute for Psycholinguistics
@EvaDPoort
To what extent do our senses shape our knowledge of the meanings of words? Studies on populations with atypical sensory experience (e.g. blind individuals) are key in answering this question, but it can be difficult for such people to come to the lab. Moving research online offers many benefits, but also poses some challenges. Firstly, when participants are blind, stimuli must be presented auditorily, but some researchers discourage online testing with auditory stimuli, due to worries about inaccurate reaction time measurements (Bridges, Pitiot, MacAskill, & Peirce, 2020). To address this, we conducted two experiments in which sighted participants performed a visual and auditory simple reaction time task online, and compared these data to a previous lab experiment (Hintz et al., 2020).
Between-participant variation in reaction times was greater in online experiments, especially with auditory stimuli, but within-participant variation was similar in online and lab-based experiments. For within-participant designs, we conclude it may be feasible to detect reaction-time effects comparable to those found in lab-based research. Secondly, designing online experiments for people with atypical sensory experience brings its own set of challenges. We therefore also discuss tips for making online experiments accessible to blind participants, such as ensuring compatibility with screen-reading software. Full author list: Eva D. Poort, Guillermo Montero-Melis, Tanita P. Duiker and Markus Ostarek.
Full Transcript:
Eva:
I think you should all be able to see my slides now. Please interrupt me if you can’t see them. So welcome everyone today to my talk on conducting online research with blind participants. I’m first going to take you through our reasoning for why we actually decided to conduct research with blind participants online, because it’s maybe not the first thing you would expect. So to start, our research question is, “To what extent do our senses shape our knowledge of the meanings of words?” and research with individuals who experience the world in an atypical manner is key in answering this type of question. But participants who are blind, for example, may find it difficult to come to the lab and of course, as people have mentioned before, if you recruit online, then you can reach a much larger sample. And this was a great benefit for us, especially because the pool of participants is already quite small. And let’s also not forget the elephant in the room, which is the current COVID-19 pandemic, which was really the deciding factor for us.
Eva:
So in this talk, I’m going to take you through the steps that we took to move our research online. For us, the first step was to switch to presenting stimuli auditorily, and this is perhaps the most usual mode of presentation for many of you, but for us, it was certainly new. And because it was new, we did some reading on timing issues associated with auditory stimuli, and we found that especially Bridges et al. warn against measuring reaction times in online experiments when you use auditory stimuli, because differences in the participant’s hardware and software may compromise the accuracy and precision of these measurements. So the first thing we actually did was try to find out whether reaction time measurements would be good enough for our purposes. We did that by including a visual and auditory simple reaction time task in our pre-test with sighted participants, and in these tasks, participants simply pressed a button as soon as they heard or saw the stimulus.
Eva:
We then compared the data that we gathered online against data that our colleagues had collected in the lab. And for those who are interested, you can preview and clone these tasks via Gorilla Open Materials. So when we look at the data, I’m going to first take you through the mean reaction times, and these give you an indication of the between-participant variation in the data. Here we can see quite clearly that the variation in means was indeed greater in the online data for the auditory task than in the lab-based data, and the test of homogeneity of variance confirmed this. This suggests that differences in software and hardware do indeed impact the accuracy and precision of online reaction time measurements when you use auditory stimuli, and this in turn can make it difficult to detect between-participant effects.
Eva:
When we look at the visual task, however, there seems to be slightly more variation in the online data again, but this time, the test of homogeneity of variances was not significant. And if we take a look at the standard deviations of the reaction times, which give an indication of the within-participant variation, then you can see that actually the online data and the lab-based data look very, very similar, and this is also what the homogeneity of variance tests showed. So the variances for the auditory task for the lab and the online data were essentially the same, and the same was true for the visual task. We actually had to run our pre-test again because we had to make some changes to the initial design, so we ended up replicating and reproducing our findings. As you can see here, the pattern in the data looks almost exactly the same as in the previous graphs that I showed you.
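As a side note, the kind of homogeneity-of-variance comparison described here can be run with Levene’s test, for example via SciPy. This is only an illustrative sketch: the reaction-time numbers below are invented, not the study’s data, and the talk does not specify which software was used for the actual test.

```python
# Illustrative homogeneity-of-variance check (Levene's test) comparing
# the spread of per-participant mean RTs in lab vs. online data.
# All numbers below are invented for illustration only.
from scipy.stats import levene

# Hypothetical mean RTs (ms), one value per participant
lab_means = [305, 312, 298, 307, 301, 310, 296, 304]
online_means = [290, 350, 275, 402, 310, 268, 365, 330]  # visibly more spread

# center="median" gives the Brown-Forsythe variant, robust to non-normal RTs
stat, p = levene(lab_means, online_means, center="median")
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
# A small p-value indicates the between-participant variances differ,
# as was found for the online auditory task.
```

With the invented numbers above, the test is significant because the online means are far more spread out than the lab means, mirroring the pattern described for the auditory task.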
Eva:
So this led us to conclude that between-participant variation was indeed greater in the online task with auditory stimuli, but the within-participant variation was similar in both the online and lab-based data, regardless of the mode of presentation. In other words, we were reassured that it should be possible to detect reaction time effects if you use a within-participant design, and that was our plan anyway. So we carried on with our experiments, and in the next part of the talk, I’m going to take you through the further steps that we took to ensure accessibility for blind participants. Here, our starting point is that blind participants navigate the web using a screen reader and/or a braille display. So whatever’s on the screen is read out to them or shown on a braille display, which sits at the bottom of their keyboard. And of course, they also don’t use the mouse, so all functionality must be tied to the keyboard.
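The within-participant point has a simple intuition: if a given participant’s hardware adds a roughly constant lag to every trial, their mean RT shifts (inflating between-participant variation), but their trial-to-trial spread is untouched. A toy calculation with invented numbers makes this concrete:

```python
# Toy illustration: a constant hardware lag shifts a participant's mean RT
# but leaves their within-participant variability unchanged.
# All numbers are invented for illustration.
import statistics

trial_rts = [300, 320, 310, 305, 315]       # one participant's RTs (ms)
lagged_rts = [rt + 50 for rt in trial_rts]  # same trials with a 50 ms audio lag

print(statistics.mean(trial_rts), statistics.mean(lagged_rts))    # 310.0 360.0
print(statistics.stdev(trial_rts), statistics.stdev(lagged_rts))  # identical
```

Within-participant contrasts (e.g., condition A minus condition B for the same person) subtract out such constant offsets, which is consistent with the within-participant variation looking similar online and in the lab.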
Eva:
Before I take you through the things that we changed, I want to note that happily, we didn’t actually have to change too much compared to how we would normally set things up with sighted participants, and this was really great, but of course there were a few things that we did have to change. Because blind participants use a screen reader or a braille display, navigating the web is a much more linear process for them. So we write our instructions much more like spoken language. We use simple words and short sentences, and we also repeat things a lot more often than we would probably do if we were designing experiments just for sighted people. We also keep our formatting to a minimum, because this isn’t usually read out by a screen reader or displayed on a braille display, but we do use HTML tags for things like headings, because that information is presented to blind participants as well.
Eva:
We also provide extra tips for our blind participants on how to navigate through the experiment. In tasks, for example, the screen reader has a tendency to skip immediately to the button at the bottom of the screen if there is one, which means that participants may accidentally skip the instructions that are on the screen, which is of course not something that you want. So we have a level one heading at the top of each page with instructions, and we tell participants that this is the case and that they should always make sure to start from this level one heading and navigate down the page before they click on the next button. We also had to change a couple of things when it comes to task functionality and responding. So obviously everything needs to be presented auditorily, which means that fixation crosses, for example, become fixation beeps.
Eva:
Because we have lots of these auditory stimuli, we use a very helpful script that Gorilla provided, which lets us preload our stimuli at the start of a task so that participants aren’t faced with any loading delays during the task, which might make them think that the task is frozen. The thing that we actually had to spend the most time on was making sure that our response buttons worked, because it turns out that nearly any key on a standard keyboard is a command key of some kind for most screen readers, and these differ between screen readers as well. So we had to do a lot of testing and fine-tuning of the instructions that we provide to participants to temporarily turn these command keys off during parts of the experiment when they need to respond, and then back on again when they have to read instructions on the screen.
Eva:
We also did a lot of piloting, as people have mentioned before, and we piloted with both sighted and blind participants. For us, it was also really helpful to contact some organizations that work with and for blind people; they really helped us figure things out in the early stages of design and could tell us all about how screen readers interact with web pages to help us fine-tune those instructions. As Simone did as well, we provide extra support via email and phone calls, or preferably video calls, because as she said, participants can then share their screen with you and you can basically guide them through the experiment at a distance. And then we also make sure that the experiment for sighted participants is exactly the same. This kind of goes without saying, but I’m mentioning it anyway, because it may require a warning to your sighted participants that nothing will be shown on screen during parts of the experiment, like the trials, because this is very counter-intuitive for sighted people and they might think that the experiment has crashed or something.
Eva:
So what do I want you to take away from my talk today? The first is that reaction time measurements collected in online experiments with auditory stimuli are precise enough for most purposes, at least if you use a within-participant design. And the second is that conducting research with blind participants online requires a bit more thought, but it is certainly possible and may also be preferable if your participants find it difficult to travel to the lab. And as I’ve said before, it allows you to recruit much more widely and reach a much larger sample size than you might otherwise be able to. Thank you very much for listening.
Sophie Scott:
Thank you very much, Eva. Thank you for that. The quick Q&A is open. If anybody’s got any questions, feel free to type them in. Otherwise, I will start with my question, and that will be the only question, but you can keep asking them because Eva will be able to answer them. Here we go. There’s a question about your platform. So which platform were you using for this testing?
Eva:
I’m not sure what is meant by platform exactly.
Sophie Scott:
Which kind of online testing system were you using?
Eva:
I mean, yeah, we were testing in Gorilla, and the participants were recruited through word of mouth mostly.
Sophie Scott:
Cool. That makes sense. Thank you.


