Conducting online research with blind participants

Eva D. Poort, Max Planck Institute for Psycholinguistics
@EvaDPoort


To what extent do our senses shape our knowledge of the meanings of words? Studies on populations with atypical sensory experience (e.g. blind individuals) are key in answering this question, but it can be difficult for such people to come to the lab. Moving research online offers many benefits, but also poses some challenges. Firstly, when participants are blind, stimuli must be presented auditorily, but some researchers discourage online testing with auditory stimuli, due to worries about inaccurate reaction time measurements (Bridges, Pitiot, MacAskill, & Peirce, 2020). To address this, we conducted two experiments in which sighted participants performed a visual and auditory simple reaction time task online, and compared this data to a previous lab experiment (Hintz et al., 2020).

Between-participant variation in reaction times was greater in online experiments, especially with auditory stimuli, but within-participant variation was similar in online and lab-based experiments. For within-participant designs, we conclude that it should be feasible to detect reaction-time effects comparable to those found in lab-based research. Secondly, designing online experiments for people with atypical sensory experience brings its own set of challenges. We therefore also discuss tips for making online experiments accessible to blind participants, such as ensuring compatibility with screen-reading software. Full author list: Eva D. Poort, Guillermo Montero-Melis, Tanita P. Duiker and Markus Ostarek.
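As a toy illustration of the two kinds of variability the abstract distinguishes (not the authors' analysis code), one can compute each participant's own trial-to-trial standard deviation (within-participant variation) and the spread of per-participant mean RTs (between-participant variation). The participant labels and RT values below are entirely hypothetical:

```python
from statistics import mean, stdev

# Hypothetical reaction times (ms) per participant -- illustrative only.
rts = {
    "p01": [310, 295, 320, 305, 300],
    "p02": [450, 430, 470, 455, 445],
    "p03": [280, 290, 275, 285, 295],
}

# Within-participant variation: each person's SD across their own trials.
within_sds = {p: stdev(times) for p, times in rts.items()}

# Between-participant variation: SD of the per-participant mean RTs.
per_participant_means = [mean(times) for times in rts.values()]
between_sd = stdev(per_participant_means)

print(within_sds)
print(between_sd)
```

In this toy data the between-participant spread dwarfs each participant's within-participant spread, which is the pattern that makes within-participant designs robust to noisy between-participant baselines.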

Full Transcript:

Eva:
I think you should all be able to see my slides now. Please interrupt me if you can’t see them. So welcome, everyone, to my talk on conducting online research with blind participants. I’m first going to take you through our reasoning for why we actually decided to conduct our research with blind participants online, because it’s maybe not the first thing you would expect. To start, our research question is, “To what extent do our senses shape our knowledge of the meanings of words?”, and research with individuals who experience the world in an atypical manner is key in answering this type of question. But participants who are blind, for example, may find it difficult to come to the lab, and of course, as people have mentioned before, if you recruit online, you can reach a much larger sample. This was a great benefit for us, especially because the pool of participants is already quite small. And let’s also not forget the elephant in the room, which is the current COVID-19 pandemic, which was really the deciding factor for us.

Eva:
So in this talk, I’m going to take you through the steps that we took to move our research online. For us, the first step was to switch to presenting stimuli auditorily, and this is perhaps the most usual mode of presentation for many of you, but for us, it was certainly new. Because it was new, we did some reading on timing issues associated with auditory stimuli, and we found that Bridges et al. in particular warn against measuring reaction times in online experiments when you use auditory stimuli, because differences in participants’ hardware and software may compromise the accuracy and precision of these measurements. So the first thing we actually did was try to find out whether reaction time measurements would be good enough for our purposes. We did that by including a visual and an auditory simple reaction time task in our pre-test with sighted participants, and in these tasks, participants simply pressed a button as soon as they heard or saw the stimulus.

Eva:
We then compared the data that we gathered online against data that our colleagues had collected in the lab. And for those who are interested, you can preview and clone these tasks via Gorilla Open Materials. When we look at the data, I’m going to first take you through the mean reaction times, and these give you an indication of the between-participant variation in the data. Here we can see quite clearly that the variation in means was indeed greater in the online data for the auditory task than in the lab-based data, and the test of homogeneity of variances confirms this. This suggests that differences in software and hardware do indeed affect the accuracy and precision of online reaction time measurements when you use auditory stimuli, and this in turn can make it difficult to detect between-participant effects.

Eva:
When we look at the visual task, however, there seems to be slightly more variation in the online data again, but this time, the test of homogeneity of variances was not significant. And if we take a look at the standard deviations of the reaction times, which give an indication of the within-participant variation, then you can see that the online data and the lab-based data actually look very, very similar, and this is also what the homogeneity of variance tests showed: the variances for the auditory task in the lab and online data were essentially the same, and the same was true for the visual task. We actually had to run our pre-test again because we had to make some changes to the initial design, so we ended up replicating and reproducing our findings. As you can see here, the pattern in the data looks almost exactly the same as in the previous graphs that I showed you.
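The talk does not say which homogeneity-of-variance test was used; a common choice is the Brown–Forsythe variant of Levene’s test. As a hedged, stdlib-only sketch (the data below are hypothetical, and in practice one would use something like scipy.stats.levene, which also returns a p-value from the F(k−1, N−k) distribution):

```python
from statistics import median

def levene_w(*groups):
    """Brown-Forsythe variant of Levene's statistic: absolute deviations
    from each group's median, fed into a one-way-ANOVA-style ratio of
    between-group to within-group spread. Larger W = more unequal spread.
    W is compared against an F(k-1, N-k) distribution for a p-value."""
    k = len(groups)
    z = [[abs(x - median(g)) for x in g] for g in groups]
    n = [len(zi) for zi in z]
    N = sum(n)
    group_means = [sum(zi) / ni for zi, ni in zip(z, n)]
    grand_mean = sum(sum(zi) for zi in z) / N
    between = sum(ni * (m - grand_mean) ** 2 for ni, m in zip(n, group_means))
    within = sum((zij - m) ** 2 for zi, m in zip(z, group_means) for zij in zi)
    return ((N - k) / (k - 1)) * between / within

# Hypothetical per-participant mean RTs (ms): lab vs online auditory task.
lab = [301, 302, 303, 304, 305]
online = [260, 290, 320, 350, 380]
print(levene_w(lab, online))  # larger W -> more unequal spread across groups
```

The median-based variant is often preferred for reaction times because it is less sensitive to the skew typical of RT distributions.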

Eva:
So this led us to conclude that between-participant variation was indeed greater in the online task with auditory stimuli, but that within-participant variation was similar in the online and lab-based data, regardless of the mode of presentation. In other words, we were reassured that it should be possible to detect reaction time effects if you use a within-participant design, and that was our plan anyway. So we carried on with our experiments, and in the next part of the talk, I’m going to take you through the further steps that we took to ensure accessibility for blind participants. Our starting point here is that blind participants navigate the web using a screen reader and/or a braille display. Whatever is on the screen is read out to them or shown on a braille display, which sits just below their keyboard. And of course, they also don’t use the mouse, so all functionality must be tied to the keyboard.

Eva:
Before I take you through the things that we changed, I want to note that, happily, we didn’t actually have to change too much compared to how we would normally set things up with sighted participants, which was really great, but of course there were a few things that we did have to change. Because blind participants use a screen reader or a braille display, navigating the web is a much more linear process for them. So we write our instructions much more like spoken language: we use simple words and short sentences, and we also repeat things a lot more often than we would probably do if we were designing experiments just for sighted people. We also keep our formatting to a minimum, because it isn’t usually read out by a screen reader or shown on a braille display, but we do use HTML tags for things like headings, because that information is presented to blind participants as well.

Eva:
We also provide extra tips for our blind participants on how to navigate through the experiment. In tasks, for example, the screen reader has a tendency to skip immediately to the button at the bottom of the screen if there is one, which means that participants may accidentally skip the instructions on the screen, which is of course not something that you want. So we have a level-one heading at the top of each page with instructions, and we tell participants that this is the case and that they should always make sure to start from this level-one heading and navigate down the page before they click on the next button. We also had to change a couple of things when it comes to task functionality and responding. Obviously, everything needs to be presented auditorily, which means that fixation crosses, for example, become fixation beeps.
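The heading convention described here is easy to check automatically. As a hedged sketch (not part of the authors’ setup; the page markup is invented), a stdlib html.parser pass can verify that an instruction page puts its level-one heading before any button a screen reader might jump to:

```python
from html.parser import HTMLParser

class HeadingCheck(HTMLParser):
    """Records the order in which <h1> and <button> start tags appear,
    so we can verify the instructions' h1 comes before the Next button."""
    def __init__(self):
        super().__init__()
        self.order = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "button"):
            self.order.append(tag)

def h1_before_button(html):
    checker = HeadingCheck()
    checker.feed(html)
    return "h1" in checker.order and (
        "button" not in checker.order
        or checker.order.index("h1") < checker.order.index("button")
    )

# An invented instruction page following the convention from the talk.
page = """
<h1>Instructions</h1>
<p>Press the space bar as soon as you hear the beep.</p>
<button>Next</button>
"""
print(h1_before_button(page))  # True for this page
```

A check like this could run over every instruction page before piloting, catching pages where the heading was accidentally dropped.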

Eva:
Because we have lots of these auditory stimuli, we use a very helpful script that Gorilla provided that lets us preload our stimuli at the start of a task, so that participants aren’t faced with any loading delays during the task, which might make them think that the task has frozen. The thing that we actually had to spend the most time on was making sure that our response buttons worked, because it turns out that nearly every key on a standard keyboard is a command key of some kind for most screen readers, and these differ between screen readers as well. So we had to do a lot of testing and fine-tuning of the instructions that we provide to participants for temporarily turning these command keys off during the parts of the experiment where they need to respond, and then back on again when they have to read instructions on the screen.
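Gorilla’s preloading script itself is JavaScript and is not reproduced here. As a language-neutral sketch of the same idea, the point is simply to read every audio file into memory before the first trial, so playback never waits on a load (the directory layout, file names, and the play_audio call are hypothetical):

```python
from pathlib import Path

def preload_stimuli(stim_dir, pattern="*.wav"):
    """Read every matching audio file into memory up front, so trials
    can play from the in-memory cache instead of hitting disk/network."""
    cache = {}
    for path in sorted(Path(stim_dir).glob(pattern)):
        cache[path.name] = path.read_bytes()
    return cache

# During a trial, playback then looks the clip up in the cache, e.g.:
# play_audio(cache["fixation_beep.wav"])  # play_audio is hypothetical
```

The same cache-before-trial-one structure is what prevents the mid-task silences that participants might mistake for a frozen experiment.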

Eva:
We also did a lot of piloting, as people have mentioned before, and we piloted with both sighted and blind participants. For us, it was also really helpful to contact some organizations that work with and for blind people: they really helped us figure things out in the early stages of design, and they could tell us all about how screen readers interact with web pages, which helped us fine-tune those instructions. As Simone did as well, we provide extra support via email and phone calls, or preferably video calls, because, as she said, participants can then share their screen with you and you can basically guide them through the experiment from a distance. We also make sure that the experiment for sighted participants is exactly the same. This kind of goes without saying, but I’m mentioning it anyway, because it may require warning your sighted participants that nothing will be shown on screen during parts of the experiment, like the trials, because this is very counter-intuitive for sighted people and they might think that the experiment has crashed.

Eva:
So what do I want you to take away from my talk today? The first is that reaction time measurements collected in online experiments with auditory stimuli are precise enough for most purposes, at least if you use a within-participant design. And the second is that conducting research with blind participants online requires a bit more thought, but it is certainly possible and may also be preferable if your participants find it difficult to travel to the lab. And as I’ve said before, it allows you to recruit much more widely and reach a much larger sample size than you might otherwise be able to. Thank you very much for listening.

Sophie Scott:
Thank you very much, Eva. Thank you for that. The quick Q&A’s open. If anybody’s got any questions, feel free to type them in. Otherwise, I will start with my question and that will be the only question, but you can keep asking them because Eva will be able to answer them. Here we go. There’s a question about your platform. So which platform were you using for this testing?

Eva:
I’m not sure what is meant by platform exactly.

Sophie Scott:
Which kind of online testing system were you using?

Eva:
I mean, yeah, we were testing in Gorilla, and the participants were recruited through word of mouth mostly.

Sophie Scott:
Cool. That makes sense. Thank you.


