How do different modes of instruction delivery impact data quality in an online multi-session cognitive study?

Jihanne Dumo, University of Northern British Columbia
@JihanneDumo

Participants completing a study online cannot clarify their understanding of the task with an experimenter, possibly leading to reduced data quality. The absence of an experimenter can be particularly detrimental to data quality in designs that involve complex cognitive tasks and multiple testing sessions. However, the rise of video conferencing technology now permits live interaction between a participant and an experimenter, facilitating task comprehension and possibly improving the data quality of online studies. The purpose of the current study was to determine how the delivery of task instructions impacts data quality in an online cognitive study.

In a between-subjects design, participants completed two testing sessions in either a Zoom condition, where an experimenter delivered instructions, or a written-instruction condition (no experimenter). Each participant completed two cognitive tasks (spatial n-back and Remote Associates Test) along with surveys. Data quality was assessed through attention checks, comprehension quizzes, task performance, and survey test-retest reliability. Data collection was recently completed, and results will be presented at the conference. As an integrated service provider with its graphical user interface, helpful support team, and online community, Gorilla has allowed us to create and run our first online study in less than a year.
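(For illustration: survey test-retest reliability of the kind mentioned above is typically computed as a correlation between session-one and session-two scores. The sketch below uses made-up numbers and hypothetical variable names, not the study’s actual data or analysis code.)

from scipy.stats import pearsonr

# Hypothetical survey totals for the same six participants, one week apart.
session1 = [24, 31, 18, 27, 22, 29]
session2 = [26, 30, 20, 25, 23, 28]

# Test-retest reliability as the Pearson correlation across sessions.
r, p = pearsonr(session1, session2)
print(f"Test-retest reliability: r = {r:.2f} (p = {p:.3f})")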

Full Transcript:

Jihanne:
Awesome. Well, thank you so much for inviting me to give this talk. And I’m just going to say hi to everybody. I’m Jihanne, and today I’ll be talking about the impact of different modes of instruction delivery on data quality in an online multi-session cognitive study.

Jihanne:
So, as I’m sure we’re all familiar with, there is a gold standard in the lab where we can control for many aspects of testing, such as the environment and the equipment used, which helps reduce noise. Another important aspect of the lab is that an experimenter can be present to guide the participant through the study. And in online experiments, we lose control of a lot of those elements. So there are many extraneous variables that may influence the data collected. And that includes the participants being unable to clarify their understanding of the instructions. And this factor is critical to experiments in our lab, which uses complex cognitive tasks and multi-session designs, wherein each participant is worth a lot of data. So variable comprehension of study elements can lead to reduced data quality.

Jihanne:
Instructions in online studies are commonly in written format, and there have been studies that directly address instruction delivery and experimenter presence in online settings, but they are quite sparse. They mostly compared face-to-face versus online studies, and there have been mixed findings. Some of them found comparable results, while others have found more varied performance online.

Jihanne:
So from those studies, along with the rest of our lit review, we found that the lack of supervision online can lead to noisier data due to two factors related to experimenter presence. And that includes the implicit expectations imparted by the experimenter that help maintain the participants’ attention, as well as the experimenter’s role in ensuring that the participants understand the instructions of the study.

Jihanne:
So with this in mind, along with the ubiquity of Zoom during the pandemic (as evidenced by the previous talks as well, with Teams), we set out to test video conferencing technology as a tool that can be used in online research to translate aspects of the lab environment.

Jihanne:
So specifically, we wanted to see how different modes of instruction delivery, comparing Zoom versus written instructions, impact the data quality in surveys and cognitive tasks, the comprehension of instructions, the data quality within a multi-session design, as well as the participants’ experience. And with that, we predicted that the Zoom condition would lead to better data quality compared to the written condition, as it translates elements of the lab.

Jihanne:
So from attending the conference last year, we were introduced to Gorilla, which we used for our study. We had two testing days. We also had two cognitive tasks, which were in line with the interests of our lab and were deemed a bit harder than some of the previous tasks that have been replicated online. So we chose the spatial n-back task and the Remote Associates Test. We also had three surveys, and we had comprehension quizzes for the task and survey instructions.

Jihanne:
So we recruited undergrads through SONA, which is our departmental recruitment platform. All the participants, regardless of the condition, had to sign up for time slots. There is a recruitment option in Gorilla to connect with SONA, but we wanted to control the time that participants completed the study, so we wouldn’t have written-condition participants completing the experiment at odd hours. Our participants were randomly assigned to either the written or the Zoom condition, they were tested twice one week apart, the tasks were counterbalanced, and the survey order was randomized.
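(For illustration: an assignment scheme like the one described, with a between-subjects condition, counterbalanced task order, and randomized survey order, could be sketched as below. The names and the alternating rule are assumptions for the example, not the study’s actual script.)

import random

conditions = ["zoom", "written"]
task_orders = [("spatial n-back", "RAT"), ("RAT", "spatial n-back")]
surveys = ["survey A", "survey B", "survey C"]  # placeholder names

def assign(participant_id: int) -> dict:
    # Alternate condition and task order so the design cells stay balanced,
    # and shuffle the survey order independently for each participant.
    survey_order = random.sample(surveys, k=len(surveys))
    return {
        "participant": participant_id,
        "condition": conditions[participant_id % 2],
        "task_order": task_orders[(participant_id // 2) % 2],
        "survey_order": survey_order,
    }

print(assign(0))
print(assign(1))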

Jihanne:
So one testing session started with participants giving consent and setting up their device and their environment according to the instructions provided. Then they were quizzed on the task instructions before completing the task. The same format was followed for all three surveys. And then they answered the subjective experience survey, followed by the demographic questionnaire on the first day, and they were debriefed on the second day.

Jihanne:
So all written instructions were identical for all the participants. We tried to make the written instructions as comprehensive as possible. And in the Zoom condition, we as experimenters had our cameras and mics on for the entire experiment, except when the participants were completing the tasks and surveys, to reduce their discomfort. But we were present for the entire duration of the session. We frequently asked if participants had any questions, and we also confirmed their understanding of the instructions.

Jihanne:
We also read the instructions to the participants, except for the survey instructions. And as for the participants, we asked that they keep their mic on for the entire session, but the use of cameras was optional to maximize their level of comfort, and most participants did have their cameras off.

Jihanne:
So we had several outcomes of interest, but today I’ll be presenting our preliminary results on the accuracy in the cognitive tasks and performance in the task instruction quizzes.

Jihanne:
So the n-back consisted of three conditions with different levels of difficulty. For the 1-back, participants pressed the space bar when the blue box appeared in the same place as on the previous trial. For the 2-back, they responded when it appeared in the same place as it had two trials earlier. And for the 3-back, they responded when it appeared in the same place as it had three trials earlier.
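(To make the response rule concrete, here is a minimal sketch of how spatial n-back targets can be identified from a sequence of box positions. The positions are made-up grid cells; this is not the study’s task code.)

def is_target(positions, trial, n):
    # A trial is a target when the box appears in the same place
    # as it did n trials earlier.
    return trial >= n and positions[trial] == positions[trial - n]

positions = [3, 1, 3, 3, 1, 3]  # hypothetical grid cells for the blue box

for n in (1, 2, 3):
    targets = [t for t in range(len(positions)) if is_target(positions, t, n)]
    print(f"{n}-back target trials: {targets}")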

Jihanne:
So the values presented here are logit values, and there was a significant main effect of n-back, such that participants performed worse in the harder conditions. And this performance range was comparable to the previous literature on the task being implemented in the lab. There was no main effect of instruction delivery. However, we did find that instruction delivery interacted with the type of n-back, such that Zoom improved performance in the easier conditions and worsened performance in the harder conditions.
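(For readers who want the shape of this analysis: a 2 (instruction delivery) × 3 (n-back level) mixed ANOVA on logit-transformed accuracy could be sketched as below, for example with the pingouin package. The data frame and column names are hypothetical stand-ins, not the study’s actual pipeline.)

import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one accuracy value per participant per
# n-back level, plus a between-subjects instruction condition.
df = pd.DataFrame({
    "pid":       [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["zoom"] * 6 + ["written"] * 6,
    "nback":     ["1-back", "2-back", "3-back"] * 4,
    "accuracy":  [0.95, 0.85, 0.70, 0.92, 0.80, 0.65,
                  0.90, 0.88, 0.75, 0.89, 0.82, 0.68],
})

# Logit transform, clipping to avoid infinities at floor/ceiling accuracy.
p = df["accuracy"].clip(0.01, 0.99)
df["logit_acc"] = np.log(p / (1 - p))

# 2 (condition, between) x 3 (n-back, within) mixed ANOVA.
aov = pg.mixed_anova(data=df, dv="logit_acc", within="nback",
                     subject="pid", between="condition")
print(aov[["Source", "F", "p-unc"]])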

Jihanne:
As for the Remote Associates Test, where participants were presented with three words and had to provide a fourth word that would link those three words, the overall performance of our online sample was considerably lower than what was reported in the literature on the task being implemented in the lab. But there was a main effect of instruction delivery, wherein the Zoom participants performed better than the written participants. However, there was no difference in the comprehension quiz for either task, so it suggests that the difference in task performance was not driven by a difference in comprehension of instructions.
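(Scoring the Remote Associates Test amounts to matching each response against the intended linking word. A toy sketch, using two classic example items rather than the study’s materials:)

answer_key = {
    ("cottage", "swiss", "cake"): "cheese",
    ("cream", "skate", "water"): "ice",
}

def score(cues, response):
    # Correct when the (normalized) response is the word that
    # links all three cue words.
    return response.strip().lower() == answer_key[cues]

print(score(("cottage", "swiss", "cake"), "Cheese "))  # True
print(score(("cream", "skate", "water"), "snow"))      # False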

Jihanne:
And so our preliminary findings partially supported our hypothesis, in that the Zoom condition improved task performance in the Remote Associates Test, though we do note that the overall performance was lower than what was previously found in the lab. The Zoom condition also improved performance in the easier n-back conditions, but performance declined in the harder n-back conditions. And instruction delivery did not seem to impact task comprehension.

Jihanne:
So it appears that improvement in data quality related to instruction delivery is contingent on the type of cognitive task that is implemented. And so for our next steps, we plan to complete our analysis on the rest of our outcomes of interest. We’re also interested in exploring possible differences in computer parameters between conditions. So for example, we instructed participants to maximize their browsers, and you can confirm this using the data collected by Gorilla.
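(As one example of such a check: if the export includes viewport and screen dimensions, a non-maximized browser can be flagged by comparing the two. The column names below are assumed placeholders, not Gorilla’s actual export fields.)

import pandas as pd

df = pd.DataFrame({
    "pid": [1, 2, 3],
    "viewport_width": [1900, 1200, 1280],  # hypothetical pixel widths
    "screen_width": [1920, 1920, 1280],
})

# Treat a viewport at least 90% of the screen width as "maximized".
df["maximized"] = df["viewport_width"] >= 0.9 * df["screen_width"]
print(df[["pid", "maximized"]])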

Jihanne:
And just a couple of points we learned from setting up our study: do not underestimate the time it takes for participants to set up when using Zoom, so spacing out participants is key. And take advantage of the features offered by the experimental platform to help maximize data quality. So for example, in Gorilla, the randomization and branching nodes were very helpful for us.

Jihanne:
And a final note to keep in mind is that most cognitive and behavioral tasks have been validated in the lab. So we must continue to ask how we can adequately translate these tasks online, and even consider designing tasks for online settings.

Jihanne:
And with that, I’d just like to thank my supervisor, Dr. Annie Duchesne, and from our lab, Kiran and Emma, as well as our senior lab instructor, Julie. And also thank you to everyone here for listening to my presentation.

Speaker 2:
Thank you very much, Jihanne. We have time for a question, if somebody would like to ask one in the Q&A; otherwise I’m going to ask my question. So we’ll need to type furiously.

Speaker 2:
Okay, I’m going to grab the opportunity to ask a question. I liked your point at the end there, like we’ve started with a model of, “This is how we do experiments in the lab,” and then we’ve taken that online. But there would really be a value in thinking, “I’m starting again. This is a different format.” If in-person testing had never been possible, what would online testing look like? It would probably look different. Would you care to expand on that for just a minute?

Jihanne:
Yeah, for sure. So, as I said in that point very succinctly, right, like most of the tasks that we’re seeing online have been validated in the lab. And it’s really hard to compare the lab with online. There are benefits to both, for sure, but for us to just take one thing and throw it in another setting, I feel like it’s not the fairest thing to do, in terms of making sure that we’re getting the same output from those tasks.

Speaker 2:
Thank you very, very much. So thank you, Jihanne.

 
