Testing social interaction in isolation: an online challenge

Bryony Payne, UCL
@bryony_payne


My research looks into the perceptual bias we afford to voices that belong to us or to others, and how this bias might be modulated by using the voices in a social context. With the challenge of moving our research online, we needed a socially interactive online environment that individual participants could access remotely.

To this end, we created a cooperative, two-player online game in which participants were able to choose a new synthesised voice to represent themselves and then use that voice to interact with another participant in a 30-minute drawing game. At test, we assessed whether social use of the voice modulated the degree of perceptual bias afforded to it via a perceptual matching paradigm. Specifically, we compared the bias demonstrated by participants who played this online game (n=44) to a control group (n=44) who had only brief exposure to the voices and did not play the game. Results show that participants afforded a perceptual bias to the synthesised voices they chose, but that the degree of bias was not modulated by social use of the voice. Here I present these results alongside the online tools, tasks, and platforms we used to attain them.

Full Transcript:

Bryony Payne:
Okay. Hi everyone. I'm a PhD student at UCL, and I'm going to be talking to you about the challenges of testing social interaction in isolation and how we overcame those challenges. So briefly, for context: my research looks at voices, and voices are obviously a key part of our self-identity. They not only have great personal importance to us, but also great social importance, because it's through our voice that we share ourselves with others and achieve our social and communicative goals.

Bryony Payne:
So broadly, my research asks questions like: are we biased towards our own voice because it's a voice that belongs to us? Can we give people a new voice that isn't inherently their own, but get them to associate that voice with themselves and then show a bias for it? And how much of this bias is affected by whether or not they've had a chance to use the voice socially, given how important voices are to our social interactions?

Bryony Payne:
So the first part of my PhD looked to answer the first two of these questions. And we can see here from the results that when we give people a new voice, simply by telling them that this new voice is now theirs, reaction times to that voice are significantly quicker than reaction times to a voice we tell them belongs to a friend or a voice we tell them belongs to a stranger. The fact that the self-voice is perceived more quickly than either of the other two voices is purely because it has been deemed a more self-relevant stimulus; as it becomes more self-relevant, it accrues a processing advantage that prioritizes it in our perception. This was done via a perceptual matching paradigm in Gorilla, which people can use via Gorilla Open Materials if they want to try it.

Bryony Payne:
But the question then became: what about using this voice socially, rather than just giving participants a chance to hear the voice we've suddenly told them is theirs and then measuring the bias towards it? What if we give participants a chance to use that new voice and then measure how they perceive it? So that was the main aim, but then the pandemic hit. Suddenly, testing social interaction became very difficult, and the question really became: how can we create an online environment where people can interact using a new voice?

Bryony Payne:
And so my first top tip is to collaborate where you can, because that's the only way that this got done. We were lucky to collaborate with academics who work in AI, Angus Addlesee and Professor Verena Rieser. Together, combining our skills, we managed to build an online two-player game: an environment where we could host pairs of participants to come in, interact in a real-life interaction, and use a new voice that they'd chosen for themselves. Importantly, this new voice was a synthesized voice made by CereProc, who create human-sounding voices in a range of accents. By using a synthesized voice in this task, we could provide participants with a huge amount of agency, not only in what the voice sounded like and what they wanted to be represented as, but also a huge amount of flexibility in what they wanted to say with that voice.

Bryony Payne:
The game itself that we created was called Drawing Conclusions, because it took the form of a drawing game, and it looks something like this. It was created as a Node.js app. The idea of the game was that pairs of participants would take on the role of either a narrator or an artist. The narrator had to verbally describe to the artist how to draw a picture without telling the artist what it was that they were drawing. So this is the screen that the narrator would have seen. And it went something like this: the narrator would choose their synthesized voice from a dropdown menu, choose a picture from a picture deck that we supplied to them, and then try to type instructions to tell the artist how to draw that picture. Importantly, these written instructions were then said aloud in the text-to-speech voice that they had chosen for themselves.

Speaker 2:
Start by drawing a big rectangle in the middle of the screen.

Bryony Payne:
So it went something like that. The artist would then hear the narrator's instructions and follow them accordingly. This process would happen iteratively until either the narrator was satisfied that the picture was complete or the artist had successfully guessed what it was that they had drawn. So this was a bit like Pictionary, and it was actually a very fun way of getting participants to interact with a real-life human being, achieve a social goal, and achieve that goal using the voice that they had just chosen for themselves.
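The round structure described here (narrator types instructions, spoken via the chosen TTS voice; the round ends when the narrator declares the picture complete or the artist guesses it) can be sketched in plain JavaScript. All names here are hypothetical; the real Node.js app also handled networking and text-to-speech playback, which this sketch omits.

```javascript
// Minimal sketch of one Drawing Conclusions round (hypothetical names).
function createRound(hiddenPicture) {
  const instructions = []; // narrator's typed instructions
  let finished = false;

  return {
    // Narrator types an instruction; in the real game this string would be
    // sent to the chosen synthesized voice and played aloud to the artist.
    narrate(text) {
      if (!finished) instructions.push(text);
      return instructions.length;
    },
    // Artist guesses what they are drawing; a correct guess ends the round.
    guess(word) {
      if (word.toLowerCase() === hiddenPicture.toLowerCase()) finished = true;
      return finished;
    },
    // The narrator can also declare the picture complete.
    complete() {
      finished = true;
    },
    isFinished: () => finished,
    transcript: () => instructions.slice(),
  };
}
```

For the rectangle example above, a round might run `round.narrate('Start by drawing a big rectangle in the middle of the screen.')`, followed later by a correct `round.guess(...)` from the artist.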

Bryony Payne:
So we've got the game, we've got the environment: how do we get participants there? We had quite a complicated setup, especially as it all needed to be run remotely. We needed to start participants in Gorilla, which was our main test platform, getting them to choose a voice and answer questions, like why they chose that voice for themselves. We then needed to get them through to the drawing platform, the game platform we created, and then back again to Gorilla to answer questions about the bias towards that voice.

Bryony Payne:
And this is where the next tip comes in, which is really to manipulate tools to your needs. The tools exist; you just have to figure out how to use them to best effect. Gorilla supplies a redirect mode, which allows you to transfer participants from Gorilla out to a third-party platform. You can then embed a link into that platform and send them back into Gorilla when you're done. Importantly, the participant starts where they left off, which is really helpful for ensuring continuity between your tasks.
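The round trip can be sketched as two small URL builders: one carrying a participant identifier out to the game platform, one embedding the return link back into Gorilla. The query-parameter name used here is an assumption for illustration; check Gorilla's redirect-mode documentation for the exact parameters it sends and expects.

```javascript
// Sketch of the redirect round trip (parameter name 'id' is an assumption).
function buildGameUrl(gameBase, participantId) {
  const url = new URL(gameBase);
  url.searchParams.set('id', participantId); // carry the Gorilla ID along
  return url.toString();
}

function buildReturnUrl(gorillaReturnBase, participantId) {
  // Embedding this link in the third-party platform sends the participant
  // back into Gorilla, which resumes the experiment where they left off.
  const url = new URL(gorillaReturnBase);
  url.searchParams.set('id', participantId);
  return url.toString();
}
```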

Bryony Payne:
It was also important to think about how we were going to recruit participants. How are we actually going to get participants to do the study at all? Ordinarily, in person, we might recruit pairs of participants to come into the lab at preset times, but that, again, wasn't possible. So instead we recruited online via Prolific. Prolific is normally associated with the main benefit of recruiting hundreds of participants at once, but you can also use it to recruit very small and very controlled numbers at a time.

Bryony Payne:
So we started a Prolific study and only opened up two available spaces. We then recruited participants two at a time into the drawing game. We'd let them complete the study while it was paused in Prolific, and then gradually increase the places by another two participants. This was a very controlled way of getting participants through our study: we could track their progress, and it also meant that if anybody withdrew from the study, we had an immediate pool of people ready to take over.

Bryony Payne:
So overall, I think that in order to navigate people through multiple platforms and different tools, it's really important to use clear and well-piloted instructions. We also used video instructions for things like explaining how to play the game before they got to the game platform. Video instructions are just a really good way of getting people to take in a lot of information in one go, in a more engaging way. And importantly, they actually can't skip past them in Gorilla, so they have to listen to them before they move on.

Bryony Payne:
So we've got the participants there. The next question was: how are we going to keep them there? This was a very long study: about an hour on average, but some people took a lot longer. And to be honest, the game was the fun half, and that was the first half. So how do you keep people in your study after that, rather than having them just play the game and then go make a cup of tea? Firstly, we needed to make sure that it ran as smoothly as possible before we even began. Obviously, everyone has said pilot, pilot, pilot, and that's very true. I also limit browsers: I find Chrome to be the least glitchy, especially when working across multiple platforms; Chrome presented the fewest issues.

Bryony Payne:
If you're using auditory stimuli in a task, I sometimes find in Gorilla that the first auditory stimulus doesn't play, or doesn't play quite on time, and that can throw participants off. So I actually use a dummy sound at the beginning of tasks that include auditory stimuli. That dummy sound is normally just a period of silence that participants don't even know has been included, but it means that by the time the first proper sound wants to play, it's ready to go and runs more smoothly. I also make really good use of progress bars; participants are really grateful to have them in a study. And if I can't have a progress bar on screen, because I don't want it to be there officially, I always tell participants how long the next section of the study is going to take.
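The dummy-sound trick amounts to prepending an unnoticed silent clip to the stimulus list so the browser's audio pipeline is warmed up before the first real stimulus. A minimal sketch, with a placeholder file name (the actual silence duration and file are whatever your task uses):

```javascript
// Prepend a short silent clip so the first real auditory stimulus plays on
// time. 'silence_500ms.wav' is a placeholder name, not a Gorilla asset.
function withDummySilence(stimuli, silenceFile = 'silence_500ms.wav') {
  // Participants never notice the silence, but by the time the first real
  // sound is requested, the audio pipeline is ready and playback is prompt.
  return [{ file: silenceFile, isDummy: true }, ...stimuli];
}
```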

Bryony Payne:
And if you've done all of those things, you should have good data, and you can see these are the results from my study. Here you can see that in the drawing game group, the people that chose a voice and played the game prioritized the self-voice significantly more in comparison to the other voices that they heard in that game. But in comparison to a control group who just chose the voice and didn't play the game, the results are exactly the same: there's no significant difference. So here we can show that there is perceptual prioritization of voices that we own or voices that we've used, but that prioritization isn't modulated by whether, or how, we've used them in an interaction with another person.

Bryony Payne:
So thanks very much to my lab, to Angus Addlesee, to CereProc for their synthesized voices, and to Gorilla and Prolific. And if you've got any questions, please do drop me an email. Thank you very much.

Speaker 2:
Thank you very much, Bryony. If there are any questions for Bryony, do drop them in the Q&A. We have time to answer one now; otherwise it's going to be me asking the question. So do keep questions coming into the Q&A, and I'm going to quickly ask Bryony a question. You talked about presenting sound in these sorts of paradigms, and obviously there was a whole session yesterday where two people were talking about doing online testing with sounds. Have there been any other issues that you've run into in presenting auditory stimuli online?

Bryony Payne:
So I tried to create an intentional binding paradigm. I'm not sure if people here are familiar with that, but it relied on having very accurate onset and offset times for auditory stimuli. I actually found that the biggest issue was that some browsers present a lag between when Gorilla, for instance, reports that the sound has started and when the browser has actually played it. That's mostly why I use Chrome: it seems to have the least variation across participants in when the onset of the auditory stimulus is actually played, whereas one browser, I think it was Safari, sometimes had up to 500 milliseconds of lag. And because my study relied on a very accurate and precise measure of time in milliseconds, a lag that big is a big issue. I know that Gorilla has produced quite a lot of work and preprints about the timing accuracy of these things, but for the sake of minimizing it in my study, I always just use Chrome.
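If you do log both the reported and the actual onset of each sound, the lag check described here is a simple calculation: lag = actual onset minus reported onset, with trials above some tolerance flagged for exclusion. A sketch, where the field names and the 50 ms threshold are illustrative assumptions, not values from the study:

```javascript
// Flag trials whose audio onset lag would corrupt millisecond timing.
// Field names and the default threshold are illustrative assumptions.
function flagLaggyTrials(trials, maxLagMs = 50) {
  return trials
    .map(t => ({ ...t, lagMs: t.actualOnsetMs - t.reportedOnsetMs }))
    .filter(t => t.lagMs > maxLagMs);
}
```

Run over per-trial logs, this would surface the kind of ~500 ms Safari lags mentioned above while leaving well-timed Chrome trials untouched.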

Speaker 2:
Thank you very much. I think there are some more questions coming in, but I'm going to say thank you very much to Bryony, and also to all our speakers in this opening session of the Buffet of Online Research. We're going to hand back over to Jo. Thank you very much.


Get on the Registration List

BeOnline is the conference to learn all about online behavioral research. It's the ideal place to discover the challenges and benefits of online research and to learn from pioneers. If that sounds interesting to you, then click the button below to register for the 2023 conference on Thursday July 6th. You will be the first to know when we release new content and timings for BeOnline 2023.

With thanks to our sponsors!