Gloria W Feng — Yale University
Surprising sensory events are common in daily life but often behaviorally irrelevant. Here, we tested whether incidental surprises influence decision making across six online experiments built on the Gorilla.sc platform.
Participants (n=1200) made choices between risky and safe options, with each option presentation preceded by a task-irrelevant six-tone auditory sequence. In two experiments (each n=200), “common” sequences, heard before 75% of trials, consisted of six identical tones, and “rare” sequences, heard before 25% of trials, ended with a novel deviant tone. Rare sequences simultaneously increased risk taking and increased switching away from the option chosen on the previous trial.
Our computational model captured these two changes with value-independent risk-taking and choice perseveration parameters, respectively. When sequence probabilities were reversed, such that rare sequences consisted of six identical tones, participants still increased option switching after hearing these sequences but did not increase risk taking. In two control experiments, both effects were eliminated when sequences were presented in a predictable manner. The choice-switching effect may therefore arise not from tone novelty but from recognizing surprising sequences.
Thus, we find evidence for two dissociable influences of sensory surprise on decision making. Aberrant sensory processing is implicated in psychiatric disorders including schizophrenia and psychosis. Our findings offer a new way to evaluate patients and treatments by examining the relationships between sensory prediction errors and behavior.
Altogether, we find that surprising sounds systematically alter human behavior, identifying a previously unrecognized source of behavioral variability in everyday decision making.
Full Transcript:
Gloria Feng 0:00
Thank you, everyone. My name is Gloria, and I will be talking about my project, titled “Surprising sounds influence risky decision making.” This project was conducted fully online and consists of four main experiments, which I ran over the course of about a year. I’m excited to share with you its results and also what it has taught me about online research.
So to start, imagine that you’re in a busy city. In this urban jungle, there are surprising sensory events everywhere: the sound of a car honking for you to get out of the way, or the sound of a subway car approaching you. In response to these surprising sounds, you might quickly change your behaviour. This could mean changing course on the pedestrian crossway or quickly jerking backwards away from the platform edge. In both of these instances, an immediate behavioural response to a surprising sound can be truly adaptive, because it can protect you from an immediate danger or alert you about a potential reward. However, after being in these environments for a while, you might notice that most of the time, the abundant noises you’re surrounded by are actually behaviorally irrelevant.
Think about a time when you were stuffed inside a crowded subway car on your morning commute. While you’re trying to focus on a crossword puzzle, or as you’re writing a message to a friend, you hear someone’s ringtone going off to the side, or the sound of a conversation happening in the background. These are also surprising sensory events, but it’s a little less intuitive what kind of immediate effects they might have on your behaviour, if any, and, if so, whether those effects are systematic or not. So this is what we were wondering: do task-irrelevant, behaviorally irrelevant surprising sounds really affect our behaviour? And we decided to look at this general question in the smaller domain of risky decision making.
And so now the question becomes: do surprising sounds systematically affect our risky decision making, even when those surprising sounds are task irrelevant? This is the way that we took a stab at this question. We asked participants to make choices between a risky gamble option (essentially an unbiased coin flip) or a safe option on every trial. With a keyboard press, they can choose the risky option and, after a short delay, see whether they’ve won or lost, or they can choose the safe option by pressing another key.
On every trial, participants see one of three different trial types. They can see a gain trial, at the top, which features a potential gain against a smaller potential gain; a loss trial, which involves only potential losses; or, finally, a mixed trial, which contains a mixture of potential gains and potential losses. So this is a pretty standard paradigm used to measure and capture people’s risk-taking preferences. The key, though, is that we introduced auditory sequences to this paradigm.
So in the inter-trial interval, in the three seconds before participants are shown their next set of options, they passively listen to a six-tone auditory sequence. What’s important to note is that these auditory sequences are task irrelevant, in the sense that whatever sounds participants hear are completely not predictive of what they’re going to be shown next, nor of the rewards they’re going to get. On a majority of trials, 75% of them, participants hear what I’ll call a common sequence; that’s the bottom row here. A common sequence, as shown in the graphic, consists of six identical tones. I’m going to try to play that for everyone, and I hope it’s not too loud. Let me see. Okay, yeah, so I just played the common sequence of six identical tones. Now, on the minority of trials, the remaining 25%, people actually hear a rare sequence. A rare sequence starts off the same way the common sequences do, with five tones, but it has a different ending: as the graphic shows, it ends on a tone with a different pitch. So the rare sequence will sound like this.
4:14
Okay, so now you can imagine that on a rare trial, participants would sometimes be surprised, and we wanted to analyse how risky decision making differs on these rare trials as opposed to common trials. Okay, so this is how we approached the way that we collected our data.
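For concreteness, here is a minimal sketch of the trial structure just described. The trial count, tone pitches, and trial-type randomisation shown here are illustrative assumptions, not the actual task parameters:

```python
import random

N_TRIALS = 120                         # hypothetical trial count
P_RARE = 0.25                          # rare sequences precede 25% of trials
COMMON_TONE, DEVIANT_TONE = 440, 554   # Hz; hypothetical pitches

def make_sequence(rare: bool) -> list:
    """Six-tone sequence: five standards, then either a sixth standard
    (common sequence) or a pitch deviant (rare sequence)."""
    tones = [COMMON_TONE] * 5
    tones.append(DEVIANT_TONE if rare else COMMON_TONE)
    return tones

trials = []
for t in range(N_TRIALS):
    rare = random.random() < P_RARE    # sounds are not predictive of options
    trials.append({"sequence": make_sequence(rare),
                   "rare": rare,
                   "trial_type": random.choice(["gain", "loss", "mixed"])})
```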
So there were several factors that drew us to conducting all of our experiments online, because of the many advantages of online research. These include access to large pools of participants and the ability to collect large and diverse samples; for example, on Prolific there’s the option to gender-balance samples, which is a very nice feature. Probably the biggest advantage, though, is that it’s extremely time efficient to conduct studies online. Typically in the lab, if you wanted to collect a dataset of 100 participants, it could take months and be incredibly expensive in time and money to run. The fact that we can press a button and essentially collect our whole dataset in a day is a huge plus.
However, all of this flexibility and convenience does come at the expense of having maximum experimental control over our participants’ environment. In our case, the crux of our study was to see how a specific sound manipulation can influence people’s behaviour, so it was extremely important for us to make sure the sound manipulation was really doing its job, so that we know our results can be trusted and are valid. Thus, we came up with four considerations, which address some of the common disadvantages of online research.
The first one is the question: is the sound even on? This sounds almost trivial. However, when people are doing experiments at home, they’re using different browsers and might have ad blockers on, so there’s no guarantee that they can actually hear the sounds; technical issues can silence them. Another one is: does the audio have sufficient sound quality and clarity? This is a big one, because we can’t control the types of auditory equipment people use, and the variability there is immense, so we wanted to find a way to constrain it. The third one is: are there distractions or background noise? Participants could be doing this outside, in public, or at home. And especially given that our study is all about how irrelevant surprising sounds affect people’s behaviour, we definitely want those irrelevant, surprising sounds to come not from their own environments but from our task specifically.
And finally, we wanted to make sure that participants were following basic instructions. This is not specific to our study; in general, we want participants to be attentive, compliant with instructions, and generally doing our task in good faith. So now I’ll show you how we structured our experiments. We used Gorilla as the hosting and experiment-building platform, and you can see here a graphic I drew summarising the experiment tree that participants progressed through. In the first five minutes of the task, we have participants complete two screeners. Both of these were sourced from Gorilla’s open materials library, which is nice.
And so the first one is the browser autoplay sound check, which is super basic. All it does is play two seconds of a music clip and ask people whether or not they can hear the music, yes or no. If they can’t, it leads people through some instructions on how they might disable an ad blocker or otherwise fix the problem. If they still can’t hear it, participants are given the option to exit the study early and return their submission. We thought this would be adequate for addressing consideration one, whether the sound is on or not. Then people progress to the headphone screen. This is based on the loudness judgement test developed by Woods and colleagues. Essentially, it has participants listen to a sequence of three auditory tones, and participants then have to label which one sounds the quietest. What’s important to note is that this screener is really easy to pass if you’re wearing headphones, but it’s difficult to discriminate between the three tones if you’re playing sound from your computer speakers without headphones. Essentially, those who achieved five or more correct out of six on this screener would pass.
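As a rough illustration of the pass rule just described (the function and trial data here are hypothetical, not Gorilla’s actual implementation):

```python
# Hypothetical sketch of the headphone-screen pass rule: six
# loudness-judgement trials, pass with five or more correct.
def passes_headphone_screen(responses, answers, threshold=5):
    """responses/answers: per-trial judgements of which of the
    three tones (1, 2, or 3) sounded the quietest."""
    n_correct = sum(r == a for r, a in zip(responses, answers))
    return n_correct >= threshold

# Example: a participant correct on five of six trials passes.
print(passes_headphone_screen([2, 1, 3, 2, 2, 1],
                              [2, 1, 3, 2, 2, 3]))  # True
```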
8:22
So overall, these two screeners at the beginning resulted in around a 30% exclusion rate in our experiments, and we collected enough data that, by the end, we were able to analyse 200 participants in our main risk-taking task.
Okay, and very quickly, I’ll now talk about some of the specifications we used for Prolific, which was our main platform for recruiting participants. For the device requirements, we made explicit that a desktop is required and that there’s audio in the experiment. And in the study description, we tried our best to be as upfront and clear as possible. This is not what we wrote for participants verbatim, but essentially we wanted to get two messages across: that people’s ad blockers should be turned off, and that headphones are a required part of doing this experiment. We wanted to put that upfront before people even accepted the study and did the screeners. And lastly, for the pre-screening we did on Prolific, we actually kept it quite loose. We excluded participants from our previous studies: we had used Prolific to recruit participants for pilot versions of earlier iterations of our experiment, so of course we didn’t want to invite those same participants back into our main study.
Alright, so now that I’ve gone over the nuts and bolts of how we ran this experiment, I’ll talk about the results we found. In front of you, at the top, you see the paradigm descriptions of Experiments 1 and 2, for each of which we collected 200 participants. They’re virtually identical in structure: 75% of trials are common and 25% of trials are rare, with a deviant ending. The only difference is that for Experiment 2, we periodically switched the sides of the stimuli, left and right, every 10 trials or so. But otherwise, our predictions for the two experiments were very similar.
So what we found was that surprising sounds increase people’s risk taking. In the plot you see on the left, I took the difference of the risk-taking rate on rare trials minus common trials. Since these bars are significantly positive, that suggests that people are taking more risks on rare trials relative to common trials. And what’s nice is that Experiments 1 and 2 are both in agreement on this result.
But from these plots alone, we can’t tell whether this increase in risk taking is driven by only a subset of trials. One question I had was whether it is driven by gain trials only, for example. So I combined the two datasets, so that I had enough data, and broke out all the trials into gain, mixed, and loss trial types. Then I plotted that against the rate at which people chose the risky option. What you can see here, based on this stair-step-looking pattern, is that irrespective of trial type, so in all three trial types, participants showed increased risk taking for rare versus common trials. What this suggests is that not only are people taking more risks in general, it’s happening across all trial types, irrespective of whether there are potential wins or potential losses at stake.
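A minimal sketch of this model-independent analysis, assuming a long-format trial table; the column names here are illustrative, not the authors’ actual variable names:

```python
import pandas as pd

def rare_minus_common(df: pd.DataFrame) -> pd.Series:
    """Per-participant risk-taking rate on rare minus common trials.
    Expects columns: participant, rare (bool), chose_risky (0/1)."""
    rates = (df.groupby(["participant", "rare"])["chose_risky"]
               .mean()
               .unstack("rare"))        # columns: False (common), True (rare)
    return rates[True] - rates[False]   # positive -> more risk after rare

def rare_minus_common_by_type(df: pd.DataFrame) -> pd.Series:
    """The same difference, averaged within gain / loss / mixed trial
    types. Expects an additional trial_type column."""
    rates = (df.groupby(["participant", "trial_type", "rare"])["chose_risky"]
               .mean()
               .unstack("rare"))
    return (rates[True] - rates[False]).groupby("trial_type").mean()
```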
So with this kind of systematic risk-taking effect, we went on to capture it in a computational model. One of the advantages of using this really basic risk-taking paradigm is that it’s very well characterised computationally. There’s a foundational theory called Prospect Theory, which captures people’s risk-taking preferences as a function of their loss aversion (there’s a parameter for that), plus parameters for risk aversion for gains and for losses, and finally a choice stochasticity parameter.
On top of Prospect Theory, we added an additional risky bias difference parameter, which is essentially a value-independent bias that captures the difference between risk taking on rare trials versus common trials. A positive risky bias difference parameter indicates increased risk taking on rare trials, whereas a negative one captures a bias towards the safe option. On the left is a plot that you’ve already seen, and on the right I took the model-derived risky bias difference parameter fit for the two experiments. What we can see is that it’s significantly positive in both experiments, matching what we see in the model-independent analyses. So this is quite reassuring, actually, that we found this.
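For intuition, here is a rough sketch of one common way to parameterise such a model; the exact functional forms and parameter names used in this project are assumptions here, not the authors’ actual model code:

```python
import numpy as np

def utility(x, rho_gain, rho_loss, lam):
    """Prospect-theory utility: risk attitudes for gains and losses
    (rho_gain, rho_loss) and loss aversion (lam)."""
    return x ** rho_gain if x >= 0 else -lam * (-x) ** rho_loss

def p_choose_risky(win, lose, safe, params, rare_trial):
    """Probability of choosing a 50/50 gamble (win, lose) over a sure
    amount (safe), with a value-independent risky bias that can differ
    between rare and common trials."""
    u = lambda x: utility(x, params["rho_gain"],
                          params["rho_loss"], params["lam"])
    eu_risky = 0.5 * u(win) + 0.5 * u(lose)
    eu_safe = u(safe)
    # bias_diff is the 'risky bias difference' parameter: positive values
    # mean extra risk taking on rare trials, over and above option values.
    bias = params["bias"] + (params["bias_diff"] if rare_trial else 0.0)
    # mu is the choice stochasticity (inverse temperature) parameter.
    return 1.0 / (1.0 + np.exp(-(params["mu"] * (eu_risky - eu_safe) + bias)))
```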
12:49
Okay, so on the left, you see the experiment designs for Experiments 1 and 2, and what we found is that following rare sequences, participants increase their risk taking. However, from the way these two experiments are designed, we’re not sure if people are taking more risks because they recognise that they’ve heard a rare sequence, which happens 25% of the time, or because they are simply reacting to the deviant tone at the end of the rare sequence.
So what I did was devise two other experiments, Experiments 3 and 4, such that the rare sequence no longer ends in a rare or novel tone; instead, it ends on the common tone. The question now becomes: after rare sequences, do people still increase their risk taking? As you can probably guess from the title, this actually completely eliminates the effect. So now I’m showing that when I, in a sense, remove the local surprise from the rare sequence, I get rid of the risk-taking effect, which is a really cool and striking result.
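Reusing the hypothetical helpers from the earlier trial-structure sketch, the Experiments 3 and 4 manipulation amounts to changing only the sequence endings; per the abstract, the sequence probabilities are reversed, so the 25% rare sequence is now the one with six identical tones:

```python
def make_sequence_exp34(rare: bool) -> list:
    """Experiments 3/4 sketch (hypothetical pitches, as before): the 25%
    'rare' sequence is now six identical tones, so it contains no locally
    deviant ending, while the sequence probabilities stay 75/25."""
    tones = [COMMON_TONE] * 5
    tones.append(COMMON_TONE if rare else DEVIANT_TONE)
    return tones
```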
All right, so let me summarise what I found. Incidental surprising sounds really do systematically increase risk taking; I showed that in Experiments 1 and 2, and I showed that this effect is consistent across both behavioural and computational-modelling-based analyses. Next, I showed that the risk-taking effect of surprise can be eliminated simply by slightly tweaking the statistics of the auditory surprise. And as I showed with the methods and how I built the experiment, headphone screeners were used to enhance data quality and helped address the challenges, you could call them disadvantages, of online research.
So that’s the thing about like these disadvantages of online research, right, such as, for example, having poor control over experiments setting or having lack of experimenters supervision, at the end of the day, these could all turn out to be a huge advantage at the end, which is something I found once you’ve established your results. So essentially, the participants from my study were recruited from over 12 different countries. Were in the presence of potential distractions and we’re probably doing the experiment during different times of the day. And yet, despite all of that, we’re still able to characterise clear systematic effects of surprising sounds on people’s risky decision making that were robust to all these varying conditions.
So doing this experiment online instead of in the lab definitely made things harder for us in some ways, because of the workarounds we needed, but it ultimately made the results feel a lot stronger and more generalizable. I think this gives me a lot of optimism about doing online research in the future. It can be daunting, but also really rewarding in the discoveries it allows us to make. Thank you very much. That’s the end.
Jo Evershed 15:37
Gloria, that’s amazing. I love that last point you were making, how we, we love the control of the lab, it feels safe. And, and controlled. I’m sorry for all the controls lab, but it makes our results less robust and less reliable. And of course, taking the research online makes it harder to get it right and to get it get that data and to design your experiment so that it so that it works and that you’re, you believe the data, but when it does work, you feel much more confident that the result is robust, and is going to persist into different environments. So that was a really lovely point at the end.
I did have one question for you. Once participants have passed the headphone screener at the beginning, how do you make sure that participants continue to play the task with sound throughout the whole experiment? Is that something you looked at?
Gloria Feng 16:28
Yeah, that’s a really good question, Joe. So one thing that I didn’t discuss in this experiment was I only focused on the pre screeners that happened before the experiment. So we actually had some questions lesson checks, during the actual main experiment, the main task, so what we’ve done was that we call this like expectation checks where participants have to listen to, they have to play both tone sequences, and then they have to label which one they felt was common or not. So this was kind of a test of like, whether they’ve, you know, listened to instructions on keeping their headphones in, throughout the task, or also have they comprehended the task enough to kind of distinguish between what is rare and common. So we asked this, both at the beginning of the experiment, and also at the end of the experiment.
We did realise that some bad actors could potentially not be wearing headphones throughout the whole experiment and then, when they see a question like this pop up, put the headphones back in, listen, and answer it correctly. So in that way, the check is still corruptible, but we think that probably can’t be very common. And thankfully, after looking at my data, people’s average accuracy on these expectation questions was above 95%, which was quite reassuring overall. So it sounded like, after the screener, people did seem to be quite good at the task and able to discriminate between the different sequences.
Jo Evershed 17:56
Yeah, so it sounds like we don’t get many bad actors on Prolific, which is what they promised us. I don’t know if you were here for the talk from Prolific this morning; they do quite a lot of checks to make sure we get good-quality participants from them. So that’s all very reassuring. There is one question in the Q&A, from Laura: this result seems opposite to what one would expect. How do you interpret the increased risk taking after surprising sounds if people might process them as a threat cue? Although your tones are fairly neutral, I guess.
Gloria Feng 18:26
Yeah, exactly. Thanks, that’s a great question, and it’s something we had been thinking about a lot: what is the valence of these surprising sounds? We tried our best to keep them as neutral as possible, in that we weren’t using stimuli that are known to be aversive, such as screams or things like that. It is very interesting that it increases risk taking. The way we were understanding this effect was that there is a kind of orienting response that participants might have when they hear the surprising tone, which we thought was consistent with approach behaviour. You can think of a naturalistic example, like a frog: suddenly a stimulus comes up, like a fly, and the frog immediately decides to approach that stimulus. So based on the behavioural results we found, this seems like a value-independent approach motivation, which we thought was consistent with a bias towards risky decision making. Yeah.
Jo Evershed 19:34
Brilliant, thank you, Gloria. Thank you so much for your time Gloria.