Alex Irvine, MRC Cognition and Brain Sciences Unit, Cambridge
@alexanderirvine
My talk at BeOnline 2020 will be on the accuracy and consistency of visual presentation and recording of keyboard responses online. So if you are running research that involves presenting images for short durations, or that looks at small differences (less than a few hundred milliseconds) in reaction times, then this talk will be of interest to you. The preprint paper I am presenting looks at four platforms (Gorilla, jsPsych, PsychoPy/Pavlovia and lab.js) across four web browsers and four device categories. It is split into two parts.

In the first part, we used a photodiode to detect the onset of stimuli on the screen, and a robotic actuator to press the keyboard at specific times. This allowed us to measure two key metrics: accuracy (the difference between expected and actual timing) and precision (the variability of those differences). The philosophy behind how we set up these testing rigs was to represent as closely as possible the devices that participants would have in their own homes, so we created the tasks on each platform as a normal user would and used standard keyboards.

In the second part of the paper, we look at the demographics of a large sample of Gorilla participants (200,000 people): where they are, what platforms were used to recruit them, and which devices and browsers they used. This allows us to put the timing results from the first part into context.

The results show that timing was very good for most platforms and devices, and that the most common browser and device combination, Chrome on a PC, showed sufficiently good timing. If you want to see the results in full, you should virtually attend my talk, and if you are really keen, go and find the preprint on PsyArXiv beforehand. Thanks, and hope to see you there.
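
P.S. If you are wondering exactly what I mean by accuracy and precision above, here is a minimal sketch (in Python, with made-up numbers rather than anything from the paper or our actual analysis code) of how the two metrics can be computed from intended versus photodiode-measured stimulus durations.

# Minimal illustrative sketch, not the analysis code from the paper.
# The durations below are invented purely for illustration.
from statistics import mean, stdev

intended_ms = [200, 200, 200, 200, 200]            # durations the task requested
measured_ms = [216.7, 200.1, 216.6, 199.9, 216.8]  # durations measured with a photodiode

# Difference between actual and expected timing on each trial
differences = [m - i for m, i in zip(measured_ms, intended_ms)]

accuracy = mean(differences)    # average difference between expected and actual timing
precision = stdev(differences)  # variability of those differences

print(f"Accuracy: {accuracy:.1f} ms, Precision: {precision:.1f} ms")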