Lorijn Zaadnoordijk, Trinity College Dublin
@LorijnSZ
Infant research frequently relies on looking behavior. In many cognitive domains, looking time paradigms are used, for example, to assess what infants anticipate, whether they can discriminate between stimuli, or whether they have learned something during the experiment. In addition, there has recently been renewed attention to how infants' looking behavior in their environment shapes what and how they learn. Infants do not passively receive information; they actively seek it out by attending to certain types of organisms or objects over others at different stages of development.
As such, developmental scientists would ideally run large-scale, cross-sectional looking behavior and learning studies. Such studies, unfortunately, are time-consuming and costly. There has been increasing attention to reducing the costs and improving the sample sizes of experimental infant studies, which are typically small (Bergmann et al., 2018). The logistical and practical challenges of coming to the lab, as well as the narrow age ranges under study, are partially responsible. Online data collection offers the possibility of acquiring larger samples, as participants can stay at home and take part at a convenient time. This makes large-scale studies more feasible in terms of both recruitment time and reaching a more diverse population. Being able to conduct looking behavior studies online would therefore open exciting novel research possibilities.
However, unlike in the lab, where state-of-the-art eye-trackers may be available, the looking data acquired online come from a webcam. These recordings are noisier and provide neither gaze direction nor a reference axis for removing the effects of head motion. Manual coding of these data is labor-intensive and prone to day-to-day and interrater variability. An automated approach would thus be desirable and would improve reproducibility. However, although services such as Amazon Rekognition can relatively reliably indicate whether the infant is looking at the screen, detecting whether the infant is fixating on the left or right side of the screen remains problematic (Chouinard, Scott & Cusack, 2019).
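To make the kind of automated approach mentioned above concrete, the sketch below shows how one might query Amazon Rekognition's face-detection API for a head-pose (yaw) estimate on a single webcam frame and threshold it as a crude left/right proxy. This is an illustrative assumption, not the pipeline used in the cited work: the function, the yaw threshold, and the frame handling are hypothetical, and, as noted above, such proxies remain unreliable for infant left/right fixations.

# Illustrative sketch only: Rekognition's DetectFaces head-pose estimate
# used as a rough left/right proxy. Threshold and helper are assumptions.
import boto3

rekognition = boto3.client("rekognition")  # requires configured AWS credentials

def classify_frame(jpeg_bytes, yaw_threshold_deg=10.0):
    """Return 'left', 'right', 'centre', or 'no face' for one webcam frame."""
    response = rekognition.detect_faces(
        Image={"Bytes": jpeg_bytes},
        Attributes=["ALL"],  # request pose and other facial attributes
    )
    faces = response["FaceDetails"]
    if not faces:
        return "no face"

    face = max(faces, key=lambda f: f["Confidence"])  # keep the clearest detection
    yaw = face["Pose"]["Yaw"]  # degrees; sign convention depends on camera mirroring

    if yaw > yaw_threshold_deg:
        return "right"
    if yaw < -yaw_threshold_deg:
        return "left"
    return "centre"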
In addition to testing infants online, we have been exploring methods to improve the accuracy of gaze estimation algorithms for such experiments. In this presentation, I will describe the unique challenges that developmental scientists face when transitioning to online testing. I will touch upon a range of topics (from ethics to analyses) while describing our experiences with online looking behavior studies and the solutions we and others have explored. Finally, I will present various state-of-the-art resources and initiatives that play a key role in online testing in the developmental science community.