Cognitive Science Lunch Time Talk - Benjamin deMayo & Isaac Christian

Feb 1, 2024, 12:00 pm – 1:00 pm



Event Description

Benjamin deMayo, Psychology and Cognitive Science Graduate Fellow, Princeton University

"Stability and change in gender identity across childhood and adolescence"

Abstract: As transgender and gender diverse youth experience unprecedented levels of societal visibility and debates about their legal rights rapidly gain steam, scientific research on how youth conceptualize their gender identities over time has remained scarce. In my talk, I will discuss results from a longitudinal study of gender identity in youth from the United States and Canada that aims to address this gap. Data on gender identity have been collected from three groups of youth (and their parents) over a period of 5 to 10 years: (1) an initially transgender group, who underwent social transitions (i.e., changing names, pronouns, hairstyle, and clothing to align with one’s gender, rather than one’s assigned sex) by age 12; (2) siblings of the initially transgender group (who were initially cisgender at the beginning of the study); and (3) an unrelated initially cisgender group, a sample of community-recruited youth matched in age and gender to those in the initially transgender group. I will address how much these youths’ conceptualizations of their own gender have or have not changed longitudinally, whether this differs among the three groups, and whether the youths’ self-reports of their identities align with their parents’ reports.

Isaac Christian, Psychology/Neuroscience and Cognitive Science Graduate Fellow, Princeton University

"Comparing Signatures of Attention in an Artificial Neural Network and Humans in a Social Task"

Abstract: Cognitive science has leveraged deep learning to contrast the ways in which artificial and human agents process information. For example, researchers have examined the similarity between human eye tracking data and attention maps in Artificial Neural Networks (ANNs) during image captioning, finding that ‘bottom-up’ salience models better predict human attention than ‘top-down’ ANN models (Borji, Cheng, Jiang, & Li, 2015; He, Tavakoli, Borji, Mi, & Pugeault, 2019). In this way, ANNs trained on the same task can be used to contrast distinct models of attention. Any good model of visual attention should also predict spatiotemporal patterns of attention in other task domains. In social cognition, cues indicative of attention state, such as eye gaze, are thought to provide a basis for inference about abstract mental states (Calder et al., 2002). In this work, we present preliminary findings comparing attention mechanisms between humans and an ANN in social and non-social image captioning tasks.