Programme

Schedule

  • Thu 13 June
  • 8:00 – 9:00 Registration & Welcome
  • 9:00 – 10:00 Keynote: Anne Cutler
  • 10:00 – 10:30 Coffee Break
  • 10:30 – 12:30 Talk Session 1: Input & Development
  • 12:30 – 13:30 Lunch Break
  • 13:30 – 15:00 Talk Session 2: IDS & Phonology
  • 15:00 – 15:30 Coffee Break
  • 15:30 – 16:30 Keynote: Elizabeth Johnson
  • 16:30 – 18:00 Poster Session 1 & Wine Reception
  • Fri 14 June
  • 9:00 – 10:00 Keynote: Laura Bosch
  • 10:00 – 10:30 Coffee Break
  • 10:30 – 12:30 Talk Session 3: Bilingualism
  • 12:30 – 13:30 Lunch Break
  • 13:30 – 15:00 Talk Session 4: Various
  • 15:00 – 16:30 Poster Session 2 & Coffee
  • 16:30 – 17:30 Keynote: Derek Houston
  • 17:30 – 18:30 Panel Discussion
  • 19:00 Conference Dinner
  • Sat 15 June
  • 9:00 – 10:00 Keynote: Nivedita Mani
  • 10:00 – 10:30 Coffee Break
  • 10:30 – 12:30 Talk Session 5: Word Learning
  • 12:30 – 13:30 Lunch Break
  • 13:30 – 14:30 Talk Session 6: Entrainment
  • 14:30 – 15:30 Keynote: Sharon Peperkamp
  • 15:30 Poster Prize & Farewell


    Presentations

    Information about individual presentations, with abstracts.

    Keynotes

    Anne Cutler (MARCS/Western Sydney University)

    Language-specificity in processing, and its origins

    There was a time (really, there was) when the fact that infants could acquire whatever language they were exposed to was held to imply that language production and comprehension involved only universal processes, the same for everyone, with the processing of one versus another language differing only in the phonemic repertoire, grammar and lexicon upon which the processes drew. Then, it was quite OK to read about an experiment done in Language X, conduct a follow-up with a tweak in the design which led to different results, and publish this finding without even mentioning that the follow-up study was not conducted in Language X, but in Y.
    We know better now. The infant brain is furiously busy developing language-specific processes as well as the phoneme repertoire, the grammar and the lexicon they deal with. The talk will address some of the still open issues concerning which processes are language-specific, and what sources of evidence drive the development of language-specific processing.

    Elizabeth Johnson (University of Toronto, Mississauga)

    How infants build a lexicon from naturally variable input

    The realization of spoken words is highly variable. For example, the word car produced by a male who learned English in Australia would sound quite distinct from the word car produced by a female who learned English in Canada. And the production of this same word would vary further depending on a wide range of factors like the talker’s mood, the age at which the talker acquired English, the relationship between the talker and the addressee(s), and phrasal context. This variation in the realization of words does not trouble adult listeners. But how do children – who are still acquiring the phonology and lexicon of their native language(s) – cope with this variation? In this talk, I will discuss several lines of research concerned with when and how children acquire their native language(s) in the face of naturally variable input.

    Laura Bosch (Universitat de Barcelona)

    Predicting language outcomes from early speech perception skills in infants at risk for language-related difficulties

    The connection between performance on a number of speech perception tasks in the first year of life (e.g. sound discrimination and speech segmentation) and later language development has already been established in typically developing children. The challenge of detecting infants at risk for language delays or disorders early, on the basis of their performance in speech perception tasks, is still with us. In this talk I will address this problem from the perspective of two different at-risk populations: prematurely born infants and those born at term but small for their gestational age. Information from short-term and long-term outcomes will be discussed.

    Derek Houston (Ohio State University)

    Real-time mechanisms of word learning during social interaction in young deaf children with cochlear implants

    Cochlear implants provide deaf children access to sound, but there is enormous variability in language outcomes after implantation – children who receive cochlear implants at younger ages tend to have better language outcomes than later-implanted children. However, we don’t know why early implantation is better. It is generally assumed that earlier implantation leads to better development of early speech perception skills, which, in turn, leads to better language outcomes. However, findings from my lab suggest a more complex story where early auditory experience affects children’s ability to associate novel words with their referents. I will present recent data suggesting that difficulties with novel word learning may be driven, at least in part, by the effects of auditory experience on the dynamics of parent-child interactions that form the foundation of word-learning skills.

    Nivedita Mani (Universität Göttingen)

    Why do children learn the words they do?

    Children go from knowing a handful of words at around 12 months of age to almost 30 times as many during their second year of life. While this pattern of lexical acquisition is similar across different children learning different languages, there are considerable differences in the individual words known to different children. For instance, German vocabulary data suggest that close to 50% of German 20-month-olds know either the word Bär (bear) or the word Bagger (digger), but not both (Wordbank). What determines whether a baby is a Bär baby or a Bagger baby? This talk will examine potential reasons for such individual differences, focussing on the interaction between the input available to different children and children’s active interest in learning some words over others.

    Sharon Peperkamp (ENS/CNRS, Paris)

    Some puzzles in perceptual attunement

    Decades of research have shown that infants attune to the speech sounds of their native language during their first year of life: while they initially discriminate native and non-native contrasts, perceptual reorganization results in decreased discrimination of non-native contrasts, and improved discrimination of native contrasts. This talk focuses on two outstanding issues in infants’ speech perception development, one concerning young infants’ discrimination of subtle acoustic contrasts, and the other one concerning older infants’ discrimination of native contrasts.

    Talks

    Talk Session 1: Input & Development – Thu 13 June 10:30 – 12:30

    Chair: Paula Fikkert

    Marina Kalashnikova (Basque Center on Cognition, Brain and Language, Spain)

    Exaggerated prosody in infant directed speech facilitates infants’ predictions of conversational turns

    To achieve smooth transitions between conversational turns, speakers must plan their responses while simultaneously anticipating the end of their interlocutor’s turn. Adults perform this task effortlessly by relying on a combination of non-linguistic and linguistic cues, but this task is more challenging for young children who take several years to develop adult-like turn-taking skills (Casillas et al., 2016).
    Caregivers engage in proto-conversations with their infants from their first months of life, thus scaffolding their early turn-taking abilities (Snow, 1977). It is also possible that the acoustic qualities of adults’ infant-directed speech (IDS) provide infants with exaggerated prosodic cues that signal turn transitions and facilitate the detection of these cues in conversational interactions. Compared to adult-directed speech (ADS), IDS is characterised by exaggerated prosody, which facilitates the detection of utterance boundaries in continuous speech (Ludusan et al., 2016). Previous studies indicate that three-year-old children can successfully predict conversational turns by relying on prosodic cues in ADS, but at this age, children already rely on prosody in combination with other lexicosyntactic cues (Lammertink et al., 2015). Thus, it remains unknown whether the exaggerated prosody in IDS can facilitate successful turn taking in younger children before they can access additional lexicosyntactic information in speech.
    This study assessed children’s ability to predict conversational turn transitions based on prosodic cues to utterance boundaries in IDS and ADS. Anticipatory gaze patterns were recorded in one- (N=20) and three-year-olds (N=18) while they observed eight 30-second videos depicting conversations between two puppets. Four conversations were recorded in IDS and four in ADS. The availability of prosodic cues signaling utterance boundaries was also manipulated: half of the turns in each conversation were prosodically complete, and half were spliced, resulting in prosodically incomplete but syntactically correct utterances.
    Mixed effects logistic regression models were constructed for each age group with children’s anticipatory gaze switches as the dependent variable, register (IDS vs. ADS) and prosody condition (complete vs. incomplete) as predictors, and random intercepts for participants. The one-year-old model yielded a main effect of register, β = .572, SE = .279, z = 2.051, p = .04, and a prosody condition by register interaction, β = -.91, SE = .394, z = -2.308, p = .02: one-year-olds overall produced more anticipatory gaze switches in IDS than in ADS, and they produced more switches for prosodically complete utterances in IDS but not in ADS. The three-year-old model yielded only a main effect of prosody condition, β = 1.023, SE = .282, z = -3.665, p < .001: three-year-olds produced more anticipatory gaze switches for prosodically complete utterances in both IDS and ADS.
    These findings indicate that one-year-old children successfully use prosodic cues to predict conversational turn transitions, but only when prosody is exaggerated, as is the case in IDS. By three years of age, children no longer rely on the prosodic exaggeration of IDS and are able to use subtle prosodic cues to predict conversational turn structure regardless of the speech register used in a conversation.
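    As an illustration of the analysis described above, here is a minimal Python sketch of a per-trial logistic regression of anticipatory gaze switches on register and prosody condition. The file and column names are assumptions, and for simplicity the sketch fits a plain logistic regression, omitting the by-participant random intercepts of the original mixed model.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical trial-level data: one row per turn transition, with
    # switch = 1 if the child shifted gaze to the upcoming speaker in time.
    df = pd.read_csv("gaze_switches.csv")

    # Register (IDS vs. ADS) and prosody (complete vs. incomplete) plus their
    # interaction, mirroring the fixed effects reported in the abstract.
    fit = smf.logit("switch ~ C(register) * C(prosody)", data=df).fit()
    print(fit.summary())  # coefficients are log-odds, comparable to the betas above
    ```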

    Ruth Brookman, Marina Kalashnikova, Janet Conti, Kerry-Ann Grant, Nan Xu-Rattanasone, Katherine Demuth & Denis Burnham (The MARCS Institute of Brain, Behaviour and Development, University of Western Sydney, Australia / Basque Center on Cognition, Brain and Language, Spain / Macquarie University, Australia)

    Maternal responsiveness mediates the link between maternal depression and infants’ language development

    Early mother-infant interactions play a significant role in shaping infants’ language development. These interactions, in turn, can be affected by a number of factors, one of which is maternal depression. The speech of depressed mothers differs from that of non-depressed mothers with regard to pitch characteristics, linguistic content, and the degree of contingent delivery (Kaplan et al., 2014). In addition, maternal depression has been linked to low maternal responsiveness, which refers to a mother’s ability to provide responses to her infant’s communication cues that are contingent, prompt, and developmentally appropriate (Bornstein, 1989). However, the relation between maternal depression, maternal responsiveness, and infant language outcomes remains unclear. This study investigated maternal depression and maternal responsiveness in mother-infant interactions during the infant’s first year of life and their impact on infants’ vocabulary size at 18 months.
    Forty-seven mother-infant dyads were followed longitudinally from infant age 6 to 18 months. Mother-infant dyads were classified into at-risk (n = 21) or control groups (n = 26) based on mothers’ psychological history and postnatal depression and anxiety measures (Center for Epidemiologic Studies Depression Scale – Revised, CESD-R; State-Trait Anxiety Inventory, STAI) obtained during the postnatal period (infant ages 6, 9, 12, and 18 months). Mother-infant interactions were recorded during play sessions when the infants were 9 and 12 months old. Video recordings of the play sessions were scored using the Maternal Responsiveness Global Rating Scale. Infants’ expressive vocabulary size was assessed at 18 months using the Australian English adaptation (OZI) of the MacArthur-Bates CDI.
    Maternal depression, anxiety, and responsiveness scores were averaged across time points for analyses. Comparisons of maternal responsiveness ratings (risk M = 2.93, SD = .88; control M = 3.04, SD = .80; t(46) = .44, p = .591) and infant vocabulary size (risk M = 60.19, SD = 65.26; control M = 77.19, SD = 49.14; t(45) = 1.0, p = .621) showed no statistically significant differences between the risk and control groups. However, correlational analyses showed significant relations between infants’ expressive vocabulary size and their mothers’ depression (r(47) = -.31, p = .035), anxiety scores (r(47) = -.37, p = .010), and responsiveness ratings (r(45) = .36, p = .016).
    These relations were further assessed using linear regression analyses with infants’ vocabulary size at 18 months as the dependent variable, and maternal depression and anxiety scores and responsiveness ratings as the predictor variables. The resulting model explained 15.8% of variance (F(3, 44) = 4.28, p < .05; R² = .158), but maternal responsiveness was the only significant predictor (B = .312, SE = 10.25, t = 2.070, p = .045) of infants’ vocabulary.
    These findings demonstrate that mothers’ depression and anxiety symptoms manifested during the first post-partum year have a long-lasting impact on their infants’ developing expressive language skills. Importantly, the relation between maternal emotional health and infants’ language development appears to be mediated by individual differences in maternal responsiveness in early mother-infant interactions. These findings will be discussed in relation to the effects of maternal emotional health on maternal responsiveness and its consequences for early language development.
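    For readers who want to see the shape of such an analysis, below is a hedged Python sketch of the regression reported above (vocabulary at 18 months predicted from averaged maternal depression, anxiety, and responsiveness scores). The file and column names are invented for illustration.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    dyads = pd.read_csv("dyads.csv")  # hypothetical: one row per mother-infant dyad

    # Ordinary least squares with the three averaged maternal measures as predictors
    model = smf.ols("vocab_18m ~ depression + anxiety + responsiveness", data=dyads).fit()

    print(model.rsquared)   # compare with the reported R² = .158
    print(model.summary())  # per-predictor B, SE, t, and p values
    ```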

    Lillian Masek, Kathy Hirsh-Pasek & Roberta Golinkoff (Temple University, USA / The University of Delaware, USA)

    Relations between quantity and quality of early input and child language development across socioeconomic status

    Much research demonstrates the importance of early language input for later language ability (e.g., Hart & Risley, 1995; Tomasello & Farrar, 1986). However, there is debate on the relative roles of quantity and quality of language input (Cartmill et al., 2013; Hart & Risley, 1995; Hirsh-Pasek et al., 2015; Rowe, 2012). One finding is that the quality of the parent-child Communication Foundation, characterized by bouts of shared attention infused with symbols and fluid back-and-forth exchanges, is a stronger predictor of language outcomes one year later than the quantity of speech heard from mothers when children were age 2 (Hirsh-Pasek et al., 2015). Here, we use the data from Hirsh-Pasek et al. (2015) to test for individual differences in the relationship between quantity of language input and quality of the Communication Foundation. Specifically, we look at how these variables interact to predict children’s later language.
    Sixty low-income participants, selected to represent a wide range of language abilities, were drawn from the NICHD Study of Early Child Care and Youth Development. Quantity of language input (maternal words per minute; WPM) and quality of the Communication Foundation (CF; Adamson et al., 2016) were assessed at 24 months during the 3-box task, a semi-naturalistic interaction in which mother and child played for 15 minutes with a book and two toys. Children’s 36-month expressive language was assessed using the Reynell Developmental Language Scales (Reynell, 1990).
    Previous findings on the low-income sample showed that WPM and CF independently relate to children’s language outcomes, though when examined simultaneously, only CF remains a significant predictor (Hirsh-Pasek et al., 2015). Here, we examined the interaction between WPM and CF. For the low-income sample, not only was the interaction significant (B=-.194, t(56)=-2.142, p=.037), but simple slopes testing showed that WPM was only a significant predictor of later language for children who experienced a poor quality communication foundation (B=.37, t(56)=2.43, p=.020).
    These preliminary findings suggest that the amount of language children hear may be important, but only for some children. Children who experience lower-quality language interaction may require more language exposure to effectively learn. In contrast, children who experience higher-quality interaction may not need to hear as much language, since the quality of their communicative bouts is high. These apparently different pathways to successful language acquisition suggest that the debate about quantity versus quality may not capture the whole picture. Children are immersed in language environments that vary in both quantity and quality of talk. We must consider how different permutations of these factors work together. Future studies with high-income children will examine whether the same interaction is present in a group that may experience other language environments. By understanding what works best for whom and when, we can build interventions tailored to the individual and truly help all children learn language.
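    A minimal sketch of the moderation analysis described above, assuming hypothetical column names: regress 36-month language on centered WPM, CF, and their product, then probe the simple slope of WPM at low CF (one SD below the mean).

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    d = pd.read_csv("input_quality.csv")  # hypothetical per-child data
    d["wpm_c"] = d["wpm"] - d["wpm"].mean()  # center predictors before forming
    d["cf_c"] = d["cf"] - d["cf"].mean()     # the interaction term

    fit = smf.ols("reynell_36m ~ wpm_c * cf_c", data=d).fit()
    print(fit.summary())  # the wpm_c:cf_c term is the quantity-by-quality interaction

    # Simple slope of WPM at CF one SD below the mean: b_wpm + b_interaction * (-SD)
    sd_cf = d["cf_c"].std()
    slope_low_cf = fit.params["wpm_c"] + fit.params["wpm_c:cf_c"] * (-sd_cf)
    print("WPM slope at low CF:", slope_low_cf)
    ```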

    Marisa Casillas, Penelope Brown & Stephen C. Levinson (Max Planck Institute for Psycholinguistics, The Netherlands)

    How much speech do Tseltal Mayan children hear? Daylong averages and interactional bursts

    We need quantitative descriptions of cross-cultural and cross-linguistic variation in children’s speech environments to formulate well-grounded theories about language-learning mechanisms (Lieven, 1994; Nielsen et al., 2017). By studying language development in non-WEIRD communities we can more easily study factors rare in our own modern societies, e.g., large, multi-generational households, low literacy in the language being learned, minimal adult control on children’s activities, etc. In this vein, Mayan caregivers have gained some prominence after decades of careful ethnographic work across several communities (e.g., Rogoff et al., 1993; de Leon, 1998; Gaskins, 2006) documented a consistent pattern of infrequent child-directed speech. For example, Shneidman and Goldin-Meadow (2012) found that Yucatec Mayan children hear fewer utterances per hour and many fewer directed utterances per hour compared to US children. Given such infrequent child-directed speech, it seems that young Mayan children might be adept at learning from overheard speech, but Shneidman and Goldin-Meadow (2012) found that directed speech—not overheard speech—predicted those children’s vocabularies. How then do Mayan children become competent adult speakers?
    The current study investigates the early language experience of 10 Tseltal Mayan children growing up in a traditional community in the highlands of Chiapas. Each child was recorded via a small, chest-worn audio recorder and miniature camera for 9–11 hours during a single day. The data come from a larger collection of such recordings of 56 children aged 0–50 months in 43 households (Casillas et al., 2017). The recordings analyzed here were selected as part of a larger comparative project, maximizing child age (0–3;6), gender, and maternal education variance in the sample. From each recording, 1 hour has been transcribed and annotated, distributed over twenty 1–5-minute clips (Table 1). Annotated clips included speech from the target child plus an average of 2.8 other speakers, 1.1 of whom were children (range: 0–10). Multiple speakers led to overlapped speech for 7.8% of annotated time (by-child range: 1.8%–13.3%). The proportion of speech directed to children increased only moderately with age (Figure 1), in both randomly sampled and high-activity audio clips, with change more attributable to a decrease in all speech (“XDS”) than an increase in speech directed exclusively to the target child (“TCDS”; see also Bergelson, Casillas, et al., 2018). Speech addressed to the target child was much more frequent in high-activity clips (mean: 10.9 min/hr; median: 8.6 min/hr) than randomly selected ones (mean: 3.4 min/hr; median: 1.4 min/hr), with randomly selected rates comparable to findings from similar populations (Cristia et al., 2017). Finally, most of the speech came from adults, not children (Figure 2), in a pattern markedly different from that reported by Shneidman and Goldin-Meadow (2012). These findings suggest that there may be variation among Mayan communities in how young children are spoken to. Alternately, daylong recording techniques may reveal patterns different from short home visits (see also Tamis-LeMonda et al., 2017). These data will next be integrated with time-of-day information to suss out daily-cycle patterns before continuing with formal analyses.
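    To make the rate measures concrete, here is an illustrative Python sketch (not the authors’ pipeline) of how utterance-level annotations could be aggregated into minutes per hour of target-child-directed speech; the input format and column names are assumptions.

    ```python
    import pandas as pd

    # Hypothetical annotation table: one row per utterance, with the clip it
    # came from, the clip's duration in minutes, and the coded addressee.
    utt = pd.read_csv("annotations.csv")

    tcds = utt[utt["addressee"] == "target_child"]
    per_clip = (tcds.groupby(["child", "clip", "clip_dur_min"])["utt_dur_s"]
                    .sum().reset_index())
    # NB: clips with zero TCDS drop out here; a full analysis would re-add them as 0.

    # Minutes of TCDS per hour of recording, clip by clip
    per_clip["tcds_min_per_hr"] = (per_clip["utt_dur_s"] / 60) / (per_clip["clip_dur_min"] / 60)

    # Per-child mean and median, comparable to the rates reported above
    print(per_clip.groupby("child")["tcds_min_per_hr"].agg(["mean", "median"]))
    ```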

    Talk Session 2: IDS & Phonology – Thu 13 June 13:30 – 15:00

    Chair: Catherine Best

    Gesa Schaadt, Angela D. Friederici, Hellmuth Obrig & Claudia Männel (Leipzig University & Max Planck Institute for Human Cognitive and Brain Sciences, Germany)

    Association of speech perception and production in 2-month-olds: Relating event-related-potential and vocal reactivity measures

    Perceptual and expressive phonological abilities are key features for success in language development, and a functional connection between speech perception and production has been postulated. In line with this assumption, it has been shown that babbling – a form of vocalization – shapes speech processing in 10-month-olds [1]. Precursors of babbling (e.g., imitation of mouth movements, vocalization) already develop around the second month of life [2], but the association between speech perception and production (i.e., vocalization) has not been investigated during this early developmental period.
    In the present study, we investigated speech perception and production in 2-month-olds. For speech perception, the Mismatch Response (MMR) was measured in a multi-feature paradigm [3] with four deviant stimulus categories, namely consonant (/ga/), vowel (/bu/), pitch (F0; /ba+/), and vowel length changes (/ba:/), which were compared to the standard stimulus /ba/. For speech production, we used the subscale vocal reactivity of the parental Infant Behavior Questionnaire, defined as the amount of vocalization infants exhibit in daily activities [4]. Our data (N=25) reveal significant positive MMRs for all deviant categories, as typically observed in infants at that age. Importantly, we found a negative correlation (r = –.38, p < .03) between the MMR to vowel changes and vocal reactivity, but no correlation between the MMR to the other deviant stimulus categories and vocal reactivity. Thus, a more negative MMR to vowel changes was associated with a higher amount of infant vocalization. That the MMR to vowel changes, but not, for example, to consonant changes, was associated with vocal reactivity might be explained by findings showing that the perception and production of vowels emerge earlier in development than the perception and production of consonants [5]. Our results suggest that speech perception and production shape each other already at an early age. Moreover, the transition from a positive to a negative polarity of the MMR, with negative MMRs indicating more mature responses [6], might be influenced by infants’ expressive abilities.

    References:
    [1] DePaolis et al. (2013). Infant Behav Dev, 36, 642–649.
    [2] Henning et al. (2005). Infant Behav Dev, 28, 519–536.
    [3] Näätänen et al. (2004). Clin Neurophysiol, 115, 140–144.
    [4] Garstein & Rothbart (2003). Infant Behav Dev, 26, 64–86.
    [5] Selby et al. (2000). Clin Linguist Phon, 14, 255–265.
    [6] He et al. (2009). Eur J Neurosci, 29, 861–867.
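    As a general illustration (not the authors’ pipeline), the following MNE-Python sketch shows how a mismatch response is typically derived: epoch the EEG around standards and deviants, average, and form the deviant-minus-standard difference wave. The file name and trigger codes are assumptions.

    ```python
    import mne

    raw = mne.io.read_raw_fif("infant_eeg_raw.fif", preload=True)  # hypothetical file
    raw.filter(0.5, 30.0)  # typical band-pass for infant ERP work

    events = mne.find_events(raw)
    event_id = {"standard": 1, "deviant_vowel": 2}  # assumed trigger codes
    epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.6,
                        baseline=(None, 0), preload=True)

    # MMR = deviant minus standard; its amplitude and polarity are what a
    # correlation with vocal reactivity would be computed on.
    mmr = mne.combine_evoked([epochs["deviant_vowel"].average(),
                              epochs["standard"].average()], weights=[1, -1])
    mmr.plot()
    ```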

    Irena Lovcevic, Pelle Söderström, Marina Kalashnikova, Yatin Mahajan & Denis Burnham (The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia / Basque Center on Cognition, Brain and Language, Spain)

    Neural processing of hyper- and hypo-articulated vowels in Infant-Directed Speech

    When addressing infants, adults use a speech register known as infant-directed speech (IDS). Compared to adult-directed speech (ADS), IDS has a number of distinctive acoustic and linguistic features. Vowel hyperarticulation, the expansion of the acoustic space between the corner vowels /i,u,a/, is one feature specifically proposed to facilitate language acquisition processes. Interestingly, the presence of vowel hyperarticulation in IDS appears to be dependent on the infant’s communicative and linguistic needs. Mothers do not hyperarticulate vowels in IDS to infants with hearing loss (Lam & Kitamura, 2010) or infants at-risk for dyslexia (Kalashnikova et al., 2018), indicating that infants’ ability to hear and process speech can influence speakers’ IDS to them. Given the important role of vowel hyperarticulation in early language acquisition, it is of interest to investigate the effects of IDS with hypo-articulated vowels on infants’ early linguistic processing.
    This study investigated whether there is a neurophysiological difference in the processing of hyper- and hypo-articulated vowels in IDS by comparing the electroencephalographic (EEG) signatures of typical IDS with hyperarticulated vowels (hyper-IDS), IDS that lacks vowel hyperarticulation (hypo-IDS), and ADS in 9-month-old infants (N = 12). Event-related potentials (ERPs) were recorded while infants listened to familiar words in hyper-IDS, hypo-IDS, and ADS registers. If hyper-IDS facilitates infants’ early lexical processing, we expected it to elicit a different pattern in brain potentials compared to hypo-IDS and ADS.
    Regarding electrophysiological measures, mean amplitudes were calculated in the 250-500 ms time window measured from word onset, since ERP amplitudes in this window have been proposed to reflect increased semantic processing (Kidd et al., 2018; Zangl & Mills, 2007). A Speech (hyper-IDS, hypo-IDS, ADS) x Antpost (frontal, central, parietal, occipital) x Laterality (left, right) ANOVA yielded a main effect of Speech (F(2, 22) = 7.054, p = .004, ηp² = .391) and a Speech x Antpost x Laterality interaction (F(6, 66) = 2.771, p = .018, ηp² = .201), meaning that the factor Speech interacted with topographical factors. Hypo-IDS was found to elicit a broadly distributed negativity compared to hyper-IDS and ADS, which gave rise to more positive amplitudes in the same time window. Taken together, these results suggest a decrease in lexical processing for hypo-IDS.
    These findings indicate different brain responses to IDS with and without vowel hyperarticulation in early language processing, supporting the assumption that vowel hyperarticulation in IDS influences infants’ early linguistic processing. Given that early brain potentials occurring at 200-500ms have previously been suggested to reflect increased lexical or semantic processing, these findings indicate that infants are sensitive to the specific acoustic qualities of IDS, which facilitate semantic processing (Junge et al., 2014). Importantly, the current findings demonstrate that IDS with hyperarticulated vowels provides infants with a rich linguistic signal. When hyperarticulation is absent, lexical processing appears to be impeded. This is an especially important insight with regard to infants with hearing loss who do not have access to this feature in IDS, but who may instead rely more on other perceptual advantages offered by this register.
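    The following Python sketch illustrates the shape of such a repeated-measures ANOVA on mean amplitudes in the 250-500 ms window, with partial eta squared recovered from F and its degrees of freedom; the long-format column names are assumptions, and AnovaRM requires fully balanced data.

    ```python
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format table: one row per infant x speech register x
    # anterior-posterior region x hemisphere, with the mean 250-500 ms amplitude.
    amps = pd.read_csv("erp_amplitudes.csv")

    res = AnovaRM(amps, depvar="amp", subject="infant",
                  within=["speech", "antpost", "laterality"]).fit()
    tbl = res.anova_table

    # partial eta squared = (F * df1) / (F * df1 + df2)
    tbl["pes"] = (tbl["F Value"] * tbl["Num DF"]) / (
        tbl["F Value"] * tbl["Num DF"] + tbl["Den DF"])
    print(tbl)
    ```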

    Antonia Götz, Anna Krasotkina, Gudrun Schwarzer & Barbara Höhle (University of Potsdam & University of Giessen, Germany)

    Neural correlates of non-native lexical tone and vowel discrimination in 9-month-old German infants and adults: An ERP study

    Previous behavioral experiments have shown that perceptual sensitivity to lexical tones declines in infants learning non-tone languages (Mattock et al., 2008; Yeung et al., 2013; but see Chen & Kager, 2016; Shi et al., 2017). Besides the decrease of the initial sensitivity, some studies have shown a U-shaped development, indicated by a regain of discrimination abilities at 18 months (Liu & Kager, 2014; Götz et al., 2018). The purpose of this study is to examine the neurophysiological correlates of this perceptual reorganization process. This not only complements behavioral experiments but also contributes to the discussion of whether neural discrimination of speech can be maintained in the absence of behavioral discrimination, as suggested by Rivera-Gaxiola et al. (2005). To this end, we conducted two ERP experiments with 9-month-old German-learning infants (n = 18, data collection is ongoing) and German adults (n = 29) using a double oddball paradigm. Our hypothesis was that the strength of a neural mismatch response (MMR) to a non-native lexical tone contrast from Cantonese would decrease, while the MMR evoked by a native-like vowel contrast was assumed to become stronger in amplitude (Conboy & Kuhl, 2011). We used the Cantonese syllables /se/ and /si/ produced by a native speaker of Cantonese with either the high-rising or the mid-level tone, resulting in four different syllables: /se25/, /se33/, /si25/, and /si33/. Infants were tested on their discrimination between the frequent standard /se33/ (mid-level tone) and the high-rising /se25/ deviant, a contrast for which several behavioral studies have shown no discrimination abilities at 9 months (Yeung et al., 2013; Götz et al., 2018). For the native-like deviant we used a vowel contrast, /se33/ vs. /si33/, with the same tonal properties as the standard but differing in vowel quality. German monolingual adults were tested in all possible combinations of the frequent standard (e.g. /si33/) paired with a tone deviant, where the vowel remained the same but the syllable changed in lexical tone (e.g. /si25/), and with a vowel deviant, where the tone remained the same but the vowel quality changed (e.g. /se33/).
    So far, our results show that in adults the vowel as well as the tone deviant elicited a robust MMR. In contrast, the tone contrast elicited a positive MMR in infants, whereas overall no effect of the vowel contrast was observed. The mismatch responses for tones in infants as well as in adults indicate that both groups show some residual neural sensitivity to this non-native contrast. A switch in the polarity of infants’ compared to adults’ mismatch response has also been observed in previous studies (cf. Morr et al., 2002; Trainor et al., 2003). This may also provide an explanation for the missing effect of infants’ responses to the vowel change: some infants may already show a mature MMR – marked by a negativity – while others may show a still-immature positive response, and the two cancel each other out. More data that allow for the identification of individual patterns are needed to evaluate this explanation.

    Talk Session 3: Bilingualism – Fri 14 June 10:30 – 12:30

    Chair: Thierry Nazzi

    Konstantina Zacharaki & Nuria Sebastian-Galles (Universitat Pompeu Fabra - Center for Brain and Cognition, Spain)

    Language discrimination abilities of 4- to 5-month-old monolingual and bilingual infants

    Previous research indicates that neonates can discriminate languages that belong to different rhythmic classes. Discrimination within a rhythmic class starts to take place around the fourth to fifth month of life (Nazzi, Jusczyk, & Johnson, 2000). What type of information infants use to perform such discrimination is yet to be fully determined. Here we investigate the hypothesis that although infants have not yet established vowel categories at such a young age, they may already have some rough knowledge about the distributional properties of their vowel system and may use such distributions to discriminate languages.
    We investigated the discrimination capacities of 4.5-month-old infants learning Catalan and/or Spanish when listening to sentences in two dialects of Catalan (Southern and Central) in Experiment 1, and when listening to sentences in Southern Catalan and Spanish in Experiment 2. Spanish and (Central-Standard) Catalan have quite different vowel distributions due to the existence of vowel reduction in Central Catalan (resulting in very few mid vowels), but not in Spanish (Figure 1). Interestingly, Southern Catalan does not have vowel reduction, yielding a distribution of vowels very similar to the Spanish one. We used the same procedure as in Bosch & Sebastian-Galles (2001). In the first study, 43 4.5-month-old infants (n = 22 monolinguals, n = 21 bilinguals) having Central Catalan as their dominant language were tested. In the second study, 41 4.5-month-old infants (n = 21 monolinguals, n = 20 bilinguals) having Spanish as their dominant language were tested. The results of the two experiments are shown in Figures 2 and 3. Infants were able to discriminate both the two dialects of Catalan (F(1,41) = 15.204, p < .001) and Southern Catalan from Spanish (F(1,39) = 7.468, p = .009). The results indicate that infants may be using other phonological cues, not only vowel distribution and rhythm.
    We are currently testing whether infants need segmental cues to discriminate the aforementioned languages/dialects or whether they can do so by relying on suprasegmental cues alone. We are investigating this using the same stimuli as in the previous experiments, but low-pass filtered (400 Hz) so that information about vowels is removed. Preliminary results show that when only prosodic information is available, the two dialects of Catalan cannot be discriminated, but Southern Catalan and Spanish can. The results indicate that infants are sensitive to both vocalic distribution and broad prosodic information, and add to the scarce literature describing language learning in the first six months of life.
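    To illustrate the stimulus manipulation, here is a small Python sketch of low-pass filtering a sentence recording at 400 Hz, which removes most segmental (vowel-quality) information while sparing intonation; this is a generic recipe, not necessarily the authors’ exact processing chain, and the file names are placeholders.

    ```python
    import soundfile as sf
    from scipy.signal import butter, sosfiltfilt

    audio, fs = sf.read("sentence.wav")  # hypothetical stimulus file

    # 8th-order Butterworth low-pass at 400 Hz, applied forward-backward
    # (zero phase) so the temporal/prosodic structure is not shifted.
    sos = butter(8, 400, btype="lowpass", fs=fs, output="sos")
    sf.write("sentence_lp400.wav", sosfiltfilt(sos, audio, axis=0), fs)
    ```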

    Liquan Liu, Varghese Peter & Gabrielle Weidemann (Western Sydney University & Macquarie University, Australia)

    Bilingual infants exhibited neural sensitivity to non-native linguistic pitch after perceptual narrowing

    Canonical findings on lexical tone perception report decreased sensitivity for non-tone-language learning infants between 6 and 9 months, an approximate time window for perceptual narrowing (e.g., Mattock & Burnham, 2006). However, recent behavioural evidence suggests rebounded sensitivity for non-tone-language learning infants in the second year after birth (Liu & Kager, 2014), and the time point of such recovery appears earlier in simultaneous bilingual infants (Liu & Kager, 2017). As neural measures can be recorded without active participation, they are well suited to testing infants. The current study explored infants’ neural signature of lexical tone perception, and the role of infants’ linguistic experiences, along the tonal perceptual narrowing trajectory using the event-related component mismatch negativity (MMN) / mismatch response (MMR).
    Forty full-term, typically developing Australian infants with no prior tone language experience underwent a passive oddball EEG paradigm. Infants were evenly split across two age groups (5-6 vs. 11-12 months) and two language backgrounds (monolingual vs. bilingual). A 200 ms contracted Mandarin tonal contrast derived from previous studies (Liu & Kager, 2014; Figure 1, contrast B, T4 as standard and T1 as deviant) served as the stimuli. The standard/deviant probability ratio was 80%/20%, with 1000 stimuli in total. The stimuli were presented with an inter-stimulus interval of 500 ms at a constant intensity of 70 dB SPL. The deviant stimuli from the oddball block were presented 200 times (without the intervening standards) in a separate block as control stimuli. Difference waves were computed by subtracting the event-related potential (ERP) for the control from that for the deviant.
    The deviant-minus-control difference wave (Figure 2) showed a positive peak between 100-400 ms for the monolingual 5-6-month-olds and the bilingual groups at both ages, but not for monolinguals at 11-12 months. This effect was confirmed by cluster-based permutation statistics (Table 1). Since the latency of the effect matches the expected latencies of the MMR response, these responses were considered MMRs. A one-way analysis of variance comparing MMR amplitudes for the bilingual group (mean amplitude in a 40 ms window around the peak) across ages did not show any significant main effect of age (F(1,18) = 0.01, p = .90), suggesting that MMR amplitudes were similar for the 5-6-month-olds (M = 4.13, SE = 2.02) and 11-12-month-olds (M = 3.86, SE = 3.53). However, note that the time range of the MMR was earlier for the 11-12-month-olds compared to the 5-6-month-olds (Table 1).
    Monolingual and bilingual infants exhibited MMRs to lexical tones at the onset of perceptual narrowing at 5-6 months. Results indicate early neural discrimination of lexical tones even when the feature is absent from infants’ native phonemic inventory, although such sensitivity was immature (Peter, Kalashnikova, Santos & Burnham, 2016). Furthermore, while 11-12-month-old monolingual infants lose sensitivity at the perceptual narrowing offset, bilingual infants displayed immature neural responses. These outcomes add to our understanding of the enhanced neural plasticity for language among bilingual infants (Bosch & Sebastián-Gallés, 1997; Petitto et al., 2012), which may interact with their increased linguistic sensitivity, cognitive abilities, and possibly general auditory perceptibility.
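    For concreteness, here is a hedged MNE-Python sketch of a cluster-based permutation test on deviant-minus-control difference waves like those behind Table 1; the data array, its shape, and the file name are assumptions.

    ```python
    import numpy as np
    from mne.stats import permutation_cluster_1samp_test

    # Hypothetical array of difference waves: one row per infant, one column
    # per time sample, for a single channel (or a channel average).
    diff_waves = np.load("diff_waves.npy")

    t_obs, clusters, cluster_pvals, _ = permutation_cluster_1samp_test(
        diff_waves, n_permutations=1000, tail=0)

    for cluster, p in zip(clusters, cluster_pvals):
        if p < 0.05:
            print("significant cluster at samples", cluster, "p =", p)
    ```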

    Evelyne Mercure, Isabel Quiroz, Laura Goldberg, Harriet Bowden-Howl, Kimberley Coulson, Teodora Gliga, Roberto Filippi, Peter Bright, Mark H. Johnson & Mairead MacSweeney (UCL / Birkbeck, University of London / Anglia Ruskin University / University of Cambridge and Birkbeck, University of London, United Kingdom)

    Impact of language experience on attention to faces in infancy: Evidence from unimodal and bimodal bilingual infants

    Faces capture and maintain infants’ attention more than other visual stimuli. Early language experience may influence attention to faces in infancy by modifying the significance of the facial cues in social communication. When learning more than one language, increased attention to faces could represent an adaptive strategy, which allows access to and integration of visual communicative cues such as lip movements and facial expressions. It was hypothesized that infants learning two spoken languages (unimodal bilinguals) and hearing infants of Deaf mothers learning British Sign Language and spoken English (bimodal bilinguals) would show enhanced attention to faces compared to monolinguals. The comparison between unimodal and bimodal bilinguals in the present study allowed differentiation of the effects of learning two languages from the effects of increased visual communication in hearing infants of Deaf mothers. Data are presented for two independent samples of infants: Sample 1 included 49 infants between 7 and 10 months (26 monolinguals and 23 unimodal bilinguals), and Sample 2 included 87 infants between 4 and 8 months (32 monolinguals, 25 unimodal bilinguals, and 30 bimodal bilingual infants with a Deaf mother). Eye-tracking was used to analyse infants’ visual scanning of complex arrays including a face and four other stimulus categories. Infants from 4 to 10 months (all groups combined) directed their attention to faces faster than to non-face stimuli (i.e., attention capture), and directed more fixations to, and looked longer at, faces than non-face stimuli (i.e., attention maintenance). Unimodal bilinguals were generally faster at orientating to faces and directed more fixations to faces compared to monolingual infants of the same age. This suggests that unimodal bilinguals have increased attention capture and attention maintenance by face stimuli. Contrary to predictions, bimodal bilinguals did not differ from monolinguals in attention capture and maintenance by face stimuli. This study demonstrates an impact of language experience on the early development of attention to faces in infancy. The increased complexity of learning two spoken languages was associated with increased attention capture and maintenance for still faces. These visual strategies may be adaptive to maximize the use of potential visual cues of articulation to allow the discrimination of two spoken languages. Bimodal bilingualism and the experience of communication in the visual modality with a Deaf mother do not appear to impact attention to unfamiliar still faces. Our data suggest that there are complex interactions in the development of face processing and language learning in the context of social communication in infancy.

    Mélanie Havy & Pascal Zesiger (University of Geneva, Switzerland)

    Bridging ears and eyes in the early lexicon: Evidence in monolingual and bilingual children

    From the very first moments of their lives, infants preferentially orient to talking faces and selectively attend to the orofacial movements of their social partners. Infants use visible articulatory movements to resolve some perceptual uncertainties and to assist word learning. Yet, substantial evidence suggests that these capacities develop differently as a function of the ambient language environment. In this field, a great deal of research has revealed a bilingual edge in attending to the visible aspects of speech. Studies have found that infants raised in bilingual households reliably start to attend to the redundant visible speech cues inherent in a talker’s mouth at an earlier age than their monolingual counterparts (Pons et al., 2015) and are more proficient at discerning languages visually over the first year of life (Sebastián-Gallés et al., 2012). Yet, it is unknown whether and how these early differences influence the way infants appreciate visible speech information as they learn their first words. The current study explores how the auditory and visible correlates of speech become part of early lexical representations in French-learning monolingual and bilingual children aged 30 months.
    Using a crossmodal word-learning design, we tested monolingual and simultaneous bilingual children in different word learning conditions (Figure 1). During a learning phase, children were introduced to two pseudo-words in association with two distinct objects (Object A: ‘byp’, Object B: ‘var’). One group experienced the words in the auditory modality (acoustic form of the word with no accompanying face), the other group experienced the words in the visual modality (seeing a silent talking face). At test, the two previously seen objects were displayed side by side in silence during a pre-naming period. Then they disappeared and one of them was labeled (‘Look at the ‘var’!’). After labeling, both objects reappeared during a post-naming period. In the ‘same modality’ test condition, labeling occurred in the same modality as the one used at learning: i.e., auditory after auditory learning, visual after visual learning. In the ‘cross-modality’ test condition, labeling occurred in the other modality to the one used at learning: i.e., visual after auditory learning, auditory after visual learning.
    The results indicate that, like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality: children show an increase in looking preference for the target object after labeling in both the auditory (monolinguals: p < .01; bilinguals: p = .05) and the visual (monolinguals: p < .01; bilinguals: p = .03) ‘same modality’ conditions. Of interest, both monolingual and bilingual children show cross-modal recognition of words upon auditory learning (monolinguals: p < .01; bilinguals: p = .02), but only bilingual children show cross-modal recognition of words upon visual learning (monolinguals: p = .69; bilinguals: p = .05).
    Altogether, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross-modal representation. This pattern is discussed in relation to the broader literature on audio-visual speech perception and lexical and cognitive development.

    Talk Session 4: Various – Fri 14 June 13:30 – 15:00

    Chair: Barbara Höhle

    Charlotte Moore & Elika Bergelson (Duke University, USA)

    18-month-olds’ representations of vowels in regular & irregular verbs: A mispronunciation study

    Around 12mo., infants have well-specified phonetic representations of familiar nouns. They detect consonant and vowel mispronunciations during comprehension (Swingley & Aslin, 2000; Mani & Plunkett, 2007, 2008; Bergelson & Swingley, 2017); sensitivity by 18mo. is even more robust (Swingley & Aslin, 2000; White & Morgan, 2008). However, we know little about early phonetic representations of verbs.
    Verbs are cross-linguistically later-learned than concrete nouns (Gentner, 1982). Indeed, 3-year-olds still struggle with novel verbs in the lab; tense-marking acquisition extends into middle childhood (Maguire, Hirsh-Pasek, & Golinkoff, 2005; Herriot, 1969). Still, infants begin producing verbs ~18mo. How robust are early phonological representations of verbs, and does verb regularity play a role?
    Approximately half of the 50 most frequent English verbs in infant-directed speech are irregular (data from Brent & Siskind, 2001), marking past tense with vowel changes (e.g. drink~drank). This variability may challenge young learners, who lack knowledge of inflectional nuances (Wood, Kouider & Carey, 2009). Thus, English-learning infants contend with vowel changes in roughly half the verb types they hear. We examine the role this plays in verb comprehension at 16-20 months. Notably, CDI data from Wordbank (Frank et al, 2016) suggest that regular and irregular verbs are understood (76% vs. 83%) and produced (17% vs. 16%) equivalently in this age-range (see Table).
    We tested 32 16-to-20-month-olds (M=17m, SD=38 days) in a looking-while-listening study. Infants watched yoked video-pairs of an actor performing 8 familiar actions (each video-pair featured the same actor/setting and equivalent props). Each video-pair consisted of a regular and an irregular verb, matched for frequency (Brent & Siskind, 2001). In each trial (n=32), infants heard a sentence describing one video, with the action verb unmarked (e.g. ‘she’s gonna drink’). In 50% of trials, the target verb’s vowel was mispronounced (see Table). This resulted in 4 within-subject conditions: verb-regularity (regular, irregular) crossed with pronunciation (correctly-pronounced, mispronounced). We calculated the proportion of target looking 367-5000ms after target-word onset (target/(target+distractor)), corrected for baseline preference.
    A two-way ANOVA showed a main effect of verb-regularity (F(1,27)=25, p<0.001): infants looked significantly more at target videos for regular verb trials than for irregular verb trials, regardless of pronunciation (see Figure). Target-looking on irregular-verb trials was not above chance in either the correctly-pronounced or the mispronounced condition (t < 1.5, p > 0.05). Thus, in contrast to the equivalent knowledge suggested by CDI data, children in our study failed to evince comprehension of the irregular verbs. They further showed no mispronunciation effect within the regular verbs (which they understood at modest but above-chance rates). A follow-up with 24–28-month-olds is ongoing.
    These results suggest that while infants may understand this set of regular verbs around 18mo., they are not yet sure about the vowels within them. The total absence of mispronunciation effects in verbs contrasts starkly with the ~10% reduction in target-looking found even six months earlier with nouns (e.g. Mani & Plunkett, 2010). Further, these results are compatible with the possibility that verbs’ consonants may initially be prioritized across this lexical class. Intriguingly, infants may initially encode common verbs’ vowels as ‘unreliable’ across the board.
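    The looking measure lends itself to a compact sketch. Below, baseline-corrected proportion of target looking is computed from hypothetical sample-level eye-tracking data in Python; the windows follow the description above, but the file format and column names are assumptions.

    ```python
    import pandas as pd

    looks = pd.read_csv("lwl_samples.csv")  # one row per gaze sample: trial, time_ms, aoi

    def prop_target(frame):
        """Proportion of target looking: target / (target + distractor)."""
        on_target = (frame["aoi"] == "target").sum()
        on_distractor = (frame["aoi"] == "distractor").sum()
        return on_target / (on_target + on_distractor)

    test = looks[looks["time_ms"].between(367, 5000)]  # post-onset analysis window
    baseline = looks[looks["time_ms"] < 0]             # pre-naming baseline

    corrected = (test.groupby("trial").apply(prop_target)
                 - baseline.groupby("trial").apply(prop_target))
    print(corrected.mean())  # > 0 indicates recognition above baseline preference
    ```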

    Helen Buckler & Elizabeth Johnson (University of Toronto, Canada / University of Nottingham, United Kingdom)

    It’s Hey Jude, not Hey Jade: 6-month-old Canadian English learners’ detection of own-name mispronunciations

    The division of labour between vowels and consonants in adult and toddler speech processing has been well documented (Nespor, Pena, & Mehler, 2003); when recognising and learning words, consonants are more important than vowels. The prevalence of this asymmetry cross-linguistically has led to the proposal that it may be innate; however, infant studies have demonstrated that it is emergent (see Nazzi & Cutler, 2018 for review). Moreover, the rate of development varies depending on the language being acquired. Here, we test whether the developmental pattern differs according to the variety of the language that the infant is exposed to.
    Related studies with French- and British English-learning infants have demonstrated differences in the perceptual development of consonants and vowels at 5 months of age. Using the Headturn Preference Procedure to test sensitivity to mispronunciations of vowels and consonants in a familiar word – their own name – French-learning infants are sensitive to vowel, but not consonant, mispronunciations (Bouchon, Floccia, Fux, Adda-Decker, & Nazzi, 2015), whereas British English-learning infants are not sensitive to either (Delle Luche, Floccia, Granjon, & Nazzi, 2016). This indicates an early vowel advantage in French infants that British infants have not yet acquired. The consonant advantage emerges by 8 months in French infants (Nishibayashi & Nazzi, 2016), but not until 30 months of age in British infants (Nazzi, Floccia, Moquet, & Butler, 2009, though see Mani & Plunkett, 2007).
    But is this a product of acquiring English, or acquiring British English? Studies indicate that American English-learning infants are able to detect consonant mispronunciations at 9 months old (Jusczyk, Goodman, & Baumann, 1999), suggesting an advantage over their British peers in the perceptual development of vowels and consonants. We tested this hypothesis using the methodology of Bouchon et al. (2015) and Delle Luche et al. (2016), predicting that 6-month-old infants exposed to Canadian English would detect vowel, but not consonant, mispronunciations in their name.
    Monolingual Canadian English-learning 6-month-olds were presented with correct pronunciations and mispronunciations of their own name in the Headturn Preference Procedure. Mispronunciations were either of a vowel (N=24, e.g. Sam vs. *Sim) or a consonant (N=24, e.g. Noah vs. *Toah). Infants in the control condition heard correct and mispronounced versions of a name that was unfamiliar to them (e.g. Emily heard Sam vs. *Sim). As predicted, infants detected the vowel mispronunciation but not the consonant mispronunciation. At 6 months, Canadian English-learning infants display a sensitivity to vowel mispronunciations in familiar words that British English infants are not (yet) sensitive to.
    Results from this study demonstrate cross-variety variation in the emergence of a division of labour between vowels and consonants for language processing. More generally, they highlight potential differences in language development between infants exposed to North American and British English input during the early stages of language acquisition (cf. Floccia et al., 2016; Hamilton, Plunkett, & Schafer, 2000). They emphasize the need to consider the properties of the input to subpopulations of infants, and the influence that this variation may have on language development.

    Titia Benders & Ei Leen Lim (Macquarie University, Australia)

    How to report null results in infancy research: Journal abstracts reveal tension between statistical appropriateness and theoretical interest

    Human development during infancy is an exciting time, during which infants’ abilities appear to constantly undergo change. Characterising infant development requires identifying when abilities are absent versus present, and when their development is stable or changing. For example, the proposed developmental pathways for native-language perceptual attunement (Aslin & Pisoni, 1980) include two pathways of change and one pathway of stability. The two proposed scenarios of change to speech-sound discrimination are facilitation of native contrasts and loss of non-native contrasts. The scenario of stability is maintenance, either of the ability to discriminate native contrasts or of the lack of an ability to discriminate non-native contrasts.
    Unfortunately, testing theoretical predictions about the absence of abilities or stability in development is at odds with frequentist statistics, the most commonly used statistical approach across the behavioural sciences. A frequentist researcher who compares infants’ task performance across two ages and obtains a p-value below the common alpha threshold of .05 can conclude that infants’ abilities have changed. In contrast, a researcher who obtains a p-value above this alpha value cannot draw any conclusions. The present study investigated to what extent researchers of human infant development nevertheless interpret ‘null-results’ when summarizing their findings and conclusions.
    To this end, two coders coded the 305 abstracts of papers published in 2016 in the scientific developmental journals Infancy, Infant Behavior and Development, and Journal of Experimental Child Psychology. Each abstract was first coded for the presence of empirical human infant data (humans aged 0-24 months) that were compared across at least two groups or conditions – thus excluding single-group descriptive studies. Abstracts meeting these criteria were coded for any references to null results.
    Of the 305 abstracts coded, 124 met the inclusion criteria, and 59 (48% of the included abstracts) referred to a null result. Some abstracts stated the null result explicitly, whereas others implied it by stating that an effect was present in only one condition. When null results were explicitly mentioned, they were sometimes interpreted in statistically appropriate terms about absence of evidence in favour of an effect, but more often led to theoretically interesting claims about the absence of an ability or stability in development.
    These results not only underscore the high prevalence of null results in infancy research, but also that infancy researchers consider null results to be integral to their conclusions. We believe this reflects the theoretical importance of the absence of abilities and developmental stability in human infant development. Recommendations are made for rectifying the tension between statistical appropriateness and theoretical interest. These include suggestions for wording of findings and inference, the use of Bayesian statistics in combination with sufficiently large sample sizes to find strong evidence in favour of the null hypothesis, and a larger emphasis on effect size estimation.
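    As a small worked example of the headline figure, the sketch below computes the prevalence of null-result mentions among included abstracts with a Wilson confidence interval; the interval method is our choice, for illustration only.

    ```python
    from statsmodels.stats.proportion import proportion_confint

    null_mentions, included = 59, 124  # counts reported in the abstract

    low, high = proportion_confint(null_mentions, included, method="wilson")
    print(f"{null_mentions / included:.0%} of included abstracts "
          f"(95% CI [{low:.2f}, {high:.2f}]) referred to a null result")
    ```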

    Talk Session 5: Word Learning – Sat 15 June 10:30 – 12:30

    Chair: Elika Bergelson

    Natalia Kartushina & Julien Mayor (Department of Psychology, University of Oslo, Norway)

    Word recognition without word comprehension in 6-9-month-old infants

    The past five years have witnessed an explosion of claims that infants as young as six months of age understand the meaning of several words (1–3). To reach this conclusion, researchers presented infants with pairs of pictures from distinct semantic domains (e.g., food items vs. body parts) and observed gaze patterns consistent with the interpretation that infants know these words. Yet, longer looks to a given item might not reflect comprehension of the referent word per se, but infants’ reliance on extra-linguistic cues while disambiguating between two items.
    Recent studies have shown that infants use extra-linguistic cues when learning new words. In the Human Speechome project (4), daily audio and visual recordings of the utterances of a child during his first three years of life revealed that words heard consistently and repeatedly within a well-defined temporal, spatial or linguistic context are acquired earlier than words heard in broader contexts. Another study has shown that a noun’s concreteness and frequency are the two strongest factors in predicting the emergence of its comprehension (5). In line with this conclusion, a recent study has shown that six-month-old infants who heard concrete nouns more frequently in co-occurrence with the referred objects tended to show better word recognition in the lab, suggesting that frequent word-object co-occurrence facilitates word learning (1).
    The current study assessed the robustness of the ‘comprehension’ interpretation by examining whether infants use extra-linguistic cues to disambiguate between items in the absence of a firm semantic understanding of a word. Seventy 6-9-month-old Norwegian infants were tested on their comprehension of sixteen familiar words using an infant preferential looking paradigm. On each trial, infants saw two pictures of objects sampled from different semantic categories (e.g., cat-keys, belly-cookie) and heard a sentence prompting them to look at one of the images, e.g., “Look at the [name of the target]!”.
    Contrary to previous studies of English-learning infants, our results revealed no word comprehension in 6-7-month-old Norwegian infants, suggesting cross-linguistic differences in the onset of word comprehension. Older Norwegian infants (8-9 months old), however, showed robustly longer looks at the target than at the distractor, suggesting that they understood the familiar words used in the study. Yet, word-pair effect sizes were highly correlated with the frequency imbalance between the two words in a pair, such that frequency-matched pairs were not disambiguated by infants. Our results suggest that the frequency difference between two items is an important additional cue that infants use to disambiguate between items (6). More broadly, they suggest that the very onset of word comprehension is not based on infants’ knowledge of the referent word per se, but rather on infants’ use of a converging set of cues to identify the potential referent. Among them, frequency appears to be a robust (pre-semantic) cue that infants exploit to guide word disambiguation and, in turn, to build their word representations.
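    To make the frequency-imbalance analysis concrete, here is a minimal sketch with invented numbers (the per-pair effect sizes and word frequencies below are hypothetical, not the study’s data):

        import numpy as np
        from scipy import stats

        # Hypothetical per-pair effect sizes (Cohen's d for target looking) and
        # the two words' input frequencies within each pair.
        effect_d   = np.array([0.62, 0.48, 0.10, 0.05, 0.55, 0.12, 0.40, 0.02])
        freq_word1 = np.array([120., 300., 80., 95., 400., 60., 210., 70.])
        freq_word2 = np.array([ 30.,  90., 75., 90., 110., 55.,  70., 68.])

        imbalance = np.abs(np.log(freq_word1 / freq_word2))  # 0 = frequency-matched
        r, p = stats.pearsonr(imbalance, effect_d)
        print(f"r = {r:.2f}, p = {p:.3f}")
        # On the account above, r should be large and positive: frequency-matched
        # pairs (imbalance near 0) are not disambiguated.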

    Mengru Han, Nivja de Jong & René Kager (Utrecht University & Leiden University, The Netherlands)

    Does infant-directed speech facilitate word-to-object mapping for Dutch two-year-old children?

    Prototypical infant-directed speech (IDS) is characterized by a higher mean pitch, a larger pitch range, and a slower speaking rate compared to adult-directed speech (ADS) (Cristia, 2013). Despite the long-standing claim that prototypical IDS facilitates language acquisition compared to ADS, only two studies have shown that prototypical IDS facilitates online word-to-object mapping for American English children (Graf Estes & Hurley, 2013; Ma, Golinkoff, Houston, & Hirsh-Pasek, 2011). In these two studies, children succeeded in learning novel words from IDS but not from ADS. Given that the degree of prosodic exaggeration in IDS varies across languages (Fernald et al., 1989), it remains unknown whether these results generalize to other languages. The current study asked whether prototypical IDS facilitates Dutch 24-month-old children’s word-to-object mapping.
    Twenty-four Dutch children participated in a word learning experiment using the Intermodal Preferential Looking Paradigm (IPLP) (Hirsh-Pasek & Golinkoff, 1996). The experimental set-up was adapted from Ma et al. (2011). Children were presented with two novel word-to-object associations (e.g., “modi” and “dofa”) in the training phase, and in the testing phase we tested whether they looked longer or faster at the correct word-to-object mapping (Target) than at the incorrect one (Distractor). All children were tested in two conditions: ADS and IDS. The content of the audio stimuli was the same in the two conditions, while the IDS prosody had a higher pitch, a larger pitch range, and a slower speech rate than the ADS prosody.
    We performed two-way repeated measures ANOVAs with the within-subject factors Condition (ADS/IDS) and Target (Target/Distractor). The dependent measures were single longest look (ms), proportion of looking time (%), and latency (ms) (Table 1). Results for the single longest look revealed a significant main effect of Target (p < 0.001) and a significant interaction of Condition and Target (p = 0.028); however, the main effect of Condition was not significant (p = 0.305). These results suggest that children learn words successfully in both the ADS and IDS conditions, but that IDS nevertheless has a facilitative effect. For the proportion of looking time, there was only a significant main effect of Target (p = 0.002); neither the main effect of Condition (p = 0.668) nor the interaction of Condition and Target (p = 0.583) was significant. As such, children learned novel words in both conditions, but there was no evidence of a facilitative effect of IDS. For the measure of latency, there were no significant effects.
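    For illustration only, an ANOVA of this form can be sketched in Python on simulated data (the factor levels follow the abstract; all values and effect sizes are invented):

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(0)
        rows = []
        for child in range(24):
            for cond in ("ADS", "IDS"):
                for target in ("Target", "Distractor"):
                    base = 1200 if target == "Target" else 950     # hypothetical ms
                    boost = 120 if (cond == "IDS" and target == "Target") else 0
                    rows.append({"child": child, "Condition": cond, "Target": target,
                                 "longest_look": base + boost + rng.normal(0, 150)})
        df = pd.DataFrame(rows)

        # 2x2 within-subjects ANOVA: main effects plus Condition x Target interaction
        print(AnovaRM(df, depvar="longest_look", subject="child",
                      within=["Condition", "Target"]).fit())
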
    Together these results suggest that Dutch 24-month-old children could reliably learn novel words from both ADS and IDS. There was only a small facilitative effect of IDS on one measure (single longest look), but overall our results did not strongly support the claim that prototypical IDS facilitates word learning. As such, whether prototypical IDS is necessary or beneficial for word learning across languages is still an open issue.

    Jingtao Zhu & Anna Gavarró (Universitat Autònoma de Barcelona, Spain)

    Early word order acquisition: Evidence from a null argument language

    Although children are sensitive to the basic word order of their target language from the earliest observable stages, the evidence so far comes mainly from rigid word-order languages. Moreover, little is known about the acquisition of non-canonical word orders in languages that permit the omission of both subject and object arguments, such as Mandarin Chinese. In the present study, we assess the acquisition of both canonical (1) and non-canonical word order involving the object marker ba (2) by 17-month-old Mandarin-speaking infants, using the preferential looking paradigm with pseudo-verbs.
    Twenty-four typically-developing Mandarin infants with a mean age of 17.5 months (SD = 2.2) participated in our experiment. Children were shown two simultaneous videos while their eye fixation times were measured: one video depicted the target causative event, while the other showed the same event with the theta-roles reversed. Each pair of videos included four windows: (i) the presentation of the videos with a baseline sentence of the type Look! What is happening?, and (ii) three consecutive presentations of the experimental sentence, starting at 5, 10, and 15 seconds (S1, S2, S3 in Figure 1).
    The results in Table 1 show that in the SVO and SbaOV conditions, infants looked significantly longer at the target video than at the reversed video. However, no significant difference was found in the baseline window. One might venture that infants are simply adhering to an AGENT-first strategy (as postulated by Bever 1970 and, more recently, Lidz et al. 2001); however, that was not the case, since in the OSbaOV condition (see Table 2), they looked longer at the scene with the first NP as THEME during the first (t(23) = 3.35, p = .003, d = .65) and the second presentation (t(23) = 2.08, p = .049, d = .57), reflecting the target interpretation. Thus, our results cannot be explained by an AGENT-first parsing strategy. Moreover, despite the additional complexities of the OSbaOV structure (exemplified in (2b)), where the object has been topicalized in the left periphery and is coindexed with a resumptive clitic pronoun in preverbal position, children still identified the target event very quickly. This is in sharp contrast with a result from a previous experiment (also included in Table 1), which showed that children cannot parse an ungrammatical SOV structure (exemplified in (3)). This indicates that infants exposed to Mandarin are sensitive to functional elements (baP) from age 1;5 and can use this knowledge to parse the sentence, a result similar to that of Lassotta et al. (2014) for French Clitic Left Dislocation.
    These results are consistent with the idea that there is no delay in A’ movement in child grammar (Babyonyshev et al. 2001; Wexler, 2004), and that parameter setting takes place before production starts (Wexler, 1998). This finding holds for canonical as well as non-canonical word orders in languages with a pervasive presence of null arguments, showing that these null arguments do not impede comprehension, at least not at 17 months.

    (1) 小兔子 tuān 了 小鸭子。 (SVO)
    the rabbit PSEUDO-VERB PERF the duck

    (2a) 小兔子 把 小鸭子 tuān 了。 (SbaOV)
    the rabbit BA the duck PSEUDO-VERB PERF

    (2b) 小鸭子 小兔子 把 它 tuān 了。 (OSbaOV)
    the duck_i the rabbit BA it_i PSEUDO-VERB PERF
    ‘The rabbit V-ed the duck.’

    (3) *小兔子 小鸭子 nuí 了。
    the rabbit the duck PSEUDO-VERB PERF

    Lena Ackermann, Sarah Eiteljörge, Robert Hepach & Nivedita Mani (University of Göttingen & University of Leipzig, Germany)

    The effects of category interest on word learning in a gaze-contingent paradigm

    Young children are amazing word learners. While the overall pattern of word learning remains stable across children and languages, the vocabularies of individual children show considerable differences even at an early age (Mani & Ackermann, 2018). Historically, these individual differences have mostly been explained in terms of the quality and quantity of language input. Recent work places the child in a more active role and focuses on two factors: What the child already knows and what she is interested in. Borovsky et al. (2016) show that children can leverage their existing semantic knowledge to learn new words: The more words they already know from a category, the more readily they learn words for new category members.
    But why do children have differently sized categories to begin with? We propose that what children are interested in plays a crucial role in what they will learn. Previous research has shown that interest in a particular object enhances learning about this object (Begus, Gliga, & Southgate, 2014; Lucca & Wilbourn, 2016), but no research to date has looked at a possible beneficial influence of category interest on word learning.
    In a first study, we investigated whether 30-month-olds (n=46) learn words better if they refer to members of categories they are interested in. We presented participants with 16 familiar objects from four early-acquired categories and measured changes in pupil dilation as an index of interest. Next, we presented them with one new word-object-association from each category and tested their learning in a word recognition task. Results suggest that object interest and category interest independently contribute to word learning, highlighting the importance of individual interests in shaping early vocabularies.
    To combine these findings with recent approaches to active learning, we ran a follow-up study that included two gaze-contingent phases, letting the child herself steer the learning experience. 30-month-olds (n=42) were first presented with the same 16 familiar objects used in the first study. In the following gaze-contingent phase, we prompted participants to look at the familiar object they wanted to hear the name of, which triggered the object to be labelled. This allowed us to rank the four categories according to each child’s preference. For the learning phase, we also assessed the child’s preferences using gaze contingency. Word recognition was tested using the same paradigm as in the first study. Ongoing analyses will reveal how the child’s interests (as indexed by pupil dilation), preferences (as indexed by their active choice) and previous semantic knowledge (as indexed by a vocabulary questionnaire) interact in the acquisition of new word-object associations.
    Taking both studies together, we aim to shed light on the role of category interest in word learning and how it helps explain the vast variability we see in early lexicons across children.

    Talk Session 6: Entrainment – Sat 15 June 13:30 – 14:30

    Chair: Claudia Männel

    Maria Clemencia Ortiz Barajas, Ramón Guevara Erra & Judit Gervain (Laboratoire Psychologie de la Perception - Universite Paris Descartes / CNRS, France)

    The brain activity of newborns can track speech in different languages

    When humans listen to speech, their neural activity tracks the slow amplitude modulations of the signal over time (i.e., the speech envelope); this is known as speech envelope tracking (Kubanek et al., 2013). Studies have shown that a speech stream must contain a well-preserved envelope for humans to understand it (Drullman et al., 1994a; Drullman et al., 1994b; Ahissar et al., 2001), and that the quality with which neural activity tracks this envelope is related to the quality of speech comprehension (Ahissar et al., 2001). The developmental origin of the ability to track the speech envelope remains unexplored. To date, envelope tracking has only been investigated in adults (Kubanek et al., 2013) and preadolescents (Abrams et al., 2008) presented with sentences in their native language. However, these populations represent highly proficient, experienced listeners, and it is unknown whether envelope tracking arises from extended experience with language or is a basic feature of the auditory system. To tackle this question, we studied newborns born to French monolingual mothers, ensuring that their prenatal experience was limited to this language. We presented them with spoken sentences in French, Spanish and English, while simultaneously recording their brain activity using electroencephalography (EEG). The use of these three languages allowed us to investigate how the newborn brain responds to familiar and unfamiliar languages. Our results show that prenatally French-exposed newborns track the envelope of sentences in all three languages equally well. Furthermore, newborns’ brain activity also phase-locks to the incoming stimuli regardless of the language. Our findings reveal that the human ability to track the envelope of speech is not dependent on experience with language: it is a fundamental auditory mechanism. These results shed new light on the abilities that help newborns break into language from the get-go.

    Natalie Boll-Avetisyan (Department of Linguistics, University of Potsdam, Germany)

    Infants’ spontaneous rhythmic body movements while listening to rhythmic speech

    It is widely acknowledged that the perception of speech rhythm is one of the most relevant aspects of language for tuning infants into language acquisition (Gleitman & Wanner, 1982). However, few infant studies have considered that rhythm is multimodal, with intrinsic connections between speech and the body. Only for production has it been shown that infants' gestures are synchronized with their earliest babbles (Esteve-Gibert & Prieto, 2014).
    A chance observation inspired the present investigation of the potential link between body and speech in rhythm perception: in artificial language learning experiments, when infants are familiarized with continuous speech, we sometimes observe that they move their bodies rhythmically, as if they were dancing. Artificial speech in such experiments is highly rhythmic, with syllables organized in a repetitive order. Infants' spontaneous rhythmic body movements are enhanced in specific conditions, for example when hearing music (Zentner & Eerola, 2010). This raised the question of whether specific acoustic rhythm cues (pitch, intensity, or duration) increase infants' spontaneous rhythmic body movement.
    Bion et al. (2011) found that 7.5-month-old Italian-learning infants use pitch cues for segmenting artificial languages. Abboub et al. (2016) extended this study to French- and German-learning 7.5-month-olds, who used both pitch and duration, but not intensity. We suspected that infants' rhythmic body movements are associated with performance in such tasks.
    For the present study, we used Abboub et al.'s (2016) data from the German-learning 7.5-month-olds (n=69) and added unpublished data from 9.5-month-olds (n=80). In the experiment, infants were familiarized for three minutes with rhythmic speech streams, synthesized with either a French- or a German-sounding pronunciation. In three conditions, every second syllable was stressed (…NAzuGIpeFYro…) by pitch, duration, or intensity cues. In a control condition, no syllable was stressed (…nazugipefyro…). Afterwards, segmentation was tested with bisyllables that were either words (e.g. NAzu) or partwords (e.g. zuGI) in the artificial language, using the head-turn preference procedure.
    Video recordings were annotated for infants' rhythmic body movements during familiarization. 42% of all babies occasionally moved rhythmically while listening. An ANOVA of the average rhythmic moving times (Fig. 1) showed a significant main effect of condition (F=2.68, p<.05) and a significant pronunciation*condition interaction (F=2.97, p=.03), but no effects of age or other interactions. Further analysis revealed that the condition effect was significant only with the German-sounding (F=4.43, p<0.01) but not with the French-sounding pronunciation. Post-hoc comparisons (Bonferroni-corrected) showed significantly longer body movements in the duration than in the intensity condition (p=0.02); no other comparison reached significance. Moreover, moving times were marginally longer in the German-sounding than in the French-sounding condition (p=.07). Correlations between rhythmic body movements and speech segmentation performance were not significant.
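    As a sketch of how such Bonferroni-corrected post-hoc comparisons are computed (all data simulated; the condition means loosely echo the pattern reported above):

        from itertools import combinations
        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(2)
        moving_time = {                          # hypothetical seconds of movement
            "pitch":     rng.normal(6.0, 3.0, 25),
            "duration":  rng.normal(8.5, 3.0, 25),
            "intensity": rng.normal(5.0, 3.0, 25),
        }

        pairs = list(combinations(moving_time, 2))
        pvals = [stats.ttest_ind(moving_time[a], moving_time[b]).pvalue
                 for a, b in pairs]
        reject, p_corr, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
        for (a, b), p, sig in zip(pairs, p_corr, reject):
            print(f"{a} vs {b}: corrected p = {p:.3f}{' *' if sig else ''}")
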
    The present study provides initial exploratory evidence that infants' spontaneous rhythmic body movements while listening to rhythmic speech are systematic. We observe that infants are motorically more engaged with native-sounding than with non-native-sounding speech. Moreover, their engagement is highest when hearing duration cues to rhythm. Future studies with more statistical power should test whether moving the body to the rhythm of speech contributes to prosody acquisition.

    Posters

    Poster Session 1: Thu 13 June 16:30 – 18:00

    Anne van der Kant, Jie Ren, Mariella Paul, Claudia Männel, Angela Friederici, Barbara Hoehle & Isabell Wartenburger (Universität Potsdam & Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany)

    The role of the PFC in the sensitive period for implicit non-adjacent dependency learning in early childhood

    Infants acquire the grammatical rules of their native language with remarkable ease. The ability to implicitly extract and generalize abstract rules between non-adjacent elements (non-adjacent dependencies or NADs) is present very early in life but limited in adulthood (Mueller, Friederici, & Männel, 2012). The limited ability of adults to implicitly learn NADs was attributed to an age-related increase in cognitive control through the involvement of the prefrontal cortex (PFC) (Friederici, Mueller, Sehm, & Ragert, 2013). This increase in cognitive control might restrict associative learning, which has been assumed to guide implicit grammar learning under passive listening. We aim to uncover the time window of the sensitive period for associative NAD learning and how the activity and connectivity of the PFC, as well as inhibitory control, relate to age-related changes in NAD learning ability.
    We tested 2-year-old (N=25) and 3-year-old (N=25) German-speaking children on their implicit learning abilities using a grammar-learning paradigm with short Italian sentences containing NADs. Using functional Near-Infrared Spectroscopy (fNIRS), we assessed the detection of NAD violations after a short learning period (5 min / 100 sentences) in an alternating-non-alternating paradigm (Gervain, Macagno, Cogoi, Peña, & Mehler, 2008). We also assessed the functional connectivity of the PFC during learning. For functional connectivity analyses, 2 min of artifact-free data were obtained from the learning phase for 2-year-olds (N=15) and 3-year-olds (N=15) respectively. To assess inhibitory control, a subgroup of children (N=15) also completed an eye-tracking task adapted from Kovács & Mehler (2009). Children in this task were first trained to anticipate a visual stimulus on one side of the screen and then retrained to suppress the learned cue by switching their anticipation of a stimulus on the opposite side of the screen.
    Oxyhemoglobin (HbO) changes showed significant differences between blocks with and without NAD violations in a cluster of frontal channels in 2-year-olds, but not in 3-year-olds, indicating that 2-year-old children learned the NADs. Furthermore, functional connectivity analyses showed that the time courses of the learning phase are correlated (R>0.4, p<.001) in these frontal channels for 2-year-olds, but not for 3-year-olds, suggesting involvement of the PFC in the learning process itself in 2-year-old children. Structural Equation Modeling revealed that individual levels of inhibitory control negatively predict NAD violation detection independent of age. Although this result should be interpreted with care due to the smaller sample size, data from this subgroup also confirmed the age-related differences between NAD violation blocks and blocks without NAD violations in the left frontal region. Taken together, our results not only show a decreased ability for implicit learning of non-adjacent dependencies during the third year of life, but also suggest increased involvement of the PFC during NAD learning in 2-year-old compared to 3-year-old children. Finally, the development of inhibitory control mechanisms, rather than an increased involvement of the PFC in NAD learning, might drive the decrease in NAD learning ability.
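    The connectivity measure used here (correlation of channel time courses) can be illustrated with a minimal sketch on simulated HbO signals; the channel count, sampling rate and mixing weights below are arbitrary, not the study's parameters:

        import numpy as np

        rng = np.random.default_rng(3)
        n_channels, n_samples = 8, 1200          # e.g., 2 min at 10 Hz
        shared = rng.normal(size=n_samples)      # common frontal component
        hbo = 0.6 * shared + rng.normal(size=(n_channels, n_samples))

        conn = np.corrcoef(hbo)                  # channel x channel correlations
        upper = conn[np.triu_indices(n_channels, k=1)]
        print(f"mean pairwise R = {upper.mean():.2f}")
        # The abstract reports R > 0.4 among frontal channels during learning
        # in 2-year-olds but not 3-year-olds.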

    Katharina Zahner (University of Konstanz, Germany)

    The effect of pitch accent type on German infants’ stress perception: Summing up

    Infants growing up in stress-timed environments interpret stressed syllables as word onsets [e.g., 1, 2-5], even when stress conflicts with other word boundary cues [6]. Accentuation has recently been shown to promote stress-based segmentation in German infants: 6-month-olds extracted pitch-accented trochees (SW, strong-weak) from speech but failed when trochees were unaccented; at 9 and 12 months of age, recognition was independent of accentuation [5]. Autosegmental-metrical phonology [7] distinguishes different types of accents, depending on where the f0 peak is realized with regard to the stressed syllable, i.e., coinciding with, preceding, or following it. In a series of head-turn preference experiments, we studied the effect of pitch accent type on German infants’ perception of stress and hence their ability to extract SW-units from fluent speech, see Table 1.
    In Experiment 1, 54 9-month-olds were familiarized with trisyllabic WSW-words in sentence contexts, in three different intonation conditions (between-subjects): one in which the f0 peak was aligned with the stressed syllable (medial-peak condition, LH*L) and two misalignment conditions (f0 peak preceding (HL*L) or following (LL*H) the stressed syllable). Infants showed recognition of the embedded SW-units in the peak-stress-alignment condition only (1.4sec longer looking times to novel than to familiar SW-words, p<0.05). To rule out the possibility that it is the tonal alternation in the medial-peak condition (LH*L), rather than the f0 peak, that makes the stressed syllable more salient, infants were familiarized with a horizontally flipped medial-peak contour (HL*H) on the WSW-words in Experiment 2. As no SW-recognition was observed, high f0 seems to be a necessary cue to stress in metrical segmentation.
    Experiment 3 investigated whether high f0 on its own is powerful enough to signal stress in German infants. To this end, 48 9-month-olds were familiarized with trisyllabic WWS-nonce words in an alignment (stressed syllable high-pitched) or misalignment condition (f0 peak preceded the stressed syllable), and tested on the last two syllables of the WWS-word, but with a reversed metrical structure (SW). Infants showed recognition of SW-items in the misalignment condition (1.2sec longer looking times to familiarized items, p<0.05), suggesting they treated high-pitched (unstressed) syllables as word onsets. Experiment 4 replicated the misalignment condition of Experiment 3 with resynthesized materials to isolate the effect of high f0 from other peak-supporting acoustic cues, such as intensity [8]. However, results revealed no recognition of SW-units, speaking against high f0 as a sufficient cue to stress for German infants.
    Taken together, our findings shed light on the underlying mechanism of stress perception in metrical segmentation, suggesting that pitch accent type (rather than accentedness per se [5]) drives this process – at least in German. Our results reveal that the f0 peak in an accent is a necessary, but not sufficient, cue to stress for German infants (cf. [9-11] on the relevance of converging cues for stress perception in infants and children). We discuss possible mechanisms that may explain our findings, i.e., the salience of high-pitched syllables or the high occurrence frequency of high-pitched stressed syllables in German infant-directed speech [12].

    Mengru Han, Nivja de Jong & René Kager (Utrecht University & Leiden University, The Netherlands)

    Is prosody of infant-directed speech in word-learning contexts correlated with children’s vocabulary?

    The most salient feature of infant-directed speech (IDS) is its exaggerated prosody compared to adult-directed speech (ADS) (Soderstrom, 2007). Despite robust evidence suggesting that the quantity of IDS is associated with children’s vocabulary (e.g., Hart & Risley, 1995), results are mixed regarding correlations between IDS prosody and children’s vocabulary. Some studies suggest that IDS prosody is significantly correlated with children’s vocabulary size (e.g., Porritt, Zinser, Bachorowski, & Kaplan, 2014), while others suggest the opposite (Kalashnikova & Burnham, 2018; Song, Demuth, & Morgan, 2018). These correlational studies have invariably measured prosody at a global level, rather than focusing on IDS prosody specific to word-learning contexts, in which mothers introduce unfamiliar words to children. The current study set out to examine whether the global prosody of IDS and/or the prosody of IDS specific to word-learning contexts are correlated with children’s vocabulary size.
    We used a storybook-telling task to elicit semi-spontaneous speech from Dutch mothers when their children were 18 and 24 months old (longitudinal design: 18m: N = 43; 24m: N = 27), and we measured children’s receptive vocabulary using the Dutch Communicative Development Inventory (N-CDI) at both ages. The storybook contained target words that were familiar or unfamiliar to children. Each mother told the story twice, once to an adult (ADS) and once to her child (IDS). The prosodic measures were articulation rate, mean F0, and F0 range of the target words and of the utterances containing them. We adopted a “hyper-score” measure, dividing the IDS values by the ADS values, following Kalashnikova & Burnham (2018). Specifically, we calculated “general hyper-scores” (including both familiar and unfamiliar words) as a measure of general prosodic exaggeration and “unfamiliar hyper-scores” (including unfamiliar words only) as a measure of prosodic exaggeration specific to word-learning contexts.
    Multiple regression analyses were conducted to examine the concurrent correlations between hyper-scores and children’s vocabulary size at 18 months and 24 months. Also, we examined the correlations between hyper-scores at 18 months and children’s vocabulary growth from 18 to 24 months.
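    A minimal sketch of the hyper-score computation and one such regression, on simulated data (all column names and values are hypothetical, not the study’s measurements):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 43
        df = pd.DataFrame({
            "rate_ids": rng.normal(3.8, 0.4, n),   # articulation rate, IDS
            "rate_ads": rng.normal(4.6, 0.4, n),   # articulation rate, ADS
            "f0_ids":   rng.normal(260, 25, n),    # mean F0 (Hz), IDS
            "f0_ads":   rng.normal(210, 20, n),    # mean F0 (Hz), ADS
            "vocab":    rng.normal(150, 60, n),    # N-CDI receptive vocabulary
        })

        # Hyper-score: each mother's IDS value divided by her own ADS value
        df["rate_hyper"] = df["rate_ids"] / df["rate_ads"]
        df["f0_hyper"] = df["f0_ids"] / df["f0_ads"]

        print(smf.ols("vocab ~ rate_hyper + f0_hyper", data=df).fit().summary())
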
    The results revealed that the general hyper-scores were not significantly correlated with children’s vocabulary size at either age (all ps > 0.1). However, when the predictors were unfamiliar hyper-scores, our results showed that (1) at 18 months, a slower utterance articulation rate, a lower word mean F0, and a smaller utterance F0 range of IDS were significantly correlated with a larger receptive vocabulary at the same age (R2 = 0.227, F(4, 38) = 2.79, p = 0.04); (2) at 24 months, a slower utterance articulation rate, a lower utterance mean F0, and a larger F0 range of IDS predicted a larger receptive vocabulary (R2 = 0.36, F(3, 21) = 3.92, p = 0.023); and (3) a larger utterance F0 range of IDS at 18 months significantly predicted children’s vocabulary growth from 18 to 24 months.
    These findings suggest that the prosody of IDS specific to word-learning contexts is predictive of children’s vocabulary size and may serve specific linguistic purposes.

    Denis Burnham, Karen Mattock & Marina Kalashnikova (MARCS Institute, Western Sydney University, Australia / Basque Centre on Cognition, Brain and Language, Spain)

    Infants’ perceptual attunement for lexical tone: Auditory-only and auditory-visual differences

    Perceptual attunement involves the narrowing of perceptual attention from language-general phone perception to language-specific phoneme perception; this occurs in infancy between 6 and 12 months for vowels and consonants (Polka & Bohn, 1996; Werker, 1984). Seventy percent of the world’s languages use lexical tone as well as consonants and vowels to alter word meaning, and perceptual attunement has also been found for lexical tones (Mattock & Burnham, 2006; Yeung et al., 2013). All these studies involved auditory-only stimuli, but there is recent evidence for similar attunement for consonants presented auditory-visually (Danielson et al., 2017). Given that adults’ auditory perception of lexical tone is facilitated by visual (face and head) information to a small but significant extent (Burnham et al., 2014), we address the uncharted issue of infants’ perceptual attunement for lexical tones in auditory-only (AO) vs auditory-visual (AV) modes.
    Sixty-four infants participated: 32 six-month-olds (18 female; Mage = 190.3 days, SD = 3.7) and 32 nine-month-olds (17 female; Mage = 281.3 days, SD = 4.9). For 16 infants at each age, the mode of presentation was Auditory-Only (AO), and for 16 it was Auditory-Visual (AV). In the AO condition, sounds were accompanied by a bullseye on a central screen; in the AV condition, there was a video recording of the speaker saying the syllables. Stimuli were tone variations of the syllable /kha/ produced by a female Malaysian Mandarin speaker. In each Age x Mode subgroup, eight infants were presented with an ‘easy’ tone contrast – Falling tone in familiarisation trials, Falling vs Rising in test – and eight with a ‘difficult’ tone contrast – Rising in familiarisation trials, Rising vs High in test. In familiarisation, the familiarisation tone played whenever infants looked at the screen, until 30 secs of fixation had accrued. Eight alternating test trials followed, four familiar and four novel tones.
    There was a significant fixation decline over test trials, so Block (first 4 vs second 4 trials) was included in the analyses. An ANOVA of novelty preference scores revealed an Age x Mode x Block interaction. Further investigation showed that AO 6-month-olds had a familiarity preference in Block 1 and a novelty preference in Block 2 (a familiarity-to-novelty shift not uncommon in infant speech perception studies; Burnham & Dodd, 1998). However, the 9-month-olds showed no differential preference in either block. This is evidence for AO tone discrimination at 6 months but not 9 months, i.e., perceptual attunement. In the AV condition, however, there was no significant familiarity or novelty preference in either block or at either age.
    One interpretation is that visual information on the screen distracted infants from attending to auditory information, hence no tone discrimination. More intriguingly, in the AV mode the similarity of visual information for the novel and the familiar tones may have overridden the dissimilarity of their auditory components, i.e., addition of visual tone information suppressed auditory perceptual discrimination and thus attunement for lexical tones. This possibility begs further investigation especially given the implications this may have for infants with hearing loss developing in a tone language environment.

    Paul Ratnage, Thierry Nazzi & Caroline Floccia (University of Plymouth, United Kingdom / Université Paris Descartes, France)

    Do British English-learning 11-month-olds identify consonant and vowel changes in familiar words?

    The ‘division of labour hypothesis’ (Nespor et al., 2003) proposes that consonants and vowels have specific functions in language processing. Specifically, consonants carry more information relating to the lexicon, whereas vowels play a more important role in syntactic and prosodic processing. Such a bias for consonantal information in lexical processing tasks has been demonstrated in adults across a range of methodologies and languages (Nazzi & Cutler, 2019). Research into the development of this consonant bias suggests that cross-linguistic differences, based on phonological and/or lexical properties of an infant’s native language, exist in the acquisition of the C-bias (Nazzi et al., 2016). For example, French-learning infants demonstrate an initial vowel bias at 5 months of age (Bouchon et al., 2015) before switching to a consonant bias from 11 months onwards (Poltrock & Nazzi, 2015). In contrast, research with British English-learning children has thus far demonstrated no bias in 5-month-old infants (Delle Luche et al., 2016), with infants showing equal sensitivity to vowels and consonants until the age of 24 months (Floccia et al., 2014). However, further research using the same methods is required in order to compare the development of the consonant bias across languages.
    The present study examined whether British English-learning 11-month-olds’ recognition of early familiar words relies more on consonants than on vowels, as Poltrock and Nazzi (2015) demonstrated for French-learning 11-month-olds. Experiment 1 measured whether infants show a preference for familiar words over unfamiliar non-words, using a previously established word-recognition paradigm (Vihman et al., 2004). Infants’ listening times for lists of correctly pronounced familiar disyllabic words (e.g. bottle) over unfamiliar pseudowords (e.g. puckle) were measured using the Headturn Preference Procedure (HPP). Across all 24 successfully tested infants, mean listening times for the familiar words (M = 7.20s; SE = 0.43) were significantly longer, t(23) = 4.52, p < .001, than for the unfamiliar pseudowords (M = 5.96s; SE = 0.34). This supports previous findings that infants demonstrate familiar word recognition at 11 months (e.g., Vihman et al., 2004; Swingley, 2005; Poltrock & Nazzi, 2015). Given this finding, if infants demonstrate a consonant bias, such a familiar word preference should be less impacted by a one-feature vowel change than by a one-feature consonant change. This was tested in Experiment 2, where the HPP was used to measure 11-month-olds’ listening times for consonant mispronunciations (e.g. ‘pottle’) over vowel mispronunciations (e.g. ‘buttle’) of the familiar words used in Experiment 1. Across the 16 infants successfully tested thus far (data collection will be completed by the time of the conference), mean listening times were 8.96s (SE = 0.86) for the consonant change and 8.27s (SE = 0.93) for the vowel change words, a difference that was not statistically significant, t(15) = 0.77, p = .42. These preliminary results suggest that, in comparison to their French-learning counterparts, British English-learning infants’ word recognition is equally impacted by consonant and vowel information. This finding provides further evidence that infants’ initial word recognition procedures vary cross-linguistically.

    Caroline Junge, Emma Everaert, Lyan Porto, Titia Benders, Brigitta Keij, Maartje de Klerk & Paula Fikkert (Utrecht University & Radboud University, The Netherlands / Macquarie University, Australia / Auris, The Netherlands)

    Comparing behavioral methods to index infant preference in a speech segmentation task

    Background: To assess infant language acquisition, researchers often rely on infants’ preference for listening to one type of sound over another, as indexed by longer looking at a visual stimulus. Such looking-time preferences can reveal what infants have extracted from their native language environment, or from a short lab-based familiarization phase prior to test. The familiarization-then-test paradigm is particularly important for studying the environmental cues and learning mechanisms infants can employ for language acquisition. Both familiarity preferences (familiarized over novel sounds) and novelty preferences have been observed, although the direction of preference remains difficult to predict (Bergmann & Cristia, 2016). This hampers the interpretation of variation in the existing literature and the formulation of predictions. Given that various procedures are available to index looking-time preferences, one possible source of variation could be the type of method employed. The present study therefore examines whether the choice of method modulates infant preference in a familiarization-then-test paradigm.
    Approach: We specifically tested infants’ ability to recognize disyllabic trochees from running speech, which is traditionally tested using the Head-turn preference procedure (HPP; Jusczyk & Aslin, 1995), but has also been assessed using a central screen in a central fixation (CF) procedure (Altvater-Mackensen & Mani, 2013). The HPP requires custom-made equipment and infant responses need to be manually coded. The CF paradigm can be either automatically coded using an automatic eye-tracker, or manually coded relying on the experimenter who tracks infant looking behavior on-line. Using the same design and set of stimuli, we compare infant preference in these three procedures (HPP; CF-eyetracker; CF-manual) for familiar words over unfamiliar words. For each procedure, 32 Dutch 10-month-olds participated.
    Preliminary results: While data collection for the CF-manual procedure is still ongoing (to be completed in April 2019), we have collected the data for the HPP and the CF-eyetracker procedures. For preliminary results, we compared these two procedures on the aggregated looking times per condition and the difference between conditions (see Figure 1). The relative difference in looking time between the two speech stimuli is larger in the HPP than in the CF procedure (t(62) = -2.22; p = .030). Using mixed linear models, we observe that infants in the HPP display a familiarity preference (F(280.6) = 4.71, p = .031), while infants in the CF-eyetracker procedure did not reveal any preference (F(272.06) = 1.12, p = .29). No significant differences between procedures were detected in the infant-controlled duration of the familiarization phase.
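    A mixed model of this general shape can be sketched as follows on simulated trial-level data (the specification, names and all numbers are illustrative, not the authors’ actual model):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        rows = []
        for proc in ("HPP", "CF"):
            for infant in range(32):
                ri = rng.normal(0, 0.5)              # random intercept per infant
                for word in ("familiar", "novel"):
                    pref = 1.0 if (proc == "HPP" and word == "familiar") else 0.0
                    for _ in range(6):               # six test trials per type
                        rows.append({"proc": proc, "infant": f"{proc}{infant}",
                                     "word": word,
                                     "lt": 7 + pref + ri + rng.normal(0, 2)})
        df = pd.DataFrame(rows)

        # Looking time as a function of procedure, word type and their
        # interaction, with a random intercept for each infant
        print(smf.mixedlm("lt ~ proc * word", df,
                          groups=df["infant"]).fit().summary())
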
    Discussion: Preliminary results suggest that only the HPP, but not the CF-eyetracker procedure, robustly reveals infants' word segmentation ability through a familiarity preference. We hypothesize that the contingency between gross motor behavior and sound presentation in the HPP better enables infants to display preferences. In addition to having implications for the interpretation of disparate findings, these outcomes are important to researchers who consider establishing or extending infant language labs.

    Lena Ackermann, Chang Huan Lo, Julien Mayor & Nivedita Mani (University of Göttingen, Germany / University of Nottingham Malaysia, Malaysia / University of Oslo, Norway)

    Word learning from a touchscreen app: 30-month-olds learn better in a passive context

    Tablet computers are becoming increasingly popular: In 2016, 78% of American households with children had a tablet at home, while 42% of children had a tablet computer of their own (Rideout, 2017). At the same time, educational apps are a growing market that lures parents with bold claims of boosting children’s learning in various domains. The majority of apps targeted at toddlers and pre-schoolers have not undergone formal evaluation (Hirsh-Pasek et al., 2015). Nevertheless, at least 80% of parents report having downloaded apps for their children (Rideout, 2017).
    Recent work (Kirkorian, Choi, & Pempek, 2016; Partridge, McGovern, Yung, & Kidd, 2015; Russo-Johnson, Troseth, Duncan, & Mesghina, 2017) has shown that children can indeed learn new words from a tablet app. Crucially, older children have been shown to benefit from active learning in touchscreen contexts (Partridge, McGovern, Yung, & Kidd, 2015), while interactivity seemed to impede learning in younger children (Kirkorian, Choi, & Pempek, 2016).
    In the present study, we investigate whether 30-month-old children benefit from active selection in a touchscreen-based word learning task. Based on previous research, we expected both groups to perform above chance, but we were particularly interested in whether children in the active condition would outperform their passive counterparts. Participants (n = 34) were assigned to either an active condition (where they could choose which objects they wanted to hear the label for) or a yoked passive condition (where selections were based on the choices made by age-matched children in the active condition). We familiarized children in both conditions with the touchscreen device and presented them with four novel word-object-associations. Word learning was examined using a two-alternative forced choice task (2-AFC) and a four-alternative forced choice task (4-AFC).
    Surprisingly, children in the passive condition significantly outperformed those in the active condition in the 2-AFC. In the 4-AFC, we found a significant interaction between condition and test order, indicating that active participants' performance decreased in later trials, while passive participants got better as the test phase went on.
    These results suggest that 30-month-olds do not benefit from active learning in a touchscreen-based word learning task. These findings contrast with Partridge, McGovern, Yung and Kidd (2015), who found a beneficial influence of active selection in preschoolers (3-5 years). One explanation is that younger toddlers in the active condition allocate valuable cognitive resources to the tapping itself, while children in the passive condition can focus fully on the word learning task. Relatedly, tapping might constitute a prepotent response for children in the active condition: instead of paying attention to the prompt, they might be waiting for their next chance to tap, and do so as soon as they can, regardless of instruction.
    The current study adds to the growing body of evidence that educational apps and their bold claims should be taken with caution: While children might benefit from interactive touchscreen apps under certain conditions, locomotor and cognitive constraints should always be taken into account.

    Irene Lorenzini & Thierry Nazzi (Université Paris Descartes / Laboratoire Psychologie de la Perception - LPP, France)

    On a possible link between babbling patterns and early lexical processing

    Perception and production skills are linked in adults (e.g. Skipper, Devlin & Lametti, 2017), which raises questions about the developmental trajectory of this connection. It has been shown that speech-related information contributes to perception in infancy (Yeung & Werker, 2013) and that production-perception links are already present at the babbling stage (DePaolis, Vihman & Keren-Portnoy, 2010). While the direction of this coupling has yet to be characterized (e.g. Nazzi & Gonzalez-Gomez, 2012), the shared conclusion is that the acquisition of production- and perception-related information is interconnected. In particular, it has been observed that infants producing larger sets of consonants (‘high-producers’) display a significant listening preference for not-yet-acquired consonants (provided these can be articulated at the participants’ age) over their own consonants, while infants producing smaller sets of consonants (‘low-producers’) display no preference (DePaolis, Vihman & Keren-Portnoy, 2010). This has been taken as evidence that high-producers benefit from production-derived information when processing their own consonants, making processing more efficient and enabling them to devote attention to the not-yet-acquired sounds. Here, we explored whether similar patterns exist for the processing of familiar words differing in phonetic content. Our stimuli were not selected on the basis of individual production patterns, but along the broader distinction between easy-to-articulate (early-acquired) and difficult-to-articulate (late-acquired) consonants.
    Two groups of healthy, typically-developing French-learning monolinguals (11- and 14-month-olds, N = 22 and 18, samples to be completed) were presented in an HPP procedure with two types of lists, containing words exclusively composed of easy-to-articulate consonants (plosives, nasals) vs difficult-to-articulate consonants (fricatives). Word frequency, vowel context and syllabic length were varied within but balanced across lists. All words were spoken in infant-directed style by a French-native female speaker. The set of consonants that each infant produced was collected by means of a detailed parental questionnaire and the groups were median-split into high- and low-producers.
    At 11 months, no significant preference (2-tailed paired t-test) was detected for either low- (t = -0.001; p = .9x) or high-producers (t = 0.4x; p = .7x). Conversely, at 14 months, high-producers oriented significantly more towards the lists containing difficult-to-articulate consonants (t = -3.x; p = 0.004), while no significant preference was found in low-producers (t = -0.8x; p = 0.4x).
    The current results agree with the previous literature describing a gradual emergence of the perception-production connection. On the one hand, the lack of production-related effects at 11 months is consistent with the fact that fricatives were overall not produced by the infants at this age (i.e., sensorimotor knowledge of these consonants was too weak, even in high-producers, to trigger effects). On the other hand, the production-related effect observed at 14 months in high-producers is consistent with the fact that, based on the parental questionnaires, these infants had started producing the difficult-to-articulate sounds. This study adds further evidence from French and is, to the authors’ knowledge, the first investigation to test the phenomenon with isolated familiar words in the Head-turn Preference Procedure.

    Rebecca Reh, Takao Hensch & Janet Werker (The University of British Columbia, Canada / Harvard University, USA)

    Variability promotes distributional learning of a liquid speech sound continuum in 5- & 9-month-old infants

    Over the first year of life, infants undergo a period of perceptual attunement, during which their ability to discriminate non-native phonetic contrasts declines and their discrimination of native phonetic contrasts improves. Previous research has shown that infants are able to track the statistics of phoneme distributions and that this may both facilitate the collapsing of non-native phonetic boundaries and enhance the discrimination of non-native and difficult native contrasts (Maye et al., 2002; Maye et al., 2008; Yoshida et al., 2010). However, distributional learning studies typically use a small number of sound tokens, whereas infants encounter much higher variability among speech sounds in their environment. Variability in the input may promote category formation; however, this has yet to be investigated in the context of distributional learning. To address this, we drew distributions from a liquid speech sound continuum with 140 unique sound tokens. Using electroencephalography (EEG), we investigate whether neuronal responses to ‘ra’ and ‘la’ sounds are modulated by exposure to either a bimodal or a unimodal sound distribution spanning the [r]~[l] phoneme space. We further examine how this response changes between 5 and 9 months.
    Methods: English monolingual 5-month-old (n = 44) and 9-month-old (n = 24; data collection ongoing to n = 44) infants were familiarized with either a unimodal or a bimodal distribution of ra-la speech sounds for 2.3 minutes (Figure 1). During the subsequent test phase, an ERP oddball task was used to assess infants’ discrimination between ‘ra’ and ‘la’ tokens. EEG activity during both familiarization and test was collected using a 64-channel HydroCel Geodesic Net.
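    For illustration, exposure probabilities over such a continuum could be constructed as below (the token coding, mode locations, widths and sample count are invented, not the study's parameters):

        import numpy as np

        tokens = np.linspace(0.0, 1.0, 140)      # 0 = clear [r], 1 = clear [l]

        def gaussian(x, mu, sd):
            return np.exp(-0.5 * ((x - mu) / sd) ** 2)

        bimodal = gaussian(tokens, 0.25, 0.12) + gaussian(tokens, 0.75, 0.12)
        unimodal = gaussian(tokens, 0.50, 0.20)
        bimodal /= bimodal.sum()                 # normalize to probabilities
        unimodal /= unimodal.sum()

        rng = np.random.default_rng(6)
        stream = rng.choice(tokens, size=200, p=bimodal)   # familiarization stream
        print(f"sampled {np.unique(stream).size} unique tokens")
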
    Results: ERPs to the standard and deviant sounds were generated for each infant and difference waves calculated (Figure 2). As previously reported, the mismatch response is delayed in 5-month-olds but by 9 months more closely resembles the adult response (Dehaene-Lambertz & Gliga, 2004). In 5-month-olds, the area under the curve for the time period between 400-800ms following stimulus presentation was significantly different between the bimodal and unimodal exposure conditions (Mann Whitney, p = 0.01), showing stronger evidence of discrimination following bimodal exposure. 9-month-old statistics will be calculated when data collection is complete. Preliminary results from the 5-month-old group were presented at ICIS 2017.
    Conclusions: We find that brief exposure to a liquid sound distribution is sufficient to alter neuronal responses to subsequent ‘ra’/‘la’ speech sounds. These results are the first to show that distributional learning is supported even under high variability in the familiarized speech continuum. Infants are able to extract the underlying statistics from a phoneme distribution containing many unique sound-token exemplars, suggesting that they pull out statistical properties on the basis of the acoustic similarity of sounds, as opposed to repetition in presentation. This situation better captures speech sound distributions in the real world, where infants encounter a high degree of variability in the speech signal even from the same individual.

    Manuela Friedrich, Matthias Mölle, Jan Born & Angela D. Friederici (Humboldt-University of Berlin / University of Lübeck / University of Tübingen / Max Planck Institute for Human Cognitive and Brain Sciences, Germany)

    Generalization and retention of non-adjacent dependencies in 6- to 8-month-old infants

    During their first years of life, infants learn grammatical rules of their native language without any effort and without being aware of them. This early syntactic knowledge is based on the formation of implicit, so-called non-declarative memories. Lexical-semantic knowledge, in contrast, can be retrieved consciously and thus is part of the hippocampus-dependent declarative memory system. Recently, infant brain responses have revealed that retention and generalization of lexical-semantic memories crucially depend on their timely consolidation during sleep (Friedrich et al., 2015; 2017; 2018). A similar impact of post-encoding sleep on the consolidation of syntactic knowledge has been suggested by studies in 15-month-olds, in which the memory for non-adjacent dependencies (NAD) was assessed behaviorally (Gómez et al., 2006; Hupbach et al., 2009). In these studies, however, generalization to novel stimuli was not tested. An electrophysiological study in which generalization to novel stimuli was tested (Friederici et al., 2011) showed that even 4-month-olds are able to generalize NADs immediately after familiarization. In that study, on the other hand, retention was not assessed.
    The present study aimed to investigate whether and how timely sleep after exposure affects retention and generalization of syntactic information in 6- to 8-month-olds. In the familiarization session, infants listened to 128 short Italian sentences (stimulus material of Citron et al., 2011), each containing one of two non-adjacent dependencies between an auxiliary and a main verb’s suffix (A-X-B, C-X-D) and one of 32 intervening verb stems (X-elements). In the retention period of 0.5 – 1.5 hours, half of the infants napped and the other half stayed awake. In the memory test after the retention period, infants were exposed to old sentences in their syntactically correct form (A-X-B and C-X-D), to syntactically incorrect sentences built from the same old elements but violating the NADs of the familiarization session (A-X-D and C-X-B), to syntactically correct sentences with new verb stems (A-Y-B and C-Y-D), and to syntactically incorrect sentences with new verb stems, which violated the NADs (A-Y-D and C-Y-B).
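    The logic of the four test conditions (old vs new stems, intact vs crossed dependencies) can be laid out schematically as below (labels only; these are not the Italian stimuli):

        from itertools import product

        deps = {"A": "B", "C": "D"}                  # familiarized NADs
        crossed = {"A": "D", "C": "B"}               # violation endings
        old_stems, new_stems = ["X1", "X2"], ["Y1", "Y2"]

        conditions = {
            "correct_old":   [(a, x, deps[a])    for a, x in product(deps, old_stems)],
            "incorrect_old": [(a, x, crossed[a]) for a, x in product(deps, old_stems)],
            "correct_new":   [(a, y, deps[a])    for a, y in product(deps, new_stems)],
            "incorrect_new": [(a, y, crossed[a]) for a, y in product(deps, new_stems)],
        }
        for name, triples in conditions.items():
            print(name, triples)
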
    Memory was assessed by event-related potentials (ERPs). Data from the memory test revealed that infants retained the NADs and generalized them to novel verb stems, indicating that they had acquired general knowledge about the syntactic regularities of the sentences presented during the familiarization session. The processing difference between incorrect and correct sentences in the memory test manifested itself in the same ERP differences as those observed for increasing familiarity. That is, verb suffixes in correct NADs were processed as more familiar, while the same suffixes in NAD violations were processed as unfamiliar. Moreover, although a timely nap modulated infant brain responses, post-encoding sleep was not crucial for the retention of NAD information. Overall, the results suggest that 6- to 8-month-old infants generalize syntactic information immediately during encoding and retain this generalized non-declarative knowledge in memory for at least a short period of time. A current study with older infants will reveal how these findings relate to the sleep-dependent memory effects reported by Gómez and colleagues.

    Matt Hilton, Romy Räling, Isabell Wartenburger & Birgit Elsner (University of Potsdam, Germany)

    Parallels in the segmentation of speech and action sequences during infancy

    We set out to examine potential parallels between the processes underlying the segmentation of speech and of action during infancy. Speech contains various cues to signal to the listener the location of boundaries between segments of speech (e.g. phrases). These cues include a pause between segments and a lengthening of the pre-boundary syllable (Peters et al., 2005).
    Previous work has identified an ERP component, the Closure Positive Shift (CPS), that is evoked by the perception of such a boundary in speech in adults (Steinhauer et al., 1999), and recent work has shown that 8-month-old infants also show a CPS in response to these boundaries (Holzgrefe-Lang et al., 2018). As recent work has suggested that the processes reflected by the CPS are domain-general rather than language-specific in adult populations (e.g. Glushko et al., 2016), we sought to examine whether the processes underlying boundary detection are also domain-general during infancy.
    We therefore examined whether a CPS is evoked by the processing of a boundary in an action sequence during infancy, by presenting 12-month-old infants with sequences of three distinct actions performed by a series of child-friendly animated characters. For example, the character expanded its whole body, jumped up, then rotated in space. We manipulated the timing of the sequences to create a no-boundary and a boundary condition. In the no-boundary condition, all three actions were performed as one single coherent sequence, with each individual action immediately following the previous action. In the boundary condition, however, a boundary was added between the second and third action, by inserting a pause between them and extending the duration of the pre-boundary action. These two modifications have been found to signal a boundary in non-speech stimuli (e.g. Friend & Pace, 2016; Frost et al., 2017). Recording EEG during the presentation of these action sequences both with and without a boundary, we examined whether the occurrence of a boundary within the sequence prompted a CPS. Finding a positivity in relation to a boundary would be further evidence that action and language processing during infancy are tightly related, and that the processes understood to underlie language processing and development might not be language-specific. Initial results and their implications for future research will be discussed.

    Boyang Qin & Marieke van Heugten (University at Buffalo, The State University of New York, USA)

    Verbs drive real-time argument representation in children under three years of age

    Spoken language comprehension is an active process causing our mental representations of scenarios or events to be continuously updated as sentences unfold. For instance, upon hearing “The boy will eat”, children and adults alike orient towards an image of something edible (Borovsky & Creel, 2014; Mani & Huettig, 2012; Altmann & Kamide, 1999), suggesting that verbs help restrict the possible set of arguments. Moreover, differences in verbs have been found to result in differences in mental representations, at least in adults. For one, eventive verbs (e.g., “destroy”, “spill”) take longer to process than stative verbs (e.g., “love”, “own”), likely due to (i) differences in argument-structure complexity (cf. Shapiro et al., 1987) and (ii) the change they trigger in the resulting mental scene (Gennari & Poeppel, 2003). In addition, eventive verbs alter the mental representation of their arguments once the event has taken place, such that hearing “The man has eaten” causes listeners to expect a partially eaten (rather than intact) food item (Altmann & Kamide, 2007). It is currently unclear, however, whether children use verb information in a similar fashion.
    To address this question, we tested native English-speaking 32- to 36-month-olds’ (N = 22 out of 32 total) mental representations of eventive verb arguments (with stative verbs as the control condition). Using the Preferential Looking Procedure, we tracked children’s eye movements as they were presented with two images side-by-side on a screen. On experimental trials, these images depicted the same entities but differed in object features (e.g., an intact bottle vs. a broken bottle). Following image onset, sentences were presented containing the entity label (e.g., “Susan dropped the bottle!” on eventive verb trials vs. “Susan noticed the bottle!” on stative verb trials, with the verbs being the only difference between the sentences). To prevent children from working out the goal of the experiment, we included filler trials in which the two images represented different objects.
    If the mental representation of verb arguments is modulated by verb type, children should look more toward the images representing the eventive scenarios in eventive verb trials than in stative verb trials. To test this, a bootstrapped cluster-based permutation analysis (based on Maris & Oostenveld, 2007) was employed on the proportion of looks to the eventive image. This revealed a divergence between the two conditions from 1200 to 2000 ms after verb onset (p = .02, see Figure 1), suggesting that children’s mental representations of objects in a scenario change upon hearing an eventive verb.
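    For illustration, the core of a cluster-based permutation test of this kind (Maris & Oostenveld, 2007) can be sketched in Python; the sign-flip scheme, array shape, cluster threshold and permutation count below are illustrative assumptions, not the authors’ exact bootstrapped pipeline:

        import numpy as np
        from scipy import stats

        def cluster_mass(diff, thresh):
            """Largest summed |t| over contiguous supra-threshold time bins."""
            t, _ = stats.ttest_1samp(diff, 0.0, axis=0)  # one t-value per time bin
            best = run = 0.0
            for tv in t:
                run = run + abs(tv) if abs(tv) > thresh else 0.0
                best = max(best, run)
            return best

        def cluster_permutation_test(diff, n_perm=1000, thresh=2.0, seed=0):
            """diff: (n_subjects, n_timebins) eventive-minus-stative looking proportions."""
            rng = np.random.default_rng(seed)
            observed = cluster_mass(diff, thresh)
            null = np.empty(n_perm)
            for i in range(n_perm):
                flips = rng.choice([-1, 1], size=(diff.shape[0], 1))  # permute condition labels
                null[i] = cluster_mass(diff * flips, thresh)
            return observed, (null >= observed).mean()  # cluster mass and permutation p-value
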
    In follow-up work, using exclusively eventive verbs, we plan to employ verb tense (e.g., “Karen has dropped the bottle” vs. “Karen will drop the bottle”, cf. Altmann & Kamide, 2007) to disambiguate between the images. This would rule out any potential effects induced by differences between the eventive and stative verbs. Nonetheless, the current results tentatively show that children under three years of age possess basic verb type knowledge that they rapidly integrate in their argument representation during real-time language processing.

    Aine Ni Choisdealbha, Adam Attaheri, Perrine Brusini, Sinead Rocha-Thomas, Sheila Flanagan, Natasha Mead, Panagiotis Boutris, Samuel Gibbon, Helen Scott, Henna Ahmed & Usha Goswami (University of Cambridge & University of Liverpool, United Kingdom)

    Individual differences in auditory entrainment to speech and nonspeech rhythm by infants and relations with early language development

    Low-frequency modulations of the amplitude of the speech signal carry information about its prosodic and syllabic structure. The alignment (or entrainment) of similarly low-frequency delta band neural oscillations to these temporal modulations is related to the perception of speech, and differences in entrainment have been found in children with and without phonological difficulties (developmental dyslexia). Consequently, the development of neural entrainment to speech rhythm may be important for language development.
    In the BabyRhythm project, we are recording infants’ neural responses to repetitive, rhythmic auditory stimuli at multiple time points in the first year. The stimuli are drum beats and spoken syllables, each presented at 2 Hz, a frequency in the centre of the delta band. Here we report data from a sub-group of infants who have already completed early word recognition and comprehension measures, relating their neural entrainment to individual differences in language outcomes. Recognition of common nouns is present from about 6 months of age, hence the investigation of neural entrainment at 6 months and its correlation with language measures taken during the first year.
    The first measure, administered at 8 months, used eye-tracking to measure word recognition (adapted from Bergelson and Swingley, 2011). The second measure was parent-reported language comprehension and production at 10 months, measured using the UK Communicative Development Inventory (CDI). We report data from 24 6-month-old infants, of whom 17 completed the word recognition task with sufficient trials for analysis. CDI data were collected for 20 infants.
    Overall, the increase in 2 Hz neural power in response to the stimuli (relative to a silent baseline) was greater than the increase for surrounding frequencies (F(2,46) = 7.54, p < 0.01, η² = 0.25). There was no difference between the drum and syllable stimuli, and no difference between the regions investigated (frontal, parietal). However, some interesting trends emerged in the relationship between delta power and early language development. Infants who showed a greater increase in delta power when listening to the syllable stimulus recognised a higher proportion of words in the eye-tracking word recognition task, and this trended towards significance, r = 0.345, p = 0.088. By contrast, there was no relationship between word recognition and delta power during the drum stimulus, r = 0.054, p = 0.419. For parental report, infants who showed a greater increase in delta power when listening to the drum stimulus showed a trend toward higher word comprehension scores on the CDI, r = 0.318, p = 0.086. There was no corresponding relationship between the CDI and delta power when listening to the syllable stimulus, r = 0.117, p = 0.312.
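    For illustration, a stimulus-rate power increase of this kind could be quantified as sketched below (Python; the flanking frequencies, epoch layout and baseline handling are illustrative assumptions, not the BabyRhythm pipeline):

        import numpy as np

        def power_at(epochs, fs, freq):
            """Mean FFT power at `freq` across (n_epochs, n_samples) EEG epochs."""
            n = epochs.shape[1]
            spectrum = np.abs(np.fft.rfft(epochs, axis=1)) ** 2 / n
            bin_idx = np.argmin(np.abs(np.fft.rfftfreq(n, 1.0 / fs) - freq))
            return spectrum[:, bin_idx].mean()

        def entrainment_index(stim_epochs, silent_epochs, fs):
            """Increase in 2 Hz power over a silent baseline, relative to the
            same increase at flanking frequencies."""
            increase = {f: power_at(stim_epochs, fs, f) - power_at(silent_epochs, fs, f)
                        for f in (1.5, 2.0, 2.5)}
            return increase[2.0] - (increase[1.5] + increase[2.5]) / 2
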
    These results indicate that entrainment to rhythmic auditory stimuli is occurring early in life, at a developmental period when infants begin to comprehend common words but before they begin to speak. Preliminary results suggest a potential relationship between auditory entrainment at six months and word learning in subsequent months, indicating a role for rhythmic entrainment in early language development.

    Caterina Marino, Carline Bernard & Judit Gervain (Université Paris Descartes, France)

    8-month-olds use word frequency as a cue to open/closed class categories

    The division of labor between function words, signalling grammatical structure, and content words, carrying meaning, is hypothesized to be universal [1, 2]. One potential cue to this lexical distinction is word frequency: functors are much more frequent than content words [3]. It has been shown that 8-month-olds prefer the relative order of frequent and infrequent words that matches the distributions found in their native language(s) [3]. Does this mean that they actually map frequent words onto the lexical category of functors and infrequent words onto content words?
    Since content words form an open class whereas function words form a closed class [4], we examined whether infants flexibly accept new test items only within the infrequent category.
    Using the Head-turn Preference Paradigm (HPP), five groups of 8-month-old monolingual French-learning infants were familiarized with an artificial language in which frequent words (F), mimicking functors, and infrequent words (I), corresponding to content words, strictly alternated.
    In experiment 1, infants were tested on sequences taken from the familiarization stream. Half of the test sequences started with a frequent word (F-I-F-I), the other half with an infrequent word (I-F-I-F). Since French is a functor-initial language, we predicted that infants would show a preference for the frequent-word-initial (F-I-F-I) sequences.
    In experiment 2, we replaced the infrequent words in the test items with novel ones (F-N-F-N vs. N-F-N-F). If infants expect infrequent words to be content words, and thus belonging to open classes, they should maintain their frequent word initial preference.
    In experiment 3, we replaced the frequent words with novel ones (N-I-N-I vs. I-N-I-N). We expected this manipulation to disrupt infants’ preference, as they could no longer rely on the frequent words as structural anchors.
    Two control conditions were then run. In experiment 4, memory for the infrequent items was tested. Infants were presented with pairs of infrequent words from the familiarization stream contrasted with pairs of novel words (I-I vs. N-N). A preference for the novel words would indicate that infants recall the infrequent words from the stream.
    Finally, experiment 5 served as a further control to investigate whether infants encoded the position of infrequent words at all. We presented infants with test items in which infrequent words were in the native-like final position (N-I-N-I) versus items in which both frequent and infrequent words were replaced by novel ones (N-N-N-N). The predicted preference for the N-I-N-I items would suggest that infants have some knowledge about their expected sequential position.
    In both experiments 1 and 2 we found that infants treated frequent words as functors and infrequent words as content words, showing the predicted frequent-initial preference. In experiment 3, by contrast, no such preference was found. This result is not simply due to a better recall of the frequent words, as infants readily discriminated the familiarized infrequent tokens from novel ones (experiment 4). Importantly, by showing a significantly longer looking time for items in which infrequent items were not replaced, infants demonstrated that they could encode the position of infrequent words alone (experiment 5).

    Lizhi Ma, Katherine Twomey & Gert Westermann (Lancaster University & University of Manchester, United Kingdom)

    A Negative Bias Affects Toddlers’ Word Learning

    Studies have reported that adults perceive negative information as more salient than positive information when acquiring new knowledge (Öhman & Mineka, 2001). This negativity bias is also found in infants over seven months old, who pay more attention to negative than to positive facial expressions (Hoehl, 2014). Meanwhile, evidence indicates that both emotionally positive and negative vocalizations facilitate 10-month-old infants’ word recognition (Singh, Morgan, & White, 2004). However, it remains unclear how perceived emotion influences toddlers’ long-term learning of word-object associations.
    In the current study, two eye-tracking experiments were conducted to investigate this question in English-speaking 30-month-olds. Each experiment consisted of a referent selection (RS) training phase followed by two retention testing phases (RT1 & RT2): RT1 after a five-minute break and RT2 on the following day, to examine long-term retention.
    In Experiment 1, during RS, participants watched an experimenter label three novel objects, each of them paired with two familiar objects, with neutral, positive or negative affect on a computer screen (e.g., positive: “Can you find the coodle?... Wow! Look! There is the coodle!”). During RT, the retention of label/affect-object relations was tested by showing all novel objects side-by-side while labelling one of them neutrally in label trials (e.g., “Can you find the coodle? …”) or cueing them only by emotional interjections in no-label trials (e.g., “Wow! Look! Wow! Look at that! Wow!”).
    Results: In the label trials, the proportion of target looking for negatively familiarized targets was above chance (.33) in both RT phases, and for neutrally familiarized targets only in RT2 (all ps < .02). However, participants looked at the three objects randomly when the positively familiarized targets were labelled. In the no-label trials, participants looked at the negative targets in both RT phases when hearing the negative interjection. However, when prompted by a neutral cue, they looked longer at the negative distractor than at the neutral target and the positive distractor in RT1. In RT2, participants only looked longer at the negative distractor than at the neutral target.
    The retention of the neutral label-object mappings only in RT2 could be explained by sleep-based consolidation (Williams & Horst, 2014), together with the fact that, as retention label trials were all neutral, no emotion generalization between training and test was necessary. When no linguistic or emotional information was provided at test, more attention was allocated to negative objects after the five-minute delay, but this attention bias was not found on the following day, suggesting that the strength of the negativity bias decreased with time.
    The finding that positively familiarized object-label mappings were not retained might be because, at test, the negative competitor attracted attention due to the negativity bias. Thus, Experiment 2 was designed to investigate whether 30-month-olds can retain positively valenced label-object relations. Experiment 2 followed Experiment 1 in training but paired only two objects at test, so that ‘positive’ objects appeared in trials without ‘negative’ objects. Data collection is ongoing, and the results of both experiments will be presented at the conference.

    Laura Elisabeth Hahn, Titia Benders, Tineke M. Snijders & Paula Fikkert (Radboud University, The Netherlands / ARC Centre of Excellence in Cognition and its Disorders; Department of Linguistics, Macquarie University, Australia / Max Planck Institute for Psycholinguistics, The Netherlands)

    Segmenting clauses in song and speech – absence of evidence for easier segmentation of song

    Phrasing of acoustic information is a universal phenomenon that can be observed in human speech, song and instrumental music. Here we investigate whether infants exploit the acoustic boundary between phrases to perceptually organize the information in song, as well as in speech. While infants’ ability to segment speech into phrases is well-attested (Nazzi et al., 2000; Johnson & Seidl, 2008), there is no evidence as to whether this ability extends to songs. We hypothesized that infants would find it easier to segment songs than speech: first, because songs can carry even stronger phrase-boundary cues than spoken phrases, and second, because songs are well known for their sustaining effect on infants’ attention (e.g. Tsang et al., 2016).
    In this study, six-month-old Dutch infants (n = 80) were tested on their ability to segment either songs or speech into phrases (effect of modality, between subjects). Following Nazzi and colleagues' (2000) head-turn preference procedure, infants were first familiarized with two sequences of the same words. While one sequence was uttered as phrase-internal, carrying phrase boundaries at the edges, e.g. "Koude pizza smaakt niet zo goed." ("Cold pizza doesn’t taste so well"), the other was uttered as phrase-straddling, carrying phrase boundaries halfway, e.g. "koude pizza / Smaakt niet zo goed" ("cold pizza. Doesn’t taste so well"). In the test phase, infants were presented with two passages of three sentences each: one passage contained the phrase-straddling sequence, the other the phrase-internal sequence. We measured listening times for both passages (effect of condition, within subjects). A linear mixed-effects model indicated significant main effects of condition (t = 2.20, β = 0.04, p = .03) and modality (t = -2.55, β = -0.09, p = .01), but no interaction (t = -0.26, β = -0.005, p = .8). We thus replicated infants’ ability to segment passages of speech into their constituents and have shown that this ability extends to the sung modality. We also replicated the finding that infants show prolonged listening times for song over speech. Contrary to our hypothesis, however, we did not observe that infants’ segmentation is boosted in song. We thus found no evidence that the heightened attention for infant-directed singing and the salient phrase structure of songs improve the parsing of acoustic boundaries. Moreover, comparisons between our experimental stimuli and those of earlier studies suggest that infants are quite flexible in their processing of boundary cues. While our stimuli were slower, with longer pauses between adjacent constituents, than in previous work, infants nevertheless parsed the incoming stream of acoustic information and recognized the familiarized sequences within the longer passages of sound.
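    A linear mixed-effects analysis of this design might look as follows (a minimal Python sketch; the file and column names are hypothetical, and the abstract does not specify the software or random-effects structure used):

        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per infant x test passage: listening time in seconds, condition
        # (phrase-internal vs. phrase-straddling) and modality (speech vs. song).
        trials = pd.read_csv("listening_times.csv")  # hypothetical file

        model = smf.mixedlm("listening_time ~ condition * modality",  # fixed effects
                            data=trials,
                            groups=trials["infant_id"])  # random intercept per infant
        print(model.fit().summary())  # main effects of condition and modality, interaction
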
    To supplement the confirmatory analyses presented here, we plan to investigate how infants’ phrase segmentation abilities are related to their behavior in the familiarization phase as well as to their vocabulary at 30 months. Overall, the present results not only replicate infants’ ability to process boundary cues in speech, but also provide novel evidence for infants’ domain general ability to segment acoustic information into smaller constituents.

    Marc Colomer & Nuria Sebastian-Galles (Pompeu Fabra University, Spain)

    Understanding the challenges of communication: A comparison between bilingual and monolingual infants

    During the first year of life, infants understand that people can communicate using speech (Martin et al., 2012). Infants expect speech, whether in their native language or a foreign one, but not non-speech sounds, to transfer information between communicative partners (Vouloumanos, 2018). Here we asked whether infants understand that speakers are able to exchange information when they share communicative conventions, but not otherwise; that is, a listener will be able to decode a message from a speaker only if she comprehends the language used to convey the message. The linguistic environment may play a critical role in determining whether individuals comprehend one or more languages (Pitts et al., 2014). In fact, while monolingual infants experience people communicating in one language, bilingual infants constantly experience that people can communicate in multiple languages.
    In two studies, we tested the role of the linguistic environment at 15 months of age in evaluating the success of communication between individuals. Study 1 investigated whether monolinguals expected foreign speech (Hungarian) to transfer information from Communicator to Recipient when both actresses introduced themselves in Hungarian (Same Language Condition, N=20), or when the Communicator introduced herself in Hungarian and the Recipient in the infants’ native language (Catalan or Spanish; Different Language Condition; N=24). In Study 2 we tested bilinguals in the same two conditions (Same Language Condition: N=16; Different Language Condition: N=20). Both groups of infants initially watched two video-clips in which each actress introduced herself. Then, the Communicator appeared alone and selectively grasped one of two objects (target) displayed in the video. Next, the Recipient appeared alone, showing no preference, grasping both objects. At test, the two agents appeared together and the Communicator could no longer reach the objects. The Communicator then turned towards the Recipient and repeated a sentence in Hungarian twice. In one of the test outcomes, the Recipient brought the target object over to the Communicator (Target outcome). In the other test outcome, the Recipient brought over the non-target object (Non-target outcome; Fig. 1). We measured infants’ looking times during each test outcome, capitalizing on the phenomenon that infants tend to look longer at unexpected or surprising events. If infants expected the Communicator’s speech to inform the Recipient about her preference for the target object, we expected longer looking times in the Non-target outcome than the Target outcome. Otherwise, we expected no significant differences.
    The results, although preliminary, suggest that monolingual infants expect foreign speech to transfer information only when both speakers share the same language, but not when the Recipient has been shown to be a native speaker. In contrast, bilingual infants expect transfer of information in both conditions (see Fig. 2). This evidence suggests that infants generalize their experience with others’ communicative interactions to reason about novel communicative situations involving unfamiliar people and languages.

    Camille Frey & Nuria Sebastian-Galles (Pompeu Fabra University, Spain)

    Top-down influences on phoneme acquisition: Data from Spanish-Catalan bilinguals

    When learning their native language, one of the first steps infants face is the acquisition of phonetic categories. To do so, it has been proposed that infants compute the distribution of sounds in the acoustic space (Maye, Werker and Gerken, 2002). This perceptual reorganization coincides with the acquisition of the first words (Bergelson and Swingley, 2009; Tincoff and Jusczyk, 1998), but the possible influence of word-level information on the establishment of phonetic categories has been poorly investigated. One interesting population in which to investigate this question is bilinguals learning two typologically close languages, such as Spanish and Catalan. These languages share a large proportion of cognates among their translation equivalents. Moreover, these cognates differ mainly in their vowels, inducing high vocalic variability and increasing the number of minimal pairs in the speech stream. Feldman et al. (2013) found that word-level information constrained how both adults and 8-month-olds treated an overlapping native vocalic contrast, leading them to propose that the presence of non-minimal pairs may help separate two overlapping vowel categories by providing a clearer word context.
    We want to address the possible impact of bilingualism/cognateness on the establishment of phonetic categories by comparing Spanish-Catalan bilinguals and monolinguals. We adapted Feldman et al.’s (2013) procedure to test both adults and 8-month-olds on their discrimination of a difficult-to-perceive non-native contrast (the British English /ɒ-ʌ/ contrast).
    Using a corpus of pseudo-words containing the vocalic contrast /ɒ-ʌ/, we familiarized both groups of participants according to two word-context conditions: the Minimal Pair (MP) condition, where the vocalic contrast appeared in all the pseudo-words (e.g., “litɒh-litʌh-nutʌh-nutɒh”), and the Non-Minimal Pair (NMP) condition, where the contrast appeared in distinct word contexts (e.g., “litɒh-nutʌh” or “litʌh-nutɒh”). Adults’ discrimination was assessed in a discrimination task by calculating their sensitivity score to the /ɒ-ʌ/ contrast (d’) in two test blocks (within participants). Infants’ discrimination was assessed in a Head-turn preference procedure by measuring the mean looking times towards two types of test trials (between participants): Non-Alternating (syllables repeating one of the test vowels) and Alternating (syllables alternating each test vowel).
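    For illustration, the standard signal-detection computation behind such a sensitivity score is sketched below (generic d’ with a common correction for extreme rates; the authors’ exact computation may differ, and the counts are toy values):

        from scipy.stats import norm

        def dprime(hits, misses, false_alarms, correct_rejections):
            """Sensitivity d' = Z(hit rate) - Z(false-alarm rate), with a
            log-linear correction keeping rates away from 0 and 1."""
            def rate(k, n):
                return (k + 0.5) / (n + 1.0)
            h = rate(hits, hits + misses)
            fa = rate(false_alarms, false_alarms + correct_rejections)
            return norm.ppf(h) - norm.ppf(fa)

        print(dprime(hits=38, misses=10, false_alarms=12, correct_rejections=36))  # toy counts
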
    Results of the adult study (n=80; Figure 1) showed that the monolingual group replicated the pattern found by Feldman et al. (2013): in the first test block, exposure to the NMP condition significantly increased participants’ sensitivity scores (p=.024). Bilinguals, instead, showed similar discrimination in the two conditions; additionally, they discriminated better than the monolinguals in both conditions, especially in the MP condition (p=.008). This suggests that Spanish-Catalan bilinguals put less weight on word context.
    Infants’ preliminary results show that only bilinguals (n=14) discriminate at test, as revealed by a significant test-trial by language-profile interaction (p=.018). When separating across conditions (Fig. 2), bilinguals show a trend towards discriminating the test trials only in the NMP condition (p=.063), suggesting a pattern different from that of their adult peers. Contrary to the results of Feldman et al. (2013), monolinguals (n=14) show no systematic preference for test trials in either condition, suggesting, so far, an absence of discrimination.
    These results suggest an influence of bilingualism on the establishment of phonetic categories, so far only for adult participants, but more data are needed for the infant group.

    Jessica Tan, Michael J. Crosse, Giovanni M. Di Liberto & Denis Burnham (The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia / Albert Einstein College of Medicine, USA / Laboratoire des Systemes Perceptifs / CNRS, France)

    Neural entrainment to auditory-visual speech in infants and children

    Speech is multimodal. Over and above auditory information, visual information from a speaker’s face, e.g., lips, eyebrows, contributes to speech perception and comprehension. Behavioural studies examining this visual speech benefit (VSB) show that adults, children and infants perceive speech more accurately in auditory-visual conditions than auditory-only conditions (Taitelbaum-Swead & Fostick, 2016; Teinonen et al., 2008). Recent neurophysiological studies with adults confirm and extend behavioural findings by showing enhanced neural entrainment to continuous speech stimuli in auditory-visual over auditory-only speech (Crosse et al., 2015).
    To date, no study has examined whether visual speech information augments neural entrainment to the speech amplitude envelope in infants or children, despite behavioural studies suggesting that visual speech information enhances infant and child speech perception. The aim of this study is to investigate neural entrainment to auditory-visual speech in infants and children.
    Electroencephalography (EEG) data (128 electrodes) were collected for 5-month-old infants and 4-year-old children. Recordings of a female native speaker of Australian English talking in infant-directed speech were presented in auditory-only (A: still photo of speaker’s face paired with the auditory recordings), visual-only (V: dynamic video of the speaker talking presented in silence) and auditory-visual (AV: dynamic video of the speaker talking and the corresponding soundtrack) conditions.
    Following Crosse et al. (2015), neural entrainment can be quantified through forward and backward modelling. Forward modelling involves deriving temporal response functions (TRFs) that describe how the spectrotemporal acoustic features of the stimuli are transformed into neural responses. Preliminary analyses of TRFs generated from 5-month-olds’ and 4-year-olds’ EEG data show a distinct difference between neural responses to AV and to the sum of A and V (A+V): a super-additive effect, in which the AV response is greater than the simple addition of the neural responses in the A and V conditions. This between-condition difference is supported by brain topography maps derived from the correlations between the original EEG and the predicted EEG signal across the 128 channels (Figure 1, Supplementary Material). In the A condition, the highest r values were clustered around the right temporal region; in V there was no obvious clustering; and in AV the highest r values were clustered around the left occipital and temporal regions.
    Unlike the forward modelling approach which provides spatio-temporal information, the backward modelling approach provides an index for neural representation by comparing an estimate of the reconstructed speech envelope from the EEG data to the original speech envelope. This approach results in a single r-value per condition, allowing comparison across the AV, A and V conditions. Results of this procedure will follow.
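    For illustration, the backward-modelling step can be sketched as a ridge-regularized reconstruction of the envelope from the EEG (in the spirit of Crosse et al.’s mTRF approach; array shapes, lags and the regularization constant are illustrative assumptions, not the authors’ implementation):

        import numpy as np

        def lagged(x, lags):
            """Stack time-lagged copies of a signal into a design matrix
            (circular shift for brevity; trim the wrap-around edges in practice)."""
            return np.column_stack([np.roll(x, lag) for lag in lags])

        def fit_decoder(eeg, envelope, lags, lam=1e2):
            """Backward model reconstructing the speech envelope from EEG.
            eeg: (n_samples, n_channels); envelope: (n_samples,)."""
            X = np.column_stack([lagged(eeg[:, c], lags) for c in range(eeg.shape[1])])
            # Ridge solution: w = (X'X + lam*I)^{-1} X'y
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

        # On held-out data, correlating the reconstructed with the true envelope
        # yields the single r-value per condition (AV, A, V) described above.
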
    While the exact mechanism underlying the VSB requires explication, these findings have important implications, particularly for infants and children with hearing impairment. As the addition of visual speech information has been found to compensate for a degraded auditory signal in speech perception in various contexts, both exogenous (e.g., environmental noise) and endogenous (e.g., hearing loss), these electrophysiological data will assist in determining the neural locus of such augmentation.

    Celia Rosemberg, Florencia Alam, Laura Ramirez, Cynthia Audisio, Leandro Garber & Maia Migdalek (National Council of Scientific and Technical Research, Argentina)

    Are SES differences related to the proportion of nouns and verbs in toddlers’ linguistic environment? A study with an Argentinean Spanish-speaking population

    This study explores the vocabulary composition (nouns and verbs) of the input to which Argentinian Spanish-speaking children are exposed in their daily experiences. Findings from cross-linguistic studies highlight the fact that specific properties of the input could explain the distribution of nouns and verbs in child production (e.g., Choi & Gopnik, 1995; Tardif, Shatz & Naigles, 1997). While these studies showed that certain aspects of language structure determine input characteristics, other studies provided evidence that the activity contexts and caregiver-child interactional routines that configure the child’s linguistic experience shape what children receive and grasp from their input, and therefore early vocabulary composition (Tardif, Gelman & Xu, 1999; Jackson-Maldonado, Peña & Aghora, 2011; Stoll, Bickel, Lieven & Paudyal, 2012). Given that socio-economic background frequently implies variations in these socio-cultural and pragmatic aspects, as well as in input frequency, this study asks whether socio-economic status (SES) implies differences in the characteristics of children’s linguistic environment and consequently in the vocabulary they access. SES has been shown to have an effect on the proportion of child-directed speech (CDS; Rowe, 2008; Rosemberg, Alam, Stein, Migdalek, Menti, Scaff & Cristia, 2017); thus, from a naturalistic perspective, it is relevant to study the entire language environment surrounding the child, comparing the distribution of nouns and verbs in CDS and overheard speech (OHS).
    The participants of the study are 20 infants and their caregivers, from low- and middle-SES households (10 females, age: 14 months), residing in the metropolitan area of Buenos Aires. Children were audio-recorded for 4 hours using a digital recorder in a vest, without the presence of the researcher. The middle 2 hours of each child’s recording were transcribed using the CHAT format (40 hours). The input was analysed in CLAN (MacWhinney, 2000) and coded for CDS and OHS. Nouns and verbs were identified using the MOR command for Argentinean Spanish. Following Stoll, Bickel, Lieven & Paudyal (2012), the noun-verb ratio was calculated.
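    For illustration, once every token carries a part-of-speech tag, the ratio reduces to simple counting (a toy Python sketch; the study itself used CLAN’s MOR grammar for Argentinean Spanish):

        from collections import Counter

        def noun_verb_ratio(tagged_tokens):
            """tagged_tokens: iterable of (word, pos) pairs, e.g. from the %mor
            tier of a CHAT transcript. Returns the noun/verb ratio."""
            counts = Counter(pos for _, pos in tagged_tokens)
            return counts["n"] / counts["v"] if counts["v"] else float("inf")

        cds = [("pelota", "n"), ("dame", "v"), ("agua", "n")]  # toy child-directed tokens
        ohs = [("trabajar", "v"), ("casa", "n"), ("ir", "v")]  # toy overheard tokens
        print(noun_verb_ratio(cds), noun_verb_ratio(ohs))
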
    Logistic regression analyses showed that middle-SES toddlers heard a greater proportion of nouns than verbs in CDS than in OHS; the opposite was observed among low-SES toddlers. This difference seems to indicate that in middle-SES households children might hear a greater proportion of referential language, that is, of words referring to entities such as concrete objects that can be seen, heard, or touched, and which may be used in joint attention contexts (Hoff, 2006, among others). The fact that the opposite is observed in low-SES households – a significantly lower proportion of nouns than verbs in CDS than in OHS – could reflect the fact that, in these households, utterances addressed to the child are to a greater extent commands aimed at regulating children’s behavior in activities in which they are not necessarily the center of attention, as previously noted in several studies (e.g., Hoff, 2013; Sperry, Sperry & Miller, 2018).

    Jennifer Sander, Barbara Höhle, Nicole Altvater-Mackensen & Aude Noiray (Laboratory for Oral Language Acquisition, University of Potsdam, Germany / Haskins Laboratories, USA / Johannes Gutenberg-University of Mainz, Germany)

    Gaze dynamics during infants' vocal development

    This study investigates developmental differences in infants’ attention to audiovisual information and how this process may relate to early production capability. A few studies have recently noted a developmental shift in infants’ processing of facial information, from a focus on the gaze area at 4 months of age to increasing attention to the speaker’s mouth between 8 and 10 months of age [1]. This finding has important implications for language acquisition because it highlights increasing attention to parts of the face that convey relevant information for speech processing [2, 3], both in terms of articulatory gestures and as the source of the acoustic speech signal, and for speech production (e.g., the visible movement of the speaker’s jaw may guide infants’ productions). Interestingly, this time frame corresponds to the emergence of a new speech template in infants’ vocal repertoire: babbling (canonical: [dædæd], variegated: [dædi]). Babbling is considered a milestone in spoken language acquisition because it instantiates infants’ first syllabic templates. From a biomechanical perspective, babbling results from vertical oscillations of the jaw while other articulators such as the lips and the tongue remain at rest or are passively carried by the jaw [4]. Hence, it could be that the transition from undifferentiated vocal forms (e.g., a long [æ]) to differentiated phonetic templates resembling adult speech more closely (babbling) is stimulated by greater attention to the speaker’s mouth, which provides access to linguistically relevant information. This study tests for a developmental relation between attention and production.
    Eye movements from 41 healthy monolingual German infants aged between 6 and 12 months were recorded in an audiovisual versus visual speech attention task. During the test, infants were seated on their parents’ laps, facing a computer screen which displayed audiovisual or visual exemplars of the vowels /i/ and /a/ pre-recorded by a German native female model speaker. An eye-tracking device (SMI RED 250), a microphone and a video camera were clipped onto the computer screen. Each infant was presented with 5 randomized repetitions of both target vowels per modality. Parents filled in a questionnaire describing their infant’s vocal activity in detail. An additional questionnaire assessing infants’ vocabulary will be sent at 18 months to test for a developmental relation between attention and lexical growth.
    To test for a developmental shift in infants’ visual attention, we determined two regions of interest (ROIs): the eye and the mouth area. Looks were coded with respect to these two ROIs; other looks were discarded. Using LMERs [7], we tested for effects of modality, age, vowel and ROI on infants’ total looking time. Preliminary results suggest a preference for the mouth area from 8 to 12 months of age, in contrast to 6- to 7-month-old infants, whose patterns were much more variable and difficult to interpret. Indeed, while some infants showed a preference for the gaze area, others focused more on the mouth. Hence, our findings in German corroborate the developmental shift observed in previous research with English infants [1]. Analysis of the full dataset will allow us to further test these preliminary trends.

    Natalie Fecher & Elizabeth K. Johnson (University of Toronto, Canada)

    Talker recognition in monolingual and bilingual 9-month-olds

    Bilingual and monolingual infants process spoken language differently, yielding differences in language discrimination, phoneme perception and word learning [1,2]. But are these differences limited to linguistic processing, or do infants who learn multiple languages from birth also differ in how they process non-linguistic information in speech, such as who is talking?
    A recent study demonstrated that bilingual 9-month-olds have an advantage over their monolingual peers in recognizing the talker’s identity in a face-voice matching task [3]. Critically, all infants participating in this study were tested on an unfamiliar language (Spanish). While the monolinguals failed to learn the face-voice associations in a foreign language, the bilinguals readily succeeded in this task. These results raise various questions. Did bilinguals outperform monolinguals because of domain-general cognitive or perceptual advantages (e.g., better memory for audio-visual pairings [4–8])? Or did bilinguals excel at this task due to differences specifically in how they process talker-related speech cues (e.g., due to delayed perceptual narrowing, or greater experience with a broader range of speech sounds [9])? To begin to tease apart these different explanations, we tested monolinguals and bilinguals on face-voice matching in English (a language familiar to both groups) rather than Spanish. If the group differences observed for Spanish arise from general cognitive or perceptual advantages, then bilinguals should be better at talker recognition in any language – even English.
    We tested 48 English-learning 9-month-olds on face-voice matching in English. Although all infants learned English, half of the infants learned at least one additional language. The English speakers producing the stimuli were the same as the Spanish speakers in [3], and the procedure was also identical. Using the ‘switch’ habituation procedure, infants were habituated to two talking cartoon faces (faceA/voiceA, faceB/voiceB) and tested on their ability to detect a mismatch between faces and voices. If infants had learned the face-voice pairings, then they should look longer during switch trials (e.g., faceA/voiceB) than same trials (e.g., faceA/voiceA).
    Overall, infants successfully detected the face-voice mismatch (longer looking times during switch trials). However, in contrast to earlier work, where bilinguals outperformed monolinguals at recognizing talkers in an unfamiliar language (see Figure 1, Spanish data), performance of the monolinguals and bilinguals did not significantly differ for the familiar language (see Figure 1, English data). Indeed, although not significant, there was even a numerical trend for monolinguals to outperform bilinguals. Thus, our results do not suggest that bilinguals are better at talker recognition in general. Rather, the bilingual advantage in this task appears to be limited to unfamiliar languages.
    Taken together, this study sheds light on emerging talker recognition abilities in monolingual and bilingual infants. In our presentation, we will discuss the implications of these findings for the development of talker recognition in young language learners, and we will propose future directions for this work.

    Agnes Kata Szerafin, Bence Kas, Istvan Winkler & Ildiko Toth (Hungarian Academy of Sciences, Hungary)

    Early triadic interactions and infants’ subsequent language development

    At the age of 9-12 months, typically developing infants become able to establish joint attention (JA) with a social partner. JA refers to a shared focus of two individuals on an object, marked by triadic coordination and gaze alternation between the partner and the object. The emergence of JA has generally been associated with language acquisition and social cognition. Some experimental studies, however, showed that younger infants are capable of triadic attention as early as 3 months of age, while object engagement occurs at about 4 months. Additionally, eye-gaze cueing facilitates neural processing of objects in 4-month-old infants. Little is known, though, about how early triadic interactions can be detected in mother-infant interactions, and whether they are related to language acquisition.
    In the present longitudinal study, 10-minute play sessions between 45 mothers and their 4-month-old infants were video-recorded. Using a micro-analytic behavioural coding scheme, we continuously registered maternal and infant gaze direction and object manipulation. Different types of triadic attention patterns were defined as coordinated combinations of the partners’ gaze and manipulation overlapping in time. These measures were summed into a Triadic Interactions variable representing the percentage of time dyads spent in triadic interactions during play. Infant word comprehension at 8 and 12 months was assessed by the Hungarian version of the MacArthur-Bates Communicative Development Inventory (HCDI) I: Words and Gestures.
    Linear regressions were calculated to predict overall receptive word comprehension as well as noun comprehension at 8 and 12 months. Preliminary analyses yielded significant effects of gender, birth weight, gestational week and maternal education on different types of triadic attention pattern variables; these were therefore included as control variables. No significant regression equation was found for overall receptive word comprehension at 8 months, but the model proved significant at 12 months (F(1,31) = 15.6; p < .001; R2 = .539). For receptive noun vocabulary, significant regression equations were found at both ages (F(1,39) = 6.2; p = .017; R2 = .211 for 8-month-olds and F(1,31) = 6.8; p = .014; R2 = .489 for 12-month-olds; Table 1). The 12-month models controlled for 8-month overall/noun receptive vocabulary, respectively.
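    A regression of this form might be set up as sketched below (Python; the file and column names are hypothetical, and the abstract does not specify the software used):

        import pandas as pd
        import statsmodels.formula.api as smf

        dyads = pd.read_csv("triadic_play.csv")  # hypothetical: one row per dyad

        # Predict 12-month receptive vocabulary from the percentage of play time
        # spent in triadic interactions, with the controls named above.
        fit = smf.ols("cdi_12m ~ triadic_pct + cdi_8m + gender + birth_weight"
                      " + gestational_week + maternal_education", data=dyads).fit()
        print(fit.summary())
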
    To our knowledge, this is the first study to show that the time spent in early triadic interactions is associated with infants' subsequent language comprehension. Triadic situations are ideal for object-name learning, because they make evident what the mother refers to. This facilitates the early emergence of noun comprehension during lexical development. Assuming that dyads' object-focused interaction patterns remain stable over time, they may contribute to the observed longitudinal effects. Results also suggest that, beyond the mothers' effort to engage infants in object-focused play, infants’ maturity as reflected by their birth weight can also play a role. In conclusion, we suggest that the amount of time dyads spend in early triadic interactions influences the onset of word comprehension and increases the rate of word learning in the early stages of receptive language development.

    Joan Birules, Mathilde Fort, Julien Diard, Laura Bosch & Ferran Pons (University of Barcelona, Spain / Université Lyon, France / Université Grenoble, France)

    Using Hidden Markov Models to understand infants’ developmental pattern of visual attention to talking faces: evidence from monolingual and bilingual infants

    Monolingual infants’ visual attention to a talking face follows a developmental pattern that begins in the eyes at 4 months of age and shifts towards the mouth at around 8 months of age (Lewkowicz and Hansen-Tift, 2012). Bilingual infants shift earlier to the mouth and show a stronger mouth-preference at 12 and 15 months of age than their monolingual peers. This suggests that bilingual infants rely more on the redundant audiovisual cues provided by the mouth area of talking faces to deal with their dual-language learning problem (Pons, Bosch & Lewkowicz, 2015; Fort et al., 2018; Birulés et al., 2018). Importantly, these studies have analyzed the data spatially, computing proportions of total looking time (PTLT) to two a priori defined areas of interest (AOIs): the eyes and mouth areas of the talking face. It remains to be explored whether more fine-grained analyses – both spatial and temporal, beyond average looking-time differences – could help better characterize infants’ visual exploration strategies at different stages of development.
    Here, we re-analyzed the data from Pons et al. (2015) and Birulés et al. (2018) using Hidden Markov Models (HMMs), which provide a more complete description of the data that includes data-driven state identification, in which states represent both spatial and temporal information (generalizing classical AOIs). We computed one HMM for each language background (monolingual and bilingual) and age group (4-, 6-, 8-, 12- and 15-month-olds) and compared fixation counts to each learned state, transition matrices between states and cross-likelihoods of the two models. To validate our approach, we first replicated the average looking-time results of the two studies using 2-state HMMs, which, as expected, resembled the pre-defined eyes and mouth AOIs previously published. Secondly, we observed that the HMMs captured temporal information with intermediate mandatory states (namely AOIs) between the eyes and mouth, and also a “rest of the face” state that was shared across all language-background and age groups (see HMM example in Figure 1). Further analyses using these new transition states will allow us to better classify participants’ gaze patterns, for instance to distinguish between a highly stable looking behavior and a more exploratory and variable gaze pattern. Importantly, these temporal differences would not be captured by classical average-time analyses. These results show that HMMs are a promising tool in infant eye-tracking studies that may provide new insights regarding the cognitive mechanisms at play in early language acquisition and face processing.
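    For illustration, a gaze HMM of this kind can be fitted with the hmmlearn package (a minimal sketch with hypothetical input files and a 3-state model standing in for the eyes, mouth and rest-of-face states; not the authors’ implementation):

        import numpy as np
        from hmmlearn import hmm

        fixations = np.load("gaze_xy.npy")     # (n_samples, 2) gaze coordinates, all infants
        lengths = np.load("gaze_lengths.npy")  # samples contributed by each infant

        model = hmm.GaussianHMM(n_components=3,  # e.g. eyes, mouth, rest of face
                                covariance_type="full", n_iter=200, random_state=0)
        model.fit(fixations, lengths)

        print(model.means_)     # learned state centres: data-driven "AOIs"
        print(model.transmat_)  # transition probabilities between states
        states = model.predict(fixations, lengths)  # per-sample state sequence
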

    Poster session 2: Fri 14 June 15:00 – 16:30

    Sandrien van Ommen, Silvana Poltrock & Thierry Nazzi (Université Paris Descartes - CNRS, France / University of Potsdam, Germany)

    Perceiving consonant and vowel tiers: potential limitations for 9-month-olds

    We explore whether German- and French-acquiring 9-month-olds show asymmetrical detection of consonant versus vowel repetition across auditorily presented pseudowords. The task is inspired by the finding that English-learning 9-month-olds prefer listening to lists of CVC pseudowords that share the initial CV (fed, feg) or C (e.g. mod, mib) over unrelated lists, but not when they share the final VC (e.g. mad, lad) or V (tiz, bis) (Jusczyk, Goodman, & Baumann, 1999). This result could reflect sensitivity to word-initial sounds (Zamuner & Kharlamov, 2016) or an early expression of the C-bias, with consonants being privileged in lexically related processing (Nespor, Peña, & Mehler, 2003). To explore the C-bias interpretation, we tested whether 9-month-olds prefer listening to lists in which (disyllabic) pseudowords share their consonantal tier (tufo, tafe) over those in which they share their vowel tier (tufo, luko) and over unrelated lists (tufo, lake).
    24 German- and 24 French-learning 9-month-olds were tested in HPP. The experiment consisted of 18 trials (6 for each of 3 conditions: consonant-related (C), vowel-related (V) and unrelated (U)) in which a list of 12 pseudowords was presented. Pseudowords were constructed with 6 different consonants and 6 vowels, combined into 24 vowel tiers and 24 consonant tiers, rendering 576 different pseudowords. Each infant heard a unique set of stimuli in which no pseudoword was ever repeated and maximum segmental variability was ascertained. Stimuli were synthesized with MBROLA voices German7 and French4 with equal-duration consonants and vowels, voices were counterbalanced by list. Results were analyzed with linear mixed models of the arcsine of looking time per trial (minimum LT: 1500 ms) including condition and trial order, modeling random participant intercepts and participant slopes of condition and trial order. Results showed no effect of condition in either language.
    These results fail to confirm our prediction of infants’ ability to spot consonant repetitions, previously found for English (Jusczyk et al., 1999). Several reasons could explain our null result. First, English-learning infants may develop the ability to spot consonant repetitions in the present context earlier than German- and French-learning infants. This is unexpected, though, because several studies have shown an earlier C-bias in French-learning (Nishibayashi & Nazzi, 2016) than in English-learning infants (Floccia, Nazzi, Delle Luche, Poltrock, & Goslin, 2014; Mani & Plunkett, 2007), rather than the opposite. Second, methodological reasons could explain our result. One change in experimental procedure was the use of a within- rather than between-subjects design. While increasing power by reducing participant variability, it also asks more of the participant. Another change was the use of MBROLA voices (rather than natural speech), which have flat prosody, whereas prosody has been found to help infants detect segmental differences, especially in non-initial position (Karzon, 1985). Lastly, our use of disyllabic instead of monosyllabic words required infants to notice repetitions across non-adjacent elements, which might be more difficult than detecting adjacent repetitions (though see Gonzalez‐Gomez & Nazzi, 2012). While further experiments are needed to understand which factors account for our results, the present study nevertheless points to limits in infants’ ability to detect segmental repetitions at the lexical level.

    Nathalie Czeke, Katharina Zahner, Jasmin Rimpler, Bettina Braun & Sonia Frota (University of Konstanz, Germany / University of Lisbon, Portugal)

    German infants do not discriminate Portuguese rising vs. falling contours

    Contrasts such as rising versus falling intonation contours play an important role in language development, as they may signal a difference in illocution type. In languages such as Portuguese and Basque, intonation is the sole means of differentiating polar questions from statements, while languages such as English or German additionally use morpho-syntactic information. English polar questions employ auxiliaries (e.g., do, be), while German fronts the finite verb (auxiliary or full verb). Frota et al. [1] found that Portuguese infants (5-6 months and 8-9 months) successfully discriminate between rises and falls in disyllabic intonational phrases. While Basque 4-month-olds succeeded in the same task, English 4-month-olds failed to discriminate, unless tested in a more sensitive procedure using restricted segmental variability [2]. Hence, the ability to discriminate rising and falling contours seems to depend on whether or not the native language marks illocution type by intonation only. We test this assumption with German infants. German is an interesting test case as it employs morpho-syntactic marking for polar questions (as English does), but in a different way.
    We replicated the study by Frota et al. [1] with German infants, using the same stimuli (16 segmentally-varied, disyllabic pseudo-words), procedure (visual habituation) and age groups (5-6 and 8-9-month-olds). Half of the infants were habituated with a rising, half with a falling intonation. The habituation criterion was set to a 60% decrease in average looking time over the last 4 trials compared to the first 4. In 2 consecutive test trials, infants were then presented with 8 bisyllabic words different from those in the habituation phase, with either the same or a different intonation from that heard during habituation (counterbalanced). At test, average looking times for infants at 5-6 months (n = 19, mean age = 0;5.27) and 8-9 months (n = 20, mean age = 0;8.15) were longer in the switch (young: 7.2 s (SD = 3.5), old: 7.6 s (SD = 3.7)) than in the same trial (young: 6.8 s (SD = 1.8), old: 7.0 s (SD = 3.0)), see Fig. 1. Separate paired t-tests for each age group, however, revealed no significant looking-time differences (young: p > 0.6; old: p > 0.4). There was no main effect of age group (p > 0.7) and no interaction between age group and trial type (p > 0.9). A combined analysis of our data and those of Frota et al. [1] showed no three-way interaction between age group, language background, and trial type (p > 0.2), but a significant interaction between language background and trial type (F(1,77) = 15.7, p < 0.01).
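    The habituation criterion amounts to a simple running comparison; a minimal sketch (Python, toy looking times in seconds):

        def habituated(looking_times, criterion=0.60):
            """True once the mean of the last 4 trials has dropped by at least
            `criterion` relative to the mean of the first 4 trials."""
            if len(looking_times) < 8:
                return False
            first = sum(looking_times[:4]) / 4
            last = sum(looking_times[-4:]) / 4
            return last <= (1 - criterion) * first

        print(habituated([12.0, 11.5, 10.0, 9.0, 5.0, 4.0, 3.5, 3.0]))  # True
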
    Our findings lend further support to the hypothesis that the extent of morpho-syntactic cues to question marking in the native language partly determines infants’ ability to discriminate rising from falling contours. Compared to the results from European Portuguese and Basque infants (intonational marking) and English-learning infants (additional morpho-syntactic marking) in the same experimental task [2], our results resemble those for English infants. In a next step, we plan to test German infants with no segmental variability as done with English infants [2]. A further question is how the domain of f0-movement (mono- vs. disyllabic) affects processing [cf. 3, 4].

    Mireia Marimon, Maxine Dos Santos, Thierry Nazzi & Barbara Höhle (University of Potsdam, Germany / Université Paris Descartes, France)

    Word segmentation cues: Prosody or statistics? Evidence from French

    Speech to infants – like speech to adults – is a continuous speech stream in which word boundaries are not marked by a set of unique phonetic cues. An important requirement for lexical development is to divide this fluent speech stream into units that correspond to the words of the language. Previous research has shown that infants rely on several types of information for solving this segmentation problem. Among the cues identified, two have been proposed to have a central role: statistical cues, which refer to the sensitivity to distributional regularities in the input (e.g., Aslin, Saffran & Newport, 1998), and prosodic cues like lexical stress (e.g., Jusczyk et al., 1993; Höhle, 2002).
    The weighting of these two cues seems to vary with age and, most importantly, with the language infants are learning and its phonological properties. Whereas statistical learning is found across different languages from as early as 7 months of age, prosodic cues like lexical stress are signaled by language-specific acoustic properties that differ between languages. In German, stressed syllables tend to be louder, longer and higher pitched than unstressed syllables (Dogil & Williams, 1999). French, however, has no lexical stress per se, but dominant phrasal iambic stress, marked by longer final syllables and by a pitch movement (Hayes, 1995; Delattre, 1966). This seems to have early processing consequences: it has been found that German-learning infants have a preference for initially stressed trochaic over finally stressed iambic words at 6 months of age, but that French-learning infants have no such preference (Höhle, Bijeljac-Babic, Herold, Weissenborn & Nazzi, 2009).
    In a previous study (Marimon & Höhle, under review), we examined how German-learning infants weight prosodic and statistical cues and found that – unlike their English-learning age-mates – German-learning infants weight prosodic information more heavily than statistical information already at 7 months. In the present study, we examined how French-learning infants weight prosodic and statistical cues and whether cross-linguistic differences appear. We tested a group of 6- to 7-month-old French-learning infants using the HPP in a paradigm similar to Thiessen and Saffran (2003). Infants were familiarized with a 3-min string in which statistical cues (transitional probabilities) and prosodic cues (lexical stress) were pitted against each other and therefore indicated different word boundaries. Preliminary data from the French group (n = 15) suggest that French infants rely more strongly on statistical cues: French-learning infants showed shorter looking times for statistical words compared to non-words (p = .03) and compared to prosodic words (p = .03), which we interpret as a novelty effect. The findings will be discussed in relation to the early impact of language-specific properties on segmentation mechanisms.

    Holly Bradley & Paul Iverson (University College London, United Kingdom)

    Infant neural entrainment to continuous speech: Initial methodological development

    The present study sought to measure infants’ cortical entrainment to the amplitude envelope of continuous speech. Previous studies with adults have demonstrated that neural oscillations entrain to amplitude modulation patterns in speech. This phenomenon has been used to assess factors such as the role of attention in speech processing; adults typically show higher entrainment to attended talkers in multi-talker or noisy environments. In this study, EEG was recorded from infants (mean age = 7.5 months) listening to a children’s story read in a child-directed speech style. Most babies listened to approximately 15 minutes of speech (3 repetitions of the story). Traditional ERP designs reduce noise by averaging across repeated events, but that is not possible with continuous speech such as that used here. Instead, the data were analyzed using a machine-learning technique that has been successful with adults: multivariate Temporal Response Functions (mTRFs) were calculated to map the neural response back to the amplitude envelope of speech. The mTRF resembles a traditional ERP, and it essentially extracts a neural component that is time-locked to amplitude variation in the acoustic signal. The extracted neural component was then compared to the speech amplitude envelope in terms of coherence (i.e., the phase synchrony of the two signals across frequency bands). Prior to the analysis, all babies with recordings that were consistently above an artifact-rejection threshold (±200 µV) were omitted. The results demonstrated that infants show cortical entrainment to speech much like that found in adults. However, this can only be found in relatively clean data (approximately 33% of babies tested), and requires advanced methods to overcome the remaining data artefacts. Future research using these methods could examine attention and more complex auditory scenes in order to understand more about infant speech perception.
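    For illustration, the final coherence step might look as follows (Python; file names and sampling rate are hypothetical, and scipy’s magnitude-squared coherence stands in for whatever estimator was actually used):

        import numpy as np
        from scipy.signal import coherence

        fs = 128  # sampling rate in Hz (hypothetical)
        component = np.load("neural_component.npy")  # mTRF-extracted neural component
        envelope = np.load("speech_envelope.npy")    # speech amplitude envelope at fs

        freqs, coh = coherence(component, envelope, fs=fs, nperseg=fs * 4)
        delta = (freqs >= 1) & (freqs <= 4)
        print(coh[delta].mean())  # mean coherence in the delta band
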

    Rianne van Rooijen, Emma Ward, Maretha de Jonge, Chantal Kemner & Caroline Junge (Utrecht University / Radboud University / Leiden University, The Netherlands)

    Two-year-olds at risk for autism can learn novel words from parents

    Background: Clinicians consider lagging behind in early vocabulary formation one of the first signs of atypical behavior in infants who later develop autism. Most of this evidence comes from parental questionnaires, which can be prone to biases, as some parents require their child to react explicitly as proof of word understanding (Houston-Price et al., 2007). Surprisingly, experimental evidence that infants at high risk for autism (HR) are indeed limited in their vocabulary formation is still missing (but see Gliga et al., 2012, with three-year-olds). The current preferential-looking study therefore compares on-line word learning ability in a high-risk sample of 24-month-olds (n=18) with a low-risk sample (n=11). Familial risk status was defined by whether the child's older sibling has autism (Ozonoff et al., 2011).
    Intervention studies aimed at mitigating early autistic traits can prove successful when they focus on improving the quality and quantity of parent-child interactions (McConachie & Diggle, 2007). In typically developing children, parents can also boost early vocabulary growth, as compared to child care providers (Marulis & Neuman, 2010). Indeed, a recent study revealed that typically-developing two-year-olds only show evidence of novel word learning when it is the mother who provides the accompanying speech (van Rooijen et al., 2018). As a first step to test on-line word-learning ability in our sample of infants, we therefore adopted this word-learning paradigm: we relied on the parents to read aloud the subtitles underneath the novel objects, while we recorded their child's eye movements in response to their speech. In addition, we collected information about the children's concurrent vocabulary scores (N-CDI: Zink & Lejaegere, 2002) and followed up on their autistic traits at three years (ADOS scores; Lord et al., 2012).
    Results: Both high- and low-risk children were able to learn new word-object mappings in our experimental set-up (F(1,27) = 4.29, p = .048; see Figure 1): they looked longer at the requested target than at the distracter item. Crucially, there was no difference in performance between groups. High-risk children did, however, show lower N-CDI scores (t(25) = 2.71, p = .012). Finally, we did not observe a correlation between ADOS scores and word learning abilities (r = -.174; p = .20; see Figure 2).
    Conclusion: Our results reveal that despite their reported lag in vocabulary size, children at high risk for autism do not differ in word learning ability from low-risk children, at least when it is a parent who provides the speech. This is vital information for designing new autism interventions with a primary role for the children's caregivers.

    Iris-Corinna Schwarz, Christa Lam-Cassettari, Ellen Marklund & Lisa Gustavsson (Stockholm University, Sweden / Western Sydney University, Australia)

    Does positive affect promote word learning in Australian English-learning and Swedish-learning 16-month-olds?

    The properties of infant-directed speech (IDS) facilitate associative learning (for a review, see Saint-Georges et al., 2013). One such property, positive affect, regulates the emotional state of the infant. Infants prefer to listen to positive affect in speech over neutral affect in speech (Singh, Morgan, & Best, 2002). Is positive affect related to infant language development? No relationship could be found between affect in parent IDS and infant vocabulary size in 7- to 19-month-olds (Kalashnikova & Burnham, 2018). However, a mismatch of positive and neutral affect between training and test trials impaired word recognition performance in 7.5- and 10-month-olds (Singh, Morgan & White, 2004), and 19-month-olds learned novel word-object associations only when they were presented in IDS, which, amongst other properties, carries more affect (Foursha-Stevenson, Schembri, Nicoladis & Eriksen, 2017).
    We studied whether affect type (positive/neutral) had an effect on the learning of two novel word-object associations in 16-month-olds. We used a 3-screen setup, employing video-recording and simultaneous toggling of infant gaze. The experiment consisted of 1) a pre-test phase that tested recognition of three early and presumably known words without familiarization, 2) a familiarization phase that associated two novel objects with two novel labels, one per affect type, 3) a 4-trial test phase with matched and mismatched affect type, and 4) one post-test trial with a known word.
    We tested 48 Australian English-learning and 30 Swedish-learning infants. Criteria for inclusion were correct target identification in 2 out of 3 pre-test trials to ensure task comprehension, more than 40 seconds of looking time during familiarization to guarantee learning opportunity, and correct target identification in the post-test trial to rule out fatigue. The final sample comprised 22 Australian English infants (10 female) and 12 Swedish infants (7 female).
    Proportion of looking time towards the target object served as the index of novel word learning. We ran a 2 x 2 ANOVA with the within-subject factors affect type (positive/neutral) and affect match (matched/mismatched with familiarization). There was no main effect of affect type or affect match and no interaction. We thus cannot conclude that positive affect facilitates forming novel word-object associations. These null results instead indicate that the 16-month-olds did not form word-object associations during the word-learning task regardless of affect type, despite general task compliance, ample familiarization time and target identification of known words in both pre- and post-test trials.

    Jolanta Golan, Leanne Barnes, Elena Kushnerenko, Rachel George, Melanie Vitkovitch, Derek Moore & Angela Gosling (University of East London & University of Greenwich, United Kingdom)

    Assessment of the auditory paradigms in generating MMR in 5- to 10-month-old infants

    Electrophysiological mismatch response (MMR) to acoustic change has been implicated in predicting language development. Studies using stimuli such as tones (He et al., 2009; Leppanen et al., 2004; Kushnerenko et al., 2007) and phonemes (Lee et al., 2012) rely on change (for instance, oddball or roving designs) and on a specific interstimulus interval (ISI) to produce differential responses to frequent and infrequent sounds. The aim of the current study was to assess the contribution of such manipulations to generating an efficient MMR in infants.
    Four auditory paradigms were designed, two manipulating the type of change and the other two the ISI duration. Each paradigm consisted of 370 trials, of which 70 were deviant trials. Two phoneme paradigms with an ISI of 800 ms contrasted in the type of spectral change: oddball (in a sequence da-da-ba-da-da-da-da-ba-da-da-da-ba...) versus roving (da-da-da-ba-ba-da-da-da-da-ba-ba-ba...). In both tone paradigms the oddball change in frequency, from 100-100 Hz in the standard to 100-300 Hz in the deviant tone pairs, was constant. They differed, however, in ISI duration: 800 versus 930 ms. Thirty-seven infants, aged between 5 and 10 months, participated in the study. They were exposed to all four paradigms in a counterbalanced order. A 128-channel EGI Hydrocel net was used to collect ERP responses. The data were averaged bilaterally over the left and right frontal and temporal channel groups.
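    As a purely illustrative aside (invented code, not the authors' stimulus scripts), the two change types can be sketched as follows: in an oddball sequence a deviant interrupts a fixed standard, whereas in a roving sequence the locally repeated sound itself changes identity after each run:

        import random

        def oddball(standard, deviant, n_trials=370, n_deviants=70, seed=0):
            # fixed standard with occasional deviants (da-da-ba-da-da-...);
            # real paradigms also constrain the spacing between deviants
            rng = random.Random(seed)
            seq = [standard] * (n_trials - n_deviants) + [deviant] * n_deviants
            rng.shuffle(seq)
            return seq

        def roving(tokens, n_trials=370, max_run=8, seed=0):
            # the "standard" changes identity after each run (da-da-da-ba-ba-...),
            # so deviance is defined by local context rather than by a fixed token
            rng = random.Random(seed)
            seq = []
            current = rng.choice(tokens)
            while len(seq) < n_trials:
                seq.extend([current] * rng.randint(2, max_run))
                current = rng.choice([t for t in tokens if t != current])
            return seq[:n_trials]

        print("-".join(oddball("da", "ba")[:12]))
        print("-".join(roving(["da", "ba"])[:12]))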
    Infants in our study generated an MMR to all four paradigms. Within phonemes, a larger MMR was produced to the oddball than to the roving change. Specifically, a significant mean difference between MMRs in the two paradigms was found in the left frontal cluster. In contrast, both tone paradigms generated larger MMRs bilaterally in the temporal than the frontal regions, but manipulation of the ISI did not have an effect, i.e., the MMR was similar in both tone paradigms. The shorter ISI was therefore selected as more efficient, as it could be completed more quickly, which is important in infant experiments.
    Based on the results, the phonemes oddball and the tone paradigm with the shorter ISI were chosen as prime generators of the auditory MMR in infants. The MMRs of the two selected paradigms were systematically compared in the final analysis. The MMR to frequency change in tones was larger than to the spectral change in phonemes both in the frontal and temporal regions, indicating that frequency change was more effective in generating MMR than spectral change.
    Overall, the findings suggest that in auditory paradigms, oddball change is more efficient than roving change, and that shortening the ISI by 130 ms does not affect the MMR, which allows for shorter paradigms with the same number of trials. Finally, the frequency change in tones is more reliable than the spectral change in phonemes at generating an efficient MMR, but the latter produces a more differentiated MMR in the frontal area, suggesting distinct processes involved in processing tones and phonemes. The study is important from a methodological perspective for the development of efficient electrophysiological auditory and, specifically, language assessments.

    Lana Jago, Michelle Peter, Samantha Durrant, Amy Bidgood, Julian Pine & Caroline Rowland (University of Liverpool & University of Salford, United Kingdom)

    Individual differences in productive vocabulary: Identifying toddlers who are slow to talk

    Some toddlers exhibit a delay in early productive vocabulary development and are classified as late talkers when they are two years old (Rescorla, 1989). Little is known about the predictors of late talking, yet many late-talking children go on to develop developmental language delay (DLD; Rescorla, 2002). In this study, we used our knowledge of individual differences in productive vocabulary development to better understand delays in vocabulary acquisition. The aims of this study were to a) identify predictors of individual differences in productive vocabulary at 24 months; b) investigate whether these predictors could identify children who are, and are not, slow to talk; and c) establish whether we can use these predictors to predict which children will be slow to talk at 24 months.
    Participants were taking part in a longitudinal project run in the North West of England, the Language 0-5 Project. We used data from 79 of the children in this project. The sample was split almost evenly by sex (41 girls).
    First, we investigated predictors of individual differences in productive vocabulary using regression analyses. We chose predictors previously shown to be predictive of individual differences in vocabulary development. These predictors included: sex, family history of language delay or dyslexia, input, conversational turns, earlier measures of receptive and productive vocabulary, mean length of utterance (MLU), speed of linguistic processing, and non-word repetition (NWR). Five factors - conversational turns, earlier receptive vocabulary scores, MLU, speed of processing and NWR - explained unique variance in productive vocabulary scores after controlling for sex and earlier productive vocabulary.
    Second, we divided the participants into two groups, children classified as slow to talk (N=20) and children with typically developing language (N=59), and used receiver operating characteristic (ROC) curve analyses to examine whether these predictors can also distinguish between children who are, and are not, slow to talk. The predictors identified as significant in the regression analyses successfully distinguished children in both groups.
    Finally, we took the best cut-off scores from the ROC analysis and used discriminant function analysis to determine the predictive power of different combinations of factors. We ran four analyses using the variables which were successful in predicting variance in the regression analyses: 1) all of the variables; 2) only the non-experimental measures; 3) only the variables from earlier time points; and 4) only non-experimental measures from earlier time points. The first three analyses yielded high percentages of accuracy and successfully distinguished between the two groups with good sensitivity and specificity. The final analysis yielded good accuracy and specificity but poor sensitivity, suggesting that if we want to identify toddlers who are slow to talk, the inclusion of experimental procedures improves prediction.
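    The cut-off selection step can be illustrated with a short, self-contained sketch (simulated scores, not the study's data or variables); it fits a ROC curve and picks the threshold maximizing Youden's J = sensitivity + specificity - 1:

        import numpy as np
        from sklearn.metrics import roc_curve, auc

        # Simulated predictor scores: 20 slow-to-talk and 59 typically
        # developing children, mirroring the group sizes reported above.
        rng = np.random.default_rng(42)
        scores = np.concatenate([rng.normal(-0.8, 1.0, 20),   # slow to talk
                                 rng.normal(0.5, 1.0, 59)])   # typical
        labels = np.concatenate([np.ones(20), np.zeros(59)])  # 1 = slow to talk

        # Higher scores indicate typical development, so negate them.
        fpr, tpr, thresholds = roc_curve(labels, -scores)
        print("AUC =", round(auc(fpr, tpr), 3))

        best = np.argmax(tpr - fpr)                           # Youden's J
        print("best cut-off:", round(-thresholds[best], 2),
              "sensitivity:", round(tpr[best], 2),
              "specificity:", round(1 - fpr[best], 2))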
    This research found that many of the factors which are strong predictors of individual differences in productive vocabulary development can be used to identify children who are slow to talk.

    Gonzalo García-Castro, Mireia Marimon, Chiara Santolin & Núria Sebastian-Galles (University Pompeu Fabra - Center for Brain and Cognition, Spain / University of Potsdam, Germany)

    Encoding new word forms when contrastive phonemes are interchanged: A preliminary study with 8-month-old infants

    From 6 to 12 months of age infants attune their perceptual abilities to the phonetic repertoire of their native language. Nevertheless, 8-month-old Catalan-Spanish bilingual infants (whose languages share a significant number of cognates) seem to treat as similar some words in which contrastive phonemes have been interchanged (e.g., "dodi" [’doði] and "dudi" [’duði]) in a preferential looking task. Yet anticipatory looking tasks have revealed that infants are able to discriminate such contrasts: they perceive /o/ and /u/ as different phonemes. One possible explanation is that /o/ and /u/ are frequently interchanged in translation equivalents across Catalan and Spanish (e.g., [pɾo’θeso] in Spanish, [pɾu'ses] in Catalan, both translations of "processing"). This might lead infants to consider this contrast as non-relevant at the lexical level. In other words, bilingual infants might treat "dodi" and "dudi" as acceptable variants of the same word. The results of preferential looking studies suggest that this effect may be restricted to frequently interchanged contrasts, like /o/-/u/ and /e/-/ε/, and may not extend to non-interchanged contrasts such as /e/-/u/ or /e/-/i/.
    We report preliminary data from a first study aimed at testing the Head-turn Preference Procedure of Jusczyk & Aslin (1995, Cognitive Psychology), given previous replication failures and contrasting patterns in terms of familiarity vs. novelty preference. Following J&A's Experiment 4, infants were familiarised with short sentences spoken in the infant's dominant language, embedding two made-up words (e.g. "gon", "mus"). At test, infants were presented with familiar (e.g. "gon", "mus") and unfamiliar words (e.g. "for", "pul"), and looking times were measured as the dependent variable. The results from monolingual infants (Figure 1; N = 11) suggest that they were able to discriminate and showed a novelty preference pattern. We computed a non-parametric test on median values (unfamiliar: Median = 11346.17 ms, SEM = 606.55; familiar: Median = 8996.5 ms, SEM = 610.54); W = 9, p = 0.032 (Cohen's d = 0.7). Our data replicated J&A's results.
    We are currently testing the critical experiment with 8-month-old Spanish-Catalan monolinguals and bilinguals. In this experiment infants are not presented with the familiarised words in the test phase, but with stimuli in which the vowels are exchanged (e.g., familiarised with "for", "pul" and tested with "fur", "pol" and with the new items "gon", "mus"). We predict that if bilingual infants treat the /o/-/u/ contrast as exchangeable at the lexical level, they will show a novelty preference for the new items (e.g. they will look longer when listening to "gon", "mus" than when listening to "fur", "pol").

    Linda Kelly, Jean Quigley & Elizabeth Nixon (Trinity College Dublin, Ireland)

    It’s your turn: Conversational turn-taking in father-child interaction and child executive function

    Introduction: Emerging research has suggested that features of child-directed speech, such as vocabulary diversity and language complexity, may support children's development of executive function (EF). Beyond exposure to varied and complex language, it has recently been proposed that conversational turn-taking may support deeper engagement by the child with the linguistic structure of speech input. Turn-taking reflects the degree to which parent and child are actively engaged with one another, with greater conversational balance signalling higher levels of joint attention and parental responsiveness. In relation to the core components of EF – working memory, cognitive flexibility and inhibitory control – conversational turn-taking requires children to continuously switch between the roles of speaker and listener, to wait until it is their turn to speak again, and to monitor what is being said, relating incoming verbal information to previously heard speech. Despite these theoretical links between turn-taking and EF, no research thus far has tested these associations. Furthermore, the majority of previous research exploring the association between language and EF has focused on mothers. The current pilot study sought to investigate the longitudinal associations between fathers' vocabulary diversity, language complexity and turn-taking during father-child interaction, and child EF.
    Method: Linguistic data were drawn from video-recordings of a sample of 20 two-year-old children (10 females) and their fathers performing a problem-solving task. These interactions were transcribed, and CHILDES was used to calculate fathers' mean length of utterance (MLU; a measure of syntactic complexity calculated by dividing the total number of morphemes by the total number of utterances), fathers' vocabulary diversity (VOCD; a measure of the number of unique words), and the ratio of child-father mean length of turn (MLT; a measure of child-father turn-taking, such that an MLT ratio closer to 1 indicates greater conversational balance). Child language development was assessed at age 3 using the Bayley Scales of Infant Development and EF was assessed at age 4 using an age-appropriate battery of tasks.
    Results: MLT ratio was positively associated with child EF at age 4 (r = .598, p < .01). No significant correlations between EF and father MLU or VOCD were observed. Controlling for child language at age 3 had little influence on the relationship between MLT ratio at age 2 and child EF at age 4. There was no significant difference in MLT ratio between father-daughter and father-son interactions.
    Discussion: Greater balance in conversational turn-taking between fathers and their children during a problem-solving task at age two was positively associated with child EF at age four. As EF is a critical component of child cognitive development and is an important predictor of school readiness and achievement, the findings of the current research suggest that targeting early conversational turn-taking in parent-child interaction may be an effective means of promoting EF development during the preschool period.

    Claudia Männel, Hellmuth Obrig, Arno Villringer, Merav Ahissar & Gesa Schaadt (University of Leipzig & Max Planck Institute for Human Cognitive and Brain Sciences, Germany / The Hebrew University of Jerusalem, Israel)

    Infants benefit from auditory predictive coding: Perceptual anchoring as a stepping stone into language acquisition

    Listeners predict upcoming events through experience. Predictive coding is functional in infants and adults, as evidenced by prediction errors in the event-related brain potential (ERP). Moreover, adults have been reported to show behavioral advantages in frequency discrimination in the context of repeated reference tones, which serve as anchors for the processing of subsequent sounds. However, this immediate benefit of predictive coding for the processing of new information has not yet been tested for infants.
    By employing ERPs, we here evaluated whether the effect of context-based predictions from reference stimuli (i.e., perceptual anchoring) plays a role in language acquisition. We presented 2-month-old infants with tone pairs and 6-month-old infants with syllable pairs across anchor (i.e., constant first stimulus) and no-anchor conditions (i.e., variable first stimulus). This experimental design allowed for comparing responses to identical second stimuli preceded either by constant anchor or by random (no-anchor) first stimuli.
    For 2-month-olds, ERP responses to the second tones revealed a modulation of infants' obligatory ERP components, with more positive-going fronto-central responses in the anchor than the no-anchor condition. This effect resembled the adult P2, which is modulated by selective attention and training, resulting in faster auditory discrimination. Thus, infants process physically identical stimuli differently depending on the given stimulus environment. For 6-month-olds, preliminary ERP responses to the second syllables suggest effects similar to those in the tone experiment. Crucially, when infants were subsequently tested on their syllable recognition, they only showed ERP familiarity responses for syllables previously heard under anchor conditions. Thus, our experiments demonstrate for the first time that infants not only apply predictive coding mechanisms but also show processing benefits from repeated information in their learning environment, indicating that perceptual anchoring is an essential learning mechanism in language acquisition.

    Marina Kalashnikova, Usha Goswami & Denis Burnham (Basque Center on Cognition, Brain and Language, Spain / University of Cambridge, United Kingdom / University of Western Sydney, Australia)

    Infant-directed speech and parent-directed signals in interactions with infants at family risk for dyslexia

    Mothers produce acoustically exaggerated vowels in infant-directed speech (IDS). This IDS component, known as vowel hyperarticulation, is proposed to facilitate early language development by providing infants with especially clear speech input. However, when mothers speak to infants at risk for developmental dyslexia, they do not hyperarticulate vowels (Kalashnikova et al., 2018). This study investigated why vowel hyperarticulation is absent in IDS to at-risk infants, and more specifically whether vowel hyperarticulation in IDS is a product of maternal infant-directed behaviour or of infants' parent-directed cues.
    Fifteen mother-infant dyads were included in this study. In eight dyads, the infant was at family risk for dyslexia by virtue of having a dyslexic parent (infant = IAR, mother = MAR). In seven dyads the infant was not at-risk for any language disorders (infant = INAR, mother = MNAR). An innovative cross-dyad design was used to record speech in three conditions: when mothers interacted with (i) their own infants (MNAR-own-INAR, MAR-own-IAR), (ii) infants who were not their own but in the same risk category (MNAR-other-INAR, MAR-other-IAR) and (iii) infants who were not their own and in the opposite risk category (MNAR-other-IAR, MAR-other-INAR). Mothers’ ADS was also recorded. Mothers were aware of their own infant’s risk status, but they were blind to the other infants’ status and the purpose of the study.
    Productions of the three corner vowels /i, u, a/ were elicited from mothers' speech in all conditions, and first and second formant values were extracted and used to calculate the area of the vowel triangle for each mother in each condition. Hyper-scores were then calculated by dividing each mother's IDS triangle area by her own ADS triangle area, which allows each mother's ADS to act as her own control. A linear mixed-effects model was constructed with vowel hyper-scores as the dependent variable, infant risk group and the three levels of condition as predictors, random slopes for infant group, and random intercepts for infant.
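    For concreteness, the triangle-area and hyper-score computation can be sketched as follows (the shoelace formula over the three corner vowels; the formant values below are illustrative, not data from the study):

        def triangle_area(corners):
            # shoelace area of the vowel triangle spanned by /i/, /u/, /a/,
            # each corner given as an (F1, F2) pair in Hz
            (x1, y1), (x2, y2), (x3, y3) = corners
            return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

        def hyper_score(ids_corners, ads_corners):
            # ratio of a mother's IDS vowel space to her own ADS vowel space;
            # values above 1 indicate vowel hyperarticulation
            return triangle_area(ids_corners) / triangle_area(ads_corners)

        ads = [(300, 2300), (320, 800), (750, 1300)]  # /i/, /u/, /a/ in ADS
        ids = [(280, 2500), (300, 700), (850, 1400)]  # expanded corners in IDS
        print(f"hyper-score = {hyper_score(ids, ads):.2f}")  # > 1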
    Results showed that, overall, INAR heard more exaggerated vowels than IAR (β = .6, SE = .271, p = .04). There was also a significant group by condition interaction. Pairwise comparisons were conducted for each infant group (Figure 1). In the NAR group, there were no differences in vowel hyper-scores between MNAR-own-INAR and MNAR-other-INAR. However, vowel hyper-scores were significantly reduced when INARs were addressed by an MAR rather than an MNAR. In the AR group, vowel hyper-scores were equivalent whether or not the infants were addressed by their own mother. Furthermore, there were no significant differences between IARs being spoken to by an MAR or an MNAR.
    In summary, mothers of typically-developing infants adjusted their IDS to the baby’s risk status, while mothers of at-risk babies did not. These findings show that IDS is determined reciprocally by characteristics of both partners in the dyad: both infant and maternal factors are essential for the vowel hyperarticulation component of IDS.

    Maria Arredondo, Eloise Moss, Riley Bizzotto, Ana Ivkov, Sarah Cheung, Elsa Arteaga, Richard Aslin & Janet Werker (The University of British Columbia, Canada / Haskins Laboratories, USA)

    The effects of bilingualism on attention in the developing brain at 6- and 10-months

    Does early bilingual exposure alter the neural organization of cognitive processes? Bilingualism theories suggest that the need to selectively attend and alternate between two languages fosters an overall improvement of Executive Functions in bilinguals—an improvement likely supported by the brain’s frontal cortex (Bialystok, 2001). While a growing body of research yields inconclusive results on the notion of a bilingual cognitive advantage, greater understanding could be gained by investigating more directly the impact of bilingualism on frontal lobe maturation. Recent work suggests that bilingual children (Arredondo et al., 2017) and adults (Garbin et al., 2010) engage left frontal “language” regions during non-verbal executive function tasks, whereas monolinguals engage right frontal regions. Little is known about the emergence of these changes and whether they are evident before babies begin producing language. Hence, the present study investigates whether these differences are evident during the first year of life.
    Method: Monolingual- and bilingual-learning infants (N=56) took part in a longitudinal study at 6 and 10 months. Using functional near-infrared spectroscopy, we measured activity from 42 channels covering bilateral frontal and temporal brain regions. Infants completed a version of the Infant Orienting with Attention task (IOWA; Ross-Sheehy et al., 2015), in which an asterisk (*) is presented (100 ms) before an image appears on a conflicting (Incongruent, experimental condition) or non-conflicting (Congruent, control condition) side of a visual display. During Congruent trials, the asterisk and image appear on the same side; during Incongruent trials, they appear on opposite sides; see Figure A.
    Results: At 6-months, preliminary results suggest differences in brain activity between monolingual and bilingual infants. During Incongruent trials, bilinguals activate more left fronto-temporal channels than monolinguals; see Figure B. Data collection with 10-month-olds is underway; thus, we expect to be able to present developmental differences and discuss their relation to bilingualism by June of next year.
    Conclusion: These preliminary results support and extend prior work by revealing that bilingual infants also activate left frontal regions to a greater extent than monolinguals for non-linguistic cognitive functions (i.e., selective attention). The presentation will discuss the developing brain's functional organization for non-verbal attentional processes. Specifically, these findings may support the notion that left frontal regions develop to support the ability to process linguistic and non-linguistic conflicting information, and that extensive bilingual experience may strengthen their computational capabilities. Future analyses will also relate these functional differences to task performance and the amount of dual-language experience. Finally, these findings carry powerful implications for understanding how early (dual) language experiences can impact domain-general cognitive processing and neural plasticity.

    Yana Kuchirko & Catherine Tamis-LeMonda (City University of New York / New York University, USA)

    The cultural context of parent-infant language interactions: Variability, specificity, and generalizability

    Parents across the globe vary in how they interact with infants (Keller, 2013). How parents speak to infants, what they say, and in what context convey messages to infants about the meaning and function of language in their communities (Schieffelin & Ochs, 1986). Cultural differences in parental language input to infants are typically reported using averages and are meaningful; they point to patterns of communication that are culturally embedded. However, the researchers' choice to shine a light on cultural differences masks the enormous intra-cultural variability that is characteristic of each community. In this presentation, we will draw from our work with ethnically and racially diverse infants and mothers from low-income communities in the United States. Specifically, we will present on (1) the cultural specificity, variability, and generalizability of parents' language input to infants; and (2) sources of variability in parent-infant interactions.
    We observed African American, Dominican, and Mexican mother-infant dyads from low-income communities in the United States interacting with books and toys in the home when infants were 14 and 24 months old. Videos of mother-infant interactions were coded for 1) mothers' referential language (i.e., mother provides or asks for information about objects or ongoing activities; 'That's blue', 'Eso es azul'; 'What color is this?', 'Qué color es este?'; 'You are stirring'), 2) mothers' regulatory language (i.e., mother directs, prohibits, or corrects infants' actions; e.g., 'Look!', 'Mira!'; 'Put it there', 'Ponlo ahí'); and 3) infant vocalizations.
    Our results suggested cultural specificity of mothers' language to infants. Mexican and Dominican mothers on average used more regulatory language than African American mothers at both infant ages (14 months: F(2,188) = 8.94, p < .001; 24 months: F(2,188) = 9.78, p < .001). There were no differences in referential language by ethnic group. However, mean differences in regulatory language masked large intra-cultural variation. Mothers from all three ethnic groups displayed enormous variability in both referential and regulatory language at both 14 and 24 months, as indicated by Figures 1a-d. The range of mothers' use of regulatory language across ethnic groups, for instance, is almost identical: there were Mexican, Dominican, and African American mothers who did not use regulatory language at all with their infants, and there were mothers from all ethnic groups who used regulatory language about 140 times.
    Despite cultural differences in the frequency of mothers' use of referential and regulatory language, our results suggest striking similarity in how mothers temporally align their language with that of their infants. Using sequential analyses, we examined the probability that mothers would use referential and regulatory language within 3 seconds of an infant vocalization. Our findings show that when infants vocalized, mothers from all three ethnic groups were more likely than chance to use referential language, and less likely than chance to use regulatory language (F(5,725)=89.38, p < .001). In this presentation, we will discuss individual and contextual factors that shape mothers' use of referential and regulatory language to infants, paying especially close attention to the cultural and economic variables that contribute to inter- and intra-cultural variability in parent-infant interactions.
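    The sequential measure can be illustrated with a small sketch (hypothetical event times, not the study's coded data; the actual analysis compared observed probabilities against chance):

        def response_probability(infant_times, mother_times, window=3.0):
            # proportion of infant vocalizations followed by a maternal
            # utterance of a given type within `window` seconds
            hits = sum(any(0 <= m - v <= window for m in mother_times)
                       for v in infant_times)
            return hits / len(infant_times) if infant_times else 0.0

        # Invented event times (seconds) for one dyad.
        infant = [2.0, 10.5, 31.0, 47.2]
        referential = [3.1, 12.0, 48.0, 60.0]
        regulatory = [20.0, 70.0]
        print(response_probability(infant, referential))  # 0.75
        print(response_probability(infant, regulatory))   # 0.0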

    Anna Aumeistere, Sybrine Bultena & Susanne Brouwer (Radboud University, The Netherlands)

    The role of grammatical gender in predictive processing in Russian

    Languages with grammatical gender allow language users to predict upcoming nouns based on preceding gender-marked articles. Previous studies have shown that such morphosyntactic cues aid adult and child native (L1) speakers in online sentence interpretation (e.g. Lew-Williams & Fernald, 2010; Brouwer et al., 2017). The vast majority of this research has looked at the role of gender-marked articles in Romance and West Germanic languages. The novelty of this study is that we focused on gender-marked adjectives in Russian.
    The aim of the current study was to investigate whether Russian-speaking adults and children can use gender-marked adjectives to predict an upcoming noun. Russian has three genders: masculine (masc.), feminine (fem.) and neuter (neut.). In our study we looked only at feminine and masculine gender, because neuter nouns usually describe abstract concepts, which are typically not picturable (e.g. счастье 'happiness' (neut.)). Adjectives have several possible endings depending on the gender of the following noun (e.g. белая машина – 'white car' (fem.), and белый снег – 'white snow' (masc.)).
    Participants were adult L1 speakers of Russian (N = 31) and child L1 speakers of Russian (N = 49) of varying ages (range = 2;0 to 7;0), tested in Latvia. In a looking-while-listening paradigm, participants heard simple questions in Russian (e.g. 'Where is the pretty green chair?', with masculine marking on both adjectives and the noun) and saw two pictures on the screen. One picture was the target (e.g. chair, masc.) and the other the distractor, of either the same (e.g. ball, masc.) or different gender (e.g. book, fem.). All questions had the same structure, with two gender-marked adjectives followed by a noun.
    We hypothesised that participants would look more and faster at the target picture in the different gender than the same gender condition. More specifically, our analyses examined whether they could use gender-marking anticipatorily (i.e. before the onset of the noun) or facilitatively (i.e. after the onset of the noun).
    Mixed-effects logistic regression analyses revealed that adults anticipated the upcoming noun (i.e., they predicted before noun onset; Fig. 1A), whereas children were not able to anticipate the noun before its onset. However, children showed a facilitation effect from noun onset onwards (Fig. 1B). Subsequent analyses of the child data suggested that the facilitation effect increased with age. The results of our study extend existing theoretical knowledge regarding the role of grammatical gender in online sentence processing.
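    To make the two analysis windows concrete, the sketch below computes the proportion of target looks in an anticipatory window (before noun onset) and a facilitative window (after noun onset) from a per-sample gaze record; it is a simplified, invented illustration and leaves out the mixed-effects modelling reported above:

        import numpy as np

        def target_proportion(gaze, onset, window, fs=60):
            # proportion of target looks in a window (s) relative to noun onset;
            # gaze is per-sample: 1 = target, 0 = distractor, nan = looking away
            start = onset + int(window[0] * fs)
            end = onset + int(window[1] * fs)
            return np.nanmean(gaze[start:end])

        # Hypothetical 60 Hz gaze record for one trial; noun onset at sample 120.
        rng = np.random.default_rng(3)
        gaze = rng.choice([0.0, 1.0], size=300)               # chance looking
        gaze[150:] = rng.choice([0.0, 1.0], size=150, p=[0.25, 0.75])
        print("anticipatory:", target_proportion(gaze, 120, (-1.0, 0.0)))
        print("facilitative:", target_proportion(gaze, 120, (0.0, 2.0)))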

    Brianna Mcmillan, Lillian Masek, Sarah Paterson, Andrew Ribner, Clancy Blair & Kathy Hirsh-Pasek (Temple University & NYU-Steinhardt, USA)

    Socioeconomic differences in the connection between early attention, parent-child interactions, and language development

    A child's ability to attend is a powerful driver of language development (Morales et al., 2000; Tomasello & Farrar, 1986). How does early attention exert its influence on language development? The social interactions between a caregiver and child are foundational for children's language development. High-quality parent-child interactions create a communication foundation that predicts language development above and beyond the amount of language children hear (Hirsh-Pasek et al., 2015). In the present study we ask whether early attention predicts vocabulary development indirectly through parent-child interactions. We predict that children who are better at sustaining attention at a young age will demonstrate more fluid interactions and have larger vocabularies, indicating that a child's attentional disposition might support the type of parent-child interactions that then support language outcomes.
    Methods: Participants were recruited as part of a larger longitudinal study of executive function development; the final sample comprised 60 children. Sustained attention at 4.5 months was measured using the Multisensory Attention Assessment Protocol (MAAP). Mother-child interaction quality was assessed during a five-minute book-sharing task at 14.5 months. Videos were coded for the fluency of the interaction: fluency and connectedness were evaluated on a 7-point Likert scale, with 1 indicating that no conversation occurred and 7 indicating that the conversation was fluid and balanced. The MCDI: Words and Gestures was used to measure vocabulary at 14.5 months.
    Results: Mediation regression analyses were conducted to test the hypothesis that early attention is related to the quality of the parent-child interaction and children's vocabulary development. Analyses were run using PROCESS, and attention data were log-transformed. Results indicated that early attention was related to the quality of parent-child interactions, β = 1.62, t(59) = 2.57, p = .01. Interaction quality was related to vocabulary, β = 4.74, t(58) = 2.09, p = .04. There was a non-significant direct relation between early attention and vocabulary, β = 1.18, t(58) = .09, p = .93. Results confirmed the indirect effect of attention on vocabulary development (β = 8.70, CI = -11.78 to -.19). The 95% confidence interval of the indirect effect was obtained with 5,000 bootstrap resamples (Preacher & Hayes, 2008).
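    As a sketch of the bootstrap logic (simulated data; the actual analysis was run in PROCESS with the study's variables), the indirect effect a*b can be estimated and given a percentile confidence interval as follows:

        import numpy as np

        def indirect_effect(x, m, y):
            # path a: x -> m; path b: m -> y controlling for x; indirect = a * b
            a = np.polyfit(x, m, 1)[0]
            X = np.column_stack([np.ones_like(x), m, x])
            b = np.linalg.lstsq(X, y, rcond=None)[0][1]
            return a * b

        def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
            # percentile CI over resampled cases (Preacher & Hayes, 2008)
            rng = np.random.default_rng(seed)
            n = len(x)
            est = [indirect_effect(x[i], m[i], y[i])
                   for i in (rng.integers(0, n, n) for _ in range(n_boot))]
            return np.percentile(est, [2.5, 97.5])

        # Simulated attention (x), interaction quality (m), vocabulary (y).
        rng = np.random.default_rng(1)
        x = rng.standard_normal(60)
        m = 0.5 * x + rng.standard_normal(60)
        y = 0.6 * m + rng.standard_normal(60)
        print("95% bootstrap CI for indirect effect:", bootstrap_ci(x, m, y))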
    Discussion: Our results indicate that early attention is related to language development through the quality of parent-child interactions. Children who show greater sustained attention at 4.5 months have higher-quality parent-child interactions, which in turn are related to larger vocabularies. It is likely that these children are better able to cope with the attentional demands placed on them in a back-and-forth conversation, and this provides them with enriched opportunities for communication and language development. These results highlight the interconnection between developing cognitive systems and the experiential forces that drive language development. In further analyses we will examine whether we observe this same pattern of results in low-income, predominantly Spanish-speaking dyads.

    Melanie Steffi Schreiner, Vivien Radtke & Nivedita Mani (University of Göttingen, Germany)

    Talking to me? Infants’ associations of infant- and adult-directed speech

    Across many cultures, infants and young children are spontaneously addressed in a special way (Ferguson, 1964). This speech register, called infant-directed speech (hereafter, IDS), arises as adults modify the speech they commonly use to communicate with other adults, known as adult-directed speech (hereafter, ADS). While the preference for IDS over ADS and its beneficial impact on early language learning have been reported in numerous studies, it remains unclear whether infants attend to IDS because of its attentional salience or because they already associate this register with speech addressed to them. In the current study, we explore whether infants respond differently to sequences in which the speech register matches the addressee compared to nonmatching presentations.
    Using an eye-tracking task, 6-month-old monolingual infants (n=35) were presented with two female speakers speaking in IDS or ADS towards a person hidden behind a curtain. The addressee was subsequently revealed to either match the speech register used (e.g., IDS and infant) or mismatch it (e.g., IDS and adult). We measured infants' pupil diameter in response to the reveal of the addressee, on the assumption that larger pupil diameters index a violation of expectation.
    In order to assess infants' preference for the two speech registers in terms of internal arousal, we also measured infants' pupil diameter while they listened to the female speakers using IDS and ADS. In line with previous behavioral findings of a preference for IDS over ADS, we expected larger pupils in response to IDS compared to ADS.
    A 2x2 ANOVA with the within-subjects factors condition (match vs. mismatch) and speech register (IDS vs. ADS) revealed a significant main effect of condition, F(1, 34)=5.133, p=0.03. Infants' pupil diameter was larger when the addressee did not match the previously used register than when addressee and register matched. Comparing pupil diameter across the two speech registers revealed that infants showed significantly larger pupil diameters when listening to IDS compared to ADS, t(25)=-4.33, p<0.001.
    The results suggest that infants as young as 6 months of age already associate ADS with adults and IDS with young children. In addition, IDS seems to create greater infant arousal, as indicated by larger pupil sizes in response to IDS compared to ADS. Given that the speaker was not explicitly directing the input to the child being tested but rather towards a person behind a curtain, this may suggest that infants are attracted by IDS even when it is merely overheard speech. Thus, the current findings underline the attentional salience of IDS and, in addition, provide first evidence that infants may attend to IDS because they already associate this register with speech input exclusively addressed towards them.

    Tina Whyte-Ball, Catherine Best, Karen Mulak & Marina Kalashnikova (MARCS Institute, Western University of Sydney, Australia / University of Maryland, USA / Basque Center on Cognition, Brain and Language, Spain)

    Effects of regional accent exposure on bilingual versus monolingual infants’ cross-accent word recognition

    Bilinguals appear to possess greater linguistic flexibility than monolinguals (Cummins, 1978; Graf Estes & Hay, 2015). For instance, whereas monolinguals succeeded on novel word learning only when the stimuli were pronounced in their native language, bilinguals succeeded when they were pronounced in either of their languages, mirroring their input (Mattock et al., 2010). These findings suggest that the bilingual experience provides richer phonetic input, leading to more flexible representations of words and facilitating processing of non-native speech or foreign accents. We know that monolinguals at 19 months recognise familiar words in both a native and an unfamiliar accent, while at 15 months they only recognise words in the native accent (Best et al., 2009). Therefore, we examined whether bilingual experience increases perceptual flexibility by testing whether bilinguals benefit more than monolinguals from a pre-exposure story before being tested on recognition of familiar words in a non-native accent.
    Thirty-two monolingual (Australian English: AusE) and 32 bilingual (AusE+heritage language) 17-month-olds from Sydney, Australia heard a four-minute story in either the native (AusE) or an unfamiliar accent (Jamaican Mesolect English: JaME). This was followed by two listening preference tests (one in each accent) in which children heard blocks of high-frequency toddler words (e.g., “bottle”) and blocks of low-frequency adult words (e.g., “soot”). In the native accent, we would expect a listening preference for the toddler words for both monolinguals and bilinguals. If bilinguals are more flexible in cross-accent word recognition relative to monolinguals, we expect a listening preference for toddler words in JaME by bilinguals but not monolinguals.
    A mixed-effects linear model was fitted to toddlers' looking times, with word familiarity (unfamiliar vs. familiar words), test accent (AusE vs. JaME), and language experience (monolingual vs. bilingual) as fixed effects. Random intercepts were included for participant and test accent order. A main effect of familiarity revealed a preference for familiar over unfamiliar words, F(987.64)=19.917, p=.001. An interaction between language experience and test accent, F(987.90)=4.152, p=.041, revealed that bilinguals preferred the unfamiliar accent. An interaction of language experience, passage accent, and test accent (F(987.89)=12.017, p<0.001) revealed that bilinguals listened longer to words in the unfamiliar JaME accent test overall following exposure to the AusE passage, while monolinguals listened longer to the AusE test words overall following exposure to the AusE passage.
    The results suggest that bilinguals and monolinguals recognise familiar words in both accents; however, the monolinguals showed a preference for words pronounced in AusE after hearing the story in AusE, while the bilinguals showed a preference for JaME. Thus, the monolinguals preferred the familiar accent, whereas the bilinguals preferred novelty. Monolinguals' preference for the familiar accent is consistent with previous findings showing a listening preference for the native accent (Best & Kitamura, 2014). However, the bilinguals in our study showed a preference for the unfamiliar accent after hearing the AusE passage (a preferential switch between accents). This suggests that bilinguals may be more tolerant of variation in the production of words as a result of hearing more variable input.

    Rachael W Cheung, Calum Hartley, Kirsty Dunn, Rebecca Frost & Padraic Monaghan (Lancaster University, United Kingdom)

    Environmental effects on parental teaching and infant word learning

    Word learning occurs in complex, multi-modal environments, and how children determine accurate word-referent pairs within these context-rich environments is uncertain. Monaghan (2017) found that the combination of multiple cues from the environment, including gestures, could guide learning about an intended referent in a canalisation model of word learning. In particular, pointing by care-givers provides valuable information about the intended referent, which is likely to be particularly helpful during the phase of rapid vocabulary acquisition between 18–24 months.
    Infants initiate pointing at objects in the presence of others at 11–12 months (Carpenter et al., 1998), and begin to point with more referential efficacy by 24 months (O’Neill & Topolovec, 2001). Parental gestural cues are prevalent during this time (Iverson, 1999), and the quality of parental gestural cues has been found to boost word learning (Cartmill et al., 2013). However, the usefulness of pointing to disambiguate potential referents in the environment is contingent upon how many potential referents are present in the child’s environment. Are care-givers sensitive to this contingency?
    The present study aimed to examine the effect of environmental manipulations on parental gesture cues during word learning. We hypothesised that parents would initiate more pointing when teaching their child the name of a target object that appeared amongst more compared to fewer object foils, and that infants of parents who offered more gestural and speech cues would show better word learning on test trials. A total of 47 infants aged 18–24 months undertook a word learning task in which the number of foils presented with targets varied across three conditions: a) one target object; b) one target object and one foil; and c) one target and five foils. Parents were instructed to teach their child the novel target word for each trial. Each trial lasted 30 seconds. Infants were then presented with each of the three targets and asked for each target object in turn using the appropriate target label, with the other targets used as foils. Dyads were video-recorded and analysed for gesture and speech cues. A coding manual based on Rowe, Özçalışkan and Goldin-Meadow (2008) was used, in which gesture tokens (sheer number of gestures produced by parent or child with or without speech), gesture types (number of different gestures), speech utterances containing gesture, and gesture without speech were coded.
    Results indicated that parents provided more gestural cues with a higher number of potential referents, with the largest increase, in deictic referent-specific gestures and overall gesture use, observed between the condition with no foils and the condition with five. However, parents offered more speech cues in condition b). Gesture and speech cues did not directly relate to child accuracy during testing, although accuracy was predicted by condition, with children performing best in condition b). These results indicate that the immediate environment influences parental gestural cues during early word learning, and these cues may be crucial in aiding referent identification.

    Chiara Santolin, Jenny R. Saffran & Nuria Sebastian-Galles (University Pompeu Fabra - Center for Brain and Cognition, Spain / University of Wisconsin-Madison, USA)

    Non-linguistic artificial grammar learning in 13-month-old infants: A cross-lab replication study

    When infants begin to acquire the grammatical structures of their native languages, they learn that words are grouped into categories and organized according to hierarchical patterns. They also learn that a given word category predicts the presence of a member of another word category, computing predictive (statistical) dependencies across words to discover linguistic phrase structure. Previous studies showed that 12- to 13-month-old infants track phrase structure in artificial languages (Saffran et al., 2008) and in auditory nonlinguistic input (Santolin & Saffran, under review), suggesting that predictability in the input facilitates learning.
    The current research aimed to replicate Santolin & Saffran (under review) and compare results across studies. Stimuli were strings produced from a set of 5 nonlinguistic sounds (Mac alert sounds) that were clearly discriminable from one another and were intended to correspond to the words of the linguistic grammar used in Saffran et al. (2008). The grammar comprised 8 strings containing predictive (statistical) dependencies: the presence of a given sound predicted the presence of another sound within the same string, and sound strings were embedded into other sound strings, conferring hierarchical organization on the grammar (Fig. 1). We used the Headturn Preference Procedure to assess learning. After familiarization with the grammar, infants were tested with familiar (grammar-matching) strings and novel (grammar-violating) strings. Group A included 13-month-old infants recruited in Madison (WI, USA), and Group B included infants of the same age recruited in Barcelona (Spain). Looking time measures revealed that both groups discriminated between familiar and unfamiliar test strings, thus replicating Santolin & Saffran, in line with findings obtained with linguistic materials (Saffran et al., 2008) and with findings in different age ranges (Saffran, 2001; Saffran, 2002) and species (Abe & Watanabe, 2011; Wilson et al., 2013).
    Interestingly, though, the two groups of infants showed opposite patterns of preference. In Group A, infants listened longer to the novel "ungrammatical" strings (5.49 s vs. 6.42 s; t(26)=2.454, p=.021, d=.47), whereas in Group B, infants listened longer to the familiar "grammatical" strings (8.17 s vs. 6.81 s; t(16)=2.403, p=.027, d=.55; Fig. 2). Although preliminary, further analysis revealed a significant difference across groups (F(1,44)=11.759, p=.001), which seems to be driven by a specific difference in looking times for familiar test strings (t(25)=2.457, p=.021). One possible explanation for this result points to different language experience: Group A comprised infants raised in a monolingual environment, while Group B comprised infants raised in a multilingual one. Whether multilingual experience affects the direction of infants' preference in this task remains to be determined and represents an intriguing question for further research.
    Overall, this research provides replicable evidence of infant learning of phrase structure in linguistic and nonlinguistic input. Predictive dependencies may facilitate learning of phrase structure in nonlinguistic auditory input as well, pointing to predictability as an important constraint on learning.

    Priscilla Fung, Helen Buckler & Elizabeth Johnson (University of Toronto, Canada / University of Nottingham, United Kingdom)

    The effect of linguistically diverse input on vocabulary growth in infancy and toddlerhood

    Over the past decade, research comparing language acquisition in monolingual and bilingual infants and toddlers has become increasingly common (e.g., Costa & Sebastián-Gallés, 2014). We have learned that monolingual children recognize English words faster than bilingual children (e.g., De Groot et al., 2002), and that bilingual children often have smaller vocabularies in each of their languages than monolingual children (but similar total vocabulary size if both languages are considered; e.g., Hoff et al., 2012).
    But does growing up in a multilingual environment affect monolinguals as well as bilinguals? Does exposure to multiple varieties of their native language affect language development in a manner similar to bilingualism? Existing data suggests that children routinely exposed to multiple varieties of their native language (multi-accent children) process speech differently than those exposed to only one variant (mono-accent children; e.g., Floccia et al., 2012; Van der Feest & Johnson, 2016). For example, multi-accent 24-month-olds are slower in their recognition of familiar words spoken in the locally dominant variety of English than their mono-accent peers (Buckler, Oczak-Arsic, Siddiqui, & Johnson, 2017).
    To date, despite growing evidence that multi-accent exposure affects speech processing, no study has directly examined the role of accent exposure in vocabulary development. In the current study, we use the MacArthur-Bates CDI forms (Words and Gestures: 11 to 18 months old; Words and Sentences: 19 to 30 months old; CDI-III: 31 to 34 months old) to compare vocabulary growth in monolingual children exposed to multiple varieties of English (spoken by caregivers with whom they spent at least 40 hours a week) to that of monolingual children exposed to only the locally dominant variety of English (less than six hours a week of exposure to other varieties of English). These children were all exposed to at least 80% English. We are also currently collecting comparable vocabulary data on bilingual children (ranging between 30% and 70% exposure to English; some of whom are exposed to many varieties of English and some of whom are exposed to only the locally dominant variety). Participants were assigned to exposure bins following the collection of detailed language background information in person by researchers during a lab visit.
    Thus far, we have collected 1419 vocabulary questionnaires from 12- to 34-month-old children (mono-accent: N=767; multi-accent: N=506; bilingual: N=146). We aim to collect at least 400 more data points from bilinguals by June 2019. Preliminary results show that mono- and multi-accent infants and toddlers exhibit a similar rate of vocabulary growth. But as children near their third birthday, exposure to multiple accents and/or languages may have a measurable impact on vocabulary size. By collecting additional data (especially in these older age ranges), we will be able to draw a firmer conclusion regarding how exposure to multiple languages and/or accents impacts lexical development. This study highlights the importance of considering children's specific language environments instead of simply drawing a binary distinction between monolingual and bilingual, which could overlook fine-grained variation within these populations.

    Hui Chen, Daniel Tsz Hin Lee, Regine Yee King Lai, Thierry Nazzi & Hintat Cheung (CNRS – Université Paris Descartes, France / University of Hong Kong, Hong Kong)

    Phonological biases in early word learning in Cantonese-learning toddlers

    Consonants and vowels have been considered to carry different functions in language processing, vowels being more important for prosodic and syntactic processes and consonants for lexical processes (Nespor et al., 2003). This C-bias hypothesis in lexical processing has been supported by studies with adults and infants in languages such as English, French, and Spanish, although cross-linguistic variation exists (Nazzi, 2005; Nazzi et al., 2016). As these studies mainly examined non-tonal languages, it is unclear whether the C-bias exists in tonal languages such as Cantonese, which also has more consonants than vowels in its phonological system but, more importantly, has tones, which are acoustically more linked to vowels (Khouw & Ciocca, 2007). It is therefore of interest whether more processing weight is put on vowels or on consonants in tonal languages like Cantonese; such investigations have implications for theoretical discussions of the origins of this phonological bias and for related hypotheses such as the acoustic/phonetic hypothesis, stressing the acoustic differences between consonants and vowels (Floccia et al., 2014), and the lexical hypothesis, stressing the structure of the lexicon (Keidel et al., 2007).
    This study therefore investigates early phonological biases in word learning in 20- and 30-month-old Cantonese-learning toddlers (target sample size: 32 per group/condition). Looking behaviour was recorded with an eyetracker while toddlers watched animated cartoons in Cantonese designed to teach pairs of novel words. Two conditions, Consonant contrast and Vowel contrast, were tested in a between-subject design. After 2 practice trials, toddlers proceeded to 8 experimental trials. In the training phase of each trial, they had to learn two novel word/object associations, with the novel words differing minimally by one phonological feature, i.e., either a consonant (e.g., /tœ6/ vs. /kœ6/) or a vowel (e.g., /khim3/ vs. /khɛm3/). The proportion of looks to the target object before and after the onset of the target word in the test phase was calculated and compared.
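    A minimal sketch of this dependent measure, assuming a hypothetical gaze-sample format (the abstract does not describe the actual preprocessing pipeline):

        # Hypothetical sketch: proportion of looks to the target object before
        # vs. after target-word onset, from (timestamp_ms, aoi) gaze samples,
        # where aoi is "target", "distractor", or None (looking at neither).
        def proportion_target_looks(samples, word_onset_ms):
            def prop_target(window):
                fixated = [aoi for _, aoi in window if aoi is not None]
                if not fixated:
                    return None  # no object fixations in this window
                return sum(aoi == "target" for aoi in fixated) / len(fixated)
            pre = [s for s in samples if s[0] < word_onset_ms]
            post = [s for s in samples if s[0] >= word_onset_ms]
            return prop_target(pre), prop_target(post)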
    A mixed-design ANOVA on the proportion of target looks was conducted with two between-subject factors, 'age' (20-month-olds vs. 30-month-olds) and 'condition' (Consonant vs. Vowel), and one within-subject factor, 'naming' (pre-naming vs. post-naming). Current results from 95 toddlers (20-month-olds, Consonant: 31, Vowel: 27; 30-month-olds, Consonant: 20, Vowel: 17) revealed a significant main effect of 'naming' (F(1,91) = 5.65, p = .02), indicating that the toddlers increased their looks towards the target objects from the pre-naming (49.6%) to the post-naming phase (52.5%) (Figure 1). Although no significant 'age' or 'condition' effects or interactions were found, Figure 1 suggests that the effect is driven by the Vowel condition at 30 months (p = .012, one-tailed).
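    The abstract does not name the software used. As a rough, hypothetical analogue in Python, the same design could be fit as a linear mixed model with statsmodels; all column names below are our own, not the study's:

        import statsmodels.formula.api as smf

        # Long-format data: one row per toddler x naming phase, with columns
        # 'prop_target', 'naming' (pre/post), 'age' (20/30 months),
        # 'condition' (Consonant/Vowel), and a 'subject' identifier.
        model = smf.mixedlm(
            "prop_target ~ naming * age * condition",  # fixed effects + interactions
            data=df,                                   # df: the long-format DataFrame
            groups="subject",                          # random intercept per toddler
        )
        print(model.fit().summary())

    A random-intercept model like this approximates, but is not identical to, the repeated-measures ANOVA reported above.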
    These preliminary findings show that, in general, Cantonese-learning toddlers aged 20 to 30 months are sensitive, in a word-learning task, to phonological contrasts differing by only one feature. Completion of the experiment will clarify whether a Vowel bias is present at 30 months, which would establish that the interaction between phonological acquisition and lexical processing in tonal languages does not follow the pattern reported for non-tonal languages.

    Thilanga Dilum Wewalaarachchi & Leher Singh (National University of Singapore, Singapore)

    Phonological biases in Mandarin learners: Evidence from novel word learning

    A central question in the study of phonological development is the extent to which different sources of phonological variation constrain language processing. Traditionally, it was thought that consonants receive greater priority in lexical processing (e.g. Nespor, Peña & Mehler, 2003). However, it is not clear whether a consonant bias, derived mostly from evidence from European languages, applies to languages that use tones in addition to vowels and consonants to distinguish word meanings. Given that tone language learners represent the linguistic majority (Yip, 2002), empirical validation from this population is necessary to test the robustness of consonant biases across languages. The goal of the present studies was to investigate phonological biases in Mandarin learners.
    In Experiment 1, 3-year-old Mandarin monolinguals (N = 24) were taught novel object-word pairings during a familiarisation phase and were then tested on their recognition of correct pronunciations and mispronunciations of these words involving a vowel, consonant, or tone substitution. Children's visual responses to the target object (the toy labelled during familiarisation) and the distractor object (the toy left unlabelled) were tracked during the test phase. Results revealed that although children were sensitive to all types of mispronunciations, they did not exhibit equal sensitivity to vowel, consonant, and tone variation: children were most sensitive to vowel substitutions, mapping vowel mispronunciations onto the unlabelled toy (Figure 1). Sensitivity was similar for consonants and tones.
    Experiment 2 was designed to push the bounds of this bias by determining whether children would prioritize vowels, consonants, or tones when these sources of variation were pitted against one another. Unlike Experiment 1, in which children were taught one label for an item and tested on their memory of that label, in Experiment 2 children were taught two labels during familiarisation. In addition, children were presented with conflict trials in which the mispronounced label differed from the target object by one source of phonological variation and from the competitor object by another. For instance, children were taught 'Men3' and 'Lin3' and tested with 'Len3'. Here, children have to choose either to preserve consonant information by fixating on the object named 'Lin3' or to preserve vowel information by fixating on the object named 'Men3', allowing us to investigate which cue children chose to dispense with. In Experiment 2, 3-year-old Mandarin monolingual children (N = 18) were presented with correct pronunciations and conflict trials (vowel vs. consonant, vowel vs. tone, and consonant vs. tone). Visual responses to the target object and the competitor object were tracked. Although children recognized correctly pronounced words, they did not selectively prioritize one source of phonological variation over another in conflict trials. These findings suggest that Mandarin-learning children demonstrate a task-selective bias towards vowel information over consonant and tone information; more specifically, children only prioritized vowel information when phonological cues were not in conflict with one another. Results suggest that phonological biases are both language- and context-dependent in young children.
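    A toy sketch of the conflict-trial logic, using an (onset, rime, tone) representation of our own devising (not the authors'):

        # Hypothetical sketch: which cues does a mispronounced test syllable
        # share with each taught label? Syllables are (onset, rime, tone).
        def shared_cues(test, taught):
            names = ("consonant", "vowel", "tone")
            return {n for n, a, b in zip(names, test, taught) if a == b}

        men3, lin3, len3 = ("m", "en", "3"), ("l", "in", "3"), ("l", "en", "3")
        print(shared_cues(len3, men3))  # shares vowel and tone: fixating Men3
                                        # preserves the vowel
        print(shared_cues(len3, lin3))  # shares consonant and tone: fixating
                                        # Lin3 preserves the consonant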

    Katerina Chladkova, Nikola Paillereau, Filip Smolik & Vaclav Jonas Podlipsky (Charles University & Czech Academy of Sciences / Palacky University Olomouc, Czechia)

    Development of vowel quality and quantity throughout the first year

    Before their first birthday, infants form categories for most native-language speech sounds, with vowels most likely acquired earlier than consonants, at about 6 months of age [4, 11]. Within the class of vowels, however, there are various types of contrasts that vary in perceptual saliency, which could affect the order in which they are acquired. While vowels in all languages are contrasted by their spectral properties [5], in some languages (e.g. Japanese, Finnish, Czech) vowel duration also cues phonological categories, such that a short and a long vowel of the same spectral quality represent two different phonemes. Some have proposed that vowel duration is a perceptually more salient cue than spectrum [1], which could cause duration-cued contrasts to be acquired before spectral ones. Since language acquisition begins already in utero and fetuses are able to hear speech sound differences [10, 7, 2], one could also reason that fetuses learn more easily from durational than from spectral information, as the latter undergoes significant attenuation on its way from the outside environment to the fetal ear [3]. A review of previous studies of children acquiring vowel length does not provide consistent evidence on the order of vowel length vs. vowel quality development [6, 8, 9]. We thus tested the hypothesis that, thanks to their greater perceptual saliency in general, and even more so in prenatal development, duration-cued contrasts are acquired before spectral ones.
    We traced the development of vowel length and vowel quality in infants acquiring Czech. Using the central fixation paradigm, 4-, 6-, 8-, and 10-month-olds (n = 16 per age) were habituated to tokens of the nonsense syllable /fɛ/ and subsequently presented with trials in which /fɛ/ alternated with /fɛː/, /fa/, or /fɛ/ again. The difference in looking time to each type of change, relative to the average of the last two habituation trials, was submitted to a linear mixed-effects model per age group, each containing Trial as a fixed effect with two orthogonal contrasts (spectral vs. no change, duration vs. no change). Participant was entered as a random factor, with random slopes for Trial type. A difference between the durational change and no change was detected in all four age groups; a difference between the spectral change and no change was detected in the 6-, 8-, and 10-month-olds. At 6 months the durational change yielded a stronger response (i.e. a larger difference from no change) than the spectral change; no such Trial-type effects were observed in the two oldest groups. In summary, infants between 4 and 10 months perceptually discriminate vowel duration changes, and 4- and 6-month-olds discriminate duration more strongly than they discriminate vowel quality. This indicates that, in a language that employs contrastive vowel length, categories cued by duration might be acquired earlier than those cued by spectral quality. To rule out a potential alternative explanation in terms of the general auditory saliency of duration as such, a follow-up experiment is underway testing the discrimination of durational and spectral differences with non-speech stimuli.
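    The abstract does not name the software used; a minimal Python sketch of such a per-age-group model with statsmodels might look like the following. Column names are hypothetical, and treatment coding with the no-change trial as the reference level is simply one way to obtain the two reported comparisons:

        import statsmodels.formula.api as smf

        # Hypothetical sketch, fit separately per age group. Columns: 'lt_diff'
        # (looking-time difference from the mean of the last two habituation
        # trials), 'trial' ("duration", "spectral", or "no_change"), 'infant'.
        contrast = "C(trial, Treatment(reference='no_change'))"
        model = smf.mixedlm(
            f"lt_diff ~ {contrast}",   # duration vs. no change, spectral vs. no change
            data=age_group_df,         # long-format data for one age group
            groups="infant",           # random intercept per infant
            re_formula=f"~{contrast}", # random slopes for Trial type
        )
        print(model.fit().summary())

    With only 16 infants per age group, a maximal random-effects structure like this may need simplifying in practice to converge.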