Tuesday, November 17, 2015
Graf Estes, K. & Lew-Williams, C. (2015). Listening Through Voices: Infant Statistical Word Segmentation Across Multiple Speakers. Developmental Psychology, 51(11), 1517-1528.
Statistical language learning refers to learning aspects of language from its statistical patterns. For example, sounds within a word are more likely to occur together than are sounds that cross a word boundary. These patterns might help infants segment words from continuous speech. In this study, Graf Estes and Lew-Williams focused on statistical word segmentation in infants.
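That segmentation cue can be made concrete with a toy computation. The syllables and words below are invented for illustration; they are not the study's actual stimuli.

```python
from collections import Counter

# Hypothetical mini-language: two "words", each made of two syllables.
syllabified = {"bida": ["bi", "da"], "kupa": ["ku", "pa"]}

# A continuous stream: word tokens concatenated with no pauses between them.
stream = []
for word in ["bida", "kupa", "bida", "bida", "kupa", "kupa", "bida", "kupa"]:
    stream.extend(syllabified[word])

# Transitional probability TP(x -> y) = count(x followed by y) / count(x).
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(x, y):
    return pair_counts[(x, y)] / syll_counts[x]

# Within-word transitions are perfectly predictable in this toy stream...
print(tp("bi", "da"))  # 1.0
# ...while transitions that span a word boundary are not.
print(tp("da", "ku"))  # 0.75
```

A learner tracking these probabilities could posit word boundaries wherever the transitional probability dips, which is the statistical cue infants in these studies are thought to exploit.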
In a real environment, infants are exposed to multiple voices, each with a unique speaking style and rate. Graf Estes and Lew-Williams used multiple voices in a monotone speech stream to mimic this natural environment. Infants in this study listened for six minutes to an artificial language produced by eight different voices that changed frequently. Results suggested that infants were able to learn the artificial language whether they were tested with a familiar voice or a novel voice, which further suggests that infants can form generalized representations.
In the second series of experiments, infants listened to two different voices. The researchers suggested that the use of two dominant voices might better mimic an infant's environment (e.g., two parents). Infants, however, failed to display signs of learning. Graf Estes and Lew-Williams suggested two possible explanations: infants may have learned both the words and the part-words, and thus showed no discrimination during the test phase, or the use of two voices may have drawn infants' attention to discriminating the voices, thus interfering with learning the language.
Although the mixed results require further investigation, the findings highlight the efficiency of learning in the context of variability (see also Plante et al., 2014).
Blogger: Hosung (Joel) Kang is a neuroscience student completing his undergraduate thesis project in the Language and Working Memory Lab.
Monday, November 9, 2015
Leung, J. H., & Williams, J. N. (2012). Constraints on Implicit Learning of Grammatical Form-Meaning Connections. Language Learning, 62(2), 634-662.
Implicit learning is learning that takes place without intention or conscious awareness. Humans are able to extract and learn from patterns in the environment, without any realization of this learning. In this study, Leung and Williams focused on the implicit learning of grammatical form-meaning connections – an area of implicit learning research where there is still much to investigate.
A form-meaning connection is made when a meaning is assigned to an unfamiliar word form. In this study, Leung and Williams used an artificial language with four determiner-like words. These determiners appeared in front of a noun and encoded whether the noun was near or far, and whether it was animate or inanimate. Participants were taught all four of the novel determiners along with the near/far rule; however, they were not told about the animate/inanimate rule.
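The design can be sketched as a 2 × 2 mapping from determiner forms to hidden features. The determiner forms below are invented placeholders, not the study's actual novel words.

```python
# Hypothetical determiner system: 2 (distance) x 2 (animacy) = 4 forms.
# Forms are placeholders for illustration only.
DETERMINERS = {
    "gi": ("near", "animate"),
    "ro": ("near", "inanimate"),
    "ul": ("far", "animate"),
    "ne": ("far", "inanimate"),
}

def determiner_for(distance, animacy):
    """Return the determiner whose features match the noun's context."""
    for form, features in DETERMINERS.items():
        if features == (distance, animacy):
            return form
    raise ValueError("no determiner matches")

# A phrase like "gi dog" marks the dog as near (the rule participants
# were told) and animate (the rule they were left to learn implicitly).
print(determiner_for("near", "animate"))  # gi
```

The point of the design is that every correct phrase is consistent with both rules at once, so participants can succeed at the explicit near/far task while, potentially, picking up the animacy dimension without awareness.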
The researchers conducted two experiments in which participants were shown two side-by-side pictures on a computer screen. Participants were asked to click the correct image after hearing the corresponding phrase. Each phrase involved one of the four determiners, followed by the noun for one of the pictures on the screen. After training, participants completed a test phase in which those who had implicitly learned the rule for animate/inanimate markers would show a reaction-time advantage. The results provided evidence of implicit learning of the animate/inanimate rule. In the second experiment, which involved implicit learning of relative size, no learning was observed. Leung and Williams posited that some meanings may be more susceptible to implicit learning than others, based on the characteristics of the language being learned.
Although the results provide some evidence of implicit learning of form-meaning connections, it is clear that this method of learning is slower than explicitly teaching the rule.
Blogger: Alisha Johnson; Alisha is an undergraduate thesis student in the Language and Working Memory Lab.
Wednesday, October 21, 2015
Karasinski, C. (2015). Language ability, executive functioning and behaviour in school-age children. International Journal of Language & Communication Disorders, 50(2), 144–150.
Executive functions are the complex thinking skills that enable us to use self-control, set goals, track our progress while executing those goals, and adjust our strategies if necessary. We use our executive functions whenever we solve problems or break away from our usual routine. The three most commonly studied components of executive function are inhibition, working memory, and task switching. This paper sought to examine the connections between executive function, language, and behaviour, in school-age children.
A total of 42 children (8–11 years) with a range of abilities completed measures of language, nonverbal intelligence, and executive functioning. In the executive function measure, children were required to sort pictures according to a changing rule. Parents also completed questionnaires about each child’s attention, behaviour, and executive function abilities. Data were analyzed first by looking for correlations between measures, and second by testing possible predictors of language, attention, and behaviour ratings.
Results showed a tenuous connection between language ability and executive functioning. Although both executive function measures correlated with language, they did not predict language ability as well as nonverbal intelligence did. Behaviour was best predicted by parents' responses to questions about their child's ability to inhibit responses. This finding is consistent with other research showing a relation between poor inhibition and attention difficulties.
Blogger: Laura Pauls, PhD Candidate
Monday, October 5, 2015
Frank, M. C., Tenenbaum, J. B., & Gibson, E. (2013). Learning and long-term retention of large-scale artificial languages. PLoS ONE, 8(1), e52500. doi:10.1371/journal.pone.0052500
Studying the way that a person learns an artificial language – a made-up language never heard before – is a useful tool for helping researchers understand the cues that matter in natural language learning. A shortcoming of typical artificial language learning studies is that the languages have been quite different from natural languages – usually the number of words in the language is quite small, and each word occurs equally often. To "scale up" the artificial language in the current experiment, the researchers adopted an artificial language with a 1,000-word vocabulary. Word frequency was also manipulated: words occurred as few as 10 times or as many as 8,000 times. Unlike in a typical lab experiment, participants had the artificial language downloaded onto personal iPods so that their 10 hours of exposure could occur throughout their everyday activities, such as during their daily commute or while exercising. Importantly, the only cue for segmenting words from the artificial language was the probability of syllable co-occurrences – syllables that belonged together within a word were more likely to occur together than syllables that spanned a word boundary. This cue exists in natural languages and may help learners identify word units in addition to other cues such as pauses or stress patterns.
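A minimal sketch of how that co-occurrence cue can segment a continuous stream follows. The two-word toy lexicon and the simple below-1.0 boundary threshold are illustrative assumptions; the actual study used a 1,000-word vocabulary and human listeners rather than an algorithm.

```python
from collections import Counter

# Toy lexicon of three-syllable nonsense words (illustrative only).
lexicon = {"tupiro": ["tu", "pi", "ro"], "golabu": ["go", "la", "bu"]}

# A continuous syllable stream: word tokens concatenated without pauses.
sequence = ["tupiro", "golabu", "golabu", "tupiro", "tupiro", "golabu"]
stream = [s for w in sequence for s in lexicon[w]]

# Estimate transitional probabilities from the stream itself.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Posit a word boundary wherever the transitional probability dips
# below the within-word level (exactly 1.0 in this toy stream).
words, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < 1.0:  # low TP -> likely word boundary
        words.append("".join(current))
        current = []
    current.append(b)
words.append("".join(current))

print(words)  # ['tupiro', 'golabu', 'golabu', 'tupiro', 'tupiro', 'golabu']
```

In a realistic vocabulary the within-word probabilities are not all 1.0, so boundaries are detected as relative dips rather than against a fixed threshold, but the principle is the same.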
Following 10 hours of listening to the large-scale artificial language, participants were tested on their ability to identify words from the language either immediately afterwards, 1-2 months later, or 3 years later. Participants were able to identify words from the language immediately after listening for 10 hours, and scored just as well 1-2 months later, with higher scores for high-frequency than for low-frequency words. After 3 years, participants were still able to identify high-frequency words from the artificial language. Although they did not show retention of low-frequency words, this is an impressive finding given that the words of the artificial language were meaningless nonsense words. This study demonstrated that language learners can successfully segment words from an artificial language with a large vocabulary, and that retention of newly learned words depends on word frequency. These processes might support the learning of second languages: for example, you might remember words from a second language you studied in the past, especially the ones you heard most often. The results also suggest that listening to a new language for several hours might help you learn something about its words.
Blogger: Nicolette Noonan, PhD student with Drs. Lisa Archibald and Marc Joanisse, and coordinator of the Canadian SLP blog.