Speaking to babies in ‘sing-song’ speech – such as nursery rhymes – is the best way for them to learn how to talk, according to a new study.
Linguists have long considered phonetics – the smallest sound elements of speech, typically represented by the alphabet – to be the foundation of language.
Babies were thought to learn these small sound elements and add them together to make words.
However, new research by the University of Cambridge and Trinity College Dublin has found that babies do not process phonetic information until they are around seven months old, and still struggle to do so at around 11 months old – when they tend to say their first words.
This means, the study said, that babies instead learn language in their early months from rhythmic speech information – with the moments of emphasis in a song or rhyme helping them work out where an individual word starts and ends.
Professor Usha Goswami, a neuroscientist at the University of Cambridge, said: “Our research shows that the individual sounds of speech are not processed reliably until around seven months old, even though most infants can recognize familiar words like ‘bottle’ by this point.
“We believe therefore that speech rhythm information is the hidden glue underpinning the development of a well-functioning language system.
“Infants can use rhythmic information like a scaffold or skeleton to add phonetic information on to.
“For example, they might learn that the rhythm pattern of English words is typically strong-weak, as in ‘daddy’ or ‘mummy’, with the stress on the first syllable.
“They can use this rhythm pattern to guess where one word ends and another begins when listening to natural speech.
“So parents should talk and sing to their babies as much as possible or use infant-directed speech like nursery rhymes because it will make a difference to language outcome.”
Researchers discovered this way of learning by recording the patterns of electrical brain activity in 50 babies at four, seven, and 11 months old while they watched a video of a primary school teacher singing 18 nursery rhymes.
They then fed the brainwaves through a special algorithm, which produced a ‘read out’ of the information that was being encoded in the babies’ brains.
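For readers curious what such a ‘read out’ might involve, below is a minimal, hypothetical sketch in Python of a linear decoding analysis of the broad kind used in continuous-speech brain studies: time-lagged brain signals are regressed onto a speech feature (here a simulated amplitude envelope), and decoding accuracy is measured as the correlation between predicted and actual signals on held-out data. The simulated data, sampling rate, lags and ridge model are all illustrative assumptions, not the study’s actual pipeline.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

fs = 64                     # sampling rate in Hz (assumed for illustration)
n_sec, n_chan = 120, 32     # two minutes of simulated data, 32 EEG channels
n_samp = fs * n_sec

# Toy data: a smoothed "speech envelope" and EEG channels that weakly encode it.
envelope = np.convolve(np.abs(rng.normal(size=n_samp)), np.ones(8) / 8, mode="same")
mixing = rng.normal(size=n_chan)
eeg = np.outer(envelope, mixing) + rng.normal(scale=2.0, size=(n_samp, n_chan))

def lagged_design(x, max_lag):
    # Stack time-lagged copies of every channel (lags 0..max_lag samples).
    cols = [np.roll(x, lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(cols, axis=1)
    X[:max_lag] = 0.0       # discard samples that wrapped around
    return X

max_lag = int(0.25 * fs)    # consider brain responses up to 250 ms after the sound
X = lagged_design(eeg, max_lag)

# Fit on the first half of the recording, test on the second half.
half = n_samp // 2
model = Ridge(alpha=1.0).fit(X[:half], envelope[:half])
pred = model.predict(X[half:])

# The "read out": how well the predicted envelope tracks the real one.
r = np.corrcoef(pred, envelope[half:])[0, 1]
print(f"decoding correlation on held-out data: r = {r:.2f}")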
Professor Giovanni Di Liberto, a cognitive and computer scientist at Trinity College Dublin and a researcher at the ADAPT Centre, commented: “This is the first evidence we have of how brain activity related to phonetic information changes over time in response to continuous speech.”
Previous studies have instead relied on comparing responses to nonsense syllables, such as “bif” and “bof”.
The current study, published in the journal Nature Communications, forms part of the BabyRhythm project led by Professor Goswami, which is investigating how language is learned and how this is related to dyslexia and developmental language disorder.
Goswami said there is a long history of trying to explain dyslexia and developmental language disorder in terms of phonetic problems, but that the evidence “doesn’t add up”.
A sister study, also part of the BabyRhythm project, has shown that babies process rhythmic speech information at two months old – with individual differences in this processing predicting later language outcomes.
The research was funded by the European Research Council under the European Union’s Horizon 2020 research and innovation program and by Science Foundation Ireland.
Produced in association with SWNS Talker