Cross-Modal Prediction in Speech Perception
2011

Sample size: 34 | Evidence: moderate

Author Information

Author(s): Carolina Sánchez-García, Agnès Alsius, James T. Enns, Salvador Soto-Faraco

Primary Institution: Universitat Pompeu Fabra, Barcelona, Spain

Hypothesis

Does prior contextual speech information presented in one modality (visual or auditory) generate predictions that facilitate processing of subsequent speech input in the other modality?

Conclusion

Visual speech information can enhance the processing of subsequent auditory input through predictive mechanisms.

Supporting Evidence

  • Participants responded faster when the leading visual context and the subsequent auditory target formed a continuous sentence than when they did not.
  • Visual context provided a significant advantage for processing auditory input.
  • High accuracy rates were observed in both visual and auditory versions of the experiment.

Takeaway

When we see someone talking, it helps us understand what they are saying, even if we can't hear them well. This study shows that seeing lips move can help us guess what sounds to expect.

Methodology

Participants were first presented with a leading sentence fragment in a single modality (visual-only silent speech or auditory-only speech) and then made speeded judgments about whether a subsequent audiovisual target matched that context.
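
To make the trial logic concrete, here is a minimal sketch in Python. Everything in it (the timing values, the size of the simulated facilitation effect, the trial counts) is an illustrative assumption about a generic version of this paradigm, not the authors' actual experimental code.

    import random
    import statistics

    def run_trial(context_modality, congruent):
        """Simulate one speeded audiovisual matching trial.

        context_modality: 'visual' or 'auditory' leading sentence fragment.
        congruent: True if the target continues the leading context.
        Returns a simulated reaction time in ms (illustrative model only).
        """
        rt = random.gauss(650, 80)   # hypothetical baseline RT distribution
        if congruent:
            rt -= 60                 # assumed facilitation from a matching context
        return max(rt, 200)          # floor at a plausible minimum RT

    def run_block(context_modality, n_trials=50):
        """Collect RTs for congruent and incongruent trials in one modality."""
        congruent_rts = [run_trial(context_modality, True) for _ in range(n_trials)]
        incongruent_rts = [run_trial(context_modality, False) for _ in range(n_trials)]
        return congruent_rts, incongruent_rts

    random.seed(1)
    for modality in ("visual", "auditory"):
        cong, incong = run_block(modality)
        print(f"{modality} context: congruent mean RT {statistics.mean(cong):.0f} ms, "
              f"incongruent mean RT {statistics.mean(incong):.0f} ms")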

Potential Biases

Potential bias due to participants' familiarity with the task and stimuli.

Limitations

Participants were not trained lip-readers, which may limit the generalizability of the findings.

Participant Demographics

34 native Spanish speakers (10 males, mean age 23.4 years).

Statistical Information

P-Value

p < 0.05

Statistical Significance

Yes; effects were reported as significant at the p < 0.05 level.
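
For readers unfamiliar with how a reaction-time difference like this is commonly tested at the p < 0.05 level, the sketch below runs a paired t-test on hypothetical per-participant mean RTs. The data values and the choice of scipy.stats.ttest_rel are demonstration assumptions, not the paper's reported analysis.

    from scipy import stats

    # Hypothetical per-participant mean RTs in ms; not the study's data.
    congruent_rts = [612, 598, 640, 575, 630, 605, 588, 620]
    incongruent_rts = [668, 655, 690, 640, 672, 661, 649, 685]

    # Paired test: each participant contributes one mean RT per condition.
    result = stats.ttest_rel(congruent_rts, incongruent_rts)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
    print("significant at p < 0.05" if result.pvalue < 0.05 else "not significant")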

Digital Object Identifier (DOI)

10.1371/journal.pone.0025198
