Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space
2009

Sample size: 33 · Type: publication · Reading time: 10 minutes · Evidence: high

Author Information

Author(s): Wei Ji Ma, Xiang Zhou, Lars A. Ross, John J. Foxe, Lucas C. Parra

Primary Institution: Baylor College of Medicine

Hypothesis

Visual information improves word recognition in noisy environments, particularly at intermediate levels of auditory noise.

Conclusion

The study concludes that visual information (lip-reading) enhances speech recognition most at intermediate noise levels, rather than at very high or very low noise, and that a Bayesian model of cue integration in a high-dimensional feature space accounts for this pattern.

Supporting Evidence

  • Visual information significantly improves performance in word recognition tasks.
  • Enhancements in performance are maximal at intermediate signal-to-noise ratios.
  • Behavioral data confirm the predictions of the Bayesian model regarding multisensory integration.
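The Bayesian integration idea behind these findings can be illustrated with a toy simulation. The sketch below is not the authors' model: it assumes a made-up vocabulary of random feature vectors, Gaussian auditory and visual noise, and MAP decoding that weights each cue by its precision. Sweeping the auditory noise level while holding visual (lip-reading) noise fixed lets one examine where the audiovisual benefit over audio alone is largest.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WORDS, DIM, TRIALS = 20, 10, 2000          # toy vocabulary in a feature space
words = rng.normal(size=(N_WORDS, DIM))      # one feature vector per word
SIGMA_V = 2.0                                # fixed lip-reading noise
SIGMA_A_LEVELS = [0.5, 1.0, 2.0, 4.0, 8.0]   # auditory noise sweep (low to high)

def percent_correct(sigma_a, use_visual):
    """Fraction of trials where the MAP decoder recovers the spoken word."""
    correct = 0
    for _ in range(TRIALS):
        true = rng.integers(N_WORDS)
        x_a = words[true] + rng.normal(scale=sigma_a, size=DIM)
        # MAP decoding under Gaussian noise: precision-weighted squared distance
        score = ((x_a - words) ** 2).sum(axis=1) / sigma_a ** 2
        if use_visual:
            x_v = words[true] + rng.normal(scale=SIGMA_V, size=DIM)
            score += ((x_v - words) ** 2).sum(axis=1) / SIGMA_V ** 2
        correct += score.argmin() == true
    return correct / TRIALS

for s in SIGMA_A_LEVELS:
    pa = percent_correct(s, use_visual=False)
    pav = percent_correct(s, use_visual=True)
    print(f"sigma_a={s:4.1f}  A-only={pa:.2f}  AV={pav:.2f}  benefit={pav - pa:+.2f}")
```

At high auditory noise the audio-only decoder falls to chance while the audiovisual decoder is capped by lip-reading alone; at low noise both are near ceiling, so the benefit shrinks. Whether the gain peaks at intermediate noise, as the paper reports, depends on the vocabulary size and noise parameters chosen here.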

Takeaway

When it's noisy, watching someone's lips can help you understand what they're saying, especially when the noise is moderate rather than overwhelming or barely present.

Methodology

The study involved two experiments with 33 participants who identified spoken words under varying noise conditions, using auditory-only and auditory-visual stimuli.

Potential Biases

Potential biases may arise from the specific demographic of participants, all of whom were students from a single institution.

Limitations

The study's findings may not generalize to all types of speech or different populations outside the tested group.

Participant Demographics

33 volunteer subjects (14 female), all native speakers of American English with normal or corrected-to-normal vision and normal hearing.

Statistical Information

Key effects were statistically significant at p < 0.001.

Digital Object Identifier (DOI)

10.1371/journal.pone.0004638
