Two same-different discrimination tasks were conducted to test whether native Mandarin and English speakers use visual cues to facilitate Mandarin lexical tone perception. In the experiments, stimuli were presented in three modes — audio-only (AO), audio-video (AV) and video-only (VO) — under a clear condition and two signal-to-noise ratio (SNR) noise conditions, -6 dB and -9 dB. If perception in the AV mode is better than in the AO mode, the additional visual information about lexical tones contributes to tone perception. In Experiments 1 and 2, Mandarin speakers showed no visual augmentation under either clear or noise conditions. For English speakers, by contrast, the additional visual information hindered tone perception (visual reduction) at SNR -9 dB, suggesting that English speakers rely more heavily on auditory information to perceive lexical tones. Tone-pair analyses in both experiments revealed visual reduction for the pair T2-T3 and visual augmentation for the pair T3-T4, indicating that acoustic tone features (e.g. duration, contour) can be seen and are involved in audiovisual perception. Whether visual cues facilitate or inhibit tone perception depends on whether the presented visual features of the tone pairs are distinctively recognisable or highly confusable with each other.
|Title||The 1st Joint Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing|
|Status||Published - 2015|
|Event||FAAVSP - The 1st Joint Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing|
- Vienna, Austria
Duration: 11 Sep 2015 → 13 Sep 2015