Face models based on a guided PCA of motion-capture data: speaker dependent variability in /s/-/ʃ/ contrast production
DOI: https://doi.org/10.21248/zaspil.40.2005.260
Abstract
We measure face deformations during speech production using a motion-capture system that provides 3D coordinate data for about 60 markers glued to the speaker's face. An arbitrary orthogonal factor analysis followed by a principal component analysis (together called a guided PCA) of the data showed that the first 6 factors explain about 90% of the variance for each of our 3 speakers. The 6 derived factors therefore allow us to analyze the observed face deformations efficiently and to reconstruct them with reasonable accuracy. Since these factors can be interpreted in articulatory terms, they can reveal underlying articulatory organizations. A comparison of lip gestures in terms of the data-derived factors suggests that these speakers maneuver the lips differently to achieve the contrast between /s/ and /ʃ/. Such inter-speaker variability can occur because the acoustic contrast between these fricatives is shaped not only by the lip tube but also by cavities inside the mouth, such as the sublingual cavity. In other words, the lip tube and these cavities can acoustically compensate for each other to produce the required acoustic properties.
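To make the two-stage decomposition concrete, here is a minimal sketch of a guided PCA in the spirit described above, written in Python with NumPy on synthetic stand-in data: a chosen articulatory predictor (here, an arbitrary column standing in for jaw opening) is regressed out first, and an ordinary PCA is then applied to the residual. All names, shapes, and the choice of guiding predictor are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Shapes are illustrative: T frames, 60 markers x 3 coordinates = 180 dims.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 180))   # stand-in for motion-capture data
X = X - X.mean(axis=0)                # center each coordinate

# "Guided" step: regress out a chosen articulatory predictor first.
# Here an arbitrary column of X stands in for, e.g., jaw opening.
jaw = X[:, [0]]                                     # (T, 1) guiding predictor
loading, *_ = np.linalg.lstsq(jaw, X, rcond=None)   # least-squares fit
guided_part = jaw @ loading
X_resid = X - guided_part             # what the guided factor cannot explain

# Ordinary PCA on the residual via SVD.
U, s, Vt = np.linalg.svd(X_resid, full_matrices=False)

# Because the least-squares residual is orthogonal to the guided fit, the
# total variance splits cleanly into the guided factor's share plus the
# shares of the successive PCA components.
total_var = np.sum(X**2)
shares = np.concatenate(([np.sum(guided_part**2)], s**2)) / total_var
print(np.cumsum(shares)[:6])          # cf. ~90% from 6 factors in the paper
```

On real marker data, the printed cumulative shares would show how much of the facial motion the guided factor plus the first few residual components capture; retaining those factors and their loadings is what permits the approximate reconstruction mentioned in the abstract.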
Published
2005
Suggested citation
Maeda, Shinji. 2005. "Face Models Based on a Guided PCA of Motion-Capture Data: Speaker Dependent Variability in /s/-/ʃ/ Contrast Production". ZAS Papers in Linguistics 40 (January): 95-108. https://doi.org/10.21248/zaspil.40.2005.260.
Section
Article