Please use this identifier to cite or link to this item: https://hdl.handle.net/10495/37640
Title: Automatic Pronunciation Assessment of Non-native English based on Phonological Analysis
Authors: Escobar Grisales, Daniel
Ríos Urrego, Cristian David
Moreno Acevedo, Santiago Andrés
Pérez Toro, Paula Andrea
Noth, Elmar
Orozco Arroyave, Juan Rafael
Subjects: Speech
Speech acts (linguistics)
English language - Pronunciation
English language
Phonetics
Conference: Text, Speech, and Dialogue: International Conference, TSD 2023 (26th: September 4-7, 2023, Faculty of Applied Sciences, University of West Bohemia, Pilsen, Czech Republic)
Publication date: 5-Sep-2023
Abstract: The rapid development of speech recognition systems has motivated the community to work on accent classification, considerably improving the performance of these systems. However, only a few works or tools have focused on evaluating and analyzing in depth not only the accent but also the pronunciation level of a person learning a non-native language. Our study aims to evaluate the pronunciation skills of non-native English speakers whose first language is Arabic, Chinese, Spanish, or French. We train a system to compute posterior probabilities of phonological classes from native English speakers and then evaluate whether it is possible to discriminate native from non-native English speakers. Posteriors of each phonological class are considered both separately and in combination. Phonemes with low posteriors are used to give the speaker feedback about which phonemes should be improved. The results suggest that it is possible to distinguish each of the non-native languages from native English with accuracies between 67.6% and 80.6%. According to our observations, the most discriminant phonological classes are alveolar, lateral, velar, and front. Finally, the paper introduces a graphical way to interpret the results phoneme by phoneme, such that the speaker receives feedback about his/her pronunciation performance.
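The abstract describes using per-class phonological posteriors as features to discriminate native from non-native speakers. A minimal sketch of that idea is shown below; the phonological-posterior extractor itself is assumed to exist (random data stands in for it here), and the classifier choice and class names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features: for each of 100 speakers, the mean posterior
# probability of 8 phonological classes (e.g. alveolar, lateral, velar,
# front, ...) averaged over that speaker's utterances. In the paper these
# would come from a model trained on native English speech.
X = rng.random((100, 8))
y = rng.integers(0, 2, 100)  # 1 = native English, 0 = non-native (placeholder labels)

# Native vs. non-native discrimination with cross-validation; the SVM is
# an assumed stand-in, not necessarily the paper's classifier.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```

With real posterior features, the same setup can be run per phonological class to find the most discriminant ones, and low-posterior phonemes can be flagged back to the learner as pronunciation feedback.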
Appears in collections: Conference papers in Engineering

Files in this item:
File                                          Description        Size      Format
EscobarDaniel_2023_Pronunciation.pdf          Conference paper   609.4 kB  Adobe PDF
EscobarDaniel_2023_Pronunciation_Poster.pdf   Poster             1.16 MB   Adobe PDF


This item is licensed under a Creative Commons License.