Please use this identifier to cite or link to this item: https://hdl.handle.net/10495/37640
Full metadata record
DC Field | Value | Language
dc.contributor.author | Escobar Grisales, Daniel | -
dc.contributor.author | Ríos Urrego, Cristian David | -
dc.contributor.author | Moreno Acevedo, Santiago Andrés | -
dc.contributor.author | Pérez Toro, Paula Andrea | -
dc.contributor.author | Noth, Elmar | -
dc.contributor.author | Orozco Arroyave, Juan Rafael | -
dc.contributor.conferencename | Text, Speech, and Dialogue: International Conference, TSD 2023 (26 : September 4-7, 2023, Faculty of Applied Sciences, University of West Bohemia, Pilsen, Czech Republic) | spa
dc.date.accessioned | 2023-12-18T15:22:15Z | -
dc.date.available | 2023-12-18T15:22:15Z | -
dc.date.issued | 2023-09-05 | -
dc.identifier.uri | https://hdl.handle.net/10495/37640 | -
dc.description.abstract | ABSTRACT: The rapid development of speech recognition systems has motivated the community to work on accent classification, considerably improving the performance of these systems. However, only a few works or tools have focused on evaluating and analyzing in depth not only the accent but also the pronunciation level of a person learning a non-native language. Our study aims to evaluate the pronunciation skills of non-native English speakers whose first language is Arabic, Chinese, Spanish, or French. We trained a system to compute posterior probabilities of phonological classes from native English speakers and then evaluated whether it is possible to discriminate between native and non-native English speakers. Posteriors of each phonological class are considered separately and also in combination. Phonemes with low posteriors are used to give the speaker feedback about which phonemes should be improved. The results suggest that it is possible to distinguish each of the non-native languages from native English with accuracies between 67.6% and 80.6%. According to our observations, the most discriminant phonological classes are alveolar, lateral, velar, and front. Finally, the paper introduces a graphical way to interpret the results phoneme by phoneme, so that the speaker receives feedback about his/her pronunciation performance. | spa
dc.format.extent | 10 pages | spa
dc.format.mimetype | application/pdf | spa
dc.language.iso | eng | spa
dc.type.hasversion | info:eu-repo/semantics/draft | spa
dc.rights | info:eu-repo/semantics/openAccess | spa
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/2.5/co/ | *
dc.title | Automatic Pronunciation Assessment of Non-native English based on Phonological Analysis | spa
dc.type | info:eu-repo/semantics/conferenceObject | spa
dc.publisher.group | Grupo de Investigación en Telecomunicaciones Aplicadas (GITA) | spa
oaire.version | http://purl.org/coar/version/c_b1a7d7d4d402bcce | spa
dc.rights.accessrights | http://purl.org/coar/access_right/c_abf2 | spa
oaire.citationtitle | Text, Speech, and Dialogue: 26th International Conference, TSD 2023 | spa
oaire.citationconferenceplace | Faculty of Applied Sciences, University of West Bohemia, Pilsen, Czech Republic | spa
oaire.citationconferencedate | 2023-09-04/2023-09-07 | spa
dc.rights.creativecommons | https://creativecommons.org/licenses/by-nc-sa/4.0/ | spa
oaire.fundername | Universidad de Antioquia. Vicerrectoría de Investigación. Comité para el Desarrollo de la Investigación - CODI | spa
dc.publisher.place | Pilsen, Czech Republic | spa
dc.type.coar | http://purl.org/coar/resource_type/c_5794 | spa
dc.type.redcol | https://purl.org/redcol/resource_type/EC | spa
dc.type.local | Conference paper | spa
dc.subject.decs | Habla | -
dc.subject.decs | Speech | -
dc.subject.lemb | Inglés - Pronunciación | -
dc.subject.lemb | English language - Pronunciation | -
dc.subject.lemb | Actos del habla | -
dc.subject.lemb | Speech acts (linguistics) | -
dc.subject.lemb | Inglés | -
dc.subject.lemb | English language | -
dc.subject.lemb | Fonética | -
dc.subject.lemb | Phonetics | -
oaire.awardtitle | PRG2017-15530 Analysis of architectures based on deep learning methods to evaluate and recognize traits in speech signals. | spa
dc.description.researchgroupid | COL0044448 | spa
dc.description.researchcost | $99.519.000 | spa
oaire.awardnumber | ES92210001 | spa
oaire.awardnumber | PI2023-58010 | spa
oaire.awardnumber | PRG2017-15530 | spa
oaire.funderidentifier.ror | RoR:03bp5hc83 | -
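
As a rough illustration of the feedback mechanism described in the abstract above, the Python sketch below averages frame-level phonological-class posteriors over the frames of each phoneme and flags phonemes whose strongest class posterior stays below a threshold. The synthetic data, the class list, the threshold value, and all names are illustrative assumptions for this sketch, not the authors' actual pipeline.

# Minimal sketch, assuming frame-level phonological-class posteriors
# from a model trained on native English speech, plus a frame-level
# phoneme alignment. All data and names here are hypothetical.
import numpy as np

PHON_CLASSES = ["alveolar", "lateral", "velar", "front"]

rng = np.random.default_rng(0)
# (n_frames, n_classes) posterior matrix for one utterance (synthetic).
posteriors = rng.uniform(0.0, 1.0, size=(200, len(PHON_CLASSES)))
# Phoneme label for each frame of the same utterance (synthetic).
phoneme_per_frame = rng.choice(["t", "l", "k", "iy"], size=200)

THRESHOLD = 0.5  # illustrative cutoff for "needs improvement"

def phoneme_feedback(posteriors, phonemes, threshold):
    """Average posteriors over the frames of each phoneme and flag
    phonemes whose best-matching phonological class is still weak."""
    report = {}
    for ph in np.unique(phonemes):
        mean_post = posteriors[phonemes == ph].mean(axis=0)
        report[ph] = {
            "mean_posteriors": dict(zip(PHON_CLASSES, mean_post.round(2))),
            "needs_improvement": bool(mean_post.max() < threshold),
        }
    return report

for ph, info in phoneme_feedback(posteriors, phoneme_per_frame, THRESHOLD).items():
    flag = "improve" if info["needs_improvement"] else "ok"
    print(f"/{ph}/: {flag}  {info['mean_posteriors']}")

In the paper's setting, the per-class mean posteriors would also serve as features to discriminate native from non-native speakers, and the flagged phonemes would drive the phoneme-by-phoneme graphical feedback.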
Appears in collections: Conference Papers in Engineering

Files in this item:
File | Description | Size | Format
EscobarDaniel_2023_Pronunciation.pdf | Conference paper | 609.4 kB | Adobe PDF
EscobarDaniel_2023_Pronunciation_Poster.pdf | Poster | 1.16 MB | Adobe PDF


This item is licensed under a Creative Commons License.