Quality assessment tools for studio and AI-generated dubs and voice-overs
Giselle Spiteri Miggiani
Abstract
This paper proposes a quality assessment model for dubs and voice-overs, applicable to both studio recordings and AI-generated output. Drawing on an earlier quality assessment proposal focused on script adaptation (Spiteri Miggiani, 2022a), it introduces a broader model with an additional rubric for assessing the overall quality of dubbed and voice-over output. The quality rating of the end product is determined by assigning individual scores to a comprehensive set of quality indicators grouped into two main components: speech and sound. The dubbing script, in contrast, is evaluated with the previously developed textual parameters rubric, which adopts a granular, error-based approach and incorporates a formula to calculate a percentage score. The revised quality assessment model thus enables a comprehensive, macro-level evaluation of a dubbed product from the viewer’s perspective, while the textual parameters tool supports a more detailed, micro-level examination from the perspective of linguists and adapters. These tools have broad applications and account for recent AI advancements in dubbing and media localization. The model is intended for dubbing practitioners, trainers, evaluators, recruiters, dubbing managers, quality control specialists, and software developers interested in creating dubbing-related tools or enhancing localization management platforms with quality control features.
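For developers interested in implementing such a rubric, the Python sketch below illustrates how a granular, error-based percentage score of this general kind could be computed. It is a hypothetical illustration only: the severity weights, parameter labels, and deduction scheme are assumptions for the sake of example, not the formula proposed in the paper or in Spiteri Miggiani (2022a).

```python
# Hypothetical illustration only: the paper's actual scoring formula is not
# reproduced here. This sketch assumes a weighted error-penalty scheme in
# which each textual-parameter error deducts points from a maximum,
# yielding a percentage score for a dubbing script.

from dataclasses import dataclass

# Assumed severity weights (illustrative, not from the source).
SEVERITY_WEIGHTS = {"minor": 1.0, "major": 3.0, "critical": 5.0}

@dataclass
class Error:
    parameter: str  # e.g. "lip-sync", "register" (illustrative labels)
    severity: str   # "minor", "major", or "critical"

def percentage_score(errors: list[Error], max_points: float = 100.0) -> float:
    """Deduct weighted penalties from max_points; return a 0-100 score."""
    penalty = sum(SEVERITY_WEIGHTS[e.severity] for e in errors)
    return max(0.0, max_points - penalty) / max_points * 100.0

# Example: a script with two minor errors and one major error scores 95.0%.
errors = [Error("lip-sync", "minor"),
          Error("register", "minor"),
          Error("naturalness", "major")]
print(f"Script quality score: {percentage_score(errors):.1f}%")
```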
Keywords
Quality assessment, quality control, dubbing, voice-over, script, speech-and-sound, speech-and-sound post-editing, studio dubs, AI-dubs
36(2) - 2024