Modeling gestural alignment in spoken simultaneous interpreting: The role of gesture types
Inés Olza
Abstract
This article explores gestural alignment in spoken simultaneous interpreting, analyzing whether and how the interpreters under scrutiny align with the gestural behavior of a visible speaker-source, and which of the speaker-source's gesture types more often prompt a gesturally aligned response from the interpreters. The paper offers a mixed-methods analysis of a set of multimodal data collected under (quasi-)experimental conditions in a real court interpreting setting during spoken training exercises performed by two novice interpreters. This study builds on the findings of a previous exploratory approach to the same dataset (Olza, 2024), in which different degrees of gestural alignment were identified and defined. Here, the variable gesture type is used to systematically examine a new sub-sample of the same data and to compare the performance of the two novice interpreters. Results show that iconic gestures elicit higher degrees of alignment from both interpreters. The findings are inconclusive, however, as to how the (non-)representational nature of the speaker-source's gestures, or their (non-)semantic value, relates to the degree to which the two interpreters replicate those gestures. Future research will rely on broader datasets obtained from more experienced interpreters engaged in tasks that more accurately reflect their actual practice.
Keywords
Gesture, alignment, spoken simultaneous interpreting, multimodal data, gesture types