Event Segmentation in the Audio Description of Films
A Case Study
DOI: https://doi.org/10.47476/jat.v6i1.2023.245

Keywords: audio description, event segmentation, event cognition, film, language production, event semantics

Abstract
To make the content of films available to a visually impaired audience, a sighted translator can provide audio description (AD), a verbal description of visual events. To achieve this goal, the audio describer needs to select what to describe, when to describe it, and how to describe it, and then to express that information aurally. The efficacy of this communication depends critically on basic cognitive processes: how the sighted audio describer perceives and segments the film’s unfolding chain of events, and how visually impaired end users conceive the structure, content, and segmentation of those events in relation to the produced AD. There is, however, virtually no research on this interplay in relation to AD. In this study, we scrutinize live AD of a film by two trained audio describers and examine how events are structured, segmented, and construed in their AD. Results demonstrate that the event segmentation experienced in the film is indeed a fundamental part of how AD is structured and construed. AD at event boundaries proved highly sensitive to differing spatiotemporal circumstances, and this sensitivity depended on the semantic resources available for expressing AD.
Lay summary
In our everyday lives, we naturally organize and remember our experiences as meaningful sequences of events. The same happens when we watch movies: we see them as a series of events unfolding across different places and times, and with changing character dynamics. Now, think about making this cinematic experience accessible to people with visual impairments. This is where a sighted translator, called an audio describer, becomes really important.
To make movies understandable for a visually impaired audience, a sighted audio describer describes crucial visual events in the film. Their goal is to create a "narrative equivalence" between the original film and the audio-described version. Thus, the main challenges for audio describers are deciding what information to describe, when to do it, and how to articulate it effectively.
In the current case study, we looked at two audio descriptions for the Swedish film "Skumtimmen" (English title: "Echoes from the Dead"). These descriptions were created by trained audio describers. We wanted to see how these narrators organized, broke down, and verbalized the course of events in their audio descriptions. To achieve this, we focused on two important parts of storytelling – changes in time and location – and paid special attention to how often and in what way audio describers verbalized these event boundaries.
We found that both describers indeed verbalized most event boundaries, showing that these boundaries are a central part of how audio descriptions are put together. We also learned that the descriptions at these event boundaries were very sensitive to when and where key events unfolded in the story. The way these boundaries were described drew on fundamental cognitive and linguistic resources for talking about space and time.
Our findings help us understand how breaking a film down into events is a crucial part of shaping audio descriptions. Our research also highlights how important it is to study how events are segmented and verbalized, offering useful insights for improving audio descriptions and training future audio describers. The ultimate goal is to make the experience better for visually impaired viewers, contributing to a more inclusive and enjoyable audiovisual world.