Saturday, August 26, 2017

Transcribing for Social Research. Alexa Hepburn and Galina B. Bolden. SAGE Publications. 2017.




Seasoned researchers expect to spend at least three or four hours transcribing for every hour of audio. Everyone has tips for making this process more time-efficient, yet few offer any guidance on what a transcript should look like or how to move beyond the confines of standard spelling and grammar, which are rarely adhered to in speech. In this context, the unique selling point of Transcribing for Social Research is that it does not try to save time: instead, it aims to enable the reader to create the most analytically useful transcripts from rich social data (spoken interactions with words, gestures and sounds).
For those students and academics who regard transcription as a tedious but necessary step before analysis is possible, Alexa Hepburn and Galina B. Bolden put forward a strong argument that sensitive transcription is an integral aspect of analysis itself. There is, after all, a real difference between an arduously typed yet incomplete verbatim account and an accurate transcript of social interaction.
Transcription is not an exercise in merely typing word after word of what people have said. Nor is it about standard orthography. Think about a conversation you’ve had with a stressed colleague or anxious friend where they did not explicitly tell you how they were feeling, but you derived it from their manner of speaking: the speed of delivery, stutters, pauses, emphases. The prosody (patterns of rhythm and intonation) and musicality of speech contribute as much to the conversation as do the words themselves. Therefore, the record of the interaction should be animated to include these details as well as breathiness, pitch, colloquialisms and overlapping speech. These aspects of the interaction can be valuable for decoding meaning and providing insight into how interlocutors arrive at saying what they do.
Indeed, the field of conversation analysis has laid the groundwork for the systematic recording of close observations of the social world. The foreword recounts the story of Gail Jefferson, who, when tasked with transcribing audio tapes, developed a technical research craft rather than going through the motions of a mechanical chore. Thus was born ‘Jeffersonian transcription’ – a standard that has continued to evolve in response to studies that highlight the importance of features of speech delivery. The power of Jeffersonian transcription is evident from the prominence the authors afford it throughout the book. They do concede that it is complicated to read and appears technical, but stress that – with training and the guide they provide – its strengths outweigh its obscurity.
In the final chapter of the book, ‘Comparisons, Concerns, and Conclusions’, the authors discuss other existing transcription systems, including the International Phonetic Alphabet and the Discourse Transcription system. These are differently suited to a range of contexts, but the authors promote the Jeffersonian system as the most precise. Whichever framework the transcriber prefers given their background and research, Hepburn and Bolden claim that a sensitive transcript that transcends words is necessary to investigate the fundamental communication processes that make human interaction possible.
The details reflected in a Jeffersonian transcript give transcript readers the clearest idea of how the interaction actually sounded and enable peers to verify the analysis. Transcribing for Social Research indicates how to transcribe silences, micropauses, cut-offs, stretched sounds, jump-starts, smiley voices, creaky delivery, overlaps, tempo and more. By transcribing all these complexities well, the transcriber is repeatedly engaging with the data and is already beginning to analyse by thinking about the implications and inferences of all elements of speech. The authors dedicate larger sections to transcribing the following:
  • Audible inhalations and exhalations (as distinct from normal breathing), such as sighing, can impact the interaction as they are affectively loaded and can be responses as communicative as laughter or crying. For example, .hhh represents an audible inbreath, with one h for each 0.1-0.2 seconds of aspiration, depending on the speed of the surrounding talk. Transcribing aspirations is discussed in depth in Chapter Five.
  • Crying and other non-speech sounds of audible and embodied upset. Also addressed in Chapter Five, transcribing these is fundamental for research on the interactional role of crying, as various signs of distress can appear in isolation or can accumulate during the interaction. Responses to a speaker’s evident upset can also indicate the relationship between interlocutors (for example, a therapist would respond differently than a friend or parent would).
  • Atypical speech. Transcribing language learners (both adults and children) or speakers with speech production difficulties poses additional transcription challenges, which are dealt with in Chapter Four on Transcribing Speech Delivery.
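As an aside, the one-h-per-interval rule for aspirations mentioned above is mechanical enough to sketch in code. This is my own illustration, not anything from the book: the function name and the 0.15-second default (a midpoint of the 0.1-0.2 second range) are assumptions.

```python
def aspiration_token(duration_s, inbreath=True, secs_per_h=0.15):
    """Render an audible in- or out-breath as a Jeffersonian token.

    One 'h' per roughly 0.1-0.2 seconds of breath; secs_per_h picks a
    point in that range to suit the tempo of the surrounding talk.
    Inbreaths are prefixed with a period (.hhh); outbreaths are bare h's.
    """
    n = max(1, round(duration_s / secs_per_h))
    return ("." if inbreath else "") + "h" * n

# A 0.45-second audible inhalation at the default tempo
print(aspiration_token(0.45))                 # -> .hhh
# A 0.3-second sigh (exhalation)
print(aspiration_token(0.3, inbreath=False))  # -> hh
```

In practice, of course, the transcriber judges the h-count by ear against the local speech rate rather than by stopwatch; the point is only that the convention maps duration to notation in a principled way.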
For those working outside of English, Chapter Eight will be a useful starting point for academics transcribing in other languages. The authors have experience of this, and explain the choices researchers face concerning who does the transcribing, orthographic representation and developing language-specific transcription conventions (especially an issue with tonal languages such as Mandarin Chinese and Thai).
Chapter Nine on Technological Resources for Transcription outlines how to digitise recordings originally captured in analogue formats, and offers a short introduction to various software packages that provide audio-editing and visual representation functionality. As many of you may have experienced, sound editing can be critical in instances of noisy surroundings or quiet speakers, or even to anonymise voices.
I suspect that this is perhaps the chapter that many have been waiting for and will first flip to, looking for a reliable solution to automate transcription. Unsurprisingly, the authors strongly discourage this. Today’s tools are not yet sophisticated enough to produce accurate and detailed transcripts, and the authors maintain that the process of transcribing is part and parcel of the analysis in forcing an intimate engagement with the data.
The sheer number of symbols and tools in the transcription arsenal can be daunting for beginners. Hepburn and Bolden advise that beginners take the time to practise listening, and that it is better to initially use fewer symbols correctly rather than to misapply them – an incomplete analysis is preferable to a misleading one.
Transcribing for Social Research is written for students of transcription. The close of each chapter summarises the transcription conventions introduced thus far and suggests a list of recommended reading. Unfortunately, the book lacks a ‘cheat sheet’ or reproduction of Gail Jefferson’s (2004) glossary of transcription symbols; placed at an accessible location at the front or back of the book, this would have been a useful pedagogical addition, helping to make the book a go-to reference when transcribing.
I would recommend this book as it introduces a deep level of technical detail while being eminently readable. This combination makes it perfect both for those who want to get their teeth into Jeffersonian transcription and for those who only want to understand the opportunities that this transcription system offers. Even for researchers whose projects’ scope and methodologies do not require such detailed transcripts, the book is teeming with the history, philosophy and merits of transcription, and serves as an inspiration to begin.
