The science of subtitling

If you are, like me, a lover of cultures who enjoys watching movies and series from all over the world, you probably know the frustration of watching a film in a foreign language when, suddenly, what you hear doesn’t match the subtitles you read on screen. There is an art, and also a science, to mastering the creation and implementation of subtitles: making viewers forget they are reading subtitles at all while they enjoy the on-screen adventure.

In the translation industry, most of us agree that audiovisual translation is among the most fun and engaging work we do, but it takes hard work and knowledge of many techniques to make the result smooth and seamless for consumers. Three methods are currently used to translate audiovisual content:

  1. Dubbing, where the original voices are replaced by voices in the target language.
  2. Voice-over, where voices in the target language are heard over the original voices (please read our previous article on this).
  3. Subtitling, where a transcription of the dialogue or narration in the target language appears in the lower part of the screen.

Each of these three approaches to multimedia translation has its challenges; the simplest by far, however, is subtitling. Even so, subtitling adds a new dimension to translation, as it entails both the technical and the human adaptation of the content. The technical team has to balance two constraints: the space available on the screen, and the time viewers need to read the subtitles comfortably without the experience suffering. The objective is to make it easy for the viewer to take in sound, image, and text at the same time.

Think of it as a mathematical formula: 24 frames make a second of action, so multiply that out to a minute or an hour of film and you have a lot of frames in which to place your text. Technicians have to pick the optimal frames for the time codes: the TC IN, when a subtitle should appear on screen, and the TC OUT, when it should leave it.
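The frame arithmetic above can be sketched in a few lines of code. This is a minimal illustration, assuming a fixed 24 fps frame rate and an SRT-style HH:MM:SS,mmm timecode; real projects may use 23.976, 25, or 29.97 fps, and the function names here are hypothetical.

```python
FPS = 24  # assumed frame rate; adjust for the actual footage

def frame_to_timecode(frame: int, fps: int = FPS) -> str:
    """Convert a frame count into an SRT-style HH:MM:SS,mmm timecode."""
    total_ms = round(frame * 1000 / fps)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{ms:03d}"

# A subtitle cue is defined by the frames chosen for its TC IN and TC OUT.
tc_in = frame_to_timecode(1440)   # frame 1440 at 24 fps = exactly 60 s
tc_out = frame_to_timecode(1512)  # 72 frames = 3 s later
print(tc_in, "-->", tc_out)       # prints "00:01:00,000 --> 00:01:03,000"
```

At 24 fps a one-hour film already offers 86,400 candidate frames, which is why spotting is treated as its own stage of the workflow.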

Techniques and stages

  1. The spotter handles time coding: determining the beginning and end of each phrase of dialogue, synchronizing the appearance of the subtitles with the spoken words, and ensuring legibility.
  2. The translator translates the subtitles prepared by the spotter and adapts them to the target language.
  3. The simulator, or visualization tester, reviews the result and corrects any remaining mistakes, producing the final project.

Translation revolves around messages, not words, so literal translation must be avoided. When the source material is in a different language than the target product, it is important to choose not only the right word to match the action, but also the one with the fewest characters, so the text can be read comfortably in the short time available.
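The trade-off between character count and on-screen time is often expressed as a characters-per-second (CPS) limit. The sketch below is illustrative only: the 17 CPS ceiling is a common industry guideline rather than a universal rule, and the function name is a hypothetical one.

```python
MAX_CPS = 17  # assumed reading-speed ceiling; guidelines vary by client

def reading_speed_ok(text: str, duration_s: float, max_cps: float = MAX_CPS) -> bool:
    """Return True if the subtitle fits within the reading-speed limit."""
    cps = len(text) / duration_s
    return cps <= max_cps

# A short line is fine in one second; an overlong one is not.
print(reading_speed_ok("Hello there.", 1.0))
print(reading_speed_ok(
    "This extremely long subtitle crams far too many characters into one second.",
    1.0,
))
```

A check like this is one reason the translator favors the shorter of two equally accurate words: it can be the difference between a cue that passes review and one that has to be recut.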





