Taking stock of the year as it comes to an end, it seems appropriate to look back to the very exciting event Translation and Disruption: Global and local perspectives, which took place at University of Portsmouth on November 4th.
Portsmouth’s 17th translation conference, seamlessly organised by Dr Akiko Sakamoto, Ms Begoña Rodríguez de Céspedes and Dr Jonathan Evans, was preceded on November 3rd by a panel discussion which opened with the following question: “Who should become machine translation post-editors?” Enlisted to provide their perspectives on this controversial issue were Prof Dorothy Kenny (DCU), Prof Masaru Yamada (Kansai University), Dr Olga Torres-Hostench (UAB), and Dr Akiko Sakamoto (University of Portsmouth). The conversation was soon joined by many of the attendees, leading to a rich and insightful discussion thanks to the presence of scholars, practitioners, and language service providers alike.
Among the main concerns raised by participants was the need to move away from volume-based remuneration models and to highlight the provision of added-value services at a time when the quality argument is becoming less compelling as a ground for resistance to machine translation. The key issue, nevertheless, appeared to be one of role definitions: the notion of the professional post-editor, training for skill sets seen as both differentiated and overlapping, and the reality of machine translation use in the current work of translators and professional linguists.
The pre-conference event finished with a public lecture by Dr Eiichiro Sumita, a Fellow of the National Institute of Information and Communications Technology in Japan, who described the impressive advances that the country is making in machine translation technologies in preparation for the 2020 Tokyo Olympic Games, such as the multilingual speech-to-speech translation system for smartphones VoiceTra.
On the following day, Prof Nobuo Ueno from the Japan Society for the Promotion of Science was in charge of the official opening of the conference before Prof Kenny’s keynote speech. Borrowing Clayton Christensen’s notion of disruption from his 1997 work, The Innovator’s Dilemma, Prof Kenny delivered a most thought-provoking talk about the upward-downward mobility dilemma that translators are currently facing, suggesting a focus on the provision of indispensable complementary rather than substitute services as a potential solution.
Although varied in their approach and focus, the papers presented throughout the day successfully highlighted again the main common threads relating to the disruptive power of digital technologies in translation: the role and application of technology along the complete translation supply chain, the changing landscape of translation as a professional activity, and the implications for training and education.
Training was also the main topic of the second keynote address of the day, delivered by Prof Kayoko Takeda from Rikkyo University. Prof Takeda presented an inspiring proposal for curricular innovation based on a holistic approach, aiming at the education of the various future actors in the translation ecosystem by means of general translation and interpreting literacy courses for undergraduate students.
What will language-related jobs look like in the next ten years? Does post-editing call for a distinct professional profile? What should translator training involve? How is the translation profession to be framed in the current market? These are extremely topical questions which are relevant for both translation practice and theory but also relate to wider economic and educational issues, so events like the Translation and Disruption conference cannot but be commended for bringing them to the fore. Here’s hoping that the conversation will continue in the coming year!
For more information, please visit the event website.
This is a really interesting development in the field of translation, both in terms of the professional training and the development of translators. I am not for or against MT per se. I feel that it has a place in some areas but is not welcome or suitable in others. One might cite the translation of a novel or a legal text as an example of where a machine can never achieve the results of a human. Nonetheless, I accept that it is coming, and with the ever-increasing production of ‘content’ both online and in print, the industry (and scholarship) needs to adapt to ensure that translation can remain high-quality while being produced at speed.

My question is this: if translation workflows are inevitably going to change and professional training is going to adapt as a result, does that mean translated output is going to get better, i.e. better quality produced more quickly, or does it represent a shift in what we believe to be ‘acceptable’? Are consumers more willing to accept a text of average quality when it communicates everything that it needs to? Indeed, increasingly more people are willing to watch films and TV series from torrent sites on small screens that reduce the quality of the content considerably, and consumers do not seem to mind. Also, despite the clear advantage of MT in producing translated text instantaneously, if we are now considering the inclusion of post-editors in the mix, might this be a case of ‘too many cooks spoil the broth’? If anything, it is not just post-editing skills that we need to include in translation curricula, but also project management skills, so as to be able to manage more and more actors with increasingly specialised roles.
To the question of whether translated output is going to improve or whether we will see an increased tolerance of lower quality, I would say “yes” to both. There is actually a growing business model based on letting customers decide how much they are ready to pay for different degrees of quality. But the fact that there is more lower-quality translation available can also mean more demand for higher-quality, added-value, specialised services.