Neutralizing the effect of translation shifts on automatic machine translation evaluation
- dc.contributor.author Fomicheva, Marina
- dc.contributor.author Bel Rafecas, Núria
- dc.contributor.author da Cunha Fanego, Iria
- dc.date.accessioned 2023-05-18T06:09:19Z
- dc.date.available 2023-05-18T06:09:19Z
- dc.date.issued 2015
- dc.description Paper presented at the 16th International Conference (CICLing 2015), held April 14-20, 2015 in Cairo, Egypt.
- dc.description.abstract State-of-the-art automatic Machine Translation (MT) evaluation is based on the idea that the closer MT output is to Human Translation (HT), the higher its quality. Thus, automatic evaluation is typically approached by measuring some sort of similarity between machine and human translations. The most widely used evaluation systems calculate similarity at the surface level, for example, by computing the number of shared word n-grams. The correlation between automatic and manual evaluation scores at the sentence level is still not satisfactory. One of the main reasons is that metrics under-score acceptable candidate translations due to their inability to handle legitimate lexical and syntactic variation between possible translation options. Acceptable differences between candidate and reference translations are frequently due to optional translation shifts. It is common practice in HT to paraphrase what could be viewed as a close version of the source text in order to adapt it to target-language use. When a reference translation contains such changes, using it as the only point of comparison is less informative, since the differences are not indicative of MT errors. To alleviate this problem, we design a paraphrase generation system based on a set of rules that model prototypical optional shifts that human translators may have applied. By applying these rules to the available human reference, the system generates additional references in a principled and controlled way. We show that using linguistic rules to generate additional references neutralizes the negative effect of optional translation shifts on n-gram-based MT evaluation.
- dc.format.mimetype application/pdf
- dc.identifier.citation Fomicheva M, Bel N, da Cunha I. Neutralizing the effect of translation shifts on automatic machine translation evaluation. In: Gelbukh A, editor. Computational linguistics and intelligent text processing: 16th International Conference, CICLing 2015, Cairo, Egypt, Apr 14-20, 2015, Proceedings, Part I; 2015 Apr 14-20; Cairo, Egypt. Cham: Springer; 2015. p. 596-607. DOI: 10.1007/978-3-319-18111-0_45
- dc.identifier.doi http://dx.doi.org/10.1007/978-3-319-18111-0_45
- dc.identifier.isbn 978-3-319-18110-3
- dc.identifier.issn 0302-9743
- dc.identifier.uri http://hdl.handle.net/10230/56874
- dc.language.iso eng
- dc.publisher Springer
- dc.relation.ispartof Gelbukh A, editor. Computational linguistics and intelligent text processing: 16th International Conference, CICLing 2015, Cairo, Egypt, Apr 14-20, 2015, Proceedings, Part I; 2015 Apr 14-20; Cairo, Egypt. Cham: Springer; 2015. p. 596-607.
- dc.rights © Springer. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-18111-0_45
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Translation shifts
- dc.subject.keyword Machine Translation Evaluation
- dc.subject.keyword Paraphrase Generation
- dc.title Neutralizing the effect of translation shifts on automatic machine translation evaluation
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/acceptedVersion