Authors: Briva-Iglesias, Vicent; Dogru, Gokhan; Cavalheiro Camargo, João Lucas

Date accessioned: 2025-01-28
Date available: 2025-01-28
Date issued: 2024

Citation: Briva-Iglesias V, Dogru G, Cavalheiro Camargo JL. Large language models "ad referendum": How good are they at machine translation in the legal domain? MonTI. Monographs in Translation and Interpreting. 2024;16:75–107. DOI: 10.6035/MonTI.2024.16.02

ISSN: 1889-4178
Handle: http://hdl.handle.net/10230/69348

Abstract: This study evaluates the machine translation (MT) quality of two state-of-the-art large language models (LLMs) against a traditional neural machine translation (NMT) system across four language pairs in the legal domain. It combines automatic evaluation metrics (AEMs) and human evaluation (HE) by professional translators to assess translation ranking, fluency and adequacy. The results indicate that while Google Translate generally outperforms LLMs in AEMs, human evaluators rate LLMs, especially GPT-4, comparably or slightly better in terms of producing contextually adequate and fluent translations. This discrepancy suggests LLMs' potential in handling specialized legal terminology and context, highlighting the importance of human evaluation methods in assessing MT quality. The study underscores the evolving capabilities of LLMs in specialized domains and calls for reevaluation of traditional AEMs to better capture the nuances of LLM-generated translations.

Format: application/pdf
Language: eng

Rights: This work is shared under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/

Title: Large language models "ad referendum": How good are they at machine translation in the legal domain?

Type: info:eu-repo/semantics/article
DOI: http://dx.doi.org/10.6035/MonTI.2024.16.02

Keywords: Machine translation; Large language model; Legal translation; Human evaluation; Automatic evaluation

Access rights: info:eu-repo/semantics/openAccess