SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation
- dc.contributor.author Agirre, Eneko
- dc.contributor.author Banea, Carmen
- dc.contributor.author Cer, Daniel
- dc.contributor.author Diab, Mona
- dc.contributor.author Gonzalez Agirre, Aitor
- dc.contributor.author Mihalcea, Rada
- dc.contributor.author Rigau Claramunt, German
- dc.contributor.author Wiebe, Janyce
- dc.date.accessioned 2017-12-19T10:21:21Z
- dc.date.available 2017-12-19T10:21:21Z
- dc.date.issued 2016
- dc.description Paper presented at the 10th International Workshop on Semantic Evaluation (SemEval-2016), held on 16-17 June 2016 in San Diego, California.
- dc.description.abstract Semantic Textual Similarity (STS) seeks to measure the degree of semantic equivalence between two snippets of text. Similarity is expressed on an ordinal scale that spans from semantic equivalence to complete unrelatedness. Intermediate values capture specifically defined levels of partial similarity. While prior evaluations constrained themselves to just monolingual snippets of text, the 2016 shared task includes a pilot subtask on computing semantic similarity on cross-lingual text snippets. This year’s traditional monolingual subtask involves the evaluation of English text snippets from the following four domains: Plagiarism Detection, Post-Edited Machine Translations, Question-Answering and News Article Headlines. From the question-answering domain, we include both question-question and answer-answer pairs. The cross-lingual subtask provides paired Spanish-English text snippets drawn from the same sources as the English data as well as independently sampled news data. The English subtask attracted 43 participating teams producing 119 system submissions, while the cross-lingual Spanish-English pilot subtask attracted 10 teams resulting in 26 systems.
- dc.description.sponsorship This material is based in part upon work supported by DARPA-BAA-12-47 DEFT grant to George Washington University, by DEFT grant #12475008 to the University of Michigan, and by a MINECO grant to the University of the Basque Country (TUNER project TIN2015-65308-C5-1-R). Aitor Gonzalez Agirre is supported by a doctoral grant from MINECO (PhD grant FPU12/06243).
- dc.format.mimetype application/pdf
- dc.identifier.citation Agirre E, Banea C, Cer D, Diab M, Gonzalez-Agirre A, Mihalcea R, Rigau G, Wiebe J. SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. SemEval-2016. 10th International Workshop on Semantic Evaluation; 2016 Jun 16-17; San Diego, CA. Stroudsburg (PA): ACL; 2016. p. 497-511.
- dc.identifier.uri http://hdl.handle.net/10230/33534
- dc.language.iso eng
- dc.publisher ACL (Association for Computational Linguistics)
- dc.relation.ispartof SemEval-2016. 10th International Workshop on Semantic Evaluation; 2016 Jun 16-17; San Diego, CA. Stroudsburg (PA): ACL; 2016. p. 497-511.
- dc.relation.projectID info:eu-repo/grantAgreement/ES/1PE/TIN2015-65308-C5-1-R
- dc.rights © ACL, Creative Commons Attribution 4.0 License
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri http://creativecommons.org/licenses/by/4.0/
- dc.subject.other Computational linguistics
- dc.title SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/publishedVersion