
SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation


dc.contributor.author Agirre, Eneko
dc.contributor.author Banea, Carmen
dc.contributor.author Cer, Daniel
dc.contributor.author Diab, Mona
dc.contributor.author Gonzalez Agirre, Aitor
dc.contributor.author Mihalcea, Rada
dc.contributor.author Rigau Claramunt, German
dc.contributor.author Wiebe, Janyce
dc.date.accessioned 2017-12-19T10:21:21Z
dc.date.available 2017-12-19T10:21:21Z
dc.date.issued 2016
dc.identifier.citation Agirre E, Banea C, Cer D, Diab M, Gonzalez-Agirre A, Mihalcea R, Rigau G, Wiebe J. SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. SemEval-2016. 10th International Workshop on Semantic Evaluation; 2016 Jun 16-17; San Diego, CA. Stroudsburg (PA): ACL; 2016. p. 497-511.
dc.identifier.uri http://hdl.handle.net/10230/33534
dc.description Paper presented at the 10th International Workshop on Semantic Evaluation (SemEval-2016), held on 16-17 June 2016 in San Diego, California.
dc.description.abstract Semantic Textual Similarity (STS) seeks to measure the degree of semantic equivalence between two snippets of text. Similarity is expressed on an ordinal scale that spans from semantic equivalence to complete unrelatedness. Intermediate values capture specifically defined levels of partial similarity. While prior evaluations constrained themselves to just monolingual snippets of text, the 2016 shared task includes a pilot subtask on computing semantic similarity on cross-lingual text snippets. This year's traditional monolingual subtask involves the evaluation of English text snippets from the following four domains: Plagiarism Detection, Post-Edited Machine Translations, Question-Answering and News Article Headlines. From the question-answering domain, we include both question-question and answer-answer pairs. The cross-lingual subtask provides paired Spanish-English text snippets drawn from the same sources as the English data as well as independently sampled news data. The English subtask attracted 43 participating teams producing 119 system submissions, while the cross-lingual Spanish-English pilot subtask attracted 10 teams resulting in 26 systems.
dc.description.sponsorship This material is based in part upon work supported by DARPA-BAA-12-47 DEFT grant to George Washington University, by DEFT grant #12475008 to the University of Michigan, and by a MINECO grant to the University of the Basque Country (TUNER project TIN2015-65308-C5-1-R). Aitor Gonzalez Agirre is supported by a doctoral grant from MINECO (PhD grant FPU12/06243).
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher ACL (Association for Computational Linguistics)
dc.relation.ispartof SemEval-2016. 10th International Workshop on Semantic Evaluation; 2016 Jun 16-17; San Diego, CA. Stroudsburg (PA): ACL; 2016. p. 497-511.
dc.rights © ACL, Creative Commons Attribution 4.0 License
dc.rights.uri http://creativecommons.org/licenses/by/4.0/
dc.subject.other Lingüística computacional
dc.title SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation
dc.type info:eu-repo/semantics/conferenceObject
dc.relation.projectID info:eu-repo/grantAgreement/ES/1PE/TIN2015-65308-C5-1-R
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/publishedVersion
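To illustrate the kind of scoring the abstract describes, below is a minimal sketch (not from the paper) of a trivial lexical-overlap STS baseline. It assumes the conventional SemEval STS 0-5 scale (0 = completely unrelated, 5 = semantically equivalent); the Jaccard-overlap scoring and the function name sts_score are purely illustrative, not the systems evaluated in the shared task.

    # Illustrative sketch only: a lexical-overlap baseline mapped onto a 0-5 STS scale.
    # Real participating systems used far richer features or neural models.

    def sts_score(sent_a: str, sent_b: str) -> float:
        """Return a similarity score from 0 (unrelated) to 5 (equivalent)."""
        tokens_a = set(sent_a.lower().split())
        tokens_b = set(sent_b.lower().split())
        if not tokens_a or not tokens_b:
            return 0.0
        # Jaccard overlap of the two token sets, scaled to the 0-5 range
        jaccard = len(tokens_a & tokens_b) / len(tokens_a | tokens_b)
        return 5.0 * jaccard

    if __name__ == "__main__":
        pairs = [
            ("A man is playing a guitar.", "A man plays the guitar."),
            ("A man is playing a guitar.", "The stock market fell sharply."),
        ]
        for a, b in pairs:
            print(f"{sts_score(a, b):.2f}  {a!r} / {b!r}")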
