In this work we address the lack of annotated data for argument mining in scientific texts by annotating argumentative units and relations in research abstracts from two scientific domains. We evaluate our annotations by computing inter-annotator agreement, which ranges from moderate to substantial depending on the difficulty of the task and the domain. We use the newly annotated corpus to fine-tune BERT-based models for argument mining in single- and multi-task settings, and finally explore the adaptation of models trained on one scientific discipline (computational linguistics) to predict the argumentative structure of abstracts in a different one (biomedicine).