A considerable body of research deals with the automatic identification of hate speech and
related phenomena. However, cross-dataset model generalization remains a challenge. In this
context, we address two central questions that remain open: (i) to what extent does generalization
depend on the model and on the composition and annotation of the training data in terms of
different categories, and (ii) do specific features of the datasets or models influence the
generalization potential? To answer (i), we experiment with BERT, ALBERT, fastText, and SVM
models trained on nine common public English datasets, whose class (or category) labels are
standardized (and thus made comparable), in intra- and cross-dataset setups. The experiments
show that generalization indeed varies from model to model and that some of the categories
(e.g., ‘toxic’, ‘abusive’, or ‘offensive’) serve better as cross-dataset training categories than others
(e.g., ‘hate speech’). To answer (ii), we use a Random Forest model to assess the relevance
of different model and dataset features during the prediction of the performance of 450 BERT,
450 ALBERT, 450 fastText, and 348 SVM binary abusive language classifiers (1698 in total). We
find that in order to generalize well, a model already needs to perform well in an intra-dataset
scenario. Furthermore, we find that some other parameters are equally decisive for the success
of the generalization, including, e.g., the training and target categories and the percentage of
out-of-domain vocabulary.
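
As a rough illustration of the second analysis, the sketch below shows how a Random Forest can be fitted on dataset and model features to predict cross-dataset classifier performance and to inspect which features matter most. It is a minimal example assuming scikit-learn; the feature names and the synthetic data are hypothetical and do not reproduce the paper's actual feature set or pipeline.

```python
# Minimal sketch of a Random Forest meta-analysis: predicting classifier
# performance from dataset/model features and inspecting feature relevance.
# Feature names and values are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical meta-dataset: one row per trained binary classifier.
meta = pd.DataFrame({
    "intra_dataset_f1":     np.random.rand(200),            # performance on the source dataset
    "oov_vocab_percentage": np.random.rand(200),            # share of target vocabulary unseen in training
    "train_category_toxic": np.random.randint(0, 2, 200),   # one-hot encoded training category
    "target_category_hate": np.random.randint(0, 2, 200),   # one-hot encoded target category
})
cross_dataset_f1 = np.random.rand(200)                      # performance on the target dataset

rf = RandomForestRegressor(n_estimators=500, random_state=42)
rf.fit(meta, cross_dataset_f1)

# Impurity-based feature relevance, sorted from most to least influential.
for name, importance in sorted(zip(meta.columns, rf.feature_importances_),
                               key=lambda x: -x[1]):
    print(f"{name}: {importance:.3f}")
```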