In spite of the recent advances in Machine Translation (MT)
for spoken languages, translation between spoken and Sign Languages
(SLs) or between Sign Languages remains a difficult problem. Here, we
study how Neural Machine Translation (NMT) might overcome the communication barriers for the Deaf and Hard-of-Hearing (DHH) community.
Namely, we approach the Text2Gloss translation task in which spoken
text segments are translated to lexical sign representations. In this context, we leverage transformer-based models via (1) injecting linguistic
features that can guide the learning process towards better translations;
and (2) applying a Transfer Learning strategy to reuse the knowledge
of a pre-trained model. To this end, different aggregation strategies are
compared and evaluated under Transfer Learning and random weight
initialization conditions. The results of this research reveal that linguistic features can successfully contribute to achieve more accurate models;
meanwhile, the Transfer Learning procedure applied conducted to substantial performance increases.
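The feature-injection idea above can be illustrated with a minimal sketch of one possible aggregation strategy: concatenating each token embedding with an embedding of its linguistic feature (e.g., a POS tag) before feeding the sequence to the transformer encoder. The class and parameter names are illustrative assumptions, not the exact method or dimensions used in this work.

```python
import random


class FeatureAwareEmbedding:
    """Sketch: enrich token embeddings with linguistic-feature embeddings
    via concatenation (one possible aggregation strategy; illustrative only)."""

    def __init__(self, vocab_size, n_features, tok_dim, feat_dim, seed=0):
        rng = random.Random(seed)
        # Randomly initialized lookup tables; in practice these are learned.
        self.tok = [[rng.uniform(-1, 1) for _ in range(tok_dim)]
                    for _ in range(vocab_size)]
        self.feat = [[rng.uniform(-1, 1) for _ in range(feat_dim)]
                     for _ in range(n_features)]

    def __call__(self, token_ids, feature_ids):
        # Concatenate each token vector with its feature (e.g., POS-tag)
        # vector, yielding enriched inputs for the transformer encoder.
        return [self.tok[t] + self.feat[f]
                for t, f in zip(token_ids, feature_ids)]


emb = FeatureAwareEmbedding(vocab_size=100, n_features=10, tok_dim=8, feat_dim=4)
vecs = emb([5, 17, 42], [1, 0, 3])
print(len(vecs), len(vecs[0]))  # 3 vectors of dimension 8 + 4 = 12
```

Concatenation is only one of the aggregation strategies compared in the paper; alternatives such as summing the two embedding spaces (which requires matching dimensions) follow the same pattern.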