Authors: Bao, Lingxian; Lambert, Patrik; Badia Cardus, Toni
Date accessioned: 2023-03-17
Date available: 2023-03-17
Date issued: 2024
Citation: Bao L, Lambert P, Badia T. Improving aspect-based neural sentiment classification with lexicon enhancement, attention regularization and sentiment induction. Nat Lang Eng. 2024;30(1):30 p. DOI: 10.1017/S1351324922000432
ISSN: 1351-3249
Handle: http://hdl.handle.net/10230/56255
Abstract: Deep neural networks, as an end-to-end approach, lack robustness from an application point of view: it is very difficult to fix an obvious problem without retraining the model, for example, when a model consistently predicts positive on seeing the word "terrible." Meanwhile, it is less often noted that the commonly used attention mechanism is likely to "over-fit" by becoming overly sparse, so that some key positions in the input sequence may be overlooked by the network. To address these problems, we proposed a lexicon-enhanced attention LSTM model in 2019, named ATLX. In this paper, we describe extended experiments and analysis of the ATLX model, and we also try to further improve the aspect-based sentiment analysis system by combining it with a vector-based sentiment domain adaptation method.
Format: application/pdf
Language: eng
Rights: © The Author(s), 2022. Published by Cambridge University Press. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Title: Improving aspect-based neural sentiment classification with lexicon enhancement, attention regularization and sentiment induction
Type: info:eu-repo/semantics/article
DOI: http://dx.doi.org/10.1017/S1351324922000432
Keywords: Sentiment Analysis; Deep Learning; Attention; Lexicon; Domain Adaptation
Access rights: info:eu-repo/semantics/openAccess
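Note: The abstract refers to regularizing attention so that it does not become overly sparse and overlook key input positions. The snippet below is a minimal illustrative sketch of that general idea, not the paper's ATLX formulation: it adds a negative-entropy penalty on the attention weights to the task loss so that very peaked distributions are discouraged. The function name, the `lambda_reg` weight, and the masking convention are assumptions made for illustration.

```python
import torch

def attention_entropy_penalty(attn_weights, mask=None, eps=1e-12):
    """Illustrative entropy-based attention regularizer (assumed form, not ATLX).

    attn_weights: (batch, seq_len) attention distribution over input tokens.
    mask: optional (batch, seq_len) 0/1 tensor marking real (non-padding) tokens.
    Returns a scalar penalty to add to the task loss; minimizing it raises
    the average entropy of the attention, i.e. discourages overly sparse weights.
    """
    if mask is not None:
        # Zero out padding positions and renormalize to a valid distribution.
        attn_weights = attn_weights * mask
        attn_weights = attn_weights / (attn_weights.sum(dim=-1, keepdim=True) + eps)
    entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)  # (batch,)
    return -entropy.mean()

# Hypothetical usage inside a training step:
#   total_loss = task_loss + lambda_reg * attention_entropy_penalty(attn, mask)
```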