
Introduction
Sentiment Analysis (SA) aims to classify the sentiment polarity of a whole sentence. In contrast, Aspect-Based Sentiment Analysis (ABSA) identifies specific target aspects of an entity and classifies the sentiment polarity towards each aspect. For example, the sentence “Food is pretty good but the service is horrific” contains two target aspects, “food” and “service”, with opposite sentiment polarities. ABSA itself has two variants: Aspect-Target Sentiment Classification (ATSC), which the example above illustrates and which is the focus of our paper, and aspect-category sentiment classification, which targets predefined aspect categories rather than explicit terms in the sentence.
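To make the ATSC format concrete, the sketch below (the field names are our own and purely illustrative) decomposes the example sentence into two classification instances, one per target aspect:

```python
# Illustrative only: the example sentence decomposed into two ATSC
# instances. Each instance pairs the full sentence with one aspect
# term and that aspect's gold polarity label.
sentence = "Food is pretty good but the service is horrific"

atsc_instances = [
    {"sentence": sentence, "aspect": "food", "polarity": "positive"},
    {"sentence": sentence, "aspect": "service", "polarity": "negative"},
]

for inst in atsc_instances:
    print(f"aspect={inst['aspect']!r} -> {inst['polarity']}")
```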
In recent years, neural networks have substantially improved ABSA performance by learning target-context relationships. More recently, pre-trained language models have shown powerful representation ability, and applying them to downstream tasks, including ABSA, has produced strong results. Lately, domain-specific post-trained BERT has achieved even better performance on this task. In this paper, we experiment with LSTM-based and BERT-based models for aspect-based sentiment analysis and apply them to a more challenging dataset than the commonly used benchmarks. We then conduct a thorough error analysis of our models to understand the causes of erroneous classifications.
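One common way to apply BERT to ATSC (a general recipe, not necessarily the exact setup of every model we compare) is to encode the sentence and the aspect term as a sentence pair and classify the pooled representation into polarity classes. A minimal sketch using the HuggingFace transformers API, where the checkpoint name and the three-way label scheme are illustrative assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Encode (sentence, aspect) as a BERT sentence pair and classify it
# into 3 polarity classes (e.g., negative / neutral / positive).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

sentence = "Food is pretty good but the service is horrific"
inputs = tokenizer(sentence, "service", return_tensors="pt")  # pair input
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted polarity index (untrained head here)
```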
Our presentation
We implement LSTM-based models (LSTM and ATAE-LSTM) and BERT-based models (vanilla BERT-base and BERT-ADA) on a challenging dataset called MAMS, in which each sentence contains multiple aspects with different sentiment polarities. We also conduct an error analysis using input reduction, as sketched below. The experimental results show that BERT-ADA outperforms the other models on the MAMS dataset. You can find more details in the video.
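Input reduction greedily removes the token whose removal least disturbs the model's prediction, and stops once any further removal would flip the predicted label; the surviving tokens indicate what the classifier actually relies on. A minimal model-agnostic sketch under stated assumptions: `predict` and `confidence` are hypothetical hooks into a trained ATSC model, not part of any library.

```python
from typing import Callable, List


def input_reduction(
    tokens: List[str],
    aspect: str,
    predict: Callable[[List[str], str], int],
    confidence: Callable[[List[str], str], float],
) -> List[str]:
    """Greedily drop tokens while the predicted polarity is unchanged.

    `predict(tokens, aspect)` returns the model's label;
    `confidence(tokens, aspect)` returns the probability assigned to
    the original label. Both are hypothetical stand-ins.
    """
    original = predict(tokens, aspect)
    reduced = tokens[:]
    while len(reduced) > 1:
        # Try removing each remaining token; among removals that keep
        # the original label, take the one with the highest confidence.
        candidates = []
        for i in range(len(reduced)):
            trial = reduced[:i] + reduced[i + 1:]
            if predict(trial, aspect) == original:
                candidates.append((confidence(trial, aspect), trial))
        if not candidates:
            break  # any further removal flips the prediction
        _, reduced = max(candidates, key=lambda c: c[0])
    return reduced
```

In an error analysis, the reduced input is diagnostic: if the tokens that survive are unrelated to the aspect in question, the prediction likely rests on spurious cues rather than aspect-specific sentiment.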