Publication details for Professor Alexandra Cristea

Zhou, Yiwei, Cristea, Alexandra I. & Shi, Lei (2017), Connecting Targets to Tweets: Semantic Attention-based Model for Target-Specific Stance Detection, in Bouguettaya, Athman, Gao, Yunjun, Klimenko, Andrey, Chen, Lu, Zhang, Xiangliang, Dzerzhinskiy, Fedor, Jia, Weijia, Klimenko, Stanislav V. & Li, Qing eds, Lecture Notes in Computer Science 10569: Web Information Systems Engineering – WISE 2017, 18th International Conference. Moscow, Springer, Cham, 18-32.

Understanding what people say and really mean in tweets remains a wide-open research question. In particular, understanding the stance of a tweet, which is determined not only by its content but also by the given target, is a very recent research aim of the community. Constructing a tweet's vector representation with respect to the target is still challenging, especially when the target is only implicitly mentioned, or not mentioned at all, in the tweet. We believe that better performance can be obtained by incorporating information about the target into the tweet's vector representation. In this paper, we thus propose to embed a novel attention mechanism at the semantic level in the bi-directional GRU-CNN structure, which is more fine-grained than the existing token-level attention mechanism. This novel attention mechanism allows the model to automatically attend to the useful semantic features of informative tokens when deciding the target-specific stance, which in turn yields a conditional vector representation of the tweet with respect to the given target. We evaluate our proposed model on a recent, widely applied benchmark Stance Detection dataset from Twitter, from SemEval-2016 Task 6.A. Experimental results demonstrate that the proposed model substantially outperforms several strong baselines, including the state-of-the-art token-level attention mechanism on bi-directional GRU outputs and an SVM classifier.
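The core idea of the abstract — conditioning a tweet's representation on the target via attention — can be sketched in a few lines. This is an illustrative sketch only, not the authors' exact formulation: the bilinear scoring function, the variable names, and the use of plain NumPy are all assumptions; in the paper, the per-token features would come from the bi-directional GRU-CNN, and the attention operates at the semantic (feature) level rather than the token level shown here.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def target_attention(H, t, W):
    """Hypothetical target-conditioned attention.

    H : (n_tokens, d) per-token feature vectors (e.g. bi-GRU outputs)
    t : (d,) target embedding
    W : (d, d) learned bilinear interaction matrix (assumed form)

    Scores each token against the target, s_i = h_i @ W @ t, then
    returns the attention-weighted tweet vector and the weights.
    """
    scores = H @ W @ t          # (n_tokens,) relevance of each token to the target
    alpha = softmax(scores)     # attention weights, sum to 1
    return alpha @ H, alpha     # conditional tweet representation, (d,)

# Toy usage: 5 tokens, 8-dim features, identity interaction matrix.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
t = rng.normal(size=8)
v, alpha = target_attention(H, t, np.eye(8))
```

The resulting vector `v` is a target-specific summary of the tweet: tokens whose features align with the target embedding receive larger weights, so the same tweet yields different representations for different targets, which is precisely the conditional behaviour the abstract argues for.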