Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2018-05-20 Cooperative journals: 《计算机应用研究》 (Application Research of Computers)
Abstract: This paper proposes a method for English sentence compression based on "Pre-reading" and a simple attention mechanism. Building on the Gated Recurrent Unit (GRU) and the encoder-decoder framework, the model encodes the semantics of the original sentence twice in the encoding stage. The result of the first pass is used as global information to strengthen the second semantic model, yielding a more comprehensive and accurate semantic vector. Taking full account of the particular nature of deletion-based sentence compression, the paper adopts a simple 3t-Attention mechanism in the decoding stage to improve the efficiency and accuracy of prediction: only the semantic vectors most relevant to the current decoding time step are fed into the decoder. Experiments on the Google News sentence compression dataset show that the model significantly outperforms recent state-of-the-art methods. Therefore, "Pre-reading" and the simple attention mechanism can effectively improve the accuracy of English sentence compression.
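The "Pre-reading" idea described above — encoding the sentence twice, with the first pass supplying a global summary that conditions the second pass — can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the paper's actual implementation; the class name, dimensions, and the choice to use the first pass's final hidden state as the second GRU's initial state are all assumptions.

```python
import torch
import torch.nn as nn

class PreReadEncoder(nn.Module):
    """Hypothetical sketch of a "pre-reading" encoder: run a GRU over
    the sentence twice. The first pass produces a global summary of
    the whole sentence; the second pass re-reads the sentence with
    that summary as its initial hidden state, so every per-token
    state is informed by global sentence-level context."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two separate GRUs: one for the pre-reading pass,
        # one for the context-aware second pass.
        self.first_gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.second_gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)                     # (batch, seq, embed)
        # First pass: keep only the final hidden state as a
        # global summary of the sentence.
        _, summary = self.first_gru(x)                # (1, batch, hidden)
        # Second pass: re-encode the same sentence, initialized
        # with the global summary from the first pass.
        outputs, final = self.second_gru(x, summary)  # outputs: (batch, seq, hidden)
        return outputs, final                         # per-token states + semantic vector
```

The decoder (not shown) would then attend over `outputs`; under the simple attention scheme described in the abstract, only the encoder states most relevant to the current decoding step would be passed in, rather than a weighted sum over all of them.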