Improving Slot Filling in Spoken Language Understanding with Joint Pointer and Attention

Lin Zhao, Zhe Feng

We present a generative neural network model for slot filling based on a sequence-to-sequence (Seq2Seq) model together with a pointer network, in the setting where only sentence-level slot annotations are available in the spoken dialogue data. The model predicts slot values by jointly learning to copy a word, which may be out-of-vocabulary (OOV), from the input utterance through a pointer network, or to generate a word within the vocabulary through an attentional Seq2Seq model. Experimental results show the effectiveness of our slot filling model, especially in addressing the OOV problem. Additionally, we integrate the proposed model into a spoken language understanding system and achieve state-of-the-art performance on the benchmark data.
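As a rough illustration of the joint copy/generate mechanism described above, below is a minimal sketch of a pointer-generator style decoding step in PyTorch. The class name JointPointerGenerator, the dot-product attention, the sigmoid gate p_gen, and the extended-vocabulary bookkeeping are illustrative assumptions; the paper's exact architecture and gating may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointPointerGenerator(nn.Module):
    """One decoding step that mixes generating from a fixed vocabulary
    (attentional Seq2Seq) with copying a source token (pointer network).
    Sketch only; not the authors' exact formulation."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.vocab_size = vocab_size
        self.attn = nn.Linear(hidden_size, hidden_size)    # attention projection
        self.out = nn.Linear(2 * hidden_size, vocab_size)  # generation head
        self.gate = nn.Linear(2 * hidden_size, 1)          # copy-vs-generate gate

    def forward(self, dec_state, enc_outputs, src_ids, extended_vocab_size):
        # dec_state: (B, H); enc_outputs: (B, T, H); src_ids: (B, T) ids in the
        # extended vocabulary (fixed vocab plus per-utterance OOV slots).
        scores = torch.bmm(enc_outputs, self.attn(dec_state).unsqueeze(2)).squeeze(2)
        copy_dist = F.softmax(scores, dim=1)               # pointer distribution
        context = torch.bmm(copy_dist.unsqueeze(1), enc_outputs).squeeze(1)
        features = torch.cat([dec_state, context], dim=1)
        gen_dist = F.softmax(self.out(features), dim=1)    # vocabulary distribution
        p_gen = torch.sigmoid(self.gate(features))         # (B, 1) soft gate

        # Mix the two distributions over the extended vocabulary; OOV source
        # words can only receive probability mass via the copy term.
        p_final = torch.zeros(dec_state.size(0), extended_vocab_size,
                              device=dec_state.device)
        p_final[:, :self.vocab_size] = p_gen * gen_dist
        p_final.scatter_add_(1, src_ids, (1 - p_gen) * copy_dist)
        return p_final


# Toy usage: batch of 2 utterances, 5 source tokens, hidden size 8,
# fixed vocabulary of 100 words plus 3 per-utterance OOV slots.
model = JointPointerGenerator(hidden_size=8, vocab_size=100)
dec_state = torch.randn(2, 8)
enc_outputs = torch.randn(2, 5, 8)
src_ids = torch.randint(0, 103, (2, 5))
probs = model(dec_state, enc_outputs, src_ids, extended_vocab_size=103)
print(probs.shape, probs.sum(dim=1))  # (2, 103); each row sums to ~1
```

In this sketch, the gate value p_gen decides per step how much probability mass goes to generating from the fixed vocabulary versus copying a source position, which is one common way to realize the joint learning described in the abstract.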