Ilya Sutskever, Oriol Vinyals, & Quoc V. Le (2014). Sequence to Sequence Learning with Neural Networks.
arXiv.
DOI: https://doi.org/10.48550/arXiv.1409.3215
Abstract. Introduces sequence-to-sequence learning: an LSTM encoder compresses an input sequence into a single fixed-size vector, and an LSTM decoder generates the output sequence autoregressively from that vector. The paper showed that this simple approach could achieve competitive machine-translation results.
Tags: rnn seq2seq nmt lstm
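
The encoder-decoder structure described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's model: it uses plain tanh recurrent cells in place of LSTMs, random untrained weights, a tiny assumed vocabulary, and greedy decoding, so it only demonstrates the data flow (sequence → fixed vector → autoregressive generation).

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 6, 8  # toy vocabulary size and hidden size (assumed values)

# Random toy parameters; a trained model would learn these.
E = rng.normal(0, 0.1, (V, H))        # embedding table
W_enc = rng.normal(0, 0.1, (H + H, H))  # encoder recurrence
W_dec = rng.normal(0, 0.1, (H + H, H))  # decoder recurrence
W_out = rng.normal(0, 0.1, (H, V))      # hidden state -> vocabulary logits

def step(W, h, x):
    # One recurrent step (simple tanh cell standing in for an LSTM cell).
    return np.tanh(np.concatenate([h, x]) @ W)

def encode(tokens):
    # Compress the whole input sequence into one fixed-size vector.
    h = np.zeros(H)
    for t in tokens:
        h = step(W_enc, h, E[t])
    return h

def decode(h, bos=0, eos=1, max_len=10):
    # Generate autoregressively from the encoder's final state:
    # each emitted token is fed back in as the next input.
    out, prev = [], bos
    for _ in range(max_len):
        h = step(W_dec, h, E[prev])
        prev = int(np.argmax(h @ W_out))
        if prev == eos:
            break
        out.append(prev)
    return out

src = [2, 3, 4, 5]          # a toy source "sentence" of token ids
print(decode(encode(src)))  # token ids of the generated output
```

With random weights the output is meaningless; the point is the interface: `encode` returns a single vector regardless of input length, and `decode` conditions only on that vector plus its own previous outputs.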