Sequence to Sequence

Seq2Seq, or Sequence to Sequence, is a model commonly used for sequence prediction tasks such as language modelling and machine translation. It is built from two LSTMs (Long Short-Term Memory networks). The first LSTM, the encoder, reads the input sequence one timestep at a time and compresses it into a large fixed-dimensional vector representation called the context vector. The second LSTM, the decoder, is conditioned on this context vector and generates the output sequence one token at a time.
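
Below is a minimal sketch of this encoder-decoder setup in PyTorch. The vocabulary size, embedding and hidden dimensions, the assumption that token id 0 is the start-of-sequence symbol, and all variable names are illustrative choices, not part of any particular published implementation.

```python
# Minimal Seq2Seq sketch: an encoder LSTM produces a context vector,
# a decoder LSTM conditioned on it generates the output step by step.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) of token ids
        embedded = self.embedding(src)
        # the final hidden and cell states act as the fixed-dimensional context vector
        _, (hidden, cell) = self.lstm(embedded)
        return hidden, cell

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden, cell):
        # tgt: (batch, 1) -- one target token per decoding step
        embedded = self.embedding(tgt)
        output, (hidden, cell) = self.lstm(embedded, (hidden, cell))
        logits = self.out(output)          # (batch, 1, vocab_size)
        return logits, hidden, cell

# Toy usage: encode a source batch, then decode greedily for a few steps
encoder = Encoder(vocab_size=1000, emb_dim=32, hidden_dim=64)
decoder = Decoder(vocab_size=1000, emb_dim=32, hidden_dim=64)

src = torch.randint(0, 1000, (2, 7))       # batch of 2 source sequences, length 7
hidden, cell = encoder(src)                # context vector summarising the input

tok = torch.zeros(2, 1, dtype=torch.long)  # assume id 0 is the start-of-sequence token
for _ in range(5):                         # generate 5 output tokens greedily
    logits, hidden, cell = decoder(tok, hidden, cell)
    tok = logits.argmax(dim=-1)            # next-token prediction fed back in
```

In practice the decoder is usually trained with teacher forcing (feeding in the ground-truth previous token rather than its own prediction), but the greedy loop above shows how the context vector carries all the information from the source sequence into generation.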
