Commit de581e20 authored by Dmitri Soshnikov

Update environment and torchnlp.py

parent bfb93b02
@@ -16,7 +16,7 @@ tensorflow-datasets==4.4.0
 tensorflow-hub==0.12.0
 tensorflow-text==2.8.1
 tensorflow==2.8.1
-tensorboard==2.8.1
+tensorboard==2.8.0
 tokenizers==0.10.3
 torchinfo==0.0.8
 tqdm==4.62.3
@@ -11,9 +11,7 @@ After you install miniconda, you need to clone the repository and create a virtu
 ```bash
 git clone http://github.com/microsoft/ai-for-beginners
 cd ai-for-beginners
-cd .devcontainer
-conda env create --name ai4beg --file environment.yml
-cd ..
+conda env create --name ai4beg --file .devcontainer/environment.yml
 conda activate ai4beg
 ```
@@ -23,9 +23,16 @@ def load_dataset(ngrams=1,min_freq=1):
     vocab = torchtext.vocab.vocab(counter, min_freq=min_freq)
     return train_dataset,test_dataset,classes,vocab
 
+stoi_hash = {}
 def encode(x,voc=None,unk=0,tokenizer=tokenizer):
+    global stoi_hash
     v = vocab if voc is None else voc
-    return [v.get_stoi().get(s,unk) for s in tokenizer(x)]
+    if v in stoi_hash.keys():
+        stoi = stoi_hash[v]
+    else:
+        stoi = v.get_stoi()
+        stoi_hash[v]=stoi
+    return [stoi.get(s,unk) for s in tokenizer(x)]
 
 def train_epoch(net,dataloader,lr=0.01,optimizer=None,loss_fn = torch.nn.CrossEntropyLoss(),epoch_size=None, report_freq=200):
     optimizer = optimizer or torch.optim.Adam(net.parameters(),lr=lr)
%% Cell type:markdown id: tags:
# Recurrent neural networks
In the previous module, we used rich semantic representations of text together with a simple linear classifier on top of the embeddings. This architecture captures the aggregated meaning of the words in a sentence, but it does not take the **order** of words into account, because the aggregation operation on top of the embeddings removes this information from the original text. Because these models are unable to model word ordering, they cannot solve more complex or ambiguous tasks such as text generation or question answering.
To capture the meaning of a text sequence, we need to use another neural network architecture, called a **recurrent neural network**, or RNN. In an RNN, we pass our sentence through the network one symbol at a time, and the network produces some **state**, which we then pass to the network again together with the next symbol.
<img alt="RNN" src="images/rnn.png" width="60%"/>
Given the input sequence of tokens $X_0,\dots,X_n$, the RNN creates a sequence of neural network blocks and trains this sequence end-to-end using backpropagation. Each network block takes a pair $(X_i,S_i)$ as input and produces $S_{i+1}$ as a result. The final state $S_n$ (or the output of the last block) goes into a linear classifier to produce the result. All network blocks share the same weights, and are trained end-to-end in one backpropagation pass.
Because the state vectors $S_0,\dots,S_n$ are passed through the network, it is able to learn the sequential dependencies between words. For example, when the word *not* appears somewhere in the sequence, the network can learn to negate certain elements within the state vector.
> Since the weights of all RNN blocks in the picture are shared, the same picture can be represented as one block (on the right) with a recurrent feedback loop, which passes the output state of the network back to the input.
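To make the recurrence concrete, here is a minimal sketch (not part of the lesson code) that unrolls a single `torch.nn.RNNCell` over a toy batch by hand, feeding the state produced at each step back into the cell together with the next input:

``` python
import torch

# toy setup: a batch of 2 sequences, 4 time steps, 8-dimensional inputs
batch_size, seq_len, input_dim, hidden_dim = 2, 4, 8, 16
x = torch.randn(batch_size, seq_len, input_dim)

cell = torch.nn.RNNCell(input_dim, hidden_dim)
state = torch.zeros(batch_size, hidden_dim)   # initial state S_0

for t in range(seq_len):
    # each block takes the pair (X_t, S_t) and produces S_{t+1}
    state = cell(x[:, t, :], state)

print(state.shape)   # torch.Size([2, 16]) - final state S_n
```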
Let's see how recurrent neural networks can help us classify our news dataset.
%% Cell type:code id: tags:
``` python
import torch
import torchtext
from torchnlp import *
train_dataset, test_dataset, classes, vocab = load_dataset()
vocab_size = len(vocab)
```
%% Output
Loading dataset...
d:\WORK\ai-for-beginners\5-NLP\16-RNN\data\train.csv: 29.5MB [00:01, 28.3MB/s]
d:\WORK\ai-for-beginners\5-NLP\16-RNN\data\test.csv: 1.86MB [00:00, 9.72MB/s]
Building vocab...
%% Cell type:markdown id: tags:
## Simple RNN classifier
In the case of a simple RNN, each recurrent unit is a simple linear network, which takes a concatenated input vector and state vector, and produces a new state vector. PyTorch represents this unit with the `RNNCell` class, and a network of such cells as the `RNN` layer.
To define an RNN classifier, we will first apply an embedding layer to lower the dimensionality of the input vocabulary, and then add an RNN layer on top of it:
%% Cell type:code id: tags:
``` python
class RNNClassifier(torch.nn.Module):
def __init__(self, vocab_size, embed_dim, hidden_dim, num_class):
super().__init__()
self.hidden_dim = hidden_dim
self.embedding = torch.nn.Embedding(vocab_size, embed_dim)
self.rnn = torch.nn.RNN(embed_dim,hidden_dim,batch_first=True)
self.fc = torch.nn.Linear(hidden_dim, num_class)
def forward(self, x):
batch_size = x.size(0)
x = self.embedding(x)
x,h = self.rnn(x)
return self.fc(x.mean(dim=1))
```
%% Cell type:markdown id: tags:
> **Note:** We use an untrained embedding layer here for simplicity, but for even better results we can use a pre-trained embedding layer with Word2Vec or GloVe embeddings, as described in the previous unit. For better understanding, you might want to adapt this code to work with pre-trained embeddings.
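As a rough sketch of that exercise (this is not part of the original lesson code, and the helper name is hypothetical), the pre-trained GloVe vectors shipped with `torchtext` could be copied into the embedding layer before training; note that the embedding dimension then has to match the GloVe dimension (e.g. 100) rather than 64:

``` python
import torch
import torchtext

# hypothetical helper: build an embedding layer initialized from GloVe vectors
def build_pretrained_embedding(vocab, embed_dim=100):
    glove = torchtext.vocab.GloVe(name='6B', dim=embed_dim)   # downloads the vectors on first use
    weights = glove.get_vecs_by_tokens(vocab.get_itos())      # words missing from GloVe get zero rows
    return torch.nn.Embedding.from_pretrained(weights, freeze=False)
```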
In our case, we will use a padded data loader, so each batch will contain a number of padded sequences of the same length. The RNN layer will take the sequence of embedding tensors, and produce two outputs:
* $x$ is a sequence of RNN cell outputs at each step
* $h$ is the final hidden state for the last element of the sequence
We then apply a fully-connected linear classifier to produce a score for each of the classes.
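For example, a quick (purely illustrative) shape check for a batch of 16 padded sequences of length 50, with an embedding size of 64 and a hidden size of 32, would look like this:

``` python
rnn = torch.nn.RNN(input_size=64, hidden_size=32, batch_first=True)
emb = torch.randn(16, 50, 64)   # (batch, seq_len, embed_dim)
x, h = rnn(emb)
print(x.shape)                  # torch.Size([16, 50, 32]) - output at every step
print(h.shape)                  # torch.Size([1, 16, 32])  - final hidden state
```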
> **Note:** RNNs are quite difficult to train, because once the RNN cells are unrolled along the sequence length, the resulting number of layers involved in backpropagation is quite large. Thus we need to select a small learning rate, and train the network on a larger dataset to produce good results. It can take quite a long time, so using a GPU is preferred.
%% Cell type:code id: tags:
``` python
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, collate_fn=padify, shuffle=True)
net = RNNClassifier(vocab_size,64,32,len(classes)).to(device)
train_epoch(net,train_loader, lr=0.001)
```
%% Output
3200: acc=0.3090625
6400: acc=0.38921875
9600: acc=0.4590625
12800: acc=0.511953125
16000: acc=0.5506875
19200: acc=0.57921875
22400: acc=0.6070089285714285
25600: acc=0.6304296875
28800: acc=0.6484027777777778
32000: acc=0.66509375
35200: acc=0.6790056818181818
38400: acc=0.6929166666666666
41600: acc=0.7035817307692308
44800: acc=0.7137276785714286
48000: acc=0.72225
51200: acc=0.73001953125
54400: acc=0.7372794117647059
57600: acc=0.7436631944444444
60800: acc=0.7503947368421052
64000: acc=0.75634375
67200: acc=0.7615773809523809
70400: acc=0.7662642045454545
73600: acc=0.7708423913043478
76800: acc=0.7751822916666666
80000: acc=0.7790625
83200: acc=0.7825
86400: acc=0.7858564814814815
89600: acc=0.7890513392857142
92800: acc=0.7920474137931034
96000: acc=0.7952708333333334
99200: acc=0.7982258064516129
102400: acc=0.80099609375
105600: acc=0.8037594696969697
108800: acc=0.8060569852941176
%% Cell type:markdown id: tags:
## Long Short Term Memory (LSTM)
One of the main problems of classical RNNs is the so-called **vanishing gradients** problem. Because RNNs are trained end-to-end in one backpropagation pass, they have a hard time propagating error back to the first layers of the network, and thus the network cannot learn relationships between distant tokens. One of the ways to avoid this problem is to introduce **explicit state management** by using so-called **gates**. The two best-known architectures of this kind are **Long Short Term Memory** (LSTM) and **Gated Recurrent Unit** (GRU).
![Image showing an example long short term memory cell](./images/long-short-term-memory-cell.svg)
An LSTM network is organized in a manner similar to an RNN, but there are two states being passed from layer to layer: the actual state $c$, and the hidden vector $h$. At each unit, the hidden vector $h_i$ is concatenated with the input $x_i$, and together they control what happens to the state $c$ via **gates**. Each gate is a neural network with sigmoid activation (output in the range $[0,1]$), which can be thought of as a bitwise mask when multiplied by the state vector. There are the following gates (from left to right on the picture above):
* **forget gate** takes the hidden vector and determines which components of the vector $c$ we need to forget, and which to pass through.
* **input gate** takes some information from the input and hidden vector, and inserts it into the state.
* **output gate** passes the new state through a $\tanh$ activation, then selects some of its components using the hidden vector $h_i$ to produce the new hidden vector $h_{i+1}$.
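In the commonly used reference formulation (the notation may differ slightly from the diagram above), the gate and state updates are:

$$
\begin{aligned}
f_t &= \sigma(W_f\,[h_{t-1},x_t]+b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i\,[h_{t-1},x_t]+b_i) && \text{(input gate)}\\
\tilde{c}_t &= \tanh(W_c\,[h_{t-1},x_t]+b_c) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(new state)}\\
o_t &= \sigma(W_o\,[h_{t-1},x_t]+b_o) && \text{(output gate)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(new hidden vector)}
\end{aligned}
$$

where $\sigma$ is the sigmoid function and $\odot$ denotes element-wise multiplication.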
Components of the state $c$ can be thought of as flags that can be switched on and off. For example, when we encounter the name *Alice* in the sequence, we may want to assume that it refers to a female character, and raise the flag in the state that says we have a feminine noun in the sentence. When we further encounter the phrase *and Tom*, we will raise the flag that we have a plural noun. Thus by manipulating the state we can supposedly keep track of the grammatical properties of sentence parts.
> **Note**: A great resource for understanding the internals of LSTM is the article [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) by Christopher Olah.
While the internal structure of an LSTM cell may look complex, PyTorch hides this implementation inside the `LSTMCell` class, and provides the `LSTM` object to represent the whole LSTM layer. Thus, the implementation of an LSTM classifier will be pretty similar to the simple RNN we have seen above:
%% Cell type:code id: tags:
``` python
class LSTMClassifier(torch.nn.Module):
def __init__(self, vocab_size, embed_dim, hidden_dim, num_class):
super().__init__()
self.hidden_dim = hidden_dim
self.embedding = torch.nn.Embedding(vocab_size, embed_dim)
self.embedding.weight.data = torch.randn_like(self.embedding.weight.data)-0.5
self.rnn = torch.nn.LSTM(embed_dim,hidden_dim,batch_first=True)
self.fc = torch.nn.Linear(hidden_dim, num_class)
def forward(self, x):
batch_size = x.size(0)
x = self.embedding(x)
x,(h,c) = self.rnn(x)
return self.fc(h[-1])
```
%% Cell type:markdown id: tags:
Now let's train our network. Note that training an LSTM is also quite slow, and you may not see much rise in accuracy at the beginning of training. Also, you may need to play with the `lr` learning rate parameter to find a value that gives reasonable training speed, yet does not make training unstable.
%% Cell type:code id: tags:
``` python
net = LSTMClassifier(vocab_size,64,32,len(classes)).to(device)
train_epoch(net,train_loader, lr=0.001)
```
%% Output
3200: acc=0.259375
6400: acc=0.25859375
9600: acc=0.26177083333333334
12800: acc=0.2784375
16000: acc=0.313
19200: acc=0.3528645833333333
22400: acc=0.3965625
25600: acc=0.4385546875
28800: acc=0.4752777777777778
32000: acc=0.505375
35200: acc=0.5326704545454546
38400: acc=0.5557552083333334
41600: acc=0.5760817307692307
44800: acc=0.5954910714285714
48000: acc=0.6118333333333333
51200: acc=0.62681640625
54400: acc=0.6404779411764706
57600: acc=0.6520138888888889
60800: acc=0.662828947368421
64000: acc=0.673546875
67200: acc=0.6831547619047619
70400: acc=0.6917897727272727
73600: acc=0.6997146739130434
76800: acc=0.707109375
80000: acc=0.714075
83200: acc=0.7209134615384616
86400: acc=0.727037037037037
89600: acc=0.7326674107142858
92800: acc=0.7379633620689655
96000: acc=0.7433645833333333
99200: acc=0.7479032258064516
102400: acc=0.752119140625
105600: acc=0.7562405303030303
108800: acc=0.76015625
112000: acc=0.7641339285714286
115200: acc=0.7677777777777778
118400: acc=0.7711233108108108
(0.03487814127604167, 0.7728)
%% Cell type:markdown id: tags:
## Packed sequences
In our example, we had to pad all sequences in the minibatch with zero vectors. While this results in some memory waste, with RNNs it is more critical that additional RNN cells are created for the padded input items. These cells take part in training, yet do not carry any important input information. It would be much better to train the RNN only on the actual sequence length.
To do that, a special format of padded-sequence storage is introduced in PyTorch. Suppose we have an input padded minibatch which looks like this:
```
[[1,2,3,4,5],
[6,7,8,0,0],
[9,0,0,0,0]]
```
Here 0 represents padded values, and the actual length vector of the input sequences is `[5,3,1]`.
In order to effectively train an RNN with padded sequences, we want to begin training the first group of RNN cells with a large minibatch (`[1,6,9]`), but then end processing of the third sequence, and continue training with shorter minibatches (`[2,7]`, `[3,8]`), and so on. Thus, a packed sequence is represented as one vector - in our case `[1,6,9,2,7,3,8,4,5]` - plus a length vector (`[5,3,1]`), from which we can easily reconstruct the original padded minibatch.
To produce a packed sequence, we can use the `torch.nn.utils.rnn.pack_padded_sequence` function. All recurrent layers, including RNN, LSTM and GRU, support packed sequences as input, and produce packed output, which can be decoded using `torch.nn.utils.rnn.pad_packed_sequence`.
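For example, here is a small illustrative demonstration of packing the toy minibatch above; the `data` field of the packed sequence is exactly the flattened vector `[1,6,9,2,7,3,8,4,5]`:

``` python
import torch

padded = torch.tensor([[1,2,3,4,5],
                       [6,7,8,0,0],
                       [9,0,0,0,0]])
lengths = torch.tensor([5,3,1])

packed = torch.nn.utils.rnn.pack_padded_sequence(padded, lengths, batch_first=True)
print(packed.data)          # tensor([1, 6, 9, 2, 7, 3, 8, 4, 5])
print(packed.batch_sizes)   # tensor([3, 2, 2, 1, 1]) - minibatch size at each time step

# unpacking restores the original padded minibatch together with the lengths
unpacked, restored_lengths = torch.nn.utils.rnn.pad_packed_sequence(packed, batch_first=True)
```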
To be able to produce a packed sequence, we need to pass the length vector to the network, and thus we need a different function to prepare minibatches:
%% Cell type:code id: tags:
``` python
def pad_length(b):
# build vectorized sequence
v = [encode(x[1]) for x in b]
# compute max length of a sequence in this minibatch and length sequence itself
len_seq = list(map(len,v))
l = max(len_seq)
return ( # tuple of three tensors - labels, padded features, length sequence
torch.LongTensor([t[0]-1 for t in b]),
torch.stack([torch.nn.functional.pad(torch.tensor(t),(0,l-len(t)),mode='constant',value=0) for t in v]),
torch.tensor(len_seq)
)
train_loader_len = torch.utils.data.DataLoader(train_dataset, batch_size=16, collate_fn=pad_length, shuffle=True)
```
%% Cell type:markdown id: tags:
The actual network would be very similar to the `LSTMClassifier` above, but the `forward` pass will receive both the padded minibatch and the vector of sequence lengths. After computing the embedding, we compute the packed sequence, pass it to the LSTM layer, and then unpack the result back.
> **Note**: We actually do not use the unpacked result `x`, because we use the output from the hidden layers in the following computations. Thus, we can remove the unpacking from this code altogether. The reason we place it here is for you to be able to modify this code easily, in case you need to use the network output in further computations.
%% Cell type:code id: tags:
``` python
class LSTMPackClassifier(torch.nn.Module):
def __init__(self, vocab_size, embed_dim, hidden_dim, num_class):
super().__init__()
self.hidden_dim = hidden_dim
self.embedding = torch.nn.Embedding(vocab_size, embed_dim)
self.embedding.weight.data = torch.randn_like(self.embedding.weight.data)-0.5
self.rnn = torch.nn.LSTM(embed_dim,hidden_dim,batch_first=True)
self.fc = torch.nn.Linear(hidden_dim, num_class)
def forward(self, x, lengths):
batch_size = x.size(0)
x = self.embedding(x)
pad_x = torch.nn.utils.rnn.pack_padded_sequence(x,lengths,batch_first=True,enforce_sorted=False)
pad_x,(h,c) = self.rnn(pad_x)
x, _ = torch.nn.utils.rnn.pad_packed_sequence(pad_x,batch_first=True)
return self.fc(h[-1])
```
%% Cell type:markdown id: tags:
Now let's do the training:
%% Cell type:code id: tags:
``` python
net = LSTMPackClassifier(vocab_size,64,32,len(classes)).to(device)
train_epoch_emb(net,train_loader_len, lr=0.001,use_pack_sequence=True)
```
%% Output
3200: acc=0.285625
6400: acc=0.33359375
9600: acc=0.3876041666666667
12800: acc=0.44078125
16000: acc=0.4825
19200: acc=0.5235416666666667
22400: acc=0.5559821428571429
25600: acc=0.58609375
28800: acc=0.6116666666666667
32000: acc=0.63340625
35200: acc=0.6525284090909091
38400: acc=0.668515625
41600: acc=0.6822596153846154
44800: acc=0.6948214285714286
48000: acc=0.7052708333333333
51200: acc=0.71521484375
54400: acc=0.7239889705882353
57600: acc=0.7315277777777778
60800: acc=0.7388486842105263
64000: acc=0.74571875
67200: acc=0.7518303571428572
70400: acc=0.7576988636363636
73600: acc=0.7628940217391305
76800: acc=0.7681510416666667
80000: acc=0.7728125
83200: acc=0.7772235576923077
86400: acc=0.7815393518518519
89600: acc=0.7857700892857142
92800: acc=0.7895043103448276
96000: acc=0.7930520833333333
99200: acc=0.7959072580645161
102400: acc=0.798994140625
105600: acc=0.802064393939394
108800: acc=0.8051378676470589
112000: acc=0.8077857142857143
115200: acc=0.8104600694444445
118400: acc=0.8128293918918919
(0.029785829671223958, 0.8138166666666666)
%% Cell type:markdown id: tags:
> **Note:** You may have noticed the parameter `use_pack_sequence` that we pass to the training function. Currently, the `pack_padded_sequence` function requires the length sequence tensor to be on the CPU device, and thus the training function needs to avoid moving the length sequence data to the GPU during training. You can look into the implementation of the `train_epoch_emb` function in the [`torchnlp.py`](torchnlp.py) file.
%% Cell type:markdown id: tags:
## Bidirectional and multilayer RNNs
In our examples, all recurrent networks operated in one direction, from the beginning of a sequence to its end. This seems natural, because it resembles the way we read or listen to speech. However, since in many practical cases we have random access to the input sequence, it might make sense to run the recurrent computation in both directions. Such networks are called **bidirectional** RNNs, and they can be created by passing the `bidirectional=True` parameter to the RNN/LSTM/GRU constructor.
When dealing with a bidirectional network, we need two hidden state vectors, one for each direction. PyTorch encodes those vectors as one vector of twice the size, which is quite convenient, because you would normally pass the resulting hidden state to a fully-connected linear layer, and you just need to take this increase in size into account when creating the layer.
A recurrent network, one-directional or bidirectional, captures certain patterns within a sequence, and can store them in the state vector or pass them into the output. As with convolutional networks, we can build another recurrent layer on top of the first one to capture higher-level patterns, built from the low-level patterns extracted by the first layer. This leads us to the notion of a **multi-layer RNN**, which consists of two or more recurrent networks, where the output of the previous layer is passed to the next layer as input.
![Image showing a multilayer long short term memory RNN](images/multi-layer-lstm.jpg)
*Picture from [this wonderful post](https://towardsdatascience.com/from-a-lstm-cell-to-a-multilayer-lstm-network-with-pytorch-2899eb5696f3) by Fernando López*
PyTorch makes constructing such networks an easy task, because you just need to pass the `num_layers` parameter to the RNN/LSTM/GRU constructor to build several layers of recurrence automatically. This also means that the hidden/state tensor returned by the layer grows accordingly, with one slice per layer (and per direction), and you need to take this into account when handling the output of recurrent layers.
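As a small sketch (with toy shapes, not part of the original lesson code), this is one way to inspect the shapes of a two-layer bidirectional LSTM and pick out the final forward and backward states of the top layer for a classifier:

``` python
lstm = torch.nn.LSTM(input_size=64, hidden_size=32,
                     num_layers=2, bidirectional=True, batch_first=True)
emb = torch.randn(16, 50, 64)   # (batch, seq_len, embed_dim)
out, (h, c) = lstm(emb)
print(out.shape)   # torch.Size([16, 50, 64]) - top layer outputs, both directions concatenated (2*32)
print(h.shape)     # torch.Size([4, 16, 32])  - one slice per layer and direction

# final forward and backward states of the top layer, concatenated for a linear classifier
final = torch.cat([h[-2], h[-1]], dim=1)   # shape (16, 64), so the classifier needs 2*hidden_dim inputs
```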
%% Cell type:markdown id: tags:
## RNNs for other tasks
In this unit, we have seen that RNNs can be used for sequence classification, but in fact, they can handle many more tasks, such as text generation, machine translation, and more. We will consider those tasks in the next unit.