{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 3 - Faster Sentiment Analysis\n",
"\n",
"In the previous notebook, we managed to achieve a decent test accuracy of ~85% using all of the common techniques used for sentiment analysis. In this notebook, we'll implement a model that achieves comparable results a lot faster. More specifically, we'll be implementing the \"FastText\" model from the paper [Bag of Tricks for Efficient Text Classification](https://arxiv.org/abs/1607.01759).\n",
"\n",
"\n",
"This will allow us to achieve the same ~85% test accuracy as the last model, but much faster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Preparing Data\n",
"\n",
"One of the key concepts in the FastText paper is that they calculate the n-grams of an input sentence and append them to the end of a sentence. Here, we'll use bi-grams. Briefly, a bi-gram is a pair of words/tokens that appear consecutively within a sentence. \n",
|
|
"\n",
|
|
"For example, in the sentence \"how are you ?\", the bi-grams are: \"how are\", \"are you\" and \"you ?\".\n",
|
|
"\n",
|
|
"The `generate_bigrams` function takes a sentence that has already been tokenized, calculates the bi-grams and appends them to the end of the tokenized list."
|
|
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"def generate_bigrams(x):\n",
"    #zip the token list with itself shifted by one to get every consecutive pair,\n",
"    #keeping only the unique bi-grams via a set\n",
"    n_grams = set(zip(*[x[i:] for i in range(2)]))\n",
"    for n_gram in n_grams:\n",
"        #append each bi-gram, joined into a single token, to the original list\n",
"        x.append(' '.join(n_gram))\n",
"    return x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['This', 'film', 'is', 'terrible', 'film is', 'This film', 'is terrible']"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_bigrams(['This', 'film', 'is', 'terrible'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"TorchText `Field`s have a `preprocessing` argument. A function passed here will be applied to a sentence after it has been tokenized (transformed from a string into a list of tokens), but before it has been indexed (transformed from a list of tokens into a list of integers). Here, we pass our `generate_bigrams` function."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torchtext import data\n",
"from torchtext import datasets\n",
"\n",
"torch.manual_seed(1234)\n",
"\n",
"TEXT = data.Field(tokenize='spacy', preprocessing=generate_bigrams)\n",
"LABEL = data.LabelField(tensor_type=torch.FloatTensor)"
]
},
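{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check (an addition, not part of the original notebook), we can call torchtext's legacy `Field.preprocess` helper directly on a raw string to confirm that tokenization and `generate_bigrams` are applied exactly as they will be when the dataset is processed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#tokenizes the string with spaCy, then appends its bi-grams via our preprocessing function\n",
"TEXT.preprocess('how are you ?')"
]
},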
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, we load the IMDb dataset and create the splits."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"train, test = datasets.IMDB.splits(TEXT, LABEL)\n",
"\n",
"train, valid = train.split()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Build the vocab and load the pre-trained word embeddings."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"defaultdict(<function _default_unk_index at 0x7f4feb020158>, {'neg': 0, 'pos': 1})\n"
]
}
],
"source": [
"TEXT.build_vocab(train, max_size=25000, vectors=\"glove.6B.100d\")\n",
"LABEL.build_vocab(train)\n",
"\n",
"print(LABEL.vocab.stoi)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And create the iterators."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"BATCH_SIZE = 64\n",
"\n",
"train_iter, valid_iter, test_iter = data.BucketIterator.splits(\n",
"    (train, valid, test), \n",
"    batch_size=BATCH_SIZE, \n",
"    sort_key=lambda x: len(x.text), \n",
"    repeat=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build the Model\n",
"\n",
"This model has far fewer parameters than the previous model, as only 2 of its layers have any parameters: the embedding layer and the linear layer. There is no RNN component in sight!\n",
"\n",
"Instead, it first calculates the word embedding for each word using the `Embedding` layer, then calculates the average of all of the word embeddings and feeds this through the `Linear` layer, and that's it!\n",
"\n",
"![](https://i.imgur.com/e0sWZoZ.png)\n",
"\n",
"We implement the averaging with the `avg_pool2d` (average pool in 2 dimensions) function. Using 2-dimensional pooling may initially seem strange: surely our sentences are 1-dimensional, not 2-dimensional? However, you can think of the word embeddings as a 2-dimensional grid, where the words are along one axis and the dimensions of the word embeddings are along the other. The image below shows an example sentence after being converted into 5-dimensional word embeddings, with the words along the vertical axis and the embedding dimensions along the horizontal axis.\n",
"\n",
"![](https://i.imgur.com/SSH25NT.png)\n",
"\n",
"The `avg_pool2d` slides a filter of size `embedded.shape[1]` (i.e. the length of the sentence) by 1 over this grid. The filter is shown in pink in the image below.\n",
"\n",
"![](https://i.imgur.com/U7eRnIe.png)\n",
"\n",
"The filter averages over all of the words for each embedding dimension, producing a 5-dimensional (in our pictorial example; 100-dimensional in the actual code) tensor for each sentence. This tensor is then passed through the linear layer to produce our prediction."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"\n",
"class FastText(nn.Module):\n",
"    def __init__(self, vocab_size, embedding_dim, output_dim):\n",
"        super().__init__()\n",
"        \n",
"        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
"        self.fc = nn.Linear(embedding_dim, output_dim)\n",
"        \n",
"    def forward(self, x):\n",
"        \n",
"        #x = [sent len, batch size]\n",
"        \n",
"        embedded = self.embedding(x)\n",
"        \n",
"        #embedded = [sent len, batch size, emb dim]\n",
"        \n",
"        embedded = embedded.permute(1, 0, 2)\n",
"        \n",
"        #embedded = [batch size, sent len, emb dim]\n",
"        \n",
"        pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1)\n",
"        \n",
"        #pooled = [batch size, emb dim]\n",
"        \n",
"        return self.fc(pooled)"
]
},
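{
"cell_type": "markdown",
"metadata": {},
"source": [
"To convince ourselves that this pooling does what we claim, we can run a quick sanity check (an addition, not part of the original notebook): apply the same `avg_pool2d` call to a small random tensor and verify that it matches a plain mean over the sentence-length dimension."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"#a toy batch: 4 sentences, 7 tokens each, 5-dimensional embeddings\n",
"toy = torch.randn(4, 7, 5)\n",
"\n",
"#average pool with a (sent len x 1) filter, then remove the pooled dimension\n",
"toy_pooled = F.avg_pool2d(toy, (toy.shape[1], 1)).squeeze(1)\n",
"\n",
"#this is the same as simply averaging over the token dimension\n",
"print(torch.allclose(toy_pooled, toy.mean(dim=1)))\n",
"print(toy_pooled.shape)"
]
},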
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As previously, we'll create an instance of our `FastText` class."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"INPUT_DIM = len(TEXT.vocab)\n",
"EMBEDDING_DIM = 100\n",
"OUTPUT_DIM = 1\n",
"\n",
"model = FastText(INPUT_DIM, EMBEDDING_DIM, OUTPUT_DIM)"
]
},
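{
"cell_type": "markdown",
"metadata": {},
"source": [
"To back up the claim that this model has far fewer parameters, we can count them with a small helper (an addition, not part of the original notebook). The total should be dominated by the embedding layer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def count_parameters(model):\n",
"    #sum the number of elements in every trainable parameter tensor\n",
"    return sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
"\n",
"print(f'The model has {count_parameters(model):,} trainable parameters')\n",
"print(f'{model.embedding.weight.numel():,} of these are in the embedding layer')"
]
},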
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And copy the pre-trained vectors to our embedding layer."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n",
"        [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n",
"        [-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],\n",
"        ...,\n",
"        [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n",
"        [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n",
"        [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pretrained_embeddings = TEXT.vocab.vectors\n",
"\n",
"model.embedding.weight.data.copy_(pretrained_embeddings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training the model is the exact same as last time.\n",
|
|
"\n",
|
|
"We initialize our optimizer..."
|
|
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"import torch.optim as optim\n",
"\n",
"optimizer = optim.Adam(model.parameters())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We define the criterion and place the model and criterion on the GPU (if available)..."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"criterion = nn.BCEWithLogitsLoss()\n",
"\n",
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
"\n",
"model = model.to(device)\n",
"criterion = criterion.to(device)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We implement the function to calculate accuracy..."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"import torch.nn.functional as F\n",
"\n",
"def binary_accuracy(preds, y):\n",
"    \"\"\"\n",
"    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8\n",
"    \"\"\"\n",
"\n",
"    #round predictions to the closest integer\n",
"    rounded_preds = torch.round(F.sigmoid(preds))\n",
"    correct = (rounded_preds == y).float() #convert into float for division\n",
"    acc = correct.sum()/len(correct)\n",
"    return acc"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We define a function for training our model...\n",
"\n",
"**Note**: we are no longer using dropout so we do not need to use `model.train()`, but as mentioned in the 1st notebook, it is good practice to use it."
|
|
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"def train(model, iterator, optimizer, criterion):\n",
"    \n",
"    epoch_loss = 0\n",
"    epoch_acc = 0\n",
"    \n",
"    model.train()\n",
"    \n",
"    for batch in iterator:\n",
"        \n",
"        optimizer.zero_grad()\n",
"        \n",
"        predictions = model(batch.text).squeeze(1)\n",
"        \n",
"        loss = criterion(predictions, batch.label)\n",
"        \n",
"        acc = binary_accuracy(predictions, batch.label)\n",
"        \n",
"        loss.backward()\n",
"        \n",
"        optimizer.step()\n",
"        \n",
"        epoch_loss += loss.item()\n",
"        epoch_acc += acc.item()\n",
"        \n",
"    return epoch_loss / len(iterator), epoch_acc / len(iterator)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We define a function for testing our model...\n",
"\n",
"**Note**: again, we leave `model.eval()` even though we do not use dropout."
|
|
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"def evaluate(model, iterator, criterion):\n",
"    \n",
"    epoch_loss = 0\n",
"    epoch_acc = 0\n",
"    \n",
"    model.eval()\n",
"    \n",
"    with torch.no_grad():\n",
"    \n",
"        for batch in iterator:\n",
"\n",
"            predictions = model(batch.text).squeeze(1)\n",
"            \n",
"            loss = criterion(predictions, batch.label)\n",
"            \n",
"            acc = binary_accuracy(predictions, batch.label)\n",
"\n",
"            epoch_loss += loss.item()\n",
"            epoch_acc += acc.item()\n",
"        \n",
"    return epoch_loss / len(iterator), epoch_acc / len(iterator)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we train our model..."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/ben/.conda/envs/pytorch04/lib/python3.6/site-packages/torchtext/data/field.py:322: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.\n",
"  return Variable(arr, volatile=not train)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch: 01, Train Loss: 0.687, Train Acc: 57.60%, Val. Loss: 0.634660, Val. Acc: 71.32%\n",
"Epoch: 02, Train Loss: 0.649, Train Acc: 72.87%, Val. Loss: 0.512169, Val. Acc: 76.34%\n",
"Epoch: 03, Train Loss: 0.575, Train Acc: 79.32%, Val. Loss: 0.445301, Val. Acc: 79.78%\n",
"Epoch: 04, Train Loss: 0.498, Train Acc: 83.25%, Val. Loss: 0.403604, Val. Acc: 83.26%\n",
"Epoch: 05, Train Loss: 0.433, Train Acc: 86.34%, Val. Loss: 0.386188, Val. Acc: 85.46%\n"
]
}
],
"source": [
"N_EPOCHS = 5\n",
"\n",
"for epoch in range(N_EPOCHS):\n",
"\n",
"    train_loss, train_acc = train(model, train_iter, optimizer, criterion)\n",
"    valid_loss, valid_acc = evaluate(model, valid_iter, criterion)\n",
"    \n",
"    print(f'Epoch: {epoch+1:02}, Train Loss: {train_loss:.3f}, Train Acc: {train_acc*100:.2f}%, Val. Loss: {valid_loss:.3f}, Val. Acc: {valid_acc*100:.2f}%')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"...and get the test accuracy!\n",
"\n",
"The results are comparable to the results in the last notebook, but training takes considerably less time."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/ben/.conda/envs/pytorch04/lib/python3.6/site-packages/torchtext/data/field.py:322: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.\n",
"  return Variable(arr, volatile=not train)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test Loss: 0.396, Test Acc: 85.06%\n"
]
}
],
"source": [
"test_loss, test_acc = evaluate(model, test_iter, criterion)\n",
"\n",
"print(f'Test Loss: {test_loss:.3f}, Test Acc: {test_acc*100:.2f}%')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## User Input\n",
"\n",
"And as before, we can test on any input the user provides."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"import spacy\n",
|
|
"nlp = spacy.load('en')\n",
|
|
"\n",
|
|
"def predict_sentiment(sentence):\n",
|
|
" tokenized = [tok.text for tok in nlp.tokenizer(sentence)]\n",
|
|
" indexed = [TEXT.vocab.stoi[t] for t in tokenized]\n",
|
|
" tensor = torch.LongTensor(indexed).to(device)\n",
|
|
" tensor = tensor.unsqueeze(1)\n",
|
|
" prediction = F.sigmoid(model(tensor))\n",
|
|
" return prediction.item()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An example negative review..."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"5.865559415951793e-08"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"predict_sentiment(\"This film is terrible\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An example positive review..."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1.0"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"predict_sentiment(\"This film is great\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}