Tutorials on getting started with PyTorch and TorchText for sentiment analysis.

Note: This repo is currently being updated for the new torchtext API!

As of November 2020, the new torchtext experimental API - which will replace the current API - is in development. To maintain legacy support, the implementations below will not be removed, but will probably be moved to a legacy folder at some point. Updated tutorials using the new API are currently being written; as the new API is not yet finalized, these are subject to change, but I will do my best to keep them up to date. The new tutorials are located in the experimental folder, and require PyTorch 1.7, Python 3.8 and torchtext built from the master branch - not installed via pip - see the README in the torchtext repo for instructions on how to build torchtext from master.

If you have any feedback in regards to them, please submit an issue with the word "experimental" somewhere in the title.

PyTorch Sentiment Analysis

This repo contains tutorials covering how to perform sentiment analysis using PyTorch 1.7 and torchtext 0.8 using Python 3.8.

The first 2 tutorials will cover getting started with the de facto approach to sentiment analysis: recurrent neural networks (RNNs). The third notebook covers the FastText model and the fourth covers a convolutional neural network (CNN) model. The fifth extends the CNN to multi-class classification, and the sixth uses the transformers library with a pre-trained BERT model.

There are also 3 bonus "appendix" notebooks. The first covers loading your own datasets with TorchText, the second contains a brief look at the pre-trained word embeddings provided by TorchText, and the third covers loading, saving and freezing your own word embeddings.

If you find any mistakes or disagree with any of the explanations, please do not hesitate to submit an issue. I welcome any feedback, positive or negative!

Getting Started

To install PyTorch, see installation instructions on the PyTorch website.

To install TorchText:

pip install torchtext

We'll also make use of spaCy to tokenize our data. To install spaCy, follow the instructions here, making sure to install the English models with:

python -m spacy download en

For tutorial 6, we'll use the transformers library, which can be installed via:

pip install transformers

These tutorials were created using version 1.2 of the transformers library.

Tutorials

  • 1 - Simple Sentiment Analysis Open In Colab

    This tutorial covers the workflow of a PyTorch with TorchText project. We'll learn how to: load data, create train/test/validation splits, build a vocabulary, create data iterators, define a model and implement the train/evaluate/test loop. The model will be simple and achieve poor performance, but this will be improved in the subsequent tutorials.
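    The train/evaluate pattern at the heart of that loop can be sketched in plain PyTorch. This is a toy illustration only - a random dataset and a bare linear model stand in for the real IMDB data and sentiment model, and all names here are illustrative:

    ```python
    import torch
    import torch.nn as nn

    # Toy stand-ins for the real dataset: random "sentence" vectors, binary labels.
    torch.manual_seed(0)
    features = torch.randn(32, 100)          # 32 examples, 100-dim inputs
    labels = torch.randint(0, 2, (32,)).float()

    model = nn.Linear(100, 1)                # placeholder for the sentiment model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.BCEWithLogitsLoss()       # binary sentiment: loss on raw logits

    def train_step(model, x, y):
        model.train()                        # enable training-mode behaviour (e.g. dropout)
        optimizer.zero_grad()                # clear gradients from the previous step
        loss = criterion(model(x).squeeze(1), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    def evaluate(model, x, y):
        model.eval()                         # disable dropout etc. for evaluation
        with torch.no_grad():                # no gradients needed when evaluating
            return criterion(model(x).squeeze(1), y).item()

    first = evaluate(model, features, labels)
    for _ in range(20):
        train_step(model, features, labels)
    assert evaluate(model, features, labels) < first   # loss drops on the toy data
    ```

    The tutorials wrap the same pattern around real data iterators and track accuracy alongside the loss.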

  • 2 - Upgraded Sentiment Analysis Open In Colab

    Now that we have the basic workflow covered, this tutorial will focus on improving our results. We'll cover: using packed padded sequences, loading and using pre-trained word embeddings, different optimizers, different RNN architectures, bi-directional RNNs, multi-layer (aka deep) RNNs and regularization.
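    Packed padded sequences are the least intuitive item on that list, so here is a minimal sketch (random tensors stand in for embedded sentences) of how packing lets the RNN skip padding tokens:

    ```python
    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

    torch.manual_seed(0)
    # A padded batch: 3 sequences with true lengths 5, 3 and 2, embedding dim 8.
    batch = torch.randn(5, 3, 8)             # [max_len, batch, emb_dim]
    lengths = torch.tensor([5, 3, 2])        # must be sorted (descending) by default

    rnn = nn.RNN(input_size=8, hidden_size=16)

    # Packing tells the RNN to process only the real tokens, not the padding.
    packed = pack_padded_sequence(batch, lengths)
    packed_out, hidden = rnn(packed)

    # Unpack back to a padded tensor for downstream use.
    output, out_lengths = pad_packed_sequence(packed_out)
    assert output.shape == (5, 3, 16)
    assert torch.equal(out_lengths, lengths)
    ```

    The payoff is that `hidden` holds the state from each sequence's last *real* token, rather than from trailing padding.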

  • 3 - Faster Sentiment Analysis Open In Colab

    After we've covered all the fancy upgrades to RNNs, we'll look at a different approach that does not use RNNs. More specifically, we'll implement the model from Bag of Tricks for Efficient Text Classification. This simple model achieves performance comparable to the Upgraded Sentiment Analysis notebook, but trains much faster.
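    A key trick from that paper is augmenting each example with its bigrams before embedding. One plain-Python way to do this (the notebook's own helper may differ in details):

    ```python
    def generate_bigrams(tokens):
        """Append every pair of adjacent tokens to the token list (FastText's n-gram trick)."""
        bigrams = [' '.join(pair) for pair in zip(tokens, tokens[1:])]
        return tokens + bigrams

    print(generate_bigrams(['this', 'film', 'is', 'terrible']))
    # ['this', 'film', 'is', 'terrible', 'this film', 'film is', 'is terrible']
    ```

    The model then just averages the embeddings of all tokens and bigrams, which is why it trains so quickly.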

  • 4 - Convolutional Sentiment Analysis Open In Colab

    Next, we'll cover convolutional neural networks (CNNs) for sentiment analysis. This model will be an implementation of Convolutional Neural Networks for Sentence Classification.
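    The core idea of that paper - filters of several widths sliding over word embeddings, followed by max-pooling - can be sketched as follows, with random tensors standing in for an embedded batch and illustrative sizes:

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    emb_dim, n_filters = 8, 4
    embedded = torch.randn(2, 1, 10, emb_dim)   # [batch, channel, sent_len, emb_dim]

    # One convolution per n-gram size: each filter spans the full embedding dim,
    # so it slides along the sentence looking at 3, 4 or 5 words at a time.
    convs = nn.ModuleList(
        [nn.Conv2d(1, n_filters, (fs, emb_dim)) for fs in (3, 4, 5)]
    )

    conved = [F.relu(conv(embedded)).squeeze(3) for conv in convs]
    # Max-pool over the sentence dimension, keeping each filter's strongest match.
    pooled = [F.max_pool1d(c, c.shape[2]).squeeze(2) for c in conved]
    cat = torch.cat(pooled, dim=1)              # [batch, 3 * n_filters]
    assert cat.shape == (2, 12)
    ```

    The concatenated vector `cat` is then passed through a linear layer to produce the sentiment prediction.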

  • 5 - Multi-class Sentiment Analysis Open In Colab

    Then we'll cover the case where we have more than 2 classes, as is common in NLP. We'll be using the CNN model from the previous notebook and a new dataset which has 6 classes.
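    Moving from 2 classes to 6 mainly changes the loss and the prediction step. A small sketch with random logits standing in for model output:

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    logits = torch.randn(4, 6)                  # model output: 4 examples, 6 classes
    labels = torch.tensor([0, 2, 5, 1])

    # With more than 2 classes, CrossEntropyLoss on raw logits replaces
    # the BCEWithLogitsLoss used for binary sentiment.
    criterion = nn.CrossEntropyLoss()
    loss = criterion(logits, labels)

    # The prediction is the highest-scoring class, not a thresholded sigmoid.
    preds = logits.argmax(dim=1)
    assert preds.shape == (4,)
    ```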

  • 6 - Transformers for Sentiment Analysis Open In Colab

    Finally, we'll show how to use the transformers library to load a pre-trained transformer model, specifically the BERT model from this paper, and use it to provide the embeddings for text. These embeddings can be fed into any model to predict sentiment; here, we use a gated recurrent unit (GRU).
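    The shape of that pipeline can be sketched without downloading BERT - below, random tensors stand in for the frozen BERT output (the real notebook obtains them from the transformers library):

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    # Stand-in for frozen BERT output: [batch, seq_len, 768 hidden dims].
    bert_embeddings = torch.randn(2, 12, 768)

    gru = nn.GRU(768, 256, batch_first=True, bidirectional=True)
    fc = nn.Linear(2 * 256, 1)                  # 2x hidden size: forward + backward

    _, hidden = gru(bert_embeddings)            # hidden: [2 directions, batch, 256]
    # Concatenate the final forward and backward hidden states per example.
    hidden = torch.cat((hidden[-2], hidden[-1]), dim=1)
    logit = fc(hidden)                          # one sentiment logit per example
    assert logit.shape == (2, 1)
    ```

    Because BERT's weights stay frozen, only the small GRU head is trained.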

Appendices

  • A - Using TorchText with your Own Datasets Open In Colab

    The tutorials use TorchText's built in datasets. This first appendix notebook covers how to load your own datasets using TorchText.

  • B - A Closer Look at Word Embeddings Open In Colab

    This appendix notebook takes a brief look at the pre-trained word embeddings provided by TorchText, using them to find similar words and to implement a basic spelling error corrector based entirely on word embeddings.
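    The "similar words" idea boils down to nearest-neighbour search in embedding space. A toy sketch with a made-up four-word vocabulary and random vectors (the notebook uses TorchText's pre-trained vectors):

    ```python
    import torch

    torch.manual_seed(0)
    # Toy embedding table; illustrative only.
    vocab = ['good', 'great', 'terrible', 'film']
    vectors = torch.randn(4, 10)

    def closest_words(word, n=2):
        """Rank the vocabulary by Euclidean distance to `word`'s vector."""
        dists = torch.norm(vectors - vectors[vocab.index(word)], dim=1)
        order = dists.argsort()                 # ascending: nearest first
        return [vocab[i] for i in order[:n]]

    # The nearest word to any word is itself (distance zero).
    assert closest_words('good')[0] == 'good'
    ```

    The spelling corrector works the same way: a misspelling's vector tends to sit closest to the correctly spelled word.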

  • C - Loading, Saving and Freezing Embeddings Open In Colab

    In this notebook we cover: how to load custom word embeddings, how to freeze and unfreeze word embeddings whilst training our models and how to save our learned embeddings so they can be used in another model.
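    The freezing step itself is a one-liner in PyTorch - a minimal sketch with illustrative sizes:

    ```python
    import torch.nn as nn

    embedding = nn.Embedding(100, 50)           # 100-word vocab, 50-dim vectors

    # Freeze: exclude the embedding weights from gradient updates.
    embedding.weight.requires_grad = False
    assert not embedding.weight.requires_grad

    # Unfreeze later (e.g. after a few epochs) to fine-tune the embeddings.
    embedding.weight.requires_grad = True
    assert embedding.weight.requires_grad
    ```

    When the flag is False, the optimizer never updates the embedding table, which is useful when training data is scarce and pre-trained vectors are already good.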

References

Here are some things I looked at while making these tutorials. Some of it may be out of date.