{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# B - A Closer Look at Word Embeddings\n",
"\n",
"We have very briefly covered how word embeddings (also known as word vectors) are used in the tutorials. In this appendix we'll have a closer look at these embeddings and find some (hopefully) interesting results.\n",
"\n",
"Embeddings transform a one-hot encoded vector (a vector that is 0 in elements except one, which is 1) into a much smaller dimension vector of real numbers. The one-hot encoded vector is also known as a *sparse vector*, whilst the real valued vector is known as a *dense vector*. \n",
"\n",
"The key concept in these word embeddings is that words that appear in similar _contexts_ appear nearby in the vector space, i.e. the Euclidean distance between these two word vectors is small. By context here, we mean the surrounding words. For example in the sentences \"I purchased some items at the shop\" and \"I purchased some items at the store\" the words 'shop' and 'store' appear in the same context and thus should be close together in vector space.\n",
"\n",
"You may have also heard about *word2vec*. *word2vec* is an algorithm (actually a bunch of algorithms) that calculates word vectors from a corpus. In this appendix we use *GloVe* vectors, *GloVe* being another algorithm to calculate word vectors. If you want to know how *word2vec* works, check out a two part series [here](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) and [here](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), and if you want to find out more about *GloVe*, check the website [here](https://nlp.stanford.edu/projects/glove/).\n",
"\n",
"In PyTorch, we use word vectors with the `nn.Embedding` layer, which takes a _**[sentence length, batch size]**_ tensor and transforms it into a _**[sentence length, batch size, embedding dimensions]**_ tensor.\n",
"\n",
"In tutorial 2 onwards, we also used pre-trained word embeddings (specifically the GloVe vectors) provided by TorchText. These embeddings have been trained on a gigantic corpus. We can use these pre-trained vectors within any of our models, with the idea that as they have already learned the context of each word they will give us a better starting point for our word vectors. This usually leads to faster training time and/or improved accuracy.\n",
"\n",
"In this appendix we won't be training any models, instead we'll be looking at the word embeddings and finding a few interesting things about them.\n",
"\n",
"A lot of the code from the first half of this appendix is taken from [here](https://github.com/spro/practical-pytorch/blob/master/glove-word-vectors/glove-word-vectors.ipynb). For more information about word embeddings, go [here](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/). \n",
"\n",
"## Loading the GloVe vectors\n",
"\n",
"First, we'll load the GloVe vectors. The `name` field specifies what the vectors have been trained on, here the `6B` means a corpus of 6 billion words. The `dim` argument specifies the dimensionality of the word vectors. GloVe vectors are available in 50, 100, 200 and 300 dimensions. There is also a `42B` and `840B` glove vectors, however they are only available at 300 dimensions."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"There are 400000 words in the vocabulary\n"
]
}
],
"source": [
"import torchtext.vocab\n",
"\n",
"glove = torchtext.vocab.GloVe(name = '6B', dim = 100)\n",
"\n",
"print(f'There are {len(glove.itos)} words in the vocabulary')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As shown above, there are 400,000 unique words in the GloVe vocabulary. These are the most common words found in the corpus the vectors were trained on. **In these set of GloVe vectors, every single word is lower-case only.**\n",
"\n",
"`glove.vectors` is the actual tensor containing the values of the embeddings."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([400000, 100])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"glove.vectors.shape"
]
},
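{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned in the introduction, a tensor like this can be copied into the weights of an `nn.Embedding` layer when building a model. The cell below is a minimal sketch (not used in the rest of this appendix) showing how the pre-trained vectors could initialise an embedding layer, and how that layer transforms a _**[sentence length, batch size]**_ tensor of token indices into a _**[sentence length, batch size, embedding dimensions]**_ tensor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"\n",
"#create an embedding layer with the same shape as the GloVe vectors\n",
"#and copy the pre-trained values into its weights\n",
"embedding = nn.Embedding(*glove.vectors.shape)\n",
"embedding.weight.data.copy_(glove.vectors)\n",
"\n",
"#a toy batch of token indices with shape [sentence length, batch size]\n",
"tokens = torch.LongTensor([[0, 1], [2, 3], [4, 5]])\n",
"\n",
"#the result has shape [sentence length, batch size, embedding dimensions]\n",
"embedding(tokens).shape"
]
},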
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see what word is associated with each row by checking the `itos` (int to string) list. \n",
"\n",
"Below implies that row 0 is the vector associated with the word 'the', row 1 for ',' (comma), row 2 for '.' (period), etc."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['the', ',', '.', 'of', 'to', 'and', 'in', 'a', '\"', \"'s\"]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"glove.itos[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also use the `stoi` (string to int) dictionary, in which we input a word and receive the associated integer/index. If you try get the index of a word that is not in the vocabulary, you receive an error."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"glove.stoi['the']"
]
},
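{
"cell_type": "markdown",
"metadata": {},
"source": [
"To avoid that error, we can check membership in `stoi` before looking a word up. The cell below is a small sketch which assumes `stoi` behaves like a standard Python dictionary here, so indexing it with a missing word would raise a `KeyError`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#'qwertyzxcv' is a made-up token that should not be in the vocabulary,\n",
"#so glove.stoi['qwertyzxcv'] would raise a KeyError\n",
"'the' in glove.stoi, 'qwertyzxcv' in glove.stoi"
]
},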
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can get the vector of a word by first getting the integer associated with it and then indexing into the word embedding tensor with that index."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([100])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"glove.vectors[glove.stoi['the']].shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll be doing this a lot, so we'll create a function that takes in word embeddings and a word then returns the associated vector. It'll also throw an error if the word doesn't exist in the vocabulary."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def get_vector(embeddings, word):\n",
" assert word in embeddings.stoi, f'*{word}* is not in the vocab!'\n",
" return embeddings.vectors[embeddings.stoi[word]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, we use a word to get the associated vector."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([100])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_vector(glove, 'the').shape"
]
},
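{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using `get_vector`, we can also sanity check the claim from the introduction that words used in similar contexts end up close together in vector space. The cell below is purely illustrative: we would expect the distance between 'shop' and 'store' to be noticeably smaller than the distance between 'shop' and a loosely related word such as 'theory', though the exact values depend on the vectors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"#Euclidean distances between pairs of word vectors\n",
"shop_store = torch.dist(get_vector(glove, 'shop'), get_vector(glove, 'store')).item()\n",
"shop_theory = torch.dist(get_vector(glove, 'shop'), get_vector(glove, 'theory')).item()\n",
"\n",
"shop_store, shop_theory"
]
},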
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similar Contexts\n",
"\n",
"Now to start looking at the context of different words. \n",
"\n",
"If we want to find the words similar to a certain input word, we first find the vector of this input word, then we scan through our vocabulary calculating the distance between the vector of each word and our input word vector. We then sort these from closest to furthest away.\n",
"\n",
"The function below returns the closest 10 words to an input word vector:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"def closest_words(embeddings, vector, n = 10):\n",
" \n",
" distances = [(word, torch.dist(vector, get_vector(embeddings, word)).item())\n",
" for word in embeddings.itos]\n",
" \n",
" return sorted(distances, key = lambda w: w[1])[:n]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try it out with 'korea'. The closest word is the word 'korea' itself (not very interesting), however all of the words are related in some way. Pyongyang is the capital of North Korea, DPRK is the official name of North Korea, etc.\n",
"\n",
"Interestingly, we also get 'Japan' and 'China', implies that Korea, Japan and China are frequently talked about together in similar contexts. This makes sense as they are geographically situated near each other. "
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('korea', 0.0),\n",
" ('pyongyang', 3.9039554595947266),\n",
" ('korean', 4.068886756896973),\n",
" ('dprk', 4.2631049156188965),\n",
" ('seoul', 4.340494155883789),\n",
" ('japan', 4.551243782043457),\n",
" ('koreans', 4.615609169006348),\n",
" ('south', 4.65822696685791),\n",
" ('china', 4.839518070220947),\n",
" ('north', 4.986356735229492)]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"word_vector = get_vector(glove, 'korea')\n",
"\n",
"closest_words(glove, word_vector)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at another country, India, we also get nearby countries: Thailand, Malaysia and Sri Lanka (as two separate words). Australia is relatively close to India (geographically), but Thailand and Malaysia are closer. So why is Australia closer to India in vector space? This is most probably due to India and Australia appearing in the context of [cricket](https://en.wikipedia.org/wiki/Cricket) matches together."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('india', 0.0),\n",
" ('pakistan', 3.6954822540283203),\n",
" ('indian', 4.114313125610352),\n",
" ('delhi', 4.155975818634033),\n",
" ('bangladesh', 4.261017799377441),\n",
" ('lanka', 4.435845851898193),\n",
" ('sri', 4.515716552734375),\n",
" ('australia', 4.806082725524902),\n",
" ('thailand', 4.994781017303467),\n",
" ('malaysia', 5.009334087371826)]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"word_vector = get_vector(glove, 'india')\n",
"\n",
"closest_words(glove, word_vector)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll also create another function that will nicely print out the tuples returned by our `closest_words` function."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"def print_tuples(tuples):\n",
" for w, d in tuples:\n",
" print(f'({d:02.04f}) {w}') "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A final word to look at, 'sports'. As we can see, the closest words are most of the sports themselves. "
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(0.0000) sports\n",
"(3.5875) sport\n",
"(4.4590) soccer\n",
"(4.6508) basketball\n",
"(4.6561) baseball\n",
"(4.8028) sporting\n",
"(4.8763) football\n",
"(4.9624) professional\n",
"(4.9824) entertainment\n",
"(5.0975) media\n"
]
}
],
"source": [
"word_vector = get_vector(glove, 'sports')\n",
"\n",
"print_tuples(closest_words(glove, word_vector))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analogies\n",
"\n",
"Another property of word embeddings is that they can be operated on just as any standard vector and give interesting results.\n",
"\n",
"We'll show an example of this first, and then explain it:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"def analogy(embeddings, word1, word2, word3, n=5):\n",
" \n",
" #get vectors for each word\n",
" word1_vector = get_vector(embeddings, word1)\n",
" word2_vector = get_vector(embeddings, word2)\n",
" word3_vector = get_vector(embeddings, word3)\n",
" \n",
" #calculate analogy vector\n",
" analogy_vector = word2_vector - word1_vector + word3_vector\n",
" \n",
" #find closest words to analogy vector\n",
" candidate_words = closest_words(embeddings, analogy_vector, n+3)\n",
" \n",
" #filter out words already in analogy\n",
" candidate_words = [(word, dist) for (word, dist) in candidate_words \n",
" if word not in [word1, word2, word3]][:n]\n",
" \n",
" print(f'{word1} is to {word2} as {word3} is to...')\n",
" \n",
" return candidate_words"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"man is to king as woman is to...\n",
"(4.0811) queen\n",
"(4.6429) monarch\n",
"(4.9055) throne\n",
"(4.9216) elizabeth\n",
"(4.9811) prince\n"
]
}
],
"source": [
"print_tuples(analogy(glove, 'man', 'king', 'woman'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the canonical example which shows off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'?\n",
"\n",
"If we think about it, the vector calculated from 'king' minus 'man' gives us a \"royalty vector\". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this \"royality vector\" to 'woman', this should travel to her royal equivalent, which is a queen!\n",
"\n",
"We can do this with other analogies too. For example, this gets an \"acting career vector\":"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"man is to actor as woman is to...\n",
"(2.8133) actress\n",
"(5.0039) comedian\n",
"(5.1399) actresses\n",
"(5.2773) starred\n",
"(5.3085) screenwriter\n"
]
}
],
"source": [
"print_tuples(analogy(glove, 'man', 'actor', 'woman'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a \"baby animal vector\":"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"cat is to kitten as dog is to...\n",
"(3.8146) puppy\n",
"(4.2944) rottweiler\n",
"(4.5888) puppies\n",
"(4.6086) pooch\n",
"(4.6520) pug\n"
]
}
],
"source": [
"print_tuples(analogy(glove, 'cat', 'kitten', 'dog'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A \"capital city vector\":"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"france is to paris as england is to...\n",
"(4.1426) london\n",
"(4.4938) melbourne\n",
"(4.7087) sydney\n",
"(4.7630) perth\n",
"(4.7952) birmingham\n"
]
}
],
"source": [
"print_tuples(analogy(glove, 'france', 'paris', 'england'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A \"musician's genre vector\":"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"elvis is to rock as eminem is to...\n",
"(5.6597) rap\n",
"(6.2057) rappers\n",
"(6.2161) rapper\n",
"(6.2444) punk\n",
"(6.2690) hop\n"
]
}
],
"source": [
"print_tuples(analogy(glove, 'elvis', 'rock', 'eminem'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And an \"ingredient vector\":"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"beer is to barley as wine is to...\n",
"(5.6021) grape\n",
"(5.6760) beans\n",
"(5.8174) grapes\n",
"(5.9035) lentils\n",
"(5.9454) figs\n"
]
}
],
"source": [
"print_tuples(analogy(glove, 'beer', 'barley', 'wine'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Correcting Spelling Mistakes\n",
"\n",
"Another interesting property of word embeddings is that they can actually be used to correct spelling mistakes! \n",
"\n",
"We'll put their findings into code and briefly explain them, but to read more about this, check out the [original thread](http://forums.fast.ai/t/nlp-any-libraries-dictionaries-out-there-for-fixing-common-spelling-errors/16411) and the associated [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26).\n",
"\n",
"First, we need to load up the much larger vocabulary GloVe vectors, this is due to the spelling mistakes not appearing in the smaller vocabulary. \n",
"\n",
"**Note**: these vectors are very large (~2GB), so watch out if you have a limited internet connection."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"glove = torchtext.vocab.GloVe(name = '840B', dim = 300)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Checking the vocabulary size of these embeddings, we can see we now have over 2 million unique words in our vocabulary!"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([2196017, 300])"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"glove.vectors.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As the vectors were trained with a much larger vocabulary on a larger corpus of text, the words that appear are a little different. Notice how the words 'north', 'south', 'pyongyang' and 'dprk' no longer appear in the most closest words to 'korea'."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(0.0000) korea\n",
"(3.9857) taiwan\n",
"(4.4022) korean\n",
"(4.9016) asia\n",
"(4.9593) japan\n",
"(5.0721) seoul\n",
"(5.4058) thailand\n",
"(5.6025) singapore\n",
"(5.7010) russia\n",
"(5.7240) hong\n"
]
}
],
"source": [
"word_vector = get_vector(glove, 'korea')\n",
"\n",
"print_tuples(closest_words(glove, word_vector))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Our first step to correcting spelling mistakes is looking at the vector for a misspelling of the word 'reliable'."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(0.0000) relieable\n",
"(5.0366) relyable\n",
"(5.2610) realible\n",
"(5.4719) realiable\n",
"(5.5402) relable\n",
"(5.5917) relaible\n",
"(5.6412) reliabe\n",
"(5.8802) relaiable\n",
"(5.9593) stabel\n",
"(5.9981) consitant\n"
]
}
],
"source": [
"word_vector = get_vector(glove, 'relieable')\n",
"\n",
"print_tuples(closest_words(glove, word_vector))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how the correct spelling, \"reliable\", does not appear in the top 10 closest words. Surely the misspellings of a word should appear next to the correct spelling of the word as they appear in the same context, right? \n",
"\n",
"The hypothesis is that misspellings of words are all equally shifted away from their correct spelling. This is because articles of text that contain spelling mistakes are usually written in an informal manner where correct spelling doesn't matter as much (such as tweets/blog posts), thus spelling errors will appear together as they appear in context of informal articles.\n",
"\n",
"Similar to how we created analogies before, we can create a \"correct spelling\" vector. This time, instead of using a single example to create our vector, we'll use the average of multiple examples. This will hopefully give better accuracy!\n",
"\n",
"We first create a vector for the correct spelling, 'reliable', then calculate the difference between the \"reliable vector\" and each of the 8 misspellings of 'reliable'. As we are going to concatenate these 8 misspelling tensors together we need to unsqueeze a \"batch\" dimension to them."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"reliable_vector = get_vector(glove, 'reliable')\n",
"\n",
"reliable_misspellings = ['relieable', 'relyable', 'realible', 'realiable', \n",
" 'relable', 'relaible', 'reliabe', 'relaiable']\n",
"\n",
"diff_reliable = [(reliable_vector - get_vector(glove, s)).unsqueeze(0) \n",
" for s in reliable_misspellings]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We take the average of these 8 'difference from reliable' vectors to get our \"misspelling vector\"."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"misspelling_vector = torch.cat(diff_reliable, dim = 0).mean(dim = 0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now correct other spelling mistakes using this \"misspelling vector\" by finding the closest words to the sum of the vector of a misspelled word and the \"misspelling vector\".\n",
"\n",
"For a misspelling of \"because\":"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(6.1090) because\n",
"(6.4250) even\n",
"(6.4358) fact\n",
"(6.4914) sure\n",
"(6.5094) though\n",
"(6.5601) obviously\n",
"(6.5682) reason\n",
"(6.5856) if\n",
"(6.6099) but\n",
"(6.6415) why\n"
]
}
],
"source": [
"word_vector = get_vector(glove, 'becuase')\n",
"\n",
"print_tuples(closest_words(glove, word_vector + misspelling_vector))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a misspelling of \"definitely\":"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(5.4070) definitely\n",
"(5.5643) certainly\n",
"(5.7192) sure\n",
"(5.8152) well\n",
"(5.8588) always\n",
"(5.8812) also\n",
"(5.9557) simply\n",
"(5.9667) consider\n",
"(5.9821) probably\n",
"(5.9948) definately\n"
]
}
],
"source": [
"word_vector = get_vector(glove, 'defintiely')\n",
"\n",
"print_tuples(closest_words(glove, word_vector + misspelling_vector))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a misspelling of \"consistent\":"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(5.9641) consistent\n",
"(6.3674) reliable\n",
"(7.0195) consistant\n",
"(7.0299) consistently\n",
"(7.1605) accurate\n",
"(7.2737) fairly\n",
"(7.3037) good\n",
"(7.3520) reasonable\n",
"(7.3801) dependable\n",
"(7.4027) ensure\n"
]
}
],
"source": [
"word_vector = get_vector(glove, 'consistant')\n",
"\n",
"print_tuples(closest_words(glove, word_vector + misspelling_vector))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a misspelling of \"package\":"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(6.6117) package\n",
"(6.9315) packages\n",
"(7.0195) pakage\n",
"(7.0911) comes\n",
"(7.1241) provide\n",
"(7.1469) offer\n",
"(7.1861) reliable\n",
"(7.2431) well\n",
"(7.2434) choice\n",
"(7.2453) offering\n"
]
}
],
"source": [
"word_vector = get_vector(glove, 'pakage')\n",
"\n",
"print_tuples(closest_words(glove, word_vector + misspelling_vector))"
]
},
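{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pattern above is always the same, so we could wrap it in a small convenience function. Note that `correct_spelling` below is not part of the original write-up; it is just a sketch that combines `get_vector`, the \"misspelling vector\" and `closest_words`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def correct_spelling(embeddings, word, n = 10):\n",
"    \n",
"    #shift the misspelled word's vector by the 'misspelling vector'\n",
"    #and return the closest words to the result\n",
"    word_vector = get_vector(embeddings, word)\n",
"    \n",
"    return closest_words(embeddings, word_vector + misspelling_vector, n)\n",
"\n",
"print_tuples(correct_spelling(glove, 'becuase'))"
]
},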
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a more in-depth look at this, check out the [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}