refactor of data loading

- stop trying to keep the raw datasets as generators; load them into plain lists up front
bentrevett 2020-10-13 21:54:44 +01:00
parent db59992e69
commit 69fc4b0c69
2 changed files with 217 additions and 269 deletions
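
The point of the change is easiest to see with a minimal, plain-Python sketch. The generator below stands in for the RawTextIterableDataset objects the old cells printed; nothing here is taken from the notebook beyond the list() pattern itself.

    import random

    # A generator can only be consumed once and supports neither len() nor indexing.
    def stream_examples():
        yield ('neg', 'terrible film')
        yield ('pos', 'great film')

    raw_data = stream_examples()

    # Materialise it once, up front, as this commit does with list(raw_train_data):
    raw_data = list(raw_data)

    print(len(raw_data))      # 2, so len() now works
    print(raw_data[0])        # ('neg', 'terrible film'), so does indexing
    random.shuffle(raw_data)  # and in-place shuffling for the train/valid split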

View File

@@ -64,94 +64,59 @@
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"colab_type": "code",
"id": "y-yPGXY_dFmH",
"outputId": "3b6e5a98-f073-4281-8ff7-0ec873c059ae"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<torchtext.experimental.datasets.raw.text_classification.RawTextIterableDataset object at 0x7f56d84f0ac0>\n"
]
}
],
"metadata": {},
"outputs": [],
"source": [
"print(raw_train_data)"
"raw_train_data = list(raw_train_data)\n",
"raw_test_data = list(raw_test_data)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 55
},
"colab_type": "code",
"id": "UXWtJbsXdFmO",
"outputId": "b6f80eb7-4b91-4188-a800-c328067f339e"
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(0, 'I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered \"controversial\" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\\'t have much of a plot.')\n"
]
"data": {
"text/plain": [
"('neg',\n",
" 'I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered \"controversial\" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\\'t have much of a plot.')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"raw_train_data = list(raw_train_data)\n",
"\n",
"print(raw_train_data[0])"
"raw_train_data[0]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 55
},
"colab_type": "code",
"id": "B2HoQ4VOdFmS",
"outputId": "b63ed1a5-82b8-4bf9-c590-82a371e1ec84"
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(0, 'I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn\\'t match the background, and painfully one-dimensional characters cannot be overcome with a \\'sci-fi\\' setting. (I\\'m sure there are those of you out there who think Babylon 5 is good sci-fi TV. It\\'s not. It\\'s clichéd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It\\'s really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. The makers of Earth KNOW it\\'s rubbish as they have to always say \"Gene Roddenberry\\'s Earth...\" otherwise people would not continue watching. Roddenberry\\'s ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again.')\n"
]
"data": {
"text/plain": [
"('neg',\n",
" 'I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn\\'t match the background, and painfully one-dimensional characters cannot be overcome with a \\'sci-fi\\' setting. (I\\'m sure there are those of you out there who think Babylon 5 is good sci-fi TV. It\\'s not. It\\'s clichéd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It\\'s really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. The makers of Earth KNOW it\\'s rubbish as they have to always say \"Gene Roddenberry\\'s Earth...\" otherwise people would not continue watching. Roddenberry\\'s ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again.')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"raw_test_data = list(raw_test_data)\n",
"\n",
"print(raw_test_data[0])"
"raw_test_data[0]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 52
},
"colab_type": "code",
"id": "Tq8DjZTzdFmU",
"outputId": "2aeab986-6922-4a28-ad2f-507282ceb60f"
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -178,8 +143,6 @@
"outputs": [],
"source": [
"def get_train_valid_split(raw_train_data, split_ratio = 0.7):\n",
"\n",
" raw_train_data = list(raw_train_data)\n",
" \n",
" random.shuffle(raw_train_data)\n",
" \n",
@@ -188,9 +151,6 @@
" train_data = raw_train_data[:n_train_examples]\n",
" valid_data = raw_train_data[n_train_examples:]\n",
" \n",
" train_data = RawTextIterableDataset(train_data)\n",
" valid_data = RawTextIterableDataset(valid_data)\n",
" \n",
" return train_data, valid_data"
]
},
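For reference, here is the simplified get_train_valid_split kept by this commit, written out as a self-contained sketch with a toy usage example. The random.seed call is an addition for reproducibility, not something the notebook does.

    import random

    def get_train_valid_split(raw_train_data, split_ratio=0.7):
        # Shuffle the already-materialised list in place, then slice it.
        random.shuffle(raw_train_data)
        n_train_examples = int(len(raw_train_data) * split_ratio)
        train_data = raw_train_data[:n_train_examples]
        valid_data = raw_train_data[n_train_examples:]
        return train_data, valid_data

    random.seed(0)  # not in the notebook; added so the sketch is deterministic
    examples = [('pos', f'review {i}') for i in range(10)]
    train, valid = get_train_valid_split(examples)
    print(len(train), len(valid))  # 7 3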
@@ -210,29 +170,7 @@
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "iS9aLR8rdFmc"
},
"outputs": [],
"source": [
"raw_train_data = list(raw_train_data)\n",
"raw_valid_data = list(raw_valid_data)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 69
},
"colab_type": "code",
"id": "yzEGgkz5dFmf",
"outputId": "8e1cd5a4-76cd-492c-c387-65604dba13c1"
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -252,7 +190,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 11,
"metadata": {
"colab": {},
"colab_type": "code",
@@ -282,7 +220,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 12,
"metadata": {
"colab": {},
"colab_type": "code",
@@ -290,14 +228,14 @@
},
"outputs": [],
"source": [
"max_length = 500\n",
"max_length = 250\n",
"\n",
"tokenizer = Tokenizer(max_length = max_length)"
]
},
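The Tokenizer class itself is defined earlier in the notebook and is not part of this diff; all that matters in this cell is that max_length now caps each review at 250 tokens instead of 500. A hypothetical stand-in that shows the truncation behaviour:

    class TruncatingTokenizer:
        """Hypothetical stand-in for the notebook's Tokenizer: lower-case,
        whitespace-split, then keep at most max_length tokens."""
        def __init__(self, max_length=None):
            self.max_length = max_length

        def tokenize(self, text):
            tokens = text.lower().split()
            if self.max_length is not None:
                tokens = tokens[:self.max_length]
            return tokens

    tokenizer = TruncatingTokenizer(max_length=250)
    print(len(tokenizer.tokenize('word ' * 1000)))  # 250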
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 13,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
@@ -324,7 +262,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 14,
"metadata": {
"colab": {},
"colab_type": "code",
@@ -333,7 +271,7 @@
"outputs": [],
"source": [
"def build_vocab_from_data(raw_data, tokenizer, **vocab_kwargs):\n",
" \n",
" \n",
" token_freqs = collections.Counter()\n",
" \n",
" for label, text in raw_data:\n",
@@ -347,7 +285,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 15,
"metadata": {
"colab": {},
"colab_type": "code",
@@ -360,6 +298,23 @@
"vocab = build_vocab_from_data(raw_train_data, tokenizer, max_size = max_size)"
]
},
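build_vocab_from_data is only partially visible in this hunk: it accumulates token frequencies in a collections.Counter and then builds a vocabulary from them. A plausible completion, assuming the torchtext 0.8-era Vocab class (a Counter plus max_size = 25_000 and the default <unk>/<pad> specials gives exactly the 25,002 tokens printed in the next cell):

    import collections
    from torchtext.vocab import Vocab  # torchtext 0.8-era API; an assumption here

    def build_vocab_from_data(raw_data, tokenizer, **vocab_kwargs):
        # Count every token across the corpus...
        token_freqs = collections.Counter()
        for label, text in raw_data:
            tokens = tokenizer.tokenize(text)
            token_freqs.update(tokens)
        # ...then build the vocabulary, e.g. with max_size = 25_000, which
        # together with the <unk>/<pad> specials yields 25,002 entries.
        vocab = Vocab(token_freqs, **vocab_kwargs)
        return vocab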
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Unique tokens in vocab: 25,002\n"
]
}
],
"source": [
"print(f'Unique tokens in vocab: {len(vocab):,}')"
]
},
{
"cell_type": "code",
"execution_count": 17,
@@ -376,26 +331,26 @@
{
"data": {
"text/plain": [
"[('the', 211890),\n",
" ('.', 208427),\n",
" (',', 173201),\n",
" ('a', 103447),\n",
" ('and', 103052),\n",
" ('of', 91695),\n",
" ('to', 84931),\n",
" (\"'\", 83805),\n",
" ('is', 67948),\n",
" ('it', 61322),\n",
" ('in', 58757),\n",
" ('i', 57420),\n",
" ('this', 49214),\n",
" ('that', 46061),\n",
" ('s', 38864),\n",
" ('was', 31341),\n",
" ('as', 29171),\n",
" ('movie', 28776),\n",
" ('for', 27903),\n",
" ('with', 27680)]"
"[('the', 165322),\n",
" ('.', 164239),\n",
" (',', 133647),\n",
" ('a', 81952),\n",
" ('and', 80334),\n",
" ('of', 71820),\n",
" ('to', 65662),\n",
" (\"'\", 64249),\n",
" ('is', 53598),\n",
" ('it', 49589),\n",
" ('i', 48810),\n",
" ('in', 45611),\n",
" ('this', 40868),\n",
" ('that', 35609),\n",
" ('s', 29273),\n",
" ('was', 26159),\n",
" ('movie', 24543),\n",
" ('as', 22276),\n",
" ('with', 21494),\n",
" ('for', 21332)]"
]
},
"execution_count": 17,
@@ -473,15 +428,14 @@
},
"outputs": [],
"source": [
"def process_raw_data(raw_data, tokenizer, vocab):\n",
" \n",
" raw_data = [(label, text) for (label, text) in raw_data]\n",
"\n",
"def raw_data_to_dataset(raw_data, tokenizer, vocab):\n",
" \n",
" text_transform = sequential_transforms(tokenizer.tokenize,\n",
" vocab_func(vocab),\n",
" totensor(dtype=torch.long))\n",
" \n",
" label_transform = sequential_transforms(totensor(dtype=torch.long))\n",
" label_transform = sequential_transforms(lambda x: 1 if x == 'pos' else 0, \n",
" totensor(dtype=torch.long))\n",
"\n",
" transforms = (label_transform, text_transform)\n",
"\n",
@@ -502,12 +456,35 @@
},
"outputs": [],
"source": [
"train_data = process_raw_data(raw_train_data, tokenizer, vocab)"
"train_data = raw_data_to_dataset(raw_train_data, tokenizer, vocab)\n",
"valid_data = raw_data_to_dataset(raw_valid_data, tokenizer, vocab)\n",
"test_data = raw_data_to_dataset(raw_test_data, tokenizer, vocab)"
]
},
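raw_data_to_dataset (renamed from process_raw_data) builds one transform pipeline per field. Because the raw labels are now the strings 'neg'/'pos' rather than integers, the label pipeline gains a lambda that maps them to 0/1 before tensor conversion. Assuming sequential_transforms, vocab_func and totensor behave as their names suggest (they are imported elsewhere in the notebook, outside this diff), the pipelines are roughly equivalent to:

    import torch

    # Plain-Python equivalents of the helpers used above; an assumption about
    # their behaviour, not the notebook's actual imports.
    def sequential_transforms(*transforms):
        def apply(x):
            for transform in transforms:
                x = transform(x)
            return x
        return apply

    def vocab_func(vocab):
        return lambda tokens: [vocab[token] for token in tokens]

    def totensor(dtype):
        return lambda ids: torch.tensor(ids, dtype=dtype)

    label_transform = sequential_transforms(lambda x: 1 if x == 'pos' else 0,
                                            totensor(dtype=torch.long))
    print(label_transform('pos'))  # tensor(1)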
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of training examples: 17,500\n",
"Number of validation examples: 7,500\n",
"Number of testing examples: 25,000\n"
]
}
],
"source": [
"print(f'Number of training examples: {len(train_data):,}')\n",
"print(f'Number of validation examples: {len(valid_data):,}')\n",
"print(f'Number of testing examples: {len(test_data):,}')"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
@@ -522,41 +499,43 @@
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([ 610, 8612, 10, 5, 60, 221, 6, 5, 60, 0,\n",
" 172, 3, 32, 91, 19, 9, 330, 7, 1487, 9,\n",
" 17, 2, 35, 13, 414, 128, 4, 6, 11, 293,\n",
" 5, 210, 7, 116, 21, 111, 632, 1851, 7, 125,\n",
" 1427, 18, 39, 84, 90, 2552, 1114, 4, 6, 44,\n",
" 131, 2, 1851, 7, 2, 490, 3, 44, 10, 9,\n",
" 6126, 9, 72, 43, 2, 428, 496, 8, 293, 2,\n",
" 348, 7, 2, 490, 8, 2, 584, 831, 481, 3,\n",
" 2, 4673, 111, 30, 119, 1783, 4, 35, 3138, 6,\n",
" 35, 2349, 42, 854, 21, 5, 17557, 5565, 1586, 3,\n",
" 2, 560, 7, 41, 174, 3138, 2038, 1996, 42, 1431,\n",
" 8, 34, 41, 2004, 2289, 7, 2, 6375, 5693, 346,\n",
" 0, 2, 3138, 2140, 12, 2, 348, 6, 536, 7,\n",
" 2, 3138, 1271, 3, 14, 10, 77, 2, 1564, 7,\n",
" 2, 23, 30, 2, 59, 592, 3, 2, 1852, 7,\n",
" 6854, 10, 2482, 6, 265, 8, 264, 4, 2, 600,\n",
" 7, 3138, 0, 10, 10481, 4, 6, 2, 105, 2,\n",
" 1271, 851, 2341, 8, 2, 514, 1128, 5216, 10, 1136,\n",
" 3, 2, 139, 7, 2, 23, 432, 10, 0, 1374,\n",
" 4, 22, 8291, 43, 5, 440, 937, 1851, 3, 869,\n",
" 10103, 6, 939, 4340, 203, 564, 349, 4, 22, 2,\n",
" 1564, 7, 2, 69, 30, 101, 3643, 8, 34, 703,\n",
" 13280, 3])\n"
"tensor([ 12, 121, 1013, 6, 219, 1855, 8, 276, 70, 20,\n",
" 5, 177, 3, 1013, 0, 30, 541, 0, 4, 15259,\n",
" 6, 7022, 3, 12, 751, 8, 45, 14, 4, 12,\n",
" 69, 123, 4, 22, 11, 10, 8, 56, 241, 1013,\n",
" 19, 12534, 563, 10, 8, 338, 1803, 25, 2, 196,\n",
" 24, 3, 717, 0, 4, 745, 3428, 686, 4, 4315,\n",
" 3437, 4, 4258, 15, 170, 9, 28, 1209, 2, 951,\n",
" 4, 6, 2005, 5083, 113, 544, 35, 2957, 20, 5,\n",
" 9, 1013, 9, 925, 3, 25, 12, 9, 145, 255,\n",
" 46, 30, 160, 7, 26, 54, 46, 42, 107, 12534,\n",
" 563, 10, 56, 1013, 241, 3, 11, 9, 16, 29,\n",
" 3, 11, 9, 16, 2966, 6, 8018, 3, 24, 143,\n",
" 199, 773, 249, 45, 1364, 6, 120, 893, 4, 1013,\n",
" 10, 5, 516, 15, 135, 29, 205, 437, 599, 25,\n",
" 24229, 3, 338, 1803, 24, 3, 11, 222, 1655, 734,\n",
" 1296, 4, 265, 29, 19, 5, 618, 4793, 3, 11,\n",
" 9, 16, 69, 866, 8, 474, 47, 2, 113, 138,\n",
" 19, 39, 30, 29, 343, 6136, 4, 48, 984, 5,\n",
" 5212, 7, 122, 3, 77, 1894, 6, 3550, 30, 1650,\n",
" 6, 634, 4, 403, 1266, 8, 110, 3, 2, 1332,\n",
" 7, 649, 130, 11, 9, 16, 1834, 19, 39, 31,\n",
" 8, 215, 134, 1965, 13961, 9, 16, 649, 3, 3,\n",
" 3, 910, 81, 68, 29, 1677, 142, 3, 13961, 9,\n",
" 16, 13264, 208, 35, 1685, 13, 77, 13826, 19, 14,\n",
" 696, 4, 745, 4, 793, 2192, 25, 142, 11, 211])\n"
]
}
],
"source": [
"label, indexes = train_data[0]\n",
"label, indexes = test_data[0]\n",
"\n",
"print(indexes)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 24,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
@@ -571,7 +550,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"['david', 'mamet', 'is', 'a', 'very', 'interesting', 'and', 'a', 'very', '<unk>', 'director', '.', 'his', 'first', 'movie', \"'\", 'house', 'of', 'games', \"'\", 'was', 'the', 'one', 'i', 'liked', 'best', ',', 'and', 'it', 'set', 'a', 'series', 'of', 'films', 'with', 'characters', 'whose', 'perspective', 'of', 'life', 'changes', 'as', 'they', 'get', 'into', 'complicated', 'situations', ',', 'and', 'so', 'does', 'the', 'perspective', 'of', 'the', 'viewer', '.', 'so', 'is', \"'\", 'homicide', \"'\", 'which', 'from', 'the', 'title', 'tries', 'to', 'set', 'the', 'mind', 'of', 'the', 'viewer', 'to', 'the', 'usual', 'crime', 'drama', '.', 'the', 'principal', 'characters', 'are', 'two', 'cops', ',', 'one', 'jewish', 'and', 'one', 'irish', 'who', 'deal', 'with', 'a', 'racially', 'charged', 'area', '.', 'the', 'murder', 'of', 'an', 'old', 'jewish', 'shop', 'owner', 'who', 'proves', 'to', 'be', 'an', 'ancient', 'veteran', 'of', 'the', 'israeli', 'independence', 'war', '<unk>', 'the', 'jewish', 'identity', 'in', 'the', 'mind', 'and', 'heart', 'of', 'the', 'jewish', 'detective', '.', 'this', 'is', 'were', 'the', 'flaws', 'of', 'the', 'film', 'are', 'the', 'more', 'obvious', '.', 'the', 'process', 'of', 'awakening', 'is', 'theatrical', 'and', 'hard', 'to', 'believe', ',', 'the', 'group', 'of', 'jewish', '<unk>', 'is', 'operatic', ',', 'and', 'the', 'way', 'the', 'detective', 'eventually', 'walks', 'to', 'the', 'final', 'violent', 'confrontation', 'is', 'pathetic', '.', 'the', 'end', 'of', 'the', 'film', 'itself', 'is', '<unk>', 'smart', ',', 'but', 'disappoints', 'from', 'a', 'human', 'emotional', 'perspective', '.', 'joe', 'mantegna', 'and', 'william', 'macy', 'give', 'strong', 'performances', ',', 'but', 'the', 'flaws', 'of', 'the', 'story', 'are', 'too', 'evident', 'to', 'be', 'easily', 'compensated', '.']\n"
"['i', 'love', 'sci-fi', 'and', 'am', 'willing', 'to', 'put', 'up', 'with', 'a', 'lot', '.', 'sci-fi', '<unk>', 'are', 'usually', '<unk>', ',', 'under-appreciated', 'and', 'misunderstood', '.', 'i', 'tried', 'to', 'like', 'this', ',', 'i', 'really', 'did', ',', 'but', 'it', 'is', 'to', 'good', 'tv', 'sci-fi', 'as', 'babylon', '5', 'is', 'to', 'star', 'trek', '(', 'the', 'original', ')', '.', 'silly', '<unk>', ',', 'cheap', 'cardboard', 'sets', ',', 'stilted', 'dialogues', ',', 'cg', 'that', 'doesn', \"'\", 't', 'match', 'the', 'background', ',', 'and', 'painfully', 'one-dimensional', 'characters', 'cannot', 'be', 'overcome', 'with', 'a', \"'\", 'sci-fi', \"'\", 'setting', '.', '(', 'i', \"'\", 'm', 'sure', 'there', 'are', 'those', 'of', 'you', 'out', 'there', 'who', 'think', 'babylon', '5', 'is', 'good', 'sci-fi', 'tv', '.', 'it', \"'\", 's', 'not', '.', 'it', \"'\", 's', 'clichéd', 'and', 'uninspiring', '.', ')', 'while', 'us', 'viewers', 'might', 'like', 'emotion', 'and', 'character', 'development', ',', 'sci-fi', 'is', 'a', 'genre', 'that', 'does', 'not', 'take', 'itself', 'seriously', '(', 'cf', '.', 'star', 'trek', ')', '.', 'it', 'may', 'treat', 'important', 'issues', ',', 'yet', 'not', 'as', 'a', 'serious', 'philosophy', '.', 'it', \"'\", 's', 'really', 'difficult', 'to', 'care', 'about', 'the', 'characters', 'here', 'as', 'they', 'are', 'not', 'simply', 'foolish', ',', 'just', 'missing', 'a', 'spark', 'of', 'life', '.', 'their', 'actions', 'and', 'reactions', 'are', 'wooden', 'and', 'predictable', ',', 'often', 'painful', 'to', 'watch', '.', 'the', 'makers', 'of', 'earth', 'know', 'it', \"'\", 's', 'rubbish', 'as', 'they', 'have', 'to', 'always', 'say', 'gene', 'roddenberry', \"'\", 's', 'earth', '.', '.', '.', 'otherwise', 'people', 'would', 'not', 'continue', 'watching', '.', 'roddenberry', \"'\", 's', 'ashes', 'must', 'be', 'turning', 'in', 'their', 'orbit', 'as', 'this', 'dull', ',', 'cheap', ',', 'poorly', 'edited', '(', 'watching', 'it', 'without']\n"
]
}
],
@@ -579,20 +558,6 @@
"print([vocab.itos[i] for i in indexes])"
]
},
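The two cells above pull an example out of the processed test set and decode it with vocab.itos. Tokens outside the 25,002-word vocabulary map to index 0, which is why the decoded review shows '<unk>' where the raw text has rarer words. A toy sketch of the stoi/itos round trip (the four-token vocabulary is made up for illustration):

    itos = ['<unk>', '<pad>', 'the', 'movie']
    stoi = {token: index for index, token in enumerate(itos)}

    indexes = [2, 3, 0]                           # e.g. 'the movie <some rare word>'
    tokens = [itos[i] for i in indexes]           # ['the', 'movie', '<unk>']
    reencoded = [stoi.get(t, 0) for t in tokens]  # unknown tokens fall back to index 0
    assert reencoded == indexes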
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "4Rec_Wk6dFnD"
},
"outputs": [],
"source": [
"valid_data = process_raw_data(raw_valid_data, tokenizer, vocab)\n",
"test_data = process_raw_data(raw_test_data, tokenizer, vocab)"
]
},
{
"cell_type": "code",
"execution_count": 25,
@@ -1002,9 +967,9 @@
" [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n",
" [-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],\n",
" ...,\n",
" [-0.2925, 0.1087, 0.7920, ..., -0.3641, 0.1822, -0.4104],\n",
" [-0.7250, 0.7545, 0.1637, ..., -0.0144, -0.1761, 0.3418],\n",
" [ 1.1753, 0.0460, -0.3542, ..., 0.4510, 0.0485, -0.4015]])"
" [ 0.4029, 0.1353, 0.6673, ..., -0.3300, 0.7533, -0.1666],\n",
" [ 0.1226, 0.0419, 0.0746, ..., -0.0024, -0.2733, -1.0033],\n",
" [-0.1009, -0.1484, 0.3141, ..., -0.3414, -0.3768, 0.5605]])"
]
},
"execution_count": 42,
@@ -1032,7 +997,7 @@
{
"data": {
"text/plain": [
"678"
"734"
]
},
"execution_count": 43,
@@ -1061,7 +1026,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"['<unk>', '<pad>', '\\x96', 'hadn', '****', '100%', 'camera-work', '*1/2', '$1', '*****']\n"
"['<unk>', '<pad>', '\\x96', '****', 'hadn', 'camera-work', '*1/2', '100%', '*****', '$1']\n"
]
}
],
@@ -1089,9 +1054,9 @@
" [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n",
" [-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],\n",
" ...,\n",
" [-0.2925, 0.1087, 0.7920, ..., -0.3641, 0.1822, -0.4104],\n",
" [-0.7250, 0.7545, 0.1637, ..., -0.0144, -0.1761, 0.3418],\n",
" [ 1.1753, 0.0460, -0.3542, ..., 0.4510, 0.0485, -0.4015]])"
" [ 0.4029, 0.1353, 0.6673, ..., -0.3300, 0.7533, -0.1666],\n",
" [ 0.1226, 0.0419, 0.0746, ..., -0.0024, -0.2733, -1.0033],\n",
" [-0.1009, -0.1484, 0.3141, ..., -0.3414, -0.3768, 0.5605]])"
]
},
"execution_count": 45,
@@ -1298,35 +1263,35 @@
"output_type": "stream",
"text": [
"Epoch: 01 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.687 | Train Acc: 56.51%\n",
"\t Val. Loss: 0.677 | Val. Acc: 62.87%\n",
"\tTrain Loss: 0.683 | Train Acc: 60.00%\n",
"\t Val. Loss: 0.669 | Val. Acc: 67.02%\n",
"Epoch: 02 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.665 | Train Acc: 65.13%\n",
"\t Val. Loss: 0.650 | Val. Acc: 69.13%\n",
"\tTrain Loss: 0.651 | Train Acc: 68.09%\n",
"\t Val. Loss: 0.632 | Val. Acc: 71.31%\n",
"Epoch: 03 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.629 | Train Acc: 72.45%\n",
"\t Val. Loss: 0.611 | Val. Acc: 73.54%\n",
"\tTrain Loss: 0.603 | Train Acc: 74.06%\n",
"\t Val. Loss: 0.582 | Val. Acc: 74.86%\n",
"Epoch: 04 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.583 | Train Acc: 76.17%\n",
"\t Val. Loss: 0.566 | Val. Acc: 77.00%\n",
"\tTrain Loss: 0.545 | Train Acc: 78.13%\n",
"\t Val. Loss: 0.528 | Val. Acc: 78.88%\n",
"Epoch: 05 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.533 | Train Acc: 80.22%\n",
"\t Val. Loss: 0.521 | Val. Acc: 80.28%\n",
"\tTrain Loss: 0.485 | Train Acc: 82.10%\n",
"\t Val. Loss: 0.477 | Val. Acc: 81.64%\n",
"Epoch: 06 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.484 | Train Acc: 83.24%\n",
"\t Val. Loss: 0.480 | Val. Acc: 82.53%\n",
"\tTrain Loss: 0.430 | Train Acc: 85.15%\n",
"\t Val. Loss: 0.437 | Val. Acc: 83.25%\n",
"Epoch: 07 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.440 | Train Acc: 85.46%\n",
"\t Val. Loss: 0.443 | Val. Acc: 84.40%\n",
"\tTrain Loss: 0.386 | Train Acc: 86.92%\n",
"\t Val. Loss: 0.404 | Val. Acc: 84.59%\n",
"Epoch: 08 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.401 | Train Acc: 87.10%\n",
"\t Val. Loss: 0.414 | Val. Acc: 85.45%\n",
"\tTrain Loss: 0.350 | Train Acc: 88.21%\n",
"\t Val. Loss: 0.383 | Val. Acc: 85.19%\n",
"Epoch: 09 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.367 | Train Acc: 88.41%\n",
"\t Val. Loss: 0.390 | Val. Acc: 86.39%\n",
"\tTrain Loss: 0.319 | Train Acc: 89.36%\n",
"\t Val. Loss: 0.363 | Val. Acc: 85.86%\n",
"Epoch: 10 | Epoch Time: 0m 4s\n",
"\tTrain Loss: 0.340 | Train Acc: 89.23%\n",
"\t Val. Loss: 0.370 | Val. Acc: 86.96%\n"
"\tTrain Loss: 0.295 | Train Acc: 90.17%\n",
"\t Val. Loss: 0.349 | Val. Acc: 86.27%\n"
]
}
],
@@ -1372,7 +1337,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Test Loss: 0.393 | Test Acc: 85.39%\n"
"Test Loss: 0.374 | Test Acc: 84.75%\n"
]
}
],
@@ -1421,7 +1386,7 @@
{
"data": {
"text/plain": [
"9.809165021579247e-06"
"2.818893153744284e-05"
]
},
"execution_count": 57,
@@ -1451,7 +1416,7 @@
{
"data": {
"text/plain": [
"0.9999963045120239"
"0.9997795224189758"
]
},
"execution_count": 58,
@@ -1481,7 +1446,7 @@
{
"data": {
"text/plain": [
"0.7485461235046387"
"0.6041761040687561"
]
},
"execution_count": 59,
@@ -1512,7 +1477,7 @@
{
"data": {
"text/plain": [
"0.7485461235046387"
"0.6041760444641113"
]
},
"execution_count": 60,

View File

@@ -68,28 +68,11 @@
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "sqImRLskbrAd"
},
"metadata": {},
"outputs": [],
"source": [
"def get_train_valid_split(raw_train_data, split_ratio = 0.7):\n",
"\n",
" raw_train_data = list(raw_train_data)\n",
" \n",
" random.shuffle(raw_train_data)\n",
" \n",
" n_train_examples = int(len(raw_train_data) * split_ratio)\n",
" \n",
" train_data = raw_train_data[:n_train_examples]\n",
" valid_data = raw_train_data[n_train_examples:]\n",
" \n",
" #train_data = RawTextIterableDataset(train_data)\n",
" #valid_data = RawTextIterableDataset(valid_data)\n",
" \n",
" return train_data, valid_data"
"raw_train_data = list(raw_train_data)\n",
"raw_test_data = list(raw_test_data)"
]
},
{
@@ -98,31 +81,33 @@
"metadata": {
"colab": {},
"colab_type": "code",
"id": "YgKzkSjibsCh"
"id": "sqImRLskbrAd"
},
"outputs": [],
"source": [
"raw_train_data, raw_valid_data = get_train_valid_split(raw_train_data)"
"def get_train_valid_split(raw_train_data, split_ratio = 0.7):\n",
" \n",
" random.shuffle(raw_train_data)\n",
" \n",
" n_train_examples = int(len(raw_train_data) * split_ratio)\n",
" \n",
" train_data = raw_train_data[:n_train_examples]\n",
" valid_data = raw_train_data[n_train_examples:]\n",
" \n",
" return train_data, valid_data"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(17500, 7500)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"metadata": {
"colab": {},
"colab_type": "code",
"id": "YgKzkSjibsCh"
},
"outputs": [],
"source": [
"len(raw_train_data), len(raw_valid_data)"
"raw_train_data, raw_valid_data = get_train_valid_split(raw_train_data)"
]
},
{
@@ -216,7 +201,7 @@
"outputs": [],
"source": [
"def build_vocab_from_data(raw_data, tokenizer, **vocab_kwargs):\n",
" \n",
" \n",
" token_freqs = collections.Counter()\n",
" \n",
" for label, text in raw_data:\n",
@@ -240,7 +225,9 @@
"source": [
"max_size = 25_000\n",
"\n",
"vocab = build_vocab_from_data(raw_train_data, tokenizer, max_size = max_size)"
"vocab = build_vocab_from_data(raw_train_data, \n",
" tokenizer, \n",
" max_size = max_size)"
]
},
{
@@ -249,18 +236,15 @@
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"25002"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Unique tokens in vocab: 25,002\n"
]
}
],
"source": [
"len(vocab)"
"print(f'Unique tokens in vocab: {len(vocab):,}')"
]
},
{
@@ -273,10 +257,8 @@
},
"outputs": [],
"source": [
"def process_raw_data(raw_data, tokenizer, vocab):\n",
" \n",
" raw_data = [(label, text) for (label, text) in raw_data]\n",
"\n",
"def raw_data_to_dataset(raw_data, tokenizer, vocab):\n",
" \n",
" text_transform = sequential_transforms(tokenizer.tokenize,\n",
" vocab_func(vocab),\n",
" totensor(dtype=torch.long))\n",
@@ -303,9 +285,9 @@
},
"outputs": [],
"source": [
"train_data = process_raw_data(raw_train_data, tokenizer, vocab)\n",
"valid_data = process_raw_data(raw_valid_data, tokenizer, vocab)\n",
"test_data = process_raw_data(raw_test_data, tokenizer, vocab)"
"train_data = raw_data_to_dataset(raw_train_data, tokenizer, vocab)\n",
"valid_data = raw_data_to_dataset(raw_valid_data, tokenizer, vocab)\n",
"test_data = raw_data_to_dataset(raw_test_data, tokenizer, vocab)"
]
},
{
@@ -314,18 +296,19 @@
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(17500, 7500, 25000)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Number of training examples: 17,500\n",
"Number of validation examples: 7,500\n",
"Number of testing examples: 25,000\n"
]
}
],
"source": [
"len(train_data), len(valid_data), len(test_data)"
"print(f'Number of training examples: {len(train_data):,}')\n",
"print(f'Number of validation examples: {len(valid_data):,}')\n",
"print(f'Number of testing examples: {len(test_data):,}')"
]
},
{
@@ -1191,34 +1174,34 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch: 01 | Epoch Time: 0m 27s\n",
"Epoch: 01 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.604 | Train Acc: 64.21%\n",
"\t Val. Loss: 0.457 | Val. Acc: 78.76%\n",
"Epoch: 02 | Epoch Time: 0m 27s\n",
"Epoch: 02 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.364 | Train Acc: 84.16%\n",
"\t Val. Loss: 0.355 | Val. Acc: 84.73%\n",
"Epoch: 03 | Epoch Time: 0m 26s\n",
"Epoch: 03 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.270 | Train Acc: 89.23%\n",
"\t Val. Loss: 0.384 | Val. Acc: 84.55%\n",
"Epoch: 04 | Epoch Time: 0m 26s\n",
"Epoch: 04 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.206 | Train Acc: 92.15%\n",
"\t Val. Loss: 0.355 | Val. Acc: 86.63%\n",
"Epoch: 05 | Epoch Time: 0m 27s\n",
"Epoch: 05 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.150 | Train Acc: 94.58%\n",
"\t Val. Loss: 0.435 | Val. Acc: 86.43%\n",
"Epoch: 06 | Epoch Time: 0m 27s\n",
"Epoch: 06 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.101 | Train Acc: 96.54%\n",
"\t Val. Loss: 0.455 | Val. Acc: 86.67%\n",
"Epoch: 07 | Epoch Time: 0m 27s\n",
"Epoch: 07 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.084 | Train Acc: 97.17%\n",
"\t Val. Loss: 0.505 | Val. Acc: 84.09%\n",
"Epoch: 08 | Epoch Time: 0m 26s\n",
"Epoch: 08 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.066 | Train Acc: 97.82%\n",
"\t Val. Loss: 0.508 | Val. Acc: 86.05%\n",
"Epoch: 09 | Epoch Time: 0m 26s\n",
"Epoch: 09 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.041 | Train Acc: 98.78%\n",
"\t Val. Loss: 0.605 | Val. Acc: 86.25%\n",
"Epoch: 10 | Epoch Time: 0m 26s\n",
"Epoch: 10 | Epoch Time: 0m 25s\n",
"\tTrain Loss: 0.035 | Train Acc: 99.01%\n",
"\t Val. Loss: 0.681 | Val. Acc: 85.79%\n"
]