# WikiExtractor
[WikiExtractor.py](http://medialab.di.unipi.it/wiki/Wikipedia_Extractor) is a Python script that extracts and cleans text from a [Wikipedia database dump](http://download.wikimedia.org/).
The tool is written in Python and requires Python 3; it has no dependencies beyond the standard library.
**Warning**: problems have been reported on Windows due to poor support for `StringIO` in that platform's Python implementation.
For further information, see the [project Home Page](http://medialab.di.unipi.it/wiki/Wikipedia_Extractor) or the [Wiki](https://github.com/attardi/wikiextractor/wiki).
# Wikipedia Cirrus Extractor
`cirrus-extract.py` is a version of the script that performs extraction from a Wikipedia Cirrus dump.
Cirrus dumps contain text with already expanded templates.
Cirrus dumps are available at [cirrussearch](http://dumps.wikimedia.org/other/cirrussearch/).
# Details
WikiExtractor performs template expansion by preprocessing the whole dump and extracting template definitions.
In order to speed up processing:
- multiprocessing is used to deal with articles in parallel
- a cache of parsed templates is kept (only useful for repeated extractions); see the example invocation below.
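
For example, a run that exploits both might look like this (the dump file name, output directory, and template file name are placeholders; the `--processes` and `--templates` options are documented in the usage listing below):

~~~
python -m wikiextractor.WikiExtractor --processes 4 --templates templates.txt \
    -o extracted enwiki-latest-pages-articles.xml.bz2
~~~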
## Installation
The script may be invoked directly:

~~~
python -m wikiextractor.WikiExtractor
~~~

It can also be installed from `PyPI` with:

~~~
pip install wikiextractor
~~~

or locally with:

~~~
(sudo) python setup.py install
~~~
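
After installation, a quick way to check that the module is reachable is to print its help message:

~~~
python -m wikiextractor.WikiExtractor --help
~~~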
## Usage
### WikiExtractor
The script is invoked with a Wikipedia dump file as an argument:

~~~
python -m wikiextractor.WikiExtractor <Wikipedia dump file>
~~~
The output is stored in several files of similar size in a given directory.
Each file contains several documents in this [document format](wiki/File-Format).
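
To post-process the result, a minimal Python sketch along these lines can iterate over the extracted documents (this is not part of WikiExtractor; the output directory name and file glob are placeholders, and the default, non-JSON `<doc ...>` format is assumed):

~~~
# Iterate over documents produced by WikiExtractor in the default format.
# "extracted" is a placeholder for the -o/--output directory used above.
import glob
import re

DOC_RE = re.compile(r'<doc ([^>]*)>(.*?)</doc>', re.DOTALL)
ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

for path in sorted(glob.glob('extracted/*/wiki_*')):
    with open(path, encoding='utf-8') as f:
        for attrs, text in DOC_RE.findall(f.read()):
            meta = dict(ATTR_RE.findall(attrs))  # e.g. id, url, title
            print(meta.get('id'), meta.get('title'), len(text.strip()))
~~~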

~~~
usage: WikiExtractor.py [-h] [-o OUTPUT] [-b n[KMG]] [-c] [--json] [--html]
                        [-l] [-s] [--lists] [-ns ns1,ns2]
                        [--templates TEMPLATES] [--no-templates] [-r]
                        [--min_text_length MIN_TEXT_LENGTH]
                        [--filter_category path_of_categories_file]
                        [--filter_disambig_pages] [-it abbr,b,big]
                        [-de gallery,timeline,noinclude] [--keep_tables]
                        [--processes PROCESSES] [-q] [--debug] [-a] [-v]
                        [--log_file]
                        input

Wikipedia Extractor:
Extracts and cleans text from a Wikipedia database dump and stores output in a
number of files of similar size in a given directory.
Each file will contain several documents in the format:

    <doc id="" revid="" url="" title="">
        ...
    </doc>

If the program is invoked with the --json flag, then each file will
contain several documents formatted as json objects, one per line, with
the following structure:

    {"id": "", "revid": "", "url": "", "title": "", "text": "..."}

Template expansion requires preprocessing the whole dump first and
collecting template definitions.

positional arguments:
  input                 XML wiki dump file

optional arguments:
  -h, --help            show this help message and exit
  --processes PROCESSES
                        Number of processes to use (default 1)

Output:
  -o OUTPUT, --output OUTPUT
                        directory for extracted files (or '-' for dumping to
                        stdout)
  -b n[KMG], --bytes n[KMG]
                        maximum bytes per output file (default 1M)
  -c, --compress        compress output files using bzip
  --json                write output in json format instead of the default one

Processing:
  --html                produce HTML output, subsumes --links
  -l, --links           preserve links
  -s, --sections        preserve sections
  --lists               preserve lists
  -ns ns1,ns2, --namespaces ns1,ns2
                        accepted namespaces in links
  --templates TEMPLATES
                        use or create file containing templates
  --no-templates        Do not expand templates
  -r, --revision        Include the document revision id (default=False)
  --min_text_length MIN_TEXT_LENGTH
                        Minimum expanded text length required to write
                        document (default=0)
  --filter_category path_of_categories_file
                        Include or exclude specific categories from the
                        dataset. Specify the categories in the file
                        'path_of_categories_file'. Format: one category per
                        line; if the line starts with:
                            1) #: comment, ignored;
                            2) ^: the category goes into the excluding categories;
                            3) anything else: the category goes into the including categories.
                        Priority:
                            1) if the excluding categories are not empty and any category of a
                               page is among them, the page is excluded; else
                            2) if the including categories are not empty and no category of a
                               page is among them, the page is excluded; else
                            3) the page is included.
  --filter_disambig_pages
                        Remove pages from output that contain disambiguation
                        markup (default=False)
  -it abbr,b,big, --ignored_tags abbr,b,big
                        comma separated list of tags that will be dropped,
                        keeping their content
  -de gallery,timeline,noinclude, --discard_elements gallery,timeline,noinclude
                        comma separated list of elements that will be removed
                        from the article text
  --keep_tables         Preserve tables in the output article text
                        (default=False)

Special:
  -q, --quiet           suppress reporting progress info
  --debug               print debug info
  -a, --article         analyze a file containing a single article (debug
                        option)
  -v, --version         print program version
  --log_file            specify a file to save the log information.
~~~

Saving templates to a file will speed up extraction the next time, assuming the template definitions have not changed.

The option --no-templates significantly speeds up the extractor by avoiding the cost of expanding [MediaWiki templates](https://www.mediawiki.org/wiki/Help:Templates).
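
For example (file names are placeholders): a first run with `--templates` collects the definitions into a file, later runs on the same dump reuse that file, and `--no-templates` skips expansion altogether:

~~~
# first run: collect template definitions into templates.txt (slow)
python -m wikiextractor.WikiExtractor --templates templates.txt -o extracted dump.xml.bz2

# later runs: reuse the saved definitions (faster)
python -m wikiextractor.WikiExtractor --templates templates.txt -o extracted dump.xml.bz2

# fastest: skip template expansion entirely
python -m wikiextractor.WikiExtractor --no-templates -o extracted dump.xml.bz2
~~~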
For further information, visit [the documentation](http://attardi.github.io/wikiextractor).
### Cirrus Extractor
~~~
usage: cirrus-extract.py [-h] [-o OUTPUT] [-b n[KMG]] [-c] [-ns ns1,ns2] [-q]
                         [-v]
                         input

Wikipedia Cirrus Extractor:
Extracts and cleans text from a Wikipedia Cirrus dump and stores output in a
number of files of similar size in a given directory.
Each file will contain several documents in the format:

    <doc id="" url="" title="" language="" revision="">
        ...
    </doc>

positional arguments:
  input                 Cirrus Json wiki dump file

optional arguments:
  -h, --help            show this help message and exit

Output:
  -o OUTPUT, --output OUTPUT
                        directory for extracted files (or '-' for dumping to
                        stdout)
  -b n[KMG], --bytes n[KMG]
                        maximum bytes per output file (default 1M)
  -c, --compress        compress output files using bzip

Processing:
  -ns ns1,ns2, --namespaces ns1,ns2
                        accepted namespaces

Special:
  -q, --quiet           suppress reporting progress info
  -v, --version         print program version
~~~
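
A typical invocation might be (the dump file name below is only illustrative; Cirrus dumps are distributed as gzipped JSON from the cirrussearch page linked above):

~~~
python cirrus-extract.py -o cirrus_extracted enwiki-YYYYMMDD-cirrussearch-content.json.gz
~~~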
## License
The code is made available under the [GNU Affero General Public License v3.0](LICENSE).
## Reference
If you find this code useful, please cite it in publications as:
~~~
@misc{Wikiextractor2015,
  author = {Giuseppe Attardi},
  title = {WikiExtractor},
  year = {2015},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/attardi/wikiextractor}}
}
~~~