# wikiextractor
[WikiExtractor.py](http://medialab.di.unipi.it/wiki/Wikipedia_Extractor) is a Python script that extracts and cleans text from a [Wikipedia database dump](http://download.wikimedia.org/).
The tool is written in Python and requires no additional libraries.
For further information, see the [project Home Page](http://medialab.di.unipi.it/wiki/Wikipedia_Extractor) or the [Wiki](https://github.com/attardi/wikiextractor/wiki).
This is a beta version that performs template expansion by preprocessing the whole dump and extracting template definitions.
The current version keeps a cache of parsed templates, achieving a twofold speedup over the previous version.
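
The caching idea can be illustrated with a minimal sketch. This is not the script's actual code, just the memoization pattern, under the assumption that parsing a template body is the expensive step:

    # Minimal sketch of a template cache; the names and the trivial
    # parse_template below are illustrative, not WikiExtractor.py internals.
    import re

    template_cache = {}

    def parse_template(body):
        # Stand-in for real parsing: split the body into plain text
        # and {{{parameter}}} placeholders.
        return re.split(r'({{{.*?}}})', body)

    def get_template(title, body):
        # Parse each template at most once, however many pages transclude it.
        parsed = template_cache.get(title)
        if parsed is None:
            parsed = parse_template(body)
            template_cache[title] = parsed
        return parsed

    print(get_template('Birth date', '{{{year}}}-{{{month}}}-{{{day}}}'))
    print(get_template('Birth date', '{{{year}}}-{{{month}}}-{{{day}}}'))  # served from cache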
## Usage
The script is invoked with a Wikipedia dump file as an argument.
The output is stored in a number of files of similar size in a chosen directory.
Each file contains several documents in this [document format](http://medialab.di.unipi.it/wiki/Document_Format).
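
In that format, each document is wrapped in a `doc` element; as an illustration (the id, url, and title values below are invented), an extracted entry looks roughly like:

    <doc id="12" url="https://en.wikipedia.org/wiki?curid=12" title="Anarchism">
    Anarchism

    Anarchism is a political philosophy ...
    </doc>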
    usage: WikiExtractor.py [-h] [-o OUTPUT] [-b n[KMG]] [-c] [--html] [-l]
                            [-ns ns1,ns2] [-s] [--templates TEMPLATES]
                            [--no-templates] [--processes PROCESSES] [-q] [--debug]
                            [-a] [-v]
                            input

    positional arguments:
      input                 XML wiki dump file; use '-' to read from stdin

    optional arguments:
      -h, --help            show this help message and exit
      --processes PROCESSES number of processes to use (default: number of CPU cores)

    Output:
      -o OUTPUT, --output OUTPUT
                            output path; a file if no maximum bytes per file is set,
                            otherwise a directory to collect files; use '-' for stdout
      -b n[KMG], --bytes n[KMG]
                            maximum bytes per output file (default: no limit, one file)
      -c, --compress        compress output files using bzip

    Processing:
      --html                produce HTML output, subsumes --links and --sections
      -l, --links           preserve links
      -ns ns1,ns2, --namespaces ns1,ns2
                            accepted namespaces
      -s, --sections        preserve sections
      --templates TEMPLATES
                            use or create file containing templates
      --no-templates        do not expand templates

    Special:
      -q, --quiet           suppress reporting progress info
      --debug               print debug info
      -a, --article         analyze a file containing a single article (debug option)
      -v, --version         print program version
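
For example, one way to extract a compressed English dump into files of at most 500 KB under a directory `extracted` (the dump file name is illustrative) is to decompress on the fly and read from stdin:

    bzcat enwiki-latest-pages-articles.xml.bz2 | python WikiExtractor.py -o extracted -b 500K -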
Saving templates to a file will speed up extraction the next time,
assuming the template definitions have not changed.
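
For instance (file names are illustrative), the first run below preprocesses the dump and saves the template definitions to `templates.txt`; later runs can reload them from that file instead of re-scanning the whole dump:

    # first run: extract template definitions and save them
    python WikiExtractor.py --templates templates.txt -o extracted enwiki-latest-pages-articles.xml

    # subsequent runs: reuse the saved definitions
    python WikiExtractor.py --templates templates.txt -o extracted enwiki-latest-pages-articles.xml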
The option `--no-templates` significantly speeds up the extractor by avoiding the cost of expanding [MediaWiki templates](https://www.mediawiki.org/wiki/Help:Templates).
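
When template expansion is not needed, a faster run (dump file name again illustrative) is simply:

    python WikiExtractor.py --no-templates -o extracted enwiki-latest-pages-articles.xml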
For further information, visit [the documentation](http://attardi.github.io/wikiextractor).