
WikiExtractor

WikiExtractor.py is a Python script that extracts and cleans text from a Wikipedia database dump.

The tool is written in Python and requires Python 2.7; it has no dependencies beyond the standard library. Warning: problems have been reported when using multiprocessing on Windows.

For further information, see the project Home Page or the Wiki.

Wikipedia Cirrus Extractor

cirrus-extract.py is a version of the script that performs extraction from a Wikipedia Cirrus dump. Cirrus dumps contain text with templates already expanded.

Cirrus dumps are available at cirrussearch (https://dumps.wikimedia.org/other/cirrussearch/).
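A possible invocation is shown below; the dump file name is a placeholder, and the -o output option is assumed to mirror WikiExtractor.py's, so check cirrus-extract.py --help for the actual options:

    python cirrus-extract.py -o extracted_cirrus enwiki-latest-cirrussearch-content.json.gz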

Details

WikiExtractor performs template expansion by preprocessing the whole dump and extracting template definitions.

The latest version includes the following performance improvements:

  • multiprocessing is used to process articles in parallel (this requires a Python installation with a proper implementation of the StringIO library)
  • a cache of parsed templates is kept (see the sketch after this list).
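The caching idea can be illustrated with a minimal Python sketch; the names templateCache and cachedExpandTemplate below are illustrative, not the actual internals of WikiExtractor.py:

    # Illustrative sketch of a template-expansion cache (not WikiExtractor's actual code).
    templateCache = {}

    def cachedExpandTemplate(title, args, expand):
        """Expand template `title` with arguments `args`, reusing a previously
        computed expansion when the same call is seen again."""
        key = (title, tuple(args))              # args made hashable to serve as a key
        if key not in templateCache:
            templateCache[key] = expand(title, args)   # expensive recursive expansion
        return templateCache[key]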

Usage

The script is invoked with a Wikipedia dump file as an argument. The output is stored in several files of similar size in a given directory. Each file contains several documents in the document format shown below.
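Each extracted document is wrapped in a <doc> element whose attributes give the article id, URL and title; the article and attribute values below are only illustrative:

    <doc id="12" url="https://en.wikipedia.org/wiki?curid=12" title="Anarchism">
    Anarchism

    Anarchism is a political philosophy that ...
    </doc>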

usage: WikiExtractor.py [-h] [-o OUTPUT] [-b n[KMG]] [-c] [--html] [-l]
                        [-ns ns1,ns2] [-s] [--templates TEMPLATES]
                        [--no-templates] [--processes PROCESSES] [-q] [--debug]
                        [-a] [-v]
                        input

positional arguments:
  input                 XML wiki dump file

optional arguments:
  -h, --help            show this help message and exit
  --processes PROCESSES number of processes to use (default: number of CPU cores)

Output:
  -o OUTPUT, --output OUTPUT
                        a directory where to store the extracted files (or '-'
                        for dumping to stdout)
  -b n[KMG], --bytes n[KMG]
                        maximum bytes per output file (default 1M)
  -c, --compress        compress output files using bzip

Processing:
  --html                produce HTML output, subsumes --links
  -l, --links           preserve links
  -ns ns1,ns2, --namespaces ns1,ns2
                        accepted namespaces
  --templates TEMPLATES
                        use or create file containing templates
  --no-templates        Do not expand templates
  --escapedoc           use to escape the contents of the output
                        <doc>...</doc>

Special:
  -q, --quiet           suppress reporting progress info
  --debug               print debug info
  -a, --article         analyze a file containing a single article (debug option)
  -v, --version         print program version

Saving templates to a file speeds up extraction on subsequent runs, assuming template definitions have not changed.

Option --no-templates significantly speeds up the extractor, avoiding the cost of expanding MediaWiki templates.
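For example (file and directory names are placeholders):

    # first run: expand templates and save their definitions for reuse
    python WikiExtractor.py --templates enwiki-templates.txt -o extracted enwiki-latest-pages-articles.xml

    # faster run: skip template expansion entirely
    python WikiExtractor.py --no-templates -o extracted enwiki-latest-pages-articles.xml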

For further information, visit the documentation.