Downloads (all)

Offline Wikipedia readers
Some of the many ways to read Wikipedia while offline:
XOWA
WikiTaxi

aarddict
BzReader

Selected Wikipedia articles as a PDF, OpenDocument, etc.
Selected Wikipedia articles as a printed book
Wiki as E-Book
WikiFilter
Wikipedia on rockbox
Some of these are mobile applications.

Where do I get it?

English-language Wikipedia
Dumps from any Wikimedia Foundation project, including the English Wikipedia dumps in SQL and XML, are available from the Wikimedia dumps server (dumps.wikimedia.org). You can also get the data dump using a BitTorrent client (torrenting has many benefits and reduces server load, saving bandwidth costs).
pages-articles-multistream.xml.bz2 – Current revisions only, no talk or user pages; this is probably what you want, and is approximately 14 GB compressed (expands to over 58 GB when decompressed).
pages-meta-current.xml.bz2 – Current revisions only, all pages (including talk).
abstract.xml.gz – page abstracts.

all-titles-in-ns0.gz – Article titles only (with redirects).
SQL files for the pages and links are also available.
All revisions, all pages: these files expand to multiple terabytes of text. Please only download these if you know you can cope with this quantity of data. Go to the latest dump directory and look for all the files that have 'pages-meta-history' in their name.
To download a subset of the database in XML format, such as a specific category or a list of articles, use the wiki's export feature (Special:Export).
Wiki front-end software: MediaWiki.

Database backend software: MySQL.
Image dumps: See below.

Should I get multistream?
Very short: GET THE MULTISTREAM VERSION!

(and the corresponding index file, pages-articles-multistream-index.txt.bz2.) Slightly longer: pages-articles.xml.bz2 and pages-articles-multistream.xml.bz2 both contain the same .xml file, so if you unpack either, you get the same data.

But with multistream, it is possible to get an article from the archive without unpacking the whole thing. Your reader should handle this for you; if your reader doesn't support it, it will still work, since multistream and non-multistream contain the same .xml. The only downside to multistream is that it is marginally larger: currently 13.9 GB vs. 13.1 GB for the English Wikipedia.

You might be tempted to get the smaller non-multistream archive, but this will be useless if you don't unpack it, and it will unpack to 5-10 times its original size. Penny wise, pound foolish. Get multistream. Developers: for multistream you can get an index file, pages-articles-multistream-index.txt.bz2. The first field of this index is the number of bytes to seek into the compressed archive, the second is the article ID, the third the article title. If you are a developer you should pay attention, because this doesn't seem to be documented anywhere else and this information was effectively reverse engineered.
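
As an illustration, here is a minimal Python sketch of how a reader might use that index. It assumes you have the multistream dump and its index locally (the file names and the article title below are only examples): look up the byte offset for a title, seek to it, and decompress just that one bzip2 stream, which yields a small block of pages in raw export XML.

    import bz2

    # Example file names - substitute the dump you actually downloaded.
    DUMP = "enwiki-latest-pages-articles-multistream.xml.bz2"
    INDEX = "enwiki-latest-pages-articles-multistream-index.txt.bz2"

    def find_offset(title):
        # Each index line is "offset:page_id:title"; titles may contain ':',
        # so split at most twice.
        with bz2.open(INDEX, "rt", encoding="utf-8") as idx:
            for line in idx:
                offset, page_id, page_title = line.rstrip("\n").split(":", 2)
                if page_title == title:
                    return int(offset)
        raise KeyError(title)

    def read_stream(offset, chunk=1 << 20):
        # Decompress only the bzip2 stream starting at `offset`; the
        # decompressor stops by itself at the end of that stream.
        decomp = bz2.BZ2Decompressor()
        parts = []
        with open(DUMP, "rb") as f:
            f.seek(offset)
            while not decomp.eof:
                data = f.read(chunk)
                if not data:
                    break
                parts.append(decomp.decompress(data))
        return b"".join(parts).decode("utf-8")

    # The returned XML holds the requested page plus its neighbours in the
    # same stream; search it for the <page> with the article ID or title.
    xml_block = read_stream(find_offset("Anarchism"))
    print(xml_block[:400])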

Hint: cut a small part out of the archive with dd using the byte offset as found in the index, use bzip2recover, and search the first file for the article ID.

Other languages
In the dump directory you will find the latest SQL and XML dumps for all projects, not just English. The sub-directories are named for the language code and the appropriate project. Some other directories (e.g.

Simple, nostalgia) exist with the same structure. These dumps are also available from the Internet Archive.

Where are the uploaded files (image, audio, video, etc.)?
Images and other uploaded media are available from mirrors in addition to being served directly from Wikimedia servers. Bulk download is (as of September 2013) available from mirrors but not offered directly from Wikimedia servers. You should rsync from a mirror, then fill in the missing images from upload.wikimedia.org; when downloading from upload.wikimedia.org you should throttle yourself to 1 cache miss per second (you can check headers on a response to see whether it was a hit or a miss, and back off when you get a miss), and you shouldn't use more than one or two simultaneous HTTP connections. In any case, make sure you have an accurate user agent string with contact info (email address) so ops can contact you if there's an issue. You should be getting checksums from the MediaWiki API and verifying them. The API etiquette page contains some guidelines, although not all of them apply (for example, because upload.wikimedia.org isn't MediaWiki, there is no maxlag parameter).
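
A rough Python sketch of that checksum step, assuming the file in question lives on Wikimedia Commons (the file title and local path are only examples): ask the MediaWiki API for the SHA-1 it records for the upload and compare it with the SHA-1 of your mirrored copy.

    import hashlib
    import json
    import urllib.parse
    import urllib.request

    API = "https://commons.wikimedia.org/w/api.php"  # assumes the file is hosted on Commons

    def api_sha1(file_title):
        # Query the MediaWiki API for the SHA-1 recorded for an uploaded file.
        params = urllib.parse.urlencode({
            "action": "query", "format": "json",
            "prop": "imageinfo", "iiprop": "sha1", "titles": file_title,
        })
        req = urllib.request.Request(
            API + "?" + params,
            # Contact info in the user agent, as recommended above.
            headers={"User-Agent": "dump-mirror-check/0.1 (you@example.org)"},
        )
        with urllib.request.urlopen(req) as resp:
            pages = json.load(resp)["query"]["pages"]
        return next(iter(pages.values()))["imageinfo"][0]["sha1"]

    def local_sha1(path, chunk=1 << 20):
        # Hash the local copy in chunks so large media files need not fit in memory.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    print(api_sha1("File:Example.jpg") == local_sha1("Example.jpg"))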

Unlike most article text, images are not necessarily licensed under the GFDL & CC-BY-SA-3.0. They may be under one of many free licenses, in the public domain, believed to be fair use, or even copyright infringements (which should be deleted). In particular, use of fair use images outside the context of Wikipedia or similar works may be illegal. Images under most licenses require a credit, and possibly other attached copyright information. This information is included in image description pages, which are part of the text dumps available from dumps.wikimedia.org.

In conclusion, download these images at your own risk.

Dealing with compressed files
Compressed dump files are significantly compressed, and thus after being decompressed will take up large amounts of drive space. A large list of decompression programs is available; the most relevant ones are described below.

The following programs in particular can be used to decompress bzip2 and .7z files.

Windows
Beginning with Windows XP, a basic decompression program enables decompression of zip files.

Among others, the following can be used to decompress bzip2 files:
bzip2 (from the bzip2 project) is available for free under a BSD license.
7-Zip is available for free under an LGPL license.

Macintosh (Mac)
macOS ships with the command-line bzip2 tool.

GNU/Linux
Most GNU/Linux distributions ship with the command-line bzip2 tool.

Berkeley Software Distribution (BSD)
Some BSD systems ship with the command-line bzip2 tool as part of the operating system. Others, such as OpenBSD, provide it as a package which must first be installed.

Notes
Some older versions of bzip2 may not be able to handle files larger than 2 GB, so make sure you have the latest version if you experience any problems. Some older archives are compressed with gzip, which is compatible with PKZIP (the most common Windows format).
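
If you would rather decompress a dump programmatically, the sketch below uses Python's standard bz2 module to stream-decompress a .bz2 dump in chunks, so the archive never has to fit in memory (the file names are examples):

    import bz2
    import shutil

    # Example file names - substitute the dump you actually downloaded.
    src = "enwiki-latest-pages-articles-multistream.xml.bz2"
    dst = "enwiki-latest-pages-articles-multistream.xml"

    # Copy the decompressed bytes through in 1 MiB chunks.
    with bz2.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, length=1 << 20)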

Dealing with large files
As files grow in size, so does the likelihood they will exceed some limit of a computing device. Each operating system, file system, hard storage device, and software application has a maximum file size limit.

Each one of these will likely have a different maximum, and the lowest limit of all of them becomes the effective file size limit for a storage device. The older the software in a computing device, the more likely it is to have a 2 GB file limit somewhere in the system. This is due to older software using 32-bit integers for file indexing, which limits file sizes to 2^31 bytes (2 GB) for signed integers, or 2^32 bytes (4 GB) for unsigned integers.
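
Those ceilings fall straight out of the integer widths, as a quick check shows:

    # File size ceilings implied by the width of the file-offset integer:
    print(2**31)  # 2,147,483,648 bytes = 2 GiB (signed 32-bit offsets)
    print(2**32)  # 4,294,967,296 bytes = 4 GiB (unsigned 32-bit offsets)
    print(2**63)  # about 9.2e18 bytes  = 8 EiB (signed 64-bit offsets)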

Older file libraries have this 2 or 4 GB limit, but the newer file libraries have been converted to 64-bit integers, thus supporting file sizes up to 2^63 or 2^64 bytes (8 or 16 EB). Before starting a download of a large file, check the storage device to ensure its file system can support files of such a large size, and check the amount of free space to ensure that it can hold the downloaded file.

File system limits
There are two limits for a file system: the maximum size of the file system itself, and the maximum size of a single file. In general, since the file size limit is less than the file system limit, the larger file system limits are a moot point. A large percentage of users assume they can create files up to the size of their storage device, but are wrong in their assumption.
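
For the free-space part of that check, something like the following Python snippet works; the target path and expected dump size are placeholders, and note that it only checks capacity, not per-file limits:

    import shutil

    target_dir = "/mnt/usb"   # placeholder: where the dump will be saved
    dump_size = 20 * 10**9    # placeholder: expected download size in bytes (~20 GB)

    free = shutil.disk_usage(target_dir).free
    print(f"free space: {free / 10**9:.1f} GB")
    if free < dump_size:
        raise SystemExit("Not enough free space on the target volume for this dump.")
    # This cannot detect per-file limits such as FAT32's 4 GB cap; check the
    # file system type separately.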

As an example of a per-file limit, a 16 GB storage device formatted with the FAT32 file system has a file size limit of 4 GB for any single file. The following is a list of the most common file systems; see their respective documentation for additional detail.

FAT16 supports files up to 4 GB. FAT16 is the factory format of smaller drives and all SD cards that are 2 GB or smaller.
FAT32 supports files up to 4 GB. FAT32 is the factory format of larger drives and all SDHC cards that are 4 GB or larger.
exFAT supports files up to 127 PB. exFAT is the factory format of all SDXC cards, but is incompatible with most flavors of UNIX due to licensing problems.
NTFS supports files up to 16 TB. NTFS is the default file system for modern Windows computers, including Windows 2000, Windows XP, and all their successors to date. Versions after Windows 8 can support larger files if the file system is formatted with a larger cluster size.
ReFS supports files up to 16 EB.
HFS Plus (HFS+) supports files up to 8 EB on Mac OS X 10.2+. HFS+ is the default file system for Mac computers.
ext2 and ext3 support files up to 16 GB, but up to 2 TB with larger block sizes.

ext4 supports files up to 16 TB, using a 4 KB block size.
XFS supports files up to 8 EB.
ReiserFS supports files up to 1 EB (8 TB on 32-bit systems).
JFS supports files up to 4 PB.
Btrfs supports files up to 16 EB.
NILFS supports files up to 8 EB.
YAFFS2 supports files up to 2 GB.

ZFS supports files up to 16 EB.
FreeBSD and other BSDs: the Unix File System (UFS) supports files up to 8 ZiB.

Operating system limits
Each operating system has internal limits for file size and drive size, which are independent of the file system or physical media. If the operating system has any limit lower than the file system or physical media, then the OS limit will be the real limit.

Windows 95, 98, and ME have a 4 GB limit for all file sizes.

Windows XP has a 16 TB limit for all file sizes.
Windows 7 has a 16 TB limit for all file sizes.
Windows 8, 10, and Server 2012 have a 256 TB limit for all file sizes.

Linux:
32-bit kernel 2.4.x systems have a 2 TB limit for all file systems.
64-bit kernel 2.4.x systems have an 8 EB limit for all file systems.
32-bit kernel 2.6.x systems without option CONFIG_LBD have a 2 TB limit for all file systems.
32-bit kernel 2.6.x systems with option CONFIG_LBD and all 64-bit kernel 2.6.x systems have an 8 ZB limit for all file systems.
Google Android is based on Linux, which determines its base limits.

Internal storage:
Android 2.3 and later uses the ext4 file system.
Android 2.2 and earlier uses the YAFFS2 file system.

External storage slots:
All Android devices should support FAT16, FAT32, and ext2 file systems.
Android 2.3 and later supports the ext4 file system.
Apple iOS: all devices support HFS Plus (HFS+) for internal storage.

No iOS devices have external storage slots. Devices on iOS 10.3 or later run APFS, supporting a maximum file size of 8 EB.

Tips

Detect corrupted files
It is useful to check the MD5 sums (provided in a file in the download directory) to make sure the download was complete and accurate.

This can be checked by running the 'md5sum' command on the files downloaded. Given their sizes, this may take some time to calculate. Due to the technical details of how files are stored, file sizes may be reported differently on different filesystems, and so are not necessarily reliable. Also, corruption may have occurred during the download, though this is unlikely.
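
The same check can be scripted. The sketch below recomputes an MD5 in chunks and compares it against the checksum file from the dump directory; the file names are examples, and the exact name of the checksum file varies by dump date.

    import hashlib

    # Example file names - use the checksum list and dump you downloaded.
    MD5_LIST = "enwiki-latest-md5sums.txt"
    TARGET = "enwiki-latest-pages-articles-multistream.xml.bz2"

    def md5_of(path, chunk=1 << 20):
        # Hash in chunks so multi-gigabyte dumps never have to fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    # Each non-empty line of the checksum file is "<hex digest>  <file name>".
    expected = {}
    with open(MD5_LIST, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                digest, name = line.split()
                expected[name] = digest

    print("OK" if md5_of(TARGET) == expected[TARGET] else "checksum mismatch - re-download")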

Reformatting external USB drives
If you plan to download Wikipedia dump files to one computer and use an external USB flash drive or hard drive to copy them to other computers, then you will run into the 4 GB FAT32 file size limit. To work around this limit, reformat the USB drive to a file system that supports larger file sizes. If working exclusively with Windows XP, Vista, or 7 computers, reformat the USB drive to the NTFS file system.

Linux and Unix
If you seem to be hitting the 2 GB limit, try using wget version 1.10 or greater, cURL version 7.11.1-1 or greater, or a recent version of lynx (using -dump). Also, you can resume downloads (for example, wget -c).

Why not just retrieve data from wikipedia.org at runtime?
Suppose you are building a piece of software that at certain points displays information that came from Wikipedia.

If you want your program to display the information in a different way than can be seen in the live version, you'll probably need the wikicode that is used to enter it, instead of the finished HTML. Also, if you want to get all the data, you'll probably want to transfer it in the most efficient way possible. The wikipedia.org servers need to do quite a bit of work to convert the wikicode into HTML. That's time consuming both for you and for the wikipedia.org servers, so simply spidering all pages is not the way to go. To access any article in XML, one at a time, use Special:Export (for example, https://en.wikipedia.org/wiki/Special:Export/Title_of_the_article).
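
A minimal Python sketch of that one-article-at-a-time approach (the title is an example; identify yourself in the User-Agent header, as discussed above):

    import urllib.parse
    import urllib.request

    title = "Albert Einstein"  # example article
    url = "https://en.wikipedia.org/wiki/Special:Export/" + urllib.parse.quote(title)

    req = urllib.request.Request(url, headers={
        "User-Agent": "example-fetcher/0.1 (you@example.org)",
    })
    with urllib.request.urlopen(req) as resp:
        xml_text = resp.read().decode("utf-8")

    # The response is MediaWiki export XML containing the page's wikitext.
    print(xml_text[:300])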

Read more about this at Special:Export. Please be aware that live mirrors of Wikipedia that are dynamically loaded from the Wikimedia servers are prohibited.

Please do not use a web crawler
Please do not use a web crawler to download large numbers of articles. Aggressive crawling of the server can cause a dramatic slow-down of Wikipedia.

Sample blocked crawler email
IP address nnn.nnn.nnn.nnn was retrieving up to 50 pages per second from wikipedia.org addresses. Something like at least a one-second delay between requests is reasonable. Please respect that setting.

If you must exceed it a little, do so only during the least busy times shown in our site load graphs. It's worth noting that crawling the whole site at one hit per second will take several weeks.

The originating IP is now blocked or will be shortly. Please contact us if you want it unblocked.

Please don't try to circumvent it – we'll just block your whole IP range.

If you want information on how to get our content more efficiently, we offer a variety of methods, including weekly database dumps which you can load into MySQL and crawl locally at any rate you find convenient. Tools are also available which will do that for you as often as you like once you have the infrastructure in place. Instead of an email reply you may prefer to visit our IRC channel at irc.freenode.net to discuss your options with our team.

Doing SQL queries on the current database dump
You can do SQL queries on the current database dump using Quarry (as a replacement for the disabled Special:Asksql page).

Database schema

SQL schema
See also the MediaWiki database layout documentation. The SQL file used to initialize a MediaWiki database is included in the MediaWiki distribution.

XML schema
The XML schema for each dump is defined at the top of the file.

It is also described in the MediaWiki export documentation.

Help to parse dumps for use in scripts
The Parse::MediaWikiDump Perl library can parse XML dumps.
A preprocessing script is available that takes raw XML dumps, builds link tables and category hierarchies, collects anchor text for each article, etc.
A .NET library can read MySQL dumps without the need to use a MySQL database.

A Java program can parse XML dumps and extract entries into files.
Python-based scripts are available for parsing sql.gz files from Wikipedia dumps.

Doing Hadoop MapReduce on the Wikipedia current database dump
You can do Hadoop MapReduce queries on the current database dump, but you will need an extension to the InputRecordFormat to have each <page> be a single mapper input. A working set of Java methods (jobControl, mapper, reducer, and XmlInputRecordFormat) is available.

Help to import dumps into MySQL
See the MediaWiki manual on importing XML dumps.

Static HTML tree dumps for mirroring or CD distribution
MediaWiki 1.5 includes routines to dump a wiki to HTML, rendering the HTML with the same parser used on a live wiki. As the following page states, putting one of these dumps on the web unmodified will constitute a trademark violation. They are intended for private viewing in an intranet or desktop installation.

If you want to draft a traditional website in MediaWiki and dump it to HTML format, you might want to try one of the dedicated dump-to-HTML tools. If you'd like to help develop dump-to-static-HTML tools, please drop us a note on the developers' mailing list. Static HTML dumps are now available, but are not current. Some other (no longer working) options for getting static HTML dumps have also been listed.

Kiwix

Kiwix is by far the largest offline distribution of Wikipedia to date. As an offline reader, Kiwix works with a library of contents in the form of ZIM files: you can pick and choose whichever Wikimedia content you want (Wikipedia in any language, Wiktionary, Wikisource, etc.), as well as TED talks, PhET maths and physics simulations, and more. It is free and open source, and currently available for download on Android and iOS, as well as Windows, macOS, and GNU/Linux (see the project's download page).

Aard Dictionary
Aard Dictionary is an offline Wikipedia reader. Cross-platform for Windows, Mac, Linux, Android, and Maemo. Runs on rooted Nook and Sony PRS-T1 e-book readers.

E-book
An online store provides e-books created from a large set of Wikipedia articles, with grayscale images, for e-book readers (2013).

Wikiviewer for Rockbox
The wikiviewer plugin for Rockbox permits viewing converted Wikipedia dumps on many devices. It needs a custom build and conversion of the wiki dumps using the available instructions. The conversion recompresses the file and splits it into 1 GB files and an index file, which all need to be in the same folder on the device or micro SD card.

Old dumps
The static version of Wikipedia created by Wikimedia (Feb. 11, 2013) is apparently offline now; there was no content. An experimental program (site down as of October 2005) was set up to generate HTML dumps, inclusive of images, a search function, and an alphabetical index. At the linked site, experimental dumps and the script itself could be downloaded. As an example, it was used to generate copies of several wikis in both the old and the new database formats. A modified version was used to generate periodic static copies (site down as of October 2017).

Dynamic HTML generation from a local XML database dump
Instead of converting a database dump file to many pieces of static HTML, one can also use a dynamic HTML generator. Browsing a wiki page is just like browsing a wiki site, but the content is fetched and converted from a local dump file on request from the browser.

XOWA
XOWA is a free, open-source application that helps download Wikipedia to a computer. Access all of Wikipedia offline, without an internet connection! It is currently in the beta stage of development, but is functional.

It is available for download.

Features
Displays all articles from Wikipedia without an internet connection.
Download a complete, recent copy of English Wikipedia.

Display 5.2+ million articles in full HTML formatting.
Show images within an article. Access 3.7+ million images using the offline image databases.
Works with any Wikimedia wiki, including Wikipedia, Wiktionary, Wikisource, Wikiquote, and Wikivoyage (also some non-WMF dumps).
Works with any non-English language wiki such as French Wikipedia, German Wikisource, Dutch Wikivoyage, etc.

Works with other specialized wikis such as Wikidata, Wikimedia Commons, Wikispecies, or any other MediaWiki-generated dump.
Set up over 660 other wikis, including:
English Wiktionary.
English Wikisource.

English Wikiquote.
English Wikivoyage.
Non-English wikis, such as French Wiktionary, German Wikisource, Dutch Wikivoyage.
Wikidata.
Wikimedia Commons.
Wikispecies.
And many more!

Update your wiki whenever you want, using Wikimedia's database backups.
Navigate between offline wikis. Click on 'Look up this word in Wiktionary' and instantly view the page in Wiktionary.
Edit articles to remove vandalism or errors.

Install to a flash memory card for portability to other machines.
Run on Windows, Linux, and Mac OS X.
View the HTML for any wiki page.
Search for any page by title using a Wikipedia-like search box.
Browse pages in alphabetical order using Special:AllPages.
Find a word on a page.

Access a history of viewed pages.
Bookmark your favorite pages.