I actually had a similar thought. I downloaded an XML dump of the Wikipedia 
database from here: http://en.m.wikipedia.org/wiki/Wikipedia:Database_download 
and then I wrote a fairly simple Python script that would traverse the XML 
document tree, creating an individual file for each article (that wasn't a 
redirect) and formatting it like a tiddler. Then I was able to use the NodeJS 
version of TiddlyWiki to start up using this directory of tiddlers.

The first thing I noticed was how massive the memory footprint was and how slow 
the site was to load (and I only took the first 10,000 articles). So TW is 
unfortunately not fast or robust enough to load the entire Wikipedia database. 
But if you have your own MediaWiki installation that you would like to convert, 
then, as others have pointed out, there are ways to dump the database that can 
work, and TW should be able to handle it if you have a smaller number of 
articles. You may have to convert the syntax yourself, though, unless someone 
makes a plugin to add MediaWiki syntax support.
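A rough sketch of what that syntax conversion could look like, using regular expressions and handling only headings, bold, and italics (MediaWiki's grammar has far more corner cases than this covers):

```python
import re

def mediawiki_to_tw5(text):
    """Convert a few common MediaWiki constructs to TiddlyWiki 5 wikitext.

    Headings: == H == becomes !! H; bold ''' becomes ''; italic '' becomes //.
    [[links]] use the same bracket syntax in both, so they pass through as-is.
    """
    # Headings: the number of '=' signs maps to the number of '!' signs.
    text = re.sub(r'^(={1,6})\s*(.+?)\s*\1\s*$',
                  lambda m: '!' * len(m.group(1)) + ' ' + m.group(2),
                  text, flags=re.M)
    # Italics first, with lookarounds so ''' (bold) markers are left alone.
    text = re.sub(r"(?<!')''(?!')(.*?)(?<!')''(?!')", r'//\1//', text)
    # Bold: MediaWiki ''' ... ''' becomes TiddlyWiki '' ... ''.
    text = re.sub(r"'''(.*?)'''", r"''\1''", text)
    return text
```

Templates, tables, and nested bold-italics would all need real parsing rather than regexes, which is why a proper plugin would be the better long-term answer.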

If you or anyone else wants the python script just let me know and I'll post it.

-- 
You received this message because you are subscribed to the Google Groups 
"TiddlyWiki" group.