If we had a listing of all the wiki pages, we could use a wget script to grab them all from the Google cache. From there it would be a straightforward copy-and-paste into the MediaWiki. Ideally, though, we would get an lwIP DB dump from the scribblewiki folks, which we could then merge into our own wiki DB.
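Something like this rough sketch is what I had in mind, assuming we had the page titles one per line in a pages.txt file; the cache URL form and the lwip.scribblewiki.com host are guesses on my part, and Google tends to refuse non-browser user agents, hence the -U and the sleep:

#!/bin/sh
# Rough sketch: pull each wiki page out of the Google cache.
# Assumes pages.txt holds one page title per line (e.g. "Raw_API");
# both that file and the cache URL form below are my guesses.
while read page; do
    wget -U Mozilla -O "cache_${page}.html" \
        "http://www.google.com/search?q=cache:lwip.scribblewiki.com/${page}"
    sleep 2    # go slowly so Google doesn't start refusing requests
done < pages.txt

The downloaded files would still be wrapped in Google's cache banner HTML, so they'd need some cleanup before being pasted into MediaWiki, but at least nothing would be lost.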
As a side note, does anyone have a cool lwIP logo? Everything I do looks like bad programmer art.

On Thu, Sep 25, 2008 at 10:04 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Grubb, Jared wrote:
>>
>> So, what we need now is a DB dump from the old wiki... I'm not sure how to
>> get that.
>> jared
>>
>
> That won't be easy - unless there is another way than asking the hosters of
> the 'old' wiki (which really is the 'current' :)
> Anyway, who tells us the 'old' wiki will be online again at all to grab the
> pages back? I tried the google cache method, but all links lead to the real
> site, which makes it pretty hard to even grab the whole wiki as a backup of
> html pages... (can't use a simple crawler and tell it to stay on the google
> cache server ignoring extern links)
>
> Does anyone have a better idea? I'm a little afraid of losing the pages
> built so far!
>
> Simon

--
Thomas Taranowski
Certified netburner consultant
baringforge.com
