https://bugzilla.wikimedia.org/show_bug.cgi?id=62468

            Bug ID: 62468
           Summary: Add option to have the Internet Archiver (and/or other
                    robots) retrieve raw wikitext of all pages
           Product: MediaWiki
           Version: 1.23-git
          Hardware: All
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: Unprioritized
         Component: General/Unknown
          Assignee: wikibugs-l@lists.wikimedia.org
          Reporter: nathanlarson3...@gmail.com
       Web browser: ---
   Mobile Platform: ---

I propose adding an option to have the Internet Archiver (and/or other robots)
retrieve the raw wikitext of all pages. That way, if a wiki goes down, it
becomes much easier to create a successor wiki by gathering the data from the
Internet Archive. As it stands, all that can be obtained are the parsed pages.
That is fine for a static archive, but one might want to revive the wiki for
further editing.
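
For illustration, the raw wikitext is already reachable today through
index.php?action=raw and the API's list=allpages; the missing piece is getting
archivers to crawl those URLs. Below is a rough Python sketch of what such a
crawl looks like. Everything here is an assumption for illustration:
example-wiki.org is a placeholder endpoint, and the helper names
(all_page_titles, raw_wikitext) are made up for this sketch.

    import json
    import urllib.parse
    import urllib.request

    API = "https://example-wiki.org/w/api.php"  # placeholder wiki endpoint

    def all_page_titles():
        """Yield every page title via list=allpages, following continuation."""
        params = {"action": "query", "list": "allpages",
                  "aplimit": "500", "format": "json"}
        while True:
            url = API + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
            for page in data["query"]["allpages"]:
                yield page["title"]
            if "continue" not in data:
                break
            params.update(data["continue"])  # resume where the API left off

    def raw_wikitext(title):
        """Fetch the current raw wikitext of one page via action=raw."""
        url = ("https://example-wiki.org/w/index.php?" +
               urllib.parse.urlencode({"title": title, "action": "raw"}))
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    for title in all_page_titles():
        print(title, len(raw_wikitext(title)))

If the archiver simply followed links of this action=raw form, a complete,
re-importable copy of the wiki text would end up in the archive alongside the
parsed pages.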

One could suggest writing a script to convert the parsed pages retrieved from
the Internet Archive back into wikitext, but that will run into problems with
templates and the like, unless the script is designed to identify and recreate
them. It would be a much easier and cleaner solution to make the wikitext
available from the get-go.
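
To make the contrast concrete, here is a hedged sketch of what recovery could
look like if the raw-wikitext URLs had been crawled: ask the Wayback Machine's
availability API for the archived copy of a page's action=raw URL.
example-wiki.org is again a placeholder, and archived_raw is a hypothetical
helper; if only parsed HTML was ever crawled, this lookup comes back empty and
the much harder HTML-to-wikitext conversion is the only option left.

    import json
    import urllib.parse
    import urllib.request

    def archived_raw(title):
        """Look up a Wayback Machine snapshot of a page's raw-wikitext URL."""
        target = ("https://example-wiki.org/w/index.php?" +
                  urllib.parse.urlencode({"title": title, "action": "raw"}))
        query = urllib.parse.urlencode({"url": target})
        with urllib.request.urlopen(
                "https://archive.org/wayback/available?" + query) as resp:
            data = json.load(resp)
        closest = data.get("archived_snapshots", {}).get("closest")
        if not closest:
            return None  # nothing archived: only parsed HTML would remain
        with urllib.request.urlopen(closest["url"]) as resp:
            return resp.read().decode("utf-8")

    text = archived_raw("Main_Page")
    print(text[:200] if text else "No raw snapshot archived")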
