Robert Rohde wrote:
> That wouldn't get you file descriptions or copyright status, etc. If
> your goal is something like mirroring a wiki, you really need access
> to page descriptions as well.
>
> At present, the main solution is to copy all of Commons, which is
> overkill for many applications. It would be nice if the dump
> generator had a way of parsing out only the relevant Commons content.
>
> -Robert Rohde
I'd expect a "commons selected dump" to be pretty similar to pages-articles. What you can do is to request just the images used, with Special:Export or the API (depending on how small those wikis really are, that may or may not be feasible).

_______________________________________________
Wikitech-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
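The "request just the images used" idea above could be sketched roughly like this: list the File: titles embedded in a page via the API (`prop=images`), then fetch their current description wikitext from Commons through Special:Export. This is only a sketch under assumptions; the host names and example page title are placeholders, not anything from the original messages, and for a wiki of any real size you would want batching and rate limiting.

```python
# Sketch: gather the Commons description pages for images used on a wiki
# page. The API endpoint, Commons URL, and page title below are examples.
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"                 # example source wiki
COMMONS = "https://commons.wikimedia.org/wiki/Special:Export"

def images_on_page(title):
    """Return the File: titles embedded in one page (prop=images),
    following API continuation."""
    titles = []
    params = {"action": "query", "prop": "images", "titles": title,
              "imlimit": "max", "format": "json"}
    while True:
        url = API + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        for page in data["query"]["pages"].values():
            titles.extend(img["title"] for img in page.get("images", []))
        if "continue" not in data:
            return titles
        params.update(data["continue"])

def build_export_url(file_titles):
    """Special:Export URL that returns an XML dump of the current
    revision of each File: page ('pages' is newline-separated)."""
    query = urllib.parse.urlencode(
        {"pages": "\n".join(file_titles), "curonly": "1"})
    return COMMONS + "?" + query

# Usage (needs network access):
#   xml_url = build_export_url(images_on_page("Example"))
#   dump_xml = urllib.request.urlopen(xml_url).read()
```

That gets you the description wikitext (and so the license templates), which is exactly what an image-only download misses.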
