Aha, I see, I was conflating a few different issues then. For now, the mobile apps simply benefit from this hard work because it makes the CommonsMetadata API more reliable.
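To make that concrete, here's a minimal sketch in Python of the kind of query the CommonsMetadata extension answers, via prop=imageinfo&iiprop=extmetadata on the standard API. This isn't the apps' actual code, and the helper name and example file title are just illustrative; files the cleanup drive hasn't reached yet come back with an empty or near-empty extmetadata block.

import requests

API = "https://commons.wikimedia.org/w/api.php"

def get_extmetadata(file_title):
    # Ask the standard MediaWiki API for the machine-readable metadata
    # that the CommonsMetadata extension exposes as 'extmetadata'.
    params = {
        "action": "query",
        "titles": file_title,
        "prop": "imageinfo",
        "iiprop": "extmetadata",
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    page = next(iter(resp.json()["query"]["pages"].values()))
    imageinfo = page.get("imageinfo", [])
    return imageinfo[0].get("extmetadata", {}) if imageinfo else {}

meta = get_extmetadata("File:Example.jpg")
for key in ("LicenseShortName", "Artist", "ImageDescription"):
    # Pages without machine-readable templates typically have these missing.
    print(key, "=", meta.get(key, {}).get("value", "<missing>"))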
So, I'm definitely back to THIS IS AWESOME!

Thanks,
Dan

On 11 December 2014 at 16:56, Erik Moeller <[email protected]> wrote:
> On Thu, Dec 11, 2014 at 5:41 PM, Ricordisamoa
> <[email protected]> wrote:
> > As far as I understand the information Guillaume is talking about is
> > exactly the one scraped by CommonsMetadata.
> > See https://tools.wmflabs.org/mrmetadata/how_it_works.html:
> > «The script needs to go through all file description pages of a wiki, and
> > check for machine-readable metadata by querying the CommonsMetadata
> > extension.»
>
> That's correct, the whole purpose of the cleanup drive is to make sure
> that there's something scrape-able to begin with, i.e. to eliminate
> the cases where you just get nothing useful back from the
> CommonsMetadata extension. This sets the stage for potential further
> work along the lines of
> https://commons.wikimedia.org/wiki/Commons:Structured_data -- which is
> pretty meaty and complex work in its own right.
>
> Erik
> --
> Erik Möller
> VP of Product & Strategy, Wikimedia Foundation

--
Dan Garry
Associate Product Manager, Mobile Apps
Wikimedia Foundation
