Hi Ian,

I’ve recovered this script. Are you able to review it? [1] There is a known issue caused by an incorrect migration from compat to core, but the script should work anyway.
Best
xqt

[1] https://gerrit.wikimedia.org/r/c/pywikibot/core/+/757889

> On 28.01.2022 at 11:15, Ian Watt <ianw...@gmail.com> wrote:
>
> Thank you, xqt
>
> That’s really helpful.
>
> Ian
>
> Ian Watt
> ianw...@gmail.com
>
>> On 28 Jan 2022, at 06:13, i...@gno.de wrote:
>>
>> Hi Ian,
>>
>> data_ingestion.py is still available but is not tested. There might be some breaking changes between 5.6 and 7.0 which can cause the script to fail.
>>
>> Either you have to prepend the archive folder when calling the script, or add the path to user_script_paths in your user-config.py [1].
>>
>> You cannot go back to an older Pywikibot version (pre 6.0), because Pywikibot 6.6.1 is required for the current MediaWiki release used at Commons. [2]
>>
>> Possibly I will find some time to recover the script shortly.
>>
>> Best
>> xqt
>>
>> [1] https://doc.wikimedia.org/pywikibot/master/api_ref/pywikibot.config.html?highlight=config#external-script-path-settings
>>
>> [2] https://www.mediawiki.org/wiki/Manual:Pywikibot/Compatibility
>>
>>> On 27.01.2022 at 17:07, Ian Watt <ianw...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I’m assisting our local Museums and Galleries on a project to open up around 4,000 images as CC0 via Commons. We picked a bad time to do it (Pattypan being borked).
>>>
>>> I’m a Pywikibot noob - although I have some long-term familiarity with Python. I’m trying to work out if using PWB might be a route to get these images onto Commons.
>>>
>>> I have both downloaded image files and URLs which I can point a script at - as well as good metadata for them.
>>>
>>> I’ve been looking at pre-canned PWB scripts and see that data_ingestion *might* do the trick.
>>>
>>> I see that it is in /scripts/archive/
>>>
>>> Is this still a viable script - or is it deprecated in some way?
>>>
>>> Does anyone have a guide for using it beyond the comments at the top of the script?
>>>
>>> I had a look at /tests/data/csv_ingestion.csv and it looks kind of bare - I’d expect more fields etc. I’d rather construct something more like the metadata fields that I’d use with Pattypan - rather than be faced with 4,000 files uploaded and have to add metadata to them in a separate process or *shudder* manually.
>>>
>>> Any suggestions (including ‘don’t do this’) with explanations would be welcome please.
>>>
>>> Many thanks
>>>
>>> Ian
>>>
>>> Ian Watt
>>> ianw...@gmail.com
>>>
>>> watty62
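[Editor's note: the user_script_paths option mentioned in the quoted mail is set in user-config.py. A minimal sketch, assuming a Pywikibot source checkout where the archived scripts live under scripts/archive/; the family, language, and username values below are placeholders, not values from this thread:]

```python
# user-config.py -- minimal sketch for running an archived script on Commons.
# Replace 'YourBotAccount' with your own bot account name.
family = 'commons'
mylang = 'commons'
usernames['commons']['commons'] = 'YourBotAccount'

# Let "pwb.py data_ingestion" find scripts kept in the archive folder.
# Whether a relative path works here depends on your checkout layout;
# an absolute path to scripts/archive/ is the safe choice.
user_script_paths = ['scripts/archive']
```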
_______________________________________________ pywikibot mailing list -- pywikibot@lists.wikimedia.org To unsubscribe send an email to pywikibot-le...@lists.wikimedia.org
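[Editor's note: on the richer-metadata question, whatever CSV layout data_ingestion itself expects, a spreadsheet with one row per file and explicit metadata columns is straightforward to turn into Commons {{Information}} wikitext. A minimal sketch; the column names and sample values here are invented for illustration and are not the format data_ingestion requires:]

```python
import csv
import io

# Hypothetical CSV: one row per image, columns chosen for this sketch only.
SAMPLE = """\
filename,description,date,author,source
museum_001.jpg,Bronze age axe head,2021-06-01,Example Museums Service,own digitisation
"""

def row_to_information(row):
    """Render one CSV row as a Commons {{Information}} file description."""
    return (
        "=={{int:filedesc}}==\n"
        "{{Information\n"
        f"|description = {row['description']}\n"
        f"|date        = {row['date']}\n"
        f"|source      = {row['source']}\n"
        f"|author      = {row['author']}\n"
        "}}"
    )

pages = [row_to_information(r) for r in csv.DictReader(io.StringIO(SAMPLE))]
print(pages[0])
```

The resulting wikitext could then be passed as the page text when uploading, keeping the 4,000 descriptions out of any manual post-upload step.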