jenkins-bot has submitted this change and it was merged. ( https://gerrit.wikimedia.org/r/323704 )

Change subject: [DOC] Keep -help doc beneath 79 chars
......................................................................


[DOC] Keep -help doc beneath 79 chars

- Help doc lines must not exceed 79 chars; an 80-char line fills the
  terminal width, so an additional line feed is printed.
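The wrapping behaviour behind this rationale can be sketched as follows. This is a minimal illustration of my own (the function and constant names are not part of pywikibot, and actual wrapping depends on the terminal): in an 80-column terminal, a line of exactly 80 characters fills the row by itself, so the trailing newline starts an extra, visually blank row. Keeping help text at 79 characters or fewer avoids that.

```python
# Hypothetical sketch, not pywikibot code: count how many terminal rows
# printing a line (plus its newline) consumes in an 80-column terminal.
TERMINAL_WIDTH = 80


def rows_used(line, width=TERMINAL_WIDTH):
    """Return the number of terminal rows used by `line` + newline."""
    # Rows occupied by the text itself (ceiling division, at least one row).
    rows = max(1, -(-len(line) // width))
    # When the text ends exactly at the right margin, the newline
    # begins a fresh, empty row: the "additional line feed" effect.
    if line and len(line) % width == 0:
        rows += 1
    return rows


print(rows_used('x' * 79))  # 1 row: text and newline fit together
print(rows_used('x' * 80))  # 2 rows: the newline adds a blank line
```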

Change-Id: Ic8255b49ec52cbb6b271b2669fd030ee44cfdfe3
---
M scripts/archivebot.py
M scripts/blockpageschecker.py
M scripts/checkimages.py
M scripts/coordinate_import.py
M scripts/illustrate_wikidata.py
M scripts/image.py
M scripts/imagecopy_self.py
M scripts/imagerecat.py
M scripts/imageuncat.py
M scripts/interwiki.py
M scripts/patrol.py
M scripts/redirect.py
M scripts/reflinks.py
M scripts/template.py
M scripts/transferbot.py
M scripts/upload.py
16 files changed, 62 insertions(+), 62 deletions(-)

Approvals:
  Magul: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/scripts/archivebot.py b/scripts/archivebot.py
index ea20c78..4e28140 100755
--- a/scripts/archivebot.py
+++ b/scripts/archivebot.py
@@ -34,9 +34,9 @@
                      Must be a subpage of the current page. Variables are
                      supported.
 algo                 specifies the maximum age of a thread. Must be in the form
-                     old(<delay>) where <delay> specifies the age in minutes(m),
-                     hours (h), days (d), weeks(w), months (M) or years (y)
-                     like 24h or 5d. Default is old(24h)
+                     old(<delay>) where <delay> specifies the age in
+                     minutes (m), hours (h), days (d), weeks(w), months (M) or
+                     years (y)  like 24h or 5d. Default is old(24h)
 counter              The current value of a counter which could be assigned as
                      variable. Will be actualized by bot. Initial value is 1.
 maxarchivesize       The maximum archive size before incrementing the counter.
diff --git a/scripts/blockpageschecker.py b/scripts/blockpageschecker.py
index e90b0c5..0e9bd35 100755
--- a/scripts/blockpageschecker.py
+++ b/scripts/blockpageschecker.py
@@ -26,8 +26,8 @@
 
 Furthermore, the following command line parameters are supported:
 
--always         Doesn't ask every time if the bot should make the change or not,
-                do it always.
+-always         Doesn't ask every time whether the bot should make the change.
+                Do it always.
 
 -show           When the bot can't delete the template from the page (wrong
                 regex or something like that) it will ask you if it should show
diff --git a/scripts/checkimages.py b/scripts/checkimages.py
index 35a0a02..40ad67a 100755
--- a/scripts/checkimages.py
+++ b/scripts/checkimages.py
@@ -60,10 +60,8 @@
 right parameter.
 
 * Name=     Set the name of the block
-* Find=     Use it to define what search in the text of the image's description,
-            while
-  Findonly= search only if the exactly text that you give is in the image's
-            description.
+* Find=     search this text in the image's description
+* Findonly= search for exactly this text in the image's description
 * Summary=  That's the summary that the bot will use when it will notify the
             problem.
 * Head=     That's the incipit that the bot will use for the message.
@@ -71,8 +69,7 @@
             image's problem.
 
 ---- Known issues/FIXMEs: ----
-* Clean the code, some passages are pretty difficult to understand if you're not
-  the coder.
+* Clean the code, some passages are pretty difficult to understand.
 * Add the "catch the language" function for commons.
 * Fix and reorganise the new documentation
 * Add a report for the image tagged.
diff --git a/scripts/coordinate_import.py b/scripts/coordinate_import.py
index 0d64b28..7ed163d 100755
--- a/scripts/coordinate_import.py
+++ b/scripts/coordinate_import.py
@@ -11,7 +11,8 @@
 This will work on all pages in the category "coordinates not on Wikidata" and
 will import the coordinates on these pages to Wikidata.
 
-The data from the "GeoData" extension (https://www.mediawiki.org/wiki/GeoData)
+The data from the "GeoData" extension
+(https://www.mediawiki.org/wiki/Extension:GeoData)
 is used so that extension has to be setup properly. You can look at the
 [[Special:Nearby]] page on your local Wiki to see if it's populated.
 
diff --git a/scripts/illustrate_wikidata.py b/scripts/illustrate_wikidata.py
index fcf225a..6e0f91a 100755
--- a/scripts/illustrate_wikidata.py
+++ b/scripts/illustrate_wikidata.py
@@ -1,10 +1,11 @@
 #!/usr/bin/python
 # -*- coding: utf-8 -*-
 """
-Bot to add images to Wikidata items. The image is extracted from the page_props.
+Bot to add images to Wikidata items.
 
-For this to be available the PageImages extension
-(https://www.mediawiki.org/wiki/Extension:PageImages) needs to be installed
+The image is extracted from the page_props. For this to be available the
+PageImages extension (https://www.mediawiki.org/wiki/Extension:PageImages)
+needs to be installed
 
 Usage:
 
diff --git a/scripts/image.py b/scripts/image.py
index 6ff6f23..c7267f7 100755
--- a/scripts/image.py
+++ b/scripts/image.py
@@ -26,8 +26,8 @@
 
 Examples:
 
-The image "FlagrantCopyvio.jpg" is about to be deleted, so let's first remove it
-from everything that displays it:
+The image "FlagrantCopyvio.jpg" is about to be deleted, so let's first remove
+it from everything that displays it:
 
     python pwb.py image FlagrantCopyvio.jpg
 
diff --git a/scripts/imagecopy_self.py b/scripts/imagecopy_self.py
index e55d945..1e5018a 100644
--- a/scripts/imagecopy_self.py
+++ b/scripts/imagecopy_self.py
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-Script to copy self published files from English Wikipedia to Wikimedia Commons.
+Script to copy self published files from English Wikipedia to Commons.
 
 This bot is based on imagecopy.py and intended to be used to empty out
 http://en.wikipedia.org/wiki/Category:Self-published_work
diff --git a/scripts/imagerecat.py b/scripts/imagerecat.py
index a8d0bba..cf098df 100755
--- a/scripts/imagerecat.py
+++ b/scripts/imagerecat.py
@@ -9,8 +9,8 @@
 
 The following command line parameters are supported:
 
--onlyfilter     Don't use Commonsense to get categories, just filter the current
-                categories
+-onlyfilter     Don't use Commonsense to get categories, just filter the
+                current categories
 
 -onlyuncat      Only work on uncategorized images. Will prevent the bot from
                 working on an image multiple times.
diff --git a/scripts/imageuncat.py b/scripts/imageuncat.py
index 38cdaf8..476bac3 100755
--- a/scripts/imageuncat.py
+++ b/scripts/imageuncat.py
@@ -3,16 +3,17 @@
 """
 Program to add uncat template to images without categories at commons.
 
-See imagerecat.py (still working on that one) to add these images to categories.
+See imagerecat.py to add these images to categories.
 
 This script is working on the given site, so if the commons should be handled,
 the site commons should be given and not a Wikipedia or similar.
 
--yesterday        Go through all uploads from yesterday. (Deprecated here, moved
-                  to pagegenerators)
+-yesterday        Go through all uploads from yesterday. (Deprecated here,
+                  moved to pagegenerators)
 
--recentchanges    Go through the changes made from 'offset' minutes with 'duration'
-                  minutes of timespan. It must be given two arguments as
+-recentchanges    Go through the changes made from 'offset' minutes with
+                  'duration' minutes of timespan. It must be given two
+                  arguments as
                   '-recentchanges:offset,duration'
 
                   Default value of offset is 120, and that of duration is 70
diff --git a/scripts/interwiki.py b/scripts/interwiki.py
index bcc6ab0..baf0c5a 100755
--- a/scripts/interwiki.py
+++ b/scripts/interwiki.py
@@ -49,9 +49,9 @@
                    process interrupts again, it saves all unprocessed pages in
                    one new dump file of the given site.
 
-    -continue:     like restore, but after having gone through the dumped pages,
-                   continue alphabetically starting at the last of the dumped
-                   pages. The dump file will be subsequently removed.
+    -continue:     like restore, but after having gone through the dumped
+                   pages, continue alphabetically starting at the last of the
+                   dumped pages. The dump file will be subsequently removed.
 
     -warnfile:     used as -warnfile:filename, reads all warnings from the
                    given file that apply to the home wiki language,
@@ -113,7 +113,7 @@
 
     -hintsonly     The bot does not ask for a page to work on, even if none of
                    the above page sources was specified. This will make the
-                   first existing page of -hint or -hinfile slip in as the start
+                   first existing page of -hint or -hinfile slip in as start
                    page, determining properties like namespace, disambiguation
                    state, and so on. When no existing page is found in the
                    hints, the bot does nothing.
@@ -134,23 +134,23 @@
 
                    There are some special hints, trying a number of languages
                    at once:
-                      * all:       All languages with at least ca. 100 articles.
+                      * all:       All languages with at least ca. 100 articles
                       * 10:        The 10 largest languages (sites with most
                                    articles). Analogous for any other natural
-                                   number.
-                      * arab:      All languages using the Arabic alphabet.
-                      * cyril:     All languages that use the Cyrillic alphabet.
-                      * chinese:   All Chinese dialects.
-                      * latin:     All languages using the Latin script.
-                      * scand:     All Scandinavian languages.
+                                   number
+                      * arab:      All languages using the Arabic alphabet
+                      * cyril:     All languages that use the Cyrillic alphabet
+                      * chinese:   All Chinese dialects
+                      * latin:     All languages using the Latin script
+                      * scand:     All Scandinavian languages
 
                    Names of families that forward their interlanguage links
                    to the wiki family being worked upon can be used, they are:
-                      * commons:   Interlanguage links of Mediawiki Commons.
-                      * incubator: Links in pages on the Mediawiki Incubator.
-                      * meta:      Interlanguage links of named pages on Meta.
-                      * species:   Interlanguage links of the wikispecies wiki.
-                      * strategy:  Links in pages on Wikimedias strategy wiki.
+                      * commons:   Interlanguage links of Mediawiki Commons
+                      * incubator: Links in pages on the Mediawiki Incubator
+                      * meta:      Interlanguage links of named pages on Meta
+                      * species:   Interlanguage links of the wikispecies wiki
+                      * strategy:  Links in pages on Wikimedias strategy wiki
                       * test:      Take interwiki links from Test Wikipedia
 
                    Languages, groups and families having the same page title
@@ -337,7 +337,7 @@
 # (C) Daniel Herding, 2004
 # (C) Yuri Astrakhan, 2005-2006
 # (C) xqt, 2009-2014
-# (C) Pywikibot team, 2007-2016
+# (C) Pywikibot team, 2007-2017
 #
 # Distributed under the terms of the MIT license.
 #
diff --git a/scripts/patrol.py b/scripts/patrol.py
index 10f1db9..eb5b4a2 100755
--- a/scripts/patrol.py
+++ b/scripts/patrol.py
@@ -17,8 +17,8 @@
 which start with the mentioned link (e.g. [[foo]] will also patrol [[foobar]]).
 
 To avoid redlinks it's possible to use Special:PrefixIndex as a prefix so that
-it will list all pages which will be patrolled. The page after the slash will be
-used then.
+it will list all pages which will be patrolled. The page after the slash will
+be used then.
 
 On Wikisource, it'll also check if the page is on the author namespace in which
 case it'll also patrol pages which are linked from that page.
diff --git a/scripts/redirect.py b/scripts/redirect.py
index 2c9fadb..f4bf614 100755
--- a/scripts/redirect.py
+++ b/scripts/redirect.py
@@ -17,9 +17,9 @@
 
 broken         Tries to fix redirect which point to nowhere by using the last
 br             moved target of the destination page. If this fails and the
-               -delete option is set, it either deletes the page or marks it for
-               deletion depending on whether the account has admin rights. It
-               will mark the redirect not for deletion if there is no speedy
+               -delete option is set, it either deletes the page or marks it
+               for deletion depending on whether the account has admin rights.
+               It will mark the redirect not for deletion if there is no speedy
                deletion template available. Shortcut action command is "br".
 
 both           Both of the above. Retrieves redirect pages from live wiki,
diff --git a/scripts/reflinks.py b/scripts/reflinks.py
index 1e930a4..fd3352f 100755
--- a/scripts/reflinks.py
+++ b/scripts/reflinks.py
@@ -3,13 +3,13 @@
 """
 Fetch and add titles for bare links in references.
 
-This bot will search for references which are only made of a link without title,
+This bot will search for references which are only made of a link without title
 (i.e. <ref>[https://www.google.fr/]</ref> or <ref>https://www.google.fr/</ref>)
 and will fetch the html title from the link to use it as the title of the wiki
 link in the reference, i.e.
 <ref>[https://www.google.fr/search?q=test test - Google Search]</ref>
 
-The bot checks every 20 edits a special stop page : if the page has been edited,
+The bot checks every 20 edits a special stop page. If the page has been edited,
 it stops.
 
 DumZiBoT is running that script on en: & fr: at every new dump, running it on
diff --git a/scripts/template.py b/scripts/template.py
index 66f3e95..334c226 100755
--- a/scripts/template.py
+++ b/scripts/template.py
@@ -48,8 +48,8 @@
 
 -addcat:     Appends the given category to every page that is edited. This is
              useful when a category is being broken out from a template
-             parameter or when templates are being upmerged but more information
-             must be preserved.
+             parameter or when templates are being upmerged but more
+             information must be preserved.
 
 other:       First argument is the old template name, second one is the new
              name.
diff --git a/scripts/transferbot.py b/scripts/transferbot.py
index b9a690e..e6ece36 100755
--- a/scripts/transferbot.py
+++ b/scripts/transferbot.py
@@ -22,8 +22,8 @@
 
 Example commands:
 
-Transfer all pages in category "Query service" from the English Wikipedia to the
-Arabic Wiktionary, adding "Wiktionary:Import enwp/" as prefix:
+Transfer all pages in category "Query service" from the English Wikipedia to
+the Arabic Wiktionary, adding "Wiktionary:Import enwp/" as prefix:
 
     python pwb.py transferbot -family:wikipedia -lang:en -cat:"Query service" \
         -tofamily:wiktionary -tolang:ar -prefix:"Wiktionary:Import enwp/"
diff --git a/scripts/upload.py b/scripts/upload.py
index 46b40e2..0eee51d 100755
--- a/scripts/upload.py
+++ b/scripts/upload.py
@@ -33,17 +33,17 @@
 It is possible to combine -abortonwarn and -ignorewarn so that if the specific
 warning is given it won't apply the general one but more specific one. So if it
 should ignore specific warnings and abort on the rest it's possible by defining
-no warning for -abortonwarn and the specific warnings for -ignorewarn. The order
-does not matter. If both are unspecific or a warning is specified by both, it'll
-prefer aborting.
+no warning for -abortonwarn and the specific warnings for -ignorewarn. The
+order does not matter. If both are unspecific or a warning is specified by
+both, it'll prefer aborting.
 
-If any other arguments are given, the first is either URL, filename or directory
-to upload, and the rest is a proposed description to go with the upload. If none
-of these are given, the user is asked for the directory, file or URL to upload.
-The bot will then upload the image to the wiki.
+If any other arguments are given, the first is either URL, filename or
+directory to upload, and the rest is a proposed description to go with the
+upload. If none of these are given, the user is asked for the directory, file
+or URL to upload. The bot will then upload the image to the wiki.
 
-The script will ask for the location of an image(s), if not given as a parameter,
-and for a description.
+The script will ask for the location of an image(s), if not given as a
+parameter, and for a description.
 """
 #
 # (C) Rob W.W. Hooft, Andre Engels 2003-2004

-- 
To view, visit https://gerrit.wikimedia.org/r/323704
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ic8255b49ec52cbb6b271b2669fd030ee44cfdfe3
Gerrit-PatchSet: 3
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Xqt <i...@gno.de>
Gerrit-Reviewer: John Vandenberg <jay...@gmail.com>
Gerrit-Reviewer: Magul <tomasz.magul...@gmail.com>
Gerrit-Reviewer: Sn1per <geof...@gmail.com>
Gerrit-Reviewer: Xqt <i...@gno.de>
Gerrit-Reviewer: jenkins-bot <>

_______________________________________________
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits
