jenkins-bot has submitted this change and it was merged. ( https://gerrit.wikimedia.org/r/476106 )

Change subject: [PEP8] Fixed W504 violations in scripts/ directory
......................................................................

[PEP8] Fixed W504 violations in scripts/ directory

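For context, pycodestyle's W504 flags a line break placed *after* a binary operator; the fix, following PEP 8's current recommendation (and the pattern used throughout the diff below), moves the operator to the start of the continuation line. A minimal sketch of the before/after style (the variable names are illustrative, not from the patch):

```python
# W504-violating style: binary operators trail at the end of each line
total_before = (1 + 2 +
                3 + 4)

# Fixed style: the operator leads the continuation line (W504-clean)
total_after = (1 + 2
               + 3 + 4)

# The change is purely stylistic; both expressions evaluate identically.
print(total_before == total_after)  # both equal 10
```

The same pattern applies to `and`/`or` in multi-line conditions, which is most of what this change touches.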
Files modified:
- scripts/maintenance/download_dump.py:112
- scripts/maintenance/download_dump.py:147
- scripts/maintenance/download_dump.py:148
- scripts/maintenance/download_dump.py:155
- scripts/maintenance/download_dump.py:190
- scripts/archive/featured.py:568
- scripts/archive/featured.py:576
- scripts/archive/featured.py:577
- scripts/archive/featured.py:590
- scripts/archive/featured.py:591
- scripts/replace.py:634
- scripts/replace.py:1107
- scripts/replace.py:1135
- scripts/newitem.py:141
- scripts/imagecopy_self.py:487
- scripts/imagecopy_self.py:541
- scripts/imagecopy_self.py:542
- scripts/imagecopy_self.py:553
- scripts/imagecopy_self.py:994
- scripts/patrol.py:322
- scripts/upload.py:188
- scripts/commonscat.py:355
- scripts/wikisourcetext.py:119
- scripts/makecat.py:26
- scripts/category_redirect.py:156
- scripts/clean_sandbox.py:204
- scripts/fixing_redirects.py:118
- scripts/fixing_redirects.py:119
- scripts/fixing_redirects.py:120
- scripts/fixing_redirects.py:122
- scripts/imagecopy.py:278
- scripts/imagecopy.py:279
- scripts/imagecopy.py:585
- scripts/delete.py:201
- scripts/weblinkchecker.py:615
- scripts/image.py:93
- scripts/interwikidata.py:75
- scripts/category.py:936
- scripts/category.py:1198
- scripts/unusedfiles.py:58
- scripts/unusedfiles.py:71
- scripts/reflinks.py:181
- scripts/reflinks.py:183
- scripts/lonelypages.py:85
- scripts/lonelypages.py:86
- scripts/script_wui.py:291
- scripts/redirect.py:217
- scripts/redirect.py:364
- scripts/version.py:49
- scripts/version.py:50

Bug: T207836
Change-Id: I56fa521246851d0607620e6efeff637c9ee186ba
---
M scripts/archive/featured.py
M scripts/category.py
M scripts/category_redirect.py
M scripts/clean_sandbox.py
M scripts/commonscat.py
M scripts/delete.py
M scripts/fixing_redirects.py
M scripts/image.py
M scripts/imagecopy.py
M scripts/imagecopy_self.py
M scripts/interwikidata.py
M scripts/lonelypages.py
M scripts/maintenance/download_dump.py
M scripts/makecat.py
M scripts/newitem.py
M scripts/patrol.py
M scripts/redirect.py
M scripts/reflinks.py
M scripts/replace.py
M scripts/script_wui.py
M scripts/unusedfiles.py
M scripts/upload.py
M scripts/version.py
M scripts/weblinkchecker.py
M scripts/wikisourcetext.py
25 files changed, 92 insertions(+), 91 deletions(-)

Approvals:
  D3r1ck01: Looks good to me, but someone else must approve
  Xqt: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/scripts/archive/featured.py b/scripts/archive/featured.py
index 5f5c406..0dea1e9 100755
--- a/scripts/archive/featured.py
+++ b/scripts/archive/featured.py
@@ -565,17 +565,17 @@
                 pywikibot.output('(already added)')
             else:
                 # insert just before interwiki
-                if (not interactive or
-                    pywikibot.input_yn(
+                if (not interactive
+                    or pywikibot.input_yn(
                         'Connecting %s -> %s. Proceed?'
                         % (source.title(), dest.title()),
                         default=False, automatic_quit=False)):
                     if self.getOption('side'):
                         # Placing {{Link FA|xx}} right next to
                         # corresponding interwiki
-                        text = (text[:m1.end()] +
-                                ' {{%s|%s}}' % (add_tl[0], fromsite.code) +
-                                text[m1.end():])
+                        text = (text[:m1.end()]
+                                + ' {{%s|%s}}' % (add_tl[0], fromsite.code)
+                                + text[m1.end():])
                     else:
                         # Moving {{Link FA|xx}} to top of interwikis
                         iw = textlib.getLanguageLinks(text, tosite)
@@ -587,9 +587,9 @@
                     changed = True
         if remove_tl:
             if m2:
-                if (changed or  # Don't force the user to say "Y" twice
-                    not interactive or
-                    pywikibot.input_yn(
+                if (changed  # Don't force the user to say "Y" twice
+                    or not interactive
+                    or pywikibot.input_yn(
                         'Connecting %s -> %s. Proceed?'
                         % (source.title(), dest.title()),
                         default=False, automatic_quit=False)):
diff --git a/scripts/category.py b/scripts/category.py
index 2ba64b6..491c3dd 100755
--- a/scripts/category.py
+++ b/scripts/category.py
@@ -933,8 +933,8 @@

         listString = ''
         for article in setOfArticles:
-            if (not article.is_filepage() or
-                    self.showImages) and not article.is_categorypage():
+            if (not article.is_filepage()
+                    or self.showImages) and not article.is_categorypage():
                 if self.talkPages and not article.isTalkPage():
                     listString += '* [[{0}]] -- [[{1}|talk]]\n'.format(
                         article.title(), article.toggleTalkPage().title())
@@ -1195,8 +1195,8 @@
             new_cat_title = pywikibot.input('Please enter the category '
                                             'the page should be moved to:',
                                             default=None)  # require an answer
-            new_cat = pywikibot.Category(pywikibot.Link('Category:' +
-                                                        new_cat_title))
+            new_cat = pywikibot.Category(pywikibot.Link('Category:'
+                                                        + new_cat_title))
             # recurse into chosen category
             self.move_to_category(member, original_cat, new_cat)

diff --git a/scripts/category_redirect.py b/scripts/category_redirect.py
index ccc68cc..b844b93 100755
--- a/scripts/category_redirect.py
+++ b/scripts/category_redirect.py
@@ -153,8 +153,8 @@
                         'categorymembers', cmtitle=oldCat.title(),
                         cmprop='title|sortkey', cmnamespace='10',
                         cmlimit='max'):
-                    doc = pywikibot.Page(pywikibot.Link(item['title'] +
-                                                        '/doc', self.site))
+                    doc = pywikibot.Page(pywikibot.Link(item['title']
+                                                        + '/doc', self.site))
                     try:
                         doc.get()
                     except pywikibot.Error:
diff --git a/scripts/clean_sandbox.py b/scripts/clean_sandbox.py
index e72f4df..d47ec0f 100755
--- a/scripts/clean_sandbox.py
+++ b/scripts/clean_sandbox.py
@@ -202,8 +202,8 @@
                         pywikibot.output('Standard content was changed, '
                                          'sandbox cleaned.')
                     else:
-                        edit_delta = (datetime.datetime.utcnow() -
-                                      sandbox_page.editTime())
+                        edit_delta = (datetime.datetime.utcnow()
+                                      - sandbox_page.editTime())
                         delta = self.getOption('delay_td') - edit_delta
                         # Is the last edit more than 'delay' minutes ago?
                         if delta <= datetime.timedelta(0):
diff --git a/scripts/commonscat.py b/scripts/commonscat.py
index 38c3a76..9429010 100755
--- a/scripts/commonscat.py
+++ b/scripts/commonscat.py
@@ -352,8 +352,8 @@
         """Change the current commonscat template and target."""
         if oldcat == '3=S' or linktitle == '3=S':
             return  # TODO: handle additional param on de-wiki
-        if not linktitle and (page.title().lower() in oldcat.lower() or
-                              oldcat.lower() in page.title().lower()):
+        if not linktitle and (page.title().lower() in oldcat.lower()
+                              or oldcat.lower() in page.title().lower()):
             linktitle = oldcat
         if linktitle and newcat != page.title(with_ns=False):
             newtext = re.sub(r'(?i)\{\{%s\|?[^{}]*(?:\{\{.*\}\})?\}\}'
diff --git a/scripts/delete.py b/scripts/delete.py
index 61a8b85..9ef5803 100755
--- a/scripts/delete.py
+++ b/scripts/delete.py
@@ -198,8 +198,8 @@
         if self.getOption('undelete'):
             self.current_page.undelete(self.summary)
         else:
-            if (self.getOption('isorphan') is not False and
-                    not self.getOption('always')):
+            if (self.getOption('isorphan') is not False
+                    and not self.getOption('always')):
                 self.display_references()

             if self.getOption('orphansonly'):
diff --git a/scripts/fixing_redirects.py b/scripts/fixing_redirects.py
index 8c1f212..8a833ab 100755
--- a/scripts/fixing_redirects.py
+++ b/scripts/fixing_redirects.py
@@ -115,12 +115,12 @@
                 newlink = '[[{}]]'.format(new_page_title)
             # check if we can create a link with trailing characters instead of
             # a pipelink
-            elif (len(new_page_title) <= len(link_text) and
-                  firstcap(link_text[:len(new_page_title)]) ==
-                  firstcap(new_page_title) and
-                  re.sub(re.compile(linktrail), '',
-                         link_text[len(new_page_title):]) == '' and
-                  not section):
+            elif (len(new_page_title) <= len(link_text)
+                  and (firstcap(link_text[:len(new_page_title)])
+                       == firstcap(new_page_title))
+                  and re.sub(re.compile(linktrail), '',
+                             link_text[len(new_page_title):]) == ''
+                  and not section):
                 newlink = '[[{}]]{}'.format(link_text[:len(new_page_title)],
                                             link_text[len(new_page_title):])
             else:
diff --git a/scripts/image.py b/scripts/image.py
index ca3a664..662b3fc 100755
--- a/scripts/image.py
+++ b/scripts/image.py
@@ -90,8 +90,8 @@

         namespace = self.site.namespaces[6]
         if namespace.case == 'first-letter':
-            case = re.escape(self.old_image[0].upper() +
-                             self.old_image[0].lower())
+            case = re.escape(self.old_image[0].upper()
+                             + self.old_image[0].lower())
             escaped = '[' + case + ']' + re.escape(self.old_image[1:])
         else:
             escaped = re.escape(self.old_image)
diff --git a/scripts/imagecopy.py b/scripts/imagecopy.py
index 7fcbb52..a46cf0c 100644
--- a/scripts/imagecopy.py
+++ b/scripts/imagecopy.py
@@ -276,10 +276,10 @@

         # I want every picture to be tagged with the bottemplate so i can check
         # my contributions later.
-        CH = ('\n\n{{BotMoveToCommons|' + self.imagePage.site.lang +
-              '.' + self.imagePage.site.family.name +
-              '|year={{subst:CURRENTYEAR}}|month={{subst:CURRENTMONTHNAME}}'
-              '|day={{subst:CURRENTDAY}}}}' + CH)
+        CH = ('\n\n{{BotMoveToCommons|%s.%s|year={{subst:CURRENTYEAR}}'
+              '|month={{subst:CURRENTMONTHNAME}}|day={{subst:CURRENTDAY}}}}'
+              % (self.imagePage.site.lang, self.imagePage.site.family.name)
+              + CH)

         if self.category:
             CH = CH.replace(
@@ -583,8 +583,8 @@
                 imageTransfer(imagepage, newname, category,
                               delete_after_done).start()

-    pywikibot.output('Still ' + str(threading.activeCount()) +
-                     ' active threads, lets wait')
+    pywikibot.output('Still ' + str(threading.activeCount())
+                     + ' active threads, lets wait')
     for openthread in threading.enumerate():
         if openthread != threading.currentThread():
             openthread.join()
diff --git a/scripts/imagecopy_self.py b/scripts/imagecopy_self.py
index 3718367..8ba89d7 100644
--- a/scripts/imagecopy_self.py
+++ b/scripts/imagecopy_self.py
@@ -484,8 +484,8 @@
             date = self.getUploadDate(imagepage)

         # Author
-        if not (contents['author'] == '' or
-                contents['author'] == self.getAuthor(imagepage)):
+        if not (contents['author'] == ''
+                or contents['author'] == self.getAuthor(imagepage)):
             author = self.convertLinks(contents['author'], imagepage.site())
         else:
             author = self.getAuthorText(imagepage)
@@ -538,9 +538,10 @@
         """
         uploadtime = imagepage.getFileVersionHistory()[-1][0]
         uploadDatetime = datetime.strptime(uploadtime, '%Y-%m-%dT%H:%M:%SZ')
-        return ('{{Date|' + str(uploadDatetime.year) + '|' +
-                str(uploadDatetime.month) + '|' + str(uploadDatetime.day) +
-                '}} (original upload date)')
+        return ('{{Date|%s|%s|%s}} (original upload date)'
+                % (str(uploadDatetime.year),
+                   str(uploadDatetime.month),
+                   str(uploadDatetime.day)))

     def getSource(self, imagepage, source=''):
         """Get text to put in the source field of new information template."""
@@ -550,8 +551,8 @@
         if source == '':
             source = '{{Own}}'

-        return (source.strip() +
-                '<BR />Transferred from [http://%(lang)s.%(family)s.org '
+        return (source.strip()
+                + '<BR />Transferred from [http://%(lang)s.%(family)s.org '
                 '%(lang)s.%(family)s]') % {'lang': lang, 'family': family}

     def getAuthorText(self, imagepage):
@@ -991,8 +992,9 @@
             if imagepage.site.lang in moveToCommonsTemplate:
                 for moveTemplate in moveToCommonsTemplate[
                         imagepage.site.lang]:
-                    imtxt = re.sub(r'(?i){{' + moveTemplate +
-                                   r'[^}]*}}', r'', imtxt)
+                    imtxt = re.sub(r'(?i){{' + moveTemplate + r'[^}]*}}',
+                                   r'',
+                                   imtxt)

             # add {{NowCommons}}
             if imagepage.site.lang in nowCommonsTemplate:
diff --git a/scripts/interwikidata.py b/scripts/interwikidata.py
index 9944230..6a4c71a 100644
--- a/scripts/interwikidata.py
+++ b/scripts/interwikidata.py
@@ -72,8 +72,8 @@

     def treat_page(self):
         """Check page."""
-        if (self.current_page.namespace() not in namespaces and
-                not self.getOption('ignore_ns')):
+        if (self.current_page.namespace() not in namespaces
+                and not self.getOption('ignore_ns')):
             output('{page} is not in allowed namespaces, skipping'
                    .format(page=self.current_page.title(
                        as_link=True)))
diff --git a/scripts/lonelypages.py b/scripts/lonelypages.py
index d2adce2..8177149 100755
--- a/scripts/lonelypages.py
+++ b/scripts/lonelypages.py
@@ -82,9 +82,10 @@
                 pywikibot.warning('Orphan template alias "{0}" does not exist '
                                   'on "{1}"'.format(name, site))
         self.regex = re.compile(
-            r'\{\{(?:' + ':|'.join(template_ns) + '|)(' +
-            '|'.join(re.escape(name) for name in self._names) +
-            r')[\|\}]', re.I)
+            r'\{\{(?:'
+            + ':|'.join(template_ns) + '|)('
+            + '|'.join(re.escape(name) for name in self._names)
+            + r')[\|\}]', re.I)


 # The orphan template names in the different languages.
diff --git a/scripts/maintenance/download_dump.py b/scripts/maintenance/download_dump.py
index b2291b9..8347a7a 100644
--- a/scripts/maintenance/download_dump.py
+++ b/scripts/maintenance/download_dump.py
@@ -109,8 +109,8 @@
         for non_atomic in range(2):
             try:
                 if toolforge_dump_filepath:
-                    pywikibot.output('Symlinking file from ' +
-                                     toolforge_dump_filepath)
+                    pywikibot.output('Symlinking file from '
+                                     + toolforge_dump_filepath)
                     if non_atomic:
                         if os.path.exists(file_final_storepath):
                             remove(file_final_storepath)
@@ -144,16 +144,16 @@
                                     display = map(convert_from_bytes,
                                                   (downloaded, total))
                                     prior_display = display_string
-                                    display_string = ('\r|{0}{1}|' +
-                                                      ' ' * 5 +
-                                                      '{2}/{3}').format(
+                                    display_string = ('\r|{0}{1}|'
+                                                      '{2}{3}/{4}').format(
                                         '=' * done,
                                         '-' * (parts - done),
-                                        *display)
+                                        ' ' * 5,
+                                        *display)
                                     # Add whitespace to cover up prior bar
                                     display_string += ' ' * (
-                                        len(prior_display.rstrip()) -
-                                        len(display_string.rstrip()))
+                                        len(prior_display.rstrip())
+                                        - len(display_string.rstrip()))

                                     pywikibot.output(display_string,
                                                      newline=False)
@@ -187,7 +187,7 @@
                 # If the atomic download fails, try without a temporary file
                 # If the non-atomic download also fails, exit the script
                 if not non_atomic:
-                    pywikibot.output('Cannot make temporary file, ' +
+                    pywikibot.output('Cannot make temporary file, '
                                      'falling back to non-atomic download')
                     file_current_storepath = file_final_storepath
                 else:
diff --git a/scripts/makecat.py b/scripts/makecat.py
index 935923e..f848f4d 100755
--- a/scripts/makecat.py
+++ b/scripts/makecat.py
@@ -264,8 +264,8 @@
                                                     workingcatname))
     filename = pywikibot.config.datafilepath(
         'category',
-        workingcatname.encode('ascii', 'xmlcharrefreplace').decode('ascii') +
-        '_exclude.txt')
+        workingcatname.encode('ascii', 'xmlcharrefreplace').decode('ascii')
+        + '_exclude.txt')
     try:
         with codecs.open(filename, 'r', encoding=mysite.encoding()) as f:
             for line in f.readlines():
diff --git a/scripts/newitem.py b/scripts/newitem.py
index 0d276af..c3c3f6b 100755
--- a/scripts/newitem.py
+++ b/scripts/newitem.py
@@ -138,9 +138,7 @@

     options = {}
     for arg in local_args:
-        if (
-                arg.startswith('-pageage:') or
-                arg.startswith('-lastedit:')):
+        if arg.startswith(('-pageage:', '-lastedit:')):
             key, val = arg.split(':', 1)
             options[key[1:]] = int(val)
         elif gen.handleArg(arg):
diff --git a/scripts/patrol.py b/scripts/patrol.py
index 85aa2b7..11af97d 100755
--- a/scripts/patrol.py
+++ b/scripts/patrol.py
@@ -318,8 +318,8 @@
                 pywikibot.output('User {0} has created or modified page {1}'
                                  .format(username, title))

-            if self.getOption('autopatroluserns') and (page['ns'] == 2 or
-                                                       page['ns'] == 3):
+            if (self.getOption('autopatroluserns')
+                    and page['ns'] in (2, 3)):
                 # simple rule to whitelist any user editing their own userspace
                 if title.partition(':')[2].split('/')[0].startswith(username):
                     verbose_output('{0} is whitelisted to modify {1}'
diff --git a/scripts/redirect.py b/scripts/redirect.py
index f387739..32641a7 100755
--- a/scripts/redirect.py
+++ b/scripts/redirect.py
@@ -214,8 +214,8 @@
             if self.api_number:
                 gen.set_maximum_items(self.api_number)
             for p in gen:
-                done = (self.api_until and
-                        p.title(with_ns=False) >= self.api_until)
+                done = (self.api_until
+                        and p.title(with_ns=False) >= self.api_until)
                 if done:
                     return
                 yield p
@@ -360,8 +360,8 @@
         # this will run forever, until user interrupts it
         if self.offset <= 0:
             self.offset = 1
-        start = (datetime.datetime.utcnow() -
-                 datetime.timedelta(0, self.offset * 3600))
+        start = (datetime.datetime.utcnow()
+                 - datetime.timedelta(0, self.offset * 3600))
         # self.offset hours ago
         offset_time = start.strftime('%Y%m%d%H%M%S')
         pywikibot.output('Retrieving {0} moved pages...'
diff --git a/scripts/reflinks.py b/scripts/reflinks.py
index 52d33a5..aa842eb 100755
--- a/scripts/reflinks.py
+++ b/scripts/reflinks.py
@@ -177,9 +177,9 @@
 # Regex that match bare references
 linksInRef = re.compile(
     # bracketed URLs
-    r'(?i)<ref(?P<name>[^>]*)>\s*\[?(?P<url>(?:http|https)://(?:' +
+    r'(?i)<ref(?P<name>[^>]*)>\s*\[?(?P<url>(?:http|https)://(?:'
     # unbracketed with()
-    r'^\[\]\s<>"]+\([^\[\]\s<>"]+[^\[\]\s\.:;\\,<>\?"]+|' +
+    r'^\[\]\s<>"]+\([^\[\]\s<>"]+[^\[\]\s\.:;\\,<>\?"]+|'
     # unbracketed without ()
     r'[^\[\]\s<>"]+[^\[\]\s\)\.:;\\,<>\?"]+|[^\[\]\s<>"]+))'
     r'[!?,\s]*\]?\s*</ref>')
diff --git a/scripts/replace.py b/scripts/replace.py
index 3fb4157..9275494 100755
--- a/scripts/replace.py
+++ b/scripts/replace.py
@@ -631,8 +631,8 @@
         for replacement in self.replacements:
             if self.sleep is not None:
                 time.sleep(self.sleep)
-            if (replacement.container and
-                    replacement.container.name in skipped_containers):
+            if (replacement.container
+                    and replacement.container.name in skipped_containers):
                 continue
             elif page is not None and self.isTitleExcepted(
                     page.title(), replacement.exceptions):
@@ -1104,8 +1104,8 @@
                 pywikibot.warning('The old string "{0}" contains formatting '
                                   'characters like U+200E'.format(
                                       chars.replace_invisible(replacement[0])))
-            if (not callable(replacement[1]) and
-                    chars.contains_invisible(replacement[1])):
+            if (not callable(replacement[1])
+                    and chars.contains_invisible(replacement[1])):
                 pywikibot.warning('The new string "{0}" contains formatting '
                                   'characters like U+200E'.format(
                                       chars.replace_invisible(replacement[1])))
@@ -1132,8 +1132,8 @@
         else:
             missing_fixes_summaries += missing_fix_summaries

-    if ((not edit_summary or edit_summary is True) and
-            (missing_fixes_summaries or single_summary)):
+    if ((not edit_summary or edit_summary is True)
+            and (missing_fixes_summaries or single_summary)):
         if single_summary:
             pywikibot.output('The summary message for the command line '
                              'replacements will be something like: '
diff --git a/scripts/script_wui.py b/scripts/script_wui.py
index efeac37..a1bfbb6 100755
--- a/scripts/script_wui.py
+++ b/scripts/script_wui.py
@@ -288,8 +288,8 @@
         pywikibot.output(
             'environment: garbage; %s / memory; %s / members; %s' % (
                 gc.collect(),
-                resource.getrusage(resource.RUSAGE_SELF).ru_maxrss *
-                resource.getpagesize(),
+                resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
+                * resource.getpagesize(),
                 len(dir())))
     else:
         pywikibot.output(
diff --git a/scripts/unusedfiles.py b/scripts/unusedfiles.py
index 4465b40..6028a88 100755
--- a/scripts/unusedfiles.py
+++ b/scripts/unusedfiles.py
@@ -55,8 +55,8 @@
         self.template_user = i18n.translate(self.site,
                                             template_to_the_user)
         self.summary = i18n.twtranslate(self.site, 'unusedfiles-comment')
-        if not (self.template_image and
-                (self.template_user or self.getOption('nouserwarning'))):
+        if not (self.template_image
+                and (self.template_user or self.getOption('nouserwarning'))):
             raise pywikibot.Error('This script is not localized for {0} site.'
                                   .format(self.site))

@@ -68,8 +68,8 @@
             return
         # Use fileUrl() and fileIsShared() to confirm it is local media
         # rather than a local page with the same name as shared media.
-        if (image.fileUrl() and not image.fileIsShared() and
-                'http://' not in image.text):
+        if (image.fileUrl() and not image.fileIsShared()
+                and 'http://' not in image.text):
             if self.template_image in image.text:
                 pywikibot.output('{0} done already'
                                  .format(image.title(as_link=True)))
diff --git a/scripts/upload.py b/scripts/upload.py
index 16cb277..fed6924 100755
--- a/scripts/upload.py
+++ b/scripts/upload.py
@@ -185,8 +185,8 @@
             pywikibot.output(error)
         url = pywikibot.input('URL, file or directory where files are now:')

-    if always and ((aborts is not True and ignorewarn is not True) or
-                   not description or url is None):
+    if always and (aborts is not True and ignorewarn is not True
+                   or not description or url is None):
         additional = ''
         missing = []
         if url is None:
diff --git a/scripts/version.py b/scripts/version.py
index 840756b..6473371 100755
--- a/scripts/version.py
+++ b/scripts/version.py
@@ -46,9 +46,9 @@
     pywikibot.output('requests version: ' + requests.__version__)

     has_wikimedia_cert = False
-    if (not hasattr(requests, 'certs') or
-            not hasattr(requests.certs, 'where') or
-            not callable(requests.certs.where)):
+    if (not hasattr(requests, 'certs')
+            or not hasattr(requests.certs, 'where')
+            or not callable(requests.certs.where)):
         pywikibot.output('  cacerts: not defined')
     elif not os.path.isfile(requests.certs.where()):
         pywikibot.output('  cacerts: {} (missing)'.format(
diff --git a/scripts/weblinkchecker.py b/scripts/weblinkchecker.py
index 18fd8d5..6aee186 100755
--- a/scripts/weblinkchecker.py
+++ b/scripts/weblinkchecker.py
@@ -612,8 +612,8 @@
             pywikibot.output('Exception while processing URL {0} in page {1}'
                              .format(self.url, self.page.title()))
             raise
-        if (r.status == requests.codes.ok and
-                str(r.status) not in self.HTTPignore):
+        if (r.status == requests.codes.ok
+                and str(r.status) not in self.HTTPignore):
             ok = True
         else:
             message = '{0}'.format(r.status)
diff --git a/scripts/wikisourcetext.py b/scripts/wikisourcetext.py
index 83734d4..7d60767 100644
--- a/scripts/wikisourcetext.py
+++ b/scripts/wikisourcetext.py
@@ -117,8 +117,8 @@
         # TODO: create i18 files
         # Get edit summary message if it's empty.
         if not self.getOption('summary'):
-            self.options['summary'] = i18n.twtranslate(
-                self.site, 'djvutext-creating')
+            self.options['summary'] = i18n.twtranslate(self.site,
+                                                       'djvutext-creating')

         if self.getOption('ocr'):
             self._num_threads = self.getOption('threads')

--
To view, visit https://gerrit.wikimedia.org/r/476106
To unsubscribe, or for help writing mail filters, visit https://gerrit.wikimedia.org/r/settings

Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-MessageType: merged
Gerrit-Change-Id: I56fa521246851d0607620e6efeff637c9ee186ba
Gerrit-Change-Number: 476106
Gerrit-PatchSet: 7
Gerrit-Owner: Takidelfin <[email protected]>
Gerrit-Reviewer: D3r1ck01 <[email protected]>
Gerrit-Reviewer: Dvorapa <[email protected]>
Gerrit-Reviewer: Framawiki <[email protected]>
Gerrit-Reviewer: John Vandenberg <[email protected]>
Gerrit-Reviewer: Takidelfin <[email protected]>
Gerrit-Reviewer: Xqt <[email protected]>
Gerrit-Reviewer: jenkins-bot (75)
Gerrit-CC: Urbanecm <[email protected]>
_______________________________________________
Pywikibot-commits mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/pywikibot-commits
