Hello community,

here is the log from the commit of package python-tldextract for openSUSE:Leap:15.2 checked in at 2020-03-13 11:00:00
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2/python-tldextract (Old)
 and      /work/SRC/openSUSE:Leap:15.2/.python-tldextract.new.3160 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-tldextract"

Fri Mar 13 11:00:00 2020 rev:18 rq:784531 version:2.2.2

Changes:
--------
--- /work/SRC/openSUSE:Leap:15.2/python-tldextract/python-tldextract.changes    2020-03-02 13:24:04.498483971 +0100
+++ /work/SRC/openSUSE:Leap:15.2/.python-tldextract.new.3160/python-tldextract.changes  2020-03-13 11:01:56.404629964 +0100
@@ -1,0 +2,17 @@
+Wed Mar 11 11:24:28 UTC 2020 - [email protected]
+
+- version update to 2.2.2
+  * Bugfixes
+    * Catch file not found
+    * Use pkgutil instead of pkg_resources ([#163](https://github.com/john-kurkowski/tldextract/pull/163))
+    * Performance: avoid recomputes, a regex, and a partition
+  * Misc.
+    * Update LICENSE from GitHub template
+    * Fix warning about literal comparison
+    * Modernize testing ([#177](https://github.com/john-kurkowski/tldextract/issues/177))
+        * Use the latest pylint that works in Python 2
+        * Appease pylint with the new rules
+        * Support Python 3.8-dev
+        * Drop support for EOL Python 3.4
+
+-------------------------------------------------------------------

Old:
----
  tldextract-2.2.1.tar.gz

New:
----
  tldextract-2.2.2.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-tldextract.spec ++++++
--- /var/tmp/diff_new_pack.atPRPj/_old  2020-03-13 11:01:56.848630281 +0100
+++ /var/tmp/diff_new_pack.atPRPj/_new  2020-03-13 11:01:56.848630281 +0100
@@ -1,7 +1,7 @@
 #
 # spec file for package python-tldextract
 #
-# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2020 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -18,16 +18,16 @@
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-tldextract
-Version:        2.2.1
+Version:        2.2.2
 Release:        0
 Summary:        Python module to separate the TLD of a URL
 License:        BSD-3-Clause
 Group:          Development/Languages/Python
-Url:            https://github.com/john-kurkowski/tldextract
+URL:            https://github.com/john-kurkowski/tldextract
 Source:         https://files.pythonhosted.org/packages/source/t/tldextract/tldextract-%{version}.tar.gz
 Source1:        %{name}-rpmlintrc
 # No internet connection on OBS build hosts; skip suffix list snapshot diff
-Patch:          tldextract-tests-offline.patch
+Patch0:         tldextract-tests-offline.patch
 ### BEGIN test requirements
 BuildRequires:  %{python_module pytest-mock}
 BuildRequires:  %{python_module pytest}
@@ -41,9 +41,9 @@
 Requires:       python-idna >= 2.1.0
 Requires:       python-requests >= 2.1.0
 Requires:       python-requests-file >= 1.4
+Requires:       python-setuptools
 Obsoletes:      python-tldextract <= 2.0.1
 BuildArch:      noarch
-
 %python_subpackages
 
 %description
@@ -58,6 +58,7 @@
 %autopatch -p1
 
 %build
+sed -i 's:--pylint::' pytest.ini
 %python_build
 
 %install
@@ -66,7 +67,7 @@
 
 %check
 %python_exec setup.py develop --user
-%python_exec -m pytest -v tests
+%pytest tests
 
 %files %{python_files}
 %license LICENSE

++++++ tldextract-2.2.1.tar.gz -> tldextract-2.2.2.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/.gitignore 
new/tldextract-2.2.2/.gitignore
--- old/tldextract-2.2.1/.gitignore     1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/.gitignore     2015-11-18 17:15:06.000000000 +0100
@@ -0,0 +1,9 @@
+*.pyc
+*.tld_set
+.cache
+build
+dist
+tldextract_app/tldextract
+tldextract_app/web
+tldextract.egg-info
+.tox
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/.travis.yml 
new/tldextract-2.2.2/.travis.yml
--- old/tldextract-2.2.1/.travis.yml    1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/.travis.yml    2019-10-16 02:07:21.000000000 +0200
@@ -0,0 +1,22 @@
+sudo: false
+language: python
+matrix:
+  include:
+    - python: 2.7
+      env: TOXENV=py27
+    - python: 3.5
+      env: TOXENV=py35
+    - python: 3.6
+      env: TOXENV=py36
+    - python: 3.7
+      env: TOXENV=py37
+    - python: 3.8-dev
+      env: TOXENV=py38
+    - python: pypy
+      env: TOXENV=pypy
+    - python: pypy3
+      env: TOXENV=pypy3
+    - env: TOXENV=codestyle
+python: 3.7
+install: pip install tox
+script: tox
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/CHANGELOG.md 
new/tldextract-2.2.2/CHANGELOG.md
--- old/tldextract-2.2.1/CHANGELOG.md   1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/CHANGELOG.md   2019-10-16 02:07:39.000000000 +0200
@@ -0,0 +1,203 @@
+# tldextract Changelog
+
+After upgrading, update your cache file by deleting it or via `tldextract
+--update`.
+
+## 2.2.2 (2019-10-15)
+
+* Bugfixes
+    * Catch file not found
+    * Use pkgutil instead of pkg_resources ([#163](https://github.com/john-kurkowski/tldextract/pull/163))
+    * Performance: avoid recomputes, a regex, and a partition
+* Misc.
+    * Update LICENSE from GitHub template
+    * Fix warning about literal comparison
+    * Modernize testing ([#177](https://github.com/john-kurkowski/tldextract/issues/177))
+        * Use the latest pylint that works in Python 2
+        * Appease pylint with the new rules
+        * Support Python 3.8-dev
+        * Drop support for EOL Python 3.4
+
+## 2.2.1 (2019-03-05)
+
+* Bugfixes
+    * Ignore case on punycode prefix check ([#133](https://github.com/john-kurkowski/tldextract/issues/133))
+    * Drop support for EOL Python 2.6 ([#152](https://github.com/john-kurkowski/tldextract/issues/152))
+    * Improve sundry doc and README bits
+
+## 2.2.0 (2017-10-26)
+
+* Features
+    * Add `cache_fetch_timeout` kwarg and `TLDEXTRACT_CACHE_TIMEOUT` env var ([#139](https://github.com/john-kurkowski/tldextract/issues/139))
+* Bugfixes
+    * Work around `pkg_resources` missing, again ([#137](https://github.com/john-kurkowski/tldextract/issues/137))
+    * Always close sessions ([#140](https://github.com/john-kurkowski/tldextract/issues/140))
+
+## 2.1.0 (2017-05-24)
+
+* Features
+    * Add `fqdn` convenience property ([#129](https://github.com/john-kurkowski/tldextract/issues/129))
+    * Add `ipv4` convenience property ([#126](https://github.com/john-kurkowski/tldextract/issues/126))
+
+## 2.0.3 (2017-05-20)
+
+* Bugfixes
+    * Switch to explicit Python version check ([#124](https://github.com/john-kurkowski/tldextract/issues/124))
+* Misc.
+    * Document public vs. private domains
+    * Document support for Python 3.6
+
+## 2.0.2 (2016-10-16)
+
+* Misc.
+    * Release as a universal wheel ([#110](https://github.com/john-kurkowski/tldextract/issues/110))
+    * Consolidate test suite running with tox ([#104](https://github.com/john-kurkowski/tldextract/issues/104))
+
+## 2.0.1 (2016-04-25)
+
+* Bugfixes
+    * Relax required `requests` version: >= 2.1 ([#98](https://github.com/john-kurkowski/tldextract/issues/98))
+* Misc.
+    * Include tests in release source tarball ([#97](https://github.com/john-kurkowski/tldextract/issues/97))
+
+## 2.0.0 (2016-04-21)
+
+No changes since 2.0rc1.
+
+## 2.0rc1 (2016-04-04)
+
+This release focuses on shedding confusing code branches & deprecated cruft.
+
+* Breaking Changes
+    * Renamed/changed the type of `TLDExtract` constructor param
+      `suffix_list_url`
+        * It used to take a `str` or iterable. Its replacement,
+          `suffix_list_urls` only takes an iterable. This better communicates
+          that it tries a _sequence_ of URLs, in order. To only try 1 URL, pass
+          an iterable with exactly 1 URL `str`.
+    * Serialize the local cache of the remote PSL as JSON (no more `pickle`) - [#81](https://github.com/john-kurkowski/tldextract/issues/81)
+        * This should be a transparent upgrade for most users.
+        * However, if you're configured to _only_ read from your local cache
+          file, no other sources or fallbacks, the new version will be unable
+          to read the old cache format, and an error will be raised.
+    * Remove deprecated code
+        * `TLDExtract`'s `fetch` param. To disable live HTTP requests for the
+          latest PSL, instead pass `suffix_list_urls=None`.
+        * `ExtractResult.tld` property. Use `ExtractResult.suffix` instead.
+    * Moved code
+        * Split `tldextract.tldextract` into a few files.
+            * The official public interface of this package comes via `import
+              tldextract`. But if you were relying on direct import from
+              `tldextract.tldextract` anyway, those imports may have moved.
+            * You can run the package `python -m tldextract` for the same
+              effect as the included `tldextract` console script. This used to
+              be `python -m tldextract.tldextract`.
+* Misc.
+    * Use `requests` instead of `urllib` - [#89](https://github.com/john-kurkowski/tldextract/issues/89)
+        * As a side-effect, this fixes [#93](https://github.com/john-kurkowski/tldextract/pull/93).
+
+## 1.7.5 (2016-02-07)
+
+* Bugfixes
+    * Support possible gzipped PSL response - [#88](https://github.com/john-kurkowski/tldextract/pull/88)
+
+## 1.7.4 (2015-12-26)
+
+* Bugfixes
+    * Fix potential for `UnicodeEncodeError` with info log - [#85](https://github.com/john-kurkowski/tldextract/pull/85)
+
+## 1.7.3 (2015-12-12)
+
+* Bugfixes
+    * Support IDNA2008 - [#82](https://github.com/john-kurkowski/tldextract/pull/82)
+* Misc.
+    * Ease running scripts during local development
+
+## 1.7.2 (2015-11-28)
+
+* Bugfixes
+    * Domain parsing fails with trailing spaces - [#75](https://github.com/john-kurkowski/tldextract/pull/75)
+    * Update to latest, direct PSL links - [#77](https://github.com/john-kurkowski/tldextract/pull/77)
+* Misc.
+    * Update bundled PSL snapshot
+    * Require requirements.txt for local development
+    * Enforce linting via the test suite - [#79](https://github.com/john-kurkowski/tldextract/pull/79)
+    * Switch to py.test runner - [#80](https://github.com/john-kurkowski/tldextract/pull/80)
+    * No longer distribute tests. No mention of `test_suite` in setup.py. CI is
+      handled centrally now, on this project's GitHub.
+
+## 1.7.1 (2015-08-22)
+
+Fix publishing mistake with 1.7.0.
+
+## 1.7.0 (2015-08-22)
+
+* Features
+    * Can include PSL's private domains on CLI with `--private_domains` boolean flag
+* Bugfixes
+    * Improved support for multiple Punycode (or Punycode-looking) parts of a URL
+        * Mixed in/valid
+        * Mixed encodings
+    * Fix `ExtractResult._asdict` on Python 3.4. This should also save space,
+      as `__dict__` is not created for each `ExtractResult` instance.
+
+## 1.6 (2015-03-22)
+
+* Features
+    * Pass `extra_suffixes` directly to constructor
+* Bugfixes
+    * Punycode URLs were returned decoded, rather than left alone
+    * Things that look like Punycode to tldextract, but aren't, shouldn't raise
+    * Print unified diff to debug log, rather than inconsistent stderr
+
+## 1.5.1 (2014-10-13)
+
+* Bugfixes
+    * Missing setuptools dependency
+    * Avoid u'' literal for Python 3.0 - 3.2 compatibility. Tests will still fail though.
+
+## 1.5 (2014-09-08)
+
+* Bugfixes
+    * Exclude PSL's private domains by default - [#19](https://github.com/john-kurkowski/tldextract/pull/19)
+        * This is a **BREAKING** bugfix if you relied on the PSL's private
+          domains
+        * Revert to old behavior by setting `include_psl_private_domains=True`
+    * `UnicodeError` for inputs that looked like an IP
+
+## 1.4 (2014-06-01)
+
+* Features
+    * Support punycode inputs
+* Bugfixes
+    * Fix minor Python 3 unicode errors
+
+## 1.3.1 (2013-12-16)
+
+* Bugfixes
+    * Match PSL's GitHub mirror rename, from mozilla-central to gecko-dev
+    * Try Mozilla's PSL SPOT first, then the mirror
+
+## 1.3 (2013-12-08)
+
+* Features
+    * Specify your own PSL url/file with `suffix_list_url` kwarg
+    * `fallback_to_snapshot` kwarg - defaults to True
+* Deprecations
+    * `fetch` kwarg
+
+## 1.2 (2013-07-07)
+
+* Features
+    * Better CLI
+    * Cache env var support
+    * Python 3.3 support
+    * New aliases `suffix` and `registered_domain`
+* Bugfixes
+    * Fix dns root label
+
+## 1.1 (2012-03-22)
+
+* Bugfixes
+    * Reliable logger name
+    * Forgotten `import sys`
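
The changelog above repeatedly references the `ExtractResult` fields and the `registered_domain`, `suffix`, and `fqdn` conveniences that make up the package's public API. A minimal usage sketch of that 2.x API, with an illustrative hostname:

```python
import tldextract

# Module-level helper; data comes from the cache file, a fetched PSL, or the
# bundled snapshot, depending on configuration.
ext = tldextract.extract('http://forums.news.cnn.com/')

# ExtractResult is a namedtuple of (subdomain, domain, suffix).
print(ext.subdomain, ext.domain, ext.suffix)  # forums.news cnn com

# Aliases/properties named in the 1.2, 2.0rc1, and 2.1.0 entries above.
print(ext.registered_domain)  # cnn.com
print(ext.fqdn)               # forums.news.cnn.com
```
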
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/LICENSE new/tldextract-2.2.2/LICENSE
--- old/tldextract-2.2.1/LICENSE        2017-04-30 01:27:53.000000000 +0200
+++ new/tldextract-2.2.2/LICENSE        2019-10-16 02:07:21.000000000 +0200
@@ -1,24 +1,29 @@
-Copyright (c) 2013-2017, John Kurkowski
+BSD 3-Clause License
+
+Copyright (c) 2019, John Kurkowski
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:
-    * Redistributions of source code must retain the above copyright
-      notice, this list of conditions and the following disclaimer.
-    * Redistributions in binary form must reproduce the above copyright
-      notice, this list of conditions and the following disclaimer in the
-      documentation and/or other materials provided with the distribution.
-    * The names of its contributors may not be used to endorse or promote
-      products derived from this software without specific prior written
-      permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL JOHN KURKOWSKI BE LIABLE FOR ANY
-DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+1. Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/PKG-INFO 
new/tldextract-2.2.2/PKG-INFO
--- old/tldextract-2.2.1/PKG-INFO       2019-03-05 19:55:34.000000000 +0100
+++ new/tldextract-2.2.2/PKG-INFO       2019-10-16 02:10:39.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: tldextract
-Version: 2.2.1
+Version: 2.2.2
 Summary: Accurately separate the TLD from the registered domain and subdomains of a URL, using the Public Suffix List. By default, this includes the public ICANN TLDs and their exceptions. You can optionally support the Public Suffix List's private domains as well.
 Home-page: https://github.com/john-kurkowski/tldextract
 Author: John Kurkowski
@@ -40,8 +40,9 @@
 Classifier: Programming Language :: Python :: 2
 Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
 Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
 Description-Content-Type: text/markdown
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/README.md 
new/tldextract-2.2.2/README.md
--- old/tldextract-2.2.1/README.md      2019-03-05 17:47:17.000000000 +0100
+++ new/tldextract-2.2.2/README.md      2019-10-16 02:07:21.000000000 +0200
@@ -189,7 +189,8 @@
     suffix_list_urls=["http://foo.bar.baz"],
     # Recommended: Specify your own cache file, to minimize ambiguities about where
     # tldextract is getting its data, or cached data, from.
-    cache_file='/path/to/your/cache/file')
+    cache_file='/path/to/your/cache/file',
+    fallback_to_snapshot=False)
 ```
 
 The above snippet will fetch from the URL *you* specified, upon first need to download the
@@ -200,7 +201,8 @@
 ```python
 extract = tldextract.TLDExtract(
     suffix_list_urls=["file://absolute/path/to/your/local/suffix/list/file"],
-    cache_file='/path/to/your/cache/file')
+    cache_file='/path/to/your/cache/file',
+    fallback_to_snapshot=False)
 ```
 
 Use an absolute path when specifying the `suffix_list_urls` keyword argument.
@@ -263,5 +265,5 @@
 
 ```zsh
 tox -l
-tox -e py35-requests-2.9.1
+tox -e py37
 ```
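
Both README hunks above add `fallback_to_snapshot=False`, so a misconfigured suffix list source fails loudly rather than silently using the bundled snapshot. A combined sketch of that configuration; the URL and paths are placeholders taken from the README, not real defaults:

```python
import tldextract

extract = tldextract.TLDExtract(
    # Placeholder source, as in the README: an absolute file:// path or an
    # http(s) URL you control.
    suffix_list_urls=["file://absolute/path/to/your/local/suffix/list/file"],
    # Placeholder cache path, so it is unambiguous where cached data lives.
    cache_file='/path/to/your/cache/file',
    # Do not silently fall back to the bundled snapshot if the source fails.
    fallback_to_snapshot=False,
)

print(extract('forums.bbc.co.uk'))  # result depends on the list you supply
```
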
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/conftest.py 
new/tldextract-2.2.2/conftest.py
--- old/tldextract-2.2.1/conftest.py    1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/conftest.py    2019-10-16 02:07:21.000000000 +0200
@@ -0,0 +1,7 @@
+'''py.test standard config file.'''
+
+# pylint: disable=invalid-name
+
+collect_ignore = ('setup.py',)
+
+# pylint: enable=invalid-name
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/pylintrc 
new/tldextract-2.2.2/pylintrc
--- old/tldextract-2.2.1/pylintrc       1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/pylintrc       2019-10-16 02:07:21.000000000 +0200
@@ -0,0 +1,407 @@
+[MASTER]
+
+# Specify a configuration file.
+#rcfile=
+
+# Python code to execute, usually for sys.path manipulation such as
+# pygtk.require().
+#init-hook=
+
+# Add files or directories to the blacklist. They should be base names, not
+# paths.
+ignore=CVS
+
+# Add files or directories matching the regex patterns to the blacklist. The
+# regex matches against base names, not paths.
+ignore-patterns=
+
+# Pickle collected data for later comparisons.
+persistent=yes
+
+# List of plugins (as comma separated values of python modules names) to load,
+# usually to register additional checkers.
+load-plugins=
+
+# Use multiple processes to speed up Pylint.
+jobs=1
+
+# Allow loading of arbitrary C extensions. Extensions are imported into the
+# active Python interpreter and may run arbitrary code.
+unsafe-load-any-extension=no
+
+# A comma-separated list of package or module names from where C extensions may
+# be loaded. Extensions are loading into the active Python interpreter and may
+# run arbitrary code
+extension-pkg-whitelist=
+
+# Allow optimization of some AST trees. This will activate a peephole AST
+# optimizer, which will apply various small optimizations. For instance, it can
+# be used to obtain the result of joining multiple strings with the addition
+# operator. Joining a lot of strings can lead to a maximum recursion error in
+# Pylint and this flag can prevent that. It has one side effect, the resulting
+# AST will be different than the one from reality. This option is deprecated
+# and it will be removed in Pylint 2.0.
+optimize-ast=no
+
+
+[MESSAGES CONTROL]
+
+# Only show warnings with the listed confidence levels. Leave empty to show
+# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED
+confidence=
+
+# Enable the message, report, category or checker with the given id(s). You can
+# either give multiple identifier separated by comma (,) or put this option
+# multiple time (only on the command line, not in the configuration file where
+# it should appear only once). See also the "--disable" option for examples.
+#enable=
+
+# Disable the message, report, category or checker with the given id(s). You
+# can either give multiple identifiers separated by comma (,) or put this
+# option multiple times (only on the command line, not in the configuration
+# file where it should appear only once).You can also use "--disable=all" to
+# disable everything first and then reenable specific checks. For example, if
+# you want to run only the similarities checker, you can use "--disable=all
+# --enable=similarities". If you want to run only the classes checker, but have
+# no Warning level messages displayed, use"--disable=all --enable=classes
+# --disable=W"
+disable=import-star-module-level,old-octal-literal,oct-method,print-statement,unpacking-in-except,parameter-unpacking,backtick,old-raise-syntax,old-ne-operator,long-suffix,dict-view-method,dict-iter-method,metaclass-assignment,next-method-called,raising-string,indexing-exception,raw_input-builtin,long-builtin,file-builtin,execfile-builtin,coerce-builtin,cmp-builtin,buffer-builtin,basestring-builtin,apply-builtin,filter-builtin-not-iterating,using-cmp-argument,useless-suppression,range-builtin-not-iterating,suppressed-message,no-absolute-import,old-division,cmp-method,reload-builtin,zip-builtin-not-iterating,intern-builtin,unichr-builtin,reduce-builtin,standarderror-builtin,unicode-builtin,xrange-builtin,coerce-method,delslice-method,getslice-method,setslice-method,input-builtin,round-builtin,hex-method,nonzero-method,map-builtin-not-iterating,useless-object-inheritance,import-outside-toplevel
+
+
+[REPORTS]
+
+# Set the output format. Available formats are text, parseable, colorized, msvs
+# (visual studio) and html. You can also give a reporter class, eg
+# mypackage.mymodule.MyReporterClass.
+output-format=text
+
+# Put messages in a separate file for each module / package specified on the
+# command line instead of printing them on stdout. Reports (if any) will be
+# written in a file name "pylint_global.[txt|html]". This option is deprecated
+# and it will be removed in Pylint 2.0.
+files-output=no
+
+# Tells whether to display a full report or only the messages
+reports=yes
+
+# Python expression which should return a note less than 10 (10 is the highest
+# note). You have access to the variables errors warning, statement which
+# respectively contain the number of errors / warnings messages and the total
+# number of statements analyzed. This is used by the global evaluation report
+# (RP0004).
+evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
+
+# Template used to display messages. This is a python new-style format string
+# used to format the message information. See doc for all details
+#msg-template=
+
+
+[BASIC]
+
+# Good variable names which should always be accepted, separated by a comma
+good-names=i,j,k,ex,Run,_
+
+# Bad variable names which should always be refused, separated by a comma
+bad-names=foo,bar,baz,toto,tutu,tata
+
+# Colon-delimited sets of names that determine each other's naming style when
+# the name regexes allow several styles.
+name-group=
+
+# Include a hint for the correct naming format with invalid-name
+include-naming-hint=no
+
+# List of decorators that produce properties, such as abc.abstractproperty. Add
+# to this list to register other decorators that produce valid properties.
+property-classes=abc.abstractproperty
+
+# Regular expression matching correct function names
+function-rgx=[a-z_][a-z0-9_]{2,50}$
+
+# Naming hint for function names
+function-name-hint=[a-z_][a-z0-9_]{2,50}$
+
+# Regular expression matching correct variable names
+variable-rgx=[a-z_][a-z0-9_]{2,30}$
+
+# Naming hint for variable names
+variable-name-hint=[a-z_][a-z0-9_]{2,30}$
+
+# Regular expression matching correct constant names
+const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$
+
+# Naming hint for constant names
+const-name-hint=(([A-Z_][A-Z0-9_]*)|(__.*__))$
+
+# Regular expression matching correct attribute names
+attr-rgx=[a-z_][a-z0-9_]{2,30}$
+
+# Naming hint for attribute names
+attr-name-hint=[a-z_][a-z0-9_]{2,30}$
+
+# Regular expression matching correct argument names
+argument-rgx=[a-z_][a-z0-9_]{2,30}$
+
+# Naming hint for argument names
+argument-name-hint=[a-z_][a-z0-9_]{2,30}$
+
+# Regular expression matching correct class attribute names
+class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$
+
+# Naming hint for class attribute names
+class-attribute-name-hint=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$
+
+# Regular expression matching correct inline iteration names
+inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
+
+# Naming hint for inline iteration names
+inlinevar-name-hint=[A-Za-z_][A-Za-z0-9_]*$
+
+# Regular expression matching correct class names
+class-rgx=[A-Z_][a-zA-Z0-9]+$
+
+# Naming hint for class names
+class-name-hint=[A-Z_][a-zA-Z0-9]+$
+
+# Regular expression matching correct module names
+module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
+
+# Naming hint for module names
+module-name-hint=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
+
+# Regular expression matching correct method names
+method-rgx=[a-z_][a-z0-9_]{2,50}$
+
+# Naming hint for method names
+method-name-hint=[a-z_][a-z0-9_]{2,50}$
+
+# Regular expression which should only match function or class names that do
+# not require a docstring.
+no-docstring-rgx=(__.*__|test_.*|.*_test)
+
+# Minimum line length for functions/classes that require docstrings, shorter
+# ones are exempt.
+docstring-min-length=10
+
+
+[ELIF]
+
+# Maximum number of nested blocks for function / method body
+max-nested-blocks=5
+
+
+[FORMAT]
+
+# Maximum number of characters on a single line.
+max-line-length=100
+
+# Regexp for a line that is allowed to be longer than the limit.
+ignore-long-lines=^\s*(# )?<?https?://\S+>?$
+
+# Allow the body of an if to be on the same line as the test if there is no
+# else.
+single-line-if-stmt=no
+
+# List of optional constructs for which whitespace checking is disabled. `dict-
+# separator` is used to allow tabulation in dicts, etc.: {1  : 1,\n222: 2}.
+# `trailing-comma` allows a space between comma and closing bracket: (a, ).
+# `empty-line` allows space-only lines.
+no-space-check=trailing-comma,dict-separator
+
+# Maximum number of lines in a module
+max-module-lines=1000
+
+# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
+# tab).
+indent-string='    '
+
+# Number of spaces of indent required inside a hanging  or continued line.
+indent-after-paren=4
+
+# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
+expected-line-ending-format=
+
+
+[LOGGING]
+
+# Logging modules to check that the string format arguments are in logging
+# function parameter format
+logging-modules=logging
+
+
+[MISCELLANEOUS]
+
+# List of note tags to take in consideration, separated by a comma.
+notes=
+
+
+[SIMILARITIES]
+
+# Minimum lines number of a similarity.
+min-similarity-lines=4
+
+# Ignore comments when computing similarities.
+ignore-comments=yes
+
+# Ignore docstrings when computing similarities.
+ignore-docstrings=yes
+
+# Ignore imports when computing similarities.
+ignore-imports=no
+
+
+[SPELLING]
+
+# Spelling dictionary name. Available dictionaries: none. To make it working
+# install python-enchant package.
+spelling-dict=
+
+# List of comma separated words that should not be checked.
+spelling-ignore-words=
+
+# A path to a file that contains private dictionary; one word per line.
+spelling-private-dict-file=
+
+# Tells whether to store unknown words to indicated private dictionary in
+# --spelling-private-dict-file option instead of raising a message.
+spelling-store-unknown-words=no
+
+
+[TYPECHECK]
+
+# Tells whether missing members accessed in mixin class should be ignored. A
+# mixin class is detected if its name ends with "mixin" (case insensitive).
+ignore-mixin-members=yes
+
+# List of module names for which member attributes should not be checked
+# (useful for modules/projects where namespaces are manipulated during runtime
+# and thus existing member attributes cannot be deduced by static analysis. It
+# supports qualified module names, as well as Unix pattern matching.
+ignored-modules=
+
+# List of class names for which member attributes should not be checked (useful
+# for classes with dynamically set attributes). This supports the use of
+# qualified names.
+ignored-classes=optparse.Values,thread._local,_thread._local
+
+# List of members which are set dynamically and missed by pylint inference
+# system, and so shouldn't trigger E1101 when accessed. Python regular
+# expressions are accepted.
+generated-members=
+
+# List of decorators that produce context managers, such as
+# contextlib.contextmanager. Add to this list to register other decorators that
+# produce valid context managers.
+contextmanager-decorators=contextlib.contextmanager
+
+
+[VARIABLES]
+
+# Tells whether we should check for unused import in __init__ files.
+init-import=no
+
+# A regular expression matching the name of dummy variables (i.e. expectedly
+# not used).
+dummy-variables-rgx=(_+[a-zA-Z0-9]*?$)|dummy
+
+# List of additional names supposed to be defined in builtins. Remember that
+# you should avoid to define new builtins when possible.
+additional-builtins=
+
+# List of strings which can identify a callback function by name. A callback
+# name must start or end with one of those strings.
+callbacks=cb_,_cb
+
+# List of qualified module names which can have objects that can redefine
+# builtins.
+redefining-builtins-modules=six.moves,future.builtins
+
+
+[CLASSES]
+
+# List of method names used to declare (i.e. assign) instance attributes.
+defining-attr-methods=__init__,__new__,setUp
+
+# List of valid names for the first argument in a class method.
+valid-classmethod-first-arg=cls
+
+# List of valid names for the first argument in a metaclass class method.
+valid-metaclass-classmethod-first-arg=mcs
+
+# List of member names, which should be excluded from the protected access
+# warning.
+exclude-protected=_asdict,_fields,_replace,_source,_make
+
+
+[DESIGN]
+
+# Maximum number of arguments for function / method
+max-args=5
+
+# Argument names that match this expression will be ignored. Default to name
+# with leading underscore
+ignored-argument-names=_.*
+
+# Maximum number of locals for function / method body
+max-locals=15
+
+# Maximum number of return / yield for function / method body
+max-returns=6
+
+# Maximum number of branch for function / method body
+max-branches=12
+
+# Maximum number of statements in function / method body
+max-statements=50
+
+# Maximum number of parents for a class (see R0901).
+max-parents=7
+
+# Maximum number of attributes for a class (see R0902).
+max-attributes=7
+
+# Minimum number of public methods for a class (see R0903).
+min-public-methods=1
+
+# Maximum number of public methods for a class (see R0904).
+max-public-methods=20
+
+# Maximum number of boolean expressions in a if statement
+max-bool-expr=5
+
+
+[IMPORTS]
+
+# Deprecated modules which should not be used, separated by a comma
+deprecated-modules=regsub,TERMIOS,Bastion,rexec
+
+# Create a graph of every (i.e. internal and external) dependencies in the
+# given file (report RP0402 must not be disabled)
+import-graph=
+
+# Create a graph of external dependencies in the given file (report RP0402 must
+# not be disabled)
+ext-import-graph=
+
+# Create a graph of internal dependencies in the given file (report RP0402 must
+# not be disabled)
+int-import-graph=
+
+# Force import order to recognize a module as part of the standard
+# compatibility libraries.
+known-standard-library=
+
+# Force import order to recognize a module as part of a third party library.
+known-third-party=enchant
+
+# Analyse import fallback blocks. This can be used to support both Python 2 and
+# 3 compatible code, which means that the block might have code that exists
+# only in one or another interpreter, leading to false positives when analysed.
+analyse-fallback-blocks=no
+
+
+[EXCEPTIONS]
+
+# Exceptions that will emit a warning when being caught. Defaults to
+# "Exception"
+overgeneral-exceptions=Exception
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/pytest.ini 
new/tldextract-2.2.2/pytest.ini
--- old/tldextract-2.2.1/pytest.ini     1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/pytest.ini     2015-11-18 17:15:06.000000000 +0100
@@ -0,0 +1,3 @@
+[pytest]
+addopts = --doctest-modules --pylint
+norecursedirs = tldextract_app
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/setup.py 
new/tldextract-2.2.2/setup.py
--- old/tldextract-2.2.1/setup.py       2019-03-05 19:53:32.000000000 +0100
+++ new/tldextract-2.2.2/setup.py       2019-10-16 02:07:39.000000000 +0200
@@ -37,7 +37,7 @@
 
 setup(
     name="tldextract",
-    version="2.2.1",
+    version="2.2.2",
     author="John Kurkowski",
     author_email="[email protected]",
     description=("Accurately separate the TLD from the registered domain and "
@@ -60,9 +60,10 @@
         "Programming Language :: Python :: 2",
         "Programming Language :: Python :: 2.7",
         "Programming Language :: Python :: 3",
-        "Programming Language :: Python :: 3.4",
         "Programming Language :: Python :: 3.5",
         "Programming Language :: Python :: 3.6",
+        "Programming Language :: Python :: 3.7",
+        "Programming Language :: Python :: 3.8",
     ],
     entry_points={
         'console_scripts': [
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tests/main_test.py 
new/tldextract-2.2.2/tests/main_test.py
--- old/tldextract-2.2.1/tests/main_test.py     2019-03-05 18:09:22.000000000 +0100
+++ new/tldextract-2.2.2/tests/main_test.py     2019-10-16 02:07:21.000000000 +0200
@@ -22,7 +22,7 @@
 # pylint: enable=invalid-name
 
 
-def assert_extract(
+def assert_extract(  # pylint: disable=missing-docstring
         url,
         expected_domain_data,
         expected_ip_data='',
@@ -71,6 +71,11 @@
     assert_extract('http://www.com', ('www.com', '', 'www', 'com'))
 
 
+def test_suffix():
+    assert_extract('com', ('', '', '', 'com'))
+    assert_extract('co.uk', ('', '', '', 'co.uk'))
+
+
 def test_local_host():
     assert_extract('http://internalunlikelyhostname/',
                    ('', '', 'internalunlikelyhostname', ''))
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract/cli.py 
new/tldextract-2.2.2/tldextract/cli.py
--- old/tldextract-2.2.1/tldextract/cli.py      2017-11-20 00:39:43.000000000 +0100
+++ new/tldextract-2.2.2/tldextract/cli.py      2019-10-16 02:07:21.000000000 +0200
@@ -2,6 +2,7 @@
 
 
 import logging
+import sys
 
 try:
     import pkg_resources
@@ -48,9 +49,10 @@
 
     if args.update:
         tld_extract.update(True)
-    elif len(args.input) is 0:
+    elif not args.input:
         parser.print_usage()
-        exit(1)
+        sys.exit(1)
+        return
 
     for i in args.input:
         print(' '.join(tld_extract(i)))  # pylint: disable=superfluous-parens
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract/remote.py 
new/tldextract-2.2.2/tldextract/remote.py
--- old/tldextract-2.2.1/tldextract/remote.py   2019-03-05 18:09:22.000000000 +0100
+++ new/tldextract-2.2.2/tldextract/remote.py   2019-10-16 02:07:21.000000000 +0200
@@ -20,8 +20,6 @@
 
 IP_RE = re.compile(r'^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$')  # pylint: disable=line-too-long
 
-PUNY_RE = re.compile(r'^xn--', re.IGNORECASE)
-
 SCHEME_RE = re.compile(r'^([' + scheme_chars + ']+:)?//')
 
 LOG = logging.getLogger('tldextract')
@@ -36,14 +34,15 @@
 
         for url in urls:
             try:
-                text = session.get(url, timeout=cache_fetch_timeout).text
+                resp = session.get(url, timeout=cache_fetch_timeout)
+                resp.raise_for_status()
             except requests.exceptions.RequestException:
                 LOG.exception(
                     'Exception reading Public Suffix List url %s',
                     url
                 )
             else:
-                return _decode_utf8(text)
+                return _decode_utf8(resp.text)
 
     LOG.error(
         'No Public Suffix List found. Consider using a mirror or constructing '
@@ -59,8 +58,7 @@
     """
     if not isinstance(text, unicode):
         return unicode(text, 'utf-8')
-    else:
-        return text
+    return text
 
 
 def looks_like_ip(maybe_ip):
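
The `find_first_response` hunk above adds `resp.raise_for_status()`, so HTTP error statuses are handled by the same `RequestException` branch as connection failures. A simplified, standalone sketch of that try-each-URL pattern (`first_successful_text` is an illustrative name, not the package's function):

```python
import logging

import requests

LOG = logging.getLogger(__name__)


def first_successful_text(urls, timeout=10):
    """Return the body of the first URL that responds successfully, else None."""
    with requests.Session() as session:
        for url in urls:
            try:
                resp = session.get(url, timeout=timeout)
                resp.raise_for_status()  # 4xx/5xx now takes the except branch
            except requests.exceptions.RequestException:
                LOG.exception('Exception reading Public Suffix List url %s', url)
            else:
                return resp.text
    return None
```
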
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract/tldextract.py 
new/tldextract-2.2.2/tldextract/tldextract.py
--- old/tldextract-2.2.1/tldextract/tldextract.py       2019-03-05 18:09:22.000000000 +0100
+++ new/tldextract-2.2.2/tldextract/tldextract.py       2019-10-16 02:07:21.000000000 +0200
@@ -51,34 +51,19 @@
 
 
 import collections
-from contextlib import closing
 import errno
 from functools import wraps
 import json
 import logging
 import os
+import pkgutil
 import re
 
 import idna
 
-try:
-    import pkg_resources
-except ImportError:
-    class pkg_resources(object):  # pylint: disable=invalid-name
-
-        """Fake pkg_resources interface which falls back to getting resources
-        inside `tldextract`'s directory.
-        """
-        @classmethod
-        def resource_stream(cls, _, resource_name):
-            moddir = os.path.dirname(__file__)
-            path = os.path.join(moddir, resource_name)
-            return open(path)
-
 from .remote import find_first_response
 from .remote import looks_like_ip
 from .remote import IP_RE
-from .remote import PUNY_RE
 from .remote import SCHEME_RE
 
 # pylint: disable=invalid-name,undefined-variable
@@ -177,9 +162,8 @@
         requests, set this to something falsy.
 
         The default list of URLs point to the latest version of the Mozilla Public Suffix List and
-        its mirror, but any similar document could be specified.
-
-        Local files can be specified by using the `file://` protocol. (See `urllib2` documentation.)
+        its mirror, but any similar document could be specified. Local files can be specified by
+        using the `file://` protocol. (See `urllib2` documentation.)
 
         If there is no `cache_file` loaded and no data is found from the `suffix_list_urls`,
         the module will fall back to the included TLD set snapshot. If you do not want
@@ -245,24 +229,15 @@
 
         labels = netloc.split(".")
 
-        def decode_punycode(label):
-            if PUNY_RE.match(label):
-                try:
-                    return idna.decode(label.encode('ascii'))
-                except UnicodeError:
-                    pass
-            return label
-
-        translations = [decode_punycode(label).lower() for label in labels]
+        translations = [_decode_punycode(label) for label in labels]
         suffix_index = self._get_tld_extractor().suffix_index(translations)
 
-        registered_domain = ".".join(labels[:suffix_index])
         suffix = ".".join(labels[suffix_index:])
-
         if not suffix and netloc and looks_like_ip(netloc):
             return ExtractResult('', netloc, '')
 
-        subdomain, _, domain = registered_domain.rpartition('.')
+        subdomain = ".".join(labels[:suffix_index - 1]) if suffix_index else ""
+        domain = labels[suffix_index - 1] if suffix_index else ""
         return ExtractResult(subdomain, domain, suffix)
 
     def update(self, fetch_now=False):
@@ -285,6 +260,8 @@
         2. Local system cache file
         3. Remote PSL, over HTTP
         4. Bundled PSL snapshot file'''
+        # pylint: disable=no-else-return
+
         if self._extractor:
             return self._extractor
 
@@ -323,7 +300,7 @@
         error, or if this object is not set to use the cache
         file.'''
         if not self.cache_file:
-            return
+            return None
 
         try:
             with open(self.cache_file) as cache_file:
@@ -342,22 +319,18 @@
 
     @staticmethod
     def _get_snapshot_tld_extractor():
-        snapshot_stream = pkg_resources.resource_stream(__name__, '.tld_set_snapshot')
-        with closing(snapshot_stream) as snapshot_file:
-            return json.loads(snapshot_file.read().decode('utf-8'))
+        snapshot_data = pkgutil.get_data(__name__, '.tld_set_snapshot')
+        return json.loads(snapshot_data.decode('utf-8'))
 
     def _cache_tlds(self, tlds):
         '''Logs a diff of the new TLDs and caches them on disk, according to
         settings passed to __init__.'''
         if LOG.isEnabledFor(logging.DEBUG):
             import difflib
-            snapshot_stream = pkg_resources.resource_stream(__name__, '.tld_set_snapshot')
-            with closing(snapshot_stream) as snapshot_file:
-                snapshot = sorted(
-                    json.loads(snapshot_file.read().decode('utf-8'))
-                )
+            snapshot_data = pkgutil.get_data(__name__, '.tld_set_snapshot')
+            snapshot = sorted(json.loads(snapshot_data.decode('utf-8')))
             new = sorted(tlds)
-            LOG.debug('computed TLD diff:\n' + '\n'.join(difflib.unified_diff(
+            LOG.debug('computed TLD diff:\n%s', '\n'.join(difflib.unified_diff(
                 snapshot,
                 new,
                 fromfile=".tld_set_snapshot",
@@ -396,6 +369,9 @@
 
 
 class _PublicSuffixListTLDExtractor(object):
+    """Wrapper around this project's main algo for PSL
+    lookups.
+    """
 
     def __init__(self, tlds):
         self.tlds = frozenset(tlds)
@@ -404,7 +380,8 @@
         """Returns the index of the first suffix label.
         Returns len(spl) if no suffix is found
         """
-        for i in range(len(lower_spl)):
+        length = len(lower_spl)
+        for i in range(length):
             maybe_tld = '.'.join(lower_spl[i:])
             exception_tld = '!' + maybe_tld
             if exception_tld in self.tlds:
@@ -417,4 +394,15 @@
             if wildcard_tld in self.tlds:
                 return i
 
-        return len(lower_spl)
+        return length
+
+
+def _decode_punycode(label):
+    lowered = label.lower()
+    looks_like_puny = lowered.startswith('xn--')
+    if looks_like_puny:
+        try:
+            return idna.decode(label.encode('ascii')).lower()
+        except UnicodeError:
+            pass
+    return lowered
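
The `tldextract.py` hunks above swap the `rpartition`-based split for direct indexing off `suffix_index` and fold lowercasing into a module-level `_decode_punycode`. A self-contained sketch of that splitting logic; `TOY_SUFFIXES` and `split` are illustrative stand-ins for the real Public Suffix List machinery (which also handles `!` exception and `*` wildcard rules):

```python
import idna

# Toy stand-in for the real Public Suffix List set.
TOY_SUFFIXES = frozenset({'com', 'co.uk'})


def _decode_punycode(label):
    """Lowercase a label, IDNA-decoding it first if it looks like punycode."""
    lowered = label.lower()
    if lowered.startswith('xn--'):
        try:
            return idna.decode(label.encode('ascii')).lower()
        except UnicodeError:
            pass
    return lowered


def split(netloc):
    """Split a netloc into (subdomain, domain, suffix) as in the hunk above."""
    labels = netloc.split('.')
    translations = [_decode_punycode(label) for label in labels]

    # Index of the first suffix label; len(labels) when nothing matches.
    suffix_index = next(
        (i for i in range(len(translations))
         if '.'.join(translations[i:]) in TOY_SUFFIXES),
        len(translations),
    )

    suffix = '.'.join(labels[suffix_index:])
    subdomain = '.'.join(labels[:suffix_index - 1]) if suffix_index else ''
    domain = labels[suffix_index - 1] if suffix_index else ''
    return subdomain, domain, suffix


print(split('forums.bbc.co.uk'))  # ('forums', 'bbc', 'co.uk')
```
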
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract.egg-info/PKG-INFO 
new/tldextract-2.2.2/tldextract.egg-info/PKG-INFO
--- old/tldextract-2.2.1/tldextract.egg-info/PKG-INFO   2019-03-05 19:55:34.000000000 +0100
+++ new/tldextract-2.2.2/tldextract.egg-info/PKG-INFO   2019-10-16 02:10:39.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: tldextract
-Version: 2.2.1
+Version: 2.2.2
 Summary: Accurately separate the TLD from the registered domain and subdomains of a URL, using the Public Suffix List. By default, this includes the public ICANN TLDs and their exceptions. You can optionally support the Public Suffix List's private domains as well.
 Home-page: https://github.com/john-kurkowski/tldextract
 Author: John Kurkowski
@@ -40,8 +40,9 @@
 Classifier: Programming Language :: Python :: 2
 Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
 Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
 Description-Content-Type: text/markdown
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract.egg-info/SOURCES.txt 
new/tldextract-2.2.2/tldextract.egg-info/SOURCES.txt
--- old/tldextract-2.2.1/tldextract.egg-info/SOURCES.txt        2019-03-05 19:55:34.000000000 +0100
+++ new/tldextract-2.2.2/tldextract.egg-info/SOURCES.txt        2019-10-16 02:10:39.000000000 +0200
@@ -1,8 +1,15 @@
+.gitignore
+.travis.yml
+CHANGELOG.md
 LICENSE
 MANIFEST.in
 README.md
+conftest.py
+pylintrc
+pytest.ini
 setup.cfg
 setup.py
+tox.ini
 tests/__init__.py
 tests/conftest.py
 tests/custom_suffix_test.py
@@ -21,4 +28,9 @@
 tldextract.egg-info/dependency_links.txt
 tldextract.egg-info/entry_points.txt
 tldextract.egg-info/requires.txt
-tldextract.egg-info/top_level.txt
\ No newline at end of file
+tldextract.egg-info/top_level.txt
+tldextract_app/README.md
+tldextract_app/app.yaml
+tldextract_app/handlers.py
+tldextract_app/requirements.txt
+tldextract_app/setup.py
\ No newline at end of file
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract_app/README.md 
new/tldextract-2.2.2/tldextract_app/README.md
--- old/tldextract-2.2.1/tldextract_app/README.md       1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/tldextract_app/README.md       2015-03-01 04:09:52.000000000 +0100
@@ -0,0 +1,10 @@
+# GAE App
+
+## Setup
+
+    pip install -r requirements.txt
+    python setup.py
+
+## Upload to GAE
+
+    appcfg.py update ./
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract_app/app.yaml 
new/tldextract-2.2.2/tldextract_app/app.yaml
--- old/tldextract-2.2.1/tldextract_app/app.yaml        1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/tldextract_app/app.yaml        2015-03-01 04:09:52.000000000 +0100
@@ -0,0 +1,20 @@
+application: tldextract-hrd
+version: 1
+runtime: python27
+api_version: 1
+threadsafe: false # TODO: migrate CGI to WSGI
+
+handlers:
+- url: /.*
+  script: handlers.py
+
+skip_files:
+- ^(.*/)?app\.yaml
+- ^(.*/)?app\.yml
+- ^(.*/)?index\.yaml
+- ^(.*/)?index\.yml
+- ^(.*/)?#.*#
+- ^(.*/)?.*~
+- ^(.*/)?.*\.py[co]
+- ^(.*/)?.*/RCS/.*
+- ^(.*/)?\.tld_set$
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract_app/handlers.py 
new/tldextract-2.2.2/tldextract_app/handlers.py
--- old/tldextract-2.2.1/tldextract_app/handlers.py     1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/tldextract_app/handlers.py     2016-08-28 00:50:24.000000000 +0200
@@ -0,0 +1,36 @@
+'''web.py handlers for making a JSON-over-HTTP API around tldextract.'''
+
+import json
+
+# pylint: disable=import-error
+import tldextract
+import web
+
+URLS = (
+    '/api/extract', 'Extract',
+    '/api/re', 'TLDSet',
+    '/test', 'Test',
+)
+
+
+class Extract(object):
+
+    def GET(self):  # pylint: disable=invalid-name,no-self-use
+        url = web.input(url='').url
+        if not url:
+            return web.webapi.badrequest()
+
+        ext = tldextract.extract(url)._asdict()
+        web.header('Content-Type', 'application/json')
+        return json.dumps(ext) + '\n'
+
+
+class TLDSet(object):
+
+    def GET(self):  # pylint: disable=invalid-name,no-self-use
+        web.header('Content-Type', 'text/html; charset=utf-8')
+        return '<br/>'.join(sorted(tldextract.tldextract.TLD_EXTRACTOR.tlds))
+
+
+APP = web.application(URLS, globals())
+main = APP.cgirun()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract_app/requirements.txt 
new/tldextract-2.2.2/tldextract_app/requirements.txt
--- old/tldextract-2.2.1/tldextract_app/requirements.txt        1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/tldextract_app/requirements.txt        2013-10-02 17:04:00.000000000 +0200
@@ -0,0 +1 @@
+web.py==0.34
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tldextract_app/setup.py 
new/tldextract-2.2.2/tldextract_app/setup.py
--- old/tldextract-2.2.1/tldextract_app/setup.py        1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/tldextract_app/setup.py        2015-11-18 17:15:06.000000000 +0100
@@ -0,0 +1,26 @@
+'''tldextract GAE app setup.py'''
+
+import os
+import sys
+
+
+def main():
+    # Must include deps in this folder for GAE. It doesn't use a e.g.
+    # requirements.txt.
+    app_folder = os.path.dirname(__file__)
+    sys.path.append(os.path.abspath(os.path.join(app_folder, os.pardir)))
+    deps = ('tldextract', 'web')
+    for modname in deps:
+        symlink = os.path.join(app_folder, modname)
+        try:
+            os.remove(symlink)
+        except (OSError, IOError):
+            pass
+
+        mod = __import__(modname)
+        loc = os.path.dirname(mod.__file__)
+        os.symlink(loc, symlink)
+
+
+if __name__ == "__main__":
+    main()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/tldextract-2.2.1/tox.ini new/tldextract-2.2.2/tox.ini
--- old/tldextract-2.2.1/tox.ini        1970-01-01 01:00:00.000000000 +0100
+++ new/tldextract-2.2.2/tox.ini        2019-10-16 02:07:21.000000000 +0200
@@ -0,0 +1,20 @@
+[tox]
+envlist = py{27,35,36,37,38,py,py3}
+
+[testenv]
+deps =
+    py27: pylint==1.9.5
+    py{35,36,37,38,py,py3}: pylint
+    pytest
+    pytest-gitignore
+    pytest-mock
+    pytest-pylint
+    responses
+    requests
+
+commands = pytest {posargs}
+
+[testenv:codestyle]
+deps = pycodestyle
+# E501 - line too long
+commands = pycodestyle tldextract tldextract_app tests --ignore=E501 {posargs}

