Hello community,

here is the log from the commit of package python3-requests for 
openSUSE:Factory checked in at 2015-04-25 09:54:25
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python3-requests (Old)
 and      /work/SRC/openSUSE:Factory/.python3-requests.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python3-requests"

Changes:
--------
--- /work/SRC/openSUSE:Factory/python3-requests/python3-requests.changes        2015-03-16 07:01:19.000000000 +0100
+++ /work/SRC/openSUSE:Factory/.python3-requests.new/python3-requests.changes   2015-04-25 11:26:34.000000000 +0200
@@ -1,0 +2,24 @@
+Sat Apr 25 00:26:58 UTC 2015 - a...@gmx.de
+
+- update to version 2.6.2:
+  * Fix regression where compressed data that was sent as chunked data
+    was not properly decompressed. (#2561)
+
+-------------------------------------------------------------------
+Thu Apr 23 15:26:14 UTC 2015 - a...@gmx.de
+
+- update to version 2.6.1:
+  * Remove VendorAlias import machinery introduced in v2.5.2.
+  * Simplify the PreparedRequest.prepare API: We no longer require the
+    user to pass an empty list to the hooks keyword
+    argument. (c.f. #2552)
+  * Resolve redirects now receives and forwards all of the original
+    arguments to the adapter. (#2503)
+  * Handle UnicodeDecodeErrors when trying to deal with a unicode URL
+    that cannot be encoded in ASCII. (#2540)
+  * Populate the parsed path of the URI field when performing Digest
+    Authentication. (#2426)
+  * Copy a PreparedRequest’s CookieJar more reliably when it is not an
+    instance of RequestsCookieJar. (#2527)
+
+-------------------------------------------------------------------

Old:
----
  requests-2.6.0.tar.gz

New:
----
  requests-2.6.2.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python3-requests.spec ++++++
--- /var/tmp/diff_new_pack.FpWkan/_old  2015-04-25 11:26:34.000000000 +0200
+++ /var/tmp/diff_new_pack.FpWkan/_new  2015-04-25 11:26:34.000000000 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           python3-requests
-Version:        2.6.0
+Version:        2.6.2
 Release:        0
 Url:            http://python-requests.org
 Summary:        Awesome Python HTTP Library That's Actually Usable

++++++ requests-2.6.0.tar.gz -> requests-2.6.2.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/HISTORY.rst new/requests-2.6.2/HISTORY.rst
--- old/requests-2.6.0/HISTORY.rst      2015-03-14 17:43:47.000000000 +0100
+++ new/requests-2.6.2/HISTORY.rst      2015-04-23 18:28:35.000000000 +0200
@@ -3,17 +3,47 @@
 Release History
 ---------------
 
+2.6.2 (2015-04-23)
+++++++++++++++++++
+
+**Bugfixes**
+
+- Fix regression where compressed data that was sent as chunked data was not
+  properly decompressed. (#2561)
+
+2.6.1 (2015-04-22)
+++++++++++++++++++
+
+**Bugfixes**
+
+- Remove VendorAlias import machinery introduced in v2.5.2.
+
+- Simplify the PreparedRequest.prepare API: We no longer require the user to
+  pass an empty list to the hooks keyword argument. (c.f. #2552)
+
+- Resolve redirects now receives and forwards all of the original arguments to
+  the adapter. (#2503)
+
+- Handle UnicodeDecodeErrors when trying to deal with a unicode URL that
+  cannot be encoded in ASCII. (#2540)
+
+- Populate the parsed path of the URI field when performing Digest
+  Authentication. (#2426)
+
+- Copy a PreparedRequest's CookieJar more reliably when it is not an instance
+  of RequestsCookieJar. (#2527)
+
 2.6.0 (2015-03-14)
 ++++++++++++++++++
 
 **Bugfixes**
 
-- Fix handling of cookies on redirect. Previously a cookie without a host
-  value set would use the hostname for the redirected URL exposing requests
-  users to session fixation attacks and potentially cookie stealing. This was
-  disclosed privately by Matthew Daley of `BugFuzz <https://bugfuzz.com>`_.
-  An CVE identifier has not yet been assigned for this. This affects all
-  versions of requests from v2.1.0 to v2.5.3 (inclusive on both ends).
+- CVE-2015-2296: Fix handling of cookies on redirect. Previously a cookie
+  without a host value set would use the hostname for the redirected URL
+  exposing requests users to session fixation attacks and potentially cookie
+  stealing. This was disclosed privately by Matthew Daley of
+  `BugFuzz <https://bugfuzz.com>`_. This affects all versions of requests from
+  v2.1.0 to v2.5.3 (inclusive on both ends).
 
 - Fix error when requests is an ``install_requires`` dependency and ``python
   setup.py test`` is run. (#2462)
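The 2.6.2 regression fix above concerns streamed responses: when a body is both chunked and compressed, a single decompressor object has to persist across chunk boundaries. A minimal sketch of that idea, not requests' or urllib3's actual code (`decode_gzip_chunks` is an invented name):

```python
import zlib

def decode_gzip_chunks(chunks):
    # One decompressor must survive across chunks: the gzip stream spans
    # chunk boundaries, so decompressing each chunk independently fails.
    decoder = zlib.decompressobj(16 + zlib.MAX_WBITS)  # 16+ selects the gzip wrapper
    for chunk in chunks:
        out = decoder.decompress(chunk)
        if out:
            yield out
    tail = decoder.flush()
    if tail:
        yield tail
```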
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/PKG-INFO new/requests-2.6.2/PKG-INFO
--- old/requests-2.6.0/PKG-INFO 2015-03-14 17:44:16.000000000 +0100
+++ new/requests-2.6.2/PKG-INFO 2015-04-23 18:30:21.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: requests
-Version: 2.6.0
+Version: 2.6.2
 Summary: Python HTTP for Humans.
 Home-page: http://python-requests.org
 Author: Kenneth Reitz
@@ -16,6 +16,8 @@
                 :target: https://pypi.python.org/pypi/requests
         
         
+        
+        
         Requests is an Apache2 Licensed HTTP library, written in Python, for human
         beings.
         
@@ -98,17 +100,47 @@
         Release History
         ---------------
         
+        2.6.2 (2015-04-23)
+        ++++++++++++++++++
+        
+        **Bugfixes**
+        
+        - Fix regression where compressed data that was sent as chunked data was not
+          properly decompressed. (#2561)
+        
+        2.6.1 (2015-04-22)
+        ++++++++++++++++++
+        
+        **Bugfixes**
+        
+        - Remove VendorAlias import machinery introduced in v2.5.2.
+        
+        - Simplify the PreparedRequest.prepare API: We no longer require the user to
+          pass an empty list to the hooks keyword argument. (c.f. #2552)
+        
+        - Resolve redirects now receives and forwards all of the original arguments to
+          the adapter. (#2503)
+        
+        - Handle UnicodeDecodeErrors when trying to deal with a unicode URL that
+          cannot be encoded in ASCII. (#2540)
+        
+        - Populate the parsed path of the URI field when performing Digest
+          Authentication. (#2426)
+        
+        - Copy a PreparedRequest's CookieJar more reliably when it is not an instance
+          of RequestsCookieJar. (#2527)
+        
         2.6.0 (2015-03-14)
         ++++++++++++++++++
         
         **Bugfixes**
         
-        - Fix handling of cookies on redirect. Previously a cookie without a host
-          value set would use the hostname for the redirected URL exposing requests
-          users to session fixation attacks and potentially cookie stealing. This was
-          disclosed privately by Matthew Daley of `BugFuzz <https://bugfuzz.com>`_.
-          An CVE identifier has not yet been assigned for this. This affects all
-          versions of requests from v2.1.0 to v2.5.3 (inclusive on both ends).
+        - CVE-2015-2296: Fix handling of cookies on redirect. Previously a cookie
+          without a host value set would use the hostname for the redirected URL
+          exposing requests users to session fixation attacks and potentially cookie
+          stealing. This was disclosed privately by Matthew Daley of
+          `BugFuzz <https://bugfuzz.com>`_. This affects all versions of requests from
+          v2.1.0 to v2.5.3 (inclusive on both ends).
         
         - Fix error when requests is an ``install_requires`` dependency and ``python
           setup.py test`` is run. (#2462)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/README.rst new/requests-2.6.2/README.rst
--- old/requests-2.6.0/README.rst       2015-03-11 01:33:32.000000000 +0100
+++ new/requests-2.6.2/README.rst       2015-04-22 23:39:52.000000000 +0200
@@ -8,6 +8,8 @@
         :target: https://pypi.python.org/pypi/requests
 
 
+
+
 Requests is an Apache2 Licensed HTTP library, written in Python, for human
 beings.
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/__init__.py new/requests-2.6.2/requests/__init__.py
--- old/requests-2.6.0/requests/__init__.py     2015-03-14 17:43:47.000000000 +0100
+++ new/requests-2.6.2/requests/__init__.py     2015-04-23 18:28:49.000000000 +0200
@@ -42,8 +42,8 @@
 """
 
 __title__ = 'requests'
-__version__ = '2.6.0'
-__build__ = 0x020503
+__version__ = '2.6.2'
+__build__ = 0x020602
 __author__ = 'Kenneth Reitz'
 __license__ = 'Apache 2.0'
 __copyright__ = 'Copyright 2015 Kenneth Reitz'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/auth.py new/requests-2.6.2/requests/auth.py
--- old/requests-2.6.0/requests/auth.py 2015-02-23 23:01:24.000000000 +0100
+++ new/requests-2.6.2/requests/auth.py 2015-04-22 23:39:52.000000000 +0200
@@ -103,7 +103,8 @@
         # XXX not implemented yet
         entdig = None
         p_parsed = urlparse(url)
-        path = p_parsed.path
+        #: path is request-uri defined in RFC 2616 which should not be empty
+        path = p_parsed.path or "/"
         if p_parsed.query:
             path += '?' + p_parsed.query
 
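The auth.py hunk above makes Digest Authentication use a non-empty request-URI, as RFC 2616 requires. The same logic in isolation (the helper name is invented for illustration):

```python
from urllib.parse import urlparse

def digest_uri(url):
    # The request-URI used in the digest must never be empty; a bare
    # "http://example.com" parses to an empty path, so default to "/".
    parsed = urlparse(url)
    path = parsed.path or "/"
    if parsed.query:
        path += "?" + parsed.query
    return path
```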
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/cookies.py new/requests-2.6.2/requests/cookies.py
--- old/requests-2.6.0/requests/cookies.py      2015-01-19 04:08:46.000000000 +0100
+++ new/requests-2.6.2/requests/cookies.py      2015-04-22 23:39:52.000000000 +0200
@@ -6,6 +6,7 @@
 requests.utils imports from here, so be careful with imports.
 """
 
+import copy
 import time
 import collections
 from .compat import cookielib, urlparse, urlunparse, Morsel
@@ -302,7 +303,7 @@
         """Updates this jar with cookies from another CookieJar or dict-like"""
         if isinstance(other, cookielib.CookieJar):
             for cookie in other:
-                self.set_cookie(cookie)
+                self.set_cookie(copy.copy(cookie))
         else:
             super(RequestsCookieJar, self).update(other)
 
@@ -359,6 +360,21 @@
         return new_cj
 
 
+def _copy_cookie_jar(jar):
+    if jar is None:
+        return None
+
+    if hasattr(jar, 'copy'):
+        # We're dealing with an instance of RequestsCookieJar
+        return jar.copy()
+    # We're dealing with a generic CookieJar instance
+    new_jar = copy.copy(jar)
+    new_jar.clear()
+    for cookie in jar:
+        new_jar.set_cookie(copy.copy(cookie))
+    return new_jar
+
+
 def create_cookie(name, value, **kwargs):
     """Make a cookie from underspecified parameters.
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/models.py new/requests-2.6.2/requests/models.py
--- old/requests-2.6.0/requests/models.py       2015-03-14 17:30:10.000000000 +0100
+++ new/requests-2.6.2/requests/models.py       2015-04-22 23:39:52.000000000 +0200
@@ -15,7 +15,7 @@
 from .structures import CaseInsensitiveDict
 
 from .auth import HTTPBasicAuth
-from .cookies import cookiejar_from_dict, get_cookie_header
+from .cookies import cookiejar_from_dict, get_cookie_header, _copy_cookie_jar
 from .packages.urllib3.fields import RequestField
 from .packages.urllib3.filepost import encode_multipart_formdata
 from .packages.urllib3.util import parse_url
@@ -320,7 +320,7 @@
         p.method = self.method
         p.url = self.url
         p.headers = self.headers.copy() if self.headers is not None else None
-        p._cookies = self._cookies.copy() if self._cookies is not None else None
+        p._cookies = _copy_cookie_jar(self._cookies)
         p.body = self.body
         p.hooks = self.hooks
         return p
@@ -358,7 +358,8 @@
 
         if not scheme:
             raise MissingSchema("Invalid URL {0!r}: No schema supplied. "
-                                "Perhaps you meant http://{0}?".format(url))
+                                "Perhaps you meant http://{0}?".format(
+                                    to_native_string(url, 'utf8')))
 
         if not host:
             raise InvalidURL("Invalid URL %r: No host supplied" % url)
@@ -501,7 +502,15 @@
             self.prepare_content_length(self.body)
 
     def prepare_cookies(self, cookies):
-        """Prepares the given HTTP cookie data."""
+        """Prepares the given HTTP cookie data.
+
+        This function eventually generates a ``Cookie`` header from the
+        given cookies using cookielib. Due to cookielib's design, the header
+        will not be regenerated if it already exists, meaning this function
+        can only be called once for the life of the
+        :class:`PreparedRequest <PreparedRequest>` object. Any subsequent calls
+        to ``prepare_cookies`` will have no actual effect, unless the "Cookie"
+        header is removed beforehand."""
 
         if isinstance(cookies, cookielib.CookieJar):
             self._cookies = cookies
@@ -514,6 +523,10 @@
 
     def prepare_hooks(self, hooks):
         """Prepares the given hooks."""
+        # hooks can be passed as None to the prepare method and to this
+        # method. To prevent iterating over None, simply use an empty list
+        # if hooks is False-y
+        hooks = hooks or []
         for event in hooks:
             self.register_hook(event, hooks[event])
 
@@ -573,7 +586,11 @@
         self.cookies = cookiejar_from_dict({})
 
         #: The amount of time elapsed between sending the request
-        #: and the arrival of the response (as a timedelta)
+        #: and the arrival of the response (as a timedelta).
+        #: This property specifically measures the time taken between sending
+        #: the first byte of the request and finishing parsing the headers. It
+        #: is therefore unaffected by consuming the response content or the
+        #: value of the ``stream`` keyword argument.
         self.elapsed = datetime.timedelta(0)
 
         #: The :class:`PreparedRequest <PreparedRequest>` object to which this
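The `prepare_hooks` change above is what lets `PreparedRequest.prepare` accept `hooks=None` instead of requiring an empty list. A self-contained sketch of that normalization (the `registry` parameter is invented here to make the example runnable):

```python
def prepare_hooks(hooks, registry=None):
    # hooks may arrive as None now that the empty-list requirement was
    # dropped in 2.6.1; normalize so the loop never iterates over None.
    registry = {} if registry is None else registry
    hooks = hooks or []
    for event in hooks:  # hooks is dict-like: event name -> callback
        registry.setdefault(event, []).append(hooks[event])
    return registry
```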
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/__init__.py new/requests-2.6.2/requests/packages/__init__.py
--- old/requests-2.6.0/requests/packages/__init__.py    2015-03-14 03:47:10.000000000 +0100
+++ new/requests-2.6.2/requests/packages/__init__.py    2015-04-23 15:59:37.000000000 +0200
@@ -1,107 +1,3 @@
-"""
-Copyright (c) Donald Stufft, pip, and individual contributors
-
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-"Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-"""
 from __future__ import absolute_import
 
-import sys
-
-
-class VendorAlias(object):
-
-    def __init__(self, package_names):
-        self._package_names = package_names
-        self._vendor_name = __name__
-        self._vendor_pkg = self._vendor_name + "."
-        self._vendor_pkgs = [
-            self._vendor_pkg + name for name in self._package_names
-        ]
-
-    def find_module(self, fullname, path=None):
-        if fullname.startswith(self._vendor_pkg):
-            return self
-
-    def load_module(self, name):
-        # Ensure that this only works for the vendored name
-        if not name.startswith(self._vendor_pkg):
-            raise ImportError(
-                "Cannot import %s, must be a subpackage of '%s'." % (
-                    name, self._vendor_name,
-                )
-            )
-
-        if not (name == self._vendor_name or
-                any(name.startswith(pkg) for pkg in self._vendor_pkgs)):
-            raise ImportError(
-                "Cannot import %s, must be one of %s." % (
-                    name, self._vendor_pkgs
-                )
-            )
-
-        # Check to see if we already have this item in sys.modules, if we do
-        # then simply return that.
-        if name in sys.modules:
-            return sys.modules[name]
-
-        # Check to see if we can import the vendor name
-        try:
-            # We do this dance here because we want to try and import this
-            # module without hitting a recursion error because of a bunch of
-            # VendorAlias instances on sys.meta_path
-            real_meta_path = sys.meta_path[:]
-            try:
-                sys.meta_path = [
-                    m for m in sys.meta_path
-                    if not isinstance(m, VendorAlias)
-                ]
-                __import__(name)
-                module = sys.modules[name]
-            finally:
-                # Re-add any additions to sys.meta_path that were made while
-                # during the import we just did, otherwise things like
-                # requests.packages.urllib3.poolmanager will fail.
-                for m in sys.meta_path:
-                    if m not in real_meta_path:
-                        real_meta_path.append(m)
-
-                # Restore sys.meta_path with any new items.
-                sys.meta_path = real_meta_path
-        except ImportError:
-            # We can't import the vendor name, so we'll try to import the
-            # "real" name.
-            real_name = name[len(self._vendor_pkg):]
-            try:
-                __import__(real_name)
-                module = sys.modules[real_name]
-            except ImportError:
-                raise ImportError("No module named '%s'" % (name,))
-
-        # If we've gotten here we've found the module we're looking for, either
-        # as part of our vendored package, or as the real name, so we'll add
-        # it to sys.modules as the vendored name so that we don't have to do
-        # the lookup again.
-        sys.modules[name] = module
-
-        # Finally, return the loaded module
-        return module
-
-
-sys.meta_path.append(VendorAlias(["urllib3", "chardet"]))
+from . import urllib3
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/__init__.py new/requests-2.6.2/requests/packages/urllib3/__init__.py
--- old/requests-2.6.0/requests/packages/urllib3/__init__.py    2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/__init__.py    2015-04-23 18:27:04.000000000 +0200
@@ -4,7 +4,7 @@
 
 __author__ = 'Andrey Petrov (andrey.pet...@shazow.net)'
 __license__ = 'MIT'
-__version__ = '1.10.2'
+__version__ = '1.10.3'
 
 
 from .connectionpool import (
@@ -55,9 +55,11 @@
 del NullHandler
 
 
-# Set security warning to always go off by default.
 import warnings
+# SecurityWarning's always go off by default.
 warnings.simplefilter('always', exceptions.SecurityWarning)
+# InsecurePlatformWarning's don't vary between requests, so we keep it default.
+warnings.simplefilter('default', exceptions.InsecurePlatformWarning)
 
 def disable_warnings(category=exceptions.HTTPWarning):
     """
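The warnings hunk above distinguishes two filter actions: `'always'` re-emits a warning on every occurrence, while `'default'` emits it only once per source location, which suits a condition that is identical for every request. A standalone sketch (the warning classes here stand in for urllib3's):

```python
import warnings

class SecurityWarning(Warning):
    """Stand-in for urllib3's SecurityWarning."""

class InsecurePlatformWarning(SecurityWarning):
    """Stand-in for urllib3's InsecurePlatformWarning."""

# 'always': emit every time, even for repeated warnings from one location.
warnings.simplefilter('always', SecurityWarning)
# 'default': emit once per location; a platform-wide condition does not
# need to be repeated for every request.
warnings.simplefilter('default', InsecurePlatformWarning)
```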
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/_collections.py new/requests-2.6.2/requests/packages/urllib3/_collections.py
--- old/requests-2.6.0/requests/packages/urllib3/_collections.py        2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/_collections.py        2015-04-23 18:26:41.000000000 +0200
@@ -227,20 +227,20 @@
                 # Need to convert the tuple to list for further extension
                 _dict_setitem(self, key_lower, [vals[0], vals[1], val])
 
-    def extend(*args, **kwargs):
+    def extend(self, *args, **kwargs):
         """Generic import function for any type of header-like object.
         Adapted version of MutableMapping.update in order to insert items
         with self.add instead of self.__setitem__
         """
-        if len(args) > 2:
-            raise TypeError("update() takes at most 2 positional "
+        if len(args) > 1:
+            raise TypeError("extend() takes at most 1 positional "
                             "arguments ({} given)".format(len(args)))
-        elif not args:
-            raise TypeError("update() takes at least 1 argument (0 given)")
-        self = args[0]
-        other = args[1] if len(args) >= 2 else ()
+        other = args[0] if len(args) >= 1 else ()
         
-        if isinstance(other, Mapping):
+        if isinstance(other, HTTPHeaderDict):
+            for key, val in other.iteritems():
+                self.add(key, val)
+        elif isinstance(other, Mapping):
             for key in other:
                 self.add(key, other[key])
         elif hasattr(other, "keys"):
@@ -304,17 +304,20 @@
         return list(self.iteritems())
 
     @classmethod
-    def from_httplib(cls, message, duplicates=('set-cookie',)): # Python 2
+    def from_httplib(cls, message): # Python 2
         """Read headers from a Python 2 httplib message object."""
-        ret = cls(message.items())
-        # ret now contains only the last header line for each duplicate.
-        # Importing with all duplicates would be nice, but this would
-        # mean to repeat most of the raw parsing already done, when the
-        # message object was created. Extracting only the headers of interest 
-        # separately, the cookies, should be faster and requires less
-        # extra code.
-        for key in duplicates:
-            ret.discard(key)
-            for val in message.getheaders(key):
-                ret.add(key, val)
-            return ret
+        # python2.7 does not expose a proper API for exporting multiheaders
+        # efficiently. This function re-reads raw lines from the message 
+        # object and extracts the multiheaders properly.
+        headers = []
+         
+        for line in message.headers:
+            if line.startswith((' ', '\t')):
+                key, value = headers[-1]
+                headers[-1] = (key, value + '\r\n' + line.rstrip())
+                continue
+    
+            key, value = line.split(':', 1)
+            headers.append((key, value.strip()))
+
+        return cls(headers)
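The `_collections.py` hunks fix two things: `extend` previously omitted `self` from its signature and recovered it from `args[0]`, and duplicate headers such as `Set-Cookie` were collapsed when copying from another header dict. A stripped-down sketch of a duplicate-preserving `extend` (the `HeaderDict` class and its methods are invented for illustration, not urllib3's `HTTPHeaderDict`):

```python
class HeaderDict:
    """Minimal multi-valued header store, illustrative only."""

    def __init__(self):
        self._items = []

    def add(self, key, val):
        self._items.append((key, val))

    def iteritems(self):
        return iter(self._items)

    def extend(self, *args, **kwargs):
        # At most one positional "other"; keyword args are extra headers.
        if len(args) > 1:
            raise TypeError("extend() takes at most 1 positional argument "
                            "({} given)".format(len(args)))
        other = args[0] if args else ()
        if hasattr(other, 'iteritems'):   # another HeaderDict: keep duplicates
            for key, val in other.iteritems():
                self.add(key, val)
        elif hasattr(other, 'keys'):      # a plain mapping: one value per key
            for key in other:
                self.add(key, other[key])
        else:                             # an iterable of (key, value) pairs
            for key, val in other:
                self.add(key, val)
        for key, val in kwargs.items():
            self.add(key, val)
```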
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/connection.py new/requests-2.6.2/requests/packages/urllib3/connection.py
--- old/requests-2.6.0/requests/packages/urllib3/connection.py  2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/connection.py  2015-04-23 18:26:41.000000000 +0200
@@ -260,3 +260,5 @@
     # Make a copy for testing.
     UnverifiedHTTPSConnection = HTTPSConnection
     HTTPSConnection = VerifiedHTTPSConnection
+else:
+    HTTPSConnection = DummyConnection
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/connectionpool.py new/requests-2.6.2/requests/packages/urllib3/connectionpool.py
--- old/requests-2.6.0/requests/packages/urllib3/connectionpool.py      2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/connectionpool.py      2015-04-23 18:26:41.000000000 +0200
@@ -735,7 +735,6 @@
                  % (self.num_connections, self.host))
 
         if not self.ConnectionCls or self.ConnectionCls is DummyConnection:
-            # Platform-specific: Python without ssl
             raise SSLError("Can't connect to HTTPS URL because the SSL "
                            "module is not available.")
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/contrib/pyopenssl.py new/requests-2.6.2/requests/packages/urllib3/contrib/pyopenssl.py
--- old/requests-2.6.0/requests/packages/urllib3/contrib/pyopenssl.py   2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/contrib/pyopenssl.py   2015-04-23 18:26:41.000000000 +0200
@@ -38,8 +38,6 @@
 ----------------
 
 :var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.
-    Default: ``ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:
-    ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS``
 
 .. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication
 .. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)
@@ -85,22 +83,7 @@
                        + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
 }
 
-# A secure default.
-# Sources for more information on TLS ciphers:
-#
-# - https://wiki.mozilla.org/Security/Server_Side_TLS
-# - https://www.ssllabs.com/projects/best-practices/index.html
-# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
-#
-# The general intent is:
-# - Prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
-# - prefer ECDHE over DHE for better performance,
-# - prefer any AES-GCM over any AES-CBC for better performance and security,
-# - use 3DES as fallback which is secure but slow,
-# - disable NULL authentication, MD5 MACs and DSS for security reasons.
-DEFAULT_SSL_CIPHER_LIST = "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:" + \
-    "ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:" + \
-    "!aNULL:!MD5:!DSS"
+DEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS
 
 
 orig_util_HAS_SNI = util.HAS_SNI
@@ -299,7 +282,9 @@
         try:
             cnx.do_handshake()
         except OpenSSL.SSL.WantReadError:
-            select.select([sock], [], [])
+            rd, _, _ = select.select([sock], [], [], sock.gettimeout())
+            if not rd:
+                raise timeout('select timed out')
             continue
         except OpenSSL.SSL.Error as e:
             raise ssl.SSLError('bad handshake', e)
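The pyopenssl handshake hunk above replaces an indefinitely blocking `select.select` with one bounded by the socket's own timeout, so a stalled handshake raises instead of hanging. The pattern in isolation (the helper name is invented):

```python
import select
import socket

def wait_readable(sock):
    # Bound the wait by the socket's configured timeout (None blocks
    # forever, matching the old behaviour); an empty ready-list means
    # the deadline passed without data arriving.
    rd, _, _ = select.select([sock], [], [], sock.gettimeout())
    if not rd:
        raise socket.timeout('select timed out')
    return True
```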
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/exceptions.py new/requests-2.6.2/requests/packages/urllib3/exceptions.py
--- old/requests-2.6.0/requests/packages/urllib3/exceptions.py  2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/exceptions.py  2015-04-23 18:26:41.000000000 +0200
@@ -162,3 +162,8 @@
 class InsecurePlatformWarning(SecurityWarning):
     "Warned when certain SSL configuration is not available on a platform."
     pass
+
+
+class ResponseNotChunked(ProtocolError, ValueError):
+    "Response needs to be chunked in order to read it as chunks."
+    pass
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/response.py new/requests-2.6.2/requests/packages/urllib3/response.py
--- old/requests-2.6.0/requests/packages/urllib3/response.py    2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/response.py    2015-04-23 18:26:41.000000000 +0200
@@ -1,9 +1,15 @@
+try:
+    import http.client as httplib
+except ImportError:
+    import httplib
 import zlib
 import io
 from socket import timeout as SocketTimeout
 
 from ._collections import HTTPHeaderDict
-from .exceptions import ProtocolError, DecodeError, ReadTimeoutError
+from .exceptions import (
+    ProtocolError, DecodeError, ReadTimeoutError, ResponseNotChunked
+)
 from .packages.six import string_types as basestring, binary_type, PY3
 from .connection import HTTPException, BaseSSLError
 from .util.response import is_fp_closed
@@ -117,8 +123,17 @@
         if hasattr(body, 'read'):
             self._fp = body
 
-        if preload_content and not self._body:
-            self._body = self.read(decode_content=decode_content)
+        # Are we using the chunked-style of transfer encoding?
+        self.chunked = False
+        self.chunk_left = None
+        tr_enc = self.headers.get('transfer-encoding', '')
+        if tr_enc.lower() == "chunked":
+            self.chunked = True
+
+        # We certainly don't want to preload content when the response is chunked.
+        if not self.chunked:
+            if preload_content and not self._body:
+                self._body = self.read(decode_content=decode_content)
 
     def get_redirect_location(self):
         """
@@ -157,6 +172,36 @@
         """
         return self._fp_bytes_read
 
+    def _init_decoder(self):
+        """
+        Set-up the _decoder attribute if necessary.
+        """
+        # Note: content-encoding value should be case-insensitive, per RFC 7230
+        # Section 3.2
+        content_encoding = self.headers.get('content-encoding', '').lower()
+        if self._decoder is None:
+            if content_encoding in self.CONTENT_DECODERS:
+                self._decoder = _get_decoder(content_encoding)
+
+    def _decode(self, data, decode_content, flush_decoder):
+        """
+        Decode the data passed in and potentially flush the decoder.
+        """
+        try:
+            if decode_content and self._decoder:
+                data = self._decoder.decompress(data)
+        except (IOError, zlib.error) as e:
+            content_encoding = self.headers.get('content-encoding', '').lower()
+            raise DecodeError(
+                "Received response with content-encoding: %s, but "
+                "failed to decode it." % content_encoding, e)
+
+        if flush_decoder and decode_content and self._decoder:
+            buf = self._decoder.decompress(binary_type())
+            data += buf + self._decoder.flush()
+
+        return data
+
     def read(self, amt=None, decode_content=None, cache_content=False):
         """
         Similar to :meth:`httplib.HTTPResponse.read`, but with two additional
@@ -178,12 +223,7 @@
             after having ``.read()`` the file object. (Overridden if ``amt`` is
             set.)
         """
-        # Note: content-encoding value should be case-insensitive, per RFC 7230
-        # Section 3.2
-        content_encoding = self.headers.get('content-encoding', '').lower()
-        if self._decoder is None:
-            if content_encoding in self.CONTENT_DECODERS:
-                self._decoder = _get_decoder(content_encoding)
+        self._init_decoder()
         if decode_content is None:
             decode_content = self.decode_content
 
@@ -232,17 +272,7 @@
 
             self._fp_bytes_read += len(data)
 
-            try:
-                if decode_content and self._decoder:
-                    data = self._decoder.decompress(data)
-            except (IOError, zlib.error) as e:
-                raise DecodeError(
-                    "Received response with content-encoding: %s, but "
-                    "failed to decode it." % content_encoding, e)
-
-            if flush_decoder and decode_content and self._decoder:
-                buf = self._decoder.decompress(binary_type())
-                data += buf + self._decoder.flush()
+            data = self._decode(data, decode_content, flush_decoder)
 
             if cache_content:
                 self._body = data
@@ -269,11 +299,16 @@
             If True, will attempt to decode the body based on the
             'content-encoding' header.
         """
-        while not is_fp_closed(self._fp):
-            data = self.read(amt=amt, decode_content=decode_content)
+        self._init_decoder()
+        if self.chunked:
+            for line in self.read_chunked(amt):
+                yield self._decode(line, decode_content, True)
+        else:
+            while not is_fp_closed(self._fp):
+                data = self.read(amt=amt, decode_content=decode_content)
 
-            if data:
-                yield data
+                if data:
+                    yield data
 
     @classmethod
     def from_httplib(ResponseCls, r, **response_kw):
@@ -351,3 +386,59 @@
         else:
             b[:len(temp)] = temp
             return len(temp)
+
+    def read_chunked(self, amt=None):
+        # FIXME: Rewrite this method and make it a class with
+        #        a better structured logic.
+        if not self.chunked:
+            raise ResponseNotChunked("Response is not chunked. "
+                "Header 'transfer-encoding: chunked' is missing.")
+        while True:
+            # First, we'll figure out length of a chunk and then
+            # we'll try to read it from socket.
+            if self.chunk_left is None:
+                line = self._fp.fp.readline()
+                line = line.decode()
+                # See RFC 7230: Chunked Transfer Coding.
+                i = line.find(';')
+                if i >= 0:
+                    line = line[:i]  # Strip chunk-extensions.
+                try:
+                    self.chunk_left = int(line, 16)
+                except ValueError:
+                    # Invalid chunked protocol response, abort.
+                    self.close()
+                    raise httplib.IncompleteRead(''.join(line))
+                if self.chunk_left == 0:
+                    break
+            if amt is None:
+                chunk = self._fp._safe_read(self.chunk_left)
+                yield chunk
+                self._fp._safe_read(2)  # Toss the CRLF at the end of the chunk.
+                self.chunk_left = None
+            elif amt < self.chunk_left:
+                value = self._fp._safe_read(amt)
+                self.chunk_left = self.chunk_left - amt
+                yield value
+            elif amt == self.chunk_left:
+                value = self._fp._safe_read(amt)
+                self._fp._safe_read(2)  # Toss the CRLF at the end of the chunk.
+                self.chunk_left = None
+                yield value
+            else:  # amt > self.chunk_left
+                yield self._fp._safe_read(self.chunk_left)
+                self._fp._safe_read(2)  # Toss the CRLF at the end of the chunk.
+                self.chunk_left = None
+
+        # Chunk content ends with \r\n: discard it.
+        while True:
+            line = self._fp.fp.readline()
+            if not line:
+                # Some sites may not end with '\r\n'.
+                break
+            if line == b'\r\n':
+                break
+
+        # We read everything; close the "file".
+        self.release_conn()
+
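The 2.6.2 regression fix ties these two code paths together: chunked framing is stripped first, then the payload is decompressed. A minimal in-memory sketch of both steps (illustrative only; the real code reads from a socket and also handles trailers):

```python
import zlib

def parse_chunked(body):
    """Strip RFC 7230 chunked framing from an in-memory byte string."""
    payload, pos = b'', 0
    while True:
        eol = body.index(b'\r\n', pos)
        size = body[pos:eol].decode()
        semi = size.find(';')
        if semi >= 0:
            size = size[:semi]  # Strip chunk-extensions.
        chunk_len = int(size, 16)
        pos = eol + 2
        if chunk_len == 0:
            break  # Last chunk reached.
        payload += body[pos:pos + chunk_len]
        pos += chunk_len + 2  # Skip the CRLF that ends each chunk.
    return payload

# Frame a zlib-compressed payload into two chunks, then reverse both layers.
compressed = zlib.compress(b'hello world')
half = len(compressed) // 2
body = b''.join(
    b'%x\r\n%s\r\n' % (len(part), part)
    for part in (compressed[:half], compressed[half:])
) + b'0\r\n\r\n'

data = zlib.decompress(parse_chunked(body))
```

Before this release, `stream()` bypassed the decompression step for chunked responses, which is exactly what #2561 reports.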
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/packages/urllib3/util/ssl_.py new/requests-2.6.2/requests/packages/urllib3/util/ssl_.py
--- old/requests-2.6.0/requests/packages/urllib3/util/ssl_.py   2015-03-12 02:52:01.000000000 +0100
+++ new/requests-2.6.2/requests/packages/urllib3/util/ssl_.py   2015-04-23 18:26:41.000000000 +0200
@@ -9,10 +9,10 @@
 create_default_context = None
 
 import errno
-import ssl
 import warnings
 
 try:  # Test for SSL features
+    import ssl
     from ssl import wrap_socket, CERT_NONE, PROTOCOL_SSLv23
     from ssl import HAS_SNI  # Has SNI?
 except ImportError:
@@ -25,14 +25,24 @@
     OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000
     OP_NO_COMPRESSION = 0x20000
 
-try:
-    from ssl import _DEFAULT_CIPHERS
-except ImportError:
-    _DEFAULT_CIPHERS = (
-        'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'
-        'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'
-        '!eNULL:!MD5'
-    )
+# A secure default.
+# Sources for more information on TLS ciphers:
+#
+# - https://wiki.mozilla.org/Security/Server_Side_TLS
+# - https://www.ssllabs.com/projects/best-practices/index.html
+# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
+#
+# The general intent is:
+# - Prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
+# - prefer ECDHE over DHE for better performance,
+# - prefer any AES-GCM over any AES-CBC for better performance and security,
+# - use 3DES as fallback which is secure but slow,
+# - disable NULL authentication, MD5 MACs and DSS for security reasons.
+DEFAULT_CIPHERS = (
+    'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'
+    'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'
+    '!eNULL:!MD5'
+)
 
 try:
     from ssl import SSLContext  # Modern SSL?
@@ -40,7 +50,8 @@
     import sys
 
     class SSLContext(object):  # Platform-specific: Python 2 & 3.1
-        supports_set_ciphers = sys.version_info >= (2, 7)
+        supports_set_ciphers = ((2, 7) <= sys.version_info < (3,) or
+                                (3, 2) <= sys.version_info)
 
         def __init__(self, protocol_version):
             self.protocol = protocol_version
@@ -167,7 +178,7 @@
     return candidate
 
 
-def create_urllib3_context(ssl_version=None, cert_reqs=ssl.CERT_REQUIRED,
+def create_urllib3_context(ssl_version=None, cert_reqs=None,
                            options=None, ciphers=None):
     """All arguments have the same meaning as ``ssl_wrap_socket``.
 
@@ -204,6 +215,9 @@
     """
     context = SSLContext(ssl_version or ssl.PROTOCOL_SSLv23)
 
+    # Setting the default here, as we may have no ssl module on import
+    cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs
+
     if options is None:
         options = 0
         # SSLv2 is easily broken and is considered harmful and dangerous
@@ -217,7 +231,7 @@
     context.options |= options
 
     if getattr(context, 'supports_set_ciphers', True):  # Platform-specific: Python 2.6
-        context.set_ciphers(ciphers or _DEFAULT_CIPHERS)
+        context.set_ciphers(ciphers or DEFAULT_CIPHERS)
 
     context.verify_mode = cert_reqs
     if getattr(context, 'check_hostname', None) is not None:  # Platform-specific: Python 3.2
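The module-level DEFAULT_CIPHERS string above is plain OpenSSL cipher-list syntax, so it can be tried against the stdlib ssl module directly (a sketch; which suites actually resolve depends on the local OpenSSL build):

```python
import ssl

# Same cipher string urllib3 2.6.x ships as DEFAULT_CIPHERS.
DEFAULT_CIPHERS = (
    'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'
    'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'
    '!eNULL:!MD5'
)

ctx = ssl.create_default_context()
ctx.set_ciphers(DEFAULT_CIPHERS)
# get_ciphers() shows what the string resolved to on this OpenSSL build.
enabled = [c['name'] for c in ctx.get_ciphers()]
```

Making the constant a public module attribute (rather than a `_DEFAULT_CIPHERS` fallback import from ssl) is what lets urllib3 keep working when the ssl module itself is absent at import time, which is also why the `import ssl` moved inside the try block and the `cert_reqs` default moved into the function body.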
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests/sessions.py new/requests-2.6.2/requests/sessions.py
--- old/requests-2.6.0/requests/sessions.py     2015-03-14 17:30:10.000000000 +0100
+++ new/requests-2.6.2/requests/sessions.py     2015-04-22 23:39:52.000000000 +0200
@@ -90,7 +90,7 @@
 
 class SessionRedirectMixin(object):
     def resolve_redirects(self, resp, req, stream=False, timeout=None,
-                          verify=True, cert=None, proxies=None):
+                          verify=True, cert=None, proxies=None, **adapter_kwargs):
         """Receives a Response. Returns a generator of Responses."""
 
         i = 0
@@ -193,6 +193,7 @@
                 cert=cert,
                 proxies=proxies,
                 allow_redirects=False,
+                **adapter_kwargs
             )
 
             extract_cookies_to_jar(self.cookies, prepared_request, resp.raw)
@@ -560,10 +561,6 @@
         # Set up variables needed for resolve_redirects and dispatching of hooks
         allow_redirects = kwargs.pop('allow_redirects', True)
         stream = kwargs.get('stream')
-        timeout = kwargs.get('timeout')
-        verify = kwargs.get('verify')
-        cert = kwargs.get('cert')
-        proxies = kwargs.get('proxies')
         hooks = request.hooks
 
         # Get the appropriate adapter to use
@@ -591,12 +588,7 @@
         extract_cookies_to_jar(self.cookies, request, r.raw)
 
         # Redirect resolving generator.
-        gen = self.resolve_redirects(r, request,
-            stream=stream,
-            timeout=timeout,
-            verify=verify,
-            cert=cert,
-            proxies=proxies)
+        gen = self.resolve_redirects(r, request, **kwargs)
 
         # Resolve redirects if allowed.
         history = [resp for resp in gen] if allow_redirects else []
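The sessions.py change replaces a hand-maintained list of keyword arguments with a generic **kwargs pass-through, so any option accepted by the adapter also survives redirect resolution (#2503). The pattern in isolation (function names here are made up for illustration, not the requests API):

```python
def adapter_send(request, stream=False, timeout=None, verify=True,
                 cert=None, proxies=None, **adapter_kwargs):
    """Stand-in for an adapter's send(); echoes what it received."""
    return {'request': request, 'timeout': timeout, **adapter_kwargs}

def resolve_redirects(request, **adapter_kwargs):
    # Instead of naming each argument, forward everything untouched, so a
    # new adapter option survives redirect handling with no code change here.
    return adapter_send(request, **adapter_kwargs)

result = resolve_redirects('GET /', timeout=3.0, max_retries=2)
```

The design benefit is that the middle layer no longer needs updating every time the adapter grows a keyword argument; previously, options not on the hard-coded list were silently dropped on redirects.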
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/requests.egg-info/PKG-INFO new/requests-2.6.2/requests.egg-info/PKG-INFO
--- old/requests-2.6.0/requests.egg-info/PKG-INFO       2015-03-14 17:44:16.000000000 +0100
+++ new/requests-2.6.2/requests.egg-info/PKG-INFO       2015-04-23 18:30:21.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: requests
-Version: 2.6.0
+Version: 2.6.2
 Summary: Python HTTP for Humans.
 Home-page: http://python-requests.org
 Author: Kenneth Reitz
@@ -16,6 +16,8 @@
                 :target: https://pypi.python.org/pypi/requests
         
         
+        
+        
         Requests is an Apache2 Licensed HTTP library, written in Python, for human
         beings.
         
@@ -98,17 +100,47 @@
         Release History
         ---------------
         
+        2.6.2 (2015-04-23)
+        ++++++++++++++++++
+        
+        **Bugfixes**
+        
+        - Fix regression where compressed data that was sent as chunked data was not
+          properly decompressed. (#2561)
+        
+        2.6.1 (2015-04-22)
+        ++++++++++++++++++
+        
+        **Bugfixes**
+        
+        - Remove VendorAlias import machinery introduced in v2.5.2.
+        
+        - Simplify the PreparedRequest.prepare API: We no longer require the user to
+          pass an empty list to the hooks keyword argument. (c.f. #2552)
+        
+        - Resolve redirects now receives and forwards all of the original arguments to
+          the adapter. (#2503)
+        
+        - Handle UnicodeDecodeErrors when trying to deal with a unicode URL that
+          cannot be encoded in ASCII. (#2540)
+        
+        - Populate the parsed path of the URI field when performing Digest
+          Authentication. (#2426)
+        
+        - Copy a PreparedRequest's CookieJar more reliably when it is not an instance
+          of RequestsCookieJar. (#2527)
+        
         2.6.0 (2015-03-14)
         ++++++++++++++++++
         
         **Bugfixes**
         
-        - Fix handling of cookies on redirect. Previously a cookie without a host
-          value set would use the hostname for the redirected URL exposing requests
-          users to session fixation attacks and potentially cookie stealing. This was
-          disclosed privately by Matthew Daley of `BugFuzz <https://bugfuzz.com>`_.
-          An CVE identifier has not yet been assigned for this. This affects all
-          versions of requests from v2.1.0 to v2.5.3 (inclusive on both ends).
+        - CVE-2015-2296: Fix handling of cookies on redirect. Previously a cookie
+          without a host value set would use the hostname for the redirected URL
+          exposing requests users to session fixation attacks and potentially cookie
+          stealing. This was disclosed privately by Matthew Daley of
+          `BugFuzz <https://bugfuzz.com>`_. This affects all versions of requests from
+          v2.1.0 to v2.5.3 (inclusive on both ends).
         
        - Fix error when requests is an ``install_requires`` dependency and ``python
           setup.py test`` is run. (#2462)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/setup.py new/requests-2.6.2/setup.py
--- old/requests-2.6.0/setup.py 2015-03-14 17:30:10.000000000 +0100
+++ new/requests-2.6.2/setup.py 2015-04-22 23:39:52.000000000 +0200
@@ -30,12 +30,8 @@
 
 version = ''
 with open('requests/__init__.py', 'r') as fd:
-    reg = re.compile(r'__version__\s*=\s*[\'"]([^\'"]*)[\'"]')
-    for line in fd:
-        m = reg.match(line)
-        if m:
-            version = m.group(1)
-            break
+    version = re.search(r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]',
+                        fd.read(), re.MULTILINE).group(1)
 
 if not version:
     raise RuntimeError('Cannot find version information')
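The new setup.py version lookup relies on re.MULTILINE so that ``^`` anchors at the start of each line rather than only at the start of the file. A self-contained equivalent (the file contents below are a made-up stand-in for requests/__init__.py):

```python
import re

init_contents = '''\
# hypothetical requests/__init__.py contents
__title__ = 'requests'
__version__ = '2.6.2'
'''

# One re.search over the whole file replaces the old per-line loop.
match = re.search(r"^__version__\s*=\s*['\"]([^'\"]*)['\"]",
                  init_contents, re.MULTILINE)
version = match.group(1)
```

Without re.MULTILINE the pattern would only match if ``__version__`` were the very first thing in the file, which is why the flag is essential here.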
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/requests-2.6.0/test_requests.py new/requests-2.6.2/test_requests.py
--- old/requests-2.6.0/test_requests.py 2015-03-11 16:02:15.000000000 +0100
+++ new/requests-2.6.2/test_requests.py 2015-04-22 23:39:52.000000000 +0200
@@ -1613,7 +1613,6 @@
     p.prepare(
         method='GET',
         url=u('http://www.example.com/üniçø∂é'),
-        hooks=[]
     )
     assert_copy(p, p.copy())
 

