[issue13359] urllib2 doesn't escape spaces in http requests
Mads Kiilerich added the comment:

Yes, the URL sent by urllib2 must not contain spaces. In my opinion the only way to handle that correctly is to not pass URLs with spaces to urlopen. Escaping the URLs is not a good solution - even if the API were being designed from scratch. It would be better to raise an exception when an invalid URL is passed.

Note for example that '/' and the %-encoding of '/' are different, so it must be possible to pass a URL containing both to urlopen. That is not possible if it escapes automatically.

--
___ Python tracker <http://bugs.python.org/issue13359> ___
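A sketch of what "escape before calling urlopen" could look like on the caller's side (Python 2; the helper name and the choice of safe characters are illustrative assumptions, not part of urllib2):

import urllib
import urlparse

def quote_url(url):
    # Quote only the path component. '%' is kept in the safe set so an
    # already-encoded %2F survives - which also means a literal '%' is not
    # re-encoded. That ambiguity is exactly why urlopen cannot do this on
    # the caller's behalf.
    parts = urlparse.urlsplit(url)
    path = urllib.quote(parts.path, safe='/%')
    return urlparse.urlunsplit((parts.scheme, parts.netloc, path,
                                parts.query, parts.fragment))

print quote_url('http://example.com/a file.txt')
# -> http://example.com/a%20file.txt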
[issue13359] urllib2 doesn't escape spaces in http requests
Mads Kiilerich added the comment:

FWIW, I don't think it is a good idea to escape automatically. It would change the behaviour in a non-backward-compatible way for existing applications that pass encoded URLs to this function. I think the existing behaviour is better. The documentation and the failure mode for passing URLs with spaces could, however, be improved.

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue13359> ___
[issue6774] socket.shutdown documentation: on some platforms, closing one half closes the other half
Mads Kiilerich added the comment:

I was scared by the note in the documentation and wondered if the socket Python API was completely incapable of handling half-closed connections cross platform. pitrou helped me on IRC to track the note down to this issue.

IMO the bug report should have been rejected and the documentation patch should be removed. It shouldn't be that surprising that shutting something down that already has been shut down (by the peer) will fail. I don't see any indication that a "shutdown call closes the connection on the other half". It makes it half-closed, as it should - and if it did anything else (which the note indicates) then it would be a big violation of the BSD TCP API.

Ok, it might be slightly surprising that the next shutdown on the other end fails, but that is fully covered by "Note: Some behavior may be platform dependent, since calls are made to the operating system socket APIs." It is not specific to Python in any way, AFAICT. If anything it could just say something like "Note that shutdown of a socket that already has been shut down by the peer is platform dependent and might fail."

--
nosy: +kiilerix, pitrou
title: socket.shudown documentation: on some platforms, closing one half closes the other half -> socket.shutdown documentation: on some platforms, closing one half closes the other half
___ Python tracker <http://bugs.python.org/issue6774> ___
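A small half-close sketch of the behaviour in question (illustration only; socketpair() keeps it self-contained and assumes a Unix-like platform):

import socket

a, b = socket.socketpair()
a.shutdown(socket.SHUT_WR)      # half-close: 'a' sends nothing more
b.sendall(b'still readable')    # the other direction keeps working
print(a.recv(100))              # -> 'still readable'
try:
    b.shutdown(socket.SHUT_RD)  # shutting down the direction the peer
except socket.error as e:       # already shut down may or may not fail
    print('platform dependent: %s' % e)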
[issue13721] ssl.wrap_socket on a connected but failed connection succeeds and .peer_certificate gives AttributeError
Mads Kiilerich added the comment:

I would find ValueError surprising here. socket.error or SSLError would be less surprising ... even though it technically isn't completely true. But isn't it more like a kind of RuntimeError to call getpeercert after the socket has been closed?

--
___ Python tracker <http://bugs.python.org/issue13721> ___
[issue13721] ssl.wrap_socket on a connected but failed connection succeeds and .peer_certificate gives AttributeError
Mads Kiilerich added the comment:

> I'm a bit wary of API bloat here.

Yes, but explicit is better than magic ...

> Thanks. So fixing how getpeercert behaves and either raise a dedicated
> error or return None would improve things here, right?

Well ... that would at least make it theoretically possible to claim that it works as intended ;-)

A counter argument could be that retrieving the certificate that already has been used for negotiation isn't a socket operation. It would make sense to be able to look at it even after the socket has been closed. From that point of view _sslobj should be kept "forever".

A return value of None would still not indicate whether we had a working connection without a certificate or a failed connection. That would be annoying.

My primary concern with my Mercurial hat on is to get the documentation updated so we know how to write code that also works correctly with previous Python versions.

--
___ Python tracker <http://bugs.python.org/issue13721> ___
[issue13721] ssl.wrap_socket on a connected but failed connection succeeds and .peer_certificate gives AttributeError
Mads Kiilerich added the comment:

I won't claim to know more about socket error codes than what the Linux man pages say. According to them only send() can fail with ECONNRESET, even though the POSIX Programmer's Manual man pages mention many others. getpeername() is however not documented as being able to return ECONNRESET, and ENOTCONN is apparently the most appropriate error code for getpeername() both on linux (3) and posix (3p).

So: ENOTCONN from getpeername just means that the connection isn't connected. It doesn't indicate that no connection attempt has been made, and the use of getpeername in SSLSocket.__init__ is thus not usable for checking "if the underlying socket isn’t connected yet".

The wrap_socket API promises to be able to wrap both connected and unconnected sockets without being told explicitly what the programmer intended. A system network socket knows if it is unused or failed, but I am not aware of any reliable cross-platform way to observe that (but that doesn't mean that none exist). The only way to reliably implement the documented wrap_socket API might thus be to maintain a flag in PySocketSockObject. Introducing a new and more explicit way of wrapping connected sockets might be a simpler and more stable solution.

From another perspective: Any user of sockets must be aware that socket operations can fail at any time. It might thus not be a problem that wrap_socket fails to fail, as long as the programmer knows how to catch the failure in the next operation. From that point of view the problem is that it is surprising and undocumented how getpeercert can fail.

--
___ Python tracker <http://bugs.python.org/issue13721> ___
[issue13721] ssl.wrap_socket on a connected but failed connection succeeds and .peer_certificate gives AttributeError
Mads Kiilerich added the comment:

I think it would be confusing if getpeercert returned None both for valid connections without certificates and also for invalid connections. I would almost prefer the current behaviour (AttributeError) if only it were documented and there was a documented way to check if the connection actually was alive. Do you agree that checking .cipher() is the recommended way to do that in a way that is compatible with past and future 2.x versions?

I hope the proper fix will ensure that an exception always is raised if the ssl handshake fails - and that a successful wrap_socket means that the ssl negotiation did succeed with the given constraints. It might, however, only be feasible to fix that for 3.x.

I filed Issue13724 for the create_connection documentation.

--
___ Python tracker <http://bugs.python.org/issue13721> ___
[issue13724] socket.create_connection and multiple IP addresses
New submission from Mads Kiilerich:

Forked from issue13721 where I was too lazy to report it separately:

http://docs.python.org/release/2.7.2/library/socket#socket.create_connection doesn't describe how it loops over all IP addresses. That seems to be the function's main advantage (and a gotcha) compared to creating the socket and connecting directly.

I propose to warn that it might "hang" for up to n*timeout before anything happens, with something like this in the documentation and docstring:

    ... and return the socket object. If the host resolves to multiple IP
    addresses then they will all be tried in turn until one of them
    succeeds. Passing the optional timeout parameter will set the timeout
    on the socket instance before each attempt to connect. If no ...

--
assignee: docs@python
components: Documentation
messages: 150772
nosy: docs@python, kiilerix, pitrou
priority: normal
severity: normal
status: open
title: socket.create_connection and multiple IP addresses
type: enhancement
versions: Python 2.7
___ Python tracker <http://bugs.python.org/issue13724> ___
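Roughly what the looping looks like (a simplified sketch written from the description above, not the actual stdlib code); it makes the n*timeout worst case visible:

import socket

def create_connection_sketch(address, timeout=None):
    host, port = address
    err = None
    for family, socktype, proto, _name, sockaddr in socket.getaddrinfo(
            host, port, 0, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            if timeout is not None:
                sock.settimeout(timeout)  # set before *each* connect attempt
            sock.connect(sockaddr)
            return sock
        except socket.error as e:
            err = e
            sock.close()
    raise err or socket.error('getaddrinfo returned no addresses')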
[issue13721] ssl.wrap_socket on a connected but failed connection succeeds and .peer_certificate gives AttributeError
New submission from Mads Kiilerich:

According to http://docs.python.org/release/2.7.2/library/ssl wrap_socket can be used either on connected sockets or on un-connected sockets which then can be .connect'ed. In the latter case SSLSocket.connect() will (AFAIK) always raise a nice exception if the connection fails before the ssl negotiation has completed. But when an existing connected but failing socket is wrapped then SSLSocket.__init__ will create an instance with self._sslobj = None without negotiating ssl and without raising any exception. Many SSLSocket methods (such as .cipher) check for self._sslobj, but for example .getpeercert doesn't and will dereference None. That can lead to spurious crashes in applications.

This problem showed up with Mercurial and connections from China to code.google.com being blocked by the Chinese firewall - see for example https://bugzilla.redhat.com/show_bug.cgi?id=771691 . In that case

import socket, ssl, time
s = socket.create_connection(('code.google.com', 443))
time.sleep(1)
ssl_sock = ssl.wrap_socket(s)
ssl_sock.getpeercert(True)

would fail with

...
    ssl_sock.getpeercert(True)
  File "/usr/lib64/python2.7/ssl.py", line 172, in getpeercert
    return self._sslobj.peer_certificate(binary_form)
AttributeError: 'NoneType' object has no attribute 'peer_certificate'

The problem occurs in the case where The Chinese Wall responds correctly to the SYN with SYN+ACK but soon after sends a RST. The sleep is necessary to reproduce it consistently; that makes sure the RST has been received and getpeername fails. Otherwise getpeername succeeds and the connection reset is only seen later on while negotiating ssl, and socket errors there are handled 'correctly'.

The problem can be reproduced on Linux with

iptables -t filter -A FORWARD -p tcp --dport 443 ! --tcp-flags SYN SYN -j REJECT --reject-with tcp-reset

I would expect that wrap_socket / SSLSocket.__init__ raised an exception if the wrapped socket has been connected but failed. Calling getpeername is insufficient to detect that (and it is too racy to be reliable). Alternatively all SSLSocket methods should take care not to dereference self._sslobj and they should respond properly - preferably with a socket/ssl exception. Alternatively the documentation should describe how all applications that wrap connected sockets have to verify that it actually was connected. Checking .cipher() is apparently currently the least ugly way to do that?

One good(?) reason to wrap connected sockets is to be able to use socket.create_connection which tries all IP addresses of an fqdn before it fails. (Btw: That isn't described in the documentation! That confused me while debugging this.) I guess applications (like Mercurial) that for that reason want to use create_connection with 2.7.2 and older should check .cipher() as a workaround?

--
components: Library (Lib)
messages: 150754
nosy: kiilerix
priority: normal
severity: normal
status: open
title: ssl.wrap_socket on a connected but failed connection succeeds and .peer_certificate gives AttributeError
type: behavior
versions: Python 2.7
___ Python tracker <http://bugs.python.org/issue13721> ___
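For reference, the workaround mentioned above would look something like this (a sketch only - whether checking .cipher() is a supported way to detect the failed handshake is exactly what this report asks to have documented):

import socket, ssl

s = socket.create_connection(('code.google.com', 443))
ssl_sock = ssl.wrap_socket(s)
if ssl_sock.cipher() is None:
    # wrap_socket "succeeded" without ever negotiating ssl; treat it as a
    # failed connection instead of letting getpeercert crash later
    ssl_sock.close()
    raise ssl.SSLError('ssl negotiation did not complete')
cert = ssl_sock.getpeercert(True)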
[issue12833] raw_input misbehaves when readline is imported
Changes by Mads Kiilerich:

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue12833> ___
[issue12786] subprocess wait() hangs when stdin is closed
Changes by Mads Kiilerich:

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue12786> ___
[issue12000] SSL certificate verification failed if no dNSName entry in subjectAltName
Mads Kiilerich added the comment:

Nicolas Bareil wrote, On 05/07/2011 09:48 AM:
> Do you think this test should fail?

Until now I have considered this behaviour OK but undocumented and officially unsupported in Python. One (the best?) reason for considering it OK is that if someone (intentionally or not) trusts a certificate that happens to have the textual representation of an IP address in commonName then there is no doubt what the intention with that is. This case is thus within what I considered secure behaviour.

But the more I look at it the more convinced I get that this test should fail. RFC 2818 mentions subjectAltName iPAddress as a "must" for IP addresses - even though it only uses a lower-case and thus perhaps-not-necessarily-authoritative "must". But the best argument against IP in commonName is that it isn't mentioned anywhere, and when it isn't explicitly permitted we should consider it forbidden.

A consequence of that is that my previous concern is invalid. There is no reason the presence of a subjectAltName iPAddress should prevent fallback from dNSName to commonName.

--
___ Python tracker <http://bugs.python.org/issue12000> ___
[issue12000] SSL certificate verification failed if no dNSName entry in subjectAltName
Mads Kiilerich added the comment:

In my opinion the RFCs are a bit unclear about how iPAddress subjectAltNames should be handled. (I also don't know if Python currently does the right thing by accepting and matching IP addresses if specified in commonName.) Until now Python failed to the safe side by not matching on subjectAltName iPAddress but also not falling back to commonName if they were specified.

AFAICS, with this change it is possible to create strange certificates that Python would accept when an IP address matched commonName but other implementations would reject because of an iPAddress mismatch. That is probably not a real problem, but I wanted to point it out as the biggest issue I could find with this fix. Nice catch.

We could perhaps add IP addresses to dnsnames even though we don't match on them.

--
___ Python tracker <http://bugs.python.org/issue12000> ___
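To illustrate the last suggestion: a matcher could look at iPAddress entries explicitly instead of falling back to commonName. The 'IP Address' label and the sample values below are assumptions made for illustration, not taken from actual getpeercert() output:

cert = {
    'subject': ((('commonName', 'example.com'),),),
    'subjectAltName': (('DNS', 'example.com'), ('IP Address', '192.0.2.10')),
}

def matches_ip(cert, ip):
    # accept only if the literal address we connected to is listed
    for kind, value in cert.get('subjectAltName', ()):
        if kind == 'IP Address' and value == ip:
            return True
    return False

print(matches_ip(cert, '192.0.2.10'))   # True
print(matches_ip(cert, '192.0.2.11'))   # False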
[issue12000] SSL certificate verification failed if no dNSName entry in subjectAltName
Changes by Mads Kiilerich:

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue12000> ___
[issue11725] httplib and urllib2 failed ssl connection httplib.BadStatusLine
Mads Kiilerich added the comment:

I have filed issue11736 as a more or less related (or bogus) issue.

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue11725> ___
[issue11736] windows installers ssl module / openssl broken for some sites
New submission from Mads Kiilerich:

(Probably the same root cause as issue11725 and using the same test case and analysis, but it seems like it isn't just somebody else's problem.)

Expected behaviour:

C:\Python26>python --version
Python 2.6.4

C:\Python26>python -c "import urllib2; urllib2.urlopen('https://www.finratrace.org')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python26\lib\urllib2.py", line 124, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python26\lib\urllib2.py", line 395, in open
    response = meth(req, response)
  File "C:\Python26\lib\urllib2.py", line 508, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python26\lib\urllib2.py", line 433, in error
    return self._call_chain(*args)
  File "C:\Python26\lib\urllib2.py", line 367, in _call_chain
    result = func(*args)
  File "C:\Python26\lib\urllib2.py", line 516, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden

but

C:\Python26>python --version
Python 2.6.5

C:\Python26>python -c "import urllib2; urllib2.urlopen('https://www.finratrace.org')"
(hangs...)

The same behaviour is seen on all following versions up to 2.7.1. I guess the Python windows installer started to contain an openssl that changed something?

I think this has caused a number of strange Mercurial bug reports.

--
components: Windows
messages: 132736
nosy: kiilerix
priority: normal
severity: normal
status: open
title: windows installers ssl module / openssl broken for some sites
versions: Python 2.7
___ Python tracker <http://bugs.python.org/issue11736> ___
[issue10795] standard library do not use ssl as recommended
Mads Kiilerich added the comment:

The response I got to this issue hinted that it was a lame issue I filed. I haven't had time/focus to investigate further and give constructive feedback.

I think it is kind of OK to require explicit specification of the ca_certs as long as it is made clear in all the relevant places that it _has_ to be done. I think it would be a good idea to deprecate the default value for ca_certs and issue a warning if ca_certs hasn't been specified (as None or a path).

I have heard that some Python variants come with the system ca_certs built in and hard-coded somehow. That is in a way very nice and convenient and a good solution (as long as the user wants to use the same ca_certs for all purposes), but it would have to be available and reliable on all platforms to be really useful.

--
___ Python tracker <http://bugs.python.org/issue10795> ___
[issue10795] standard library do not use ssl as recommended
New submission from Mads Kiilerich:

As discussed on issue1589 it is now possible to create decent ssl connections with the ssl module - assuming ca_certs is specified and it is checked that the certificate matches. The standard library does, however, neither do that nor make it possible to do it in the places where it uses ssl. For example smtplib starttls does not make it possible at all to specify ca_certs.

I suggest all uses of ssl should be reviewed - and fixed if necessary. The documentation should also be improved to make it clear what is necessary to create "secure" connections.

--
components: Library (Lib)
messages: 124898
nosy: kiilerix, pitrou
priority: normal
severity: normal
status: open
title: standard library do not use ssl as recommended
versions: Python 2.7
___ Python tracker <http://bugs.python.org/issue10795> ___
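For reference, "as recommended" boils down to something like the following sketch (the CA bundle path is a Fedora-style assumption, and the hostname check discussed in issue1589 still has to be added on top):

import socket, ssl

sock = socket.create_connection(('example.com', 443))
ssl_sock = ssl.wrap_socket(sock,
                           cert_reqs=ssl.CERT_REQUIRED,
                           ca_certs='/etc/pki/tls/certs/ca-bundle.crt')
cert = ssl_sock.getpeercert()
# The chain is now verified against the CA bundle, but the certificate
# still has to be matched against the intended hostname by the application.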
[issue10618] regression in subprocess.call() command quoting
Changes by Mads Kiilerich:

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue10618> ___
[issue1589] New SSL module doesn't seem to verify hostname against commonName in certificate
Mads Kiilerich added the comment:

> So I know the current patch doesn't support IP addresses

Not exactly. The committed patch does not consider IP addresses - especially not iPAddress entries in subjectAltName. But Python only distinguishes resolvable names from IP addresses at a very low level. At the ssl module level the name and IP are considered the same, so we actually do support IP addresses if specified in commonName or subjectAltName DNS. We are thus "vulnerable" to this issue. (AFAIK AFAICS)

(It seems like IP in commonName isn't permitted by the RFCs, but I think it is quite common, especially for self-signed certificates.)

> CVE-2010-3170: http://www.mozilla.org/security/announce/2010/mfsa2010-70.html

For reference, the actual report can be found on http://www.securityfocus.com/archive/1/513396

FWIW, I don't think it is critical at all. Granted, it is a deviation from the specification, and that is not good in a security critical part. But we do not claim to implement the full specification, so I don't think this deviation makes any difference. Further, this issue will only have relevance if one of the trusted CAs creates invalid certificates. But if the trusted CAs create invalid certificates the user has lost anyway and things can't get much worse.

--
___ Python tracker <http://bugs.python.org/issue1589> ___
[issue1589] New SSL module doesn't seem to verify hostname against commonName in certificate
Mads Kiilerich added the comment:

Can you confirm that the exception raised both on "too early" and "too late" is something like "...SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"? (If so: It would be nice if a slightly more helpful message could be given. I don't know if that is possible.)

--
___ Python tracker <http://bugs.python.org/issue1589> ___
[issue1589] New SSL module doesn't seem to verify hostname against commonName in certificate
Mads Kiilerich added the comment:

> Indeed. But, strictly speaking, there are no tests for IPs, so it
> shouldn't be taken for granted that it works, even for commonName.
> The rationale is that there isn't really any point in using an IP rather
> a host name.

I don't know if there is a point or not, but some hosts are for some reason intended to be connected to using IP addresses, and their certificates thus contain IP addresses. I think we should support that too, and I find it a bit confusing to only have partial support for subjectAltName.

> Well, that's additional logic to code. I'm not sure it's worth it,
> especially given that the function is called match_hostname in the first
> place.

"hostname" in Python usually refers to both IP addresses and DNS hostnames (just like in URLs), so I think it is a fair assumption that IP addresses also work in this hostname function.

Perhaps it should be noted that CertificateError only is raised by match_hostname, so a paranoid programmer doesn't start catching it everywhere - and also that match_hostname won't raise SSLError.

--
___ Python tracker <http://bugs.python.org/issue1589> ___
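For the record, this is roughly how I picture the function being used once the patch lands (a sketch assuming the names from the patch, ssl.match_hostname and ssl.CertificateError; whether it will also accept IP addresses is the open question above):

import socket, ssl

def connect_checked(host, port, ca_certs):
    sock = socket.create_connection((host, port))
    ssl_sock = ssl.wrap_socket(sock, cert_reqs=ssl.CERT_REQUIRED,
                               ca_certs=ca_certs)
    try:
        ssl.match_hostname(ssl_sock.getpeercert(), host)
    except ssl.CertificateError:
        ssl_sock.close()
        raise
    return ssl_sock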
[issue1589] New SSL module doesn't seem to verify hostname against commonName in certificate
Mads Kiilerich added the comment:

I'm sorry to make the discussion longer ...

From a Python user/programmer's point of view it would be nice if http://docs.python.org/library/ssl.html also clarified what "validation" means (apparently that the cert chain all the way from one of ca_certs is valid and has valid dates, except that CRLs are not checked?). It could perhaps be mentioned next to the ca_certs description. It would also be nice to see an example with subjectAltName, both with DNS and IP entries.

Has it been tested that the way Python uses OpenSSL really checks both notBefore and notAfter?

Some comments to the patch. Some of them are just thoughts that can be ignored, but I think some of them are relevant.

_dnsname_to_pat:

AFAICS * shouldn't match the empty string. I would expect "fail(cert, '.a.com')".

I would prefer to fail to the safe side and only allow a left-most wildcard, and thus not allow multiple or f* wildcards, just like draft-saintandre-tls-server-id-check-09 suggests.

I would prefer to not use re for such an important task where clarity and correctness is so important. If we only allow left-most wildcards it shouldn't be necessary.

match_hostname:

I don't understand "IP addresses are not accepted for hostname". I assume that if commonName specifies an IP address then a hostname with this address is valid. So isn't it more that "subjectAltName iPAddress isn't supported"? But wouldn't it be better and simpler to simply support iPAddress - either as the only check iff hostname "looks" like an IP address, alternatively in all cases checking against both DNS and IP entries?

"dnsnames" doesn't say much about what it is. Perhaps "unmatched"?

"if san: ... else: ..." would perhaps be a bit clearer.

"doesn't match with either of (%s)" ... aren't the parentheses around the list elements too pythonic for a message intended for end users? Separate error messages for subjectAltName and commonName could be helpful. I assume it should be "no appropriate _commonName_" to match "subjectAltName".

test:

cert for example.com is defined twice.

Finally: How about unicode and/or IDN hostnames?

--
___ Python tracker <http://bugs.python.org/issue1589> ___
[issue1589] New SSL module doesn't seem to verify hostname against commonName in certificate
Mads Kiilerich added the comment:

I added some extra verification to Mercurial (http://www.selenic.com/hg/rev/f2937d6492c5). Feel free to use the following under the Python license in Python or elsewhere. It could be a separate method/function or it could be integrated in wrap_socket and controlled by a keyword. I would appreciate hearing if you find the implementation insufficient or incorrect.

The purpose of this function is to verify whether the received and validated certificate matches the host we intended to connect to. I try to keep it simple and to fail to the safe side. "Correct" subjectAltName handling seems not to be feasible.

Are CRLs checked by the SSL module? Otherwise it deserves a big fat warning. (I now assume that notBefore is handled by the SSL module and shouldn't be checked here.)

import ssl, time
_ = lambda s: s   # stand-in for Mercurial's i18n wrapper so the snippet runs standalone

def _verifycert(cert, hostname):
    '''Verify that cert (in socket.getpeercert() format) matches hostname and
    is valid at this time. CRLs and subjectAltName are not handled.

    Returns error message if any problems are found and None on success.
    '''
    if not cert:
        return _('no certificate received')
    notafter = cert.get('notAfter')
    if notafter and time.time() > ssl.cert_time_to_seconds(notafter):
        return _('certificate expired %s') % notafter
    dnsname = hostname.lower()
    for s in cert.get('subject', []):
        key, value = s[0]
        if key == 'commonName':
            certname = value.lower()
            if (certname == dnsname or
                '.' in dnsname and certname == '*.' + dnsname.split('.', 1)[1]):
                return None
            return _('certificate is for %s') % certname
    return _('no commonName found in certificate')

def check(a, b):
    if a != b:
        print (a, b)

# Test non-wildcard certificates
check(_verifycert({'subject': ((('commonName', 'example.com'),),)},
                  'example.com'),
      None)
check(_verifycert({'subject': ((('commonName', 'example.com'),),)},
                  'www.example.com'),
      'certificate is for example.com')
check(_verifycert({'subject': ((('commonName', 'www.example.com'),),)},
                  'example.com'),
      'certificate is for www.example.com')

# Test wildcard certificates
check(_verifycert({'subject': ((('commonName', '*.example.com'),),)},
                  'www.example.com'),
      None)
check(_verifycert({'subject': ((('commonName', '*.example.com'),),)},
                  'example.com'),
      'certificate is for *.example.com')
check(_verifycert({'subject': ((('commonName', '*.example.com'),),)},
                  'w.w.example.com'),
      'certificate is for *.example.com')

# Avoid some pitfalls
check(_verifycert({'subject': ((('commonName', '*.foo'),),)}, 'foo'),
      'certificate is for *.foo')
check(_verifycert({'subject': ((('commonName', '*o'),),)}, 'foo'),
      'certificate is for *o')

import time
lastyear = time.gmtime().tm_year - 1
nextyear = time.gmtime().tm_year + 1
check(_verifycert({'notAfter': 'May 9 00:00:00 %s GMT' % lastyear},
                  'example.com'),
      'certificate expired May 9 00:00:00 %s GMT' % lastyear)
check(_verifycert({'notAfter': 'Sep 29 15:29:48 %s GMT' % nextyear,
                   'subject': ()},
                  'example.com'),
      'no commonName found in certificate')
check(_verifycert(None, 'example.com'), 'no certificate received')

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue1589> ___
[issue9698] When reusing an handler, urllib(2)'s digest authentication fails after multiple regative replies
Changes by Mads Kiilerich:

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue9698> ___
[issue8797] urllib2 basicauth broken in 2.6.5: RuntimeError: maximum recursion depth exceeded in cmp
Mads Kiilerich added the comment:

On 08/27/2010 03:47 AM, Senthil Kumaran wrote:
> I agree with you respect to the other error codes, there is already
> another bug open to handle this. The reset counter is reset on success
> too, in another part of the code.

FWIW: From Mercurial it is our experience that the reset counter only gets reset on errors, never on success.

--
___ Python tracker <http://bugs.python.org/issue8797> ___
[issue8797] urllib2 basicauth broken in 2.6.5: RuntimeError: maximum recursion depth exceeded in cmp
Mads Kiilerich added the comment:

Senthil, can you tell us why this fix is correct - and convince us that it is the Final Fix for this issue? Not because I don't trust you, but because this issue has a bad track record.

Some comments/questions to this patch:

Why does 401 require such special handling? Why not handle it like the other errors? How does this work together with http://code.google.com/p/support/issues/detail?id=3985 ?

Detail: I'm surprised you don't use reset_retry_count() - that makes it a bit harder to grok the code. And the patch doesn't reduce the complexity of the code.

But ... I really don't understand ... .retried is a kind of error counter. Why do we reset it on errors? I would expect it to be reset on success ... or perhaps on anything but 401, 403 and 407. Or perhaps it should be reset whenever a new URL is requested.

--
___ Python tracker <http://bugs.python.org/issue8797> ___
[issue8797] urllib2 basicauth broken in 2.6.5: RuntimeError: maximum recursion depth exceeded in cmp
Mads Kiilerich added the comment:

zenyatta: Which Mercurial version? We thought we had implemented a sufficiently ugly workaround in Mercurial. Please file an issue in http://mercurial.selenic.com/bts/ .

--
___ Python tracker <http://bugs.python.org/issue8797> ___
[issue8469] struct - please make sizes explicit
Mads Kiilerich added the comment:

Thanks for improving the documentation! A couple of comments for possible further improvements:

I think it would be helpful to also see an early notice about how to achieve platform independence, versus the default of the local platform. And perhaps the description of "standard" could be improved.

Perhaps something like the following could be used. Relative to release26-maint/Doc/library/struct.rst rev 81959.

--
keywords: +patch
Added file: http://bugs.python.org/file17651/struct.diff
___ Python tracker <http://bugs.python.org/issue8469> ___
[issue8797] urllib2 basicauth broken in 2.6.5: RuntimeError: maximum recursion depth exceeded in cmp
Mads Kiilerich added the comment:

FYI: The regression was introduced in 2.6.5 with issue3819.

This also causes problems with Mercurial pushing to google code - see http://mercurial.selenic.com/bts/issue2179 . IIRC google code redirects to a URL which then in turn requests authentication. I'm not sure how that will work with a retry count of 1. The funny and inconsistent thing is that it apparently would work if they replied 403 instead of 401. "durin42" from google/mercurial knows the details.

(It seems suspicious to me that there are code paths where the retry counter is reset. Doesn't that mean that a remote server will be able to cause endless recursion? Shouldn't there be some kind of hard upper limit on the number of retries?)

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue8797> ___
[issue3819] urllib2 sends Basic auth across redirects
Mads Kiilerich added the comment:

FYI, this change caused a regression in Mercurial - see http://mercurial.selenic.com/bts/issue2179.

--
nosy: +kiilerix
___ Python tracker <http://bugs.python.org/issue3819> ___
[issue8469] struct - please make sizes explicit
Mads Kiilerich added the comment:

The more I read the documentation and your comments, the more I can see that the implementation is OK and the documentation is "complete" and can be read correctly. Please take this as constructive feedback for making the documentation easier to understand and harder to read incorrectly.

Yes, adding a "Standard size" column would have been very helpful. (I had missed the section on "standard" sizes.)

"Standard" is a very general term. And it is slightly confusing that standard isn't the default. Could the term "platform independent" (or "fixed"?) be added as an explanation of "standard" - or perhaps used instead?

Programming skills and platform knowledge at C level should not be a requirement to understand and use struct, so perhaps the references to C should be less high-profile, and perhaps something like this could be added: "All sizes except trivial 1-byte entries (whatever that means) are platform dependent - use calcsize to get the size on your platform."

Perhaps the sections explaining 's', 'p', 'ILqQ', 'P' and '?' could be changed to (foot)notes to the table to make it easier to see where they belong and whether they can be skipped.

Perhaps "@" in the byte order table could be replaced with "@ (default)"? (And perhaps drop "If the first character is not one of these, '@' is assumed.")

The byte order character must come first in the format string and is a key to understanding the other format characters, so perhaps everything related to that should come first?

--
___ Python tracker <http://bugs.python.org/issue8469> ___
[issue8469] struct - please make sizes explicit
New submission from Mads Kiilerich:

The struct module is often used (at least by me) to implement protocols and binary formats. That makes the exact sizes (number of bits/bytes) of the different types very important. Please add the sizes to, for example, the table on http://docs.python.org/library/struct .

I know that some of the sizes vary with the platform, and in these cases it is fine to define them in terms of the C types, but for Python programmers writing cross-platform code such variable types don't matter and are "never" used. (I assume that it is possible to specify all possible types in a cross-platform way, but I'm not sure and the answer is not obvious from the documentation.)

--
components: Library (Lib)
messages: 103699
nosy: kiilerix
severity: normal
status: open
title: struct - please make sizes explicit
versions: Python 2.6
___ Python tracker <http://bugs.python.org/issue8469> ___
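To make the request concrete: sizes are only fixed when one of the "standard" byte-order/size prefixes is used (the format strings below are just examples):

import struct

print(struct.calcsize('@lP'))   # native (the default): platform dependent,
                                # e.g. 8 on 32-bit and 16 on 64-bit Linux
print(struct.calcsize('=l'))    # standard size: always 4
print(struct.calcsize('!ihb'))  # network byte order, standard sizes: always 7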
[issue8003] Fragile and unexpected error-handling in asyncore
New submission from Mads Kiilerich:

I had an issue which seems to be very similar to issue4690 - but different.

I created a subclass of asyncore.dispatcher; .create_socket added it to the channel map, but then .bind failed (because the port was in use - my bad), .listen wasn't called, and .accepting was thus not set. The problem is that .accepting is the safeguard against .handle_read and .handle_write ever being called. Without that safeguard it then started spinning in .handle_read and .handle_write, calling handlers that weren't implemented.

I guess the right way to handle this is to handle exceptions thrown by .bind and then call .close. But even if I do that there will be a race condition between the error occurring and my call to .close.

My main issue/request is that asyncore should be less fragile and handle such errors better. I don't know exactly how that should be implemented, but I think it is a reasonable expectation that no handlers (except for an error handler) are called on a dispatcher until a .connect or .listen has completed. It seems odd to have to implement .handle_write just to call .send just to trigger .handle_close which then must be implemented to call .close. Perhaps a flag could keep track of the "under construction" state (instead of assuming that it is either accepting, ready to connect, or connected).

I also notice that if .handle_read_event ever gets called on a closed (listen) socket then it will end up in .handle_connect_event, which is very wrong.

Using python-2.6.2-4.fc12.i686, but it seems to be the same on release2.6-maint.

--
components: Library (Lib)
messages: 99942
nosy: kiilerix
severity: normal
status: open
title: Fragile and unexpected error-handling in asyncore
type: behavior
versions: Python 2.6
___ Python tracker <http://bugs.python.org/issue8003> ___
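A sketch of the "handle exceptions from .bind and then call .close" approach mentioned above (illustration only; it narrows the window but, as noted, does not remove the race):

import asyncore, socket

class Listener(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            self.set_reuse_addr()
            self.bind(('', port))
            self.listen(5)
        except socket.error:
            # take the half-constructed channel out of the socket map again
            # so asyncore.loop() won't spin on it
            self.close()
            raise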
[issue2622] Import errors in email.message.py
Mads Kiilerich added the comment:

I have updated the patch. (Applied to 2.6 where it seems like some casings had been fixed, so I dropped all the rejects. Changes to test_email.py have been dropped.)

--
Added file: http://bugs.python.org/file14352/emailcasings2.patch
___ Python tracker <http://bugs.python.org/issue2622> ___
[issue6005] Bug in socket example
New submission from Mads Kiilerich:

http://docs.python.org/library/socket.html says about socket.send: "Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data." And about socket.sendall: "Unlike send(), this method continues to send data from string until either all data has been sent or an error occurs."

However, the examples on the same page use plain conn.send(data) without checking anything. That is misleading. A solution could be to use conn.sendall(data) instead.

--
assignee: georg.brandl
components: Documentation
messages: 87633
nosy: georg.brandl, kiilerix
severity: normal
status: open
title: Bug in socket example
type: behavior
versions: Python 2.6
___ Python tracker <http://bugs.python.org/issue6005> ___
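The difference in a few lines (a sketch; the helper name is only for illustration):

def send_all(sock, data):
    # send() may transmit only part of the buffer, so the caller must loop
    while data:
        sent = sock.send(data)
        data = data[sent:]

# ... or simply use the method the library already provides:
# conn.sendall(data)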
[issue1533164] Installed but not listed *.pyo break bdist_rpm
Mads Kiilerich added the comment:

Ok, if you will keep bdist_rpm's new .pyo creation then you are right. FWIW I think it is a bit of an ugly hack and would prefer another solution.

BTW: The "brp-python-bytecompile creates usr/bin/*.pyo" issue has been resolved, https://bugzilla.redhat.com/show_bug.cgi?id=182498#c8

___ Python tracker <http://bugs.python.org/issue1533164> ___
[issue1533164] Installed but not listed *.pyo break bdist_rpm
Mads Kiilerich added the comment:

> IIUC, setting
>
> %define __os_install_post %{___build_post}
>
> should prevent invocation of brp-python-bytecompile.

Ok. But preventing invocation of brp-python-bytecompile is IMHO not a solution. brp-python-bytecompile is needed for the reasons mentioned in http://fedoraproject.org/wiki/Packaging/Python#Including_pyos .

___ Python tracker <http://bugs.python.org/issue1533164> ___
[issue1533164] Installed but not listed *.pyo break bdist_rpm
Mads Kiilerich added the comment:

Martin,

What is the goal of bdist_rpm? I haven't seen that stated explicitly anywhere, but I assume the goal is to make a fair attempt to easily create usable RPMs for some software already using distutils, acknowledging that it might not work in all cases (because some projects do strange (buggy?) things) and that the RPMs probably can't be used in distributions directly (because they probably have their own rules and requirements).

The applied patch makes it possible for bdist_rpm to work in _some_ situations on Fedora. IMHO that is +1.

Yes, this patch might not be enough to make it work with *.py "__main__" files. IMHO that is a less critical issue. Personally I have never seen bdist_rpm fail for this reason. (But "often" for other reasons.) An advantage of this patch is that it just fixes the existing approach to work in more situations.

Disabling _unpackaged_files_terminate_build would IMHO be a bad solution. That would cause "successful" RPM builds which don't include all the files distutils installed. And FWIW I don't understand how __os_install_post could solve the problem.

If you want another approach: Why use a filelist at all? Yes, it is needed if the RPM is built "in place", but these days (at least on Fedora) RPMs are always built in an empty RPM_BUILD_ROOT. So everything found in RPM_BUILD_ROOT has been installed by distutils, and that includes all the files and directories the RPM should contain. For 2.5 a simplified patch for this is:

         # files section
         spec_file.extend([
             '',
-            '%files -f INSTALLED_FILES',
+            '%files',
             '%defattr(-,root,root)',
+            '/',
             ])

That will also make the RPM own all directories in the path to its files. That is bad in a distribution but might be OK for bdist_rpm. To avoid that we could continue to use "-f INSTALLED_FILES" but generate the file list with a simple "find" command in the %install section and remove well-known paths such as /usr/lib/python*/site-packages, /usr/bin, /etc and whatever we could come up with from the list.

This approach might work for almost all (sufficiently well-formed) packages using distutils and will redefine bdist_rpm to "put all files in an RPM instead of installing them directly, so that they can be removed by uninstalling the RPM". For example it works for logilab.astng and logilab.pylint which didn't work before.

___ Python tracker <http://bugs.python.org/issue1533164> ___
[issue1533164] Installed but not listed *.pyo break bdist_rpm
Mads Kiilerich added the comment:

The command "rpm" is (now, and in the rpm.org version) for runtime and installation only. Rpm building is done by /usr/bin/rpm-build (from the package with the same name; run "yum install rpm-build"). /usr/lib/python2.5/distutils/command/bdist_rpm.py will select rpm-build if it is available.

Yes, the fallback is confusing. That is a consequence of the rpm fork and probably out of scope for this issue ;-)

___ Python tracker <http://bugs.python.org/issue1533164> ___
[issue1533164] Installed but not listed *.pyo break bdist_rpm
Mads Kiilerich added the comment:

Note that: This bug is now tracked in Fedora as https://bugzilla.redhat.com/show_bug.cgi?id=236535

The root of the problem on Fedora is that SElinux will give noisy warnings if python tries to access or create a non-existing foo.pyo for a packaged foo.py. Fedora's solution to this is to always create and package .pyo files to which proper SElinux labels can be attached. The .pyo files are automatically created by /usr/lib/rpm/brp-python-bytecompile behind the scenes when an rpm is built. The problem is that bdist_rpm only knows and lists the .pyo files created by setup and thus doesn't include the brp-python-bytecompile .pyos.

It could be argued that Fedora thus enforces a custom policy on RPM, and that it should be Fedora's job to complete that job and patch bdist_rpm to match that custom policy. See the pending patch at https://bugzilla.redhat.com/show_bug.cgi?id=236535#c20

However, it would be nice if Python distutils acknowledged that downstream Fedora has a special need here and helped solving it. Perhaps the patch could be accepted in distutils so that bdist_rpm always runs setup with -O1? Or some other change that could make it easy to configure/customize distutils to match the platform's needs ...

Finally, note that Fedora and Red Hat (and apparently also Suse and Mandriva) use rpm.org, and rpm5 is a competing fork. (Rpm5 probably claims that rpm.org is the evil guys who forked.)

Tarek, I assume you meant you are creating a Fedora 10 or RHEL/CentOS 5 VM? I will be glad to help testing and answer further questions.

___ Python tracker <http://bugs.python.org/issue1533164> ___
[issue2916] urlgrabber.grabber calls setdefaulttimeout
Changes by Mads Kiilerich <[EMAIL PROTECTED]>:

--
type: -> behavior

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2916> __
[issue2916] urlgrabber.grabber calls setdefaulttimeout
New submission from Mads Kiilerich <[EMAIL PROTECTED]>:

Module docstring says """Setting this option causes urlgrabber to call the settimeout method on the Socket object used for the request.""" But actually it temporarily changes the global default timeout. If other threads are creating sockets in this short timespan they might fail in unexpected ways.

There might not be any other easy way to set a timeout. The long-term solution should be to provide a way to do it right. A short-term workaround is to update the documentation to describe the current behaviour.

--
components: Library (Lib)
messages: 67066
nosy: kiilerix
severity: normal
status: open
title: urlgrabber.grabber calls setdefaulttimeout
versions: Python 2.5

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2916> __
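A sketch of the difference (simplified; urlgrabber goes through higher-level libraries internally, but the race is the same):

import socket

# Changing the global default is racy: every socket created by any thread
# while the default is changed gets the timeout, not just this request.
socket.setdefaulttimeout(30)
try:
    s = socket.socket()
    s.connect(('example.com', 80))
finally:
    socket.setdefaulttimeout(None)

# Setting the timeout on the socket itself, as the docstring describes,
# affects only that socket.
s2 = socket.socket()
s2.settimeout(30)
s2.connect(('example.com', 80))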
[issue2622] Import errors in email.message.py
Mads Kiilerich <[EMAIL PROTECTED]> added the comment:

Testing that email.message doesn't use the "wrong" casing email.Generator isn't enough. That would just test that this patch has been applied. It must also be tested that no other modules use the wrong casing of email.Generator. Or other email packages. Or any other packages at all.

IMHO the "right" test would be to test that modulefinder can find all relevant modules in all cases. The problem is that it gives irrelevant warnings. I tested with some shell hacking to find all modulefinder failures which could be found with another casing:

find * -name '*.py'|sed 's,\.py$,,g;s,/,.,g;s,\.__init__$,,g' > /tmp/all_fs_modules

for a in $(find * -name '*.py'); do echo $a; python -m modulefinder $a; echo; done > /tmp/all_referenced_modules

for a in $(grep ^? /tmp/all_referenced_modules|sed 's,^\? \(.*\) imported from .*,\1,g'|sort|uniq); do grep -i "^$a"'$' /tmp/all_fs_modules; done > /tmp/referenced_existing_ignorecased

email.base64mime
email.charset
email.encoders
email.errors
email.generator
email.header
email.iterators
email.message
email.parser
email.quoprimime
email.utils
ftplib

- where the last hit comes from bogus regexp matching.

The test takes a long time to run as it is. That could probably be improved. But still I think this is to be compared with "lint"-like tools which should be run regularly but aren't suitable for unit tests.

I feel ashamed for arguing against introducing a test. I think I do that because I think that this isn't a "normal" bug and thus isn't suitable for unit testing. The email module itself really is fully backwards compatible. And modulefinder does a good job doing what it does and can't be blamed for not figuring the email hackery out. The problem comes when a third, external module puts things together and they don't fit together as one could expect.

Also, currently both casings work and should work. Using the old casing isn't a "bug bug", but it has consequences which IMHO are enough to call it a bug and fix it.

Perhaps Python could have a standard markup for deprecated functions so that it could be checked that the standard library doesn't use them.

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2622> __
[issue2622] Import errors in email.message.py
Mads Kiilerich <[EMAIL PROTECTED]> added the comment:

OK. I had assumed that backward compatibility was tested in the _renamed tests, so that these tests one day could be dropped together with backward compatibility. I didn't notice that my search'n'replaces showed me that I was wrong.

But a bugfix in a stable release really shouldn't change any tests unless the tests are wrong. And I can't come up with a reasonable new test. It could perhaps be tested that all modules could be py2exe'ed and imported individually with automatic dependency resolving... But such a test doesn't belong in the test suite.

I suggest that my patch is applied without the test cleanup.

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2622> __
[issue2622] Import errors in email.message.py
Mads Kiilerich <[EMAIL PROTECTED]> added the comment:

This patch seems to fix the issue for me. The easiest way to verify might be to create another patch and compare them...

--
keywords: +patch
Added file: http://bugs.python.org/file10106/emailcasings.patch

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2622> __
[issue2622] Import errors in email.message.py
Mads Kiilerich <[EMAIL PROTECTED]> added the comment:

As far as I understand it, the ImportError comes when running a py2exe/app'ed package where iterators.py hasn't been included. I was just about to file a report about (I think) the same issue, seen on XP when py2exe'ing code using the email module.

Exactly the same problem with a good(?) explanation can be found on http://mail.python.org/pipermail/spambayes/2007-December/021485.html

The problem comes because the real module names now are lowercase, and email/__init__.py plays tricks with _LOWERNAMES in order to keep the old uppercase names working. The problem is that the email lib itself uses the old (deprecated?) non-existing name. IMHO the solution is to use right-cased names. I have (only) tested it by changing the single reference to email.Iterators.

I think this is a safe bugfix which should be included in 2.5 ASAP.

A workaround is to import email.iterators from some other code or to tell py2exe/pyapp explicitly to include the modules in the package.

--
nosy: +kiilerix

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2622> __
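The fix is essentially a one-line change of this kind (the imported name 'walk' is chosen only for illustration; the point is the casing of the module name):

# before - works at runtime via the casing tricks in email/__init__.py,
# but modulefinder and py2exe never see the real email.iterators module:
#     from email.Iterators import walk
# after:
from email.iterators import walk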
[issue1227748] subprocess: inheritance of std descriptors inconsistent
Mads Kiilerich added the comment:

Note to others searching for a solution to this and similar problems: http://svn.python.org/view/python/trunk/Lib/subprocess.py?rev=60115&view=auto shows that this has now (for 2.6?) been changed so that close_fds controls inheritance through the CreateProcess parameter bInheritHandles.

Thanks!

--
nosy: +kiilerix
_ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1227748> _