[issue42378] logging reopens file with same mode, possibly truncating

2022-01-07 Thread Dustin Oprea


Dustin Oprea  added the comment:

I'm intentionally using mode 'w' (to support development), and it was never an
issue until I recently refactored the project to be asynchronous. Now, every
time a run fails, I suddenly lose the logs. Not awesome.
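
A minimal sketch of the behavior being described, assuming a handler that gets
closed and then emits again (the path below is just a throwaway example). On
interpreters without the fix referenced in this thread, the reopen uses
mode="w" again and truncates the file:

import logging
import os
import tempfile

log_path = os.path.join(tempfile.gettempdir(), "issue42378-demo.log")  # throwaway path

handler = logging.FileHandler(log_path, mode="w")
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("first line")      # file now contains "first line"

handler.close()                # stream released (shutdown, reconfiguration, etc.)
log.warning("second line")     # an affected handler reopens with mode="w" here

with open(log_path) as f:
    print(repr(f.read()))      # on affected versions, only "second line" survives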

--




[issue42378] logging reopens file with same mode, possibly truncating

2022-01-07 Thread Dustin Oprea


Dustin Oprea  added the comment:

I believe I'm still seeing this in an async situation. It seems like the
obvious culprit.

When will this go out in a release? I'm on 3.10.1 from December (under Arch). 
The PR got merged to master in July but I went through all changelogs back to 
March and don't see it listed.

--
nosy: +Dustin.Oprea




[issue18233] SSLSocket.getpeercertchain()

2017-05-12 Thread Dustin Oprea

Dustin Oprea added the comment:

Thanks for expounding on this, Christian. Assuming your assertions are
correct, this makes perfect sense.

Can anyone listening close this?

On May 12, 2017 17:45, "Christian Heimes" <rep...@bugs.python.org> wrote:

Christian Heimes added the comment:

The ticket is dead for a very good reason. Past me was not clever enough
and didn't know about the difference between the cert chain sent by the
peer and the actual trust chain. The peer's cert chain is not trustworthy
and must *only* be used to build the actual trust chain. X.509 trust chain
construction is a tricky business.

Although I thought the peer cert chain was a useful piece of information,
it is also dangerous. It's simply not trustworthy. In virtually all cases
you want to know the chain of certificates that leads from a local trust
anchor to the end-entity cert. In most cases it just happens to be the same
(excluding the root CA), but that's not reliable.
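
A minimal sketch of the distinction being drawn here, with the newer
get_unverified_chain()/get_verified_chain() accessors guarded by hasattr()
because they do not exist on the Python versions discussed in this thread
(www.python.org is just a stand-in host):

import socket
import ssl

ctx = ssl.create_default_context()   # loads the local trust anchors

with socket.create_connection(("www.python.org", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="www.python.org") as tls:
        # End-entity certificate of the peer, already validated against ctx.
        print(tls.getpeercert()["subject"])

        if hasattr(tls, "get_unverified_chain"):
            # Whatever the peer chose to send: informational only, not trustworthy.
            print(len(tls.get_unverified_chain()), "certs sent by the peer")
        if hasattr(tls, "get_verified_chain"):
            # The chain actually built from a local trust anchor to the peer cert.
            print(len(tls.get_verified_chain()), "certs in the verified chain")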


--




[issue19675] Pool dies with excessive workers, but does not cleanup

2017-02-14 Thread Dustin Oprea

Dustin Oprea added the comment:

I don't think this can be tested. Throwing exceptions in the remote process
causes exceptions that can't be caught in the same way (when the initializer
fails, the pool just attempts to recreate the process over and over), and I
don't think it'd be acceptable to try to spawn too many processes in order to
induce the original problem. There's not a lot of surface area to what we're
doing here. We can't simulate failures any other way.

Can I get an agreement on this from someone?
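
For concreteness, a rough sketch of what simulating the failure by patching
os.fork might look like. It is POSIX-only, tied to the fork start method, and
assumes an interpreter that performs the cleanup, which may well be why
something like it wasn't pursued here; names are made up:

import errno
import multiprocessing
import os
import unittest
from unittest import mock


@unittest.skipUnless("fork" in multiprocessing.get_all_start_methods(),
                     "needs the fork start method")
class PoolForkFailureTest(unittest.TestCase):
    def test_pool_cleans_up_after_failed_fork(self):
        ctx = multiprocessing.get_context("fork")
        real_fork = os.fork
        calls = []

        def flaky_fork():
            calls.append(None)
            if len(calls) > 1:
                # Every fork() after the first one fails, like EAGAIN would.
                raise OSError(errno.EAGAIN, "simulated fork failure")
            return real_fork()  # the first worker forks normally

        with mock.patch("os.fork", flaky_fork):
            with self.assertRaises(OSError):
                ctx.Pool(2)

        # With cleanup in place, the worker that did start has been terminated
        # and joined, so no children are left behind.
        self.assertEqual(multiprocessing.active_children(), [])


if __name__ == "__main__":
    unittest.main()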

--
pull_requests: +60




[issue19675] Pool dies with excessive workers, but does not cleanup

2017-02-13 Thread Dustin Oprea

Dustin Oprea added the comment:

Okay. Thanks for weighing in.

I'm trying to figure out how to write the tests. The existing set of tests
for multiprocessing is a near nightmare. It seems like I might have to use
one of the existing "source code" definitions to test for the no-cleanup
(traditional) scenario, but then define a new one that creates the pool
with an initializer that fails. On top of that, I'll have to define three
new TestCase-based classes to cover the various start methods.

Any suggestions?
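
A rough, Python 3-flavored sketch of the layout described above, with a shared
mixin and one thin TestCase per start method (the names are made up, and the
test body is only a placeholder for the real failure/cleanup assertions):

import multiprocessing as mp
import unittest


class PoolCleanupTestsMixin:
    """Shared test bodies; the concrete classes below pin the start method."""
    START_METHOD = None

    def setUp(self):
        if self.START_METHOD not in mp.get_all_start_methods():
            self.skipTest("%s start method unavailable" % self.START_METHOD)
        self.ctx = mp.get_context(self.START_METHOD)

    def test_pool_roundtrip(self):
        # Placeholder: the real test would provoke the worker-start failure
        # and assert the cleanup discussed earlier in this thread.
        with self.ctx.Pool(1) as pool:
            self.assertEqual(pool.map(abs, [-1, 2]), [1, 2])


class PoolCleanupForkTests(PoolCleanupTestsMixin, unittest.TestCase):
    START_METHOD = "fork"


class PoolCleanupSpawnTests(PoolCleanupTestsMixin, unittest.TestCase):
    START_METHOD = "spawn"


class PoolCleanupForkserverTests(PoolCleanupTestsMixin, unittest.TestCase):
    START_METHOD = "forkserver"


if __name__ == "__main__":
    unittest.main()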

On Mon, Feb 13, 2017 at 11:50 AM, Davin Potts <rep...@bugs.python.org>
wrote:

>
> Changes by Davin Potts <pyt...@discontinuity.net>:
>
>
> --
> versions: +Python 2.7, Python 3.7
>

--




[issue19675] Pool dies with excessive workers, but does not cleanup

2017-02-12 Thread Dustin Oprea

Dustin Oprea added the comment:

https://github.com/python/cpython/pull/57

On Sun, Feb 12, 2017 at 5:29 PM, Camilla Montonen <rep...@bugs.python.org>
wrote:

>
> Camilla Montonen added the comment:
>
> @dsoprea: would you like to open a PR for this issue on Github? if not,
> are you happy for me to do it?
>

--




[issue27403] os.path.dirname doesn't handle Windows' URNs correctly

2016-06-27 Thread Dustin Oprea

Dustin Oprea added the comment:

Thank you for your elaborate response. I appreciate knowing that
"\\server\share" could be considered the "drive" portion of the path.

I'm having trouble determining whether "\\?\" is literally some type of valid
UNC prefix or whether you're just using it to represent some format/idea. Just
curious.
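
A quick way to see the "\\server\share is the drive" point from any platform,
using the Windows flavor of os.path:

import ntpath  # os.path's Windows implementation; importable anywhere

print(ntpath.splitdrive(r"\\server\share\dir\file.txt"))
# ('\\\\server\\share', '\\dir\\file.txt') -- the share is the "drive"

print(ntpath.dirname(r"\\server\share"))
# prints \\server\share -- dirname() never climbs above the drive portion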

--




[issue27403] os.path.dirname doesn't handle Windows' URNs correctly

2016-06-27 Thread Dustin Oprea

New submission from Dustin Oprea:

Notice that os.path.dirname() returns whatever it is given if it is given a UNC
path, regardless of slash type. Oddly, you have to double up the forward slashes
(as if you're escaping them) in order to get the correct result (if you're using
forward slashes). Backslashes appear to be broken no matter what.

C:\Python35-32>python
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit 
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os.path
>>> os.path.dirname("a\\b")
'a\\b'
>>> os.path.dirname("//a/b")
'//a/b'
>>> os.path.dirname("a//b")
'a'

Any ideas?

--
components: Windows
messages: 269404
nosy: Dustin.Oprea, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: os.path.dirname doesn't handle Windows' URNs correctly
type: behavior
versions: Python 2.7, Python 3.5




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-05 Thread Dustin Oprea

Dustin Oprea added the comment:

I'm closing it. The ticket has been open for two years and no one else seemed to
be interested in this issue until now, which leads me to believe that it's a
PEBCAK/understanding issue. The rationale for why it's irrelevant seems sound.
Thanks for digging through the code, Tamás. Thank you for your comments, Martin
and Eryk.

--
status: open -> closed




[issue18233] SSLSocket.getpeercertchain()

2015-05-28 Thread Dustin Oprea

Dustin Oprea added the comment:

Forget it. This project is dead.

Dustin
On May 28, 2015 11:58 AM, Jeroen Ruigrok van der Werven
<rep...@bugs.python.org> wrote:

> Jeroen Ruigrok van der Werven added the comment:
>
> Given that cryptography.io is fast becoming the solution for dealing with
> X.509 certificates on Python, I would like to add my vote for this
> feature. Right now, getting the full chain in DER is what I am missing to
> complete a task at work.
>
> --
> nosy: +asmodai



--




[issue18233] SSLSocket.getpeercertchain()

2015-05-28 Thread Dustin Oprea

Dustin Oprea added the comment:

Disregard. I thought this was something else.

--




[issue22835] urllib2/httplib is rendering 400s for every authenticated-SSL request, suddenly

2014-11-11 Thread Dustin Oprea

Dustin Oprea added the comment:

I usually use both on my local system.

Dustin
On Nov 11, 2014 4:43 AM, Antoine Pitrou <rep...@bugs.python.org> wrote:

> Antoine Pitrou added the comment:
>
> On 11/11/2014 07:50, Dustin Oprea wrote:
> >
> > Dustin Oprea added the comment:
> >
> > I think I was getting mixed results by using requests and urllib2/3.
> > After nearly being driven crazy, I performed the following steps:
> >
> > 1. Recreated client certificates, and verified that the correct CA was
> > being used from Nginx.
> >
> > 2. Experimented with an SSL-wrapped client socket directly, in tandem
> > with s_client.
> >
> > 3. I then removed all of my virtualhosts except for a new one that
> > pointed to a flat directory, just to make sure that I wasn't activating the
> > wrong virtualhost, and there weren't any other complexities.
> >
> > 4. Implemented a bona fide, signed SSL certificate on my local system
> > and overrode the hostname using /etc/hosts.
> >
> > 5. This got me past the 400.
>
> Ah! Perhaps your HTTPS setup was relying on SNI to select the proper
> virtual host?
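
A minimal sketch of what relying on SNI means on the client side, using the
modern ssl module: server_hostname is what puts the virtual host's name into
the TLS handshake. Host, port, and certificate paths are the ones from this
report, and verification is disabled to match the self-signed setup described
above:

import socket
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # self-signed server certificate
ctx.load_cert_chain('/var/lib/rt_data/ssl/rt.crt.pem',
                    '/var/lib/rt_data/ssl/rt.private_key.pem')

raw = socket.create_connection(('deploy_api.local', 8443))
tls = ctx.wrap_socket(raw, server_hostname='deploy_api.local')   # <-- SNI
tls.sendall(b'GET /auth/admin/1/hosts HTTP/1.1\r\n'
            b'Host: deploy_api.local:8443\r\nConnection: close\r\n\r\n')
print(tls.recv(4096).decode('latin-1'))
tls.close()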



--




[issue22835] urllib2/httplib is rendering 400s for every authenticated-SSL request, suddenly

2014-11-11 Thread Dustin Oprea

Dustin Oprea added the comment:

Agreed. Thank you, @Antoine.

On Tue, Nov 11, 2014 at 2:21 PM, Antoine Pitrou <rep...@bugs.python.org>
wrote:

> Antoine Pitrou added the comment:
>
> In any case, it sounds like your problem is fixed, so we can close this
> issue.
>
> --
> resolution:  -> not a bug
> status: open -> closed



--




[issue22835] urllib2/httplib is rendering 400s for every authenticated-SSL request, suddenly

2014-11-10 Thread Dustin Oprea

Dustin Oprea added the comment:

I think I was getting mixed results by using requests and urllib2/3. After 
nearly being driven crazy, I performed the following steps:

1. Recreated client certificates, and verified that the correct CA was being 
used from Nginx.

2. Experimented with an SSL-wrapped client socket directly, in tandem with
s_client.

3. I then removed all of my virtualhosts except for a new one that pointed to a 
flat directory, just to make sure that I wasn't activating the wrong 
virtualhost, and there weren't any other complexities.

4. Implemented a bona fide, signed SSL certificate on my local system and
overrode the hostname using /etc/hosts.

5. This got me past the 400. I switched back to using my local hostname with my
self-signed certificate, and told wrap_socket not to verify (at this point, I
stopped checking with s_client).

6. I started reactivating all of my normal virtualhost includes, one include at 
a time.

7. Reverted to using the standard, proprietary client, and verified that it
worked.

I'm guessing that (a) something happened to my original certificates, (b) I
might've had an incorrect CA certificate for authentication, and/or (c) I had
added a default virtualhost on the non-standard port that I'm using that always
returns Forbidden, which might've been unexpectedly catching the wrong
requests.

Since I verified my client certificates against my internal issuer in the 
beginning, I don't think it's (a) or (b).

I could've done without these problems. I can't even say what started it all.

--
nosy: +dsoprea




[issue22835] urllib2/httplib is rendering 400s for every authenticated-SSL request, suddenly

2014-11-09 Thread Dustin Oprea

New submission from Dustin Oprea:

I am trying to do an authenticated-SSL request to an Nginx server using
*requests*, which wraps urllib2/httplib. It had worked perfectly for months,
until Friday, on my local system (Mac 10.9.5), and there have been no
upgrades/patches.

My Python 2.7.6 client fails when connecting to Nginx locally. I get a 400 with
this body:

<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>

This is an example that uses urllib2/httplib, directly:

import urllib2
import httplib

cert_filepath = '/var/lib/rt_data/ssl/rt.crt.pem'
key_filepath = '/var/lib/rt_data/ssl/rt.private_key.pem'

url = 'https://deploy_api.local:8443/auth/admin/1/hosts'


class HTTPSClientAuthHandler(urllib2.HTTPSHandler):
    """Wrapper to allow for authenticated SSL connections."""

    def __init__(self, key, cert):
        urllib2.HTTPSHandler.__init__(self)
        self.key = key
        self.cert = cert

    def https_open(self, req):
        # Rather than pass in a reference to a connection class, we pass in
        # a reference to a function which, for all intents and purposes,
        # will behave as a constructor.
        return self.do_open(self.getConnection, req)

    def getConnection(self, host, timeout=300):
        return httplib.HTTPSConnection(host, key_file=self.key,
                                       cert_file=self.cert)

opener = urllib2.build_opener(HTTPSClientAuthHandler(key_filepath,
                                                     cert_filepath))
response = opener.open(url)

response_data = response.read()
print(response_data)

These are the factors:

- It works when connecting to the remote server. Both local and remote are 
Nginx with similar configs.
- cURL works perfectly:

  curl -s -v -X GET -k \
       --cert /var/lib/rt_data/ssl/rt.crt.pem \
       --key /var/lib/rt_data/ssl/rt.private_key.pem \
       https://server.local:8443/auth/admin/1/hosts

- I've tried under Vagrant with Ubuntu 12.04 (2.7.3) and 14.04 (2.7.6). No 
difference.
- It works with Python 3.4 on the local system. This has only affected 2.7, and
very suddenly.

Due to the error message above, it seems like there's a breakdown in sending
the certificate/key.

I have no idea what's going on, and this has caused me a fair amount of
distress. Can you point me in a direction?
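
For comparison, a Python 3 sketch of the same request with an explicit
SSLContext, which makes it easier to confirm that the client certificate/key
pair is actually being offered (paths and URL are the ones from the report
above):

import http.client
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE               # local, self-signed server
ctx.load_cert_chain(certfile='/var/lib/rt_data/ssl/rt.crt.pem',
                    keyfile='/var/lib/rt_data/ssl/rt.private_key.pem')

conn = http.client.HTTPSConnection('deploy_api.local', 8443, context=ctx)
conn.request('GET', '/auth/admin/1/hosts')
resp = conn.getresponse()
print(resp.status, resp.reason)
print(resp.read())
conn.close()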

--
components: Library (Lib)
messages: 230934
nosy: Dustin.Oprea
priority: normal
severity: normal
status: open
title: urllib2/httplib is rendering 400s for every authenticated-SSL request, 
suddenly
type: behavior
versions: Python 2.7




[issue21928] Incorrect reference to partial() in functools.wraps documentation

2014-07-06 Thread Dustin Oprea

New submission from Dustin Oprea:

functools.wraps docs say "This is a convenience function for invoking
partial(update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated) as
a function decorator when defining a wrapper function." The referenced function
should be update_wrapper(), not partial().
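
For reference, a small check of what that sentence describes: functools.wraps()
behaves like a partial application of update_wrapper() used as a decorator
(the function names below are made up):

import functools

def original():
    """Docstring of the wrapped function."""

# Spelled out with partial(), as the quoted sentence puts it...
manual = functools.partial(functools.update_wrapper, wrapped=original)

@manual
def wrapper_a():
    pass

# ...and with functools.wraps(), the documented convenience form.
@functools.wraps(original)
def wrapper_b():
    pass

print(wrapper_a.__name__, wrapper_a.__doc__)  # original / Docstring of the ...
print(wrapper_b.__name__, wrapper_b.__doc__)  # identical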

--
assignee: docs@python
components: Documentation
messages: 222426
nosy: Dustin.Oprea, docs@python
priority: normal
severity: normal
status: open
title: Incorrect reference to partial() in functools.wraps documentation
versions: Python 2.7, Python 3.4




[issue21328] Resize doesn't change reported length on create_string_buffer()

2014-04-22 Thread Dustin Oprea

New submission from Dustin Oprea:

The memory is resized, but the value returned by len() doesn't change:

>>> import ctypes
>>> b = ctypes.create_string_buffer(23)
>>> len(b)
23
>>> b.raw = '0' * 23
>>> b.raw = '0' * 24
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: string too long

>>> ctypes.resize(b, 28)
>>> len(b)
23
>>> b.raw = '0' * 28
>>> b.raw = '0' * 29
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: string too long
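
For anyone who hits this, a sketch of one way to reach the resized memory even
though len() keeps reporting the original type's size, by overlaying a
correctly sized array type on the same address (sizes mirror the transcript
above; Python 3 syntax):

import ctypes

b = ctypes.create_string_buffer(23)
ctypes.resize(b, 28)
print(len(b))                  # still 23: len() reflects sizeof(c_char * 23)

# No bounds checking here; this is only valid because resize() really did
# allocate 28 bytes behind b.
view = (ctypes.c_char * 28).from_address(ctypes.addressof(b))
view.raw = b'0' * 28
print(len(view))               # 28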

--
components: ctypes
messages: 217005
nosy: Dustin.Oprea
priority: normal
severity: normal
status: open
title: Resize doesn't change reported length on create_string_buffer()
versions: Python 2.7




[issue19675] Pool dies with excessive workers, but does not cleanup

2013-11-20 Thread Dustin Oprea

New submission from Dustin Oprea:

If you provide a number of processes to a Pool that the OS can't fulfill, Pool
will raise an OSError and die, but it does not clean up any of the processes
that it has forked.

This is a session in Python where I can allocate a large, but fulfillable, 
number of processes (just to exhibit what's possible in my current system):

>>> from multiprocessing import Pool
>>> p = Pool(500)
>>> p.close()
>>> p.join()

Now, this is a request that will fail. However, even after this fails, I can't 
allocate even a single worker:

>>> p = Pool(700)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 159, in __init__
    self._repopulate_pool()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
    w.start()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 130, in start
    self._popen = Popen(self)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/forking.py", line 121, in __init__
    self.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable

>>> p = Pool(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 159, in __init__
    self._repopulate_pool()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
    w.start()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 130, in start
    self._popen = Popen(self)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/forking.py", line 121, in __init__
    self.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable

The only way to clean this up is to close the parent (the interpreter).

I'm submitting a patch for 2.7.6 that intercepts exceptions and cleans up the
workers before bubbling the error. The affected method is _repopulate_pool(),
and it appears to be the same in 2.7.6, 3.3.3, and probably every other recent
version of Python.

This is the old version:

for i in range(self._processes - len(self._pool)):
    w = self.Process(target=worker,
                     args=(self._inqueue, self._outqueue,
                           self._initializer, self._initargs,
                           self._maxtasksperchild))
    self._pool.append(w)
    w.name = w.name.replace('Process', 'PoolWorker')
    w.daemon = True
    w.start()
    debug('added worker')

This is the new version:

try:
    for i in range(self._processes - len(self._pool)):
        w = self.Process(target=worker,
                         args=(self._inqueue, self._outqueue,
                               self._initializer, self._initargs,
                               self._maxtasksperchild))
        self._pool.append(w)
        w.name = w.name.replace('Process', 'PoolWorker')
        w.daemon = True
        w.start()
        debug('added worker')
except:
    debug("Process creation error. Cleaning up (%d) workers." %
          (len(self._pool),))

    for process in self._pool:
        if not process.is_alive():
            continue

        process.terminate()
        process.join()

    debug("Finished cleaning up the workers. Bubbling the error.")
    raise

This is what happens now: I can go from requesting a number that's too high to
immediately requesting one that's also high but within limits, and there's no
longer a problem, because all resources have been freed:

>>> from multiprocessing import Pool
>>> p = Pool(700)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 159, in __init__
    self._repopulate_pool()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib

[issue19388] Inflated download counts

2013-10-25 Thread Dustin Oprea

New submission from Dustin Oprea:

Noah recommended that I approach the distutils mailing list to report a 
potential PyPI problem. I can't seem to find a webpage for the distutils list, 
so I'm posting an official bug.

I have a few packages on PyPI, and I often find my counts immediately taking 
hold, and, for the more useful projects, skyrocketing. However, I recently 
started a service that requires membership. In the last month, PyPI reports 
3000 downloads of the client, yet Google Analytics only reports a handful of 
visits to the website. I have even less membership signups (as expected, so 
soon after launch). Why are the download counts so inflated? Obviously, they're 
very misleading and limited if they don't ignore spurious visitors (like 
robots).

What has to be done to get this to be accurate?

I've included two screenshots of PyPI and GA.

--
assignee: eric.araujo
components: Distutils
messages: 201234
nosy: dsoprea, eric.araujo, tarek
priority: normal
severity: normal
status: open
title: Inflated download counts
type: enhancement
Added file: http://bugs.python.org/file32351/inflated_counts.zip




[issue18233] SSLSocket.getpeercertchain()

2013-10-11 Thread Dustin Oprea

Dustin Oprea added the comment:

My two cents: leave it a tuple (why not?).



Dustin

--




[issue18233] SSLSocket.getpeercertchain()

2013-10-10 Thread Dustin Oprea

Dustin Oprea added the comment:

I was about to submit a feature request to add exactly this. The [second] patch
works like a charm. When are you going to land on a particular resolution so
that it can get committed?



Dustin

--
nosy: +dsoprea
