ANN: pyCologne - next Meeting Wednesday, April 11, 2012, 6:30pm
When the last Easter egg is eaten and the last chocolate Easter bunny nibbled to bits, Pythonistas start looking for intellectual nourishment again. Luckily, the April meeting of pyCologne, the Python User Group Köln, is coming right up:

When: Wednesday, April 11, 2012, 6:30pm
Where: Pool 0.14, Benutzerrechenzentrum (computing centre RRZK-B), University of Cologne, Berrenrather Str. 136, 50937 Köln

We kindly request you to tell us whether you intend to come (or not) through our Doodle (no obligation): http://pycologne.de/pudel

This time we have the following short talk on the agenda:

* Python Cloud-Services (Jesaja Everling)

Additional presentations, lightning talks, news, book presentations etc. are welcome at each of our meetings! From about 8.30pm we will enjoy the rest of the evening in a nearby restaurant. Further information, including directions on how to get to the location, can be found at: http://www.pycologne.de (Sorry, the web-links are in German only.) Until then, have a Happy Easter, Chris -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations/
Re: Reading Live Output from a Subprocess
Dubslow <buns...@gmail.com> wrote:

It's just a short test script written in python, so I have no idea how to even control the buffering (and even if I did, I still can't modify the subprocess I need to use in my script). What confuses me then is why Perl is able to get around this just fine without faking a terminal or similar stuff. (And also, this needs to work in Windows as well.) For the record, here's the test script:

    ##
    #!/usr/bin/python
    import time, sys

    try:
        total = int(sys.argv[1])
    except IndexError:
        total = 10

    for i in range(total):
        print('This is iteration', i)
        time.sleep(1)

    print('Done. Exiting!')
    sys.exit(0)
    ##

I am probably missing something, but this works for me -

sub_proc1.py --

    from time import sleep
    for i in range(5):
        print(i)
        sleep(1)

sub_proc2.py --

    import subprocess as sub
    proc = sub.Popen(['python', 'sub_proc1.py'])
    x, y = proc.communicate()

Running sub_proc1 gives the obvious output - the digits 0 to 4 displayed with delays of 1 second. Running sub_proc2 gives exactly the same output. This is using python 3.2.2 on Windows Server 2003. Frank Millman -- http://mail.python.org/mailman/listinfo/python-list
Re: Reading Live Output from a Subprocess
On Fri, 06 Apr 2012 12:21:51 -0700, Dubslow wrote: It's just a short test script written in python, so I have no idea how to even control the buffering In Python, you can set the buffering when opening a file via the third argument to the open() function, but you can't change a stream's buffering once it has been created. Although Python's file objects are built on the C stdio streams, they don't provide an equivalent to setvbuf(). On Linux, you could use e.g.: sys.stdout = open('/dev/stdout', 'w', 1) Other than that, if you want behaviour equivalent to line buffering, call sys.stdout.flush() after each print statement. (and even if I did, I still can't modify the subprocess I need to use in my script). In which case, discussion of how to make Python scripts use line-buffered output is beside the point. What confuses me then is why Perl is able to get around this just fine without faking a terminal or similar stuff. It isn't. If a program sends its output to the OS in blocks, anything which reads that output gets it in blocks. The language doesn't matter; writing the parent program in assembler still wouldn't help. I take it then that setting Shell=True will not be fake enough for catching output live? No. It just invokes the command via /bin/sh or cmd.exe. It doesn't affect how the process' standard descriptors are set up. On Unix, the only real use for shell=True is if you have a canned shell command, e.g. from a file, and you need to execute it. In that situation, args should be a string rather than a list. And you should never try to construct such a string dynamically in order to pass arguments; that's an injection attack waiting to happen. -- http://mail.python.org/mailman/listinfo/python-list
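When the child happens to be a Python script, one parent-side workaround consistent with the advice above is to re-run it under `python -u` (unbuffered stdio). The following sketch is illustrative only — the child code and timing are my own, not from the thread — but it shows lines arriving in the parent as the child produces them:

```python
import subprocess
import sys

# Spawn a child Python process with -u (unbuffered stdio), so each
# print() reaches the pipe immediately instead of sitting in a block
# buffer until the child exits.
child_code = (
    "import time\n"
    "for i in range(3):\n"
    "    print('iteration', i)\n"
    "    time.sleep(0.2)\n"
)
proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdout=subprocess.PIPE,
    universal_newlines=True,  # text mode: decode bytes to str
)

lines = []
for line in proc.stdout:      # each line becomes available as it is written
    lines.append(line.rstrip())
proc.wait()
print(lines)
```

If you cannot re-invoke the interpreter, the only robust alternatives remain flushing in the child or faking a terminal, as discussed above.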
Re: Learning new APIs/classes (beginner question)
On Apr 7, 1:52 am, Steven D'Aprano steve +comp.lang.pyt...@pearwood.info wrote: Sounds like this library is documented the same way most third party libraries are: as an afterthought, by somebody who is so familiar with the software that he cannot imagine why anyone might actually need documentation. I feel your pain. Thanks Steven, I suspected this might be the case, but wasn't sure if I was missing something obvious. Maybe I'll start on a different project using better-documented or just the built-in libraries. Many thanks, Martin. -- http://mail.python.org/mailman/listinfo/python-list
Multiprocessing Logging
Hi, I'm currently writing a multiprocess application with Python 3.2 and the multiprocessing module. My subprocesses will use a QueueHandler to log messages (by sending them to the main process, which uses a QueueListener). However, if logging is already configured when I create the subprocesses, they will inherit the configuration, and the open file descriptors used for logging in the main process. However, when I tried this with a basicConfig configuration, which prints to a file, the messages are only written once to the file. I don't understand why. Normally, the main process contains a logging handler to log to the file. This handler will be copied into the child processes. Child processes will then have two handlers: a QueueHandler and a FileHandler. They should write to the file handler and then send the message to the main process QueueListener, which should write the message AGAIN to the FileHandler. But that's not the case. How can I totally unset the logging configuration in child processes and only enable the QueueHandler (for all configured loggers, not only the root one)? Thanks for your help, -- http://mail.python.org/mailman/listinfo/python-list
Python Training
Hi, We are an IT Training company located in Bangalore, Chennai and Coimbatore. We provide Python training with Placement Assistance. For more details, email to mo...@cgonsoft.com -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Training
On Apr 7, 2:21 pm, Mohan kumar mo...@cegonsoft.com wrote: Hi, We are an IT Training company located in Bangalore, Chennai and Coimbatore. We provide Python training with Placement Assistance. For more details, email to mo...@cgonsoft.com We also provide Online Training. -- http://mail.python.org/mailman/listinfo/python-list
Re: escaping/encoding/formatting in python
On Sat, Apr 7, 2012 at 3:36 PM, Nobody nob...@nowhere.com wrote: The delimiter can be chosen either by analysing the string or by choosing a string at random and relying upon a collision being statistically improbable. The same techniques being available to MIME multi-part encoders, and for the same reason. Nestable structures can be quite annoying to parse. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP 274
This proposal was suggested in 2001 and is only now being implemented. Why the extended delay? Sent from my iPhone On Apr 7, 2012, at 3:32 AM, Alec Taylor alec.tayl...@gmail.com wrote: Has been withdrawn... and implemented http://www.python.org/dev/peps/pep-0274/ -- http://mail.python.org/mailman/listinfo/python-list -- http://mail.python.org/mailman/listinfo/python-list
ordering with duck typing in 3.1
hi, please, what am i doing wrong here? the docs say http://docs.python.org/release/3.1.3/library/stdtypes.html#comparisons "in general, __lt__() and __eq__() are sufficient, if you want the conventional meanings of the comparison operators" but i am seeing

    assert 2 < three
    E TypeError: unorderable types: int() < IntVar()

with this test:

    class IntVar(object):

        def __init__(self, value=None):
            if value is not None:
                value = int(value)
            self.value = value

        def setter(self):
            def wrapper(stream_in, thunk):
                self.value = thunk()
                return self.value
            return wrapper

        def __int__(self):
            return self.value

        def __lt__(self, other):
            return self.value < other

        def __eq__(self, other):
            return self.value == other

        def __hash__(self):
            return hash(self.value)

    class DynamicTest(TestCase):

        def test_lt(self):
            three = IntVar(3)
            assert three < 4
            assert 2 < three
            assert 3 == three

so what am i missing? thanks, andrew -- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing Logging
On 07/04/2012 11:22, Thibaut DIRLIK wrote:

Hi, I'm currently writing a multiprocess application with Python 3.2 and the multiprocessing module. My subprocesses will use a QueueHandler to log messages (by sending them to the main process, which uses a QueueListener). However, if logging is already configured when I create the subprocesses, they will inherit the configuration, and the open file descriptors used for logging in the main process. However, when I tried this with a basicConfig configuration, which prints to a file, the messages are only written once to the file. I don't understand why. Normally, the main process contains a logging handler to log to the file. This handler will be copied into the child processes. Child processes will then have two handlers: a QueueHandler and a FileHandler. They should write to the file handler and then send the message to the main process QueueListener, which should write the message AGAIN to the FileHandler. But that's not the case. How can I totally unset the logging configuration in child processes and only enable the QueueHandler (for all configured loggers, not only the root one)? Thanks for your help,

Ok, I understand what happened. In fact, configuring the logging before forking works fine. The subprocess inherits the configuration, as I thought. The problem was that I didn't pass any handler to the QueueListener constructor, so when the listener received a message, it wasn't handled. I'm not sure how the logging module works, but what handlers should I pass to the QueueListener constructor? I mean, maybe I would like some messages (depending on the logger) to be logged to a file, while some other messages would just be printed to stdout. This doesn't seem to be doable with a QueueListener. Maybe I should implement my own system, and pass a little more information with the record sent in the queue: the logger name, for example. Then, in the main process, I would do a logging.getLogger(loggername) and log the record using this logger (however it was configured). What do you think? -- http://mail.python.org/mailman/listinfo/python-list
Distribute app without source?
Thanks in advance for any insights! My partner and I have developed an application primarily intended for internal use within our company. However, we face the need to expose the app to certain non-employees. We would like to do so without exposing our source code. Our targets include users of Windows and Mac OS, but not UNIX. We are using Python 3.2 and tkinter. It appears, and limited testing bears out, that py2app, and presumably py2exe, are not options given lack of 3.x support. PyInstaller does not support the 64-bit version we are using. Does it make sense for us to try to use pyInstaller with a 32-bit install of Python 3.2? Are there other options? Any assistance or input will be welcome. regards, Bill -- http://mail.python.org/mailman/listinfo/python-list
Re: ordering with duck typing in 3.1
On 07.04.2012 14:23, andrew cooke wrote:

    class IntVar(object):

        def __init__(self, value=None):
            if value is not None:
                value = int(value)
            self.value = value

        def setter(self):
            def wrapper(stream_in, thunk):
                self.value = thunk()
                return self.value
            return wrapper

        def __int__(self):
            return self.value

        def __lt__(self, other):
            return self.value < other

        def __eq__(self, other):
            return self.value == other

        def __hash__(self):
            return hash(self.value)

so what am i missing?

If I don't confuse things, I think you are missing a __gt__() in your IntVar() class. This is because first, '2 < three' is tried as 2.__lt__(three). As this fails due to the types involved, it is reversed: 'three > 2' is equivalent. As your three doesn't have a __gt__(), three.__gt__(2) fails as well. Thomas -- http://mail.python.org/mailman/listinfo/python-list
Re: Distribute app without source?
On 4/7/2012 8:07 AM, Bill Felton wrote: We are using Python 3.2 and tkinter. It appears, and limited testing bears out, that py2app, and presumably py2exe, are not options given lack of 3.x support. cx_Freeze supports Python 3.2. It works fine for my purposes, but I have not done any serious work with it. I don't know about tkinter, but I was able to freeze a simple Qt application. -- CPython 3.2.2 | Windows NT 6.1.7601.17640 -- http://mail.python.org/mailman/listinfo/python-list
Re: 'string_escape' in python 3
On Sat, Apr 7, 2012 at 12:10 AM, Ian Kelly ian.g.ke...@gmail.com wrote: import codecs codecs.getdecoder('unicode_escape')(s)[0] 'Hello: this is a test' Cheers, Ian Thanks, Ian. I had assumed that if a unicode string didn't have a .decode method, then I couldn't use a decoder on it, so it hadn't occurred to me to try that N. -- http://mail.python.org/mailman/listinfo/python-list
how i loved lisp cons and UML and Agile and Design Patterns and Pythonic and KISS and YMMV and stopped worrying
OMG, how i loved lisp cons and macros and UML and Agile eXtreme Programing and Design Patterns and Anti-Patterns and Pythonic and KISS and YMMV and stopped worrying. 〈World Multiconference on Systemics, Cybernetics and Informatics???〉 http://xahlee.org/comp/WMSCI.html highly advanced plain text format follows, as a amenity for tech geekers. --- World Multiconference on Systemics, Cybernetics and Informatics ??? Xah Lee, 2010-04-04 Starting in 2004, i regularly receive email asking me to participate a conference, called “World Multiconference on Systemics, Cybernetics and Informatics” (WMSCI). Here's one of such email i got today: Dear Xah Lee: As you know the Nobel Laureate Herbert Simon affirmed that design is an essential ingredient of the Artificial Sciences Ranulph Glanville, president of the American Society for Cybernetics and expert in design theory, affirms that “Research is a variety of design. So do research as design. Design is key to research. Research has to be designed.” An increasing number of authors are stressing the relationships between Design and Research. Design is a mean for Research, and Research is a mean for Design. Design and research are related via cybernetic loops in the context of means-ends logic. Consequently, we invite you to submit a paper/abstract and/ot to organize an invited session in the International Symposium on Design and Research in the Artificial and the Natural Sciences: DRANS 2010 (http://www.sysconfer.org/drans) which is being organized in the context of The 14th World Multi- Conference on Systemics, Cybernetics and Informatics: WMSCI 2010 (http://www.sysconfer.org/wmsci), 2010 in Orlando, Florida, USA. … Here's the first email i got from them from my mail archive: From: sci2...@iiis.org Subject: Inviting you to participate in SCI 2005 Date: October 20, 2004 1:39:48 PM PDT To: x...@xahlee.org Dear Dr. 
Xah Lee: On behalf of the SCI 2005 Organizing Committee, I would like to invite you to participate in the 9th World Multi-Conference on Systemics, Cybernetics and Informatics (http://www.iiisci.org/sci2005), which will take place in Orlando, Florida, USA, from July 10-13, 2005. Full text wmsci.txt. I do not know this organization. I don't know how they got my email or how they know that i'm involved in the computer science community. (surely from trawling email addresses in science forums) Though, after getting a few of their emails, one clearly gets a sense that it is a scam, soliciting innocent idiotic academicians (many PhDs are idiots.). Here's what Wikipedia has to say about them: World Multiconference on Systemics, Cybernetics and Informatics. Here's a juicy quote: WMSCI attracted publicity of a less favorable sort in 2005 when three graduate students at MIT succeeded in getting a paper accepted as a “non-reviewed paper” to the conference that had been randomly generated by a computer program called SCIgen.[8] Documents generated by this software have been used to submit papers to other similar conferences. Compare to the Sokal affair. WMSCI has been accused of using spam to advertise its conferences.[8] Now and then, whenever i got their email, the curiosity in me do lookup the several terms they used in the email, partly to check the validity. For example, in this one, it mentions Herbert Simon. Another one i recall i got recently mentioned Science 2.0. Both of the terms i haven't heard of before. One'd think that it is easy to tell scam from real science, but with today's science proliferation, it's actually not that easy. Even if you are a academic, it's rather common that many new science terms you never heard of, because there are tremendous growth of new disciplines or cross disciplines, along with new jargons. 
Cross-discipline is rather common and natural, unlike in the past where science is more or less clearly delineated hierarchy like Physics, Math, Chemistry, Biology, etc and their sub-branches. However, many of today's new areas is a bit questionable, sometimes a deliberate money-making scheme, which i suppose is the case for WMSCI. Many of these, use terms like “post-modern”, “science 2.0” to excuse themselves from the rather strict judgment of classic science. Many of these terms such as “systemics”, “cybernetics”, “infomatics” are vague. Depending on the context, it could be a valid emerging science discipline, but it could also be pure new-age hogwash. And sometimes, nobody really knows today. Fledgling scientific fields may started off as pseudo-science but later became well accepted with more solid theories. (e.g. evolutionary psychology) In the past 2 decades, there are quite a few cases where peer reviewed papers published in respected journals are exposed as highly questionable or deliberate hoax, arose massive debate on the peer review system. The peer-review system itself can't hold all the blame, but part of it has to do with the incredible growth of sciences and
Re: ordering with duck typing in 3.1
andrew cooke wrote in news:33019705.1873.1333801405463.JavaMail.geo-discussion-forums@ynmm9 in gmane.comp.python.general:

hi, please, what am i doing wrong here? the docs say http://docs.python.org/release/3.1.3/library/stdtypes.html#comparisons "in general, __lt__() and __eq__() are sufficient, if you want the conventional meanings of the comparison operators" but i am seeing

    assert 2 < three
    E TypeError: unorderable types: int() < IntVar()

with this test:

    class IntVar(object):

        def __lt__(self, other):
            return self.value < other

so what am i missing?

The part of the docs you are relying on uses the wording "in general", IOW, it is not saying that defining __eq__ and __lt__ will always be sufficient. In this case the expression 2 < three is calling int.__lt__, which doesn't know how to compare to an instance of your class, so it returns NotImplemented. At this point, if you had defined a __gt__ method, the interpreter would then try to call that, having first switched the arguments around. But you didn't, so a TypeError is raised. I'm afraid I couldn't find anywhere in the docs where that behaviour is described; I suspect I only know it from lurking on usenet for a number of years. The best description that I could find of the behaviour you are seeing is at: http://docs.python.org/py3k/reference/expressions.html#not-in There is a paragraph that contains: "... the == and != operators always consider objects of different types to be unequal, while the <, >, <= and >= operators raise a TypeError when comparing objects of different types that do not implement these operators for the given pair of types. ..." Perhaps the docs could be reworded to note that, to define a full set of comparisons between *different* types, you need to define a full set of special methods. Some links I found along the way:

http://docs.python.org/release/3.1.3/library/constants.html?highlight=__lt__#NotImplemented
http://code.activestate.com/recipes/576685/
http://docs.python.org/py3k/library/functools.html#functools.total_ordering

Rob. -- http://mail.python.org/mailman/listinfo/python-list
multithreading
I'm about to write my first module and I don't know how I should handle multithreading/-processing. I'm not doing multi-threading inside my module. I'm just trying to make it thread-safe so that users *can* do multi-threading. For instance, let's say I want to make this code thread-safe:

    myDict = {}

    def f(name, val):
        if name not in myDict:
            myDict[name] = val
        return myDict[name]

I could use threading.Lock() but I don't know if that might interfere with some other modules imported by the user. In some languages you can't mix multi-threading libraries. Is Python one of them? Kiuhnm -- http://mail.python.org/mailman/listinfo/python-list
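For what it's worth, the standard library's threading locks don't conflict with other modules that also use threading. A minimal thread-safe sketch of the function above (the single module-level lock is one reasonable choice, not the only design):

```python
import threading

my_dict = {}
my_lock = threading.Lock()  # one lock guarding all access to my_dict

def f(name, val):
    # The membership test and the assignment must happen atomically,
    # so hold the lock across the whole check-then-set sequence,
    # not around each operation separately.
    with my_lock:
        if name not in my_dict:
            my_dict[name] = val
        return my_dict[name]

print(f('a', 1))  # first writer wins
print(f('a', 2))  # later values for the same key are ignored
```

The with-statement releases the lock even if the body raises, which avoids the classic acquire-without-release deadlock.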
Re: ordering with duck typing in 3.1
On Sat, 7 Apr 2012 05:23:25 -0700 (PDT) andrew cooke <and...@acooke.org> wrote:

hi, please, what am i doing wrong here? the docs say http://docs.python.org/release/3.1.3/library/stdtypes.html#comparisons "in general, __lt__() and __eq__() are sufficient, if you want the conventional meanings of the comparison operators" but i am seeing

    assert 2 < three
    E TypeError: unorderable types: int() < IntVar()

with this test:

    class IntVar(object):

        def __init__(self, value=None):
            if value is not None:
                value = int(value)
            self.value = value

        def setter(self):
            def wrapper(stream_in, thunk):
                self.value = thunk()
                return self.value
            return wrapper

        def __int__(self):
            return self.value

        def __lt__(self, other):
            return self.value < other

        def __eq__(self, other):
            return self.value == other

        def __hash__(self):
            return hash(self.value)

    class DynamicTest(TestCase):

        def test_lt(self):
            three = IntVar(3)
            assert three < 4
            assert 2 < three
            assert 3 == three

so what am i missing?

I think that quote from the docs is just to point out that you only need those two (== and <) to derive any of the other comparisons; but not to imply that a class that only defines those two will automatically possess the others. However, you can do that, with functools.total_ordering. Regards, John -- http://mail.python.org/mailman/listinfo/python-list
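A sketch of the functools.total_ordering route (available since Python 3.2; the class here is a pared-down version of the IntVar from the thread, keeping only the comparison methods):

```python
from functools import total_ordering

@total_ordering
class IntVar(object):
    # Only __eq__ and __lt__ are written by hand; the decorator
    # fills in __le__, __gt__ and __ge__, so a reflected comparison
    # like 2 < three can fall back to the generated three.__gt__(2).
    def __init__(self, value):
        self.value = int(value)

    def __eq__(self, other):
        return self.value == other

    def __lt__(self, other):
        return self.value < other

three = IntVar(3)
assert three < 4
assert 2 < three   # int.__lt__ returns NotImplemented, then
                   # Python tries three.__gt__(2)
assert 3 == three
print('all comparisons pass')
```

Without the decorator, the second assertion raises TypeError exactly as in the original post.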
Re: ordering with duck typing in 3.1
Thomas Rachel wrote:

On 07.04.2012 14:23, andrew cooke wrote:

    class IntVar(object):

        def __init__(self, value=None):
            if value is not None:
                value = int(value)
            self.value = value

        def setter(self):
            def wrapper(stream_in, thunk):
                self.value = thunk()
                return self.value
            return wrapper

        def __int__(self):
            return self.value

        def __lt__(self, other):
            return self.value < other

        def __eq__(self, other):
            return self.value == other

        def __hash__(self):
            return hash(self.value)

so what am i missing?

If I don't confuse things, I think you are missing a __gt__() in your IntVar() class. This is because first, '2 < three' is tried as 2.__lt__(three). As this fails due to the types involved, it is reversed: 'three > 2' is equivalent. As your three doesn't have a __gt__(), three.__gt__(2) fails as well.

Practically, yes. Just that that's not what the documentation says. Looks like Python no longer tries to cobble together missing relations based on the usual properties of ordering. Mel. -- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing Logging
Thibaut <merwin.irc at gmail.com> writes:

Ok, I understand what happened. In fact, configuring the logging before forking works fine. The subprocess inherits the configuration, as I thought. The problem was that I didn't pass any handler to the QueueListener constructor, so when the listener received a message, it wasn't handled. I'm not sure how the logging module works, but what handlers should I pass to the QueueListener constructor? I mean, maybe I would like some messages (depending on the logger) to be logged to a file, while some other messages would just be printed to stdout. This doesn't seem to be doable with a QueueListener. Maybe I should implement my own system, and pass a little more information with the record sent in the queue: the logger name, for example. Then, in the main process, I would do a logging.getLogger(loggername) and log the record using this logger (however it was configured). What do you think?

You probably need different logging configurations in different processes. In your multiprocessing application, nominate one of the processes as a logging listener. It should initialize a QueueListener subclass which you write. All other processes should just configure a QueueHandler, which uses the same queue as the QueueListener. All the processes with QueueHandlers just send their records to the queue. The process with the QueueListener picks these up and handles them by calling the QueueListener's handle() method. The default implementation of QueueListener.handle() is:

    def handle(self, record):
        record = self.prepare(record)
        for handler in self.handlers:
            handler.handle(record)

where self.handlers is just the handlers you passed to the QueueListener constructor. However, if you want a very flexible configuration where different loggers have different handlers, this is easy to arrange. Just configure logging in the listener process however you want, and then, in your QueueListener subclass, do something like this:

    class MyQueueListener(logging.handlers.QueueListener):

        def handle(self, record):
            record = self.prepare(record)
            logger = logging.getLogger(record.name)
            logger.handle(record)

This will pass the events to whatever handlers are configured for a particular logger. I will try to update the Cookbook in the logging docs with this approach, and a working script. Background information is available here: [1][2] Regards, Vinay Sajip

[1] http://plumberjack.blogspot.co.uk/2010/09/using-logging-with-multiprocessing.html
[2] http://plumberjack.blogspot.co.uk/2010/09/improved-queuehandler-queuelistener.html

-- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing Logging
On 07/04/2012 16:47, Vinay Sajip wrote:

Just configure logging in the listener process however you want, and then, in your QueueListener subclass, do something like this:

    class MyQueueListener(logging.handlers.QueueListener):

        def handle(self, record):
            record = self.prepare(record)
            logger = logging.getLogger(record.name)
            logger.handle(record)

This will pass the events to whatever handlers are configured for a particular logger.

This is exactly what I wanted, it seems perfect. However, I still have a question: from what I understood, I have to configure logging AFTER creating the processes, to avoid the child processes inheriting the logging config. Unless there is a way to clean the logging configuration in child processes, so they only have one handler: the QueueHandler. I looked at the logging code and it doesn't seem to have an easy way to do this. The problem with configuring logging after process creation is that... I can't log during process creation. But if it's too complicated, I will just do this. Thanks again for your help Vinay, -- http://mail.python.org/mailman/listinfo/python-list
Let's have hex
A hexual game: http://cotpi.com/letushavehex/1/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing Logging
Thibaut <merwin.irc at gmail.com> writes:

This is exactly what I wanted, it seems perfect. However, I still have a question: from what I understood, I have to configure logging AFTER creating the processes, to avoid the child processes inheriting the logging config. Unless there is a way to clean the logging configuration in child processes, so they only have one handler: the QueueHandler. I looked at the logging code and it doesn't seem to have an easy way to do this. The problem with configuring logging after process creation is that... I can't log during process creation. But if it's too complicated, I will just do this.

You may be able to have a clean configuration: for example, dictConfig() allows the configuration dictionary to specify whether existing loggers are disabled. So the details depend on the details of your desired configuration. One more point: I suggested that you subclass QueueListener, but you don't actually need to do this. For example, you can do something like:

    class DelegatingHandler(object):

        def handle(self, record):
            logger = logging.getLogger(record.name)
            logger.handle(record)

And then instantiate the QueueListener with an instance of DelegatingHandler. QueueListener doesn't need actual logging handlers, just something with a handle method which takes a record. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
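To make the queue-based pattern concrete, here is a minimal single-process sketch wiring a QueueHandler to a QueueListener (in a real multiprocessing app the queue would be a multiprocessing.Queue shared with the children; the list-collecting handler and logger names here are purely illustrative):

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue()

# "Child" side: the logger writes records only to the queue.
root = logging.getLogger('demo')
root.addHandler(logging.handlers.QueueHandler(log_queue))
root.setLevel(logging.INFO)

# "Listener" side: a tiny handler that just collects messages.
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

listener = logging.handlers.QueueListener(log_queue, ListHandler())
listener.start()

logging.getLogger('demo.worker').info('hello from a worker')
listener.stop()   # drains the queue and joins the listener thread

print(records)    # ['hello from a worker']
```

Swapping ListHandler for the DelegatingHandler shown above gives the per-logger routing discussed in the thread.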
Re: 'string_escape' in python 3
On Sat, Apr 7, 2012 at 8:30 AM, Nicholas Cole nicholas.c...@gmail.com wrote: On Sat, Apr 7, 2012 at 12:10 AM, Ian Kelly ian.g.ke...@gmail.com wrote: import codecs codecs.getdecoder('unicode_escape')(s)[0] 'Hello: this is a test' Cheers, Ian Thanks, Ian. I had assumed that if a unicode string didn't have a .decode method, then I couldn't use a decoder on it, so it hadn't occurred to me to try that Just a warning, I'm not really sure whether this behavior is intended or not. I just tried it and found that it seems to work. -- http://mail.python.org/mailman/listinfo/python-list
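An equivalent spelling uses codecs.decode directly, which in CPython 3 accepts a str for the unicode_escape codec (as noted above, it is not obvious whether this behaviour is formally intended, so treat it as a working sketch rather than a documented guarantee):

```python
import codecs

s = 'Hello:\\tthis is a test'   # str holding a literal backslash-t
decoded = codecs.decode(s, 'unicode_escape')
print(repr(decoded))            # the \t becomes a real tab character

assert '\t' in decoded
```

One caveat: for str input, the codec effectively treats the text as latin-1 first, so non-latin-1 characters in the input will raise an error.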
Re: PEP 274
On 4/7/2012 7:20 AM, Rodrick Brown wrote: This proposal was suggested in 2001 and is only now being implemented. Why the extended delay? It was implemented in revised form 3 years ago in 3.0. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
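For anyone who missed the feature itself, PEP 274's dict comprehensions have been in the language since Python 3.0 (and were backported to 2.7). A quick illustration:

```python
# Dict comprehension (PEP 274): build a dict in one expression.
squares = {n: n * n for n in range(5)}
print(squares)   # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

# Equivalent pre-PEP spelling via dict() over key/value pairs:
assert squares == dict((n, n * n) for n in range(5))

# A filtering clause works too, as in list comprehensions:
evens = {n: n * n for n in range(10) if n % 2 == 0}
assert sorted(evens) == [0, 2, 4, 6, 8]
```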
Re: Python Script Works Locally But Not Remotely with SSH
Thank you for the help. I guess I didn't understand what's really going on. I thought if I SSH even from a Linux to a Windows machine whatever I say on the SSH client command line would be the same as me doing a command on the DOS command-line in Windows. I incorrectly thought SSH is just a tunnel for text... But that's not what's going on, there's a lot more to it. Thanks, I will get into it more to understand this. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Script Works Locally But Not Remotely with SSH
On 4/7/2012 11:59 AM, goldtech wrote: I thought if I SSH even from a Linux to a Windows machine whatever I say on the SSH client command line would be the same as me doing a command on the DOS command-line in Windows. I incorrectly thought SSH is just a tunnel for text... It gives you whatever shell the SSH server serves (could be cmd, could be a Windows build of bash, could be whatever). I think the other posters think you are trying to view Firefox on the Linux client's desktop. If that is indeed the case, you will need an X server on the XP machine (I think; it might not be that simple since it's a Windows process). AFAICT, you want to open Firefox and have it show on the XP machine's desktop. If that's the case, see Jerry Hill's post and configure Windows to either run the SSH server as the logged-in user or to allow it to interact with the logged-in user's desktop. -- CPython 3.2.2 | Windows NT 6.1.7601.17640 -- http://mail.python.org/mailman/listinfo/python-list
Re: Distribute app without source?
On 4/7/2012 9:07 AM, Bill Felton wrote: Thanks in advance for any insights! My partner and I have developed an application primarily intended for internal use within our company. However, we face the need to expose the app to certain non-employees. We would like to do so without exposing our source code. To really do that, make it a web service, so the code only lives on your server. That also takes care of Our targets include users of Windows and Mac OS, but not UNIX. You could also distribute .pyc files compiled from obfuscated .py files. We are using Python 3.2 and tkinter. It appears, and limited testing bears out, that py2app, and presumably py2exe, are not options given lack of 3.x support. PyInstaller does not support the 64-bit version we are using. Any such thing will include the contents of .pyc files somehow, even if harder to get at. Does it make sense for us to try to use pyInstaller with a 32-bit install of Python 3.2? If your app runs within 2 GB, I would think yes. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
documentation for asyncmongo?
Is there any kind of API documentation for asyncmongo? On GitHub they say asyncmongo syntax strives to be similar to pymongo (http://api.mongodb.org/python/current/api/pymongo/collection.html). However, many basic things do not work, or they are not similar. http://api.mongodb.org/python/2.1.1/tutorial.html

Example from pymongo:

    db.collection_names()
    [u'posts', u'system.indexes']

The same in asyncmongo:

    TypeError: 'Cursor' object is not callable

Even the connection is different: pymongo.Connect versus asyncmongo.Client. It has a pool_id parameter, and what the heck is that? Is there no documentation for asyncmongo at all? Thanks, Laszlo -- http://mail.python.org/mailman/listinfo/python-list
ordering with duck typing in 3.1
Any reason you can't derive from int instead of object? You may also want to check out functools.total_ordering on 2.7+ -- http://mail.python.org/mailman/listinfo/python-list
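For reference, functools.total_ordering fills in the remaining rich-comparison methods from __eq__ plus one ordering method. A small sketch (the Version class is invented for illustration):

```python
import functools

@functools.total_ordering
class Version:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

# __le__, __gt__ and __ge__ are generated by the decorator:
assert Version(1, 2) < Version(1, 10)
assert Version(2, 0) >= Version(1, 9)
```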
Re: multithreading
Kiuhnm wrote: I'm about to write my first module and I don't know how I should handle multithreading/-processing. I'm not doing multi-threading inside my module. I'm just trying to make it thread-safe so that users *can* do multi-threading.

There are a couple of conventions to follow. Trying to anticipate the threading needs of users and then locking everything for the worst case is a bad idea.

So what are the conventions?

Unless documented otherwise, classes don't guarantee that each instance can be used by more than one thread. Most of the classes in Python's standard library are not one-instance-multiple-threads safe. An important documented exception is queue.Queue. Classes should be safe for instance-per-thread multi-threading, unless documented otherwise. Likewise, functions should be thread-safe under the assumption that their arguments are not shared between threads, which brings us to your example:

For instance, let's say I want to make this code thread-safe:

    ---
    myDict = {}

    def f(name, val):
        if name not in myDict:
            myDict[name] = val
        return myDict[name]
    ---

First, don't re-code Python's built-ins. The example is a job for dict.setdefault(). Language built-ins are already thread-safe (at least in CPython), though not meant as thread synchronization primitives. Second, the example suggests no obvious reason for the single global variable. It could be a class for which users can make any number of instances. Third, there are cases where you want a single global. Most of the time I'd recommend warning users about threading assumptions. -Bryan -- http://mail.python.org/mailman/listinfo/python-list
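Bryan's setdefault() point, spelled out: the check-then-store in the original is two steps and therefore racy, while dict.setdefault() performs the insert-if-absent and the lookup as a single C-level dict operation, which is atomic under CPython's GIL:

```python
my_dict = {}

def f(name, val):
    # insert val only if name is absent; return the stored value
    # either way -- no separate "check" step for another thread
    # to sneak between
    return my_dict.setdefault(name, val)

assert f('a', 1) == 1
assert f('a', 2) == 1   # the first value wins; 2 is discarded
```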
Re: Distribute app without source?
On Apr 7, 2012, at 1:22 PM, Terry Reedy wrote: On 4/7/2012 9:07 AM, Bill Felton wrote: Thanks in advance for any insights! My partner and I have developed an application primarily intended for internal use within our company. However, we face the need to expose the app to certain non-employees. We would like to do so without exposing our source code. To really do that, make it a web service, so the code only lives on your server. That also takes care of That's not really an alternative for us, for a variety of reasons. It's not clear at the moment if it would ever be an alternative, given our app and its structure and intended use. As recommended elsewhere in this thread, cx_freeze seems likely to do us some good. Initial testing has been fairly positive, although there are still some issues we're wrestling with. If those continue, I'll post questions to the cx_freeze list. Thanks everyone. regards, Bill -- http://mail.python.org/mailman/listinfo/python-list
Re: multithreading
On 4/7/2012 22:09, Bryan wrote: For instance, let's say I want to make this code thread-safe: --- myDict = {} def f(name, val): if name not in myDict: myDict[name] = val return myDict[name] --- First, don't re-code Python's built-ins. The example is a job for dict.setdefault(). [...] That was just an example for the sake of the discussion. My question is this: can I use 'threading' without interfering with the program which will import my module? Kiuhnm -- http://mail.python.org/mailman/listinfo/python-list
Re: multithreading
On Apr 7, 5:06 pm, Kiuhnm kiuhnm03.4t.yahoo.it wrote: On 4/7/2012 22:09, Bryan wrote: For instance, let's say I want to make this code thread-safe:

    ---
    myDict = {}

    def f(name, val):
        if name not in myDict:
            myDict[name] = val
        return myDict[name]
    ---

First, don't re-code Python's built-ins. The example is a job for dict.setdefault(). [...] That was just an example for the sake of the discussion. My question is this: can I use 'threading' without interfering with the program which will import my module?

'import threading' ought to work everywhere, but that's not enough to tell you whether whatever you're trying to do will actually work. However, you shouldn't need to do it unless your module is meant to /only/ be used in applications that have done 'import threading' elsewhere. Otherwise, you probably have a pretty serious design issue. Global state is bad. TLS state is little better, even if it's common in a lot of Python modules. Non-thread-safe object instances are usually fine. Object construction needs to be thread-safe, but that's also the default behavior. You need not worry about it unless you're doing very unusual things. Plainly, most of the time you shouldn't need to do anything to support multiple threads beyond avoiding global state. In fact, you should stop and give some serious thought to your design if you need to do anything else. Adam -- http://mail.python.org/mailman/listinfo/python-list
Pass a list of variables to a procedure
Hi there, I would like to be able to pass a list of variables to a procedure, and have the output assigned to them. For instance:

    x = 0
    y = 0
    z = 0
    vars = [x, y, z]
    parameters = [1, 2, 3]
    for i in range(1, len(vars)):
        *** some function that takes the parameter 1, does a computation and assigns the output to x, and so on and so forth.

Such that later in the program I can print x, y, z. I hope that makes sense; otherwise I have to do:

    x = somefunction(1)
    y = somefunction(2)
    z = somefunction(3)

etc etc. Appreciate any help -- http://mail.python.org/mailman/listinfo/python-list
Re: Pass a list of variables to a procedure
On Sat, Apr 7, 2012 at 2:15 PM, KRB alaga...@gmail.com wrote: Hi there, I would like to be able to pass a list of variables to a procedure, and have the output assigned to them.

You cannot pass a variable itself to a function; you can only pass a variable's value. Which is to say that Python doesn't use pass-by-reference. Without using black magic, a Python function cannot rebind variables in its caller's scope. Mutable values can be mutated, however. Details: http://effbot.org/zone/call-by-object.htm

For instance: x=0 y=0 z=0 vars =[x,y,z] parameters=[1,2,3] for i in range(1,len(vars)): *** somefunction that takes the parameter 1, does a computation and assigns the output to x, and so on and so forth. Such that later in the program I can print x,y,z I hope that makes sense, otherwise I have to do: x=somefunction(1) y=somefunction(2) z=somefunction(3) etc etc

Just use sequence (un)packing:

    def somefunction(*parameters):
        # one would normally use a list comprehension here;
        # for simplicity, I'm not
        results = []
        for parameter in parameters:
            result = do_some_calculation(parameter)
            results.append(result)
        return results

    # ...later...
    x, y, z = somefunction(1, 2, 3)

Relevant docs: http://docs.python.org/tutorial/datastructures.html#tuples-and-sequences http://docs.python.org/tutorial/controlflow.html#tut-unpacking-arguments Cheers, Chris -- http://mail.python.org/mailman/listinfo/python-list
Re: Pass a list of variables to a procedure
On Sat, 07 Apr 2012 14:15:09 -0700, KRB wrote: I would like to be able to pass a list of variables to a procedure, and have the output assigned to them.

Use a dictionary or an object. If the variables are globals (i.e. attributes of the current module), you can pass the result of globals() into the function, or have the function return a dictionary which the caller merges into globals(). There is no way to do anything similar for local variables: the parser has to see the name being used as a local variable in order for it to actually exist as a local variable.

otherwise I have to do: x=somefunction(1) y=somefunction(2) z=somefunction(3)

Or you could do:

    x, y, z = [somefunction(i) for i in [1, 2, 3]]

-- http://mail.python.org/mailman/listinfo/python-list
Re: Why does this hang sometimes?
I am just playing around with threading and subprocess and found that the following program will hang up and never terminate every now and again.

    import threading
    import subprocess
    import time

    def targ():
        p = subprocess.Popen(["/bin/sleep", "2"])
        while p.poll() is None:
            time.sleep(1)

    t1 = threading.Thread(target=targ)
    t2 = threading.Thread(target=targ)
    t1.start()
    t2.start()
    t1.join()
    t2.join()

I found this bug, and while it sounds similar it seems that it was closed during Python 2.5 (I'm using 2.7.2): http://bugs.python.org/issue1404925

I can confirm hanging on my installation of 2.7.2. I also ran this code 100 times on 3.2.2 without experiencing a hang. Is version 3.x a possibility for you? -- http://mail.python.org/mailman/listinfo/python-list
Re: Pass a list of variables to a procedure
Thank you, Chris!

Sent from my iPhone

On Apr 7, 2012, at 3:24 PM, Chris Rebert c...@rebertia.com wrote: [snip]

-- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing Logging
Thibaut merwin.irc at gmail.com writes: This is exactly what I wanted; it seems perfect. However I still have a question: from what I understood, I have to configure logging AFTER creating the processes, to avoid the child processes inheriting the logging config. Unless there is a way to clean the logging configuration in child processes, so they only have one handler: the QueueHandler. I looked at the logging code and it doesn't seem to have an easy way to do this. The problem with configuring logging after process creation is that... I can't log during process creation. But if it's too complicated, I will just do this.

I've updated the 3.2 / 3.3 logging cookbook with an example of what I mean. There is a gist of the example script at https://gist.github.com/2331314/ and the cookbook example should show once the docs get built on docs.python.org. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP 274
I have it in 2.7.3 On Sun, Apr 8, 2012 at 2:35 AM, Terry Reedy tjre...@udel.edu wrote: On 4/7/2012 7:20 AM, Rodrick Brown wrote: This proposal was suggested in 2001 and is only now being implemented. Why the extended delay? It was implemented in revised form 3 years ago in 3.0. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Gotcha's?
Roy Smith r...@panix.com wrote: There's absolutely no reason why JSON should follow Python syntax rules. No, but there certainly is a justification for expecting JAVASCRIPT Object Notation (which is, after all, what JSON stands for) to follow Javascript's syntax rules. And Javascript happens to follow the same quoting rules as Python. Now, I fully understand that it is the way it is. I'm merely pointing out that this was not an unreasonable expectation. -- Tim Roberts, t...@probo.com Providenza Boekelheide, Inc. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Gotcha's?
On Sun, Apr 8, 2012 at 1:47 PM, Tim Roberts t...@probo.com wrote: No, but there certainly is a justification for expecting JAVASCRIPT Object Notation (which is, after all, what JSON stands for) to follow Javascript's syntax rules. And Javascript happens to follow the same quoting rules as Python. Now, I fully understand that it is the way it is. I'm merely pointing out that his was not an unreasonable expectation. I agree, it would make sense. But that's only because of the name; if it had been called Just Simple Object Notation or something stupider, then nobody would expect it to be the same as anything. And these days, nobody's confused by the fact that Java and Javascript are completely different beasts. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Gotcha's?
On 4/4/2012 3:34 PM, Miki Tebeka wrote: Greetings, I'm going to give a Python Gotchas talk at work. If you have an interesting/common gotcha (warts/dark corners ...) please share. (Note that I went over http://wiki.python.org/moin/PythonWarts already). Thanks, -- Miki

A few Python gotchas:

1. Nobody is really in charge of third-party packages. In the Perl world, there's a central repository, CPAN, and quality control. Python's PyPI is just a collection of links. Many major packages are maintained by one person, and if they lose interest, the package dies.

2. C extensions are closely tied to the exact version of CPython you're using, and finding a properly built version may be difficult.

3. Eggs. The distutils system has certain assumptions built into it about where things go, and tends to fail in obscure ways. There's no uniform way to distribute a package.

4. The syntax for expression-IF is just weird.

5. + as concatenation. This leads to strange numerical semantics, such as (1,2) + (3,4) is (1,2,3,4). But for numarray arrays, + does addition. What does a mixed-mode expression of a numarray and a tuple do? Guess.

5. It's really hard to tell what's messing with the attributes of a class, since anything can store into anything. This creates debugging problems.

6. Multiple inheritance is a mess. Especially super.

7. Using attributes as dictionaries can backfire. The syntax of attributes is limited, so turning XML or HTML structures into Python objects creates problems.

8. Opening a URL can result in an unexpected prompt on standard input if the URL has authentication. This can stall servers.

9. Some libraries aren't thread-safe. Guess which ones.

10. Python 3 isn't upward compatible with Python 2.

John Nagle -- http://mail.python.org/mailman/listinfo/python-list
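Nagle's concatenation point in miniature: the same + operator means concatenation for built-in sequences but elementwise addition for array types (the Vec class below is a toy stand-in for numarray/numpy behaviour, not their actual implementation):

```python
# For built-in sequences, + concatenates:
assert (1, 2) + (3, 4) == (1, 2, 3, 4)

# An array-like type is free to overload + as elementwise addition,
# which is what numarray/numpy do. A toy illustration:
class Vec(tuple):
    def __add__(self, other):
        return Vec(a + b for a, b in zip(self, other))

# Same operator, very different meaning -- the gotcha in a nutshell:
assert Vec((1, 2)) + (3, 4) == (4, 6)
```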
Re: Python Gotcha's?
On Sun, Apr 8, 2012 at 2:01 PM, John Nagle na...@animats.com wrote: 4. The syntax for expression-IF is just weird. Agreed. Putting an expression first feels weird; in every high level language I know of, the word if is followed by the condition, and then by what to do if true, and then what to do if false - not true, then condition, then false. 6. Multiple inheritance is a mess. Especially super. Can you name any language in which multiple inheritance is NOT a mess? Okay, so I'm a bit cynical. But MI is its own problem, and I think the Python 3 implementation is about as good as it's worth hoping for. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Gotcha's?
On Sun, Apr 8, 2012 at 2:19 PM, Chris Angelico ros...@gmail.com wrote: Agreed. Putting an expression first feels weird; in every high level language I know of, the word if is followed by the condition, and then by what to do if true, and then what to do if false - not true, then condition, then false. Clarification: I'm talking primarily about statement-if here. Not many languages have an expression-if that isn't derived from either LISP or C; both of those still have the three parts in the same order, so it comes to the same thing. Python switches them around compared to that. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
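For concreteness, the ordering being discussed: Python's conditional expression puts the true-branch value first, before the condition, unlike C's ?: operator which leads with the condition:

```python
x = 5

# Python: VALUE_IF_TRUE if CONDITION else VALUE_IF_FALSE
label = 'big' if x > 3 else 'small'
assert label == 'big'

# Compare C's order, condition first:  x > 3 ? "big" : "small"
```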
Re: multithreading
Kiuhnm wrote: My question is this: can I use 'threading' without interfering with the program which will import my module?

Yes. The things to avoid are described at the bottom of: http://docs.python.org/library/threading.html

On platforms without threads, 'import threading' will fail. There's a standard library module dummy_threading which offers fake versions of the facilities in threading. It suggests:

    try:
        import threading as _threading
    except ImportError:
        import dummy_threading as _threading

--Bryan -- http://mail.python.org/mailman/listinfo/python-list
[issue9141] Allow objects to decide if they can be collected by GC
Martin v. Löwis mar...@v.loewis.de added the comment: I'm still unclear about the rationale for this change. krisvale says in the patch and in msg109099 that this is to determine whether an object can be collected at this time. Is the intended usage that the result value may change over the lifetime of the object? If so, I'm -1 on the patch. If an object cannot be collected at this time, it means that it is added to gc.garbage, which in turn means that it will never be collected (unless somebody explicitly clears gc.garbage). Supporting the case of objects that can be collected despite living in a cycle is fine to me, but those objects must not change their mind. Supporting the case of objects that are not collectable now, but may be collectable later, may have its use case (which one?), but this is not addressed by the patch (AFAICT). To support it, processing of the entire cycle must be postponed (to the next collection? to the next generation?). I'm -0 on recycling the is_gc slot. Having a GC header and having a non-trivial tp_del are two unrelated issues. If this is done, I think it would be best to rename the slot to tp_gc_flags or something. There is also the slight risk of some type in the wild returning non-1 currently, which then would get misinterpreted. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9141 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14478] Decimal hashing very slow, could be cached
Antoine Pitrou pit...@free.fr added the comment: I recommend that __hash__ should use functools.lru_cache for caching. Why would you do such a thing? A hash value is a single 64-bit slot, no need to add the memory consumption of a whole dictionary and the runtime cost of a LRU eviction policy when you can simply cache the hash in the object itself (like we already do for strings)... -- nosy: +pitrou ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14478 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
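A minimal sketch of the caching strategy Antoine describes, storing the computed hash in a slot on the object itself rather than in an external cache (the Frozen class is invented for illustration; str objects cache their hash the same way inside CPython):

```python
class Frozen:
    def __init__(self, data):
        self._data = tuple(data)
        self._hash = None            # one extra slot: no dict, no LRU policy

    def __hash__(self):
        # compute lazily on first use, then return the cached value
        if self._hash is None:
            self._hash = hash(self._data)
        return self._hash

f = Frozen([1, 2, 3])
assert hash(f) == hash((1, 2, 3))    # delegates to the tuple's hash
```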
[issue14478] Decimal hashing very slow, could be cached
Changes by Antoine Pitrou pit...@free.fr: -- stage: - needs patch versions: +Python 3.3 -Python 3.2 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14478 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14310] Socket duplication for windows
Antoine Pitrou pit...@free.fr added the comment: Any other comments? No, the patch looks ok now. Please watch the buildbots after you commit. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14310 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9141] Allow objects to decide if they can be collected by GC
Kristján Valur Jónsson krist...@ccpgames.com added the comment: Jim: The edge case of collecting an object that is alone in a cycle is something that isn't handled. I'm also not sure that it is worth doing or even safe or possible. but that is beside the point and not the topic of this patch. Martin: This patch is formalizing a known fact about cpython objects, albeit an uncommon one: Some objects with finalizers can be safely collected if they are in a certain state. The canonical example is the PyGeneratorObject. gccollect.c has special code for this type and there is a special evil API exposed to deal with the case. _This patch is not changing any behaviour._ Generator objects that are in a special state of existence cannot be collected. Rather, they are (and always have been) put in gc.garbage along with the rest of their cycle chain. In other cases, the finalizer causes no side effects and simply clearing them is ok. I couldn't tell you what those generator execution states are, but the fact is that they exist. A similar problem exists in Stackless python with tasklets. We solved it in this generic way there rather than add more exceptional code to gcmodule.c and this patch is the result of that work. In Stackless, the inverse is true: An object without an explicit finalizer can still be unsafe to collect, because the tp_dealloc can do evil things, but doesn't always, depending on object state. So, objects who change their mind about whether they can be collected or not are a fact of life in python. Yes, even cPython. This patch aims to formalize that fact and give it an interface, rather than to have special code for generator objects in gcmodule.c and an inscrutable API exposed (PyGen_NeedsFinalizing()) About reusing the slot: Slots are a precious commodity in python. This particular slot is undocumented and used only for one known thing: To distinguish PyTypeObjects from PyHeapTypeObjects. 
In fact, creating a slot and not using special case code (like they did for PyGeneratorObjects) was forward thinking, and I'm trying to build on that. Renaming the slot is a fine idea. A brief search on google code (and google at large) showed no one using this slot. It is one of those undocumented strange slots that one just initializes to 0. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9141 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9141] Allow objects to decide if they can be collected by GC
Kristján Valur Jónsson krist...@ccpgames.com added the comment: Btw. tangentially related to this discussion, issue 10576 aims to make the situation with uncollectable objects a little more bearable. An application can listen for garbage collection, visit gc.garbage and deal with its problematic types in its own way. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9141 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4892] Sending Connection-objects over multiprocessing connections fails
Antoine Pitrou pit...@free.fr added the comment: err, is it possible to edit out those file paths? I don't know how to do that. If you want I can remove the message altogether. But I don't see anything confidential or exploitable in your message. -- nosy: +pitrou ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4892] Sending Connection-objects over multiprocessing connections fails
Changes by Antoine Pitrou pit...@free.fr: -- assignee: jnoller - nosy: +sbt versions: +Python 3.3 -Python 3.2 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7978] SocketServer doesn't handle syscall interruption
Antoine Pitrou pit...@free.fr added the comment: Jerzy's latest patch looks ok to me. This is a slight behaviour change so I'm not sure it should go in 3.2/2.7. -- stage: - patch review versions: +Python 3.3 -Python 2.7, Python 3.1 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7978 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14520] Buggy Decimal.__sizeof__
New submission from Antoine Pitrou pit...@free.fr: I'm not sure __sizeof__ is implemented correctly:

    >>> from decimal import Decimal
    >>> import sys
    >>> d = Decimal('123456789123456798123456789123456798123456789123456798')
    >>> d
    Decimal('123456789123456798123456789123456798123456789123456798')
    >>> sys.getsizeof(d)
    24

... looks too small. -- assignee: skrah messages: 157726 nosy: pitrou, skrah priority: normal severity: normal status: open title: Buggy Decimal.__sizeof__ type: behavior versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14520 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14425] Improve handling of 'timeout' parameter default in urllib.urlopen
Senthil Kumaran sent...@uthcode.com added the comment: Hi David, I am sorry, I did not notice your second comment in this bug and later when you closed this, noticed the bug report. Yes, the default=None but actually pointing to a sentinel value is an odd duck, and I believe the explanation in the docs was updated a couple of times to inform users of that behavior, but the signature still gives a feeling that it could be improved. I am at a loss as well in terms of giving an easy solution to fix the docs. Thanks, Senthil -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14425 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
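The pattern under discussion, sketched out (function and names here are illustrative; urlopen's real sentinel is socket._GLOBAL_DEFAULT_TIMEOUT): a module-private sentinel lets the function distinguish "argument omitted" from "None passed explicitly", at the cost of a signature that reads oddly in the docs:

```python
_SENTINEL = object()   # private marker; only its identity matters

def fetch(url, timeout=_SENTINEL):
    if timeout is _SENTINEL:
        return 'global default timeout'   # caller passed nothing
    if timeout is None:
        return 'block forever'            # caller explicitly passed None
    return 'time out after %r seconds' % (timeout,)

assert fetch('http://example.com') == 'global default timeout'
assert fetch('http://example.com', None) == 'block forever'
```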
[issue14310] Socket duplication for windows
Roundup Robot devn...@psf.upfronthosting.co.za added the comment: New changeset 51b4bddd0e92 by Kristján Valur Jónsson in branch 'default': Issue #14310: inter-process socket duplication for windows http://hg.python.org/cpython/rev/51b4bddd0e92 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14310 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14310] Socket duplication for windows
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- resolution: - fixed status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14310 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14520] Buggy Decimal.__sizeof__
Stefan Krah stefan-use...@bytereef.org added the comment: It isn't implemented at all. The Python version also always returns 96, irrespective of the coefficient length. Well, arguably the coefficient is a separate object in the Python version:

    >>> sys.getsizeof(d)
    96
    >>> sys.getsizeof(d._int)
    212

For the C version I'll do the same as in longobject.c. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14520 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14520] Buggy Decimal.__sizeof__
Stefan Krah stefan-use...@bytereef.org added the comment: In full:

    >>> d = Decimal(1)
    >>> sys.getsizeof(d)
    96
    >>> sys.getsizeof(d._int)
    212

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14520 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
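A toy sketch of what "do the same as in longobject.c" amounts to at the Python level: a __sizeof__ that adds the size of the variable-length data the object owns to its fixed size (the Wrapper class is invented for illustration; note that sys.getsizeof() adds GC header overhead on top of what __sizeof__ reports):

```python
import sys

class Wrapper:
    def __init__(self, payload):
        self.payload = payload

    def __sizeof__(self):
        # fixed-size part plus the owned variable-length part
        return object.__sizeof__(self) + sys.getsizeof(self.payload)

w = Wrapper('x' * 100)
assert sys.getsizeof(w) > sys.getsizeof(w.payload)
```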
[issue14478] Decimal hashing very slow, could be cached
Serhiy Storchaka storch...@gmail.com added the comment: I recommend that __hash__ should use functools.lru_cache for caching. Why would you do such a thing? A hash value is a single 64-bit slot, no need to add the memory consumption of a whole dictionary and the runtime cost of a LRU eviction policy when you can simply cache the hash in the object itself (like we already do for strings)... It was a joke (I think), taking into account the fact that the LRU cache uses a hashtable and needs to calculate the hash of its arguments (i.e., the Decimal self) to get the cached value of the hash. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14478 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14478] Decimal hashing very slow, could be cached
Antoine Pitrou pit...@free.fr added the comment: I recommend that __hash__ should use functools.lru_cache for caching. Why would you do such a thing? A hash value is a single 64-bit slot, no need to add the memory consumption of a whole dictionary and the runtime cost of a LRU eviction policy when you can simply cache the hash in the object itself (like we already do for strings)... It was a joke (I think). Taking into account the fact that LRU cache uses a hashtable and need to calculate the hash of arguments (i.e., the Decimal self) to get the cached value of hash. Damn. Shame on me for not understanding Raymond's humour :-) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14478 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10576] Add a progress callback to gcmodule
Kristján Valur Jónsson krist...@ccpgames.com added the comment: Here is an updated patch, taking Jim's and Antoine's comments into account. Jim, I'd like to comment that I think the reason __del__ objects are uncollectable is more subtle than there being no defined order for calling the __del__ functions. More significantly, no Python code may be executed during an implicit garbage collection. Now, it is possible that one could clean up cycles containing only one __del__ method during _explicit_ collections (calling gc.collect()), but it hardly seems worth the effort. -- Added file: http://bugs.python.org/file25150/gccallback.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10576 ___
[issue10576] Add a progress callback to gcmodule
Antoine Pitrou pit...@free.fr added the comment: Uploaded another review. I also notice you didn't really address my point, since self.visit is still initialized too early. IMO it should be initialized after the first gc.collect() at the beginning of each test (not in setUp()). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10576 ___
[issue3561] Windows installer should add Python and Scripts directories to the PATH environment variable
Brian Curtin br...@python.org added the comment: Attached is issue3561.diff, which adds a path option, off by default, as a feature to be installed. I've tested installation and un-installation with the feature both installed and not installed, and it seems to work fine for me. http://briancurtin.com/python-dev/python-3.3.15437.msi is an installer built with this patch. http://briancurtin.com/python-dev/CustomizePage.png is simply a screenshot of the page where you choose to enable this feature. -- keywords: +needs review stage: -> patch review Added file: http://bugs.python.org/file25151/issue3561.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue3561 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

Jimbofbx wrote:

    def main():
        from multiprocessing import Pipe, reduction
        i, o = Pipe()
        print(i)
        reduced = reduction.reduce_connection(i)
        print(reduced)
        newi = reduced[0](*reduced[1])
        print(newi)
        newi.send("hi")
        o.recv()

On Windows with a PipeConnection object you should use rebuild_pipe_connection() instead of rebuild_connection(). With that change, on Python 3.3 I get

    <multiprocessing.connection.PipeConnection object at 0x025BBCB0>
    (<function rebuild_pipe_connection at 0x0262F420>, (('.\\pipe\\pyc-6000-1-30lq4p', 356, False), True, True))
    <multiprocessing.connection.PipeConnection object at 0x029FF710>

Having said all that, I agree multiprocessing.reduction should be fixed. Maybe an enable_pickling_support() function could be added to register the necessary things with copyreg.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue13621] Unicode performance regression in python3.3 vs python3.2
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13621 ___
[issue13031] small speed-up for tarfile.py when unzipping tarballs
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13031 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
Antoine Pitrou pit...@free.fr added the comment:

> Having said all that I agree multiprocessing.reduction should be fixed. Maybe an enable_pickling_support() function could be added to register the necessary things with copyreg.

Why not simply use ForkingPickler?

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue10408] Denser dicts and linear probing
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10408 ___
[issue14478] Decimal hashing very slow, could be cached
Changes by Raymond Hettinger raymond.hettin...@gmail.com: -- keywords: +patch Added file: http://bugs.python.org/file25152/decimal_hash.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14478 ___
[issue14511] _static/opensearch.xml for Python 3.2 docs directs searches to 3.3 docs
Roundup Robot devn...@psf.upfronthosting.co.za added the comment:

New changeset 7f123dec2731 by Georg Brandl in branch '3.2':
Closes #14511: fix wrong opensearch link for 3.2 docs.
http://hg.python.org/cpython/rev/7f123dec2731

New changeset 57a8a8f5e0bc by Georg Brandl in branch 'default':
Closes #14511: merge with 3.2
http://hg.python.org/cpython/rev/57a8a8f5e0bc

-- nosy: +python-dev resolution: -> fixed stage: -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14511 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

ForkingPickler is only used when creating a child process. The multiprocessing.reduction module is only really intended for sending stuff to *pre-existing* processes.

As things stand, after importing multiprocessing.reduction you can do something like

    buf = io.BytesIO()
    pickler = ForkingPickler(buf)
    pickler.dump(conn)
    data = buf.getvalue()
    writer.send_bytes(data)

But that is rather less simple and obvious than just doing

    writer.send(conn)

which was possible in pyprocessing. Originally, just importing the module magically registered the reduce functions with copyreg. Since this was undesirable, the reduction functions were instead registered with ForkingPickler. But this fix rather missed the point of the module.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue13126] find() slower than rfind()
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13126 ___
[issue12805] Optimizations for bytes.join() et. al
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12805 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
Antoine Pitrou pit...@free.fr added the comment:

> ForkingPickler is only used when creating a child process. The multiprocessing.reduction module is only really intended for sending stuff to *pre-existing* processes.

But ForkingPickler could be used in multiprocessing.connection, couldn't it?

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14478] Decimal hashing very slow, could be cached
Antoine Pitrou pit...@free.fr added the comment:

On Saturday, 7 April 2012 at 17:22 +0000, Raymond Hettinger wrote:
> -- keywords: +patch
> Added file: http://bugs.python.org/file25152/decimal_hash.diff

I think patching the C version of Decimal would be more useful :)

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14478 ___
[issue2377] Replace __import__ w/ importlib.__import__
Brett Cannon br...@python.org added the comment:

On Fri, Apr 6, 2012 at 16:05, Antoine Pitrou rep...@bugs.python.org wrote:
> Antoine Pitrou pit...@free.fr added the comment:
> > OK, -v/PYTHONVERBOSE is as done as it is going to be by me. Next up is (attempting) Windows registry stuff. After that I will push to default with test_trace and test_pydoc skipped so others can help me with those.
> Skipped? How so?

By raising unittest.SkipTest. I already know how to fix pydoc, but I need to get module names attached to ImportError and I don't want to bother with that until importlib is in (else it will be weird having it added into default but not in Python/import.c). As for trace, I have not looked at it, but I know what the failure is caused by and it's a question of how best to deal with it.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2377 ___
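Raising unittest.SkipTest inside a test method marks that test as skipped rather than failed, which is the mechanism Brett describes. A minimal sketch (the test class, method name, and skip message here are invented for illustration, not the actual test_pydoc code):

```python
import io
import unittest

class TestPydoc(unittest.TestCase):
    def test_describe(self):
        # Stand-in for a test the importlib bootstrap cannot yet pass.
        raise unittest.SkipTest("module names not yet attached to ImportError")

# Run the suite quietly and inspect the result object.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPydoc)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.testsRun, len(result.skipped))   # 1 1
```

The skipped test is recorded in `result.skipped` rather than `result.failures`, so the overall run still counts as successful.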
[issue4892] Sending Connection-objects over multiprocessing connections fails
James Hutchison jamesghutchi...@gmail.com added the comment:

@pitrou: You can just delete my original post. I'll repost an edited version here for reference.

Original post, with paths removed:

This is an issue for me (Python 3.2). I have a custom pool that sends arguments for a function call over a pipe. I cannot send another pipe as an argument. Tim's workaround also does not work for me (Win XP, 32-bit and 64-bit). From what I can tell, you can only send a connection as a direct argument to a function call. This limits what I can do, because I cannot introduce new pipes to a worker process after it is instantiated.

Using this code:

    def main():
        from multiprocessing import Pipe, reduction
        i, o = Pipe()
        print(i)
        reduced = reduction.reduce_connection(i)
        print(reduced)
        newi = reduced[0](*reduced[1])
        print(newi)
        newi.send("hi")
        o.recv()

    if __name__ == "__main__":
        main()

This is my output:

    <read-write PipeConnection, handle 1760>
    (<function rebuild_connection at 0x00FD4C00>, (('.\\pipe\\pyc-3156-1-q5wwnr', 1756, False), True, True))
    <read-write Connection, handle 1720>
    IOError: [Errno 10038] An operation was attempted on something that is not a socket

As you can see, the handle changes.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
Changes by Antoine Pitrou pit...@free.fr: -- Removed message: http://bugs.python.org/msg157702 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue12986] Using getrandbits() in uuid.uuid4() is faster and more readable
Serhiy Storchaka storch...@gmail.com added the comment:

    uuids = set()
    for u in [uuid.uuid4() for i in range(1000)]:
        uuids.add(u)

can be written as

    uuids = {uuid.uuid4() for i in range(1000)}

However, I'm not sure of the legitimacy of replacing `os.urandom`, which is suitable for cryptographic use, with the fast pseudo-random `random.getrandbits`, especially for applications that need to generate a lot of uuids.

-- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12986 ___
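The trade-off under discussion can be sketched as follows. The function name `uuid4_fast` is hypothetical; it swaps the `os.urandom(16)` call that backs `uuid.uuid4()` for Mersenne Twister output, which is faster but not cryptographically secure:

```python
import random
import uuid

def uuid4_fast():
    # Hypothetical variant: 128 pseudo-random bits from random.getrandbits()
    # instead of os.urandom(16).  Passing version=4 makes uuid.UUID set the
    # RFC 4122 version and variant bits for us.
    return uuid.UUID(int=random.getrandbits(128), version=4)

u = uuid4_fast()
print(u.version)   # 4
```

The result is structurally a valid version-4 UUID either way; the only difference is the quality of the randomness source.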
[issue11981] dupe self.fp.tell() in zipfile.ZipFile.writestr
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11981 ___
[issue10376] ZipFile unzip is unbuffered
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10376 ___
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

> But ForkingPickler could be used in multiprocessing.connection, couldn't it?

I suppose so.

Note that the way a connection handle is transferred between existing processes is unnecessarily inefficient on Windows. A background server thread (one per process) has to be started, and the receiving process must connect back to the sending process to receive its duplicate handle.

There is a simpler way to do this on Windows. The sending process duplicates the handle, and the receiving process duplicates that second handle using DuplicateHandle() and the DUPLICATE_CLOSE_SOURCE flag. That way no server thread is necessary on Windows. I recently got this to work for pickling references to file handles for mmaps. (A server thread would still be necessary on Unix.)

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14521] math.copysign(1., float('nan')) returns -1.
New submission from mattip matti.pi...@gmail.com:

    Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import math
    >>> math.copysign(1., float('inf'))
    1.0
    >>> math.copysign(1., float('-inf'))
    -1.0
    >>> math.copysign(1., float('nan'))
    -1.0
    >>> math.copysign(1., float('-nan'))
    1.0

-- components: None messages: 157746 nosy: mattip priority: normal severity: normal status: open title: math.copysign(1., float('nan')) returns -1. type: behavior versions: Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14521 ___
[issue6972] zipfile.ZipFile overwrites files outside destination path
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6972 ___
[issue2824] zipfile to handle duplicate files in archive
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2824 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
New submission from sbt shibt...@gmail.com: In multiprocessing.connection on Windows, socket handles are indirectly duplicated using DuplicateHandle() instead of WSADuplicateSocket(). According to Microsoft's documentation this is not supported. This is easily avoided by using socket.detach() instead of duplicating the handle. -- files: mp_socket_dup.patch keywords: patch messages: 157747 nosy: sbt priority: normal severity: normal status: open title: Avoid using DuplicateHandle() on sockets in multiprocessing.connection Added file: http://bugs.python.org/file25153/mp_socket_dup.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
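The idea behind socket.detach() is that ownership of the underlying descriptor/handle is handed over without any DuplicateHandle()-style call. A small Unix sketch of the mechanism (the patch itself concerns Windows, where the same detach() method applies to the native socket handle):

```python
import os
import socket

# A connected pair of sockets to demonstrate with.
a, b = socket.socketpair()

# detach() returns the raw descriptor and gives up ownership:
# 'a' will no longer close it, and no duplication has happened.
fd = a.detach()

# Rebuild a socket object around the raw descriptor.  fromfd()
# duplicates the descriptor, so the detached original is closed by hand.
a2 = socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_STREAM)
os.close(fd)

# The rebuilt socket is still connected to 'b'.
b.sendall(b"ping")
data = a2.recv(4)          # b'ping'

a2.close()
b.close()
```

In multiprocessing the detached handle can then be transferred to the receiving process, which wraps it back into a socket there, avoiding the unsupported DuplicateHandle()-on-sockets path.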
[issue14521] math.copysign(1., float('nan')) returns -1.
Martin v. Löwis mar...@v.loewis.de added the comment: This is a near duplicate of issue7281. Most likely, copysign is behaving correctly, and it's already the float conversion that errs. For struct.pack('d', float('nan')), I get '\x00\x00\x00\x00\x00\x00\xf8\xff'; for -nan, I get '\x00\x00\x00\x00\x00\x00\xf8\x7f'; ISTM that this has the sign bits switched. -- nosy: +loewis ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14521 ___
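The sign bit Martin is reading off the packed bytes can be inspected directly. A small sketch: whether `float('-nan')` really yields a negative NaN depends on the platform's C library (which is the bug here), but a NaN built by hand from its big-endian IEEE-754 bytes is unambiguous:

```python
import math
import struct

def sign_bit(x):
    # Big-endian IEEE-754: the sign is the top bit of the first byte.
    return struct.pack('>d', x)[0] >> 7

# A quiet NaN with the sign bit explicitly set (big-endian bytes).
neg_nan = struct.unpack('>d', b'\xff\xf8\x00\x00\x00\x00\x00\x00')[0]

# copysign() just propagates that bit, even for NaNs.
result = math.copysign(1.0, neg_nan)   # -1.0
```

So copysign itself is consistent; the reported weirdness comes from float('nan') / float('-nan') producing values with the "wrong" sign bit on that platform.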
[issue4892] Sending Connection-objects over multiprocessing connections fails
sbt shibt...@gmail.com added the comment:

> There is a simpler way to do this on Windows. The sending process duplicates the handle, and the receiving process duplicates that second handle using DuplicateHandle() and the DUPLICATE_CLOSE_SOURCE flag. That way no server thread is necessary on Windows.

Note that this should not be done for socket handles, since DuplicateHandle() is not supposed to work for them. socket.share() and socket.fromshare() with a server thread can be used for sockets.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4892 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
Martin v. Löwis mar...@v.loewis.de added the comment: What is the bug that this fixes? Can you provide a test case? -- nosy: +loewis ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue6839] zipfile can't extract file
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6839 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
Changes by sbt shibt...@gmail.com: Removed file: http://bugs.python.org/file25153/mp_socket_dup.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue4844] ZipFile doesn't range check in _EndRecData()
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4844 ___
[issue10905] zipfile: fix arcname with leading '///' or '..'
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10905 ___
[issue14522] Avoid using DuplicateHandle() on sockets in multiprocessing.connection
sbt shibt...@gmail.com added the comment:

> What is the bug that this fixes? Can you provide a test case?

The bug is using an API in a way that the documentation says is wrong/unreliable. There does not seem to be a classification for that. I have never seen a problem caused by using DuplicateHandle(), so I cannot provide a test case. Note that socket.dup() used to be implemented using DuplicateHandle(), but that was changed to WSADuplicateSocket(). See Issue 9753.

-- Added file: http://bugs.python.org/file25154/mp_socket_dup.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14522 ___
[issue10614] ZipFile: add a filename_encoding argument
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10614 ___