ANN: psutil 0.4.0 released
Hi folks,

I'm pleased to announce the 0.4.0 release of psutil:
http://code.google.com/p/psutil

=== About ===

psutil is a module providing an interface for retrieving information on all running processes and system utilization (CPU, disk, memory, network) in a portable way by using Python, implementing many of the functions offered by command line tools such as ps, top, free, lsof and others. It works on Linux, Windows, OSX and FreeBSD, both 32-bit and 64-bit, with Python versions from 2.4 to 3.3, using a single code base.

=== Major enhancements ===

Aside from fixing several high-priority bugs, this release introduces two important new features: disk and network IO counters. With these you can monitor disk usage and network traffic. Three scripts were added to provide examples of the kind of applications that can be written with these two hooks:

http://code.google.com/p/psutil/source/browse/trunk/examples/iotop.py
http://code.google.com/p/psutil/source/browse/trunk/examples/nettop.py
http://code.google.com/p/psutil/source/browse/trunk/examples/top.py

...and here you can see some screenshots:
http://code.google.com/p/psutil/#Example_applications

=== Other enhancements ===

- Process.get_connections() has a new 'kind' parameter to filter connections by different criteria.
- A timeout=0 parameter can now be passed to Process.wait() to make it return immediately (non-blocking).
- A Python 3.2 installer for Windows 64-bit is now provided in the downloads section.
- (FreeBSD) added support for Process.getcwd()
- (FreeBSD) Process.get_open_files() has been rewritten in C and no longer relies on lsof.
- Various crashes on module import across different platforms were fixed.
For a complete list of features and bug fixes see:
http://psutil.googlecode.com/svn/trunk/HISTORY

=== New features by example ===

>>> import psutil
>>> psutil.disk_io_counters()
iostat(read_count=8141, write_count=2431, read_bytes=290203, write_bytes=537676, read_time=5868, write_time=94922)
>>> psutil.disk_io_counters(perdisk=True)
{'sda1': iostat(read_count=8141, write_count=2431, read_bytes=290203, write_bytes=537676, read_time=5868, write_time=94922),
 'sda2': iostat(read_count=811241, write_count=31, read_bytes=1245, write_bytes=11246, read_time=768008, write_time=922)}

>>> psutil.network_io_counters()
iostat(bytes_sent=1270374, bytes_recv=7828365, packets_sent=9810, packets_recv=11794)
>>> psutil.network_io_counters(pernic=True)
{'lo': iostat(bytes_sent=800251705, bytes_recv=800251705, packets_sent=455778, packets_recv=455778),
 'eth0': iostat(bytes_sent=813731756, bytes_recv=4183672213, packets_sent=3771021, packets_recv=4199213)}

>>> import os
>>> p = psutil.Process(os.getpid())
>>> p.get_connections(kind='tcp')
[connection(fd=115, family=2, type=1, local_address=('10.0.0.1', 48776), remote_address=('93.186.135.91', 80), status='ESTABLISHED')]
>>> p.get_connections(kind='udp6')
[]
>>> p.get_connections(kind='inet6')
[]

=== Links ===

* Home page: http://code.google.com/p/psutil
* Source tarball: http://psutil.googlecode.com/files/psutil-0.4.0.tar.gz
* API Reference: http://code.google.com/p/psutil/wiki/Documentation

As a final note I'd like to thank Jeremy Whitlock, who kindly contributed the disk/network IO counters code for OSX and Windows.

Please try out this new release and let me know if you experience any problems by filing issues on the bug tracker. Thanks in advance.

--- Giampaolo Rodola'
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
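The counters shown above are cumulative since boot, so a monitoring tool samples them twice and divides by the interval. Below is a minimal sketch of that pattern; it uses a stand-in counter function rather than psutil itself (psutil may not be installed, and later releases renamed some of these calls), so only the sampling technique is being shown:

```python
import time
from collections import namedtuple

# Stand-in for psutil.network_io_counters(); 'iostat' mirrors the
# namedtuple shape shown in the examples above.
iostat = namedtuple('iostat', 'bytes_sent bytes_recv')

_sent = [0]
def sample_counters():
    # Pretend exactly 1 KiB went out since the previous call.
    _sent[0] += 1024
    return iostat(bytes_sent=_sent[0], bytes_recv=0)

def send_rate(sample, interval=0.1):
    """Bytes/second sent, measured over 'interval' seconds."""
    before = sample()
    time.sleep(interval)
    after = sample()
    return (after.bytes_sent - before.bytes_sent) / interval

print(send_rate(sample_counters))
```

A real script would pass psutil's counter function in place of `sample_counters`; the iotop.py and nettop.py examples linked above are built around the same two-sample idea.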
[ANN] SciPy India 2011 Abstracts due November 2nd
== SciPy India 2011 Call for Papers ==

The third SciPy India Conference (http://scipy.in) will be held from December 4th through the 7th at the Indian Institute of Technology, Bombay (IITB, http://www.iitb.ac.in/) in Mumbai, Maharashtra, India.

At this conference, novel applications and breakthroughs made in the pursuit of science using Python are presented. Attended by leading figures from both academia and industry, it is an excellent opportunity to experience the cutting edge of scientific software development. The conference is followed by two days of tutorials and a code sprint, during which community experts provide training on several scientific Python packages.

We invite you to take part by submitting a talk abstract on the conference website at: http://scipy.in

== Talk/Paper Submission ==

We solicit talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics regarding scientific computing using Python, including applications, teaching, development and research. We welcome contributions from academia as well as industry.

== Important Dates ==

* November 2, 2011, Wednesday: Abstracts due
* November 7, 2011, Monday: Schedule announced
* November 28, 2011, Monday: Proceedings paper submission due
* December 4-5, 2011, Sunday-Monday: Conference
* December 6-7, 2011, Tuesday-Wednesday: Tutorials/Sprints

== Organizers ==

* Jarrod Millman, Neuroscience Institute, UC Berkeley, USA (Conference Co-Chair)
* Prabhu Ramachandran, Department of Aerospace Engineering, IIT Bombay, India (Conference Co-Chair)
* FOSSEE Team

--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
WSME 0.2.0 released
About WSME
----------

WSME (Web Service Made Easy) is a very easy way to implement web services in your Python web application (or standalone).

Main Changes
------------

* Added batch-call abilities.
* Introduced a :class:`UnsetType` and a :data:`Unset` constant so that non-mandatory attributes can remain unset (which is different from null).
* Added support for user types.
* Added an Enum type (which is a user type).
* Various fixes.

More details at http://packages.python.org/WSME/changes.html

Documentation: http://packages.python.org/WSME/

Download:
http://pypi.python.org/pypi/WSME/
http://pypi.python.org/pypi/WSME-Soap/
http://pypi.python.org/pypi/WSME-ExtDirect/

Cheers,

Christophe de Vienne
--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
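The Unset idea above — telling "attribute never set" apart from "attribute set to null" — is a generally useful sentinel pattern. The sketch below is not WSME's actual implementation, just a minimal illustration of the concept:

```python
class UnsetType(object):
    """Singleton sentinel meaning 'no value given', distinct from None."""
    _instance = None

    def __new__(cls):
        # Always hand back the same object so 'is Unset' checks work.
        if cls._instance is None:
            cls._instance = super(UnsetType, cls).__new__(cls)
        return cls._instance

    def __bool__(self):
        return False

    def __repr__(self):
        return 'Unset'

Unset = UnsetType()

def describe(value=Unset):
    # 'value is Unset' means the caller passed nothing at all;
    # None remains a perfectly valid explicit value.
    if value is Unset:
        return 'not provided'
    return 'provided: %r' % (value,)

print(describe())      # not provided
print(describe(None))  # provided: None
```

The key design point is using identity (`is Unset`) rather than equality or truthiness, so no legitimate user value can collide with the sentinel.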
Re: Fast recursive generators?
I am thinking the byte code compiler in Python could be faster if all instances known to be immutable before execution were compiled into constant objects ready to be assigned.
--
http://mail.python.org/mailman/listinfo/python-list
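For what it's worth, CPython's peephole optimizer already does a limited version of this: expressions built only from immutable constants are folded at compile time, and the result is stored in the code object's constants.

```python
import dis

# '3 * 7' is folded to 21 at compile time; no multiplication
# happens when this code object runs.
code = compile("x = 3 * 7", "<example>", "exec")
print(code.co_consts)
dis.dis(code)
```

The disassembly shows a single LOAD_CONST of the folded value rather than two loads and a BINARY_MULTIPLY/BINARY_OP, which is the effect the post is asking about, applied to constant expressions.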
Re: Need Windows user / developer to help with Pynguin
Thanks to all those who tested and replied!

> For Windows users who want to just run Pynguin (not modify or tinker with the source code), it would be best to bundle Pynguin up with Py2exe

I considered that, but I agree that licensing issues would make it problematic.

> the Python installer associates .py and .pyw files with python.exe and pythonw.exe respectively, so if you add the extension as Terry mentioned, it should work.

I think this is the best way to go.

> pynguin = pynguin.py
> Would it be better to go with pynguin.pyw?

Actually, I was thinking of just copying that script to something like run_pynguin.pyw

Anyone able to test if just making that single change makes it so that a user can just unpack the .zip file and double click on run_pynguin.pyw to run the app?

> Win7, with zip built in, just treats x.zip as a directory in Explorer

So, is that going to be a problem? The user just sees it as a folder, goes in and double-clicks on run_pynguin.pyw, and once python is running what does it see? Are the contents of the virtually unpacked folder going to be available to the script?

> Your pynguin.zip contains one top level file -- a directory called pynguin that contains multiple files

I feel that this is the correct way to create a .zip file. I have run into so many poorly formed .zip files (ones that extract all of their files to the current directory) that when extracting any .zip I always create a dummy folder and put the .zip in there before extracting. Mine is actually called pynguin-0.12.zip and extracts to a folder called pynguin-0.12

> Extracting pynguin.zip to a pynguin directory in the same directory as pynguin.zip, the default behavior with Win7 at least, creates a new pynguin directory that contains the extracted pynguin directory.

So, windows now creates the dummy folder automatically? Is the problem that the .zip has the same name (minus the extension)? Would it solve the problem to just change the name of the archive to something like pynguin012.zip?
> README = README.txt

Just out of curiosity, what happens if you double-click the README sans .txt? Does it make you choose which app to open with?
--
http://mail.python.org/mailman/listinfo/python-list
Re: Assigning generator expressions to ctype arrays
On Oct 28, 3:24 pm, Terry Reedy tjre...@udel.edu wrote:
On 10/28/2011 2:05 PM, Patrick Maupin wrote:
On Oct 27, 10:23 pm, Terry Reedy tjre...@udel.edu wrote:

I do not think everyone else should suffer substantial increase in space and run time to avoid surprising you.

What substantial increase?

of time and space, as I said, for the temporary array that I think would be needed and which I also described in the previous paragraph that you clipped

That's because I don't think it needs a temporary array. A temporary array would provide some invariant guarantees that are nice but not necessary in a lot of real-world cases. There's already a check that winds up raising an exception. Just make it empty an iterator instead.

It? I have no idea what you intend that to refer to.

Sorry, "code path". There is already a code path that says "hey, I can't handle this". To modify this code path to handle the case of a generic iterable would add a tiny bit of code, but would not add any appreciable space (no temp array needed for my proposal) and would not add any runtime for people who are not passing in iterables or doing other things that currently raise exceptions.

I doubt it would be very many because it is *impossible* to make it work in the way that I think people would want it to.

How do you know? I have a use case that I really don't think is all that rare. I know exactly how much data I am generating, but I am generating it piecemeal using iterators.

It could, but at some cost. Remember, people use ctypes for efficiency,

yes, you just made my argument for me. Thank you. It is incredibly inefficient to have to create a temp array.

No, I don't think I did make your argument for you. I am currently making a temp list because I have to, and am proposing that with a small change to the ctypes library, that wouldn't always need to be done.

But necessary to work with black box iterators.

With your own preconceived set of assumptions.
(Which I will admit, you have given quite a bit of thought to, which I appreciate.)

Now you are agreeing with my argument.

Nope, still not doing that.

If ctype_array slice assignment were to be augmented to work with iterators, that would, in my opinion (and see below),

That's better for not being absolute. Thank you for admitting other possibilities.

require use of temporary arrays. Since slice assignment does not use temporary arrays now (see below), that augmentation should be conditional on the source type being a non-sequence iterator.

I don't think any temporary array is required, but in any case, yes, the code path through the ctypes array library __setslice__ would have to be modified where it gives up now, in order to decide to do something different if it is passed an iterable.

CPython comes with immutable fixed-length arrays (tuples) that do not allow slice assignment and mutable variable-length arrays (lists) that do. The definition is 'replace the indicated slice with a new slice built from all values from an iterable'.

Point 1: This works for any properly functioning iterable that produces any finite number of items.

Agreed.

Iterators are always exhausted.

And my proposal would continue to exhaust iterators, or would raise an exception if the iterator wasn't exhausted.

Replace can be thought of as delete followed by add, but the implementation is not that naive.

Sure, on a mutable-length item.

Point 2: If anything goes wrong and an exception is raised, the list is unchanged.

This may be true on lists, and is quite often true (and is nice when it happens), but it isn't always true in general. For example, with the current tuple packing/unpacking protocol across an assignment, the only real guarantee is that everything is gathered up into a single object before the assignment is done. It is not the case that nothing will be unpacked unless everything can be unpacked.
For example:

>>> a,b,c,d,e,f,g,h,i = range(100,109)
>>> (a,b,c,d), (e,f), (g,h,i) = (1,2,3,4), (5,6,7), (8,9)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: too many values to unpack
>>> a,b,c,d,e,f,g,h,i
(1, 2, 3, 4, 104, 105, 106, 107, 108)

This means that there must be temporary internal storage of either old or new references. As I show with the tuple unpacking example, it is not an inviolate law that Python won't unpack a little unless it can unpack everything.

An example that uses an improperly functioning generator. (snip)

Yes, I agree that lists are wondrously robust. But one of the reasons for this is the flexible interpretation of slice start and end points, which can be as surprising to a beginner as anything I'm proposing.

A c_uint array is a new kind of beast: a fixed-length mutable array. So it has to have a different definition of slice assignment than lists. Thomas Heller, the ctypes author, apparently chose 'replacement by a sequence with exactly the same number of items, else raise
Re: Need Windows user / developer to help with Pynguin
On 10/29/2011 9:43 AM, Lee Harr wrote:
> So, windows now creates the dummy folder automatically?

That is the default choice, but users are given a prompt to choose an arbitrary directory. Note that this only applies to the ZIP extractor in Explorer; other archive programs have their own behavior. I agree with you on having a top-level directory in an archive, but MS figures users are more likely to be annoyed with files scattered around the current directory than a nested directory. Unfortunately, many archives out in the wild have a top-level directory while many others don't, so one can rarely ever be certain how a given archive is organized without opening it.

> Is the problem that the .zip has the same name (minus the extension)?

Not at all.

> Just out of curiosity, what happens if you double-click the README sans .txt? Does it make you choose which app to open with?

Typically, that is the case because files without extensions are not registered by default.

--
CPython 3.2.2 | Windows NT 6.1.7601.17640 | Thunderbird 7.0
--
http://mail.python.org/mailman/listinfo/python-list
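Since one can rarely be certain how an archive is organized without opening it, a script can check the "single top-level directory" convention before extracting. A small sketch with the stdlib zipfile module (the file names are made up for the example):

```python
import io
import zipfile

def single_top_dir(names):
    """Return the sole top-level entry of an archive, or None."""
    tops = {n.split('/', 1)[0] for n in names if n.strip('/')}
    return tops.pop() if len(tops) == 1 else None

# Build a small archive in memory with one top-level directory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('pynguin-0.12/run_pynguin.pyw', '# launcher stub')
    z.writestr('pynguin-0.12/pynguin/__init__.py', '')

with zipfile.ZipFile(buf) as z:
    print(single_top_dir(z.namelist()))  # pynguin-0.12
```

An extractor could use this to decide whether to create a wrapper directory itself, mimicking what the Explorer extractor does by default.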
Re: save tuple of simple data types to disk (low memory foot print)
On 10/29/2011 03:00 AM, Steven D'Aprano wrote:
> On Fri, 28 Oct 2011 22:47:42 +0200, Gelonida N wrote:
>> Hi, I would like to save many dicts with a fixed amount of keys tuples to a file in a memory efficient manner (no random, but only sequential access is required)
> What do you mean "keys tuples"?

Corrected phrase: I would like to save many dicts with a fixed (and known) amount of keys in a memory efficient manner (no random, but only sequential access is required) to a file (which can later be sent over a slow expensive network to other machines)

Example: Every dict will have the keys 'timestamp', 'floatvalue', 'intvalue', 'message1', 'message2'.
- 'timestamp' is an integer
- 'floatvalue' is a float
- 'intvalue' an int
- 'message1' is a string with a length of max 2000 characters, but can often be very short
- 'message2' the same as message1

so a typical dict will look like
{ 'timestamp': 12, 'floatvalue': 3.14159, 'intvalue': 42, 'message1': '', 'message2': '=' * 1999 }

> What do you call "many"? Fifty? A thousand? A thousand million? How many items in each dict? Ten? A million?

File size can be between 100kb and over 100Mb per file. Files will be accumulated over months. I just want to use the smallest possible space, as the data is collected over a certain time (days / months) and will be transferred via UMTS / EDGE / GSM network, where the transfer of even quite small data sets already takes several minutes. I want to reduce the transfer time when requesting files on demand (and the amount of data, in order to not exceed the monthly quota).

>> As the keys are the same for each entry I considered converting them to tuples.
> I don't even understand what that means. You're going to convert the keys to tuples? What will that accomplish?

As the keys are the same for each entry I considered converting them (the before mentioned dicts) to tuples.
so the dict
{ 'timestamp': 12, 'floatvalue': 3.14159, 'intvalue': 42, 'message1': '', 'message2': '=' * 1999 }
would become
[ 12, 3.14159, 42, '', '=' * 1999 ]

The tuples contain only strings, ints (long ints) and floats (double), and the data types for each position within the tuple are fixed.

>> The fastest and simplest way is to pickle the data or to use json. Both formats however are not that optimal.
> How big are your JSON files? 10KB? 10MB? 10GB? Have you tried using pickle's space-efficient binary format instead of text format? Try using protocol=2 when you call pickle.Pickler.

No. This is probably already a big step forward. As I know the data types of each element in the tuple, I would however prefer a representation which is not storing the data types for each tuple over and over again (as they are the same for each dict / tuple).

> Or have you considered simply compressing the files?

Compression makes sense, but the initial file format should already be rather 'compact'. I could store ints and floats with pack. As strings have variable length I'm not sure how to save them efficiently (except adding a length first and then the string).

> This isn't 1980 and you're very unlikely to be using 720KB floppies. Premature optimization is the root of all evil. Keep in mind that when you save a file to disk, even if it contains only a single bit of data, the actual space used will be an entire block, which on modern hard drives is very likely to be 4KB. Trying to compress files smaller than a single block doesn't actually save you any space.

>> Is there already some 'standard' way or standard library to store such data efficiently?
> Yes. Pickle and JSON plus zip or gzip.

pickle protocol-2 + gzip of the tuple derived from the dict might be good enough for a start. I have to create a little more typical data in order to see how many percent of my payload would consist of repeating the data types for each tuple.
--
http://mail.python.org/mailman/listinfo/python-list
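The "length first, then the string" idea discussed above can be sketched with the stdlib struct module. The '<qdi' header layout and the 2-byte length prefixes below are assumptions for illustration, not an established format:

```python
import struct

_HEAD = '<qdi'  # int64 timestamp, float64, int32 -- an assumed layout
_LEN = '<H'     # 2-byte length prefix: plenty for 2000-char messages

def pack_record(timestamp, floatvalue, intvalue, message1, message2):
    parts = [struct.pack(_HEAD, timestamp, floatvalue, intvalue)]
    for msg in (message1, message2):
        raw = msg.encode('utf-8')
        parts.append(struct.pack(_LEN, len(raw)))
        parts.append(raw)
    return b''.join(parts)

def unpack_record(buf):
    fields = list(struct.unpack_from(_HEAD, buf))
    offset = struct.calcsize(_HEAD)
    for _ in range(2):
        (n,) = struct.unpack_from(_LEN, buf, offset)
        offset += struct.calcsize(_LEN)
        fields.append(buf[offset:offset + n].decode('utf-8'))
        offset += n
    return tuple(fields)

rec = (12, 3.14159, 42, '', '=' * 1999)
packed = pack_record(*rec)
print(len(packed))                    # 20-byte header + two prefixed strings
print(unpack_record(packed) == rec)   # True
```

No type information is repeated per record — the format string carries it once for the whole file — which is exactly the saving over pickle/JSON that the thread is after.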
Re: save tuple of simple data types to disk (low memory foot print)
On 10/29/2011 01:08 AM, Roy Smith wrote:
> In article mailman.2293.1319834877.27778.python-l...@python.org, Gelonida N gelon...@gmail.com wrote:
>> I would like to save many dicts with a fixed amount of keys tuples to a file in a memory efficient manner (no random, but only sequential access is required)
> There's two possible scenarios here. One, which you seem to be exploring, is to carefully study your data and figure out the best way to externalize it which reduces volume. The other is to just write it out in whatever form is most convenient (JSON is a reasonable thing to try first), and compress the output. Let the compression algorithms worry about extracting the entropy. You may be surprised at how well it works. It's also an easy experiment to try, so if it doesn't work well, at least it didn't cost you much to find out.

Yes, I have to make some more tests to see the difference between just compressing a plain format (JSON / pickle) and compressing the 'optimized' representation.
--
http://mail.python.org/mailman/listinfo/python-list
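That experiment is cheap to sketch with the stdlib; the sample record is the one from earlier in the thread, and the row count is arbitrary:

```python
import gzip
import json
import pickle

record = {'timestamp': 12, 'floatvalue': 3.14159, 'intvalue': 42,
          'message1': '', 'message2': '=' * 1999}
# Copy each dict so pickle cannot cheat by memoizing one shared object.
rows = [dict(record) for _ in range(100)]

raw_json = json.dumps(rows).encode('utf-8')
raw_pickle = pickle.dumps(rows, protocol=2)

for label, raw in (('json', raw_json), ('pickle-2', raw_pickle)):
    print(label, len(raw), '->', len(gzip.compress(raw)))
```

With data this repetitive (identical keys in every record, long runs of '='), gzip removes most of the per-record overhead on its own, which is Roy's point: measure before hand-optimizing the format.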
Re: save tuple of simple data types to disk (low memory foot print)
On 10/29/11 11:44, Gelonida N wrote:
> I would like to save many dicts with a fixed (and known) amount of keys in a memory efficient manner (no random, but only sequential access is required) to a file (which can later be sent over a slow expensive network to other machines)
>
> Example: Every dict will have the keys 'timestamp', 'floatvalue', 'intvalue', 'message1', 'message2'. 'timestamp' is an integer, 'floatvalue' is a float, 'intvalue' an int, 'message1' is a string with a length of max 2000 characters, but can often be very short, 'message2' the same as message1. so a typical dict will look like { 'timestamp': 12, 'floatvalue': 3.14159, 'intvalue': 42, 'message1': '', 'message2': '=' * 1999 }
>
>> What do you call "many"? Fifty? A thousand? A thousand million? How many items in each dict? Ten? A million?
>
> File size can be between 100kb and over 100Mb per file. Files will be accumulated over months.

If Steven's pickle-protocol2 solution doesn't quite do what you need, you can do something like the code below. Gzip is pretty good at addressing...

>> Or have you considered simply compressing the files?
> Compression makes sense but the initial file format should already be rather 'compact'

...by compressing out a lot of the duplicate aspects. Which also mitigates some of the verbosity of CSV. It serializes the data to a gzipped CSV file then unserializes it.
Just point it at the appropriate data-source, adjust the column-names and data-types.

-tkc

from gzip import GzipFile
from csv import writer, reader

data = [  # use your real data here
    {
        'timestamp': 12,
        'floatvalue': 3.14159,
        'intvalue': 42,
        'message1': 'hello world',
        'message2': '=' * 1999,
    },
] * 1

f = GzipFile('data.gz', 'wb')
try:
    w = writer(f)
    for row in data:
        w.writerow([row[name] for name in (
            # use your real col-names here
            'timestamp',
            'floatvalue',
            'intvalue',
            'message1',
            'message2',
        )])
finally:
    f.close()

output = []
for row in reader(GzipFile('data.gz')):
    d = dict(
        (name, f(row[i]))
        for i, (f, name) in enumerate((
            # adjust for your column-names/data-types
            (int, 'timestamp'),
            (float, 'floatvalue'),
            (int, 'intvalue'),
            (str, 'message1'),
            (str, 'message2'),
        ))
    )
    output.append(d)

# or

output = [
    dict(
        (name, f(row[i]))
        for i, (f, name) in enumerate((
            # adjust for your column-names/data-types
            (int, 'timestamp'),
            (float, 'floatvalue'),
            (int, 'intvalue'),
            (str, 'message1'),
            (str, 'message2'),
        ))
    )
    for row in reader(GzipFile('data.gz'))
]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Review Python site with useful code snippets
On Wed, Oct 26, 2011 at 3:51 PM, Chris Hall cha...@gmail.com wrote:
> I am looking to get reviews, comments, code snippet suggestions, and feature requests for my site. I intend to grow out this site with all kinds of real world code examples to learn from and use in everyday coding. The site is: http://www.pythonsnippet.com If you have anything to contribute or comment, please post it on the site or email me directly.

Great sentiment, but there is already http://code.activestate.com/, http://code.google.com/p/python-code-snippets/ and http://stackoverflow.com/questions/2032462/python-code-snippets. Pretty site you put up, though.
--
http://mail.python.org/mailman/listinfo/python-list
Customizing class attribute access in classic classes
Hi,

I'm wondering if there is any way to customize class attribute access on classic classes? So this works:

class Meta(type):
    def __getattr__(cls, name):
        return "Customized " + name

class A:
    __metaclass__ = Meta

print A.blah

but it turns A into a new-style class. If Meta does not inherit from type, the customization works but A ends up not being a class at all, severely restricting its usefulness. I then hoped I could get Meta to inherit from types.ClassType but that wasn't allowed either.

Is there any way to do this or is it just a limitation of classic classes?

Regards,
Geoff Bache
--
http://mail.python.org/mailman/listinfo/python-list
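For reference, here is the working new-style variant the question starts from, written runnably in Python 3 syntax (where `class A(metaclass=Meta)` replaces the Python 2 `__metaclass__` attribute); classic classes, which only exist in Python 2, have no equivalent hook:

```python
class Meta(type):
    def __getattr__(cls, name):
        # Called only for class attributes *not* found by normal lookup.
        return "Customized " + name

class A(metaclass=Meta):
    pass

print(A.blah)   # Customized blah
```

The lookup is customized on the class because attribute access on `A` consults `type(A)` — the metaclass — which is exactly why the trick cannot be ported to classic classes, whose type is the fixed `types.ClassType`.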
Re: Customizing class attribute access in classic classes
Geoff Bache geoff.ba...@gmail.com writes:
> I'm wondering if there is any way to customize class attribute access on classic classes?

Why do that? What is it you're hoping to achieve, and why limit it to classic classes only?

> So this works:
>
> class Meta(type):
>     def __getattr__(cls, name):
>         return "Customized " + name
>
> class A:
>     __metaclass__ = Meta
>
> print A.blah
>
> but it turns A into a new-style class.

Yes, A is a new-style class *because* its metaclass inherits from 'type' <URL:http://docs.python.org/reference/datamodel.html#new-style-and-classic-classes>. Why does that not meet your needs?

--
 \     "Contents of signature may settle during shipping."  |
  `\                                                        |
_o__)                                                       |
Ben Finney
--
http://mail.python.org/mailman/listinfo/python-list
Re: Convert DDL to ORM
On 10/25/2011 03:30 AM, Alec Taylor wrote:
> Good morning,
>
> I'm often generating DDLs from EER-Logical diagrams using tools such as PowerDesigner and Oracle Data Modeller. I've recently come across an ORM library (SQLAlchemy), and it seems like a quite useful abstraction. Is there a way to convert my DDL to ORM code?

It's called reverse engineering. Some ORMs, e.g. Django's ORM, can reverse engineer the database into Django models by using `./manage.py inspectdb`. I believe the equivalent in SQLAlchemy would be SQL Autocode; see http://turbogears.org/2.1/docs/main/Utilities/sqlautocode.html and http://code.google.com/p/sqlautocode/
--
http://mail.python.org/mailman/listinfo/python-list
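Tooling specifics aside, the core of reverse engineering is just reading the schema back out of a live database. A minimal stdlib illustration with sqlite3 (the `person` table is made up for the example; real tools emit ORM classes instead of printing):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)')

# The stored DDL, as a reverse-engineering tool would read it back.
(ddl,) = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type='table' AND name='person'"
).fetchone()
print(ddl)

# Column metadata -- the raw material for generating model classes.
for cid, name, coltype, notnull, default, pk in conn.execute(
        'PRAGMA table_info(person)'):
    print(name, coltype, 'pk' if pk else '')
```

Tools like inspectdb and SQL Autocode do essentially this through each database's introspection interface, then render the result as model/declarative class source.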
[issue13290] get vars for object with __slots__
New submission from João Bernardo jbv...@gmail.com:

I just realized the builtin function `vars` can't handle custom objects without the __dict__ attribute. It would be nice if it worked with objects that have __slots__, to make them look more like normal objects and to make debugging easier.

I changed the source of Python/bltinmodule.c to accept this new case, but, as I'm fairly new to the C API, it may not be much good. I'm attaching the whole file to this issue; the `builtin_vars` function was modified (lines 1859 to 1921) to work with:

__slots__ = ('a', 'b', 'c')
__slots__ = 'single_name'
__slots__ = {'some': 'mapping'}

The output is a dict (just like in the __dict__ case).

# Example
>>> class C:
...     __slots__ = ['x', 'y']
...
>>> c = C()
>>> c.x = 123
>>> vars(c)
{'x': 123}

--
components: Interpreter Core
files: bltinmodule.c
messages: 146598
nosy: JBernardo
priority: normal
severity: normal
status: open
title: get vars for object with __slots__
type: feature request
versions: Python 3.2, Python 3.3
Added file: http://bugs.python.org/file23546/bltinmodule.c

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13290 ___
___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
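A pure-Python approximation of the proposed behaviour (not the C patch attached to the issue) might look like this:

```python
def slot_vars(obj):
    """vars() lookalike that also understands __slots__."""
    try:
        return vars(obj)          # objects with __dict__: unchanged
    except TypeError:
        pass
    result = {}
    for klass in type(obj).__mro__:
        slots = getattr(klass, '__slots__', ())
        if isinstance(slots, str):
            slots = (slots,)      # __slots__ = 'single_name'
        for name in slots:        # tuple, list, or mapping of names
            try:
                result[name] = getattr(obj, name)
            except AttributeError:
                pass              # slot declared but never assigned
    return result

class C:
    __slots__ = ['x', 'y']

c = C()
c.x = 123
print(slot_vars(c))   # {'x': 123}
```

Walking the MRO is needed because each class in the hierarchy contributes its own `__slots__`, and unassigned slots are skipped just as missing `__dict__` entries would be.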
[issue13289] a spell error in standard lib SocketServer‘s comment
Roundup Robot devn...@psf.upfronthosting.co.za added the comment: New changeset 8ddd4c618b48 by Ezio Melotti in branch '2.7': #13289: fix typo. http://hg.python.org/cpython/rev/8ddd4c618b48 New changeset fec8fdbccf3b by Ezio Melotti in branch '3.2': #13289: fix typo. http://hg.python.org/cpython/rev/fec8fdbccf3b New changeset 4411a59dc23c by Ezio Melotti in branch 'default': #13289: merge with 3.2. http://hg.python.org/cpython/rev/4411a59dc23c -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13289 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13289] a spell error in standard lib SocketServer‘s comment
Ezio Melotti ezio.melo...@gmail.com added the comment: Fixed, thanks for the report! -- assignee: - ezio.melotti nosy: +ezio.melotti resolution: - fixed stage: - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13289 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue670664] HTMLParser.py - more robust SCRIPT tag parsing
Changes by Ezio Melotti ezio.melo...@gmail.com: -- assignee: - ezio.melotti ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue670664 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13281] robotparser.RobotFileParser ignores rules preceeded by a blank line
Petri Lehtinen pe...@digip.org added the comment:

> Because of the line break, clicking that link gives "Server error 404".

I don't see a line break, but the comma after the link seems to break it. Sorry.

> The way I read the grammar, 'records' (which start with an agent line) cannot have blank lines and must be separated by blank lines.

Ah, true. But it seems to me that having blank lines elsewhere doesn't break the parsing. If other robots.txt parser implementations allow arbitrary blank lines, we could add a strict=False parameter to make the parser non-strict. This would be a new feature of course.

Does the parser currently handle blank lines between full records (agent line(s) + rule line(s)) correctly?

> I also do not see Crawl-delay and Sitemap (from whitehouse.gov) in the grammar referenced above. So I wonder if de facto practice has evolved.

The spec says: "Lines with Fields not explicitly specified by this specification may occur in the /robots.txt, allowing for future extension of the format." So these seem to be nonstandard extensions.

--
___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13281 ___
___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
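For context, this is how the parser (urllib.robotparser in Python 3) consumes a well-formed file, where blank lines act as the record separators the grammar describes; how it treats *extra* blank lines is the version-dependent question this issue is about, so the example sticks to the unambiguous case:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    'User-agent: *',
    'Disallow: /private/',
    '',                      # blank line: record separator per the grammar
    'User-agent: badbot',
    'Disallow: /',
])

print(rp.can_fetch('mybot', 'http://example.com/private/page'))   # False
print(rp.can_fetch('mybot', 'http://example.com/public/page'))    # True
print(rp.can_fetch('badbot', 'http://example.com/public/page'))   # False
```

An unmatched agent falls back to the `*` record, while `badbot` gets its own record that disallows everything.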
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Roundup Robot devn...@psf.upfronthosting.co.za added the comment: New changeset bf1c4984d4e5 by Charles-François Natali in branch 'default': Issue #5661: Add a test for ECONNRESET/EPIPE handling to test_asyncore. Patch http://hg.python.org/cpython/rev/bf1c4984d4e5 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5661
[issue13238] Add shell command helpers to shutil module
Antoine Pitrou pit...@free.fr added the comment: With the default whitespace escaping (which allows spaces in filenames), wildcard matching still works (thus the list of directories matching the ../py* pattern), but with full quoting it breaks (thus the "nothing named '../py*'" result). My question is why it would be a good idea to make a difference between whitespace and other characters. If you use a wildcard pattern, it generally won't contain spaces at all, so you don't have to quote it. If you are injecting a normal filename, noticing that whitespace gets quoted may give you a false sense of security until somebody injects a wildcard character that won't get quoted. So what I'm saying is that a middle ground between quoting and no quoting is dangerous and not very useful. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13238
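(To illustrate the point: shlex.quote is the spelling this feature eventually used, and the whitespace-only escaper below is a hypothetical stand-in for the middle-ground behaviour being discussed, not real stdlib code:)

```python
import shlex

def quote_whitespace_only(s):
    # hypothetical middle ground: escape spaces, leave everything else alone
    return s.replace(" ", "\\ ")

# the wildcard survives whitespace-only escaping, so the shell still globs it
print(quote_whitespace_only("my file*"))  # my\ file*

# full quoting neutralizes the wildcard along with the spaces
print(shlex.quote("my file*"))            # 'my file*'
```

The first form looks safe because the space is handled, yet "*" is still live shell syntax — exactly the false sense of security described above.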
[issue13265] IDLE crashes when printing some unprintable characters.
Changes by maniram maniram maniandra...@gmail.com: -- versions: -Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13265
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Charles-François Natali neolo...@free.fr added the comment: The test fails on OS X:

======================================================================
ERROR: test_handle_close_after_conn_broken (test.test_asyncore.TestAPI_UseIPv4Poll)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/test/test_asyncore.py", line 661, in test_handle_close_after_conn_broken
    self.loop_waiting_for_flag(client)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/test/test_asyncore.py", line 523, in loop_waiting_for_flag
    asyncore.loop(timeout=0.01, count=1, use_poll=self.use_poll)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 215, in loop
    poll_fun(timeout, map)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 196, in poll2
    readwrite(obj, flags)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 123, in readwrite
    obj.handle_error()
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 112, in readwrite
    obj.handle_expt_event()
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 476, in handle_expt_event
    self.handle_expt()
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/test/test_asyncore.py", line 470, in handle_expt
    raise Exception("handle_expt not supposed to be called")
Exception: handle_expt not supposed to be called

Looks like the FD is returned in the exception set on OS X... -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5661
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Roundup Robot devn...@psf.upfronthosting.co.za added the comment: New changeset 507dfb0ceb3b by Charles-François Natali in branch 'default': Issue #5661: on EPIPE/ECONNRESET, OS X returns the FD with the POLLPRI flag... http://hg.python.org/cpython/rev/507dfb0ceb3b -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5661
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Xavier de Gaye xdeg...@gmail.com added the comment: The test fails when use_poll is True. The difference between using poll() and poll2():

poll: All the read events are processed before the write events, so the close after the first recv by TestHandler will be followed by a send by TestClient within the same call to poll(). The test is deterministic.

poll2: The order in which events are received is OS-dependent. So it is possible that the first recv by TestHandler is the last event in the 'r' list, so that after the close, a new call to poll2() is done and the first event in this new 'r' list is not the expected write event for TestClient.

What about forcing self.use_poll to False before calling loop_waiting_for_flag()? The drawback being that the test will be run twice with the same environment. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5661
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Charles-François Natali neolo...@free.fr added the comment:

> The test fails when use_poll is True. The difference between using poll() and poll2():

poll uses select(2), while poll2 uses poll(2) (duh, that's confusing). It seems that on OS X Snow Leopard, poll(2) sets the POLLPRI flag upon EPIPE (and probably doesn't return the FD in the exception set for select(2)...). Not sure whether it's legal, but it's the only OS to do that (POLLPRI is usually used for OOB data). Also, note that Tiger doesn't behave that way. OS X often offers such surprises :-)

> What about forcing self.use_poll to False before calling loop_waiting_for_flag()? The drawback being that the test will be run twice with the same environment.

I just added a handle_expt() handler, and it works fine. Closing, thanks for the patch! -- assignee: josiahcarlson -> resolution: -> fixed stage: patch review -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5661
[issue13288] SSL module doesn't allow access to cert issuer information
Antoine Pitrou pit...@free.fr added the comment: It's available in 3.3:

>>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
>>> ctx.verify_mode = ssl.CERT_REQUIRED
>>> ctx.set_default_verify_paths()
>>> with ctx.wrap_socket(socket.socket()) as sock:
...     sock.connect(("svn.python.org", 443))
...     cert = sock.getpeercert()
...     pprint.pprint(cert)
...
{'issuer': ((('organizationName', 'Root CA'),),
            (('organizationalUnitName', 'http://www.cacert.org'),),
            (('commonName', 'CA Cert Signing Authority'),),
            (('emailAddress', 'supp...@cacert.org'),)),
 'notAfter': 'Jan 9 20:50:13 2012 GMT',
 'notBefore': 'Jan 9 20:50:13 2010 GMT',
 'serialNumber': '0806E3',
 'subject': ((('commonName', 'svn.python.org'),),),
 'subjectAltName': (('DNS', 'svn.python.org'), ('othername', 'unsupported')),
 'version': 3}

-- nosy: +pitrou resolution: -> out of date stage: -> committed/rejected status: open -> closed type: -> feature request versions: -Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13288
[issue13238] Add shell command helpers to shutil module
Nick Coghlan ncogh...@gmail.com added the comment: Yeah, I was thinking about this a bit more and realised that I'd rejected the "quote everything by default" approach before I had the idea of providing a custom conversion specifier to disable the implicit string conversion and quoting. So perhaps a better alternative would be:

    default -> str + shlex.quote
    !u      -> unquoted (i.e. normal str.format default behaviour)

When you have a concise way to explicitly bypass it, making the default behaviour as safe as possible seems like a good way to go. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13238
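(A sketch of what that proposal could look like on top of string.Formatter — the '!u' spelling and the ShellFormatter name are illustrative only, not an agreed API, and shlex.quote is assumed to exist:)

```python
import shlex
from string import Formatter

class ShellFormatter(Formatter):
    """Quote every substituted value by default; '!u' opts out."""

    def convert_field(self, value, conversion):
        if conversion == "u":           # unquoted: plain str(), no escaping
            return str(value)
        if conversion is None:          # default: str() + shell quoting
            return shlex.quote(str(value))
        return super().convert_field(value, conversion)

f = ShellFormatter()
print(f.format("ls {}", "my file*"))    # ls 'my file*'
print(f.format("ls {!u}", "../py*"))    # ls ../py*
```

string.Formatter routes every substitution through convert_field(), and custom conversion characters are only validated there, so overriding it is enough to make quoting the default and "!u" the explicit escape hatch.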
[issue10363] Embedded python, handle (memory) leak
Antoine Pitrou pit...@free.fr added the comment:

> If the handle leaks are restricted to the windows implementation of cpython, could it not be justified to allow C++ in a patch? I can't think of a C-only compiler for windows.

Well, I think that would be rather clumsy. I'm not a Windows user myself, perhaps other people can share opinions. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10363
[issue12905] multiple errors in test_socket on OpenBSD
Charles-François Natali neolo...@free.fr added the comment: Rémi, do you want to submit a patch to skip those tests on OpenBSD? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12905
[issue12797] io.FileIO and io.open should support openat
Charles-François Natali neolo...@free.fr added the comment:

> Thanks. Although, on second thought, I'm not sure whether Amaury's idea (allowing a custom opener) is not better... Thoughts?

+1. This would also address issues #12760 and #12105. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12797
[issue12105] open() does not able to set flags, such as O_CLOEXEC
Changes by Charles-François Natali neolo...@free.fr: -- status: open -> closed superseder: -> io.FileIO and io.open should support openat ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12105
[issue12760] Add create mode to open()
Changes by Charles-François Natali neolo...@free.fr: -- resolution: -> duplicate status: open -> closed superseder: -> io.FileIO and io.open should support openat ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12760
[issue12797] io.FileIO and io.open should support openat
Ross Lagerwall rosslagerw...@gmail.com added the comment: What would you envisage the API for the custom opener to look like? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12797
[issue12797] io.FileIO and io.open should support openat
Antoine Pitrou pit...@free.fr added the comment:

> What would you envisage the API for the custom opener to look like?

The same as os.open(), I would say. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12797
[issue12797] io.FileIO and io.open should support openat
Changes by Arfrever Frehtes Taifersar Arahesis arfrever@gmail.com: -- nosy: +Arfrever ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12797
[issue10363] Embedded python, handle (memory) leak
Martin v. Löwis mar...@v.loewis.de added the comment: As a policy, we will not rely on C++ destructors for cleanup. There are really two issues here: global locks, and module-specific locks. The global locks can and should be released in Py_Finalize, with no API change. Antoine's patch looks good to me. For module-level locks, PEP 3121-style module finalization should be used. Patches are welcome.

Martin: Please understand that there are *MANY* more issues with reloading Python in the same process, as a really large amount of state doesn't get cleaned up on shutdown. Also expect that fixing these will take 10 years or more, unless somebody puts a huge effort into it. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10363
[issue13284] email.utils.formatdate function does not handle timezones correctly.
Changes by Petri Lehtinen pe...@digip.org: -- nosy: +petri.lehtinen type: -> behavior ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13284
[issue13285] signal module ignores external signal changes
Changes by Petri Lehtinen pe...@digip.org: -- nosy: +petri.lehtinen ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13285
[issue12797] io.FileIO and io.open should support openat
Ross Lagerwall rosslagerw...@gmail.com added the comment: Before I implement it properly, is this the kind of API that's desired?

    import os
    import io

    class MyOpener:
        def __init__(self, dirname):
            self.dirfd = os.open(dirname, os.O_RDONLY)

        def open(self, path, flags, mode):
            return os.openat(self.dirfd, path, flags, mode)

    myop = MyOpener("/tmp")
    f = open("testfile", "w", opener=myop.open)
    f.write("hello")

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12797
[issue10519] setobject.c no-op typo
Changes by Petri Lehtinen pe...@digip.org: -- nosy: +petri.lehtinen ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10519
[issue12761] Typo in Doc/license.rst
Changes by Petri Lehtinen pe...@digip.org: -- resolution: -> fixed stage: patch review -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12761
[issue6650] sre_parse contains a confusing generic error message
Changes by Petri Lehtinen pe...@digip.org: -- nosy: +petri.lehtinen ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6650
[issue12797] io.FileIO and io.open should support openat
Antoine Pitrou pit...@free.fr added the comment:

> Before I implement it properly, is this the kind of API that's desired?

Yes, although I think most people would use a closure instead of a dedicated class. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12797
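(A closure-based sketch of the same idea. Note this uses today's spellings, which may differ from what the thread settles on: the opener that shipped with this feature is called as opener(file, flags), and os.openat from Ross's example later became the dir_fd keyword of os.open, which needs POSIX support:)

```python
import os
import tempfile

def make_opener(dirname):
    # capture a directory fd in a closure instead of a dedicated class
    # (the fd is deliberately kept open for the opener's lifetime)
    dirfd = os.open(dirname, os.O_RDONLY)
    def opener(path, flags, mode=0o666):
        # resolve 'path' relative to dirfd, like the openat(2) call
        return os.open(path, flags, mode, dir_fd=dirfd)
    return opener

d = tempfile.mkdtemp()
with open("testfile", "w", opener=make_opener(d)) as f:
    f.write("hello")

# the file landed inside d, not in the current working directory
print(open(os.path.join(d, "testfile")).read())  # hello
```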
[issue12498] asyncore.dispatcher_with_send, disconnection problem + miss-conception
Xavier de Gaye xdeg...@gmail.com added the comment:

> Actually the class asyncore.dispatcher_with_send does not properly handle disconnection. When the endpoint shuts down the sending part of its socket but keeps the socket open for reading, the current implementation of dispatcher_with_send will close the socket without sending pending data. Adding this method to the class fixes the problem:
>
>     def handle_close(self):
>         if not self.writable():
>             dispatcher.close()

This does not seem to work. The attached Python 3.2 script 'asyncore_12498.py' goes into an infinite loop of calls to handle_read/handle_close when handle_close is defined as above. In this script the Writer sends a 4096-byte message when connected; the Reader reads only 64 bytes and closes the connection afterwards. Then follows the sequence:

- select() returns a read event, handled by handle_read()
- handle_read() calls recv()
- socket.recv() returns 0 to indicate a closed connection
- recv() calls handle_close()

This sequence is repeated forever in asyncore.loop() since out_buffer is never empty. Note that after the first 'handle_close' has been printed, there are no 'handle_write' printed, which indicates that the operating system (here Linux) states that the socket is not writable. -- nosy: +xdegaye Added file: http://bugs.python.org/file23547/asyncore_12498.py ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12498
[issue13280] argparse should use the new Formatter class
Changes by Raymond Hettinger raymond.hettin...@gmail.com: -- resolution: -> invalid status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13280
[issue13281] robotparser.RobotFileParser ignores rules preceded by a blank line
Terry J. Reedy tjre...@udel.edu added the comment: Sorry, the visual linebreak depends on font size. It *is* the comma that caused the problem. You missed my question about the current test suite. Senthil, you are the listed expert for urllib, which includes robotparser. Any opinions on what to do? -- nosy: +orsenthil ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13281
[issue13173] Default values for string.Template
Raymond Hettinger raymond.hettin...@gmail.com added the comment: Barry, any thoughts? -- assignee: rhettinger -> barry nosy: +barry ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13173
[issue13274] heapq pure python version uses islice without guarding for negative counts
Ronny Pfannschmidt ronny.pfannschm...@gmail.com added the comment: However, some basic consistency between the CPython and pure Python versions within the stdlib would be nice, since the difference implicitly breaks unaware code on non-CPython implementations. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13274
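(The incompatibility is easy to reproduce on current CPython: heapq treats a negative count as "take nothing", while itertools.islice — which the old pure Python version delegated to — rejects negative counts outright:)

```python
import heapq
from itertools import islice

data = [5, 1, 4, 2, 3]

# CPython's heapq returns an empty result for a negative count
print(heapq.nsmallest(-1, data))        # []

# ...but islice, used by the old pure Python implementation, refuses it
try:
    list(islice(iter(data), -1))
except ValueError as e:
    print("islice:", e)
```

An alternative implementation shipping the islice-based pure Python version would therefore raise where CPython silently returns [], which is the unaware-code breakage described above.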
[issue10519] setobject.c no-op typo
Changes by Armin Rigo ar...@users.sourceforge.net: -- assignee: arigo -> ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10519
[issue13291] latent NameError in xmlrpc package
New submission from Florent Xicluna florent.xicl...@gmail.com: There are two names which should be fixed in the xmlrpc package:

    --- a/Lib/xmlrpc/client.py
    -elif isinstance(other, (str, unicode)):

    --- a/Lib/xmlrpc/server.py
    -response = xmlrpclib.dumps(
    -    xmlrpclib.Fault(1, "%s:%s" % (exc_type, exc_value)),

We may extend test coverage too. -- components: Library (Lib), XML messages: 146622 nosy: flox priority: normal severity: normal stage: test needed status: open title: latent NameError in xmlrpc package type: behavior versions: Python 3.2, Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13291
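(The "latent" part is that the module imports cleanly; the stale Python 2 name only blows up when that code path actually runs. A minimal reproduction of the first hunk — the function name here is illustrative:)

```python
# 'unicode' does not exist in Python 3, yet defining this function succeeds,
# because names inside a function body are only looked up at call time
def is_text(other):
    return isinstance(other, (str, unicode))

try:
    is_text("hello")
except NameError as e:
    print("surfaces only at call time:", e)
```

The same applies to the server.py hunk: xmlrpclib (the Python 2 module name) is only referenced inside an error-handling path, so it goes unnoticed until a fault is actually serialized.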
[issue11638] python setup.py sdist crashes if version is unicode
David Barnett davidbarne...@gmail.com added the comment: Here's a test for the bug. -- keywords: +patch Added file: http://bugs.python.org/file23548/test_unicode_sdist.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11638
[issue11638] python setup.py sdist crashes if version is unicode
David Barnett davidbarne...@gmail.com added the comment: One way to fix the symptom (maybe not the correct way) would be to edit tarfile._Stream._init_write_gz and change the line that reads

    self.__write(self.name + NUL)

to something like

    self.__write(self.name.encode('utf-8') + NUL)

tarfile is building up an encoded stream of bytes, and whatever self.name is, it needs to be encoded before being inserted into the stream. I'm not positive UTF-8 is right, and maybe it should only convert if isinstance(self.name, unicode). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11638
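(The general shape of that fix, whichever layer it ends up in — shown with Python 3 spellings, where str plays the role of Python 2's unicode; UTF-8 is David's suggested guess above, not a confirmed choice, and name_field is a hypothetical helper:)

```python
NUL = b"\x00"

def name_field(name):
    # a text name must become bytes before it can join a byte stream;
    # encode only when we were actually handed text
    if isinstance(name, str):
        name = name.encode("utf-8")
    return name + NUL

print(name_field("pkg-1.0.tar"))    # b'pkg-1.0.tar\x00'
print(name_field(b"raw-bytes"))     # b'raw-bytes\x00'
```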
[issue13291] latent NameError in xmlrpc package
Florent Xicluna florent.xicl...@gmail.com added the comment: Proposed fix, with some tests. -- keywords: +patch stage: test needed -> patch review Added file: http://bugs.python.org/file23549/issue13291_xmlrpc.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13291
[issue13274] heapq pure python version uses islice without guarding for negative counts
Changes by Alex Gaynor alex.gay...@gmail.com: -- nosy: +alex ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13274
[issue13274] heapq pure python version uses islice without guarding for negative counts
Changes by Raymond Hettinger raymond.hettin...@gmail.com: -- resolution: -> invalid status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13274
[issue12797] io.FileIO and io.open should support openat
Ross Lagerwall rosslagerw...@gmail.com added the comment: The attached patch adds the opener keyword + tests. -- Added file: http://bugs.python.org/file23550/opener.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12797