[ANN] pyxser-1.5r --- Python Object to XML serializer/deserializer
Hello Python Community. I'm pleased to announce pyxser-1.5r, a Python extension which contains functions to serialize and deserialize Python objects into XML. It is a model based serializer.

What can this serializer do?

* Serialization of cross references.
* Serialization of circular references.
* Preserves object references on deserialization.
* Custom serializations.
* Custom deserializations.
* Object attribute selection call-back.
* Custom serialization depth limit.
* Standards based serialization.
* Standards based XML validation using the pyxser XML Schema.
* C14N based serialization, as an optional kind of output.
* Model based XML serialization, represented in XML Schema and XML DTD.

This release contains various bug fixes, mainly related to type checking and type handling, and also the removal of all memory leaks. The distribution contains working tests for the ASCII, Latin-1, UTF-8 and UTF-16 codecs, and a working test which runs up to 1,500,000 serializations and deserializations using just 65 MB of RAM. Also, on SourceForge I've created two trackers, one for feature requests and another for bug reports.

This distribution was tested on Ubuntu 10.04.1 (32-bit and 64-bit), Mac OS X 10.5.0, CentOS (32-bit and 64-bit) and FreeBSD 8.1 (32-bit and 64-bit).

The project is hosted at: http://sourceforge.net/projects/pyxser/
The web page for the project is located at: http://coder.cl/products/pyxser/
The PyPI entry is: http://pypi.python.org/pypi/pyxser/1.5r

Best regards,
-- 
Daniel Molina Wegener dmw [at] coder [dot] cl
System Programmer  Web Developer
Phone: +56 (2) 979-0277 | Blog: http://coder.cl/
-- 
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
ANN: Cython 0.13 released!
It is with *great* pleasure that I email to announce the release of Cython version 0.13! This release sets another milestone on the path towards Python compatibility and brings major new features and improvements for the usability of the Cython language.

Download it here: http://cython.org/release/Cython-0.13.tar.gz

== New Features ==

* Closures are fully supported for Python functions. Cython supports inner functions and lambda expressions. Generators and generator expressions are __not__ supported in this release.

* Proper C++ support. Cython knows about C++ classes, templates and overloaded function signatures, so that Cython code can interact with them in a straightforward way.

* Type inference is enabled by default for safe C types (e.g. double, bint, C++ classes) and known extension types. This reduces the need for explicit type declarations and can improve the performance of untyped code in some cases. There is also a verbose compile mode for testing the impact on user code.

* Cython's for-in-loop can iterate over C arrays and sliced pointers. The type of the loop variable will be inferred automatically in this case.

* The Py_UNICODE integer type for Unicode code points is fully supported, including for-loops and 'in' tests on unicode strings. It coerces from and to single character unicode strings. Note that untyped for-loop variables will automatically be inferred as Py_UNICODE when iterating over a unicode string. In most cases, this will be much more efficient than yielding sliced string objects, but can also have a negative performance impact when the variable is used in a Python context multiple times, so that it needs to coerce to a unicode string object more than once. If this happens, typing the loop variable as unicode or object will help.

* The built-in functions any(), all(), sum(), list(), set() and dict() are inlined as plain `for` loops when called on generator expressions.
  Note that generator expressions are not generally supported apart from this feature. Also, tuple(genexpr) is not currently supported - use tuple([listcomp]) instead.

* More shipped standard library declarations. The python_* and stdlib/stdio .pxd files have been deprecated in favor of clib.* and cpython[.*] and may get removed in a future release.

== Python compatibility ==

* Pure Python mode no longer disallows non-Python keywords like 'cdef', 'include' or 'cimport'. It also no longer recognises syntax extensions like the for-from loop.

* Parsing has improved for Python 3 syntax in Python code, although not all features are correctly supported. The missing Python 3 features are being worked on for the next release.

* from __future__ import print_function is supported in Python 2.6 and later. Note that there is currently no emulation for earlier Python versions, so code that uses print() with this future import will require at least Python 2.6.

* New compiler directive language_level (valid values: 2 or 3) with corresponding command line options -2 and -3 requests source code compatibility with Python 2.x or Python 3.x respectively. Language level 3 currently enforces unicode literals for unprefixed string literals, enables the print function (requires Python 2.6 or later) and keeps loop variables in list comprehensions from leaking.

* Loop variables in set/dict comprehensions no longer leak into the surrounding scope (following Python 2.7). List comprehensions are unchanged in language level 2.

== Incompatible changes ==

* The availability of type inference by default means that Cython will also infer the type of pointers on assignments. Previously, code like this

      cdef char* s = ...
      untyped_variable = s

  would convert the char* to a Python bytes string and assign that. This is no longer the case and no coercion will happen in the example above. The correct way of doing this is through an explicit cast or by typing the target variable, i.e.

      cdef char* s = ...
      untyped_variable1 = <bytes>s
      untyped_variable2 = <object>s
      cdef object py_object = s
      cdef bytes bytes_string = s

* bool is no longer a valid type name by default. The problem is that it's not clear whether bool should refer to the Python type or the C++ type, and expecting one and finding the other has already led to several hard-to-find bugs. Both types are available for importing: you can use from cpython cimport bool for the Python bool type, and from libcpp cimport bool for the C++ type.

== Contributors ==

Many people contributed to this release, including:

* David Barnett
* Stefan Behnel
* Chuck Blake
* Robert Bradshaw
* Craig Citro
* Bryan Cole
* Lisandro Dalcin
* Eric Firing
* Danilo Freitas
* Christoph Gohlke
* Dag Sverre Seljebotn
* Kurt Smith
* Erik Tollerud
* Carl Witty

-cc
-- 
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
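As a quick illustration of the closure support announced above: the following is plain Python, but until 0.13 Cython could not compile inner functions and lambdas that capture enclosing variables like this. The function names here are invented for illustration, not taken from the release notes.

```python
# Closures: the inner function keeps a reference to 'n' from the
# enclosing scope; Cython 0.13 now compiles this with the same
# semantics as CPython.
def make_adder(n):
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5(37))           # 42

# Lambdas close over their scope the same way.
scale = lambda factor: (lambda x: x * factor)
double = scale(2)
print(double(21))         # 42
```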
Python Ireland presents Sept talks @ The Science Gallery (Wed, 8th Sept, 7pm)
Hi All,

What's on:
- Buildout by Diarmuid Bourke
- Python for Cloud Computing by Alan Kennedy
- Lightning talks
- Pub TBD afterwards

The event is open to all and it's free. More details at:
http://www.python.ie/meetup/2010/sept_2010_talks__the_science_gallery/

Cheers,
/// Vicky
~ ~~ http://irishbornchinese.com ~~
~~ http://www.python.ie ~~ ~
-- 
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
web2py book 3rd ed
The third edition of the web2py book is out:

http://www.lulu.com/product/paperback/web2py-%283rd-edition%29/12199578

It is also available in PDF and, for free, online in HTML at http://web2py.com/book

It includes documentation about many features not documented in the 2nd edition, including:

- alternate login methods (OpenID, OAuth, RPX, etc.)
- application level routes
- computed fields and virtual fields
- recursive selects and DAL shortcuts
- field types list:integer, list:string, list:reference
- more deployment recipes and scalability suggestions
- more details about GAE deployment
- building modular apps with components and plugins
- info about plugin_wiki (more or less described in this video: http://vimeo.com/13485916)

-- 
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
Re: ftplib limitations?
Hi durumdara,

On 2010-08-24 16:29, Stefan Schwarzer wrote:
> I experienced some problem. The server is Windows and FileZilla, the
> client is Win7 and Python 2.6. When I get a file with size
> 1 303 318 662 bytes, Python halts on the retrbinary line every time.

So if I understand correctly, the script works well on smaller files but not on the large one?

I just did an experiment in the interpreter which corresponds to this script:

    import ftplib

    of = open("large_file", "wb")

    def callback(data):
        of.write(data)

    ftp = ftplib.FTP("localhost", "userid", "passwd")
    ftp.retrbinary("RETR large_file", callback)
    of.close()
    ftp.close()

The file is 2 GB in size and is fully transferred, without blocking or an error message. The status message from the server is '226-File successfully transferred\n226 31.760 seconds (measured here), 64.48 Mbytes per second', so this looks ok, too.

I think your problem is related to the FTP server or its configuration. Have you been able to reproduce the problem?

Stefan
-- 
http://mail.python.org/mailman/listinfo/python-list
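For readers new to the callback style that retrbinary uses: ftplib calls the function once per received block, and the callback simply writes each block out. Here is a local simulation of that pattern (no FTP server involved; `fake_retrbinary` is an invented stand-in, not part of ftplib):

```python
# Simulates ftplib's retrbinary callback protocol locally: the
# "transfer" hands the callback one chunk at a time, and the callback
# appends each chunk to an open file-like object.
import io

def fake_retrbinary(chunks, callback):
    # Stands in for ftp.retrbinary("RETR ...", callback)
    for chunk in chunks:
        callback(chunk)

of = io.BytesIO()          # stands in for open("large_file", "wb")
fake_retrbinary([b"abc", b"def", b"gh"], of.write)
print(of.getvalue())       # b'abcdefgh'
```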
Re: ftplib limitations?
Hi!

> So if I understand correctly, the script works well on smaller files
> but not on the large one?

Yes. 500-800 MB is ok. 1 GB is not ok. It downloads all of the file (100%) but the next line is never reached.

> _Which_ line is never reached? The `print` statement after the
> `retrbinary` call?

Yes, the print. I got some error, but this was yesterday; I don't remember the text of the error.

> Can't you reproduce the error by executing the script once more? Can
> you copy the file to another server and see if the problem shows up
> there, too?

I get it every time, but I don't have another server to test it on.

> I can imagine the error message (a full traceback if possible) would
> help to say a bit more about the cause of the problem and maybe what
> to do about it.

This was:

    Filename: Repositories 20100824_101805 (Teljes).zip
    Size: 1530296127
    ..download: 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
    Traceback (most recent call last):
      File "C:\D\LocalBackup\ftpdown.py", line 31, in <module>
        ftp.retrbinary("retr " + s, CallBack)
      File "C:\Python26\lib\ftplib.py", line 401, in retrbinary
        return self.voidresp()
      File "C:\Python26\lib\ftplib.py", line 223, in voidresp
        resp = self.getresp()
      File "C:\Python26\lib\ftplib.py", line 209, in getresp
        resp = self.getmultiline()
      File "C:\Python26\lib\ftplib.py", line 195, in getmultiline
        line = self.getline()
      File "C:\Python26\lib\ftplib.py", line 182, in getline
        line = self.file.readline()
      File "C:\Python26\lib\socket.py", line 406, in readline
        data = self._sock.recv(self._rbufsize)
    socket.error: [Errno 10054] A létező kapcsolatot a távoli állomás
    kényszerítetten bezárta

So this message means that the remote station forcibly closed the existing connection.

Now I'm trying with saving the file into a temporary file, not holding it in memory.

Thanks:
   dd
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: problem with strptime and time zone
In message <45faa241-620e-42c7-b524-949936f63...@f6g2000yqa.googlegroups.com>, Alex Willmer wrote:

> Dateutil has its own timezone database ...

I hate code which doesn't just use /usr/share/zoneinfo. How many places do you need to patch every time somebody changes their daylight-saving rules?

-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Proper set-up for a co-existant python 2.6 3.1 installation
vsoler <vicente.so...@gmail.com> wrote in message news:3d85d8f5-8ce0-470f-b6ec-c86c452a3...@a36g2000yqc.googlegroups.com...

> On Aug 24, 1:33 am, "Martin v. Loewis" <mar...@v.loewis.de> wrote:
>>> When I am logged in in a session as an administrator, the BAT file
>>> is on the Desktop, and I double-click on it, it does not work.
>> This is not what I meant. Instead, right-click on the BAT file, and
>> select "run as administrator".
>>> When you say to double-escape the percent signs, do you mean that in
>>> my BAT file I should write...
>>> FTYPE python.file=C:\Python26\python.exe %%1 %%*
>>> and the inverted commas around %%*, are they not necessary?
>> No, I don't think so.
>> Regards, Martin
>
> Martin (or anybody else),
>
> The problem with FTYPE is solved. However, after having switched to py
> 3.1 with the help of the BAT script (which only changes FTYPE) I have
> another problem. (Just for reference, here is my batch file)
>
>     @ECHO OFF
>     ECHO
>     ECHO Cambia a Python 3.1
>     ECHO
>     ECHO *
>     ECHO FTYPES:
>     ECHO *
>     ECHO .py=Python.File
>     ECHO .pyc=Python.CompiledFile
>     ECHO .pyo=Python.CompiledFile
>     ECHO .pys=pysFile
>     ECHO .pyw=Python.NoConFile
>     ECHO *
>     ECHO
>     ECHO *
>     FTYPE python.file=C:\Python31\python.exe %%1 %%*
>     FTYPE python.compiledfile=C:\Python31\python.exe %%1 %%*
>     FTYPE python.NoConFile=C:\Python31\pythonw.exe %%1 %%*
>     ECHO *
>     Pause
>     @ECHO ON
>
> The problem is that, if I am on top of a .py file and, with the mouse,
> I click the right button and then click on "Edit with IDLE", I get the
> 2.6 system, not the 3.1 one (which was supposed to be the correct one
> after the change).
>
> My question is: are there any other changes that I should do in order
> to fully switch from one version to another?

Yes, and they are relatively easy to edit with a .reg file instead of a batch file. Below is just an example for type Python.File that adds "Open with Python3", "Edit with IDLE3", and "Open with Pythonwin3" commands to the right-click context menu of a .py file. The first 3 entries are the original Python26 entries.
The last three were copied from them and modified to create the alternative context menus. You could also create two .reg files that toggle the original three entries between Python 2.6 and Python 3.1 if you want.

---START---

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Python.File\shell\Edit with IDLE\command]
@="\"C:\\Python26\\pythonw.exe\" \"C:\\Python26\\Lib\\idlelib\\idle.pyw\" -n -e \"%1\""

[HKEY_CLASSES_ROOT\Python.File\shell\Edit with Pythonwin\command]
@="C:\\Python26\\Lib\\site-packages\\Pythonwin\\Pythonwin.exe /edit \"%1\""

[HKEY_CLASSES_ROOT\Python.File\shell\open\command]
@="\"C:\\Python26\\python.exe\" \"%1\" %*"

[HKEY_CLASSES_ROOT\Python.File\shell\Edit with IDLE3\command]
@="\"C:\\Python31\\pythonw.exe\" \"C:\\Python31\\Lib\\idlelib\\idle.pyw\" -n -e \"%1\""

[HKEY_CLASSES_ROOT\Python.File\shell\Edit with Pythonwin3\command]
@="C:\\Python31\\Lib\\site-packages\\Pythonwin\\Pythonwin.exe /edit \"%1\""

[HKEY_CLASSES_ROOT\Python.File\shell\Open with Python3\command]
@="\"C:\\Python31\\python.exe\" \"%1\" %*"

---END---

-Mark
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Iterative vs. Recursive coding
Steven D'Aprano <st...@remove-this-cybersource.com.au> wrote in message news:4c6f8edd$0$28653$c3e8...@news.astraweb.com...

> On Fri, 20 Aug 2010 17:23:23 +0200, Bruno Desthuilliers wrote:
>> I once worked in a shop (Win32 desktop / accounting applications
>> mainly) where I was the only guy that could actually understand
>> recursion. FWIW, I was also the only guy around that understood hairy
>> (lol) concepts like callback functions, FSMs, polymorphism,
>> hashtables, linked lists, ADTs, algorithm complexity etc...
> Was there anything they *did* understand, or did they just bang on the
> keyboard at random until the code compiled? *wink*

You underestimate how much programming (of applications) can be done without needing any of this stuff.

>> Needless to say, I didn't last long !-)
> And rightly so :)

I guess they wanted code that could be maintained by anybody.

-- 
Bartc
--- news://freenews.netfront.net/ - complaints: n...@netfront.net ---
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: ftplib limitations?
Hi!

On aug. 25, 08:07, Stefan Schwarzer <sschwar...@sschwarzer.net> wrote:
> The file is 2 GB in size and is fully transferred, without blocking or
> an error message. The status message from the server is '226-File
> successfully transferred\n226 31.760 seconds (measured here), 64.48
> Mbytes per second', so this looks ok, too.
>
> I think your problem is related to the FTP server or its
> configuration. Have you been able to reproduce the problem?

Yes. I tried with saving the file, but I also got this error.

But: Total Commander CAN download the file, and ncftpget can also download it without problem... Hm... :-(

Thanks:
   dd
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: ftplib limitations?
Hi durumdara,

On 2010-08-25 09:43, durumdara wrote:
>> I can imagine the error message (a full traceback if possible) would
>> help to say a bit more about the cause of the problem and maybe what
>> to do about it.
>
> This was:
>
>     Filename: Repositories 20100824_101805 (Teljes).zip
>     Size: 1530296127
>     ..download: 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
>     Traceback (most recent call last):
>       File "C:\D\LocalBackup\ftpdown.py", line 31, in <module>
>         ftp.retrbinary("retr " + s, CallBack)
>       File "C:\Python26\lib\ftplib.py", line 401, in retrbinary
>         return self.voidresp()
>       File "C:\Python26\lib\ftplib.py", line 223, in voidresp
>         resp = self.getresp()
>       File "C:\Python26\lib\ftplib.py", line 209, in getresp
>         resp = self.getmultiline()
>       File "C:\Python26\lib\ftplib.py", line 195, in getmultiline
>         line = self.getline()
>       File "C:\Python26\lib\ftplib.py", line 182, in getline
>         line = self.file.readline()
>       File "C:\Python26\lib\socket.py", line 406, in readline
>         data = self._sock.recv(self._rbufsize)
>     socket.error: [Errno 10054] A létező kapcsolatot a távoli állomás
>     kényszerítetten bezárta
>
> So this message means that the remote station forcibly closed the
> existing connection.

The file transfer protocol uses two connections for data transfers: a control connection to send commands and responses, and a data connection for the data payload itself. Now it may be that the data connection, after having started the transfer, works as it should, but the control connection times out because the duration of the transfer is too long. A hint at this is that the traceback above contains `getline` and `readline` calls, which strongly suggest that this socket was involved in some text transfer (presumably for a status message). Most FTP servers are configured for a timeout of 5 or 10 minutes.

If you find that the file transfers don't fail reproducibly at a certain size limit, it's probably not the size of the file that causes the problem but some timing issue (see above).

What to do about it?
One approach is to try to get the timeout value increased. Of course that depends on the relation between you and the party running the server.

Another approach is to catch the exception and ignore it. To make sure you only ignore timeout messages, you may want to check the status code at the start of the error message and re-raise the exception if it's not the status expected for a timeout. Something along the lines of:

    try:
        # transfer involving `retrbinary`
    except socket.error, exc:
        if str(exc).startswith("[Errno 10054] "):
            pass
        else:
            raise

Note, however, that this is a rather brittle way to handle the problem, as the status code or format of the error message may depend on the platform your program runs on, library versions, etc. In any case you should close and re-open the FTP connection after you got the error from the server.

> Now I'm trying with saving the file into a temporary file, not holding
> it in memory.

If my theory holds, that shouldn't make a difference. But maybe my theory is wrong. :)

Could you do me a favor and try your download with ftputil [1]? The code should be something like:

    import ftputil

    host = ftputil.FTPHost(server, userid, passwd)
    for name in host.listdir(host.curdir):
        host.download(name, name, 'b')
    host.close()

There's neither a need nor - at the moment - a possibility to specify a callback if you just want the download. (I'm working on the callback support though!) For finding the error, it's of course better to just use the download command for the file that troubles you.

I'm the maintainer of ftputil and if you get the same or similar error here, I may find a workaround for ftputil. As it happens, someone reported a similar problem (_if_ it's the same problem in your case) just a few days ago. [2]

[1] http://ftputil.sschwarzer.net
[2] http://www.mail-archive.com/ftpu...@codespeak.net/msg00141.html

Stefan
-- 
http://mail.python.org/mailman/listinfo/python-list
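A slightly sturdier variant of the string-matching idea above (my own suggestion, not from the thread) is to compare the exception's errno attribute instead of the formatted message: 10054 is Windows' WSAECONNRESET, and errno.ECONNRESET is the POSIX counterpart. A sketch in Python-3-compatible syntax, with the transfer faked so it runs without a server:

```python
# Swallow only "connection reset" errors after a transfer; re-raise
# everything else. The transfer itself is faked for demonstration.
import errno
import socket

WSAECONNRESET = 10054  # Windows errno seen in the traceback

def download_ignoring_reset(do_transfer):
    try:
        do_transfer()
        return "ok"
    except socket.error as exc:
        if exc.errno in (WSAECONNRESET, errno.ECONNRESET):
            # Server closed the idle control connection; data
            # presumably arrived in full, so ignore the error.
            return "reset ignored"
        raise

def fake_transfer():
    # Stands in for the retrbinary call; simulates the server closing
    # the control connection after the data has been transferred.
    raise socket.error(WSAECONNRESET, "connection reset by peer")

print(download_ignoring_reset(fake_transfer))  # reset ignored
```

Any other socket error (a different errno) still propagates, which keeps genuine failures visible.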
Re: pypy
Just curious if anyone had the chance to build pypy on a 64bit environment and to see if it really makes a huge difference in performance. Would like to hear some thoughts (or alternatives). I'd recommend asking about this on the pypy mailing list or looking at their documentation first; see http://codespeak.net/pypy/dist/pypy/doc/ HTH, Daniel -- Psss, psss, put it down! - http://www.cafepress.com/putitdown -- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
Hugh Aguilar hughaguila...@yahoo.com writes: On Aug 24, 5:16 pm, Paul Rubin no.em...@nospam.invalid wrote: Anyway, as someone else once said, studying a subject like CS isn't done by reading. It's done by writing out answers to problem after problem. Unless you've been doing that, you haven't been studying. What about using what I learned to write programs that work? Does that count for anything? No. Having put together a cupboard that holds some books without falling apart does not make you a carpenter, much less an architect. -- David Kastrup -- http://mail.python.org/mailman/listinfo/python-list
Re: Using String Methods In Jump Tables
Tim Daneliuk a écrit :
> On 8/19/2010 7:23 PM, Steven D'Aprano wrote:
>> On Thu, 19 Aug 2010 18:27:11 -0500, Tim Daneliuk wrote:
>>> Problem: Given tuples in the form (key, string), use 'key' to
>>> determine what string method to apply to the string:
>>>
>>>     >>> table = {'l': str.lower, 'u': str.upper}
>>>     >>> table['u']('hello world')
>>>     'HELLO WORLD'
(snip)
> Yeah ... those old assembler memories never quite fade, do they. I
> dunno what you might call this. A Function Dispatch Table perhaps?

I usually refer to this idiom as dict-based dispatch. And FWIW, it's in fact (part of...) the polymorphic dispatch implementation in Python's object model:

    >>> str.__dict__['lower']
    <method 'lower' of 'str' objects>
    >>> d = dict(l='lower', u='upper')
    >>> s = "aHa"
    >>> for k, v in d.items():
    ...     print "%s : %s" % (k, s.__class__.__dict__[v](s))

-- 
http://mail.python.org/mailman/listinfo/python-list
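As a runnable summary of the idiom discussed above (Python 3 syntax; the helper name `apply_op` is invented for illustration):

```python
# Dict-based dispatch: map a key to an unbound str method, then apply
# the looked-up method to the string at call time.
table = {'l': str.lower, 'u': str.upper}

def apply_op(key, s):
    # Look the method up by key and call it with the string
    return table[key](s)

result_u = apply_op('u', 'hello world')
result_l = apply_op('l', 'Hello World')
print(result_u)   # HELLO WORLD
print(result_l)   # hello world

# The same lookup through the class dict, as shown in the thread:
# method descriptors are callable with the instance as first argument.
via_class_dict = str.__dict__['lower']('aHa')
print(via_class_dict)   # aha
```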
Re: problem with strptime and time zone
On Aug 25, 8:48 am, Lawrence D'Oliveiro <l...@geek-central.gen.new_zealand> wrote:
> In message
> <45faa241-620e-42c7-b524-949936f63...@f6g2000yqa.googlegroups.com>,
> Alex Willmer wrote:
>> Dateutil has its own timezone database ...
> I hate code which doesn't just use /usr/share/zoneinfo. How many
> places do you need to patch every time somebody changes their
> daylight-saving rules?

From reading http://labix.org/python-dateutil, dateutil can read timezone information from several platforms, including /usr/share/zoneinfo. I don't know whether one chooses the source explicitly, or if it is detected with fall back to the internal database.

-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Helper classes design question
On Tue, 24 Aug 2010, Jean-Michel Pichavant wrote:
> John O'Hagan wrote:
>> I want to know the best way to organise a bunch of functions designed
>> to operate on instances of a given class without cluttering the class
>> itself with a bunch of unrelated methods. What I've done is make what
>> I think are called helper classes, each of which is initialized with
>> an instance of the main class and has methods which are all of the
>> same type (insofar as they return a boolean, or modify the object in
>> place, or whatever). I'm not sure if I'm on the right track here
>> design-wise. Maybe this could be better done with inheritance (not my
>> forte), but my first thought is that no, the helper classes (if
>> that's what they are) are not actually a type of the main class, but
>> are auxiliary to it.

I wasn't subscribed when I posted this question so the quoting and threading is messed up (sorry), but thanks for the differing approaches; in the end I have taken Peter Otten's advice and simply put the functions in separate modules according to type (instead of in classes as methods), then imported them. This has all the advantages of using a class, in that I can add new functions in just one place and call them as required without knowing what they are, e.g.:

    options = {dictionary of option names and values}  # derived from optparse
    for sequence in sequences:
        for option, value in options.items():
            {imported namespace}[option](sequence, value)

but it seems simpler and cleaner.

John
-- 
http://mail.python.org/mailman/listinfo/python-list
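A concrete sketch of the module-of-functions dispatch John describes, with the module's namespace simulated by a dict and all function and option names invented for illustration (in real code the namespace could be `vars(mymodule)`):

```python
# Functions kept in a plain namespace instead of methods on the class;
# option names are looked up and applied without the caller knowing
# which functions exist.

def transpose(sequence, value):
    # Hypothetical operation: shift every element by 'value'
    return [x + value for x in sequence]

def truncate(sequence, value):
    # Hypothetical operation: keep only the first 'value' elements
    return sequence[:value]

namespace = {'transpose': transpose, 'truncate': truncate}

options = {'transpose': 2, 'truncate': 1}   # e.g. derived from optparse
sequence = [1, 2, 3]
for option, value in options.items():
    sequence = namespace[option](sequence, value)

print(sequence)   # [3]: transpose gives [3, 4, 5], truncate keeps [3]
```

Adding a new operation means defining one new function; no class or dispatch table needs to change.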
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On 25 Aug, 01:00, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> On Aug 24, 4:17 pm, Richard Owlett <rowl...@pcnetinc.com> wrote:
>> Hugh Aguilar wrote:
>> [SNIP ;]
>>> The real problem here is that C, Forth and C++ lack automatic
>>> garbage collection. If I have a program in which I have to worry
>>> about memory leaks (as described above), I would be better off to
>>> ignore C, Forth and C++ and just use a language that supports
>>> garbage collection. Why should I waste my time carefully freeing up
>>> heap space? I will very likely not find everything but yet have a
>>> few memory leaks anyway.
>> IOW Hugh has surpassed GIGO to achieve AGG -
>> *A*utomatic *G*arbage *G*eneration ;)
> The C programmers reading this are likely wondering why I'm being
> attacked. The reason is that Elizabeth Rather has made it clear to
> everybody that this is what she wants:
> http://groups.google.com/group/comp.lang.forth/browse_thread/thread/c...
> Every Forth programmer who aspires to get a job at Forth Inc. is
> obliged to attack me. Attacking my software that I posted on the FIG
> site is preferred, but personal attacks work too. It is a loyalty
> test.

Complete bollox. A pox on your persecution fantasies. This isn't about Elizabeth Rather or Forth Inc. It's about your massive ego and blind ignorance.

Your example of writing code with memory leaks *and not caring because it's a waste of your time* makes me think that you've never been a programmer of any sort. Ever. In a commercial environment, your slide rule code would be rejected during unit testing, and you'd be fired and your code sent to the bit bucket.

This isn't about CS BS; this is about making sure that bank accounts square, that planes fly, that nuclear reactors stay sub-critical, that applications can run 24 by 7, 365 days a year without requiring any human attention.

So who designs and writes compilers for fail-safe systems? Who designs and writes operating systems that will run for years, non-stop? Where do they get the assurance that what they're writing is correct, and provably so? From people that do research, hard math, have degrees, and design algorithms and develop all those other abstract ideas you seem so keen to reject as high-falutin' nonsense.

I'd rather poke myself in the eye than run any of the crap you've written.

-- 
http://mail.python.org/mailman/listinfo/python-list
Re: ftplib limitations?
Hi durumdara,

On 2010-08-25 11:18, durumdara wrote:
>> The file is 2 GB in size and is fully transferred, without blocking
>> or an error message. The status message from the server is '226-File
>> successfully transferred\n226 31.760 seconds (measured here), 64.48
>> Mbytes per second', so this looks ok, too.
>>
>> I think your problem is related to the FTP server or its
>> configuration. Have you been able to reproduce the problem?
>
> Yes. I tried with saving the file, but I also got this error. But:
> Total Commander CAN download the file, and ncftpget can also download
> it without problem... Hm... :-(

I suppose they do the same as in my former suggestion: catching the error and ignoring it. ;-) After all, if I understood you correctly, you get the complete file contents, so with ftplib the download succeeds as well (in a way).

You might want to do something like (untested):

    import os
    import socket

    import ftputil

    def my_download(host, filename):
        """Some intelligent docstring."""
        # Need timestamp to check if we actually have a new
        # file after the attempted download
        try:
            old_mtime = os.path.getmtime(filename)
        except OSError:
            old_mtime = 0.0
        try:
            host.download(filename, filename, 'b')
        except socket.error:
            is_rewritten = (os.path.getmtime(filename) != old_mtime)
            # If you're sure that suffices as a test
            is_complete = (host.path.getsize(filename) ==
                           os.path.getsize(filename))
            if is_rewritten and is_complete:
                # Transfer presumably successful, ignore error
                pass
            else:
                # Something else went wrong
                raise

    def main():
        host = ftputil.FTPHost(...)
        my_download(host, large_file)
        host.close()

If you don't want to use an external library, you can use `ftplib.FTP`'s `retrbinary` and check the file size with `ftplib.FTP.size`. This size command requires support for the SIZE command on the server, whereas ftputil parses the remote directory listing to extract the size and so doesn't depend on SIZE support.
Stefan -- http://mail.python.org/mailman/listinfo/python-list
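The is_rewritten/is_complete idea from Stefan's sketch can be exercised without any server. This local-only variant (my own illustration; the paths and sizes are invented) checks whether a "downloaded" file looks complete by comparing its size against the expected remote size and its mtime against the pre-download timestamp:

```python
# Local sketch of the "did the download actually complete?" test:
# a file passes if it was (re)written after the download started and
# its size matches the size reported by the remote side.
import os
import tempfile

def looks_complete(local_path, expected_size, old_mtime):
    is_rewritten = os.path.getmtime(local_path) != old_mtime
    is_complete = os.path.getsize(local_path) == expected_size
    return is_rewritten and is_complete

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 100)    # pretend this is the downloaded file
    path = f.name

# old_mtime = 0.0 stands in for "file did not exist before download"
ok = looks_complete(path, 100, 0.0)    # size matches -> True
bad = looks_complete(path, 200, 0.0)   # size mismatch -> False
print(ok, bad)                         # True False
os.remove(path)
```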
Newbie: Win32 COM problem
Simple class to wrap the xlwt module for COM access (pyXLS.py):

    from xlwt import Workbook

    class WrapXLS:
        _reg_clsid_ = "{c94df6f0-b001-11df-8d63-00e09103a9a0}"
        _reg_desc_ = "XLwt wrapper"
        _reg_progid_ = "PyXLS.Write"
        _public_methods_ = ['createBook', 'createSheet',
                            'writeSheetCell', 'saveBook']
        # _public_attrs_ = ['book']

        def __init__(self):
            self.book = None

        def createBook(self):
            self.book = Workbook()

        def createSheet(self, sheetName):
            self.book.add_sheet(sheetName)

        def writeSheetCell(self, sheet, row, col, value, style=""):
            sheet = self.book.get_sheet(sheet)
            sheet.write(row, col, value, style)

        def saveBook(self, fileName):
            self.book.save(fileName)

    if __name__ == '__main__':
        import win32com.server.register
        win32com.server.register.UseCommandLine(WrapXLS)

It registers ok with --debug. Code executing within Foxpro (no comments pls):

    oPyXLS = CREATEOBJECT("PyXLS.Write")
    oPyXLS.createBook()
    oPyXLS.createSheet("Sheet 1")    <-- Error here

Output in Python Trace Collector (PythonWin):

    ...
    in _GetIDsOfNames_ with '(u'createsheet',)' and '1033'
    in _Invoke_ with 1001 1033 3 (u'Sheet 1',)
    Traceback (most recent call last):
      File "C:\Python26\lib\site-packages\win32com\server\dispatcher.py", line 47, in _Invoke_
        return self.policy._Invoke_(dispid, lcid, wFlags, args)
      File "C:\Python26\lib\site-packages\win32com\server\policy.py", line 277, in _Invoke_
        return self._invoke_(dispid, lcid, wFlags, args)
      File "C:\Python26\lib\site-packages\win32com\server\policy.py", line 282, in _invoke_
        return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None)
      File "C:\Python26\lib\site-packages\win32com\server\policy.py", line 585, in _invokeex_
        return func(*args)
      File "C:\development\PyXLS\pyXLS.py", line 13, in createSheet
        def createBook(self):
    AttributeError: WrapXLS instance has no attribute '_book'
    pythoncom error: Python error invoking COM method.

Can anyone help?

-- 
http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
Alex McDonald b...@rivadpm.com writes: Your example of writing code with memory leaks *and not caring because it's a waste of your time* makes me think that you've never been a programmer of any sort. Ever. Well, I find his approach towards memory leaks as described in 779b992b-7199-4126-bf3a-7ec40ea80...@j18g2000yqd.googlegroups.com quite sensible, use something like that myself, and recommend it to others. Followups set to c.l.f (adjust as appropriate). - anton -- M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html New standard: http://www.forth200x.org/forth200x.html EuroForth 2010: http://www.euroforth.org/ef10/ -- http://mail.python.org/mailman/listinfo/python-list
staticmethod behaviour
Hi, I run today into some problems with my code and I realized that there is something in the behaviours of the @staticmethod that I don't really understand. I don't know if it is an error or not, actually, only that it was, definitely, unexpected. I wrote a small demo of what happens. The code: http://dpaste.com/hold/233795/ The answer I get: User created with static: id, rights, rights2 1 ['read', 'write'] ['write2'] ['write3'] User created with User() None [] ['write2'] ['write3'] I was expecting either all arrays from the second to be [] or to be a copy of the first one. If someone can provide an explanation, I would be thankful :) Regards, Samu -- http://mail.python.org/mailman/listinfo/python-list
Re: staticmethod behaviour
On Aug 25, 3:03 pm, Samu samufuen...@gmail.com wrote: Hi, I run today into some problems with my code and I realized that there is something in the behaviours of the @staticmethod that I don't really understand. I don't know if it is an error or not, actually, only that it was, definitely, unexpected. I wrote a small demo of what happens. The code: http://dpaste.com/hold/233795/ The answer I get: User created with static: id, rights, rights2 1 ['read', 'write'] ['write2'] ['write3'] User created with User() None [] ['write2'] ['write3'] I was expecting either all arrays from the second to be [] or to be a copy of the first one. If someone can provide an explanation, I would be thankful :) Regards, Samu

In addition, if I don't define the function as static, but either as a method of the object or a function outside of the class, something like this:

def cr_user():
    user = User(1, ['read'], rights3=[])
    user.rights.append('write')
    user.rights2.append('write2')
    user.rights3.append('write3')
    return user

I get instead:

User created with static: id, rights, rights2
1 ['read', 'write'] ['write2'] ['write3']
User created with User()
None [] ['write2'] []

There is some (maybe deep) concept that I don't get, it seems, because that output puzzles me... -- http://mail.python.org/mailman/listinfo/python-list
Re: staticmethod behaviour
Samu wrote: Hi, I run today into some problems with my code and I realized that there is something in the behaviours of the @staticmethod that I don't really understand. I don't know if it is an error or not, actually, only that it was, definitely, unexpected. I wrote a small demo of what happens. The code:

class User:
    def __init__(self, id=None, rights=[], rights2=[], rights3=[]):
        self.id = id
        self.rights = rights
        self.rights2 = rights2
        self.rights3 = rights3

    @staticmethod
    def cr_user():
        user = User(1, ['read'], rights3=[])
        user.rights.append('write')
        user.rights2.append('write2')
        user.rights3.append('write3')
        return user

print "User created with static: id, rights, rights2"
a = User.cr_user()
print a.id, a.rights, a.rights2, a.rights3
print "User created with User()"
b = User()
print b.id, b.rights, b.rights2, a.rights3

The answer I get:

User created with static: id, rights, rights2
1 ['read', 'write'] ['write2'] ['write3']
User created with User()
None [] ['write2'] ['write3']

I was expecting either all arrays from the second to be [] or to be a copy of the first one. If someone can provide an explanation, I would be thankful :)

The problem is not the staticmethod, it's the mutable default values for __init__(). See http://effbot.org/zone/default-values.htm

Peter -- http://mail.python.org/mailman/listinfo/python-list
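The effbot link explains the pitfall Peter points at; a minimal standalone illustration (my own example, not code from the thread): the default list is created once, at `def` time, and shared by every call that relies on it.

```python
def broken(item, bucket=[]):
    # 'bucket=[]' is evaluated ONCE, when the function is defined;
    # every defaulted call appends to that same list object.
    bucket.append(item)
    return bucket

def fixed(item, bucket=None):
    # The usual idiom: default to None and build a fresh list per call.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(broken('a'))  # ['a']
print(broken('b'))  # ['a', 'b'] -- the same list again
print(fixed('a'))   # ['a']
print(fixed('b'))   # ['b']
```

This is exactly why `rights` and `rights2` carry state between `User()` instances, while `rights3` does not when an explicit fresh `[]` is passed.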
Re: problem with simple multiprocessing script on OS X
On Aug 24, 5:29 pm, Benjamin Kaplan benjamin.kap...@case.edu wrote: On Tue, Aug 24, 2010 at 3:31 PM, Darren Dale dsdal...@gmail.com wrote: On Aug 23, 9:58 am, Darren Dale dsdal...@gmail.com wrote: The following script runs without problems on Ubuntu and Windows 7. h5py is a package wrapping the hdf5 library (http://code.google.com/p/h5py/):

from multiprocessing import Pool
import h5py

def update(i):
    print i

def f(i):
    """hello foo"""
    return i*i

if __name__ == '__main__':
    pool = Pool()
    for i in range(10):
        pool.apply_async(f, [i], callback=update)
    pool.close()
    pool.join()

On OS X 10.6 (tested using python-2.6.5 from MacPorts), I have to comment out the as-yet unused h5py import, otherwise I get a traceback:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/pool.py", line 226, in _handle_tasks
    put(task)
PicklingError: Can't pickle type 'function': attribute lookup __builtin__.function failed

This is a really critical bug for me, but I'm not sure how to proceed. Can I file a bug report on the python bugtracker if the only code I can come up with to illustrate the problem requires a lame import of a third party module?

It's working fine for me, OS X 10.6.4, Python 2.6 and h5py from Macports.

Really? With the h5py import uncommented? I just uninstalled and reinstalled my entire macports python26/py26-numpy/hdf5-18/py26-h5py stack, and I still see the same error. -- http://mail.python.org/mailman/listinfo/python-list
Re: Help please! strange Tkinter behavior has me totally baffled.
Thanks mucho! That was it! -- Steve Ferg -- http://mail.python.org/mailman/listinfo/python-list
Re: problem with simple multiprocessing script on OS X
On Aug 24, 4:32 pm, Thomas Jollans tho...@jollybox.de wrote: On Tuesday 24 August 2010, it occurred to Darren Dale to exclaim: On Aug 23, 9:58 am, Darren Dale dsdal...@gmail.com wrote: The following script runs without problems on Ubuntu and Windows 7. h5py is a package wrapping the hdf5 library (http://code.google.com/p/h5py/):

from multiprocessing import Pool
import h5py

def update(i):
    print i

def f(i):
    """hello foo"""
    return i*i

if __name__ == '__main__':
    pool = Pool()
    for i in range(10):
        pool.apply_async(f, [i], callback=update)
    pool.close()
    pool.join()

On OS X 10.6 (tested using python-2.6.5 from MacPorts), I have to comment out the as-yet unused h5py import, otherwise I get a traceback:

What on earth is h5py doing there? If what you're telling us is actually happening, and the code works 1:1 on Linux and Windows, but fails on OSX, and you're using the same versions of h5py and Python, then the h5py initialization code is not only enticing multiprocessing to try to pickle something other than usual, but it is also doing that due to some platform-dependent witchcraft, and I doubt there's very much separating the OSX versions from the Linux versions of anything involved.

I can't find anything in the source to suggest that h5py is doing any platform-specific magic. Do you have an idea of how it would be possible for initialization code to cause multiprocessing to try to pickle something it normally would not?

This is a really critical bug for me, but I'm not sure how to proceed. Can I file a bug report on the python bugtracker if the only code I can come up with to illustrate the problem requires a lame import of a third party module?

I doubt this is an issue with Python. File a bug on the h5py tracker and see what they say. The people there might at least have some vague inkling of what may be going on.

Thanks for the suggestion. I was in touch with the h5py maintainer before my original post. We don't have any leads. 
-- http://mail.python.org/mailman/listinfo/python-list
Re: staticmethod behaviour
On Aug 25, 3:26 pm, Peter Otten __pete...@web.de wrote: Samu wrote: Hi, I run today into some problems with my code and I realized that there is something in the behaviours of the @staticmethod that I don't really understand. ... If someone can provide an explanation, I would be thankful :) The problem is not the staticmethod, it's the mutable default values for __init__(). See http://effbot.org/zone/default-values.htm Peter

Ahh, thank you very much for the link. Now I understand. I remember having read that before, but it is not until you face the problem that the concept sticks. But why does it have a different behaviour the staticmethod with the rights3 case then? -- http://mail.python.org/mailman/listinfo/python-list
speed of numpy.power()?
Hi all, I'd like to hear from you on the benefits of using numpy.power(x,y) over (x*x*x*x..). It looks to me that numpy.power takes more time to run. cheers Carlos -- http://mail.python.org/mailman/listinfo/python-list
Re: speed of numpy.power()?
On 25/08/2010 14:59, Carlos Grohmann wrote: Hi all, I'd like to hear from you on the benefits of using numpy.power(x,y) over (x*x*x*x..) I looks to me that numpy.power takes more time to run. cheers Carlos Measure it yourself using the timeit module. Cheers. Mark Lawrence. -- http://mail.python.org/mailman/listinfo/python-list
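Mark's timeit suggestion, sketched out (the operand value and repeat count are arbitrary picks of mine; numpy is deliberately left out so the snippet stands alone, but a `numpy.power` case can be added the same way if it is installed):

```python
import timeit

# Time repeated multiplication against the ** operator for x**5.
t_mul = timeit.timeit('x*x*x*x*x', setup='x = 1.234', number=200000)
t_pow = timeit.timeit('x**5', setup='x = 1.234', number=200000)
print('x*x*x*x*x: %.4fs   x**5: %.4fs' % (t_mul, t_pow))
```

Absolute numbers vary by machine; what matters is the ratio, measured on the exponent sizes your application actually uses.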
Python2.4 on ARM-Linux import time module fails
Hi, I cross-compiled Python 2.4 for ARM (Linux 2.6.30) in order to run autotest-client-xxx on my ARM target. When I run autotest on the ARM target I get ImportError: No module named time. Which package do I need to install to add support for the time module?

# bin/autotest samples/filesystem
Traceback (most recent call last):
  File "bin/autotest", line 6, in ?
    import common
  File "/dtv/usb/sda1/autotest-client-0.12.0-dirty/bin/common.py", line 8, in ?
    root_module_name="autotest_lib.client")
  File "/dtv/usb/sda1/autotest-client-0.12.0-dirty/setup_modules.py", line 139, in setup
    _monkeypatch_logging_handle_error()
  File "/dtv/usb/sda1/autotest-client-0.12.0-dirty/setup_modules.py", line 103, in _monkeypatch_logging_handle_error
    import logging
  File "/dtv/usb/sda1/Python-2.4/Lib/logging/__init__.py", line 29, in ?
    import sys, os, types, time, string, cStringIO
ImportError: No module named time

With Regards
Ajeet Yadav -- http://mail.python.org/mailman/listinfo/python-list
Re: staticmethod behaviour
Samu wrote: the concept sticks. But why does it have a different behaviour the staticmethod with the rights3 case then? Moving from staticmethod to standalone function doesn't affect the output. You have inadvertently changed something else. Peter -- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On 19 Aug, 16:25, c...@tiac.net (Richard Harter) wrote: On Wed, 18 Aug 2010 01:39:09 -0700 (PDT), Nick Keighley nick_keighley_nos...@hotmail.com wrote: On 17 Aug, 18:34, Standish P stnd...@gmail.com wrote: How are these heaps being implemented ? Is there some illustrative code or a book showing how to implement these heaps in C for example ? any book of algorithms I'd have thought my library is currently inaccessible. Normally I'd have picked up Sedgewick and seen what he had to say on the subject. And possibly Knuth (though that requires taking more of a deep breath). Presumably Plauger's library book includes an implementation of malloc()/free() so that might be a place to start. http://en.wikipedia.org/wiki/Dynamic_memory_allocation http://www.flounder.com/inside_storage_allocation.htm I've no idea how good either of these is serves me right for not checking :-( The wikipedia page is worthless. odd really, you'd think basic computer science wasn't that hard... I found even wikipedia's description of a stack confusing and heavily biased towards implementation The flounder page has substantial meat, but the layout and organization is a mess. A quick google search didn't turn up much that was general - most articles are about implementations in specific environments. -- http://mail.python.org/mailman/listinfo/python-list
Re: speed of numpy.power()?
Carlos Grohmann carlos.grohm...@gmail.com writes: I'd like to hear from you on the benefits of using numpy.power(x,y) over (x*x*x*x..) It looks to me that numpy.power takes more time to run. You can use math.pow, which is no slower than repeated multiplication, even for small exponents. Obviously, after the exponent has grown large enough, numpy.power becomes faster than repeated multiplication (it's already faster at 100). Like math.pow, it supports negative and non-integer exponents. Unlike math.pow, numpy.power also supports all kinds of interesting objects as bases for exponentiation. -- http://mail.python.org/mailman/listinfo/python-list
Re: staticmethod behaviour
On Aug 25, 4:32 pm, Peter Otten __pete...@web.de wrote: Samu wrote: the concept sticks. But why does it have a different behaviour the staticmethod with the rights3 case then? Moving from staticmethod to standalone function doesn't affect the output. You have inadvertently changed something else. Peter Absolutely right. Thank you very much for your time and answers, Peter :) It helped me a lot! -- http://mail.python.org/mailman/listinfo/python-list
Re: How to see intermediate fail results from unittest as tests are running?
On Aug 18, 9:20 pm, Margie Roginski margierogin...@gmail.com wrote: Hi, I am using unittest in a fairly basic way, where I have a single file that simply defines a class that inherits from unittest.TestCase and then within that class I have a bunch of methods that start with test. Within that file, at the bottom I have: if __name__ == __main__: unittest.main() This works fine and it runs all of the testxx() methods in my file. As it runs it prints if the tests passed or failed, but if they fail, it does not print the details of the assert that made them fail. It collects this info up and prints it all at the end. Ok - my question: Is there any way to get unittest to print the details of the assert that made a test fail, as the tests are running? IE, after a test fails, I would like to see why, rather than waiting until all the tests are done. I've searched the doc and even looked at the code, and it seems the answer is no, but I'm just wondering if I'm missing something. Thanks! Margie trial (Twisted's test runner) has a `--rterrors` option which causes it to display errors as soon as they happen. Jean-Paul -- http://mail.python.org/mailman/listinfo/python-list
Re: speed of numpy.power()?
On Wed, Aug 25, 2010 at 10:59 PM, Carlos Grohmann carlos.grohm...@gmail.com wrote: Hi all, I'd like to hear from you on the benefits of using numpy.power(x,y) over (x*x*x*x..) Without more context, I would say None if x*x*x*x*... works and you are not already using numpy. The point of numpy is mostly to work on numpy arrays, and to support types of data not natively supported by python (single, extended precision). If x is a python object such as int or float, numpy will also be much slower. Using numpy would make sense if for example you are already using numpy everywhere else, for consistency reason, David -- http://mail.python.org/mailman/listinfo/python-list
Re: speed of numpy.power()?
On 25 ago, 12:40, David Cournapeau courn...@gmail.com wrote: On Wed, Aug 25, 2010 at 10:59 PM, Carlos Grohmann Thanks David and Hrvoje. That was the feedback I was looking for. I am using numpy in my app but in some cases I will use math.pow(), as some tests with timeit showed that numpy.power was slower for (x*x*x*x*x). best Carlos -- http://mail.python.org/mailman/listinfo/python-list
Re: problem with simple multiprocessing script on OS X
On Aug 24, 4:32 pm, Thomas Jollans tho...@jollybox.de wrote: ... the h5py initialization code is not only enticing multiprocessing to try to pickle something other than usual, but it is also doing that due to some platform-dependent witchcraft ...

Your analysis was spot on. About a year ago, I contributed a patch to h5py which checks to see if h5py is being imported into an active IPython session. If so, then a custom tab completer is loaded to make it easier to navigate hdf5 files. In the development version of IPython, a function that used to return None if there was no instance of an IPython interactive shell now creates and returns a new instance. This was the cause of the error I was reporting. If one were to install ipython from the master branch at github or from http://ipython.scipy.org/dist/testing/ipython-dev-nightly.tgz, then the following script will reproduce the problem. I'm not sure why this causes an error, but I'll discuss it with the IPython devs. Thank you Thomas and Benjamin for helping me understand the problem.

Darren

from multiprocessing import Pool
import IPython.core.ipapi as ip
ip.get()

def update(i):
    print i

def f(i):
    return i*i

if __name__ == '__main__':
    pool = Pool()
    for i in range(10):
        pool.apply_async(f, [i], callback=update)
    pool.close()
    pool.join()

-- http://mail.python.org/mailman/listinfo/python-list
Re: speed of numpy.power()?
On 8/25/10 8:59 AM, Carlos Grohmann wrote: Hi all, I'd like to hear from you on the benefits of using numpy.power(x,y) over (x*x*x*x..) I looks to me that numpy.power takes more time to run. You will want to ask numpy questions on the numpy mailing list: http://www.scipy.org/Mailing_Lists The advantage that numpy.power(x,y) has over (x*x*x...) is that y can be floating point. We do not attempt to do strength reduction in the integer case. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
Re: speed of numpy.power()?
On Wed, 25 Aug 2010 06:59:36 -0700 (PDT), Carlos Grohmann wrote: I'd like to hear from you on the benefits of using numpy.power(x,y) over (x*x*x*x..) Using the dis package under Python 2.5, I see that computing x_to_the_16 = x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x uses 15 multiplies. I hope that numpy.power does it with 4. -- To email me, substitute nowhere-spamcop, invalid-net. -- http://mail.python.org/mailman/listinfo/python-list
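The 4-multiply route hoped for above is exponentiation by squaring (x -> x^2 -> x^4 -> x^8 -> x^16); whether numpy actually does this internally is not something the thread establishes, but the algorithm itself is a few lines of pure Python:

```python
def pow_by_squaring(x, n):
    """x**n for integer n >= 0, using O(log n) multiplies
    (binary exponentiation) instead of n-1 naive multiplies."""
    result = 1
    while n:
        if n & 1:           # this bit of the exponent is set
            result *= x
        n >>= 1
        if n:               # square only while more bits remain
            x *= x
    return result

# x**16 by four explicit squarings, versus 15 naive multiplies:
x = 3
x2 = x * x
x4 = x2 * x2
x8 = x4 * x4
x16 = x8 * x8
assert x16 == 3 ** 16
assert pow_by_squaring(3, 16) == 3 ** 16
```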
Can PySerial's write method be called by multiple threads?
I have a multi-threaded application where several of the threads need to write to a serial port that's being handled by pySerial. Is pySerial thread-safe in the sense that pySerial.write behaves atomically? I.e., if thread 1 executes serport.write("Hello, world!") and thread 2 executes serport.write("All your bases are belong to us!"), is it guaranteed that the output over the serial port won't mix the two together (e.g., "Hello All your bases are belong to us!, world!")? I looked at the source code, and the write method eventually boils down to calling the OS's write function, which presumably ends up being a call to a C function. Given the global interpreter lock -- and particularly how C functions can't be interrupted by the Python interpreter at all -- it sure seems as though everything is copacetic here? If not I can just add a queue and have everything go through it, but of course I'd like to avoid the extra code and CPU cycles if it isn't at all necessary. Thank you, ---Joel Koltner -- http://mail.python.org/mailman/listinfo/python-list
Re: Can PySerial's write method be called by multiple threads?
On Wednesday 25 August 2010, it occurred to Joel Koltner to exclaim: I have a multi-threaded application where several of the threads need to write to a serial port that's being handled by pySerial. If pySerial thread-safe in the sense that pySerial.write behaves atomically? I.e., if thread 1 executes, serport.write(Hello, world!) and thread 2 executes serport.write(All your bases are belong to us!), is it guaranteed that the output over the serial port won't mix the two together (e.g., Hello All your bases are belong to us!, world!) ? I looked at the source code, and the write method eventually boils down to calling an the OS's write function, which presumably ends up being a call to a C function. Given the global interpreter lock -- and particularly how C functions can't be interrupted by the Python interpreter at all -- it sure seems as though everything is copacetic here? I expect that it gives away the GIL to call the resident write() function, to allow other threads to run while it's sitting there, blocking. I haven't looked at the code, so maybe it doesn't hand over the GIL, but if it doesn't, I'd consider that a bug rather than a feature: the GIL shouldn't be abused as some kind of local mutex, and only gets in the way anyway. Speaking of the GIL, you shouldn't rely on it being there. Ever. It's a necessary evil, or it appears to be necessary. but nobody likes it and if somebody finds a good way to kick it out then that will happen. (That happens to be an explicit exception from the language moratorium, so it's not just my own personal wishful thinking) If not I can just add a queue and have everything go through it, but of course I'd like to avoid the extra code and CPU cycles if it isn't at all necessary. -- http://mail.python.org/mailman/listinfo/python-list
Re: Can PySerial's write method be called by multiple threads?
On 25/08/2010 19:36, Joel Koltner wrote: I have a multi-threaded application where several of the threads need to write to a serial port that's being handled by pySerial. If pySerial thread-safe in the sense that pySerial.write behaves atomically? I.e., if thread 1 executes, serport.write(Hello, world!) and thread 2 executes serport.write(All your bases are belong to us!), is it guaranteed that the output over the serial port won't mix the two together (e.g., Hello All your bases are belong to us!, world!) ? I looked at the source code, and the write method eventually boils down to calling an the OS's write function, which presumably ends up being a call to a C function. Given the global interpreter lock -- and particularly how C functions can't be interrupted by the Python interpreter at all -- it sure seems as though everything is copacetic here? Don't assume that just because it calls a C function the GIL won't be released. I/O calls which can take a relatively long time to complete often release the GIL. If not I can just add a queue and have everything go through it, but of course I'd like to avoid the extra code and CPU cycles if it isn't at all necessary. Unless I know that something is definitely thread-safe, that would be the way I would go. -- http://mail.python.org/mailman/listinfo/python-list
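A minimal sketch of that queue approach (the class and all names are mine, not pySerial API; `port` stands for any object with a `write` method, e.g. a `serial.Serial` instance):

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

class SerialWriter(object):
    """Funnel writes from many threads through one writer thread, so
    each message reaches the port as a single uninterleaved write."""
    def __init__(self, port):
        self.port = port
        self.q = queue.Queue()       # Queue.put/get are thread-safe
        self.thread = threading.Thread(target=self._drain)
        self.thread.daemon = True
        self.thread.start()

    def write(self, data):
        self.q.put(data)             # callers never touch the port directly

    def _drain(self):
        while True:
            data = self.q.get()
            if data is None:         # sentinel: shut down
                break
            self.port.write(data)

    def close(self):
        self.q.put(None)
        self.thread.join()
```

Usage: create one `SerialWriter(serport)` at startup and have every thread call its `write` method. The per-message cost is one queue operation, which is cheap next to serial I/O.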
Re: Can PySerial's write method be called by multiple threads?
Thomas Jollans tho...@jollybox.de wrote in message news:mailman.36.1282762569.29448.python-l...@python.org... I expect that it gives away the GIL to call the resident write() function, to allow other threads to run while it's sitting there, blocking. I haven't looked at the code, so maybe it doesn't hand over the GIL, but if it doesn't, I'd consider that a bug rather than a feature: the GIL shouldn't be abused as some kind of local mutex, and only gets in the way anyway. Ah, I expect you're correct. I'm still largely a Python newbie, and only know enough about things like the GIL to get myself into trouble. Speaking of the GIL, you shouldn't rely on it being there. Ever. It's a necessary evil, or it appears to be necessary. but nobody likes it and if somebody finds a good way to kick it out then that will happen. OK, but presumably I can't know whether or not someone who wrote a library like pySerial relied on it or not. Although I suppose this is really a documentation bug -- pySerial's documentation doesn't talk about multi-threaded access directly, although their minicom example does demonstrate it in action. Thanks for the help, ---Joel -- http://mail.python.org/mailman/listinfo/python-list
split string into multi-character letters
Hi, I'm seeking help with a fairly simple string processing task. I've simplified what I'm actually doing into a hypothetical equivalent. Suppose I want to take a word in Spanish, and divide it into individual letters. The problem is that there are a few 2-character combinations that are considered single letters in Spanish - for example 'ch', 'll', 'rr'. Suppose I have:

alphabet = ['a','b','c','ch','d','u','r','rr','o']  # this would include the whole alphabet but I shortened it here
theword = 'churro'

I would like to split the string 'churro' into a list containing: 'ch','u','rr','o'. So at each letter I want to look ahead and see if it can be combined with the next letter to make a single 'letter' of the Spanish alphabet. I think this could be done with a regular expression passing the list called alphabet to re.match() for example, but I'm not sure how to use the contents of a whole list as a search string in a regular expression, or if it's even possible. My real application is a bit more complex than the Spanish alphabet so I'm looking for a fairly general solution. Thanks, Jed -- http://mail.python.org/mailman/listinfo/python-list
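One general way to do what Jed describes: join the alphabet into a single alternation pattern, longest entries first so that 'ch' is tried before 'c' (Python's `re` alternation takes the first branch that matches, not the longest). A sketch:

```python
import re

alphabet = ['a', 'b', 'c', 'ch', 'd', 'u', 'r', 'rr', 'o']
theword = 'churro'

# Sort longest-first so multi-character letters win over their
# single-character prefixes; re.escape guards any regex-special
# characters that might appear in a more general alphabet.
pattern = '|'.join(re.escape(a) for a in sorted(alphabet, key=len, reverse=True))
letters = re.findall(pattern, theword)
print(letters)  # ['ch', 'u', 'rr', 'o']
```

Any character not in the alphabet is silently skipped by `re.findall`; if that should be an error instead, compare `''.join(letters)` against the input.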
Re: Declare self.cursor
On 8/24/2010 10:15 AM, Dani Valverde wrote: Hello! I am working on a GUI to connect to a MySQL database using MySQLdb (code in attached file). I define the cursor in lines 55-66 in the OnLogin function within the LoginDlg class.

db = MySQLdb.connect(host='localhost', user=Username, passwd=pwd, db='Ornithobase')
self.cursor = db.cursor()

When I try to use the cursor on another part of the code (EditUser class, line 176)

sql = 'select substring_index(CURRENT_USER(),@,1)'
login.cursor.execute(sql)

I get this error: AttributeError: 'LoginDlg' object has no attribute 'cursor'. You can check the code for details, I think is better. Cheers! Dani

self.cursor = db.cursor()
...
self.Destroy()    # probably clears the object

Also, it's generally better to hold on to the database handle and get a cursor from it as a local variable when needed. You need the database handle for db.commit(), at least. Getting a cursor is fast. (Actually, in MySQL, there is only one cursor.) I realize it's a desktop application, but still:

db = MySQLdb.connect(host='localhost', user='root', passwd='acrsci00', db='Ornithobase')

-- http://mail.python.org/mailman/listinfo/python-list
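The pattern being recommended above, keep the connection object around and make each cursor a short-lived local, sketched here with sqlite3 so it runs anywhere (the MySQLdb calls are analogous, but use a real `connect()` and `%s` placeholders instead of `?`; the table and function names are mine):

```python
import sqlite3

# Stand-in for MySQLdb.connect(...); hold on to the connection,
# not a long-lived cursor.
db = sqlite3.connect(':memory:')
db.execute('create table users (name text)')

def add_user(name):
    cur = db.cursor()          # cursors are cheap: one per operation
    cur.execute('insert into users (name) values (?)', (name,))
    db.commit()                # commit belongs to the connection

def list_users():
    cur = db.cursor()
    cur.execute('select name from users')
    return [row[0] for row in cur.fetchall()]

add_user('dani')
print(list_users())  # ['dani']
```

Because no cursor outlives its function, nothing breaks when a dialog that "owned" the cursor is destroyed.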
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On Aug 24, 8:00 pm, Hugh Aguilar hughaguila...@yahoo.com wrote: The C programmers reading this are likely wondering why I'm being attacked. The reason is that Elizabeth Rather has made it clear to everybody that this is what she wants: [http://tinyurl.com/2bjwp7q] Hello to those outside of comp.lang.forth, where Hugh usually leaves his slime trail. I seriously doubt many people will bother to read the message thread Hugh references, but if you do, you'll get to delight in the same nonsense Hugh has brought to comp.lang.forth. Here's the compressed version: 1. Hugh references code (symtab) that he wrote (in Factor) to manage symbol tables. 2. I (and others) did some basic analysis and found it to be a poor algorithm-- both in terms of memory use and performance-- especially compared to the usual solutions (hash tables, splay trees, etc.). 3. I stated that symtab sucked for the intended application. 4. Hugh didn't like that I called his baby ugly and decided to expose his bigotry. 5. Elizabeth Rather said she didn't appreciate Hugh's bigotry in the newsgroup. Yep, that's it. What Hugh is banking on is that you won't read the message thread, and that you'll blindly accept that Elizabeth is some terrible ogre with a vendetta against Hugh. The humor here is that Hugh himself provides a URL that disproves that! So yes, if you care, do read the message thread. It won't take long for you to get a clear impression of Hugh's character. -- http://mail.python.org/mailman/listinfo/python-list
Re: Can PySerial's write method be called by multiple threads?
On 8/25/2010 11:36 AM, Joel Koltner wrote:
> I have a multi-threaded application where several of the threads need
> to write to a serial port that's being handled by pySerial. Is
> pySerial thread-safe in the sense that pySerial.write behaves
> atomically? I.e., if thread 1 executes serport.write("Hello, world!")
> and thread 2 executes serport.write("All your bases are belong to
> us!"), is it guaranteed that the output over the serial port won't mix
> the two together (e.g., "Hello All your bases are belong to us!,
> world!")?

You're not guaranteed that one Python write maps to one OS-level write. Individual print statements in Python are not atomic. You don't need a queue, though; just use your own write function with a lock:

    import threading

    lok = threading.Lock()

    def atomicwrite(fd, data):
        with lok:
            fd.write(data)

John Nagle
-- http://mail.python.org/mailman/listinfo/python-list
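A self-contained sketch of the locking pattern above; io.StringIO stands in for the serial port and the messages are made up. Writing char-by-char in the helper is only there to make interleaving very likely if the lock were removed:

```python
import io
import threading

lok = threading.Lock()

def atomicwrite(fd, data):
    # hold the lock for the entire message, so writes from
    # different threads cannot interleave
    with lok:
        for ch in data:           # char-by-char: interleaving-prone without the lock
            fd.write(ch)

buf = io.StringIO()               # stands in for the pySerial port object
msgs = ["hello!", "world!"]
threads = [threading.Thread(target=atomicwrite, args=(buf, m))
           for m in msgs for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

out = buf.getvalue()
print(out.count("hello!"), out.count("world!"))   # 5 5 -- every message intact
```

With the `with lok:` removed, the counts routinely drop below 5 because characters from different messages get shuffled together.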
Re: split string into multi-character letters
2010/8/25 Jed jedmelt...@gmail.com:
> So at each letter I want to look ahead and see if it can be combined
> with the next letter to make a single 'letter' of the Spanish
> alphabet. [snip]

Hi, I am not sure whether it can be generalised enough for your needs, but you can try something like:

    >>> re.findall("rr|ll|ch|[a-z]", "asdasdallasdrrcvb")
    ['a', 's', 'd', 'a', 's', 'd', 'a', 'll', 'a', 's', 'd', 'rr', 'c', 'v', 'b']

Of course, the pattern should be adjusted precisely in order not to lose characters...

hth,
vbr
-- http://mail.python.org/mailman/listinfo/python-list
Re: split string into multi-character letters
Jed writes:
> alphabet = ['a','b','c','ch','d','u','r','rr','o'] #this would include the whole alphabet but I shortened it here
> theword = 'churro'
> I would like to split the string 'churro' into a list containing:
> 'ch','u','rr','o'

All non-overlapping matches, each as long as can be, and '.' catches single characters by default:

    >>> import re
    >>> re.findall('ch|ll|rr|.', 'churro')
    ['ch', 'u', 'rr', 'o']

-- http://mail.python.org/mailman/listinfo/python-list
Re: split string into multi-character letters
On 25/08/2010 20:46, Jed wrote:
> Suppose I have:
> alphabet = ['a','b','c','ch','d','u','r','rr','o'] #this would include the whole alphabet but I shortened it here
> theword = 'churro'
> I would like to split the string 'churro' into a list containing:
> 'ch','u','rr','o' [snip]

You can build a regex with:

    >>> '|'.join(alphabet)
    'a|b|c|ch|d|u|r|rr|o'

You want to try to match, say, 'ch' before 'c', so you want the longest first:

    >>> '|'.join(sorted(alphabet, key=len, reverse=True))
    'ch|rr|a|b|c|d|u|r|o'

If you were going to match the Spanish alphabet then I would recommend that you do it in Unicode. Well, any text that's not pure ASCII should be done in Unicode!
-- http://mail.python.org/mailman/listinfo/python-list
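Putting those pieces together, here is a sketch of the general approach the thread describes: sort the alphabet longest-first so 'ch' is tried before 'c', escape each entry in case it contains regex metacharacters, and let findall do the scanning. The function name is made up for illustration:

```python
import re

def letter_split(word, alphabet):
    # longest entries first, so 'ch' wins over 'c' and 'rr' over 'r'
    parts = sorted(alphabet, key=len, reverse=True)
    # re.escape guards against metacharacters in an alphabet entry
    pattern = '|'.join(re.escape(p) for p in parts)
    return re.findall(pattern, word)

alphabet = ['a', 'b', 'c', 'ch', 'd', 'u', 'r', 'rr', 'o']
print(letter_split('churro', alphabet))   # ['ch', 'u', 'rr', 'o']
```

Note that findall silently skips characters not covered by the alphabet; raise an error instead if that matters for the real application.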
Re: split string into multi-character letters
On 08/25/10 14:46, Jed wrote:
> I think this could be done with a regular expression passing the list
> called alphabet to re.match() for example, but I'm not sure how to use
> the contents of a whole list as a search string in a regular
> expression, or if it's even possible. [snip]

My first attempt at the problem:

    >>> import re
    >>> special = ['ch', 'rr', 'll']
    >>> r = re.compile(r'(?:%s)|[a-z]' % ('|'.join(re.escape(c) for c in special)), re.I)
    >>> r.findall('churro')
    ['ch', 'u', 'rr', 'o']
    >>> [r.findall(word) for word in 'churro lorenzo caballo'.split()]
    [['ch', 'u', 'rr', 'o'], ['l', 'o', 'r', 'e', 'n', 'z', 'o'], ['c', 'a', 'b', 'a', 'll', 'o']]

This joins escaped versions of all your special characters. Due to the sequential nature used by Python's re module to handle | or-branching, the paired versions get tested (and found) before proceeding to the single letters.

-tkc
-- http://mail.python.org/mailman/listinfo/python-list
Re: split string into multi-character letters
On Wednesday 25 August 2010, it occurred to Jed to exclaim:
> My real application is a bit more complex than the Spanish alphabet so
> I'm looking for a fairly general solution. [snip]

A very simple solution that might be general enough:

    >>> def tokensplit(string, bits):
    ...     while string:
    ...         for b in bits:
    ...             if string.startswith(b):
    ...                 yield b
    ...                 string = string[len(b):]
    ...                 break
    ...         else:
    ...             raise ValueError("string not composed of the right bits.")
    ...
    >>> alphabet = ['a','b','c','ch','d','u','r','rr','o']
    >>> # move longer letters to the front
    >>> alphabet.sort(key=len, reverse=True)
    >>> list(tokensplit("churro", alphabet))
    ['ch', 'u', 'rr', 'o']

-- http://mail.python.org/mailman/listinfo/python-list
Re: Can PySerial's write method be called by multiple threads?
Hi John,

John Nagle na...@animats.com wrote in message news:4c75768a$0$1608$742ec...@news.sonic.net...
> You don't need a queue, though; just use your own write function with
> a lock.

Hmm... that would certainly work. I suppose it's even more efficient than a queue, in that the first thing the queue is going to do is to acquire a lock; thanks for the idea!

> def atomicwrite(fd, data):
>     with lok:
>         fd.write(data)

Cool, I didn't know that threading.Lock() supported "with"! -- Just the other day I was contemplating how one might go about duplicating the pattern in C++ where you do something like this:

    {
        Lock lok;   // Constructor acquires lock, will be held until destructor
                    // called (i.e., while lok remains in scope)
        DoSomething();
    }   // Lock released

...clearly "with" does the job here.

---Joel
-- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On Aug 24, 9:05 pm, Hugh Aguilar hughaguila...@yahoo.com wrote:
> What about using what I learned to write programs that work? Does that
> count for anything?

It obviously counts, but it's not the only thing that matters. Where I'm employed, I am currently managing a set of code that "works" but the quality of that code is poor. The previous programmer suffered from a bad case of cut-and-paste programming mixed with an unsophisticated use of the language. The result is that this code that "works" is a maintenance nightmare, has poor performance, wastes memory, and is very brittle. The high level of coupling between code means that when you change virtually anything, it invariably breaks something else.

And then you have the issue of the programmer thinking the code "works" but it doesn't actually meet the needs of the customer. The same code I'm talking about has a feature where you can pass a message over the network and have the value you pass configure a parameter. It works fine, but it's not what the customer wants. The customer wants to be able to bump the value up and down, not set it to an absolute value. So does the code "work"? Depends on the definition of "work."

In my experience, there is a class of software developers who care only that their code "works" (or more likely, *appears* to work) and think that is the gold standard. It's an attitude that's easy for hobbyists to take, but not one that serious professionals can afford to have. A hobbyist can freely spend hours hacking away and having a grand time writing code. Professionals are paid for their efforts, and that means that *someone* is spending both time and money on the effort. A professional who cares only about slamming out code that "works" is invariably merely moving the cost of maintaining and extending the code to someone else. It becomes a hidden cost, but why do they care... it isn't here and now, and probably won't be their problem.

> If I don't have a professor to pat me on the back, will my programs
> stop working?

What a low bar you set for yourself. Do efficiency, clarity, maintainability, extensibility, and elegance not matter to you?
-- http://mail.python.org/mailman/listinfo/python-list
Re: split string into multi-character letters
Jed wrote:
> I would like to split the string 'churro' into a list containing:
> 'ch','u','rr','o' [snip]
> Thanks, Jed

I don't know the Spanish alphabet, and you didn't say in what way your real application is more complex, but maybe something like this could be a starter:

    In [13]: import re

    In [14]: theword = 'churro'

    In [15]: two_chars = ['ch', 'rr']

    In [16]: re.findall('|'.join(two_chars) + '|[a-z]', theword)
    Out[16]: ['ch', 'u', 'rr', 'o']

-- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On Aug 25, 1:44 pm, John Passaniti john.passan...@gmail.com wrote:
> A professional who cares only about slamming out code that "works" is
> invariably merely moving the cost of maintaining and extending the
> code to someone else. It becomes a hidden cost, but why do they
> care... it isn't here and now, and probably won't be their problem.
> [snip]

I agree. Sadly, with managers, especially non-technical managers, it's hard to make this case when the weasel guy says "See! It's working."
-- http://mail.python.org/mailman/listinfo/python-list
Re: speed of numpy.power()?
Peter Pearson wrote:
> On Wed, 25 Aug 2010 06:59:36 -0700 (PDT), Carlos Grohmann wrote:
>> I'd like to hear from you on the benefits of using numpy.power(x,y)
>> over (x*x*x*x..)
> Using the dis package under Python 2.5, I see that computing
> x_to_the_16 = x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x uses 15 multiplies. I
> hope that numpy.power does it with 4.

Right. The square/multiply algorithm takes something like 2*log2(y) multiplies worst case. That should not only be faster, but quite likely more accurate, at least for non-integer x values and large enough integer y.

DaveA
-- http://mail.python.org/mailman/listinfo/python-list
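The square-and-multiply idea can be sketched in plain Python as follows. This is an illustration of the algorithm, not numpy's actual implementation; the multiply counter is made up just to show the cost:

```python
def power(x, n):
    """x**n for integer n >= 0 by repeated squaring.

    Returns (result, multiply_count) so the cost is visible."""
    result, mults = 1, 0
    while n:
        if n & 1:                   # this bit of the exponent is set
            if result == 1:
                result = x          # first factor: no multiply needed
            else:
                result *= x
                mults += 1
        n >>= 1
        if n:                       # square only while bits remain
            x *= x
            mults += 1
    return result, mults

print(power(3, 16))   # (43046721, 4): four squarings vs. 15 naive multiplies
```

x**16 needs only the four squarings x², x⁴, x⁸, x¹⁶, which is exactly the "4 instead of 15" Peter hoped for.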
Overload print
Hi All

Is there any way in a class to overload the print function?

    class foo_class():
        pass

    cc = foo_class()
    print cc

Gives: __main__.foo_class instance at ...

Can I do something like:

    class foo_class():
        def __print__(self):
            print "hello"

    cc = foo_class()
    print cc

Gives: hello

I'm looking for a nice way to print variables in a class just by asking to print it.

Cheers

Ross

--
Ross Williamson
University of Chicago
Department of Astronomy & Astrophysics
773-834-9785 (office)
312-504-3051 (Cell)
-- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
John Passaniti john.passan...@gmail.com writes:
> Hello to those outside of comp.lang.forth, where Hugh usually leaves
> his slime trail. I seriously doubt many people will bother to read the
> message thread Hugh references, but if you do, you'll get to delight
> in the same nonsense Hugh has brought to comp.lang.forth. [snip]

I did :-). I have somewhat followed Forth from a far, far distance since the 80's (including hardware), and did read several messages in the thread, also since it was not clear what Hugh was referring to.

--
John Bokma                                                               j3b
Blog: http://johnbokma.com/    Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/
-- http://mail.python.org/mailman/listinfo/python-list
Re: Overload print
On 25 Aug, 22:18, Ross Williamson rosswilliamson@gmail.com wrote:
> Is there anyway in a class to overload the print function? [snip]

Yes. Just define the __str__ method, like this:

    class foo_class():
        def __str__(self):
            return "hello"

-- http://mail.python.org/mailman/listinfo/python-list
Re: Overload print
On Wed, Aug 25, 2010 at 2:18 PM, Ross Williamson rosswilliamson@gmail.com wrote:
> Is there anyway in a class to overload the print function? I'm looking
> at finding nice way to print variables in a class just by asking to
> print it [snip]

You want to overload the __str__() method:
http://docs.python.org/reference/datamodel.html#object.__str__

Cheers,
Chris
--
http://blog.rebertia.com
-- http://mail.python.org/mailman/listinfo/python-list
Re: Overload print
On Wed, 25 Aug 2010 16:18:15 -0500 Ross Williamson rosswilliamson@gmail.com wrote:
> Hi All
> Is there anyway in a class to overload the print function?

Your terminology threw me off for a moment. You don't want to override print. You want to override the default representation of an object.

> class foo_class():
>     pass
>
> cc = foo_class()
> print cc
>
> Gives: __main__.foo_class instance at

That's the default representation.

> Can I do something like:
>
> class foo_class():
>     def __print__(self):
>         print "hello"

Close. Check this:

    >>> class foo_class():
    ...     def __repr__(self):
    ...         return "hello"
    ...
    >>> x = foo_class()
    >>> x
    hello

--
D'Arcy J.M. Cain da...@druid.net   | Democracy is three wolves
http://www.druid.net/darcy/        | and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   | what's for dinner.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Overload print
Ross Williamson wrote:
> Hi All
> Is there anyway in a class to overload the print function?

In Python 2.x, print is a statement and thus can't be overloaded. That's exactly the reason why Python 3 has turned print into a function.

> class foo_class():
>     def __print__(self):
>         print "hello"
>
> cc = foo_class()
> print cc
>
> Gives: hello

Hmm, on what Python version are you? To my knowledge there is no __print__ special method. Did you mean __str__ or __repr__?

> I'm looking at finding nice way to print variables in a class just by
> asking to print it

In Python 3 you *can* overload print(), but still, you'd better define __str__() on your class to return a string, representing whatever you want:

    In [11]: class Foo(object):
       ....:     def __str__(self):
       ....:         return "foo"
       ....:

    In [12]: f = Foo()

    In [13]: print f
    foo

-- http://mail.python.org/mailman/listinfo/python-list
Re: Overload print
On Wed, Aug 25, 2010 at 2:23 PM, Glenn Hutchings zond...@gmail.com wrote:
> Yes. Just define the __str__ method, like this:
>
>     class foo_class():
>         def __str__(self):
>             return "hello"

I'd recommend looking at both the __str__ and __repr__ functions at http://docs.python.org/reference/datamodel.html. Depending on your specific use case, it's possible __repr__ may be preferred for you.
-- http://mail.python.org/mailman/listinfo/python-list
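Pulling the advice from this thread together, here is a small sketch (the class and strings are made up) showing how __str__ and __repr__ divide the work: print and str() use __str__, while the interactive prompt and repr() use __repr__:

```python
class Bird(object):
    """Toy class illustrating __str__ vs. __repr__."""
    def __init__(self, name):
        self.name = name

    def __str__(self):
        # used by `print obj` / str(obj): friendly output
        return "bird named %s" % self.name

    def __repr__(self):
        # used by the interactive prompt and repr(obj): unambiguous
        return "Bird(%r)" % self.name

cc = Bird("owl")
print(str(cc))    # bird named owl
print(repr(cc))   # Bird('owl')
```

If only __repr__ is defined, str() falls back to it, so defining __repr__ alone (as in D'Arcy's example) also changes what print shows.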
matplotlib pyplot contourf with 1-D array (vector)
All, I’m having a problem with the matplotlib.pyplot.contourf function. I have a 1-D array of latitudes (mesolat), a 1-D array of longitudes (mesolon), and a 1-D array of rainfall values (rain) at those corresponding lat, lon points. After importing the necessary libraries, and reading in these 1-D arrays, I do the following commands: p = Basemap(projection='lcc',llcrnrlon=-108.173,llcrnrlat=26.80, urcrnrlon=-81.944664,urcrnrlat=45.730892, lon_0=-97.00, lat_0=37.00, resolution='i') px,py = p(mesolon, mesolat) prplvls = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] crain = p.contourf(px,py,rain,prplvls) At this point the contourf function returns an error saying “Input z must be a 2D array.” However, based on the documentation (http:// matplotlib.sourceforge.net/api/ pyplot_api.html#matplotlib.pyplot.contourf) I thought that as long as px, py, and rain are the same dimensions, everything should be fine. Apparently that is not the case? If 1D arrays are not allowed in contourf, then how can I change my data into a 2D array? Thanks in advance for the help. -- http://mail.python.org/mailman/listinfo/python-list
Re: matplotlib pyplot contourf with 1-D array (vector)
On Aug 25, 4:57 pm, becky_s rda.se...@gmail.com wrote:
> At this point the contourf function returns an error saying "Input z
> must be a 2D array." [snip]

I neglected to mention that these are masked arrays, due to some missing data. I tried using griddata:

    mesolati = np.linspace(33.8, 37.0, 150)
    mesoloni = np.linspace(-94.5, -102.9, 150)
    raini = griddata(mesolon, mesolat, rain, mesoloni, mesolati)

but the raini array returned was entirely masked (no values).

Thanks again,
Becky
-- http://mail.python.org/mailman/listinfo/python-list
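One way around the "Input z must be a 2D array" requirement is to put the scattered 1-D points onto a regular grid yourself. Below is a minimal, self-contained sketch using plain numpy with a crude nearest-neighbour fill and made-up sample values; griddata does proper interpolation, but the shape bookkeeping is the same: Z must be 2-D with shape (len(lati), len(loni)):

```python
import numpy as np

# hypothetical scattered observations (1-D arrays, as in the post)
lon  = np.array([-94.5, -96.0, -98.2, -100.1, -102.9])
lat  = np.array([33.8, 34.5, 35.2, 36.1, 37.0])
rain = np.array([0.0, 2.5, 5.0, 7.5, 10.0])

# regular target grid
loni = np.linspace(lon.min(), lon.max(), 50)
lati = np.linspace(lat.min(), lat.max(), 40)
glon, glat = np.meshgrid(loni, lati)      # both (40, 50)

# crude nearest-neighbour gridding: for each grid cell, take the
# value of the closest observation (stand-in for real interpolation)
dist2 = (glon[..., None] - lon) ** 2 + (glat[..., None] - lat) ** 2
raini = rain[dist2.argmin(axis=-1)]

print(raini.shape)   # (40, 50) -- 2-D, as contourf requires
```

`p.contourf(glon, glat, raini, prplvls)` would then accept the gridded array (after projecting glon/glat through the Basemap instance).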
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On Aug 25, 5:01 pm, Joshua Maurice joshuamaur...@gmail.com wrote: I agree. Sadly, with managers, especially non-technical managers, it's hard to make this case when the weasel guy says See! It's working.. Actually, it's not that hard. The key to communicating the true cost of software development to non-technical managers (and even some technical ones!) is to express the cost in terms of a metaphor they can understand. Non-technical managers may not understand the technology or details of software development, but they can probably understand money. So finding a metaphor along those lines can help them to understand. http://c2.com/cgi/wiki?WardExplainsDebtMetaphor I've found that explaining the need to improve design and code quality in terms of a debt metaphor usually helps non-technical managers have a very real, very concrete understanding of the problem. For example, telling a non-technical manager that a piece of code is poorly written and needs to be refactored may not resonate with them. To them, the code works and isn't that the only thing that matters? But put in terms of a debt metaphor, it becomes easier for them to see the problem. -- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On Aug 25, 4:01 pm, John Passaniti john.passan...@gmail.com wrote:
> I've found that explaining the need to improve design and code quality
> in terms of a debt metaphor usually helps non-technical managers have
> a very real, very concrete understanding of the problem. [snip]

But then it becomes a game of "How bad is this code exactly?" and "How much technical debt have we accrued?" At least in my company's culture, it is quite hard.
-- http://mail.python.org/mailman/listinfo/python-list
Re: split string into multi-character letters
On 08/25/10 14:46, Jed wrote:
> I would like to split the string 'churro' into a list containing:
> 'ch','u','rr','o'

Dirt simple, straightforward, easily generalized solution:

    def sp_split(s):
        n, i, ret = len(s), 0, []
        while i < n:
            s2 = s[i:i+2]
            if s2 in ('ch', 'll', 'rr'):
                ret.append(s2)
                i += 2
            else:
                ret.append(s[i])
                i += 1
        return ret

    print(sp_split('churro'))  # ['ch', 'u', 'rr', 'o']

--
Terry Jan Reedy
-- http://mail.python.org/mailman/listinfo/python-list
Re: Newbie: Win32 COM problem
On 25/08/2010 10:33 PM, Paul Hemans wrote:
>   File "C:\development\PyXLS\pyXLS.py", line 13, in createSheet
>     def createBook(self):
> AttributeError: WrapXLS instance has no attribute '_book'
> pythoncom error: Python error invoking COM method.
> Can anyone help?

That line seems an unlikely source of the error. Note that as win32com uses an in-process model by default, your problem may be that you changed your implementation but didn't restart the hosting process - and therefore are still using an earlier implementation.

HTH,

Mark
-- http://mail.python.org/mailman/listinfo/python-list
Re: Newbie: Win32 COM problem
Yes, that was it. I just needed to restart the host process. Thanks.

Mark Hammond skippy.hamm...@gmail.com wrote in message news:mailman.51.1282784920.29448.python-l...@python.org...
> That line seems an unlikely source of the error. Note that as win32com
> uses an in-process model by default, your problem may be that you
> changed your implementation but didn't restart the hosting process -
> and therefore are still using an earlier implementation. [snip]
-- http://mail.python.org/mailman/listinfo/python-list
Path / Listing and os.walk problem.
Hi. So here is my problem: I have render files in a directory like this: c:\log\renderfiles\HPO7_SEQ004_031_VDM_DIF_V001.0001.exr c:\log\renderfiles\HPO7_SEQ004_031_VDM_DIF_V001.0002.exr c:\log\renderfiles\HPO7_SEQ004_031_VDM_DIF_V001.0003.exr c:\log\renderfiles\HPO7_SEQ004_031_VDM_AMB_V001.0001.exr c:\log\renderfiles\HPO7_SEQ004_031_VDM_AMB_V001.0002.exr c:\log\renderfiles\HPO7_SEQ004_031_VDM_AMB_V001.0003.exr The truth is, there are about 1000 files in the directory (C:\log\renderfiles\). What I'm looking to do is extract the first part of the filenames as a list, but without duplicates: I don't need HPO7_SEQ004_031_VDM_AMB extracted 150 times just because there are 150 frames (not sure if that's clear, though). So far, I would like the list to look like: [HPO7_SEQ004_031_VDM_DIF, HPO7_SEQ004_031_VDM_AMB, etc...] I started to think about using for (path, dirs, files) in os.walk(path): list.append(files) but that kind of thing will just append all 1000 files, which I don't want; and what's more, I don't want the part after AMB or DIF in the filenames either (something I could delete using a split, if I read correctly?). I tried searching the internet for an answer, but it seems I found nothing about it. Can someone help me with this, please, and show me the way? Thank you! :) -- http://mail.python.org/mailman/listinfo/python-list
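One way to do what the poster describes, sketched under the assumption that the frame/version suffix is always the final `_V###` token plus a dot-separated frame number and extension (the function name is illustrative, not from any library):

```python
import os

def unique_prefixes(directory):
    """Collect each distinct filename prefix (everything before the
    trailing _V###.frame.ext part) exactly once, preserving the order
    in which the prefixes are first seen."""
    seen = []
    for dirpath, dirnames, filenames in os.walk(directory):
        for name in filenames:
            if not name.lower().endswith(".exr"):
                continue
            # 'HPO7_SEQ004_031_VDM_DIF_V001.0001.exr'
            #   -> split off '.0001.exr', then drop the last '_V001' token
            prefix = name.split(".")[0].rsplit("_", 1)[0]
            if prefix not in seen:   # keep each prefix only once
                seen.append(prefix)
    return seen

# e.g. unique_prefixes(r"c:\log\renderfiles") would return a list such as
# ['HPO7_SEQ004_031_VDM_DIF', 'HPO7_SEQ004_031_VDM_AMB'] (directory order)
```

For 1000 files this is fast enough; if the prefix list could grow large, a `set` for membership testing would be the usual optimization.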
[issue8781] 32-bit wchar_t doesn't need to be unsigned to be usable (I think)
Florent Xicluna florent.xicl...@gmail.com added the comment: Hi Daniel, there's a test failure which is related to r84307 on a Windows buildbot:

FAIL: test_cw_strings (ctypes.test.test_parameters.SimpleTypesTestCase)
Traceback (most recent call last):
  File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\ctypes\test\test_parameters.py", line 78, in test_cw_strings
    self.assertTrue(c_wchar_p.from_param(s)._obj is s)
AssertionError: False is not True

http://www.python.org/dev/buildbot/builders/x86%20XP-4%203.x/builds/2837 -- keywords: +buildbot nosy: +flox status: closed - open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8781 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9679] unicode DNS names in urllib, urlopen
New submission from Martin v. Löwis mar...@v.loewis.de: Copy of issue 1027206; support in the socket module was provided, but this request remains: other modules should also support unicode hostnames (httplib already does, but urllib and urllib2 don't). -- components: Library (Lib), Unicode keywords: buildbot, patch messages: 114884 nosy: baikie, flox, gdamjan, haypo, loewis, orsenthil priority: normal severity: normal stage: patch review status: open title: unicode DNS names in urllib, urlopen type: feature request versions: Python 3.2 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9679 ___
[issue1559298] test_popen fails on Windows if installed to Program Files
Changes by Tim Golden m...@timgolden.me.uk: -- assignee: - tim.golden nosy: +tim.golden ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1559298 ___
[issue1027206] unicode DNS names in socket, urllib, urlopen
Martin v. Löwis mar...@v.loewis.de added the comment: I have now committed file 18615 as r84313: thanks for the patch. I have split this issue into two: this one is only about the socket module, and #9679 carries any remaining features (it would be good if we have only one bug per bug report). Since the buildbots are also happy now (after r84277), I'm closing this as fixed again. -- status: open - closed superseder: - unicode DNS names in urllib, urlopen ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1027206 ___
[issue9679] unicode DNS names in urllib, urlopen
Martin v. Löwis mar...@v.loewis.de added the comment: From msg60564: it's not clear to me what this request really means. It could mean that Python should support IRIs, but then, I'm not sure whether this support can be in urllib, or whether a separate library would be needed. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9679 ___
[issue8622] Add PYTHONFSENCODING environment variable
STINNER Victor victor.stin...@haypocalc.com added the comment: "test_sys is still failing on my system where LC_CTYPE only is set to utf-8" Oh yes, test_sys fails if LC_ALL or LC_CTYPE is a locale using a different encoding than ASCII (e.g. LC_ALL=fr_FR.utf8). Fixed by r84314. -- status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8622 ___
[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt
Ask Solem a...@opera.com added the comment: On closer look, your patch also ignores SystemExit. I think it's beneficial to honor SystemExit, so a user could use it as a means to replace the current process with a new one. If we keep that behavior, the real problem here is that the result handler hangs if the process that reserved a job is gone, which is going to be handled by #9205. Should we mark this as a duplicate? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8296 ___
[issue2528] Change os.access to check ACLs under Windows
Changes by Tim Golden m...@timgolden.me.uk: -- assignee: - tim.golden ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2528 ___
[issue9659] frozenset, when subclassed will yield warning upon call to super(...).__init__(iterable)
Carsten Klein carsten.kl...@axn-software.de added the comment: Thanks for the information. Where is this documented? I cannot find it in the official Python docs... TIA. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9659 ___
[issue9680] Add in declaration order support for the dictionary passed in to the meta class __init__ and __new__ methods
New submission from Carsten Klein carsten.kl...@axn-software.de: Example:

class Meta(type):
    def __new__(cls, name, bases, locals):
        print repr(locals.keys())

class Test(object):
    __metaclass__ = Meta
    A = 1
    B = 2
    C = 3
    D = 4
    E = 5

The above will yield the keys in a somewhat random order every time you start up the Python interpreter: ['__module__', 'E', 'C', 'D', 'B', '__metaclass__', 'A'] While the example is far from complete, it shows the basic dilemma for any concept that relies on the order in which the elements were declared and processed during the parse and AST traversal phases. In those first two phases one can rely on the declaration order, but as soon as we enter the __new__ method, the order is completely lost. For a framework of mine, I would like the locals dict passed as an argument to the __new__ method to give out its keys in the order in which they were declared. Thus, the above example would yield ['__metaclass__', '__module__', 'A', 'B', 'C', 'D', 'E'] The basic reason is that I find

class A(object):
    __metaclass__ = Meta
    A = 5
    Z = 9

more intuitive than

class A(object):
    __metaclass__ = Meta
    __fields__ = (('A', 5), ('Z', 9))

As you might easily guess, the main application is a new enum type I am currently developing, where order is important because every new instance of the class must always yield the same ordinals for the individual constants declared. This should not break the overall contract of dict, which defines that keys are returned in no specific order. Adding a specific order to the keys of the locals dict for class instantiation purposes only would not break existing code and should be both backwards and forwards compatible.
-- components: Interpreter Core messages: 114890 nosy: carsten.kl...@axn-software.de priority: normal severity: normal status: open title: Add in declaration order support for the dictionary passed in to the meta class __init__ and __new__ methods type: feature request versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9680 ___
[issue8781] 32-bit wchar_t doesn't need to be unsigned to be usable (I think)
Daniel Stutzbach dan...@stutzbachenterprises.com added the comment: Thanks, I will take a look sometime today. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8781 ___
[issue6978] compiler.transformer dict key bug d[1,] = 1
Kees Bos k@zx.nl added the comment: Added fix for python 2.7, which includes a test (testDictWithTupleKey) for the compiler test (Lib/test/test_compiler.py). -- status: pending - open Added file: http://bugs.python.org/file18642/compiler-bug-issue6978.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6978 ___
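For context, the `d[1,] = 1` construct in the issue title is ordinary Python (the bug was only in how the compiler package transformed its AST): a trailing comma in a subscript makes the key a one-element tuple, distinct from the bare integer key. A small illustration:

```python
d = {}
d[1,] = "tuple key"   # equivalent to d[(1,)] = "tuple key"
d[1] = "int key"      # a distinct key: the int 1, not the tuple (1,)

# the dict now holds two separate entries, (1,) and 1
```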
[issue8781] 32-bit wchar_t doesn't need to be unsigned to be usable (I think)
Daniel Stutzbach dan...@stutzbachenterprises.com added the comment: The underlying problem here is that SIZEOF_WCHAR_T is not defined in pyconfig.h on Windows. My patch assumed that it would be defined on all platforms where HAVE_WCHAR_H is defined (I had checked ./configure, but forgotten Windows). This has come up before and caused problems for other projects that assume including python.h will define SIZEOF_WCHAR_T on all platforms with HAVE_WCHAR_H: http://bugs.python.org/issue4474 http://trac.wxwidgets.org/ticket/12013 The problem with my patch can be solved in one of two ways: 1. In PC/pyconfig.h, #define SIZEOF_WCHAR_T 2, or 2. Change the #if's to: HAVE_USABLE_WCHAR_T || Py_UNICODE_SIZE == SIZEOF_WCHAR_T I prefer option #1, but it's also a more visible change than my original patch and may warrant its own issue. Thoughts? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8781 ___
[issue8781] 32-bit wchar_t doesn't need to be unsigned to be usable (I think)
Daniel Stutzbach dan...@stutzbachenterprises.com added the comment: Adding other Windows developers to the nosy list. See msg114893 where your input would be helpful. -- nosy: +brian.curtin, tim.golden ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8781 ___
[issue8781] 32-bit wchar_t doesn't need to be unsigned to be usable (I think)
Marc-Andre Lemburg m...@egenix.com added the comment: Daniel Stutzbach wrote (in msg114893): "The problem with my patch can be solved in one of two ways: 1. In PC/pyconfig.h, #define SIZEOF_WCHAR_T 2, or 2. Change the #if's to: HAVE_USABLE_WCHAR_T || Py_UNICODE_SIZE == SIZEOF_WCHAR_T [...] Thoughts?" If possible, we should do the right thing and implement #1. One thing I'm not sure about is how other Windows compilers deal with wchar_t, e.g. MinGW or the Borland compiler. I suppose that they all use the standard Windows C lib, so the parameter should be 2 for them as well, but I'm not 100% sure. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8781 ___
[issue8781] 32-bit wchar_t doesn't need to be unsigned to be usable (I think)
Daniel Stutzbach dan...@stutzbachenterprises.com added the comment: On Windows, the Python headers define HAVE_USABLE_WCHAR_T and Py_UNICODE_SIZE 2, so we are already relying on sizeof(wchar_t) == 2 on Windows. My patch ran into trouble because it inadvertently disabled that assumption in a few places, causing code to take the slow path and convert between wchar_t * and Py_UNICODE *. The test that failed checks that the fast path was taken on Windows. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8781 ___
[issue1589266] bdist_sunpkg distutils command
Holger Joukl jh...@gmx.de added the comment: "Holger, sorry your work has to be rejected." No harm done - kind of paradoxical that my employer allowed me to release the code into the wild but hasn't been willing to let me sign the contribution form, in 4 years. So it's at least out in the open and maybe useful to some (though rather outdated) - and it could be moved into some solaris-py bdist_* project. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1589266 ___
[issue9680] Add in declaration order support for the dictionary passed in to the meta class __init__ and __new__ methods
R. David Murray rdmur...@bitdance.com added the comment: The arbitrary ordering of dictionary keys is a fundamental property of Python dictionaries (dict is a hash table). PEP 3115 provides the functionality you are looking for; your metaclass just needs to be slightly more complicated. -- nosy: +r.david.murray resolution: - out of date stage: - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9680 ___
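The PEP 3115 route mentioned here can be sketched in Python 3 (the class and attribute names below are illustrative only, not a stdlib API): a metaclass defines __prepare__ to return an ordered mapping, so the class body is recorded in declaration order and __new__ can read it back:

```python
from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        # PEP 3115: the mapping returned here is used to execute the
        # class body, so an OrderedDict records declaration order.
        return OrderedDict()

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        # keep the user-declared names, in the order they appeared
        cls._field_order = [k for k in namespace if not k.startswith("__")]
        return cls

class Test(metaclass=OrderedMeta):
    A = 1
    B = 2
    C = 3

# Test._field_order == ['A', 'B', 'C']
```

This gives exactly the behavior the submitter asked for, without changing the contract of plain dict.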
[issue9679] unicode DNS names in urllib, urlopen
R. David Murray rdmur...@bitdance.com added the comment: There was a discussion about IRI on python-dev in the middle of a discussion about adding a coercable bytes type, but I can't find it. I believe the conclusion was that the best solution for IRI support was a new library that implements the full IRI spec. It is possible that we could just add IDNA support to urllib, but it isn't clear that that work would be worth it when what is really needed is full IRI support. See also issue1500504, though my guess based on the python-dev discussion and my experience with email is that an IRI library will need to be carefully designed with the py3k bytes/string separation in mind. -- nosy: +ncoghlan, r.david.murray stage: patch review - ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9679 ___
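For reference, the IDNA piece discussed here already exists as a built-in codec, so the hostname part of a unicode URL can be converted by hand today (a sketch of the existing codec, not of the proposed urllib change):

```python
# Encode a non-ASCII hostname to its DNS-safe ASCII (Punycode) form
# via the built-in "idna" codec, and decode it back again.
host = "bücher.example"
ascii_host = host.encode("idna")       # b'xn--bcher-kva.example'
assert ascii_host == b"xn--bcher-kva.example"
roundtrip = ascii_host.decode("idna")  # back to 'bücher.example'
assert roundtrip == host
```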
[issue9679] unicode DNS names in urllib, urlopen
Changes by R. David Murray rdmur...@bitdance.com: -- keywords: -buildbot, patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9679 ___
[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt
Jesse Noller jnol...@gmail.com added the comment: "If we keep that behavior, the real problem here is that the result handler hangs if the process that reserved a job is gone, which is going to be handled by #9205. Should we mark it as a duplicate?" I would tend to agree with your assessment; we're better served just gracefully handling everything per #9205. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8296 ___
[issue9610] buildbot: uncaptured python exception (smtpd), but no failure in regrtest
Giampaolo Rodola' g.rod...@gmail.com added the comment: It's very hard to tell what went wrong without an actual traceback message. What I don't understand is why the smtpd module is mentioned in the message, since apparently test_ssl.py doesn't use it at all. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9610 ___
[issue9674] make install DESTDIR=/home/blah fails when the prefix specified is /
R. David Murray rdmur...@bitdance.com added the comment: See also issue1676135. Seems that the posters were wrong in concluding that the double slashes wouldn't bother anyone using prefix=/ :) -- nosy: +r.david.murray ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9674 ___