Re: what is the difference between @property and method
Thank you On 2012/02/10, at 0:36, John Posner wrote: > On 2:59 PM, Devin Jeanpierre wrote: > > >> It is kind of funny that the docs don't ever explicitly say what a >> property is. http://docs.python.org/library/functions.html#property -- >> Devin > > Here's a writeup that does: > http://wiki.python.org/moin/AlternativeDescriptionOfProperty > > -John > -- http://mail.python.org/mailman/listinfo/python-list
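For anyone skimming the thread, a minimal sketch of the distinction being discussed (class and attribute names invented for illustration): a method is called with parentheses, while a property is computed by a getter but read like a plain attribute.

class Circle(object):
    def __init__(self, radius):
        self.radius = radius

    def area(self):                 # a plain method: called with parentheses, c.area()
        return 3.14159 * self.radius ** 2

    @property
    def diameter(self):             # a property: read like an attribute, c.diameter
        return 2 * self.radius

c = Circle(3)
print(c.area())      # 28.27431
print(c.diameter)    # 6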
Re: ANN: eGenix mxODBC Zope Database Adapter 2.0.2
Thanks a bunch for the whole team! Best, anonhung On 2/9/12, eGenix Team: M.-A. Lemburg wrote: > > ANNOUNCEMENT > > mxODBC Zope Database Adapter > > Version 2.0.2 > > for Zope and the Plone CMS > > Available for Zope 2.10 and later on > Windows, Linux, Mac OS X, FreeBSD and other platforms > > This announcement is also available on our web-site for online reading: > http://www.egenix.com/company/news/eGenix-mxODBC-Zope-DA-2.0.2-GA.html > > > INTRODUCTION > > The eGenix mxODBC Zope Database Adapter allows you to easily connect > your Zope or Plone installation to just about any database backend on > the market today, giving you the reliability of the commercially > supported eGenix product mxODBC and the flexibility of the ODBC > standard as middle-tier architecture. > > The mxODBC Zope Database Adapter is highly portable, just like Zope > itself and provides a high performance interface to all your ODBC data > sources, using a single well-supported interface on Windows, Linux, > Mac OS X, FreeBSD and other platforms. > > This makes it ideal for deployment in ZEO Clusters and Zope hosting > environments where stability and high performance are a top priority, > establishing an excellent basis and scalable solution for your Plone > CMS. > > Product page: > > http://www.egenix.com/products/zope/mxODBCZopeDA/ > > > NEWS > > We are pleased to announce a new version 2.0.2 of our mxODBC Zope DA > product. > > With the patch level 2.0.2 release we have updated the integrated > mxODBC Python Extension to the latest 3.1.1 release, which > includes a number of important workarounds for these ODBC drivers: > > * Oracle 10gR1 and 10gR2 > * Oracle 11gR1 and 11gR2 > * Teradata 13 > * Netezza > > Due to popular demand, we have also added instructions on how to > install mxODBC Zope DA 2.0 with Plone 4.1 and Zope 2.13 - even though > this combination is not officially supported by the mxODBC Zope DA 2.0 > series: > > http://www.egenix.com/products/zope/mxODBCZopeDA/#Installation > > > UPGRADING > > Licenses purchased for version 2.0.x of the mxODBC Zope DA will continue > to work with the 2.0.2 patch level release. > > Licenses purchased for version 1.0.x of the mxODBC Zope DA will not > work with version 2.0. More information about available licenses > is available on the product page: > > http://www.egenix.com/products/zope/mxODBCZopeDA/#Licensing > > Compared to the popular mxODBC Zope DA 1.0, version 2.0 offers > these enhancements: > > * Includes mxODBC 3.1 with updated support for many current ODBC >drivers, giving you more portability and features for a wider >range of database backends. > > * Mac OS X 10.6 (Snow Leopard) support. > > * Plone 3.2, 3.3, 4.0 support. Plone 4.1 works as well. > > * Zope 2.10, 2.11, 2.12 support. Zope 2.13 works as well. > > * Python 2.4 - 2.6 support. > > * Zero maintenance support to automatically reconnect the >Zope connection after a network or database problem. > > * More flexible Unicode support with options to work with >pure Unicode, plain strings or mixed setups - even for >databases that don't support Unicode > > * Automatic and transparent text encoding and decoding > > * More flexible date/time support including options to work >with Python datetime objects, mxDateTime, strings or tuples > > * New decimal support to have the Zope DA return decimal >column values using Python's decimal objects. 
> > * Fully eggified to simplify easy_install and zc.buildout based >installation > > > MORE INFORMATION > > For more information on the mxODBC Zope Database Adapter, licensing > and download instructions, please visit our web-site: > > http://www.egenix.com/products/zope/mxODBCZopeDA/ > > You can buy mxODBC Zope DA licenses online from the eGenix.com shop at: > > http://shop.egenix.com/ > > > > Thank you, > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Feb 09 2012) Python/Zope Consulting and Support ...http://www.egenix.com/ mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/ > > > ::: Try our new mxODBC.Connect Python Database Interface for free ! > > >eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany.
Re: Common LISP-style closures with Python
On Saturday, 4 February 2012 at 08:27:56 UTC+8, Antti J Ylikoski wrote: > In Python textbooks that I have read, it is usually not mentioned that > we can very easily program Common LISP-style closures with Python. It > is done as follows: > > - > > # Make a Common LISP-like closure with Python. > # > # Antti J Ylikoski 02-03-2012. > > def f1(): > n = 0 > def f2(): > nonlocal n > n += 1 > return n > return f2 > > - > > and now we can do: > > - > > >>> > >>> a=f1() > >>> b=f1() > >>> a() > 1 > >>> a() > 2 > >>> a() > 3 > >>> a() > 4 > >>> b() > 1 > >>> b() > 2 > >>> a() > 5 > >>> b() > 3 > >>> b() > 4 > >>> > > - > > i. e. we can have several functions with private local states which > are kept between function calls, in other words we can have Common > LISP-like closures. > > yours, Antti J Ylikoski > Helsinki, Finland, the EU We are not in the 1990s any more. A decent CAD or internet application today should be able to support users with at least one scripting language. Whether it's JavaScript, Java or Flash in browser-based applications, or Go and Python in the Google desktop API, keeping a commercial software application evolving in the long run is no longer the job of the publisher and the original authors of the package alone. I don't want to include a big, fat compiler in my software -- what else can I do? LISP is too fat, too. -- http://mail.python.org/mailman/listinfo/python-list
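As a side note for readers stuck on Python 2, where nonlocal does not exist, a rough equivalent of the quoted counter can be written with a mutable cell; this is only a sketch, not part of the original post:

# Python 2 compatible counter closure: a one-element list stands in for the
# 'nonlocal' cell, since 'nonlocal' only exists in Python 3.
def f1():
    n = [0]
    def f2():
        n[0] += 1
        return n[0]
    return f2

a = f1()
b = f1()
print a(), a(), b()    # prints: 1 2 1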
Re: Guide to: Learning Python Decorators
I prefer to decorate functions rather than methods. I also prefer to decorate an object so that it gains a new method built from the existing ones inherited at every class level, and I do not decorate a class unless it is necessary. I believe it is more Pythonic to add functionality to objects through aggregated scripts that use similar modules over a period of time. -- http://mail.python.org/mailman/listinfo/python-list
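For context, a minimal example of the kind of function decorator the thread is about; the names are made up and the behaviour is deliberately trivial:

import functools

def shout(func):
    """Decorator that upper-cases whatever the wrapped function returns."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello, %s" % name

print(greet("world"))    # HELLO, WORLD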
Re: round down to nearest number
On 10 February 2012 06:21, Ian Kelly wrote: > (3219 + 99) // 100 * 100 >> 3300 > (3289 + 99) // 100 * 100 >> 3300 > (328678 + 99) // 100 * 100 >> 328700 > (328 + 99) // 100 * 100 >> 400 >> >> Those are all rounded up to the nearest 100 correctly. > > One thing to be aware of though is that while the "round down" formula > works interchangeably for ints and floats, the "round up" formula does > not. > (3300.5 + 99) // 100 * 100 > 3300.0 > I'm surprised I haven't seen: >>> 212 - (212 % -100) 300 Here's a function that: * rounds up and down * works for both integers and floats * is only two operations (as opposed to 3 in the solutions given above) >>> def round(n, k): ... return n - n%k ... >>> # Round down with a positive k: ... round(167, 100) 100 >>> round(-233, 100 ... ) -300 >>> # Round up with a negative k: ... round(167, -100) 200 >>> round(-233, -100) -200 >>> # Edge cases ... round(500, -100) 500 >>> round(500, 100) 500 >>> # Floats ... round(100.5, -100) 200.0 >>> round(199.5, 100) 100.0 -- Arnaud -- http://mail.python.org/mailman/listinfo/python-list
Re: frozendict
On Fri, Feb 10, 2012 at 1:30 PM, Nathan Rice wrote: > The only thing needed to avoid the hash collision is that your hash > function is not not 100% predictable just by looking at the python > source code. I don't see why every dict would have to be created > differently. I would think having the most ubiquitous data structure > in your language be more predictable would be a priority. Oh well It's perfectly predictable. If you put a series of keys into it, you get those same keys back. Nobody ever promised anything about order. If your hash function is not 100% predictable, that means it varies on the basis of something that isn't part of either the Python interpreter or the script being run. That means that, from one execution to another of the exact same code, the results could be different. The keys will come out in different orders. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
Any idea for build code bars???
Hello all, I am new to Python but I want to practice a lot. I am interested in building an application that reads a text file, translates the information into Code 128 barcode format and prints everything on a printer. Text file content: 123456 123456 Where each line means: 1st --> print as text (e.g. 123456) 2nd --> print as a barcode (e.g. |||) With this information my script has to build a document (PDF, HTML, etc.) and send it to the printer. Can anyone tell me which Python modules I could use? Regards -- http://mail.python.org/mailman/listinfo/python-list
Re: Read-only attribute in module
On 10 February 2012 03:27, Terry Reedy wrote: > On 2/9/2012 8:04 PM, Steven D'Aprano wrote: > >> Python happily violates "consenting adults" all over the place. We have >> properties, which can easily create read-only and write-once attributes. > > > So propose that propery() work at module level, for module attributes, as > well as for class attributes. I think Steven would like something else: bare names that cannot be rebound. E.g. something like: >>> const a = 42 >>> a = 7 Would raise an exception. Is that right? -- Arnaud -- http://mail.python.org/mailman/listinfo/python-list
Re: Any idea for build code bars???
On 10/02/2012 10:10, sisifus wrote: Hello all, I am new to Python but I want to practice a lot. I am interested in building an application that reads a text file, translates the information into Code 128 barcode format and prints everything on a printer. Text file content: 123456 123456 Where each line means: 1st --> print as text (e.g. 123456) 2nd --> print as a barcode (e.g. |||) With this information my script has to build a document (PDF, HTML, etc.) and send it to the printer. Can anyone tell me which Python modules I could use? Regards Studying this should give you some ideas. http://pypi.python.org/pypi/pyBarcode -- Cheers. Mark Lawrence. -- http://mail.python.org/mailman/listinfo/python-list
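A rough sketch of how the suggested pyBarcode package might be used for the two lines of the input file; the barcode.get() helper and the availability of a 'code128' symbology are assumptions that may vary with the installed version:

import barcode                           # the pyBarcode package from PyPI

line = "123456"                          # a value read from the text file
code128 = barcode.get('code128', line)   # assumed helper: look up a symbology by name
outfile = code128.save('label')          # writes label.svg with the default SVG writer

print(line)      # 1st line: the human-readable text
print(outfile)   # 2nd line: the rendered barcode file, ready to embed in a PDF/HTML document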
Re: multiple namespaces within a single module?
On 10 February 2012 00:05, Ethan Furman wrote: > Ethan Furman wrote: >> >> Hrm -- and functions/classes/etc would have to refer to each other that >> way as well inside the namespace... not sure I'm in love with that... > > > > Not sure I hate it, either. ;) > > Slightly more sophisticated code: > > > class NameSpace(object): > def __init__(self, current_globals): > self.globals = current_globals > self.saved_globals = current_globals.copy() > > def __enter__(self): > return self > def __exit__(self, *args): > new_items = [] > for key, value in self.globals.items(): > if (key not in self.saved_globals and value is not self > or key in self.saved_globals > and value != self.saved_globals[key]): > > new_items.append((key, value)) > for key, value in new_items: > setattr(self, key, value) > del self.globals[key] > self.globals.update(self.saved_globals) > > if __name__ == '__main__': > x = 'inside main!' > with NameSpace(globals()) as a: > x = 'inside a?' > def fn1(): > print(a.x) > with NameSpace(globals()) as b: > x = 'inside b?' > def fn1(): > print(b.x) > def fn2(): > print('hello!') > b.fn1() > y = 'still inside main' > a.fn1() > b.fn1() > print(x) > print(y) > Please! Metaclasses are the obvious way to proceed here :) Then there is no need for the 'a.x' and 'b.x' cruft. marigold:junk arno$ cat namespace.py function = type(lambda:0) def change_globals(f, g): return function(f.__code__, g, f.__name__, f.__defaults__, f.__closure__) class MetaNamespace(type): def __new__(self, name, bases, attrs): attrs['__builtins__'] = __builtins__ for name, value in attrs.items(): if isinstance(value, function): attrs[name] = change_globals(value, attrs) return type.__new__(self, name, bases, attrs) class Namespace(metaclass=MetaNamespace): pass x = "inside main" class a(Namespace): x = "inside a" def fn1(): print(x) class b(Namespace): x = "inside b" def fn1(): print(x) def fn2(): print("hello") fn1() y = "inside main" a.fn1() b.fn1() b.fn2() print(x) print(y) marigold:junk arno$ python3 namespace.py inside a inside b hello inside b inside main inside main A bit more work would be needed to support nested functions and closures... -- Arnaud -- http://mail.python.org/mailman/listinfo/python-list
Re: when to use import statements in the header, when to use import statements in the blocks where they are used?
On 8 February 2012 01:48, Lei Cheng wrote: > Hi all, > > In a py file, when to use import statements in the header, when to use > import statements in the blocks where they are used? > What are the best practices? > Thanks! Aside from other answers: in some rare cases, importing within a function can avoid circularity problems (e.g. A imports B which tries itself to import A) -- Arnaud -- http://mail.python.org/mailman/listinfo/python-list
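A tiny illustration of that pattern, with hypothetical module names spam and eggs:

# spam.py (hypothetical module names)
def process():
    import eggs          # deferred until first call, which breaks the import cycle
    return eggs.helper()

# eggs.py
import spam              # safe: spam no longer imports eggs at module load time

def helper():
    return "processed"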
Re: round down to nearest number
o.O Very nice On Fri, Feb 10, 2012 at 8:58 PM, Arnaud Delobelle wrote: > On 10 February 2012 06:21, Ian Kelly wrote: >> (3219 + 99) // 100 * 100 >>> 3300 >> (3289 + 99) // 100 * 100 >>> 3300 >> (328678 + 99) // 100 * 100 >>> 328700 >> (328 + 99) // 100 * 100 >>> 400 >>> >>> Those are all rounded up to the nearest 100 correctly. >> >> One thing to be aware of though is that while the "round down" formula >> works interchangeably for ints and floats, the "round up" formula does >> not. >> > (3300.5 + 99) // 100 * 100 >> 3300.0 >> > > I'm surprised I haven't seen: > 212 - (212 % -100) > 300 > > Here's a function that: > * rounds up and down > * works for both integers and floats > * is only two operations (as opposed to 3 in the solutions given above) > def round(n, k): > ... return n - n%k > ... # Round down with a positive k: > ... round(167, 100) > 100 round(-233, 100 > ... ) > -300 # Round up with a negative k: > ... round(167, -100) > 200 round(-233, -100) > -200 # Edge cases > ... round(500, -100) > 500 round(500, 100) > 500 # Floats > ... round(100.5, -100) > 200.0 round(199.5, 100) > 100.0 > > -- > Arnaud > -- > http://mail.python.org/mailman/listinfo/python-list -- http://mail.python.org/mailman/listinfo/python-list
Re: multiple namespaces within a single module?
jkn wrote: > Hi Peter > > On Feb 9, 7:33 pm, Peter Otten <__pete...@web.de> wrote: >> jkn wrote: >> > is it possible to have multiple namespaces within a single python >> > module? >> >> Unless you are abusing classes I don't think so. >> >> > I have a small app which is in three or four .py files. For various >> > reasons I would like to (perhaps optionally) combine these into one >> > file. >> >> Rename your main script into __main__.py, put it into a zip file together >> with the other modules and run that. > > Hmm ... thanks for mentioning this feature, I didn't know of it > before. Sounds great, except that I gather it needs Python >2.5? I'm > stuck with v2.4 at the moment unfortunately... You can import and run explicitly, $ PYTHONPATH=mylib.zip python -c'import mainmod; mainmod.main()' assuming you have a module mainmod.py containing a function main() in mylib.zip. -- http://mail.python.org/mailman/listinfo/python-list
Re: Read-only attribute in module
Terry Reedy wrote > > On 2/9/2012 6:43 AM, Mateusz Loskot wrote: >> import xyz print(xyz.flag) # OK >> xyz.flag = 0 # error due to no write access > > Why prevent that? If you called it 'FLAG', that would indicate that it > is a constant that should not be changed. While Python make some effort > to prevent bugs, it is generally a 'consenting adults' language. > Terry, The intent of xyz.flag is that it is a value set by the module internally. xyz is a module wrapping a C library. The C library defines concept of a global flag set by the C functions at some events, so user can check value of this flag. I can provide access to it with function: xyz.get_flag() But, I thought it would be more convenient to have a read-only property in scope of the module. Sometimes, especially when wrapping C code, it is not possible to map C API semantics to Python concepts as 1:1. - -- Mateusz Loskot http://mateusz.loskot.net -- View this message in context: http://python.6.n6.nabble.com/Read-only-attribute-in-module-tp4378950p4382967.html Sent from the Python - python-list mailing list archive at Nabble.com. -- http://mail.python.org/mailman/listinfo/python-list
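One workaround that is possible today, sketched below with placeholder names, is to swap the module object in sys.modules for a class instance, so that property() applies; whether the indirection beats a plain xyz.get_flag() is a matter of taste:

# xyz.py -- sketch only; _read_flag() stands in for the real C-library call
import sys

def _read_flag():
    return 1                              # placeholder value, purely for illustration

class _Module(object):
    @property
    def flag(self):                       # read-only: xyz.flag = 0 raises AttributeError
        return _read_flag()

_mod = _Module()
_mod._real_module = sys.modules[__name__] # keep the original module object alive
sys.modules[__name__] = _mod

After this, "import xyz; xyz.flag" returns the live value, while assignment to xyz.flag fails.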
log and figure out what bits are slow and optimize them.
Hi, I want to log the time taken to complete database requests inside a method/function using a decorator. Is that possible? I think I have to inject logging code inside the methods/functions, or modify them. I have written a decorator that logs how long a method/function takes to execute, and it is working well. My requirement: log everything, figure out which bits are slow, and optimize them. What are your suggestions? -- http://mail.python.org/mailman/listinfo/python-list
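For reference, the kind of decorator described above might look roughly like this; it is a sketch built on the standard logging and time modules, with invented names:

import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("timing")

def log_timing(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            logger.info("%s took %.3f seconds", func.__name__, time.time() - start)
    return wrapper

@log_timing
def fetch_rows():
    time.sleep(0.1)      # stand-in for a real database request
    return []

fetch_rows()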
Re: log and figure out what bits are slow and optimize them.
On 10 February 2012 12:30, sajuptpm wrote: > Hi, > > I want to log time taken to complete database requests inside a method/ > function using decorator . is it possible > I think, i have to inject log code inside the method/fuctions or > modify it. > I wrote a decorator to log taken by a method/function to complete it > execution and its working well. > > My requirement : log everything and figure out what bits are slow and > optimize them. > > What are your suggestions ?? Are you familiar with this? http://docs.python.org/library/profile.html -- Arnaud -- http://mail.python.org/mailman/listinfo/python-list
Re: log and figure out what bits are slow and optimize them.
Hi, Yes i saw profile module, I think i have to do function call via cProfile.run('foo()') I know, we can debug this way. But i need a fixed logging system .. On Fri, Feb 10, 2012 at 6:08 PM, Arnaud Delobelle wrote: > On 10 February 2012 12:30, sajuptpm wrote: > > Hi, > > > > I want to log time taken to complete database requests inside a method/ > > function using decorator . is it possible > > I think, i have to inject log code inside the method/fuctions or > > modify it. > > I wrote a decorator to log taken by a method/function to complete it > > execution and its working well. > > > > My requirement : log everything and figure out what bits are slow and > > optimize them. > > > > What are your suggestions ?? > > Are you familiar with this? > > http://docs.python.org/library/profile.html > > -- > Arnaud > -- Regards Saju Madhavan +91 09535134654 Anyone who has never made a mistake has never tried anything new -- Albert Einstein -- http://mail.python.org/mailman/listinfo/python-list
Re: log and figure out what bits are slow and optimize them.
On Fri, Feb 10, 2012 at 6:12 PM, Saju M wrote: > Hi, > > Yes i saw profile module, > I think i have to do function call via > > cProfile.run('foo()') > > I know, we can debug this way. > > But i need a fixed logging system .. > > > > On Fri, Feb 10, 2012 at 6:08 PM, Arnaud Delobelle wrote: > >> On 10 February 2012 12:30, sajuptpm wrote: >> > Hi, >> > >> > I want to log time taken to complete database requests inside a method/ >> > function using decorator . is it possible >> > I think, i have to inject log code inside the method/fuctions or >> > modify it. >> > I wrote a decorator to log taken by a method/function to complete it >> > execution and its working well. >> > >> > My requirement : log everything and figure out what bits are slow and >> > optimize them. >> > >> > What are your suggestions ?? >> >> Are you familiar with this? >> >> http://docs.python.org/library/profile.html >> > I need a fixed logging system and want to use it in production. I think, we can't permanently include profile's debugging code in source code, will cause any performance issue ?? > -- >> Arnaud >> > > > > -- > Regards > Saju Madhavan > +91 09535134654 > > Anyone who has never made a mistake has never tried anything new -- Albert > Einstein > -- Regards Saju Madhavan +91 09535134654 Anyone who has never made a mistake has never tried anything new -- Albert Einstein -- http://mail.python.org/mailman/listinfo/python-list
Re: log and figure out what bits are slow and optimize them.
Hi, Yes, I saw the profile module; I think I have to make the function call via cProfile.run('foo()'). I know we can debug this way. But I need a permanent logging system and want to use it in production. I don't think we can leave the profiler's debugging code in the source code permanently -- would it cause a performance issue? -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
I think I finally located the issue with the sys.path extension. The problem is that I have many namespace directories, for example lib: - sub1 - sub2 lib: - sub3 - sub4 But to have everything working I had lib.sub3 in easy-install.pth. Now if I try to add something else to the path it doesn't take care of the namespace declaration (every __init__.py in the packages contains: __import__('pkg_resources').declare_namespace(__name__)) and just doesn't find the other submodules.. If I try to add manually lib.sub1, lib.sub2 changing the sys.path the imports will only work for the first one. Strangely if I just create a dev_main.pth in site-packages containing the same paths, everything works perfectly. Any suggestions now that the problem is more clear? Thanks, Andrea -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
On 02/10/2012 08:08 AM, Andrea Crotti wrote: I think I finally located the issue with the sys.path extension. The problem is that I have many namespace directories, for example lib: - sub1 - sub2 lib: - sub3 - sub4 But to have everything working I had lib.sub3 in easy-install.pth. Now if I try to add something else to the path it doesn't take care of the namespace declaration (every __init__.py in the packages contains: __import__('pkg_resources').declare_namespace(__name__)) and just doesn't find the other submodules.. If I try to add manually lib.sub1, lib.sub2 changing the sys.path the imports will only work for the first one. Strangely if I just create a dev_main.pth in site-packages containing the same paths, everything works perfectly. Any suggestions now that the problem is more clear? Thanks, Andrea The only code I saw in this thread was: sys.path.extend(paths_to_add) Can you add a print of paths_to_add, and of sys.path after you execute it? If there's only one path, are you putting it in a list anyway? If not then it won't do what you expect. -- DaveA -- http://mail.python.org/mailman/listinfo/python-list
Re: log and figure out what bits are slow and optimize them.
Please don't top post. On 10/02/2012 12:59, Saju M wrote: Yes i saw profile module, I think, i have to do function call via cProfile.run('foo()') I know, we can debug this way. But, I need a fixed logging system and want to use it in production. I think, we can't permanently include profile's debugging code in source code, will cause any performance issue ?? How about http://docs.python.org/library/logging.html ? -- Cheers. Mark Lawrence. -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
Ok now it's getting really confusing, I tried a small example to see what is the real behaviour, so I created some package namespaces (where the __init__.py declare the namespace package). /home/andrea/test_ns: total used in directory 12 available 5655372 drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:46 a.b drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:46 a.c -rw-r--r-- 1 andrea andrea 125 Feb 10 14:46 test.py /home/andrea/test_ns/a.b: total 8 drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:47 a -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.b/a: total 8 drwxr-xr-x 2 andrea andrea 4096 Feb 10 14:47 b -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.b/a/b: total 12 -rw-r--r-- 1 andrea andrea 25 Feb 10 14:36 api.py -rw-r--r-- 1 andrea andrea 153 Feb 10 14:37 api.pyc -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.c: total 8 drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:47 a -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.c/a: total 8 drwxr-xr-x 2 andrea andrea 4096 Feb 10 14:47 c -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.c/a/c: total 12 -rw-r--r-- 1 andrea andrea 20 Feb 10 14:36 api.py -rw-r--r-- 1 andrea andrea 148 Feb 10 14:38 api.pyc -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py So this test.py works perfectly: import sys sys.path.insert(0, 'a.c') sys.path.insert(0, 'a.b') from a.b import api as api_ab from a.c import api as api_ac While just mixing the order: import sys sys.path.insert(0, 'a.b') from a.b import api as api_ab sys.path.insert(0, 'a.c') from a.c import api as api_ac Doesn't work anymore [andrea@precision test_ns]$ python2 test.py Traceback (most recent call last): File "test.py", line 7, in from a.c import api as api_ac ImportError: No module named c Am I missing something/doing something stupid? -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
On 02/10/2012 09:51 AM, Andrea Crotti wrote: Ok now it's getting really confusing, I tried a small example to see what is the real behaviour, so I created some package namespaces (where the __init__.py declare the namespace package). /home/andrea/test_ns: total used in directory 12 available 5655372 drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:46 a.b drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:46 a.c -rw-r--r-- 1 andrea andrea 125 Feb 10 14:46 test.py /home/andrea/test_ns/a.b: total 8 drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:47 a -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.b/a: total 8 drwxr-xr-x 2 andrea andrea 4096 Feb 10 14:47 b -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.b/a/b: total 12 -rw-r--r-- 1 andrea andrea 25 Feb 10 14:36 api.py -rw-r--r-- 1 andrea andrea 153 Feb 10 14:37 api.pyc -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.c: total 8 drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:47 a -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.c/a: total 8 drwxr-xr-x 2 andrea andrea 4096 Feb 10 14:47 c -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py /home/andrea/test_ns/a.c/a/c: total 12 -rw-r--r-- 1 andrea andrea 20 Feb 10 14:36 api.py -rw-r--r-- 1 andrea andrea 148 Feb 10 14:38 api.pyc -rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py So this test.py works perfectly: import sys sys.path.insert(0, 'a.c') sys.path.insert(0, 'a.b') from a.b import api as api_ab from a.c import api as api_ac While just mixing the order: import sys sys.path.insert(0, 'a.b') from a.b import api as api_ab sys.path.insert(0, 'a.c') from a.c import api as api_ac Doesn't work anymore [andrea@precision test_ns]$ python2 test.py Traceback (most recent call last): File "test.py", line 7, in from a.c import api as api_ac ImportError: No module named c Am I missing something/doing something stupid? Yes, you've got periods in your directory names. A period means something special within python, and specifically within the import. When you say from a.c import api You're telling it:from package a get module c, and from there impoort the symbol api But package a has no module c, so it complains. In an earlier message you asserted you were using all absolute paths in your additions to sys.path. Here you're inserting relative ones. How come? -- DaveA -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
On 02/10/2012 03:06 PM, Dave Angel wrote: Yes, you've got periods in your directory names. A period means something special within python, and specifically within the import. When you say from a.c import api You're telling it:from package a get module c, and from there impoort the symbol api But package a has no module c, so it complains. In an earlier message you asserted you were using all absolute paths in your additions to sys.path. Here you're inserting relative ones. How come? Well yes I have periods, but that's also the real-world situation. We have many directories that are contributing to the same namespace in the same superdirectory which should not interfere, so it was decided to give this naming. It would be quite hard to change I guess and I have to prove that this is problem. I renamed everything and this import sys from os import path sys.path.insert(0, path.abspath('ab')) from a.b import api as api_ab sys.path.insert(0, path.abspath('ac')) from a.c import api as api_ac still fails, so the period in the name is not the problem. Also absolute or relative paths in this small example doesn't make any difference. Adding all the paths in one go works perfectly fine anyway, so I probably have to make sure I add them *all* before anything is imported. If there are better solutions I would like to hear them :) -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
Andrea Crotti wrote: > Ok now it's getting really confusing, I tried a small example to see > what is the real behaviour, > so I created some package namespaces (where the __init__.py declare the > namespace package). > >/home/andrea/test_ns: >total used in directory 12 available 5655372 >drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:46 a.b >drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:46 a.c >-rw-r--r-- 1 andrea andrea 125 Feb 10 14:46 test.py > >/home/andrea/test_ns/a.b: >total 8 >drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:47 a >-rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py > >/home/andrea/test_ns/a.b/a: >total 8 >drwxr-xr-x 2 andrea andrea 4096 Feb 10 14:47 b >-rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py > >/home/andrea/test_ns/a.b/a/b: >total 12 >-rw-r--r-- 1 andrea andrea 25 Feb 10 14:36 api.py >-rw-r--r-- 1 andrea andrea 153 Feb 10 14:37 api.pyc >-rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py > >/home/andrea/test_ns/a.c: >total 8 >drwxr-xr-x 3 andrea andrea 4096 Feb 10 14:47 a >-rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py > >/home/andrea/test_ns/a.c/a: >total 8 >drwxr-xr-x 2 andrea andrea 4096 Feb 10 14:47 c >-rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py > >/home/andrea/test_ns/a.c/a/c: >total 12 >-rw-r--r-- 1 andrea andrea 20 Feb 10 14:36 api.py >-rw-r--r-- 1 andrea andrea 148 Feb 10 14:38 api.pyc >-rw-r--r-- 1 andrea andrea 56 Feb 10 14:35 __init__.py > > > So this test.py works perfectly: > import sys > sys.path.insert(0, 'a.c') > sys.path.insert(0, 'a.b') > > from a.b import api as api_ab > > from a.c import api as api_ac > > While just mixing the order: > import sys > sys.path.insert(0, 'a.b') > > from a.b import api as api_ab > > sys.path.insert(0, 'a.c') > from a.c import api as api_ac > > Doesn't work anymore > > [andrea@precision test_ns]$ python2 test.py > Traceback (most recent call last): >File "test.py", line 7, in > from a.c import api as api_ac > ImportError: No module named c > > > > Am I missing something/doing something stupid? The package a will be either a.c/a/ or a.b/a/ depending on whether a.c/ or a.b/ appears first in sys.path. If it's a.c/a, that does not contain a c submodule or subpackage. -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
On 02/10/2012 03:27 PM, Peter Otten wrote: The package a will be either a.c/a/ or a.b/a/ depending on whether a.c/ or a.b/ appears first in sys.path. If it's a.c/a, that does not contain a c submodule or subpackage. I would agree if I didn't have this declaration __import__('pkg_resources').declare_namespace(__name__) in each subdirectory. And how do you explain the fact that changing the order everything works? Namespace packages are supposed to work exactly like this, if it doesn't resolve the "c" instead of raising an Exception it goes forward in the sys.path and try again, which is what actually happens when I do this sys.path.append(path.abspath('ab')) sys.path.append(path.abspath('ac')) from a.b import api as api_ab from a.c import api as api_ac Maybe this: Definition: pkgutil.extend_path(path, name) Docstring: Extend a package's path. Intended use is to place the following code in a package's __init__.py: from pkgutil import extend_path __path__ = extend_path(__path__, __name__) might come handy, from what I'm gathering is the only way to have a more dynamic path manipulation with namespace packages.. -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
Peter Otten wrote: > If it's a.c/a, that does not contain a c submodule or subpackage. Sorry, I meant a.b/a -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
Andrea Crotti wrote: > On 02/10/2012 03:27 PM, Peter Otten wrote: >> The package a will be either a.c/a/ or a.b/a/ depending on whether >> a.c/ or a.b/ appears first in sys.path. If it's a.c/a, that does not >> contain a c submodule or subpackage. > > > I would agree if I didn't have this declaration > __import__('pkg_resources').declare_namespace(__name__) > in each subdirectory. Sorry, you didn't mention that in the post I responded to and I didn't follow the thread closely. I found a description for declare_namespace() at http://peak.telecommunity.com/DevCenter/PkgResources but the text explaining the function is completely unintelligible to me, so I cannot contribute anything helpful here :( -- http://mail.python.org/mailman/listinfo/python-list
Fabric Engine + Python benchmarks
Hi all - just letting you know that we recently integrated Fabric with Python. Fabric is a high-performance multi-threading engine that integrates with dynamic languages. We're releasing soon (probably under AGPL), and we just released these benchmarks. http://fabric-engine.com/2012/02/fabric-engine-python-value-at-risk-benchmark/ Before anyone starts attacking the vanilla python :), the point we want to make is that our Python integration performs just as well as our Node.js implementation (benchmarks found at http://fabric-engine.com/tag/benchmarks/). Obviously, it's pretty trivial to compile Python to byte code, and present multi-threaded versions of the program - however, the goal of Fabric is to handle that side of things automatically (that's what the engine does). This means we take care of threading, dynamic compilation, memory management etc Interested to get your feedback. Kind regards, Paul (I work at Fabric) -- http://mail.python.org/mailman/listinfo/python-list
Re: frozendict
On Fri, Feb 10, 2012 at 5:08 AM, Chris Angelico wrote: > On Fri, Feb 10, 2012 at 1:30 PM, Nathan Rice > wrote: >> The only thing needed to avoid the hash collision is that your hash >> function is not not 100% predictable just by looking at the python >> source code. I don't see why every dict would have to be created >> differently. I would think having the most ubiquitous data structure >> in your language be more predictable would be a priority. Oh well > > It's perfectly predictable. If you put a series of keys into it, you > get those same keys back. Nobody ever promised anything about order. > > If your hash function is not 100% predictable, that means it varies on > the basis of something that isn't part of either the Python > interpreter or the script being run. That means that, from one > execution to another of the exact same code, the results could be > different. The keys will come out in different orders. I think having a hash function that is not referentially transparent is a bad thing. Basing your language on a non-deterministic function? Yeah... A type factory that produces the dict type on interpreter initialization (or, replaces the hash function, rather), and uses time/system information/etc would solve the problem, while limiting the introduced non-determinism. I don't care if the order of iteration for keys is different from interpreter run to run. I have used frozenset(mydict.items()) when my requirements dictated. It is a minor performance hit. Lets also not forget that knowing an object is immutable lets you do a lot of optimizations; it can be inlined, it is safe to convert to a contiguous block of memory and stuff in cache, etc. If you know the input to a function is guaranteed to be frozen you can just go crazy. Being able to freeze(anyobject) seems like a pretty clear win. Whether or not it is pythonic is debatable. I'd argue if the meaning of pythonic in some context is limiting, we should consider updating the term rather than being dogmatic. Just my 2 cents... Nathan -- http://mail.python.org/mailman/listinfo/python-list
Re: frozendict
On Fri, Feb 10, 2012 at 8:53 AM, Nathan Rice wrote: > Lets also not forget that knowing an object is immutable lets you do a > lot of optimizations; it can be inlined, it is safe to convert to a > contiguous block of memory and stuff in cache, etc. If you know the > input to a function is guaranteed to be frozen you can just go crazy. > Being able to freeze(anyobject) seems like a pretty clear win. > Whether or not it is pythonic is debatable. I'd argue if the meaning > of pythonic in some context is limiting, we should consider updating > the term rather than being dogmatic. It's been proposed previously and rejected: PEP 351 -- The freeze protocol http://www.python.org/dev/peps/pep-0351/ Cheers, Chris -- http://mail.python.org/mailman/listinfo/python-list
Re: multiple namespaces within a single module?
Hi Peter On Feb 10, 11:10 am, Peter Otten <__pete...@web.de> wrote: [...] > > Hmm ... thanks for mentioning this feature, I didn't know of it > > before. Sounds great, except that I gather it needs Python >2.5? I'm > > stuck with v2.4 at the moment unfortunately... > > You can import and run explicitly, > > $ PYTHONPATH mylib.zip python -c'import mainmod; mainmod.main()' > > assuming you have a module mainmod.py containing a function main() in > mylib.zip. That looks very useful -thanks for the tip! J^n -- http://mail.python.org/mailman/listinfo/python-list
Re: Fabric Engine + Python benchmarks
Fabric Paul, 10.02.2012 17:04: > Fabric is a high-performance multi-threading engine that > integrates with dynamic languages. Hmm, first of all, fabric is a tool for automating admin/deployment/whatever tasks: http://pypi.python.org/pypi/Fabric/1.3.4 http://docs.fabfile.org/en/1.3.4/index.html Not sure which went first, but since you mentioned that you're "releasing soon", you may want to stop the engines for a moment and reconsider the name. Stefan -- http://mail.python.org/mailman/listinfo/python-list
Re: Fabric Engine + Python benchmarks
On Feb 10, 12:21 pm, Stefan Behnel wrote: > Fabric Paul, 10.02.2012 17:04: > > > Fabric is a high-performance multi-threading engine that > > integrates with dynamic languages. > > Hmm, first of all, fabric is a tool for automating > admin/deployment/whatever tasks: > > http://pypi.python.org/pypi/Fabric/1.3.4 > > http://docs.fabfile.org/en/1.3.4/index.html > > Not sure which went first, but since you mentioned that you're "releasing > soon", you may want to stop the engines for a moment and reconsider the name. > > Stefan Hi Stefan - Thanks for the heads up. Fabric Engine has been going for about 2 years now. Registered company etc. I'll be sure to refer to it as Fabric Engine so there's no confusion. We were unaware there was a python tool called Fabric. Thanks, Paul -- http://mail.python.org/mailman/listinfo/python-list
Re: round down to nearest number
On Feb 10, 4:58 am, Arnaud Delobelle wrote: > On 10 February 2012 06:21, Ian Kelly wrote: > > > > > > > (3219 + 99) // 100 * 100 > >> 3300 > > (3289 + 99) // 100 * 100 > >> 3300 > > (328678 + 99) // 100 * 100 > >> 328700 > > (328 + 99) // 100 * 100 > >> 400 > > >> Those are all rounded up to the nearest 100 correctly. > > > One thing to be aware of though is that while the "round down" formula > > works interchangeably for ints and floats, the "round up" formula does > > not. > > (3300.5 + 99) // 100 * 100 > > 3300.0 > > I'm surprised I haven't seen: > > >>> 212 - (212 % -100) > > 300 > > Here's a function that: > * rounds up and down > * works for both integers and floats > * is only two operations (as opposed to 3 in the solutions given above) > > >>> def round(n, k): > > ... return n - n%k > ...>>> # Round down with a positive k: > > ... round(167, 100) > 100>>> round(-233, 100 > > ... ) > -300>>> # Round up with a negative k: > > ... round(167, -100) > 200>>> round(-233, -100) > -200 > >>> # Edge cases > > ... round(500, -100) > 500>>> round(500, 100) > 500 > >>> # Floats > > ... round(100.5, -100) > 200.0>>> round(199.5, 100) > > 100.0 > > -- > Arnaud- Hide quoted text - > > - Show quoted text - Thanks! Covers all bases, good. -- http://mail.python.org/mailman/listinfo/python-list
Re: frozendict
>> Lets also not forget that knowing an object is immutable lets you do a >> lot of optimizations; it can be inlined, it is safe to convert to a >> contiguous block of memory and stuff in cache, etc. If you know the >> input to a function is guaranteed to be frozen you can just go crazy. >> Being able to freeze(anyobject) seems like a pretty clear win. >> Whether or not it is pythonic is debatable. I'd argue if the meaning >> of pythonic in some context is limiting, we should consider updating >> the term rather than being dogmatic. Sweet, looking at the reason for rejection: 1. How dicts (and multiply nested objects) should be frozen was not completely obvious 2. "frozen()" implies in place, thus confusing users 3. freezing something like a list is confusing because some list methods would disappear or cause errors 4. Because automatic conversion in the proposal was seen as too involved to be so magical 5. Because frozendicts are the main end user benefit, and using dicts as keys was seen as "suspect" Honestly, as far as #1, we already have copy and deepcopy, the can of worms is already opened (and necessarily so). For 2, choose a better name? For 3, we have abstract base classes now, which make a nice distinction between mutable and immutable sequences; nominal types are a crutch, thinking in terms of structure is much more powerful. I agree with point 4, if magic does anything besides make the code conform to what an informed user would expect, it should be considered heretical. As for #5, I feel using a collection of key value relations as a key in another collection is not inherently bad, it just depends on the context... The mutability is the rub. Also, immutability provides scaffolding to improve performance and concurrency (both of which are top tier language features). I understand that this is one of those cases where Guido has a strong "bad feeling" about something, and I think a consequence of that is people tread lightly. Perhaps I'm a bit of a language communist in that regard (historically a dangerous philosophy :) As an aside, I find it kind of schizophrenic how on one hand Python is billed as a language for consenting adults (see duck typing, no data hiding, etc) and on the other hand users need to be protected from themselves. Better to serve just one flavor of kool-aid imo. Nathan -- http://mail.python.org/mailman/listinfo/python-list
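For anyone who wants the behaviour today, a minimal frozendict can be sketched in a few lines; this is only an illustration, not a substitute for the rejected PEP:

class frozendict(dict):
    """dict with mutators disabled; hashable as long as its values are hashable."""
    def _readonly(self, *args, **kwargs):
        raise TypeError("frozendict is immutable")
    __setitem__ = __delitem__ = clear = pop = popitem = setdefault = update = _readonly

    def __hash__(self):
        return hash(frozenset(self.items()))

d = frozendict(a=1, b=2)
cache = {d: "cached value"}   # usable as a dictionary key
# d["a"] = 3                  # would raise TypeError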
Re: frozendict
On 2/10/2012 10:14 AM, Nathan Rice wrote: Lets also not forget that knowing an object is immutable lets you do a lot of optimizations; it can be inlined, it is safe to convert to a contiguous block of memory and stuff in cache, etc. If you know the input to a function is guaranteed to be frozen you can just go crazy. Being able to freeze(anyobject) seems like a pretty clear win. Whether or not it is pythonic is debatable. I'd argue if the meaning of pythonic in some context is limiting, we should consider updating the term rather than being dogmatic. A real justification for the ability to make anything immutable is to make it safely shareable between threads. If it's immutable, it doesn't have to be locked for access. Mozilla's new "Rust" language takes advantage of this. Take a look at Rust's concurrency semantics. They've made some progress. John Nagle -- http://mail.python.org/mailman/listinfo/python-list
Removing items from a list
In the past, when deleting items from a list, I looped through the list in reverse to avoid accidentally deleting items I wanted to keep. I tried something different today, and, to my surprise, was able to delete items correctly, regardless of the direction in which I looped, in both Python 3.2.2 and 2..1 - does the remove() function somehow allow the iteration to continue correctly even when items are removed from the middle of the list? >>> x = list(range(10)) >>> x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> for i in x: if i % 2 == 0: x.remove(i) >>> x [1, 3, 5, 7, 9] >>> for i in reversed(x): if i % 2 == 0: x.remove(i) >>> x [1, 3, 5, 7, 9] >>> x = list(range(10)) >>> for i in reversed(x): if i % 2 == 0: x.remove(i) >>> x [1, 3, 5, 7, 9] Sincerely Thomas Philips -- http://mail.python.org/mailman/listinfo/python-list
Re: Removing items from a list
On Fri, Feb 10, 2012 at 1:04 PM, Thomas Philips wrote: > In the past, when deleting items from a list, I looped through the > list in reverse to avoid accidentally deleting items I wanted to keep. > I tried something different today, and, to my surprise, was able to > delete items correctly, regardless of the direction in which I looped, > in both Python 3.2.2. and 2..1 - does the remove() function somehow > allow the iteration to continue correctly even when items are removed > from the midde of the list? No. Your test works because you never attempt to remove two adjacent items, so the skipping of items doesn't end up mattering. Try the same thing, but print out the values as you iterate over them: >>> x = list(range(10)) >>> x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> for i in x: ... print(i) ... if i % 2 == 0: ... x.remove(i) ... 0 2 4 6 8 >>> x [1, 3, 5, 7, 9] Had you attempted to remove any of the odd numbers as well, it would have failed. Cheers, Ian -- http://mail.python.org/mailman/listinfo/python-list
Re: Forking simplejson
Hello, I did it, and it wasn't that difficult actually. The source is available at https://github.com/amirouche/jsonir and there is an example: https://github.com/amirouche/jsonir/blob/master/example.py What makes the implementation of __json__ awkward is the iterencode support of simplejson, which I kept. I'm wondering whether it makes sense to have such a feature -- what do you think? It does not support multiple JSON representations of an object out of the box; one solution is to make the "__json__" hook a parameter of the encoder. I have not tested it. Cheers, Amirouche -- http://mail.python.org/mailman/listinfo/python-list
Re: Removing items from a list
On 10/02/2012 20:04, Thomas Philips wrote: In the past, when deleting items from a list, I looped through the list in reverse to avoid accidentally deleting items I wanted to keep. I tried something different today, and, to my surprise, was able to delete items correctly, regardless of the direction in which I looped, in both Python 3.2.2. and 2..1 - does the remove() function somehow allow the iteration to continue correctly even when items are removed from the midde of the list? x = list(range(10)) x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] for i in x: if i % 2 == 0: x.remove(i) x [1, 3, 5, 7, 9] for i in reversed(x): if i % 2 == 0: x.remove(i) x [1, 3, 5, 7, 9] x = list(range(10)) for i in reversed(x): if i % 2 == 0: x.remove(i) x [1, 3, 5, 7, 9] The answer is no. For example: >>> for i in x: print("i is", i) if i % 2 == 0: x.remove(i) i is 0 i is 1 i is 2 i is 4 >>> x [0, 1, 3, 5] -- http://mail.python.org/mailman/listinfo/python-list
Re: Removing items from a list
Thanks for the insight. I saw the behaviour as soon as I extended x with a bunch of 0's >>> x = list(range(10)) >>> x.extend([0]*10) >>> x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] >>> for i in reversed(x): if i % 2 == 0: x.remove(i) >>> x [1, 3, 5, 7, 9] >>> x = list(range(10)) >>> x.extend([0]*10) >>> x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] >>> for i in x: if i % 2 == 0: x.remove(i) >>> x [1, 3, 5, 7, 9, 0, 0, 0, 0, 0] -- http://mail.python.org/mailman/listinfo/python-list
Re: changing sys.path
On 02/10/2012 04:00 PM, Peter Otten wrote: Sorry, you didn't mention that in the post I responded to and I didn't follow the thread closely. I found a description for declare_namespace() at http://peak.telecommunity.com/DevCenter/PkgResources but the text explaining the function is completely unintelligible to me, so I cannot contribute anything helpful here :( Well in the end I submitted a bug report http://bugs.python.org/issue13991 I'm not sure it's really a bug and maybe I'm just doing something wrong, but to me the behavior is at least unexpected.. -- http://mail.python.org/mailman/listinfo/python-list
Re: round down to nearest number
On Thu, 9 Feb 2012 17:43:58 -0800 Chris Rebert wrote: > On Thu, Feb 9, 2012 at 5:23 PM, noydb wrote: > > hmmm, okay. > > > > So how would you round UP always? Say the number is 3219, so you > > want 3300 returned. > > http://stackoverflow.com/questions/17944/how-to-round-up-the-result-of-integer-division/96921 > > Thus: (3219 + 99) // 100 > > Slight tangent: Beware negative numbers when using // or %. This trick always works (even if the input is a float): -(-a//100)*100 >>> -(-3219//100)*100 3300 >>> -(-3200.1//100)*100 3300.0 -- http://mail.python.org/mailman/listinfo/python-list
How can I catch misnamed variables?
Recently I have been bitten by some stupid errors in my code, and I'm wondering if there's a simple way to catch them. One error was of the form: my_object.some_function() .. when I hadn't declared an object named "my_object". The other error was similar: x = my_module.CONSTANT .. when I hadn't imported my_module. Of course both of these errors were deep inside a long-running function call, so it took a while for them to crop up. Is there an automated way to catch errors like these? I'm using the compileall module to build my program and it does catch some errors such as incorrect indentation, but not errors like the above. -- John Gordon A is for Amy, who fell down the stairs gor...@panix.com B is for Basil, assaulted by bears -- Edward Gorey, "The Gashlycrumb Tinies" -- http://mail.python.org/mailman/listinfo/python-list
Re: How can I catch misnamed variables?
On 10 February 2012 21:06, John Gordon wrote: > Recently I was been bitten by some stupid errors in my code, and I'm > wondering if there's a simple way to catch them. > > One error was of the form: > > my_object.some_function() > > .. when I hadn't declared an object named "my_object". > > The other error was similar: > > x = my_module.CONSTANT > > .. when I hadn't imported my_module. > > Of course both of these errors were deep inside a long-running function > call, so it took a while for them to crop up. > > Is there an automated way to catch errors like these? I'm using the > compileall module to build my program and it does catch some errors > such as incorrect indentation, but not errors like the above. There's pychecker and pylint -- Arnaud -- http://mail.python.org/mailman/listinfo/python-list
Re: How can I catch misnamed variables?
John Gordon wrote: > Recently I was been bitten by some stupid errors in my code, and I'm > wondering if there's a simple way to catch them. > Pyflakes is another static checker that can catch these sorts of errors. Cheers, Kev -- http://mail.python.org/mailman/listinfo/python-list
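As a quick illustration, here is a file containing both mistakes from the original post in miniature; pyflakes (and pylint) should report the two undefined names without ever executing the function:

# buggy.py -- hypothetical example, never run, only checked statically
def run():
    my_object.some_function()    # 'my_object' is never defined anywhere
    return my_module.CONSTANT    # 'my_module' is never imported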
datetime module and timezone
The datetime module has support for a notion of timezone, but is it possible to use one of the timezones available on the system (I am on Linux)? Linux has its own notion of timezones (in my distribution they are stored in /usr/share/zoneinfo). I would like 1) to know the current timezone and 2) to be able to use the timezones available on the system. How can I do that? Olive -- http://mail.python.org/mailman/listinfo/python-list
Re: How can I catch misnamed variables?
John Gordon writes: > Is there an automated way to catch errors like these? Use a static code checker, such as ‘pyflakes’ (simple but limited) or ‘pylint’ (complex but highly configurable) to catch these and many other problems in Python code. -- \ “It's a terrible paradox that most charities are driven by | `\ religious belief.… if you think altruism without Jesus is not | _o__) altruism, then you're a dick.” —Tim Minchin, 2010-11-28 | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Re: datetime module and timezone
In <20120210222545.4cbe6...@bigfoot.com> Olive writes: > In the datetime module, it has support for a notion of timezone but is > it possible to use one of the available timezone (I am on Linux). Linux > has a notion of timezone (in my distribution, they are stored > in /usr/share/zoneinfo). I would like to be able 1) to know the current > timezone and 2) to be able to use the timezone available on the system. > How can I do that? I believe the current user's timezone is stored in the TZ environment variable. I don't understand your second question. Are you asking for a list of of all the possible timezone choices? -- John Gordon A is for Amy, who fell down the stairs gor...@panix.com B is for Basil, assaulted by bears -- Edward Gorey, "The Gashlycrumb Tinies" -- http://mail.python.org/mailman/listinfo/python-list
Re: Removing items from a list
On Sat, Feb 11, 2012 at 7:04 AM, Thomas Philips wrote: > [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] for i in x: > if i % 2 == 0: > x.remove(i) Just a quickie, is there a reason you can't use a list comprehension? x = [i for i in x if i % 2] ChrisA -- http://mail.python.org/mailman/listinfo/python-list
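If other references to the same list object need to see the change, the slice-assignment variant of that comprehension does it in place:

x = list(range(10))
x[:] = [i for i in x if i % 2]   # replaces the contents in place, keeping the same list object
print(x)                         # [1, 3, 5, 7, 9]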
Re: datetime module and timezone
On Fri, Feb 10, 2012 at 1:25 PM, Olive wrote: > In the datetime module, it has support for a notion of timezone but is > it possible to use one of the available timezone (I am on Linux). Linux > has a notion of timezone (in my distribution, they are stored > in /usr/share/zoneinfo). I would like to be able 1) to know the current > timezone time.tzname gives the zone names (plural due to DST); time.timezone and time.altzone gives their UTC offsets. > and 2) to be able to use the timezone available on the system. You can use the name to look it up in pytz (http://pypi.python.org/pypi/pytz/ ). And python-dateutil (http://labix.org/python-dateutil ) can apparently parse zoneinfo files, if that's what you mean. Cheers, Chris -- http://mail.python.org/mailman/listinfo/python-list
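Putting both answers together, a small sketch using pytz and the standard library; 'Europe/Paris' stands in for whichever zoneinfo name applies:

import time
from datetime import datetime
import pytz                              # third-party: http://pypi.python.org/pypi/pytz/

print(time.tzname)                       # names of the system zone, e.g. ('CET', 'CEST')

paris = pytz.timezone('Europe/Paris')    # any name from the zoneinfo database
now = datetime.now(pytz.utc)
print(now.astimezone(paris))             # the same instant expressed in that zone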
OT (waaaayyyyyyyyy off-topic) [was Re: How can I catch misnamed variables?]
Ben Finney wrote (from signature): > “It's a terrible paradox that most charities are driven by religious > belief. . . . if you think altruism without Jesus is not altruism, > then you're a dick.” —Tim Minchin, 2010-11-28 1) Why is it paradoxical? If anything it's a sad commentary on those who don't ascribe to a religion, as it would appear that they care less for their society. 2) altruism: unselfish regard for or devotion to the welfare of others... no mention of religion of any kind, or Jesus in particular. Altruistic-yet-paradoxically-religious-ly yours, ~Ethan~ -- http://mail.python.org/mailman/listinfo/python-list
Re: Read-only attribute in module
On Thu, 09 Feb 2012 22:27:50 -0500, Terry Reedy wrote: > On 2/9/2012 8:04 PM, Steven D'Aprano wrote: > >> Python happily violates "consenting adults" all over the place. We have >> properties, which can easily create read-only and write-once >> attributes. > > So propose that propery() work at module level, for module attributes, > as well as for class attributes. I'm not wedded to a specific implementation. Besides, it's not just a matter of saying "property should work in modules" -- that would require the entire descriptor protocol work for module lookups, and I don't know how big a can of worms that is. Constant names is a lot more constrained than computed name lookups. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
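For reference, the class-level mechanism being discussed: a property with a getter and no setter is already read-only on instances, which is exactly what does not carry over to module attribute lookup today. A minimal sketch:

    class Config(object):
        def __init__(self, flag):
            self._flag = flag

        @property
        def flag(self):
            # No setter is defined, so "c.flag = ..." raises AttributeError.
            return self._flag

    c = Config(True)
    print(c.flag)           # True
    try:
        c.flag = False
    except AttributeError:
        print("flag is read-only")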
Re: Read-only attribute in module
On 2/10/2012 6:11 AM, mloskot wrote: The intent of xyz.flag is that it is a value set by the module internally. xyz is a module wrapping a C library. The C library defines concept of a global flag set by the C functions at some events, so user can check value of this flag. I can provide access to it with function: xyz.get_flag() If the value of the flag can change during a run, I would do that. Otherwise, you have to make sure the local copy keeps in sync. Users might also think that it is a true constant that they could read once. I understand that you might be concerned that one person in a multi-programmer project might decide to rebind xyz.flag and mess up everyone else. I think the real solution might be an option to freeze an entire module. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
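A sketch of that accessor approach; the low-level module name _xyz and its query_flag() call are invented here purely for illustration:

    # xyz.py -- thin Python layer over a hypothetical C extension "_xyz"
    import _xyz

    def get_flag():
        """Return the C library's current global flag.

        Because callers go through a function, the value is re-read from the
        library on every call (so it is never stale), and there is no public
        module attribute for client code to accidentally rebind.
        """
        return _xyz.query_flag()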
Re: log and figure out what bits are slow and optimize them.
sajuptpm wrote: > Hi, > > Yes i saw profile module, > I think i have to do function call via > > cProfile.run('foo()') > > I know, we can debug this way. > > But, i need a fixed logging system and want to use it in production. > I think, we can't permanently include profile's debugging code > in source code, > will cause any performance issue ?? *Any* instrumentation code is going to affect performance. It's a trade-off that you need to analyse and manage in the context of your application. -- http://mail.python.org/mailman/listinfo/python-list
Re: OT (waaaayyyyyyyyy off-topic)
Thanks for responding. Rather than take this discussion too far where it's quite off-topic, I'll respond briefly and ask for a change of forum if we want to continue. Ethan Furman writes: > Ben Finney wrote (from signature): > > “It's a terrible paradox that most charities are driven by religious > > belief. . . . if you think altruism without Jesus is not altruism, > > then you're a dick.” —Tim Minchin, 2010-11-28 The quote is from an interview with Tim Minchin http://www.guardian.co.uk/stage/2010/nov/28/tim-minchin-comedian>. > 1) Why is it paradoxical? If anything it's a sad commentary on those > who don't ascribe to a religion, as it would appear that they care > less for their society. It's an outcome of history that religious institutions have historically been well-situated to be the facilitators of charitable work (and much other work) simply because they have been ubiquitous in most societies. The paradox is that they spend much of their resources away from the worldly, i.e. secular, work of charity. But charitable work is not dependent on religious belief, and indeed in recent decades there are now a great many wholly secular charities (e.g. International Red Cross and Oxfam) which do not divert their resources from addressing the real world. > 2) altruism: unselfish regard for or devotion to the welfare of > others... no mention of religion of any kind, or Jesus in particular. Yes, that's the point. Altruism is a human activity independent of religious belief, yet the default assumption of too many is that they are somehow necessarily connected. > Altruistic-yet-paradoxically-religious-ly yours, As you rightly point out, this discussion is off-topic here. So while I'm open to discussion on this topic, we should move it to some other forum. -- \“Most people, I think, don't even know what a rootkit is, so | `\ why should they care about it?” —Thomas Hesse, Sony BMG, 2006 | _o__) | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Re: Fabric Engine + Python benchmarks
Fabric Paul writes: > Hi Stefan - Thanks for the heads up. Fabric Engine has been going for > about 2 years now. Registered company etc. I'll be sure to refer to it > as Fabric Engine so there's no confusion. We were unaware there was a > python tool called Fabric. There will still be confusion. The Fabric configuration tool is quite well known in the python and sysadmin communities, so it will be the first thing people will think of. If you weren't already aware of it, I'd guess you're pretty far out of contact with Python's existing user population, so there may be further sources of mismatch between your product and what else is out there (I'm thinking of Stackless, PyPy, etc.) Still, your product sounds pretty cool. -- http://mail.python.org/mailman/listinfo/python-list
Postpone evaluation of argument
Hello, I want to add an item to a list, except if the evaluation of that item results in an exception. I could do that like this: def r(x): if x > 3: raise(ValueError) try: list.append(r(1)) except: pass try: list.append(r(5)) except: pass This looks rather clumsy though, and it does not work with e.g. list comprehensions. I was thinking of writing a decorator like this: def tryAppendDecorator(fn): def new(*args): try: fn(*args) except: pass return new @tryAppendDecorator def tryAppend(list, item): list.append(item) tryAppend(list, r(1)) tryAppend(list, r(5)) This does not work however because the 'item' argument gets evaluated before the decorator does its magic. Is there a way to postpone the evaluation of 'item' till it gets used inside the decorator? Like it is possible to quote a form in Lisp. Thank you, Righard -- http://mail.python.org/mailman/listinfo/python-list
Re: log and figure out what bits are slow and optimize them.
In Kev Dwyer writes: > *Any* instrumentation code is going to affect performance. Funny story about that... I wanted to profile some code of mine, and a colleague recommended the 'hotshot' module. It's pretty easy to use: there are functions to start profiling, stop profiling and print results. So I added the function calls and ran my code and it took a really long time. I mean a REALLY long time. In fact I eventually had to kill the process. I briefly wondered if my coworker was playing a prank on me... then I realized that I had neglected to call the function to stop profiling! So when I went to print the results, it was still profiling... endlessly. (Okay, maybe it wasn't that funny.) -- John Gordon A is for Amy, who fell down the stairs gor...@panix.com B is for Basil, assaulted by bears -- Edward Gorey, "The Gashlycrumb Tinies" -- http://mail.python.org/mailman/listinfo/python-list
Re: Postpone evaluation of argument
On Fri, Feb 10, 2012 at 3:01 PM, Righard van Roy wrote: > Hello, > > I want to add an item to a list, except if the evaluation of that item > results in an exception. > I could do that like this: > > def r(x): > if x > 3: > raise(ValueError) > > try: > list.append(r(1)) > except: > pass > try: > list.append(r(5)) > except: > pass > > This looks rather clumbsy though, and it does not work with i.e. list > comprehensions. > > I was thinking of writing a decorator like this: > > def tryAppendDecorator(fn): > def new(*args): > try: > fn(*args) > except: > pass > return new > > @tryAppendDecorator > def tryAppend(list, item): > list.append(item) > > tryAppend(list, r(1)) > tryAppend(list, r(5)) > > This does not work however because the 'item' argument gets evaluated > before the decorator does it's magic. > > Is there a way to postpone the evaluation of 'item' till it gets used > inside the decorator. Like it is possible to quote a form in Lisp. Nope. All arguments always get evaluated before control passes to the callee. You'd have to "quote" the arguments manually by putting them in lambdas, thus explicitly delaying their evaluation. Cheers, Chris -- http://rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
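A sketch of that manual "quoting": tryAppend takes a callable rather than a value, so r() is only evaluated inside the try block (the helper names follow the original post):

    def r(x):
        if x > 3:
            raise ValueError(x)
        return x

    def tryAppend(lst, thunk):
        # thunk is a zero-argument callable; any exception it raises while
        # being evaluated here is swallowed and nothing is appended.
        try:
            lst.append(thunk())
        except Exception:
            pass

    values = []
    tryAppend(values, lambda: r(1))   # appends 1
    tryAppend(values, lambda: r(5))   # r(5) raises, so nothing is appended
    print(values)                     # [1]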
Re: How can I catch misnamed variables?
On 10.02.2012 22:06, John Gordon wrote: > Is there an automated way to catch errors like these? I'm using the > compileall module to build my program and it does catch some errors > such as incorrect indentation, but not errors like the above. Write unit tests and use coverage to aim for 100% code and branch coverage. If you want to write high quality code and avoid problems like misnamed variables then you have to write unit tests and functional tests for your program. I'm well aware that it's hard and requires time. But in the long run it will *save* lots of time. -- http://mail.python.org/mailman/listinfo/python-list
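For illustration, a minimal sketch (the broken report() function is invented for the example): merely exercising the code path in a test makes the NameError show up when the suite runs, and running the suite under coverage (e.g. coverage run -m unittest) then reports which branches were never exercised at all.

    import unittest

    def report():
        # Deliberately broken: my_object is never defined anywhere.
        return my_object.some_function()

    class ReportTests(unittest.TestCase):
        def test_report_smoke(self):
            # Even an assertion-free "smoke test" fails with a NameError the
            # moment this line runs, instead of deep inside a production job.
            report()

    if __name__ == '__main__':
        unittest.main()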
Re: Fabric Engine + Python benchmarks
On Fri, 2012-02-10 at 14:52 -0800, Paul Rubin wrote: > Fabric Paul writes: > > Hi Stefan - Thanks for the heads up. Fabric Engine has been going for > > about 2 years now. Registered company etc. I'll be sure to refer to it > > as Fabric Engine so there's no confusion. We were unaware there was a > > python tool called Fabric. > > There will still be confusion. The Fabric configuration tool is quite > well known in the python and sysadmin communities, so it will be the > first thing people will think of. If you weren't already aware of it, > I'd guess you're pretty far out of contact with Python's existing user > population, so there may be further sources of mismatch between your > product and what else is out there (I'm thinking of Stackless, PyPy, > etc.) Still, yoour product sounds pretty cool. > Indeed. When I first saw the subject header I thought it was referring to the Python-based deployment tool. It's just going to confuse people. It's enough already that we have a bunch of stuff with "pi" and "py" in the name :| Does the OSS community *really* need another "Firebird" incident? -a -- http://mail.python.org/mailman/listinfo/python-list
Re: Postpone evaluation of argument
Righard van Roy writes: > I want to add an item to a list, except if the evaluation of that item > results in an exception. This may be overkill and probably slow, but perhaps most in the spirit that you're asking. from itertools import chain def r(x): if x > 3: raise(ValueError) return x def maybe(func): try: yield func() except: return def p(i): return maybe(lambda: r(i)) your_list = list(chain(p(1), p(5))) print your_list -- http://mail.python.org/mailman/listinfo/python-list
ANN: dbf.py 0.90.001
Still messing with .dbf files? Somebody brought you a 15 year old floppy, which still luckily (?) worked, and now wants that ancient data? dbf to the rescue! Supported tables/features = - dBase III - FoxPro - Visual FoxPro supported - Null value Supported field types = - Char - Date - Logical - Memo - Numeric - Currency (returns Decimal) - DateTime - Double - Float (same as Numeric) - General - Integer - Picture Still to come (or, Not Yet Working ;) = - Index files (although you can create temporary memory indices) - auto incrementing fields Latest version can be found on PyPI at http://pypi.python.org/pypi/dbf. Comments, bug reports, etc, appreciated! ~Ethan~ -- http://mail.python.org/mailman/listinfo/python-list
problems with shelve(), collections.defaultdict, self
The following code demonstrates that a collections.defaultdict is shelve worthy: import shelve import collections as c dd = c.defaultdict(int) dd["Joe"] = 3 print(dd) my_shelve = shelve.open('data.shelve') my_shelve['strike_record'] = dd my_shelve.close() my_shelve = shelve.open('data.shelve') data = my_shelve['strike_record'] my_shelve.close() dd.clear() dd.update(data) print(dd) --output:-- defaultdict(, {'Joe': 3}) defaultdict(, {'Joe': 3}) And the following code demonstrates that a class that inherits from dict can shelve itself: import collections as c import shelve class Dog(dict): def __init__(self): super().__init__(Joe=1) print('', self) def save(self): my_shelve = shelve.open('data22.shelve') my_shelve['x'] = self my_shelve.close() def load(self): my_shelve = shelve.open('data22.shelve') data = my_shelve['x'] my_shelve.close() print(data) d = Dog() d.save() d.load() --output:-- {'Joe': 1} {'Joe': 1} But I cannot get a class that inherits from collections.defaultdict to shelve itself: import collections as c import shelve class Dog(c.defaultdict): def __init__(self): super().__init__(int, Joe=0) print('', self) def save(self): my_shelve = shelve.open('data22.shelve') my_shelve['dd'] = self my_shelve.close() def load(self): my_shelve = shelve.open('data22.shelve') data = my_shelve['dd'] my_shelve.close() print(data) d = Dog() d.save() d.load() --output:-- defaultdict(, {'Joe': 30}) Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.2/lib/ python3.2/shelve.py", line 111, in __getitem__ value = self.cache[key] KeyError: 'dd' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "3.py", line 95, in d.load() File "3.py", line 87, in load data = my_shelve['dd'] File "/Library/Frameworks/Python.framework/Versions/3.2/lib/ python3.2/shelve.py", line 114, in __getitem__ value = Unpickler(f).load() TypeError: __init__() takes exactly 1 positional argument (2 given) I deleted all *.shelve.db files between program runs. I can't figure out what I'm doing wrong. -- http://mail.python.org/mailman/listinfo/python-list
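The TypeError comes from unpickling: defaultdict arranges to be rebuilt by calling the class with the default factory as a positional argument (here, Dog(int)), and the __init__ above takes no arguments besides self. A sketch of one way around it, under the assumption that passing the initial contents at construction time is acceptable:

    import collections as c
    import shelve

    class Dog(c.defaultdict):
        def __init__(self, default_factory=int, **kwargs):
            # Unpickling calls Dog(int), so accept the factory positionally
            # instead of hard-coding a no-argument signature.
            super().__init__(default_factory, **kwargs)

    d = Dog(Joe=30)
    db = shelve.open('data22.shelve')
    db['dd'] = d
    db.close()

    db = shelve.open('data22.shelve')
    print(db['dd'])     # the restored mapping, with 'Joe' == 30
    db.close()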
Re: problems with shelve(), collections.defaultdict, self
On Feb 10, 7:48 pm, 7stud <7s...@excite.com> wrote: > > But I cannot get a class that inherits from collections.defaultdict to > shelve itself: > > import collections as c > import shelve > > class Dog(c.defaultdict): > def __init__(self): > super().__init__(int, Joe=0) > print('', self) Whoops. I changed: super().__init__(int, Joe=0) to: super().__init__(int, Joe=30) hence this output.. > --output:-- > > defaultdict(, {'Joe': 30}) -- http://mail.python.org/mailman/listinfo/python-list
Re: problems with shelve(), collections.defaultdict, self
On Feb 10, 7:52 pm, 7stud <7s...@excite.com> wrote: I don't know if this helps, but I notice when I initially do this: shelve.open('data22') the file is saved as 'data22.db'. But on subsequent calls to shelve.open(), if I use the file name 'data22.db', I get a different error: --output:-- defaultdict(, {'Joe': 30}) Traceback (most recent call last): File "3.py", line 95, in d.load() File "3.py", line 86, in load my_shelve = shelve.open('data22.db') File "/Library/Frameworks/Python.framework/Versions/3.2/lib/ python3.2/shelve.py", line 232, in open return DbfilenameShelf(filename, flag, protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/3.2/lib/ python3.2/shelve.py", line 216, in __init__ Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/3.2/lib/ python3.2/dbm/__init__.py", line 83, in open raise error[0]("db type could not be determined") dbm.error: db type could not be determined The code that produced that error: import collections as c import shelve class Dog(c.defaultdict): def __init__(self): super().__init__(int, Joe=30) print('', self) def save(self): my_shelve = shelve.open('data22') my_shelve['dd'] = self my_shelve.close() def load(self): my_shelve = shelve.open('data22.db') data = my_shelve['dd'] my_shelve.close() print(data) d = Dog() d.save() d.load() I'm using python 3.2.2. -- http://mail.python.org/mailman/listinfo/python-list
Re: frozendict
On Saturday, February 11, 2012 at 2:57:34 AM UTC+8, John Nagle wrote: > On 2/10/2012 10:14 AM, Nathan Rice wrote: > >>> Lets also not forget that knowing an object is immutable lets you do a > >>> lot of optimizations; it can be inlined, it is safe to convert to a > >>> contiguous block of memory and stuff in cache, etc. If you know the > >>> input to a function is guaranteed to be frozen you can just go crazy. > >>> Being able to freeze(anyobject) seems like a pretty clear win. > >>> Whether or not it is pythonic is debatable. I'd argue if the meaning > >>> of pythonic in some context is limiting, we should consider updating > >>> the term rather than being dogmatic. > > A real justification for the ability to make anything immutable is > to make it safely shareable between threads. If it's immutable, it > doesn't have to be locked for access. Mozilla's new "Rust" > language takes advantage of this. Take a look at Rust's concurrency > semantics. They've made some progress. > > John Nagle Let's model the system as an asynchronous set of objects, with multiple threads performing operations on objects, as in the above. This reminds me of an old problem solved before in digital hardware. -- http://mail.python.org/mailman/listinfo/python-list
Re: datetime module and timezone
in 671891 20120210 212545 Olive wrote: >In the datetime module, it has support for a notion of timezone but is >it possible to use one of the available timezone (I am on Linux). Linux >has a notion of timezone (in my distribution, they are stored >in /usr/share/zoneinfo). I would like to be able 1) to know the current >timezone and 2) to be able to use the timezone available on the system. >How can I do that? For 1) just type "date" on the command line. -- http://mail.python.org/mailman/listinfo/python-list
ldap proxy user bind
I have developed an LDAP auth system using the python-ldap module. Using that I can validate username and password, and fetch user and group info from the LDAP directory. Now I want to implement an LDAP proxy user bind to the LDAP server. I googled and found this http://ldapwiki.willeke.com/wiki/LDAPProxyUser but I don't have any idea how to implement it using python-ldap. My existing LDAP settings at the client side: ldap_enabled = True ldap_host = your_ldap_server ldap_port = 389 ldap_basedn = o=My_omain ldap_user_key = cn ldap_group_key = groupMembership ldap_email_key = mail ldap_user_search = ou=Users ldap_group_search = ou=Groups ldap_group_objectclass = groupOfNames I want to add the following 2 new flags: ldap_proxy_user = ldap_proxy ldap_proxy_pwd = secret I don't know how this LDAP proxy system would work. Could you please point me to a Python article/example? -- http://mail.python.org/mailman/listinfo/python-list
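For illustration, a minimal python-ldap sketch of one common reading of a "proxy user" setup: the application binds with the proxy account's DN and password, uses it to look up the user's entry, then re-binds as that user to verify the password. The URI, DNs and filter below are placeholders built from the settings above, not a tested configuration, and a real filter should be escaped (e.g. with ldap.filter.escape_filter_chars):

    import ldap

    LDAP_URI  = 'ldap://your_ldap_server:389'
    BASE_DN   = 'o=My_omain'                    # base DN as given in the settings above
    PROXY_DN  = 'cn=ldap_proxy,' + BASE_DN      # the proxy (service) account
    PROXY_PWD = 'secret'

    def authenticate(username, password):
        # 1. Bind as the proxy user, which only needs search rights.
        conn = ldap.initialize(LDAP_URI)
        try:
            conn.simple_bind_s(PROXY_DN, PROXY_PWD)
            # 2. Locate the user's entry under the Users subtree.
            results = conn.search_s('ou=Users,' + BASE_DN,
                                    ldap.SCOPE_SUBTREE,
                                    '(cn=%s)' % username,
                                    ['cn', 'mail', 'groupMembership'])
        finally:
            conn.unbind_s()
        if not results:
            return False
        user_dn = results[0][0]

        # 3. Verify the password by binding as the user themselves.
        check = ldap.initialize(LDAP_URI)
        try:
            check.simple_bind_s(user_dn, password)
            return True
        except ldap.INVALID_CREDENTIALS:
            return False
        finally:
            check.unbind_s()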
Re: log and figure out what bits are slow and optimize them.
I decided to create a decorator like. import cProfile def debug_time(method): def timed(*args, **kw): prof = cProfile.Profile() prof.enable(subcalls=False, builtins=False) result = prof.runcall(method, *args, **kw) #prof.print_stats() msg = "\n\n\n\n###" msg += "\n\nURL : %s" %(tg.request.url) msg += "\nMethod: %r" %(method.__name__) print "--ddd", type(prof.getstats()) msg += "\n\nStatus : %s" %(prof.print_stats()) msg += "\n\n###" print msg LOGGER.info(msg) return result return timed Ref : http://stackoverflow.com/questions/5375624/a-decorator-that-profiles-a-method-call-and-logs-the-profiling-result I want to log it in existing log file in my project, so i tried prof.print_stats() and prof.getstats(). prof.getstats() will need extra loop for fetch data. prof.print_stats() will log library calls also. Please suggest a better way to log profiler output. -- http://mail.python.org/mailman/listinfo/python-list
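One possible approach (a sketch, in the same Python 2 style as the code above): give pstats.Stats an in-memory stream, so the report ends up in a string that can be handed to an existing logger instead of going to stdout. Note that print_stats() itself returns None, which is why interpolating it with %s only logs "None".

    import cProfile
    import pstats
    import StringIO
    import logging

    LOGGER = logging.getLogger(__name__)

    def debug_time(method):
        def timed(*args, **kw):
            prof = cProfile.Profile()
            result = prof.runcall(method, *args, **kw)

            # Build the report through pstats with an in-memory stream.
            buf = StringIO.StringIO()
            stats = pstats.Stats(prof, stream=buf)
            stats.sort_stats('cumulative').print_stats(20)   # top 20 entries

            LOGGER.info("Profile of %r:\n%s", method.__name__, buf.getvalue())
            return result
        return timed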