py.test-1.2.0: junitxml, standalone test scripts, pluginization

2010-01-19 Thread holger krekel
Hi all, 

i just released some bits related to automated testing with Python: 

  py-1.2.0: py.test core which grew junitxml, standalone-script generation 
  pytest-xdist-1.0: separately installable dist-testing & looponfailing plugin
  pytest-figleaf-1.0: separately installable figleaf-coverage testing plugin

See below or at this URL for the announcement:

http://pylib.org/announce/release-1.2.0.html

If you didn't experience much speed-up or previously had problems with 
distributed testing, i recommend you install pytest-xdist now and see 
if it works better. For me it speeds up some test runs by 500% on a 4-CPU
machine thanks to its better internal model and several fixes.  (It's five
times because several tests depend on IO and don't block the CPU meanwhile.)

Another tip: if you use pip (best with a virtualenv) you can do e.g.:

pip install pytest-xdist 
pip uninstall pytest-xdist 

to conveniently activate/deactivate plugins for py.test. easy_install 
works ok as well but has no uninstall; it remains the only option 
for installing with Python3 at the moment, though.  You need to use
the fine 'distribute' project's easy_install for the latter.

cheers & have fun,
holger


py.test/pylib 1.2.0: junitxml, standalone test scripts, pluginization


py.test is an advanced automated testing tool working with
Python2, Python3 and Jython versions on all major operating
systems.  It has a simple plugin architecture and can run many 
existing common Python test suites without modification.  It offers 
some unique features not found in other testing tools.  
See http://pytest.org for more info.

py.test 1.2.0 brings many bug fixes and interesting new abilities:

* --junitxml=path will create an XML file for use with CI processing 
* --genscript=path creates a standalone py.test-equivalent test-script 
* --ignore=path prevents collection of anything below that path
* --confcutdir=path only look up conftest.py test configs below that path
* a 'pytest_report_header' hook to add info to the terminal report header 
* a 'pytestconfig' function argument gives direct access to option values
* 'pytest_generate_tests' can now be put into a class as well 
* on CPython py.test additionally installs as py.test-VERSION, on
  Jython as py.test-jython and on PyPy as py.test-pypy-XYZ

Apart from many bug fixes 1.2.0 also has better pluginization: 
Distributed testing and looponfailing testing now live in the
separately installable 'pytest-xdist' plugin.  The same is true for
'pytest-figleaf' for doing coverage reporting.  Those two plugins
can now serve as blueprints for writing your own.  

thanks to all who helped and gave feedback,
have fun,

holger krekel, January 2010

Changes between 1.2.0 and 1.1.1
===============================

- moved dist/looponfailing from py.test core into a new 
  separately released pytest-xdist plugin.

- new junitxml plugin: --junitxml=path will generate a JUnit-style XML file
  which can be processed e.g. by the Hudson CI system. 

- new option: --genscript=path will generate a standalone py.test script
  which will not need any libraries installed.  thanks to Ralf Schmitt. 

- new option: --ignore will prevent specified path from collection. 
  Can be specified multiple times. 

- new option: --confcutdir=dir will make py.test only consider conftest 
  files that are relative to the specified dir.  

- new funcarg: pytestconfig is the pytest config object for access
  to command line args and can now be easily used in a test. 

- install 'py.test' and `py.which` with a ``-$VERSION`` suffix to
  disambiguate between Python3, python2.X, Jython and PyPy installed versions. 

- new pytest_report_header hook can return additional lines 
  to be displayed at the header of a test run. 

- (experimental) allow py.test path::name1::name2::... for pointing
  to a test within a test collection directly.  This might eventually
  evolve into a full substitute for -k specifications. 

- streamlined plugin loading: order is now as documented in
  customize.html: setuptools, ENV, commandline, conftest. 
  also setuptools entry point names are turned into canonical names (pytest_*)

- automatically skip tests that need 'capfd' but have no os.dup 

- allow pytest_generate_tests to be defined in classes as well 

- deprecate usage of 'disabled' attribute in favour of pytestmark 

- deprecate definition of Directory, Module, Class and Function nodes
  in conftest.py files.  Use pytest collect hooks instead.

- collection/item node specific runtest/collect hooks are only called exactly
  on matching conftest.py files, i.e. ones which are exactly below
  the filesystem path of an item

- change: the first pytest_collect_directory hook to return something
  will now prevent further hooks from being called.

- change: figleaf plugin now requires --figleaf to run.  

EPD 6.0 and IPython Webinar Friday

2010-01-19 Thread Amenity Applewhite



Happy 2010! To start the year off, we've released a new version of EPD  
and lined up a solid set of training options.


Scientific Computing with Python Webinar
This Friday, Travis Oliphant will provide an introduction to  
multiprocessing and IPython.kernel.


Scientific Computing with Python Webinar
Multiprocessing and IPython.kernel
Friday, January 22: 1pm CST/7pm UTC
Register


Enthought Live Training
Enthought's intensive training courses are offered in 3-5 day  
sessions. The Python skills you'll acquire will save you and your  
organization time and money in 2010.



Enthought Open Course
February 22-26, Austin, TX
 • Python for Scientists and Engineers
 • Interfacing with C / C++ and Fortran
 • Introduction to UIs and Visualization

Enjoy!
The Enthought Team
EPD 6.0 Released
Now available in our repository, EPD 6.0 includes Python 2.6,  
PiCloud's cloud library, and NumPy 1.4... not to mention 64-bit  
support for Windows, OS X, and Linux.

Details. Download now.

New: Enthought channel on YouTube
Short instructional videos straight from the desktops of our  
developers. Get started with a 4-part series on interpolation with  
SciPy.



Our mailing address is:
Enthought, Inc. 515 Congress Ave. Austin, TX 78701
Copyright (C) 2009 Enthought, Inc. All rights reserved.




--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/



[ANN] Data Plotting Library DISLIN 10.0

2010-01-19 Thread Helmut Michels

Dear Python users,

I am pleased to announce version 10.0 of the data plotting software
DISLIN.

DISLIN is a high-level and easy to use plotting library for
displaying data as curves, bar graphs, pie charts, 3D-colour plots,
surfaces, contours and maps. Several output formats are supported
such as X11, VGA, PostScript, PDF, CGM, WMF, HPGL, TIFF, GIF, PNG,
BMP and SVG.

The software is available for most C, Fortran 77 and Fortran 90/95
compilers. Plotting extensions for the interpreting languages Perl,
Python and Java are also supported.

DISLIN distributions and manuals in PDF, PostScript and HTML format
are available from the DISLIN home page

 http://www.dislin.de

and via FTP from the server

 ftp://ftp.gwdg.de/pub/grafik/dislin

All DISLIN distributions are free for non-commercial use. Licenses
for commercial use are available from the site http://www.dislin.de.

 ---
  Helmut Michels
  Max Planck Institute for
  Solar System Research   Phone: +49 5556 979-334
  Max-Planck-Str. 2   Fax  : +49 5556 979-240
  D-37191 Katlenburg-Lindau   Mail : mich...@mps.mpg.de


Re: Generic Python Benchmark suite?

2010-01-19 Thread M.-A. Lemburg
Anand Vaidya wrote:
 Is there a generic python benchmark suite in active development? I am
 looking forward to comparing some code on various python
 implementations (primarily CPython 2.x, CPython 3.x, UnladenSwallow,
 Psyco).
 
 I am happy with something that gives me a relative number eg: ULS is
 30% faster than CPy2.x etc
 
 I found pybench which is probably not maintained actively.

Oh, it is. In fact, I'm preparing a new version for Python 2.7.

 What do you suggest?
 
 PS: I think a benchmark should cover file / network / database I/O,
 data structures (dict, list etc), object creation/manipulation,
 numbers, measure looping inefficiencies, effects of caching (memcache
 etc) at the minimum

pybench addresses many of the low-level aspects you're asking for.
It doesn't have any I/O tests, since these usually don't have much
to do with Python's performance, but rather with that of the underlying
OS and hardware.
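
pybench aside, when all you need is a relative number between two idioms (or
two interpreters), the stdlib timeit module already provides one. A minimal
sketch; the workloads below are made up purely for illustration:

```python
# Time two equivalent idioms; the absolute times are machine-dependent,
# so only the ratio between them is meaningful.
import timeit

t_listcomp = timeit.timeit("[i * i for i in range(100)]", number=10000)
t_map = timeit.timeit("list(map(lambda i: i * i, range(100)))", number=10000)

print("listcomp: %.3fs  map+lambda: %.3fs  ratio: %.2f"
      % (t_listcomp, t_map, t_map / t_listcomp))
```

Running the same script under each interpreter then gives you the kind of
"X is 30% faster" figure you're after.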

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jan 19 2010)
 Python/Zope Consulting and Support ...http://www.egenix.com/
 mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
 mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: basic Class in Python

2010-01-19 Thread Richard Brodie

bartc ba...@freeuk.com wrote in message 
news:xl_4n.28001$ym4.5...@text.news.virginmedia.com...

 Any particular reason why two, and not one (or three)? In some fonts it's 
 difficult to 
 tell how many as they run together.

It follows the C convention for reserved identifiers. 




Re: syntax

2010-01-19 Thread Gib Bogle

Looks like homework.


Re: searching and storing large quantities of xml!

2010-01-19 Thread Stefan Behnel
dads, 18.01.2010 22:39:
 There was one thing that I forgot about - when ElementTree fails to
 parse due to an element not being closed why doesn't it close the file
 like object.

Because it didn't open it?

Stefan


thread return code

2010-01-19 Thread Rajat
Hi,

I'm using threading module in Python 2.6.4. I'm using thread's join()
method.

On the new thread I'm running a function which returns a code at the
end. Is there a way I can access that code in the parent thread after the
thread finishes? Simply put, could join() get me that code?


Regards,
Rajat


Re: python gui ide under linux..like visual studio ;) ?

2010-01-19 Thread ted
Il 18/01/2010 21:59, Mike Driscoll ha scritto:
 On Jan 18, 8:32 am, ted t...@sjksdjk.it wrote:
 Hi at all...
 Can someone please give me some advice, about a good IDE with control
 GUI under Linux ?

 Actually i know QT Creator by Nokia which i can use with Python (but i
 don't know how).

 And, a good library for access to database (mysql, sql server, oracle) ?

 Thank you very much !

 bye
 
 Check out Dabo for the database stuff: http://dabodev.com/
 
 There is nothing like Visual Studio. The closest I've found are things
 like the QT Creator and wxGlade / Boa Constructor / wxFormBuilder. I
 know I've heard of another one that was commercial, but I can't recall
 the name at the moment.
 
 ---
 Mike Driscoll
 
 Blog:   http://blog.pythonlibrary.org
 
 PyCon 2010 Atlanta Feb 19-21  http://us.pycon.org/

Thank you very much.

Bye!



Re: thread return code

2010-01-19 Thread Alf P. Steinbach

* Rajat:

Hi,

I'm using threading module in Python 2.6.4. I'm using thread's join()
method.

On the new thread I'm running a function which returns a code at the
end. Is there a way I can access that code in the parent thread after the
thread finishes? Simply put, could join() get me that code?


join() always returns None.

But you can store the code in the thread object, and then access it after the 
join().
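
A minimal sketch of that idea, using only the stdlib threading module (the
class and attribute names here are made up):

```python
import threading

class ReturnCodeThread(threading.Thread):
    """Run a target function and keep its return value on the thread object."""
    def __init__(self, target, *args, **kwargs):
        threading.Thread.__init__(self)
        self._target_func = target
        self._args = args
        self._kwargs = kwargs
        self.result = None  # filled in by run()

    def run(self):
        # Executes in the child thread; stash the return code on self.
        self.result = self._target_func(*self._args, **self._kwargs)

def work(x):
    return x * 2

t = ReturnCodeThread(work, 21)
t.start()
t.join()          # join() itself still returns None
print(t.result)   # the stored return code
```

Since run() executes in the child thread, reading t.result is only safe after
join() has confirmed the thread finished.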



Cheers & hth.,

- Alf


what test runner should I use?

2010-01-19 Thread Chris Withers

Hi All,

I'm wondering what test runner I should use. Here's my list of requirements:

- cross platform (I develop for and on Windows, Linux and Mac)

- should not prevent tests from running with other test runners
  (so no plugins/layers/etc that only work with one specific test
   runner)

- should work with zc.buildout (preferably without a specialist recipe!)

So far I've tried the following with the resultant problems:

zope.testing

 - requires a special recipe to be useful
 - now emits deprecation warnings from itself:
   https://mail.zope.org/pipermail/zope-dev/2009-December/038965.html
 - coverage support is baroque to put it politely

twisted's trial

 - only has old-style script definition in setup.py, so doesn't work
   with buildout without hackery

 - drops _twisted_trial folders all over the place and doesn't clear
   them up

nose

 - can't seem to get it to run only my package's tests, rather than
   including the tests of packages my package depends on

 - seems to be focused towards files rather than modules
   (which makes it not play nicely with buildout)

 - seems to be difficult to provide options to at configuration time
   that can then be overridden on the command line

I did also look at py.test's homepage but found it pretty scary.

What other options do people recommend?
Failing that, any ideas how to fix the problems above?

cheers,

Chris



Re: Updating an OptionMenu every time the text file it reads from is updated (Tkinter)

2010-01-19 Thread Alf P. Steinbach

* Dr. Benjamin David Clarke:

I currently have a program that reads in values for an OptionMenu from
a text file. I also have an option to add a line to that text file
which corresponds to a new value for that OptionMenu. How can I make
that OptionMenu update its values based on that text file without
restarting the program? In other words, every time I add a value to
the text file, I want the OptionMenu to immediately update to take
note of this change. I'll provide code if needed.



It's a bit unclear to me whether the text file is updated from within the 
same process as the menu, or not.


If the problem is connecting file updates to menu changes:

  If it's the same process, perhaps you can wrap all access of the text file so 
that whenever an operation to change the text file is performed, it calls back 
on interested parties (like event handling)?


  If it's not the same process you need some file changed notification. 
Windows has this functionality. I don't know if it's there in *nix. Anyway, 
since I'm still essentially a Python newbie I don't know of any Python modules 
for such functionality, if that's what you need. But perhaps someone else does...


Otherwise, if the problem is actually updating the menu, and it is a tkinter 
OptionMenu:


It appears that an OptionMenu has a logical attribute 'menu' that is a tkinter 
Menu, representing the options; if your OptionMenu is an object 'o' then you can 
write o['menu'] or o.cget('menu') to get at that logical attribute. And 
tkinter Menu has various methods that you can use to inspect or update the menu; 
see url: http://effbot.org/tkinterbook/menu.htm. Disclaimer: I haven't tried 
updating, but I see no reason why it shouldn't work. :-)



Cheers & hth.,

- Alf



Is HTML report of tests run using PyUnit (unittest) possible?

2010-01-19 Thread fossist
I am using PyUnit (unittest module) for loading test cases from our
modules and executing them. Is it possible to export the test results
as HTML report? Currently the results appear as text on standard
output while the tests execute. But is there something out of the box
available in PyUnit to make this possible?
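
Stock unittest has no HTML output, but a small wrapper around the text runner
gets most of the way there. A hedged sketch (the report layout is invented,
and dedicated third-party runners produce richer reports):

```python
import unittest
from io import StringIO

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertEqual(1 + 1, 2)

# Run the suite into a string buffer instead of stderr.
buf = StringIO()
suite = unittest.TestLoader().loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(stream=buf, verbosity=2).run(suite)

# Wrap the captured text report in minimal HTML.
html = "<html><body><pre>%s</pre></body></html>" % buf.getvalue()
with open("report.html", "w") as f:
    f.write(html)
```

(On Python 2.6, use StringIO.StringIO rather than io.StringIO.)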


Re: Inheriting methods but over-riding docstrings

2010-01-19 Thread Michele Simionato
On Jan 16, 6:55 pm, Steven D'Aprano st...@remove-this-
cybersource.com.au wrote:
 I have a series of subclasses that inherit methods from a base class, but
 I'd like them to have their own individual docstrings.

The following is not tested more than you see and will not work for
builtin methods, but it should work in the common cases:

from types import FunctionType, CodeType

def newfunc(func, docstring):
    c = func.func_code
    nc = CodeType(c.co_argcount, c.co_nlocals, c.co_stacksize,
                  c.co_flags, c.co_code, c.co_consts, c.co_names,
                  c.co_varnames, c.co_filename, func.__name__,
                  c.co_firstlineno, c.co_lnotab, c.co_freevars,
                  c.co_cellvars)
    nf = FunctionType(nc, func.func_globals, func.__name__)
    nf.__doc__ = docstring
    return nf

def setdocstring(method, docstring):
    cls = method.im_class
    basefunc = getattr(super(cls, cls), method.__name__).im_func
    setattr(cls, method.__name__, newfunc(basefunc, docstring))


# example of use

class B(object):
    def m(self):
        "base"
        return 'ok'

class C(B):
    pass

setdocstring(C.m, 'C.m docstring')

print B.m.__doc__ # the base docstring
print C.m.__doc__ # the new docstring
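
For what it's worth, on Python 3 the same trick is shorter because the base
method's code object can be reused wholesale. A sketch in the same spirit (the
helper name newfunc3 is made up; the caveat about builtin methods still
applies):

```python
import types

def newfunc3(func, docstring):
    # Copy the function, reusing its code object, then give the
    # copy its own docstring.
    nf = types.FunctionType(func.__code__, func.__globals__,
                            func.__name__, func.__defaults__,
                            func.__closure__)
    nf.__doc__ = docstring
    return nf

class B:
    def m(self):
        "base"
        return 'ok'

class C(B):
    m = newfunc3(B.m, 'C.m docstring')

print(B.m.__doc__)  # the base docstring
print(C.m.__doc__)  # the new docstring
```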


Re: Generic Python Benchmark suite?

2010-01-19 Thread Antoine Pitrou
Le Mon, 18 Jan 2010 21:05:26 -0800, Anand Vaidya a écrit :
 @Antoine, Terry,
 
 Thanks for the suggestions.
 
 I will investigate those. I just ran the pybench, doesn't run on 3.x,
 2to3 fails.

You just have to use the pybench version that is bundled with 3.x (in the 
Tools directory).




Re: Updating an OptionMenu every time the text file it reads from is updated (Tkinter)

2010-01-19 Thread Peter Otten
Dr. Benjamin David Clarke wrote:

 I currently have a program that reads in values for an OptionMenu from
 a text file. I also have an option to add a line to that text file
 which corresponds to a new value for that OptionMenu. How can I make
 that OptionMenu update its values based on that text file without
 restarting the program? In other words, every time I add a value to
 the text file, I want the OptionMenu to immediately update to take
 note of this change. I'll provide code if needed.

Inferred from looking into the Tkinter source code:

# python 2.6
import Tkinter as tk

root = tk.Tk()

var = tk.StringVar()
var.set("One")

optionmenu = tk.OptionMenu(root, var, "One", "Two", "Three")
optionmenu.grid(row=0, column=1)

def add_option():
    value = entry_add.get()
    menu = optionmenu["menu"]
    variable = var
    command = None  # what you passed as command argument to OptionMenu
    menu.add_command(label=value,
                     command=tk._setit(variable, value, command))

label_show = tk.Label(root, text="current value")
label_show.grid(row=1, column=0)
entry_show = tk.Entry(root, textvariable=var)
entry_show.grid(row=1, column=1)

label_add = tk.Label(root, text="new option")
label_add.grid(row=2, column=0)
entry_add = tk.Entry(root)
entry_add.grid(row=2, column=1)

button_add = tk.Button(root, text="add option",
                       command=add_option)
button_add.grid(row=2, column=2)

root.mainloop()

Peter


Re: [ANN] Data Plotting Library DISLIN 10.0

2010-01-19 Thread superpollo

Helmut Michels ha scritto:

Dear Pytnon users,

I am pleased to announce version 10.0 of the data plotting software
DISLIN.


why don't you make it free software (i mean, GPL'ed)?

bye


Re: python replace/sub/wildcard/regex issue

2010-01-19 Thread dippim
On Jan 18, 11:04 pm, tom badoug...@gmail.com wrote:
 hi...

 trying to figure out how to solve what should be an easy python/regex/
 wildcard/replace issue.

 i've tried a number of different approaches.. so i must be missing
 something...

 my initial sample text are:

 Soo Choi</span><LONGEDITBOX>Apryl Berney
 Soo Choi</span><LONGEDITBOX>Joel Franks
 Joel Franks</span><GEDITBOX>Alexander Yamato

 and i'm trying to get

 Soo Choi foo Apryl Berney
 Soo Choi foo Joel Franks
 Joel Franks foo Alexander Yamato

 the issue i'm facing.. is how to start at '</' and end at '>' and
 substitute inclusive of the stuff inside the regex...

 i've tried derivations of

 name=re.sub("</s[^>]*\>", " foo ", name)

 but i'm missing something...

 thoughts... thanks

 tom

The problem here is that </s matches itself correctly.  However, [^>]*
consumes anything that's not > and then stops when it hits something
that is >.  So, [^>]* consumes pan in each case, then matches \>, and
the match ends there.  It never makes it to the second >.

I agree with Chris Rebert, regexes are dangerous because the number of
possible cases where you can match isn't always clear (see the above
explanation :).  Also, if the number of comparisons you have to do
isn't high, they can be inefficient.  However, for your limited set of
examples the following should work:

import re

aList = ['Soo Choi</span><LONGEDITBOX>Apryl Berney',
         'Soo Choi</span><LONGEDITBOX>Joel Franks',
         'Joel Franks</span><GEDITBOX>Alexander Yamato']

matcher = re.compile(r"<[\w\W]*>")

newList = []
for x in aList:
    newList.append(matcher.sub(" foo ", x))

print newList

David


Re: using super

2010-01-19 Thread Jean-Michel Pichavant

Gabriel Genellina wrote:

I see.
Then is there a reason why
  return super(Subclass, self).parrot()
would be prefered over the classic
  return Base.parrot(self)
?
Or is it just a matter of preference ?


For a longer explanation, see:

James Knight: Python's Super Considered Harmful
http://fuhm.net/super-harmful/

Michele Simionato: Things to Know About Python Super
http://www.artima.com/weblogs/viewpost.jsp?thread=236275
http://www.artima.com/weblogs/viewpost.jsp?thread=236278
http://www.artima.com/weblogs/viewpost.jsp?thread=237121



Thanks to all who replied to this thread.
I didn't remember why I didn't want to dive into super in the first 
place, now I remember :o)


I'm sure of one thing about super: it has a misleading name.

JM


Re: Py 3: Terminal script can't find relative path

2010-01-19 Thread Gnarlodious
On Jan 18, 4:21 pm, John Bokma j...@castleamber.com wrote:
 Gnarlodious gnarlodi...@gmail.com writes:
  I am running a script in a browser that finds the file in subfolder
  Data:

  Content=Plist('Data/Content.plist')

  However, running the same script in Terminal errors:

  IOError: [Errno 2] No such file or directory: 'Data/Content.plist'

 What does:

 ls -l Data/Content.plist

 in the terminal give?

I can replace it with absolute paths and it works as expected. Could this
be a Python 3 bug? Why would the script find the relative path when run
as a CGI script but not in Terminal?

-- Gnarlie


Performance of lists vs. list comprehensions

2010-01-19 Thread Gerald Britton
Yesterday I stumbled across some old code in a project I was working
on.  It does something like this:

mystring = '\n'.join( [ line for line in lines if some conditions
depending on line ] )

where lines is a simple list of strings.  I realized that the code
had been written before you could put a list comprehension as an
argument to join().  I figured that I could drop the '[' and ']' and
leave the rest as a list comprehension since that works with current
Python (and the project's base is 2.5).  So I rewrote the original
statement like this:

mystring = '\n'.join( line for line in lines if some conditions
depending on line )

It works as expected.  Then I got curious as to how it performs.  I
was surprised to learn that the rewritten expression runs more than
twice as _slow_.  e.g.:

>>> l
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

>>> Timer("' '.join([x for x in l])", 'l = map(str,range(10))').timeit()
2.9967339038848877

>>> Timer("' '.join(x for x in l)", 'l = map(str,range(10))').timeit()
7.2045478820800781

Notice that I dropped the condition testing that was in my original
code.  I just wanted to see the effect of two different expressions.

I thought that maybe there was some lower bound on the number of the
items in the list or list comprehension beyond which the comprehension
would prove more efficient.  There doesn't appear to be one.  I scaled
the length of the input list up to 1 million items and got more or
less the same relative performance.

Now I'm really curious and I'd like to know:

1. Can anyone else confirm this observation?

2. Why should the pure list comprehension be slower than the same
comprehension enclosed in '[...]' ?

-- 
Gerald Britton


python gui ide under linux..like visual studio ;) ?

2010-01-19 Thread ryniek90



Il 18/01/2010 21:59, Mike Driscoll ha scritto:

  On Jan 18, 8:32 am, ted t...@sjksdjk.it wrote:

  Hi at all...
  Can someone please give me some advice, about a good IDE with control
  GUI under Linux ?

  Actually i know QT Creator by Nokia which i can use with Python (but i
  don't know how).

  And, a good library for access to database (mysql, sql server, oracle) ?

  Thank you very much !

  bye

  Check out Dabo for the database stuff: http://dabodev.com/

  There is nothing like Visual Studio. The closest I've found are things
  like the QT Creator and wxGlade / Boa Constructor / wxFormBuilder. I
  know I've heard of another one that was commercial, but I can't recall
  the name at the moment.

  ---
  Mike Driscoll

  Blog: http://blog.pythonlibrary.org

  PyCon 2010 Atlanta Feb 19-21  http://us.pycon.org/

Thank you very much.

Bye!



Hi, check this Python Wiki page:
http://wiki.python.org/moin/IntegratedDevelopmentEnvironments

Cheers.


Py 3: How to switch application to Unicode strings?

2010-01-19 Thread Gnarlodious
I am using Python 3, getting an error from SQLite:

sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless
you use a text_factory that can interpret 8-bit bytestrings (like
text_factory = str). It is highly recommended that you instead just
switch your application to Unicode strings.

So... how do I switch to Unicode? I thought I was doing it when I put

# coding:utf-8

at the start of my script.

-- Gnarlie


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Alf P. Steinbach

* Gerald Britton:

Yesterday I stumbled across some old code in a project I was working
on.  It does something like this:

mystring = '\n'.join( [ line for line in lines if some conditions
depending on line ] )

where lines is a simple list of strings.  I realized that the code
had been written before you could put a list comprehension as an
argument to join().  I figured that I could drop the '[' and ']' and
leave the rest as a list comprehension since that works with current
Python (and the project's base is 2.5).  So I rewrote the original
statement like this:

mystring = '\n'.join( line for line in lines if some conditions
depending on line )

It works as expected.  Then I got curious as to how it performs.  I
was surprised to learn that the rewritten expression runs more than
twice as _slow_.  e.g.:


>>> l
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

>>> Timer("' '.join([x for x in l])", 'l = map(str,range(10))').timeit()
2.9967339038848877

>>> Timer("' '.join(x for x in l)", 'l = map(str,range(10))').timeit()
7.2045478820800781

Notice that I dropped the condition testing that was in my original
code.  I just wanted to see the effect of two different expressions.

I thought that maybe there was some lower bound on the number of the
items in the list or list comprehension beyond which the comprehension
would prove more efficient.  There doesn't appear to be one.  I scaled
the length of the input list up to 1 million items and got more or
less the same relative performance.

Now I'm really curious and I'd like to know:

1. Can anyone else confirm this observation?


>>> Timer("' '.join([x for x in l])", 'l = map(str,range(10))').timeit()
5.8625191190500345
>>> Timer("' '.join(x for x in l)", 'l = map(str,range(10))').timeit()
12.093135300715574



2. Why should the pure list comprehension be slower than the same
comprehension enclosed in '[...]' ?


Regarding (2) the unparenthesized expression in join is *not* a list 
comprehension but a generator expression.


And as such it involves join calling next() on the generator object repeatedly, 
with each next() call involving a light-weight context shift.


In addition the docs mumble something about lazy evaluation, and that may also 
contribute to the overhead.


I think that in contrast, the interpreter can evaluate a list comprehension, [x 
for x in blah], directly without any context shifting, just by transforming it 
to equivalent code and putting the target expressions innermost there.


And so the main factor causing a slowdown for a list comprehension would, I 
think, be paging and such if the list it produced was Really Huge, while for the 
generator there's no memory issue but rather much calling & context shifting.
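
The per-item overhead described above can be measured directly with timeit (a
sketch; absolute numbers will vary by machine and Python version):

```python
from timeit import Timer

setup = "l = list(map(str, range(10)))"
# Same join() call, list comprehension vs. generator expression.
t_list = Timer("' '.join([x for x in l])", setup).timeit(100000)
t_gen = Timer("' '.join(x for x in l)", setup).timeit(100000)
print("list comp: %.3fs  genexp: %.3fs" % (t_list, t_gen))
```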



Cheers & hth.,

- Alf
--
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Gerald Britton
Thanks!  Good explanation.

On Tue, Jan 19, 2010 at 10:57 AM, Alf P. Steinbach al...@start.no wrote:
 * Gerald Britton:

 Yesterday I stumbled across some old code in a project I was working
 on.  It does something like this:

 mystring = '\n'.join( [ line for line in lines if some conditions
 depending on line ] )

 where lines is a simple list of strings.  I realized that the code
 had been written before you could put a list comprehension as an
 argument to join().  I figured that I could drop the '[' and ']' and
 leave the rest as a list comprehension since that works with current
 Python (and the project's base is 2.5).  So I rewrote the original
 statement like this:

 mystring = '\n'.join( line for line in lines if some conditions
 depending on line )

 It works as expected.  Then I got curious as to how it performs.  I
 was surprised to learn that the rewritten expression runs more than
 twice as _slow_.  e.g.:

 l

 ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

 Timer("' '.join([x for x in l])", 'l = map(str,range(10))').timeit()

 2.9967339038848877

 Timer("' '.join(x for x in l)", 'l = map(str,range(10))').timeit()

 7.2045478820800781

 Notice that I dropped the condition testing that was in my original
 code.  I just wanted to see the effect of two different expressions.

 I thought that maybe there was some lower bound on the number of the
 items in the list or list comprehension beyond which the comprehension
 would prove more efficient.  There doesn't appear to be one.  I scaled
 the length of the input list up to 1 million items and got more or
 less the same relative performance.

 Now I'm really curious and I'd like to know:

 1. Can anyone else confirm this observation?

 Timer("' '.join([x for x in l])", 'l = map(str,range(10))').timeit()
 5.8625191190500345
 Timer("' '.join(x for x in l)", 'l = map(str,range(10))').timeit()
 12.093135300715574
 _


 2. Why should the pure list comprehension be slower than the same
 comprehension enclosed in '[...]' ?

 Regarding (2) the unparenthesized expression in join is *not* a list
 comprehension but a generator expression.

 And as such it involves join calling next() on the generator object
 repeatedly, with each next() call involving a light-weight context shift.

 In addition the docs mumble something about lazy evaluation, and that may
 also contribute to the overhead.

 I think that in contrast, the interpreter can evaluate a list comprehension,
 [x for x in blah], directly without any context shifting, just by
 transforming it to equivalent code and putting the target expressions
 innermost there.

 And so the main factor causing a slowdown for a list comprehension would, I
 think, be paging and such if the list it produced was Really Huge, while for
 the generator there's no memory issue but rather much calling & context
 shifting.


 Cheers & hth.,

 - Alf
 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Stephen Hansen
On Tue, Jan 19, 2010 at 7:30 AM, Gerald Britton gerald.brit...@gmail.comwrote:
[snip]

 mystring = '\n'.join( line for line in lines if some conditions
 depending on line )


Note, this is not a list comprehension, but a generator comprehension.

A list comprehension is used to build, in one sweep, a list and return it.

A generator comprehension is used to build a generator that can be iterated
over to produce a sequence of items.

I think you're seeing not performance of the expression, but the function
call overhead which generators include. A generator requires one to call its
next method to get each item: a list comprehension is just syntactical sugar
for a for loop.

As to which is faster, I think it depends. Your test-case is using *really*
small ranges-- just ten items. In this case, just doing a simple loop to
build a list and then pass it through join is probably faster. If you're
using a much larger list though, the characteristics of the problem may
change, where the lazy evaluation of a generator expression may be more
desirable.

A list comprehension includes a waste of memory, too: you have to build up a
complete list before you return it, and if you have a lot of lines? That can
be a problem.

As you can see, the performance characteristics between the two narrow
considerably if you compare a larger sample:

>>> Timer("' '.join([str(x) for x in l])", 'l = xrange(100)').timeit()
50.092024087905884
>>> Timer("' '.join(str(x) for x in l)", 'l = xrange(100)').timeit()
54.591049909591675

--S
-- 
http://mail.python.org/mailman/listinfo/python-list


multiprocessing problems

2010-01-19 Thread DoxaLogos
Hi,

I decided to play around with the multiprocessing module, and I'm
having some strange side effects that I can't explain.  It makes me
wonder if I'm just overlooking something obvious or not.  Basically, I
have a script that parses through a lot of files doing search and replace
on key strings inside each file.  I decided to split the work up across
multiple processes on each processor core (4 total).  I've tried various
ways of doing this, from using a pool to calling out separate
processes, but the result has been the same: the computer crashes from
endless process spawning.

Here's the guts of my latest incarnation.

def ProcessBatch(files):
p = []
for file in files:
p.append(Process(target=ProcessFile,args=file))

for x in p:
x.start()

for x in p:
x.join()

p = []
return

Now, the function calling ProcessBatch looks like this:
def ReplaceIt(files):

All this does is walks through all the files passed to it and
verifies
the file is a legitimate file to be processed (project file).

@param files:  files to be processed

processFiles = []
for replacefile in files:
if(CheckSkipFile(replacefile)):
processFiles.append(replacefile)
if(len(processFiles) == 4):
ProcessBatch(processFiles)
processFiles = []

#check for left over files once main loop is done and process them
if(len(processFiles) > 0):
ProcessBatch(processFiles)

return

Specs:
Windows 7 64-bit
Python v2.6.2
Intel i5


Thanks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py 3: Terminal script can't find relative path

2010-01-19 Thread nn
On Jan 19, 8:03 am, Gnarlodious gnarlodi...@gmail.com wrote:
 On Jan 18, 4:21 pm, John Bokma j...@castleamber.com wrote:

  Gnarlodious gnarlodi...@gmail.com writes:
   I am running a script in a browser that finds the file in subfolder
   Data:

   Content=Plist('Data/Content.plist')

   However, running the same script in Terminal errors:

   IOError: [Errno 2] No such file or directory: 'Data/Content.plist'

  What does:

  ls -l Data/Content.plist

  in the terminal give?

 I can replace with absolute paths and it works as expected. Could this
 be a Python 3 bug? Where as a CGI script it finds the relative path
 but not in Terminal?

 -- Gnarlie

Stop and think for a second what you are saying: It works with
absolute paths, it works as CGI script with relative paths, it doesn't
work in the terminal. What is different? Do you know for sure what
folder you are starting at when using the relative path? Most likely
the terminal starts in a different place than the CGI script.
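
A common fix (not from the thread; names are illustrative) is to anchor the
relative path at the script's own location rather than the process's current
working directory, so it behaves the same under CGI and in a terminal:

```python
import os

def data_path(relative):
    # __file__ is this script's own path; resolving against it makes
    # the lookup independent of where the process was started.
    base = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(base, relative)

print(data_path(os.path.join('Data', 'Content.plist')))
```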
-- 
http://mail.python.org/mailman/listinfo/python-list


Closing a Curses Window???

2010-01-19 Thread Adam Tauno Williams
I'm using the curses module of Python to create a TUI.  I can create
windows and subwindows, but I can't find anything on deleting or
disposing of a subwindow.

http://docs.python.org/library/curses.html
http://www.amk.ca/python/howto/curses/curses.html

How do I get rid of a subwindow?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: can i examine the svn rev used to build a python 3 executable?

2010-01-19 Thread Robert P. J. Day
On Tue, 19 Jan 2010, Robert P. J. Day wrote:

   i'm currently checking out python3 from the svn repo, configuring,
 building and installing under /usr/local/bin on my fedora 12 system,
 all good.

   i'm curious, though -- is there a query i can make of that
 executable that would tell me what svn rev it was built from?  i'm
 guessing not, but i thought i'd ask.

  never mind.  just discovered that while python3 -V won't do it,
executing it gives me:

$ python3
Python 3.2a0 (py3k:77609, Jan 19 2010, 04:10:16)
...

and it's that 77609 rev number i was after.
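
For scripted access (rather than reading the interactive banner), the same
build information is exposed programmatically; a small sketch:

```python
import platform
import sys

print(sys.version)                 # the full banner string
print(platform.python_branch())    # e.g. 'py3k' for an svn-era build
print(platform.python_revision())  # e.g. '77609' (may be empty on some builds)
```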

rday
--


Robert P. J. Day   Waterloo, Ontario, CANADA

Linux Consulting, Training and Kernel Pedantry.

Web page:  http://crashcourse.ca
Twitter:   http://twitter.com/rpjday

-- 
http://mail.python.org/mailman/listinfo/python-list


use of super

2010-01-19 Thread harryos
hi
I was going thru the weblog appln in practical django book by
bennet .I came across this

class Entry(Model):
def save(self):
dosomething()
super(Entry,self).save()

I couldn't make out why Entry and self are passed as arguments to
super(). Can someone please explain?
thanks
harry
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Gerald Britton
Interestingly, I scaled it up to a million list items with more or
less the same results.  It's helpful to see that your results are
different.  That leads me to suspect that mine are somehow related to
my own environment.

Still I think the key is the overhead in calling next() for each item
in the generator expression.  That in itself probably accounts for the
differences since function calls are somewhat expensive IIRC.

On Tue, Jan 19, 2010 at 11:18 AM, Stephen Hansen apt.shan...@gmail.com wrote:
 On Tue, Jan 19, 2010 at 7:30 AM, Gerald Britton gerald.brit...@gmail.com
 wrote:
 [snip]

 mystring = '\n'.join( line for line in lines if some conditions
 depending on line )

 Note, this is not a list comprehension, but a generator comprehension.
 A list comprehension is used to build, in one sweep, a list and return it.
 A generator comprehension is used to build an generator that can be iterated
 over to produce a sequence of items.
 I think you're seeing not performance of the expression, but the function
 call overhead which generators include. A generator requires one to call its
 next method to get each item: a list comprehension is just syntactical sugar
 for a for loop.
 As to which is faster, I think it depends. Your test-case is using *really*
 small ranges-- just ten items. In this case, just doing a simple loop to
 build a list and then pass it through join is probably faster. If you're
 using a much larger list though, the characteristics of the problem may
 change, where the lazy evaluation of a generator expression may be more
 desirable.
 A list comprehension includes a waste of memory, too: you have to build up a
 complete list before you return it, and if you have a lot of lines? That can
 be a problem.
 As you can see, the performance characteristics between the two narrow
 considerably if you compare a larger sample:
  Timer("' '.join([str(x) for x in l])", 'l = xrange(100)').timeit()
 50.092024087905884
  Timer("' '.join(str(x) for x in l)", 'l = xrange(100)').timeit()
 54.591049909591675
 --S



-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing problems

2010-01-19 Thread Adam Tauno Williams
 I decided to play around with the multiprocessing module, and I'm
 having some strange side effects that I can't explain.  It makes me
 wonder if I'm just overlooking something obvious or not.  Basically, I
 have a script parses through a lot of files doing search and replace
 on key strings inside the file.  I decided the split the work up on
 multiple processes on each processor core (4 total).  I've tried many
 various ways doing this form using pool to calling out separate
 processes, but the result has been the same: computer crashes from
 endless process spawn.

Are you hitting a ulimit error?  The number of processes you can create
is probably limited. 

TIP: close os.stdin on your subprocesses.

 Here's the guts of my latest incarnation.
 def ProcessBatch(files):
 p = []
 for file in files:
 p.append(Process(target=ProcessFile,args=file))
 for x in p:
 x.start()
 for x in p:
 x.join()
 p = []
 return
 Now, the function calling ProcessBatch looks like this:
 def ReplaceIt(files):
 processFiles = []
 for replacefile in files:
 if(CheckSkipFile(replacefile)):
 processFiles.append(replacefile)
 if(len(processFiles) == 4):
 ProcessBatch(processFiles)
 processFiles = []
 #check for left over files once main loop is done and process them
 if(len(processFiles) > 0):
 ProcessBatch(processFiles)

According to this you will process files in sets of four, but an unknown
number of sets of four.

-- 
http://mail.python.org/mailman/listinfo/python-list


can i examine the svn rev used to build a python 3 executable?

2010-01-19 Thread Robert P. J. Day

  i'm currently checking out python3 from the svn repo, configuring,
building and installing under /usr/local/bin on my fedora 12 system,
all good.

  i'm curious, though -- is there a query i can make of that
executable that would tell me what svn rev it was built from?  i'm
guessing not, but i thought i'd ask.

rday
--


Robert P. J. Day   Waterloo, Ontario, CANADA

Linux Consulting, Training and Kernel Pedantry.

Web page:  http://crashcourse.ca
Twitter:   http://twitter.com/rpjday

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing problems

2010-01-19 Thread DoxaLogos
On Jan 19, 10:26 am, Adam Tauno Williams awill...@opengroupware.us
wrote:
  I decided to play around with the multiprocessing module, and I'm
  having some strange side effects that I can't explain.  It makes me
  wonder if I'm just overlooking something obvious or not.  Basically, I
  have a script parses through a lot of files doing search and replace
  on key strings inside the file.  I decided the split the work up on
  multiple processes on each processor core (4 total).  I've tried many
  various ways doing this form using pool to calling out separate
  processes, but the result has been the same: computer crashes from
  endless process spawn.

 Are you hitting a ulimit error?  The number of processes you can create
 is probably limited.

 TIP: close os.stdin on your subprocesses.



  Here's the guts of my latest incarnation.
  def ProcessBatch(files):
      p = []
      for file in files:
          p.append(Process(target=ProcessFile,args=file))
      for x in p:
          x.start()
      for x in p:
          x.join()
      p = []
      return
  Now, the function calling ProcessBatch looks like this:
  def ReplaceIt(files):
      processFiles = []
      for replacefile in files:
          if(CheckSkipFile(replacefile)):
              processFiles.append(replacefile)
              if(len(processFiles) == 4):
                  ProcessBatch(processFiles)
                  processFiles = []
      #check for left over files once main loop is done and process them
      if(len(processFiles) > 0):
          ProcessBatch(processFiles)

 According to this you will process files in sets of four, but an unknown
 number of sets of four.

What would be the proper way to do only a set of 4, stop, then do
another set of 4?  I'm trying to process only 4 files at a time before
doing another set of 4.
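
One approach (a sketch, not from the thread) is to let multiprocessing.Pool
cap concurrency at 4 instead of batching by hand. Two other things worth
checking in the original code: Process args must be a tuple (args=(file,),
not args=file), and on Windows the spawning code must be guarded by
if __name__ == '__main__':, otherwise each child re-imports the main module
and spawns again, which matches the endless-spawn symptom. ProcessFile below
is a stand-in for the real search-and-replace work:

```python
from multiprocessing import Pool

def ProcessFile(path):
    # Stand-in for the real per-file search-and-replace.
    return path.upper()

def ReplaceAll(files):
    # Pool keeps at most 4 worker processes alive and hands each one a
    # new file as soon as it finishes -- no manual batching needed.
    pool = Pool(processes=4)
    try:
        return pool.map(ProcessFile, files)
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':   # required on Windows to prevent respawning
    print(ReplaceAll(['a.txt', 'b.txt', 'c.txt']))
```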
-- 
http://mail.python.org/mailman/listinfo/python-list


Create list/dict from string

2010-01-19 Thread SoxFan44
I was wondering if there was a way to create a list (which in this
case would contain several dicts) based on a string passed in by the
user.  Security is not an issue.  Basically I want to be able to have
the user pass in using optparse:
--actions='[{"action_name": "action_1", "val": "asdf", "val2": "asdf"},
            {"action_name": "action_2", "val": "asdf", "val2": "asdf"},
            {"action_name": "action_1", "val": "asdf", "val2": "asdf"}]'

And have this create a list/dict.  I'm aware of pickle, but it won't
work as far as I can tell.

Thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Arrrrgh! Another module broken

2010-01-19 Thread nn
On Jan 18, 11:37 am, Grant Edwards inva...@invalid.invalid wrote:
 On 2010-01-18, Jive Dadson notonthe...@noisp.com wrote:

  I just found another module that broke when I went to 2.6.  Gnuplot.
  Apparently one of its routines has a parameter named with.  That used
  to be okay, and now it's not.

 I remember seeing deprecated warnings about that _years_ ago,
 and I would have sworn it's been fixed for at least a couple
 years.

 --
 Grant

FWIW, 'with' deprecation warnings have existed since September 19, 2006, when
Python 2.5 was released.
-- 
http://mail.python.org/mailman/listinfo/python-list


how to sort two dimensional array

2010-01-19 Thread robert somerville
Hi;
 i am having trouble trying to sort the rows of a 2 dimensional array by the
values in the first column .. does anybody know how or have an example of
how to do this ??? while leaving the remaining columns relative to the
leading column

from numpy import *

a=array( [ [4, 4, 3], [4, 5, 2],  [3, 1, 1] ] )

i would like to generate the output (or get the output ...)

b = [ [3,1,1], [4,4,3], [4,5,2] ]

thanks;
bob
-- 
http://mail.python.org/mailman/listinfo/python-list


how to sort two dimensional array ??

2010-01-19 Thread robert somerville
Hi;
 i am having trouble trying to sort the rows of a 2 dimensional array by the
values in the first column .. does anybody know how or have an example of
how to do this ??? while leaving the remaining columns relative to the
leading column

from numpy import *

a=array( [ [4, 4, 3], [4, 5, 2],  [3, 1, 1] ] )

i would like to generate the output (or get the output ...)

b = [ [3,1,1], [4,4,3], [4,5,2] ]

thanks;
bob
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: use of super

2010-01-19 Thread Simon Brunning
2010/1/19 harryos oswald.ha...@gmail.com:
 I was going thru the weblog appln in practical django book by
 bennet .I came across this

 class Entry(Model):
        def save(self):
                dosomething()
                super(Entry,self).save()

 I couldn't make out why Entry and self are passed as arguments to super
 ().Can someone please explain?

Does http://docs.python.org/library/functions.html#super make
anything clearer?
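
In short, super(Entry, self) returns a proxy that starts the attribute lookup
in the method resolution order *after* Entry, bound to the instance self; a
minimal sketch (class names are illustrative):

```python
class Model(object):
    def save(self):
        return 'Model.save'

class Entry(Model):
    def save(self):
        # The two arguments tell super where to start looking (the class
        # after Entry in the MRO) and which instance to bind to (self),
        # so .save() here resolves to Model.save.
        return 'Entry.save -> ' + super(Entry, self).save()

print(Entry().save())  # Entry.save -> Model.save
```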

-- 
Cheers,
Simon B.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Create list/dict from string

2010-01-19 Thread Peter Otten
SoxFan44 wrote:

 I was wondering if there was a way to create a list (which in this
 case would contain several dicts) based on a string passed in by the
 user.  Security is not an issue.  Basically I want to be able to have
 the user pass in using optparse:
 --actions='[{"action_name": "action_1", "val": "asdf", "val2": "asdf"},
             {"action_name": "action_2", "val": "asdf", "val2": "asdf"},
             {"action_name": "action_1", "val": "asdf", "val2": "asdf"}]'
 
 And have this create a list/dict.  I'm aware of pickle, but it won't
 work as far as I can tell.

Both eval() and json.loads() will do. eval() is dangerous as it allows the 
user to run arbitrary python code.
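
A sketch of the json route (note JSON requires double-quoted keys and
strings); ast.literal_eval is a third option that safely parses Python
literal syntax without executing code:

```python
import ast
import json

raw = '[{"action_name": "action_1", "val": "asdf", "val2": "asdf"}]'
actions = json.loads(raw)
print(actions[0]['action_name'])

# literal_eval accepts Python-style literals (single quotes are fine)
# but evaluates no function calls or arbitrary expressions.
same = ast.literal_eval("[{'action_name': 'action_1', 'val': 'asdf'}]")
print(same[0]['val'])
```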

Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is HTML report of tests run using PyUnit (unittest) possible?

2010-01-19 Thread Phlip
On Jan 19, 3:16 am, fossist foss...@gmail.com wrote:

 I am using PyUnit (unittest module) for loading test cases from our
 modules and executing them. Is it possible to export the test results
 as HTML report? Currently the results appear as text on standard
 output while the tests execute. But is there something out of the box
 available in PyUnit to make this possible?

django-test-extensions can do this, but I'm unaware how well it works
without Django. (And I _could_ complain that it comes with one or two
irritations _with_ Django;)

I would download it and read its source to see how the --xml option
works.

(Then you'd use an XSL filter to rip the XML into HTML...)

--
  Phlip
  http://c2.com/cgi/wiki?ZeekLand
-- 
http://mail.python.org/mailman/listinfo/python-list


how to sort two dimensional array ??

2010-01-19 Thread Robert Somerville

Hi;
i am having trouble trying to sort the rows of a 2 dimensional array by 
the values in the first column .. does anybody know how or have an 
example of how to do this ??? while leaving the remaining columns 
relative to the leading column


from numpy import *

a=array( [ [4, 4, 3], [4, 5, 2],  [3, 1, 1] ] )

i would like to generate the output (or get the output ...)

b = [ [3,1,1], [4,4,3], [4,5,2] ]



thanks;
bob
--
http://mail.python.org/mailman/listinfo/python-list


Iterate over group names in a regex match?

2010-01-19 Thread Brian D
Here's a simple named group matching pattern:

>>> s = "1,2,3"
>>> p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
>>> m = re.match(p, s)
>>> m
<_sre.SRE_Match object at 0x011BE610>
>>> print m.groups()
('1', '2', '3')

Is it possible to call the group names, so that I can iterate over
them?

The result I'm looking for would be:

('one', 'two', 'three')



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python IDE for MacOS-X

2010-01-19 Thread Phlip
On Jan 18, 11:09 pm, Jean Guillaume Pyraksos wis...@hotmail.com
wrote:

 What's the best one to use with beginners ?
 Something with integrated syntax editor, browser of doc...
 Thanks,

Before this message goes stale, there's TextMate (which I have too
much experience with to consider redeemable in any way)...

...and there's Komodo Edit.

The problems I have with that are...

 - the code browsing breaks when the wind blows, and you must Find in Files
   (my library, Django, is symlinked below my project, so FiF can see it)

 - the editor refuses to let me run a script - such as a test script -
   each time I hit F5. For whatever reason - poor keyboard remapping, or
   lack of a plug-in for Django - I must use Ctrl+R to run a batch file,
   just to test. Testing should be ONE (1) keystroke.

 - FiF sees my .git folder, and everything in it. WTF??

 - after searching or anything, I must click the editor with the mouse
   to resume editing. Escape should move the focus out of the frame to
   the editor, and of course any activity that takes you out of the
   editor should dump you back into it by default

Other than that, it has all the standard edito

My problem with Mac in general is the keystrokes are always so broken
they force me to switch to a mouse. Nobody seems to usability test
these things and see how long you can type  manipulate windows
without
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Create list/dict from string

2010-01-19 Thread Gerald Britton
Can't you just use the dict() built-in function like this:

dict({"action_name": "action_1", "val": "asdf"})

Of course if the list is not properly formed, this will fail.  But I
guess you have thought of that already.

On Tue, Jan 19, 2010 at 11:33 AM, SoxFan44 gregchag...@gmail.com wrote:
 I was wondering if there was a way to create a list (which in this
 case would contain several dicts) based on a string passed in by the
 user.  Security is not an issue.  Basically I want to be able to have
 the user pass in using optparse:
 --actions='[{"action_name": "action_1", "val": "asdf", "val2": "asdf"},
             {"action_name": "action_2", "val": "asdf", "val2": "asdf"},
             {"action_name": "action_1", "val": "asdf", "val2": "asdf"}]'

 And have this create a list/dict.  I'm aware of pickle, but it won't
 work as far as I can tell.

 Thanks.
 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Create list/dict from string

2010-01-19 Thread Simon Brunning
2010/1/19 Peter Otten __pete...@web.de:
 Both eval() and json.loads() will do. eval() is dangerous as it allows the
 user to run arbitrary python code.

Something like http://code.activestate.com/recipes/364469/ might be
worth a look too.

-- 
Cheers,
Simon B.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterate over group names in a regex match?

2010-01-19 Thread djc
Brian D wrote:
 Here's a simple named group matching pattern:
 
 s = "1,2,3"
 p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
 m = re.match(p, s)
 m
 <_sre.SRE_Match object at 0x011BE610>
 print m.groups()
 ('1', '2', '3')
 
 Is it possible to call the group names, so that I can iterate over
 them?
 
 The result I'm looking for would be:
 
 ('one', 'two', 'three')


>>> print(m.groupdict())
{'one': '1', 'three': '3', 'two': '2'}

>>> print(m.groupdict().keys())
['one', 'three', 'two']


-- 
David Clark, MSc, PhD.  Dept of Information Studies
Systems  Web Development Manager   University  College  London
UCL Centre for Publishing   Gower Str  London  WCIE 6BT
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterate over group names in a regex match?

2010-01-19 Thread Alf P. Steinbach

* Brian D:

Here's a simple named group matching pattern:


s = "1,2,3"
p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
m = re.match(p, s)
m

<_sre.SRE_Match object at 0x011BE610>

print m.groups()

('1', '2', '3')

Is it possible to call the group names, so that I can iterate over
them?

The result I'm looking for would be:

('one', 'two', 'three')


I never used that beast (I'm in a sense pretty new to Python, although starting 
some months back I only investigate what's needed for my writings), but checking 
things in the interpreter:



>>> import re
>>> re
<module 're' from 'C:\Program Files\cpython\python26\lib\re.pyc'>
>>> s = "1,2,3"
>>> p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
>>> m = re.match(p, s)
>>> m
<_sre.SRE_Match object at 0x01319F70>
>>> m.groups()
('1', '2', '3')
>>> type( m.groups() )
<type 'tuple'>
>>> dir( m )
['__copy__', '__deepcopy__', 'end', 'expand', 'group', 'groupdict', 'groups', 
'span', 'start']
>>> m.groupdict
<built-in method groupdict of _sre.SRE_Match object at 0x01319F70>
>>> m.groupdict()
{'one': '1', 'three': '3', 'two': '2'}
>>> print( tuple( m.groupdict().keys() ) )
('one', 'three', 'two')
>>> _


Cheers  hth.,

- Alf
--
http://mail.python.org/mailman/listinfo/python-list


Re: use of super

2010-01-19 Thread harryos
thanks Simon..I should have checked it
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterate over group names in a regex match?

2010-01-19 Thread Peter Otten
Brian D wrote:

 Here's a simple named group matching pattern:
 
 s = "1,2,3"
 p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
 m = re.match(p, s)
 m
 <_sre.SRE_Match object at 0x011BE610>
 print m.groups()
 ('1', '2', '3')
 
 Is it possible to call the group names, so that I can iterate over
 them?
 
 The result I'm looking for would be:
 
 ('one', 'two', 'three')

>>> s = "1,2,3"
>>> p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
>>> m = re.match(p, s)
>>> dir(m)
['__copy__', '__deepcopy__', 'end', 'expand', 'group', 'groupdict', 
'groups', 'span', 'start']
>>> m.groupdict().keys()
['one', 'three', 'two']
>>> sorted(m.groupdict(), key=m.span)
['one', 'two', 'three']

Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] Python-es mailing list changes home

2010-01-19 Thread Chema Cortes
=== Python-es mailing list changes home ===

Due to technical problems with the site that usually ran the Python-es
mailing list (Python list for the Spanish speaking community), we are
setting up a new one under the python.org umbrella.  Hence, the new
list will become python...@python.org (the old one was
python...@aditel.org).

Please feel free to subscribe to the new list in:

http://mail.python.org/mailman/listinfo/python-es

Thanks!

=== La lista de distribución Python-es cambia de lugar ===

Debido a problemas técnicos con el sitio que normalmente albergaba la
lista de Python-es (Lista de Python para la comunidad
hispano-hablante), estamos configurando una nueva en el sitio
python.org.  Así que la nueva lista será python...@python.org (en
sustitución de la antigua python...@aditel.org).

Por favor, si lo deseas, date de alta en la nueva lista en:

http://mail.python.org/mailman/listinfo/python-es

¡Gracias!

Chema Cortes, Oswaldo Hernández y Francesc Alted
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to sort two dimensional array ??

2010-01-19 Thread Robert Kern

On 2010-01-19 11:00 AM, Robert Somerville wrote:

Hi;
i am having trouble trying to sort the rows of a 2 dimensional array by
the values in the first column .. does anybody know how or have an
example of how to do this ??? while leaving the remain columns remain
relative to the leading column


You will want to ask numpy questions on the numpy mailing list:

  http://www.scipy.org/Mailing_Lists


from numpy import *

a=array( [ [4, 4, 3], [4, 5, 2], [3, 1, 1] ] )

i would like to generate the output (or get the output ...)

b = [ [3,1,1], [4,4,3], [4,5,2] ]


In [4]: import numpy as np

In [5]: a = np.array( [ [4, 4, 3], [4, 5, 2], [3, 1, 1] ] )

In [6]: i = a[:,0].argsort()

In [7]: b = a[i]

In [8]: b
Out[8]:
array([[3, 1, 1],
   [4, 4, 3],
   [4, 5, 2]])
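
One caveat (not raised in the thread): argsort on the first column alone
leaves the order of rows with equal leading values unspecified under the
default sort. np.lexsort can break ties with the remaining columns:

```python
import numpy as np

a = np.array([[4, 4, 3], [4, 5, 2], [3, 1, 1]])
# lexsort treats its *last* key as primary, so pass the columns in
# reverse: sort by column 0, then break ties with columns 1 and 2.
order = np.lexsort((a[:, 2], a[:, 1], a[:, 0]))
b = a[order]
print(b)  # [[3 1 1] [4 4 3] [4 5 2]]
```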

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: how to sort two dimensional array ??

2010-01-19 Thread Stephen Hansen
On Tue, Jan 19, 2010 at 9:00 AM, Robert Somerville 
rsomervi...@sjgeophysics.com wrote:

 Hi;


Hi, why did you post this three times?


 i am having trouble trying to sort the rows of a 2 dimensional array by the
 values in the first column .. does anybody know how or have an example of
 how to do this ??? while leaving the remain columns remain relative to the
 leading column

 from numpy import *

 a=array( [ [4, 4, 3], [4, 5, 2],  [3, 1, 1] ] )

 i would like to generate the output (or get the output ...)

 b = [ [3,1,1], [4,4,3], [4,5,2] ]


I don't use numpy, so this may or may not work. But for a regular python
list-of-lists (2 dimensional array), you simply do:

a.sort(key=lambda x: x[0])

--S
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterate over group names in a regex match?

2010-01-19 Thread Brian D
On Jan 19, 11:28 am, Peter Otten __pete...@web.de wrote:
 Brian D wrote:
  Here's a simple named group matching pattern:

  s = 1,2,3
  p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
  m = re.match(p, s)
  m
  _sre.SRE_Match object at 0x011BE610
  print m.groups()
  ('1', '2', '3')

  Is it possible to call the group names, so that I can iterate over
  them?

  The result I'm looking for would be:

  ('one', 'two', 'three')
  s = 1,2,3
  p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
  m = re.match(p, s)
  dir(m)

 ['__copy__', '__deepcopy__', 'end', 'expand', 'group', 'groupdict',
 'groups', 'span', 'start'] m.groupdict().keys()

 ['one', 'three', 'two'] sorted(m.groupdict(), key=m.span)

 ['one', 'two', 'three']

 Peter

groupdict() does it. I've never seen it used before. Very cool!

Thank you all for taking time to answer the question.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterate over group names in a regex match?

2010-01-19 Thread Brian D
On Jan 19, 11:51 am, Brian D brianden...@gmail.com wrote:
 On Jan 19, 11:28 am, Peter Otten __pete...@web.de wrote:



  Brian D wrote:
   Here's a simple named group matching pattern:

   s = 1,2,3
    p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
   m = re.match(p, s)
   m
   _sre.SRE_Match object at 0x011BE610
   print m.groups()
   ('1', '2', '3')

   Is it possible to call the group names, so that I can iterate over
   them?

   The result I'm looking for would be:

   ('one', 'two', 'three')
   s = 1,2,3
    p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
   m = re.match(p, s)
   dir(m)

  ['__copy__', '__deepcopy__', 'end', 'expand', 'group', 'groupdict',
  'groups', 'span', 'start']
   m.groupdict().keys()
  ['one', 'three', 'two']
   sorted(m.groupdict(), key=m.span)
  ['one', 'two', 'three']

  Peter

 groupdict() does it. I've never seen it used before. Very cool!

 Thank you all for taking time to answer the question.


FYI, here's an example of the working result ...

 for k, v in m.groupdict().iteritems():
     k, v


('one', '1')
('three', '3')
('two', '2')


The use for this is that I'm pulling data from a flat text file using
regex, and storing values in a dictionary that will be used to update
a database.
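
That workflow can be sketched as follows (the field names and line format here are hypothetical, purely for illustration):

```python
import re

# hypothetical flat-file record format: "id=7;name=widget;qty=12"
pattern = re.compile(r"id=(?P<id>\d+);name=(?P<name>\w+);qty=(?P<qty>\d+)")

def parse_record(line):
    """Return a dict of named-group values, or None if the line doesn't match."""
    m = pattern.match(line)
    return m.groupdict() if m else None

row = parse_record("id=7;name=widget;qty=12")
print(row)  # {'id': '7', 'name': 'widget', 'qty': '12'}
```

The resulting dict can then feed a parameterized database statement directly.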

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterate over group names in a regex match?

2010-01-19 Thread MRAB

Brian D wrote:

Here's a simple named group matching pattern:


s = "1,2,3"
p = re.compile(r"(?P<one>\d),(?P<two>\d),(?P<three>\d)")
m = re.match(p, s)
m

<_sre.SRE_Match object at 0x011BE610>

print m.groups()

('1', '2', '3')

Is it possible to call the group names, so that I can iterate over
them?

The result I'm looking for would be:

('one', 'two', 'three')


The closest you can get is with groupdict():

 print m.groupdict()
{'one': '1', 'three': '3', 'two': '2'}

--
http://mail.python.org/mailman/listinfo/python-list


Re: substitution

2010-01-19 Thread Wyrmskull


Peter Otten wrote:

def replace_many(s, pairs):
if len(pairs):
a, b = pairs[0]
rest = pairs[1:]
return b.join(replace_many(t, rest) for t in s.split(a))
else:
return s

-

Proves wrong: this way x -> y -> z.
You call replace_many again on the central part of the split.
The specifics indicate that x -> y in the end.
Your flowing pythonicity (if len(x):) gave me lots of inspiration.
I bet this will win the prize ;)

-

def mySubst(reps,string):
if not(len(reps)):
return string
a,b,c = string.partition(reps[0][0])
if b:
return mySubst(reps,a) + reps[0][1] + mySubst (reps,c)
else:
return mySubst(reps[1:],string)

print mySubst( ( ('foo','bar'), ('bar','qux'), ('qux','foo') ), 'foobarquxfoo')

---
Wyrmskull lordkran...@gmail.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: substitution

2010-01-19 Thread Peter Otten
Wyrmskull wrote:

 Peter Otten wrote:

 def replace_many(s, pairs):
 if len(pairs):
 a, b = pairs[0]
 rest = pairs[1:]
 return b.join(replace_many(t, rest) for t in s.split(a))
 else:
 return s

 Proves wrong: this way x -> y -> z.
 You call replace_many again on the central part of the split
 The specifics indicate that x -> y in the end.

Sorry, I don't understand what you want to say with the above.

 Try with this:
 
 def mySubst(reps,string):
 if not(len(reps)):
 return string
 current = reps[0][0]
 a,b,c = string.partition(current)
 if b:
 return mySubst(reps,a) + reps[0][1] + mySubst (reps,c)
 else:
 return mySubst(reps[1:],string)
 
 print mySubst( ( ('foo','bar'), ('bar','qux'), ('qux','foo') ),
  'foobarquxfoo')
 
 ---
 Wyrmskull lordkran...@gmail.com

I don't see at first glance where the results of replace_many() and 
mySubst() differ. Perhaps you could give an example?

Peter

PS: Please keep the conversation on-list
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: substitution

2010-01-19 Thread Wyrmskull

Cleaned. You can remove the 'else's if you want,
because 'return' breaks the instruction flow.
Should also work with other sequence types.



def mySubst(reps,seq):
if reps:
a,b,c = seq.partition(reps[0][0])
if b:
return mySubst(reps,a) + reps[0][1] + mySubst (reps,c)
else:
return mySubst(reps[1:], seq)
else:
return seq

print mySubst( ( ('foo','bar'), ('bar','qux'), ('qux','foo') ), 'foobarquxfoo')
print mySubst( ( ('foo','bar'), ('bar','qux'), ('qux','foo') ), 
'foobarquxxxfoo')
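
For the record, a quick side-by-side check (with mySubst's partition call corrected to use its parameter) shows the two functions agree:

```python
def replace_many(s, pairs):
    # Peter Otten's version: split on the first pair, recurse with the rest
    if len(pairs):
        a, b = pairs[0]
        rest = pairs[1:]
        return b.join(replace_many(t, rest) for t in s.split(a))
    else:
        return s

def my_subst(reps, seq):
    # Wyrmskull's version, partition call fixed to use `seq`
    if reps:
        a, b, c = seq.partition(reps[0][0])
        if b:
            return my_subst(reps, a) + reps[0][1] + my_subst(reps, c)
        return my_subst(reps[1:], seq)
    return seq

reps = (('foo', 'bar'), ('bar', 'qux'), ('qux', 'foo'))
print(replace_many('foobarquxfoo', reps))  # barquxfoobar
print(my_subst(reps, 'foobarquxfoo'))      # barquxfoobar
```

Neither chains replacements: a 'bar' produced from 'foo' is never re-replaced by the ('bar', 'qux') pair.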

---
Wyrmskull 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: substitution

2010-01-19 Thread Wyrmskull
Nvm, my bad, I misunderstood the split instruction.
No difference :)

---
Wyrmskull

P.S Sorry about the P.M., I misclicked on a GUI
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python IDE for MacOS-X

2010-01-19 Thread Günther Dietrich
Jean Guillaume Pyraksos wis...@hotmail.com wrote:

What's the best one to use with beginners ?
Something with integrated syntax editor, browser of doc...
Thanks,

I started with 'EasyEclipse for Python', but soon changed to Eclipse 
with PyDev, MercurialEclipse and AnyEditTools plugins installed manually.

I can recommend the combination Eclipse/PyDev.



Best regards,

Günther
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python IDE for MacOS-X

2010-01-19 Thread Chris Colbert
On Tue, Jan 19, 2010 at 2:09 AM, Jean Guillaume Pyraksos wis...@hotmail.com
 wrote:

 What's the best one to use with beginners ?
 Something with integrated syntax editor, browser of doc...
 Thanks,

JG
 --
 http://mail.python.org/mailman/listinfo/python-list


I whole-heartedly recommend WingIDE. It's commercial and the only piece of
commercial Linux software I use, but it is worth every penny.

And support emails are answered within hours, if not minutes...they are
great guys over there.

**I am not affiliated with Wingware in any way. Just a happy customer.**
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py 3: Terminal script can't find relative path

2010-01-19 Thread Gnarlodious
OK I guess that is normal, I fixed it with this:

path = os.path.dirname(__file__) + "/Data/"
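
A side note: os.path.join handles the separator portably, so the same fix is often spelled like this (a hypothetical path stands in for __file__ here):

```python
import os

script = "/home/user/project/script.py"  # hypothetical stand-in for __file__
path = os.path.join(os.path.dirname(script), "Data")
print(path)
```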

-- Gnarlie
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Wolfram Hinderer
On 19 Jan., 16:30, Gerald Britton gerald.brit...@gmail.com wrote:
  Timer("' '.join([x for x in l])", 'l = map(str,range(10))').timeit()

 2.9967339038848877

  Timer("' '.join(x for x in l)", 'l = map(str,range(10))').timeit()

 7.2045478820800781

[...]

 2. Why should the pure list comprehension be slower than the same
 comprehension enclosed in '[...]' ?

Others have already commented on generator expression vs. list
comprehension. I'll try to shed some light on the cause of the
slowness.

For me it's
 Timer("' '.join([x for x in l])", 'l = map(str,range(10))').timeit()
0.813948839866498
 Timer("' '.join(x for x in l)", 'l = map(str,range(10))').timeit()
2.226825476422391

But wait! I'm on Python 3.1 and the setup statement has to be changed
to make this test meaningful.
 Timer("' '.join([x for x in l])", 'l = list(map(str,range(10)))').timeit()
2.5788493369966545
 Timer("' '.join(x for x in l)", 'l = list(map(str,range(10)))').timeit()
3.7431774848480472

Much smaller factor now.
But wait! If we want to test list comprehension against generator
comprehension, we should try a function that just consumes the
iterable.

 setup = """l = list(map(str,range(10)))
... def f(a):
...     for i in a: pass
... """
 Timer("f([x for x in l])", setup).timeit()
3.288511528699928
 Timer("f(x for x in l)", setup).timeit()
2.410873798206012

Oops! Iteration over generator expressions is not inherently more
expension than iteration over list comprehensions. But certainly
building a list from a generator expression is more expensive than a
list comprehension?

 Timer("[x for x in l]", 'l = list(map(str,range(10)))').timeit()
2.088602950933364
 Timer("list(x for x in l)", 'l = list(map(str,range(10)))').timeit()
3.691566805277944

Yes, list building from a generator expression *is* expensive. And
join has to do it, because it has to iterate twice over the iterable
passed in: once for calculating the memory needed for the joined
string, and once more to actually do the join (this is implementation
dependent, of course). If the iterable is a list already, the list
building is not needed.
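
That two-pass strategy can be modelled in pure Python (an ASCII-only sketch of the idea; the real CPython code works at the C level and is not this code):

```python
def join_model(sep, iterable):
    # Pass 0: materialize, as PySequence_Fast does for generator input.
    items = list(iterable)
    if not items:
        return ""
    # Pass 1: compute the exact final size up front.
    total = sum(len(s) for s in items) + len(sep) * (len(items) - 1)
    # Pass 2: fill a preallocated buffer (bytearray stands in for the
    # C-level character buffer; assumes ASCII-only strings).
    buf = bytearray(total)
    pos = 0
    for i, s in enumerate(items):
        if i:
            buf[pos:pos + len(sep)] = sep.encode()
            pos += len(sep)
        buf[pos:pos + len(s)] = s.encode()
        pos += len(s)
    return buf.decode()

print(join_model('.', (str(x) for x in range(10) if x % 2)))  # 1.3.5.7.9
```

Because of pass 0, a one-shot generator is safe to pass in: the copy, not the generator, is traversed twice.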
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py 3: How to switch application to Unicode strings?

2010-01-19 Thread Stephen Hansen
On Tue, Jan 19, 2010 at 7:50 AM, Gnarlodious gnarlodi...@gmail.com wrote:

 I am using Python 3, getting an error from SQLite:

 sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless
 you use a text_factory that can interpret 8-bit bytestrings (like
 text_factory = str). It is highly recommended that you instead just
 switch your application to Unicode strings.

 So... how do I switch to Unicode? I thought I was doing it when I put

# coding:utf-8

 at the start of my script.


All that does is mean that the script itself is encoded as utf8.

In Py3, anytime you write a plain string literal or use str(), you are using
unicode strings.

The problem appears to be that you are passing bytestrings to sqlite; things
created with b"this" or bytes(), or read from a file as bytes and not
decoded before passing it to the database.

To really help further, you should provide the line that threw that warning
and show where any variables in it come from. As for that error message, I
believe it means text_factory = bytes.
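
A minimal sketch of the fix (hypothetical table; Python 3): decode any bytes before handing them to sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table data (name text)")

raw = "espa\u00f1a".encode("utf-8")   # bytes, e.g. read from a file in binary mode
# decode to str first -- passing `raw` itself would store/trip on a bytestring
conn.execute("insert into data values (?)", (raw.decode("utf-8"),))

name = conn.execute("select name from data").fetchone()[0]
print(name)  # españa
```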

--S
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Gerald Britton
[snip]


 Yes, list building from a generator expression *is* expensive. And
 join has to do it, because it has to iterate twice over the iterable
 passed in: once for calculating the memory needed for the joined
 string, and once more to actually do the join (this is implementation
 dependent, of course). If the iterable is a list already, the list
 building is not needed.

if join has to iterate twice over the iterable, how does this work?

$ python3.1
Python 3.1.1+ (r311:74480, Nov  2 2009, 14:49:22)
[GCC 4.4.1] on linux2
Type help, copyright, credits or license for more information.
 l = map(str, (x for x in range(10) if int(x)%2))
 '.'.join(l)
'1.3.5.7.9'


If join had to iterate twice over l, it would be consumed on the first
pass.  If it works as you say then join would have to copy the
iterable on the first pass, effectively turning it into a list.
Though I didn't read through it, I would suppose that join could use a
dynamic-table approach to hold the result, starting with some
guesstimate then expanding the result buffer if and when needed.

 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: enhancing 'list'

2010-01-19 Thread Steve Holden
samwyse wrote:
 On Jan 18, 1:56 am, Terry Reedy tjre...@udel.edu wrote:
 On 1/17/2010 5:37 PM, samwyse wrote:





 Consider this a wish list.  I know I'm unlikely to get any of these in
 time for for my birthday, but still I felt the need to toss it out and
 see what happens.
 Lately, I've been slinging around a lot of lists, and there are some simple
 things I'd like to do that just aren't there.
 s.count(x[, cmp[, key]])
 - return number of i's for which s[i] == x.  'cmp' specifies a custom
 comparison function of two arguments, as in '.sort'.  'key' specifies
 a custom key extraction function of one argument.
 s.index(x[, i[, j[, cmp[, key]]]])
 - return smallest k such that s[k] == x and i <= k < j.  'cmp' and
 'key' are as above.
 s.rindex(x[, i[, j[, cmp[, key]]]])
 - return largest k such that s[k] == x and i <= k < j.  'cmp' and
 'key' are as above.
 There are two overlapping proposals here.  One is to add the .rindex
 method, which strings already have.  The other is to extend the
 optional arguments of .sort to all other methods that test for item
 equality.
 One last thing, the Python 2.6.2 spec says .count and .index only
 apply to mutable sequence types.  I see no reason why they
 (and .rindex) couldn't also apply to immutable sequences (tuples, in
 particular).
 In 3.x, tuple does have those methods, even though the doc is not clear
 (unless fixed by now).
 
 That's good to hear.  Perhaps I should have tried them directyly, but
 my 3.1 docs still echo the 2.x docs, which only show them for
 immutable sequences.

The tuple IS an immutable sequence.
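
The wished-for key-aware index/rindex can be approximated today; a sketch with hypothetical helper names (not a proposed stdlib API):

```python
def index_by(seq, x, key=lambda v: v, start=0, stop=None):
    """Smallest k in [start, stop) with key(seq[k]) == x; ValueError if absent."""
    stop = len(seq) if stop is None else stop
    for k in range(start, stop):
        if key(seq[k]) == x:
            return k
    raise ValueError("%r not found" % (x,))

def rindex_by(seq, x, key=lambda v: v):
    """Largest k with key(seq[k]) == x; ValueError if absent (max on empty)."""
    return max(k for k, v in enumerate(seq) if key(v) == x)

words = ("Apple", "banana", "APPLE", "cherry")  # works on tuples too
print(index_by(words, "apple", key=str.lower))   # 0
print(rindex_by(words, "apple", key=str.lower))  # 2
```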

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Arnaud Delobelle
Gerald Britton gerald.brit...@gmail.com writes:

 [snip]


 Yes, list building from a generator expression *is* expensive. And
 join has to do it, because it has to iterate twice over the iterable
 passed in: once for calculating the memory needed for the joined
 string, and once more to actually do the join (this is implementation
 dependent, of course). If the iterable is a list already, the list
 building is not needed.

 if join has to iterate twice over the iterable, how does this work?

 $ python3.1
 Python 3.1.1+ (r311:74480, Nov  2 2009, 14:49:22)
 [GCC 4.4.1] on linux2
 Type help, copyright, credits or license for more information.
 l = map(str, (x for x in range(10) if int(x)%2))
 '.'.join(l)
 '1.3.5.7.9'


 If join had to iterate twice over l, it would be consumed on the first
 pass.  If it works as you say then join would have to copy the
 iterable on the first pass, effectively turning it into a list.
 Though I didn't read through it, I would suppose that join could use a
 dynamic-table approach to hold the result, starting with some
 guesstimate then expanding the result buffer if and when needed.

Looking at the source (py3k):

PyObject *
PyUnicode_Join(PyObject *separator, PyObject *seq)
{
[skip declarations]

fseq = PySequence_Fast(seq, "");
if (fseq == NULL) {
return NULL;
}

[code that works out the length of the joined string then allocates
 memory, then fills it]
}

Where PySequence_Fast(seq, "") returns seq if seq is already a tuple or
a list and otherwise returns a new tuple built from the elements of seq.

So no, it doesn't guess the size of the joined string and yes, it
iterates twice over the sequence (I would have thought it should be
called an iterable) by copying it into a tuple.

-- 
Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Wolfram Hinderer
On 19 Jan., 21:06, Gerald Britton gerald.brit...@gmail.com wrote:
 [snip]



  Yes, list building from a generator expression *is* expensive. And
  join has to do it, because it has to iterate twice over the iterable
  passed in: once for calculating the memory needed for the joined
  string, and once more to actually do the join (this is implementation
  dependent, of course). If the iterable is a list already, the list
  building is not needed.

 if join has to iterate twice over the iterable, how does this work?

 $ python3.1
 Python 3.1.1+ (r311:74480, Nov  2 2009, 14:49:22)
 [GCC 4.4.1] on linux2
 Type help, copyright, credits or license for more information.

  l = map(str, (x for x in range(10) if int(x)%2))
  '.'.join(l)
 '1.3.5.7.9'

 If join had to iterate twice over l, it would be consumed on the first
 pass.

Yes. (Coincidentally, l is consumed in the first execution of the
Timer()-statement, which is why I had to add the call to list. Not to
mention the xrange example of Stephen. But all this is not linked to
what join does internally.)

 If it works as you say then join would have to copy the
 iterable on the first pass, effectively turning it into a list.

Yes, that's what I'm saying above in the first two lines of what you
quoted.
I should have added something like "AFAIK CPython does it that way".

 Though I didn't read through it, I would suppose that join could use a
 dynamic-table approach to hold the result, starting with some
 guesstimate then expanding the result buffer if and when needed.

Probably. But that's not what happens. Try
"".join("" for x in range(10**10))
and watch join eating memory.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: integer and string compare, is that correct?

2010-01-19 Thread Aahz
In article hickp8$cfg$0...@news.t-online.com,
Peter Otten  __pete...@web.de wrote:

The use cases for an order that works across types like int and str are
weak to non-existent. Implementing it was considered a mistake and has
been fixed in Python 3:

That is not precisely correct from my POV.  The primary use case for
order that works across types is sorting lists of heterogeneous data.
Many people relied on that feature; however, experience showed that it
caused more bugs than it fixed.  But that doesn't obviate the use of the
feature.
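
Under Python 3 such a heterogeneous sort raises TypeError; one common workaround is an explicit key (a sketch only; the cross-type order it produces is arbitrary, and numbers of the same type compare as strings within a group):

```python
data = [3, "apple", 1.5, None, "banana", 2]

# (type name, string form): groups values by type, then orders within the group
mixed_key = lambda v: (type(v).__name__, str(v))
print(sorted(data, key=mixed_key))
# [None, 1.5, 2, 3, 'apple', 'banana']
```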
-- 
Aahz (a...@pythoncraft.com)   * http://www.pythoncraft.com/

If you think it's expensive to hire a professional to do the job, wait
until you hire an amateur.  --Red Adair
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Gerald Britton
That's surprising. I wouldn't implement it that way at all.  I'd use a
dynamically-expanding buffer as I suggested.  That way you get a
single pass and don't have to calculate anything before you begin.  In
the best case, you'd use half the memory (the result just fits in the
buffer after its last expansion and no saved tuple).  In the worst
case, the memory use is about the same (you just expanded the buffer
using a 2x expansion rule then you hit the last item).

Still I suppose the author thought of that approach and rejected it
for reasons I can't yet see.

On Tue, Jan 19, 2010 at 4:01 PM, Arnaud Delobelle
arno...@googlemail.com wrote:
 Gerald Britton gerald.brit...@gmail.com writes:

 [snip]


 Yes, list building from a generator expression *is* expensive. And
 join has to do it, because it has to iterate twice over the iterable
 passed in: once for calculating the memory needed for the joined
 string, and once more to actually do the join (this is implementation
 dependent, of course). If the iterable is a list already, the list
 building is not needed.

 if join has to iterate twice over the iterable, how does this work?

 $ python3.1
 Python 3.1.1+ (r311:74480, Nov  2 2009, 14:49:22)
 [GCC 4.4.1] on linux2
 Type help, copyright, credits or license for more information.
 l = map(str, (x for x in range(10) if int(x)%2))
 '.'.join(l)
 '1.3.5.7.9'


 If join had to iterate twice over l, it would be consumed on the first
 pass.  If it works as you say then join would have to copy the
 iterable on the first pass, effectively turning it into a list.
 Though I didn't read through it, I would suppose that join could use a
 dynamic-table approach to hold the result, starting with some
 guesstimate then expanding the result buffer if and when needed.

 Looking at the source (py3k):

 PyObject *
 PyUnicode_Join(PyObject *separator, PyObject *seq)
 {
    [skip declarations]

     fseq = PySequence_Fast(seq, "");
    if (fseq == NULL) {
        return NULL;
    }

    [code that works out the length of the joined string then allocates
     memory, then fills it]
 }

 Where PySequence_Fast(seq, "") returns seq if seq is already a tuple or
 a list and otherwise returns a new tuple built from the elements of seq.

 So no, it doesn't guess the size of the joined string and yes, it
 iterates twice over the sequence (I would have thought it should be
 called an iterable) by copying it into a tuple.

 --
 Arnaud
 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Recommended new way for config files

2010-01-19 Thread Jonathan Gardner
On Jan 8, 2:54 pm, Ben Finney ben+pyt...@benfinney.id.au wrote:
 Chris Rebert c...@rebertia.com writes:
  JSON is one option:http://docs.python.org/library/json.html

 YAML URL:http://en.wikipedia.org/wiki/YAML is another contender.
 Compared to JSON, it is yet to gain as much mind-share, but even more
 human-friendly and no less expressive.

 Here are some discussions of YAML that can help you evaluate it:

     URL:http://www.ibm.com/developerworks/xml/library/x-matters23.html
     URL:http://www.codinghorror.com/blog/archives/001114.html
     
 URL:http://webignition.net/articles/xml-vs-yaml-vs-json-a-study-to-find-a...



YAML is far too complex to be useful. I played with it a while and
found the syntax even more confusing than XML, which is quite a feat.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Updating an OptionMenu every time the text file it reads from is updated (Tkinter)

2010-01-19 Thread Dr. Benjamin David Clarke
On Jan 19, 7:00 am, Peter Otten __pete...@web.de wrote:
 Dr. Benjamin David Clarke wrote:

  I currently have a program that reads in values for an OptionMenu from
  a text file. I also have an option to add a line to that text file
  which corresponds to a new value for that OptionMenu. How can I make
  that OptionMenu update its values based on that text file without
  restarting the program? In other words, every time I add a value to
  the text file, I want the OptionMenu to immediately update to take
  note of this change. I'll provide code if needed.

 Inferred from looking into the Tkinter source code:

 # python 2.6
 import Tkinter as tk

 root = tk.Tk()

 var  = tk.StringVar()
 var.set("One")

 optionmenu = tk.OptionMenu(root, var, "One", "Two", "Three")
 optionmenu.grid(row=0, column=1)

 def add_option():
     value = entry_add.get()
      menu = optionmenu["menu"]
     variable = var
     command = None # what you passed as command argument to optionmenu
     menu.add_command(label=value,
                      command=tk._setit(variable, value, command))

 label_show = tk.Label(root, text="current value")
 label_show.grid(row=1, column=0)
 entry_show = tk.Entry(root, textvariable=var)
 entry_show.grid(row=1, column=1)

 label_add = tk.Label(root, text="new option")
 label_add.grid(row=2, column=0)
 entry_add = tk.Entry(root)
 entry_add.grid(row=2, column=1)

 button_add = tk.Button(root, text="add option",
                    command=add_option)
 button_add.grid(row=2, column=2)

 root.mainloop()

 Peter

The problem turned out to be fairly simple. I just had to get a little
creative with nesting my functions and add or remove the option from
the OptionMenu right after adding/removing to the text file. Thanks
for pointing me in the right direction.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Raymond Hettinger
[Wolfram Hinderer]
 Yes, list building from a generator expression *is* expensive. And
 join has to do it, because it has to iterate twice over the iterable
 passed in: once for calculating the memory needed for the joined
 string, and once more to actually do the join (this is implementation
 dependent, of course). If the iterable is a list already, the list
 building is not needed.

Good analysis.  That is exactly what is going on under the hood.


Raymond
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Arnaud Delobelle
Gerald Britton gerald.brit...@gmail.com writes:

 That's surprising. I wouldn't implement it that way at all.  I'd use a
 dynamically-expanding buffer as I suggested.  That way you get a
 single pass and don't have to calculate anything before you begin.  In
 the best case, you'd use half the memory (the result just fits in the
 buffer after its last expansion and no saved tuple).  In the worst
 case, the memory use is about the same (you just expanded the buffer
 using a 2x expansion rule then you hit the last item).

 Still I suppose the author thought of that approach and rejected it
 for reasons I can't yet see.

I don't know the reasons, but I'm guessing they could be historic.
Before Python had iterators, str.join would mostly have been only given lists
and tuples as arguments, in which case the current approach seems to be
the most appropriate.  Later, when things like generator functions and
generator expressions were introduced, perhaps str.join wasn't optimized
to accomodate them.

-- 
Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: can i examine the svn rev used to build a python 3 executable?

2010-01-19 Thread Martin v. Loewis
   never mind.  just discovered that while python3 -V won't do it,
 executing it gives me:
 
 $ python3
 Python 3.2a0 (py3k:77609, Jan 19 2010, 04:10:16)
 ...
 
 and it's that 77609 rev number i was after.

If you want that in a command line fashion, do

python -c 'import sys;print(sys.subversion[2])'

Regards,
Martin

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: thread return code

2010-01-19 Thread Peter
On Jan 19, 9:25 pm, Alf P. Steinbach al...@start.no wrote:
 * Rajat:

  Hi,

  I'm using threading module in Python 2.6.4. I'm using thread's join()
  method.

  On the new thread I'm running a function which returns a code at the
  end. Is there a way I access that code in the parent thread after
  thread finishes? Simply, does join() could get me that code?

 join() always returns None.

 But you can store the code in the thread object, and then access it after the
 join().

 Cheers  hth.,

 - Alf

The typical way to communicate with a thread is via a queue or pipe.
You can do what Alf suggests or you could create a queue (or pipe),
pass it to the thread as an argument and have the thread put the
return value into the queue as the last action prior to exit. After
the join() just read the results from the queue.

Using a queue or pipe is just a suggestion, the multiprocessing module
offers numerous ways to communicate between tasks, have a read and
pick whatever mechanism seems appropriate for your situation.
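
Peter's suggestion in code (a minimal sketch; the module is named Queue in Python 2, queue in Python 3):

```python
import threading
import queue  # `import Queue` in Python 2

def worker(q):
    # ... do the real work, then put the "return code" on the queue last
    q.put(42)

q = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()
t.join()          # join() itself always returns None...
result = q.get()  # ...the value travels back via the queue
print(result)     # 42
```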

Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Pydev 1.5.4 Released

2010-01-19 Thread Fabio Zadrozny
Hi All,

Pydev 1.5.4 has been released

Details on Pydev: http://pydev.org
Details on its development: http://pydev.blogspot.com

Release Highlights:
---

* Actions:
  o Go to matching bracket (Ctrl + Shift + P)
  o Copy the qualified name of the current context to the clipboard.
  o Ctrl + Shift + T keybinding is resolved to show globals in any
context (note: a conflict may occur if JDT is present -- it can be fixed at
the keys preferences if wanted).
  o Ctrl + 2 shows a dialog with the list of available options.
  o Wrap paragraph is available in the source menu.
  o Globals browser will start with the current word if no selection is
available (if possible).
* Templates:
  o Scripting engine can be used to add template variables to Pydev.
  o New template variables for next, previous class or method, current
module, etc.
  o New templates for super and super_raw.
  o print is now aware of Python 3.x or 2.x
* Code analysis and code completion:
  o Fixed problem when getting builtins with multiple Python
interpreters configured.
  o If there's a hasattr(obj, 'attr'), 'attr' will be considered in the
code completion and code analysis.
  o Fixed issue where analysis was only done once when set to only
analyze open editor.
  o Proper namespace leakage semantic in list comprehension.
  o Better calltips in IronPython.
  o Support for code-completion in Mac OS (interpreter was crashing if
_CF was not imported in the main thread).
* Grammar:
  o Fixed issues with 'with' being used as name or keyword in 2.5.
  o Fixed error when using nested list comprehension.
  o Proper 'as' and 'with' handling in 2.4 and 2.5.
  o 'with' statement accepts multiple items in python 3.0.
* Improved hover:
  o Showing the actual contents of method or class when hovering.
  o Link to the definition of the token being hovered (if class or
method).
* Others:
  o Completions for [{( are no longer duplicated when on block mode.
  o String substitution can now be configured in the interpreter.
  o Fixed synchronization issue that could make Pydev halt.
  o Fixed problem when editing with collapsed code.
  o Import wasn't found for auto-import location if it import started
with 'import' (worked with 'from')
  o Fixed interactive console problem with help() function in Python 3.1
  o NullPointerException fix in compare editor.


What is Pydev?
---

Pydev is a plugin that enables users to use Eclipse for Python, Jython and
IronPython development -- making Eclipse a first class Python IDE -- It
comes with many goodies such as code completion, syntax highlighting, syntax
analysis, refactor, debug and many others.


Cheers,

-- 
Fabio Zadrozny
--
Software Developer

Aptana
http://aptana.com/python

Pydev - Python Development Environment for Eclipse
http://pydev.org
http://pydev.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Steven D'Aprano
On Tue, 19 Jan 2010 11:26:43 -0500, Gerald Britton wrote:

 Interestingly, I scaled it up to a million list items with more or less
 the same results.

A million items is not a lot of data. Depending on the size of each 
object, that might be as little as 4 MB of data:

 L = ['' for _ in xrange(10**6)]
 sys.getsizeof(L)
4348732

Try generating a billion items, or even a hundred million, and see how 
you go.

This is a good lesson in the dangers of premature optimization. I can't 
think how many times I've written code using a generator expression 
passed to join, thinking that would surely be faster than using a list 
comprehension (save building a temporary list first).



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Steven D'Aprano
On Tue, 19 Jan 2010 16:20:42 -0500, Gerald Britton wrote:

 That's surprising. I wouldn't implement it that way at all.  I'd use a
 dynamically-expanding buffer as I suggested.  That way you get a single
 pass and don't have to calculate anything before you begin.  In the best
 case, you'd use half the memory (the result just fits in the buffer
 after its last expansion and no saved tuple).  In the worst case, the
 memory use is about the same (you just expanded the buffer using a 2x
 expansion rule then you hit the last item).

In the worst case, you waste 50% of the memory allocated. And because 
strings are immutable (unlike lists and dicts, which also use this 
approach), you can never use that memory until the string is garbage 
collected.

In the current approach, join produces a temporary sequence, but it 
doesn't last very long. With your suggested approach, you could end up 
with a large number of long-lasting strings up to twice the size 
necessary. Since join is particularly useful for building large strings, 
this could be a significant memory pessimation.

The obvious fix is for join to shrink the buffer once it has finished 
building the string, but the success of that may be operating system 
dependent. I don't know -- it sounds like a recipe for memory 
fragmentation to me. And even if it can be done, reclaiming the memory 
will take time, potentially more time than the current approach.

Still, it's probably worth trying, and seeing if it speeds join up.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py 3: How to switch application to Unicode strings?

2010-01-19 Thread Mark Tolonen


Stephen Hansen apt.shan...@gmail.com wrote in message 
news:7a9c25c21001191156j46a7fdadt58b728477b85e...@mail.gmail.com...
On Tue, Jan 19, 2010 at 7:50 AM, Gnarlodious gnarlodi...@gmail.com 
wrote:



I am using Python 3, getting an error from SQLite:

sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless
you use a text_factory that can interpret 8-bit bytestrings (like
text_factory = str). It is highly recommended that you instead just
switch your application to Unicode strings.

So... how do I switch to Unicode? I thought I was doing it when I put


# coding:utf-8


at the start of my script.



All that does is mean that the script itself is encoded as utf8.



Actually it means that the user has declared that the source file is encoded 
in utf-8.  A common source of errors is that the source file is *not* 
encoded in utf-8.  Make sure to save the source file in the encoding 
declared.


-Mark 



--
http://mail.python.org/mailman/listinfo/python-list


Re: Py 3: How to switch application to Unicode strings?

2010-01-19 Thread Gnarlodious
Well, Python 3 is supposed to be all Unicode by default. I shouldn't
even need to say
# coding:UTF-8

And, the file is saved as Unicode.

There are many mentions of this error found by Google, but none seem
to clearly say what the problem is or how to fix it.

FYI, the problem line says:

cursor.execute('insert into Data values
(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)', frameTuple)

and one of the strings in the tuple contains a character like 'ñ'.
I have a version of the SQLite editor that works as expected in a
browser, I don't know why.

-- Gnarlie
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Alf P. Steinbach

* Steven D'Aprano:

On Tue, 19 Jan 2010 16:20:42 -0500, Gerald Britton wrote:


That's surprising. I wouldn't implement it that way at all.  I'd use a
dynamically-expanding buffer as I suggested.  That way you get a single
pass and don't have to calculate anything before you begin.  In the best
case, you'd use half the memory (the result just fits in the buffer
after its last expansion and no saved tuple).  In the worst case, the
memory use is about the same (you just expanded the buffer using a 2x
expansion rule then you hit the last item).


In the worst case, you waste 50% of the memory allocated.


Yes. That is a good argument for not doing the expanding buffer thing. But such 
buffers may be generally present anyway, resulting from optimization of +.


Using CPython 2.6.4 in Windows XP:


>>> def elapsed_time_for( f, n_calls ):
...     return timeit.timeit( f, number = n_calls )
...
>>> def appender( n ):
...     def makestr( n = n ):
...         s = ""
...         for i in xrange( n ):
...             s = s + "A"
...         return s
...     return makestr
...
>>> appender( 1000 )() == 1000*"A"
True

>>> for i in xrange( 10 ):
...     print( elapsed_time_for( appender( 1*(i+1) ), 100 ) )
...
0.782596670811
1.37728454314
2.10189898437
2.76442173517
3.34536707878
4.08251830889
4.79620119317
5.42201844089
6.12892811796
6.84236460221
>>> _


Here the linear increase of times indicates that + is being optimized using 
an expanding buffer for the string. If only the minimal space were allocated each 
time then one would expect O(n^2) behavior instead of the apparently O(n) behavior above. 
An example of that O(n^2) behavior is given below.



>>> def exact_appender( n ):
...     def makestr( n = n ):
...         s = ""
...         for i in xrange( n ):
...             new_s = s + "A"
...             s = new_s
...         return s
...     return makestr
...
>>> exact_appender( 1000 )() == 1000*"A"
True
>>> for i in xrange( 10 ):
...     print( elapsed_time_for( exact_appender( 1*(i+1) ), 100 ) )
...
3.28094241027
9.30584501661
19.5319170453
33.6563767183
52.3327800042
66.5475022663
84.8809736992
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 2, in elapsed_time_for
  File "C:\Program Files\cpython\python26\lib\timeit.py", line 227, in timeit
    return Timer(stmt, setup, timer).timeit(number)
  File "C:\Program Files\cpython\python26\lib\timeit.py", line 193, in timeit
    timing = self.inner(it, self.timer)
  File "C:\Program Files\cpython\python26\lib\timeit.py", line 99, in inner
    _func()
  File "<stdin>", line 5, in makestr
KeyboardInterrupt
>>> _


So, given that apparently the simple '+' in the first example is optimized using 
an expanding buffer, which then hangs around, it's not clear to me that the 
space optimization in 'join' really helps. It may be (but isn't necessarily) 
like shoveling snow in a snowstorm. Then the effort/cost could be for naught.



And because 
strings are immutable (unlike lists and dicts, which also use this 
approach), you can never use that memory until the string is garbage 
collected.


I think that the simple '+', with the apparent optimization shown in the first 
example above, can use that space. I know for a fact that when you control a 
string implementation then it can do that (since I've implemented that). But I 
don't know for a fact that it's practical to do so in Python. In order to use 
the space the implementation must know that there's only one reference to the 
string. And I don't know whether that information is readily available in a 
CPython implementation (say), although I suspect that it is.



Cheers,

- Alf


Re: Py 3: How to switch application to Unicode strings?

2010-01-19 Thread Stephen Hansen
On Tue, Jan 19, 2010 at 8:16 PM, Gnarlodious gnarlodi...@gmail.com wrote:

 Well, Python 3 is supposed to be all Unicode by default. I shouldn't
 even need to say
 # coding:UTF-8

 And, the file is saved as Unicode.

 There are many mentions of this error found by Google, but none seen
 to clearly say what the problem is or how to fix it.

 FYI, the problem line says:

 cursor.execute('insert into Data values
 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)', frameTuple)

 and one of the strings in the tuple contains a character like 'ñ'.
 I have a version of the SQLite editor that works as expected in a
 browser, I don't know why.


But is it a -unicode- string, or a -byte- string? Print it with repr(). By
that error, it seems like it's a byte string. So you read it or got it from a
source which provided it to you not as unicode. In that case, find out what
the encoding is -- and decode it.

after = before.decode("utf-8")

Python 3 is not 'all unicode by default'. Python 3 has a firm line in
the sand. Everything is either explicitly a byte string (bytes) or
explicitly a unicode string (str).
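A minimal Python 3 sketch of the decode-then-insert advice (the table layout and the sample bytes here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table Data (name text)")

raw = b"Espa\xc3\xb1a"        # bytes as they might arrive from a file or socket
name = raw.decode("utf-8")    # decode to a unicode str before inserting
conn.execute("insert into Data values (?)", (name,))

row = conn.execute("select name from Data").fetchone()
print(row[0])                 # España
```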

--S


Re: Python IDE for MacOS-X

2010-01-19 Thread Tim Arnold
Jean Guillaume Pyraksos wis...@hotmail.com wrote in message 
news:wissme-9248e1.08090319012...@news.free.fr...
 What's the best one to use with beginners ?
 Something with integrated syntax editor, browser of doc...
 Thanks,

JG

eclipse + pydev works well for me.
--Tim Arnold




Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Steven D'Aprano
On Wed, 20 Jan 2010 05:25:22 +0100, Alf P. Steinbach wrote:

 * Steven D'Aprano:
 On Tue, 19 Jan 2010 16:20:42 -0500, Gerald Britton wrote:
 
 That's surprising. I wouldn't implement it that way at all.  I'd use a
 dynamically-expanding buffer as I suggested.  That way you get a
 single pass and don't have to calculate anything before you begin.  In
 the best case, you'd use half the memory (the result just fits in the
 buffer after its last expansion and no saved tuple).  In the worst
 case, the memory use is about the same (you just expanded the buffer
 using a 2x expansion rule then you hit the last item).
 
 In the worst case, you waste 50% of the memory allocated.
 
 Yes. That is a good argument for not doing the expanding buffer thing.
 But such buffers may be generally present anyway, resulting from
 optimization of +.


As near as I can determine, the CPython optimization you are referring to 
doesn't use the double the buffer when needed technique. It operates on 
a completely different strategy. As near as I can tell (as a non-C 
speaker), it re-sizes the string in place to the size actually needed, 
thus reducing the amount of copying needed.

The optimization patch is here:

http://bugs.python.org/issue980695

and some history (including Guido's opposition to the patch) here:

http://mail.python.org/pipermail/python-dev/2004-August/046686.html

Nevertheless, the patch has been part of CPython since 2.4, but it must be 
considered an implementation-specific optimization. It doesn't apply to 
Jython, and likely not to other implementations.



 Using CPython 2.6.4 in Windows XP:
[snip time measurements]
 Here the linear increase of times indicate that the + is being
 optimized using an expanding buffer for the string.

Be careful of jumping to the conclusion from timings that a certain 
algorithm is used. All you can really tell is that, whatever the 
algorithm is, it has such-and-such big-oh behaviour.

 If only the minimal
 space was allocated each time then one would expect O(n^2) behavior
 instead of the apparently O(n) above.

I don't follow that reasoning.


 Example of that O(n^2) behavior given below.

The example shown demonstrates that the + optimization only applies in 
certain specific cases. In fact, it only applies to appending:

s += t
s = s + t

but not prepending or concatenation with multiple strings:

s = t + s
s += t + u


However, keep in mind that the CPython keyhole optimizer will take 
something like this:

s += x + y

and turn it into this:

s += xy

which the concatenation optimization does apply to. Optimizations make it 
hard to reason about the behaviour of algorithms!
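The append/prepend asymmetry described above is easy to observe. A rough sketch (CPython-specific; timings vary by machine, so the printed numbers are only indicative):

```python
import timeit

n = 10000

# Appending: eligible for CPython's in-place resize optimization.
t_append = timeit.timeit(
    "s = ''\nfor _ in range(%d):\n    s = s + 'A'" % n, number=5)

# Prepending: not eligible, so each iteration copies the whole string.
t_prepend = timeit.timeit(
    "s = ''\nfor _ in range(%d):\n    s = 'A' + s" % n, number=5)

print("append : %.4fs" % t_append)
print("prepend: %.4fs" % t_prepend)
```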


-- 
Steven


Re: What is a list compression in Python?

2010-01-19 Thread Rainer Grimm
Hallo,
you can also look at list comprehension as syntactic sugar for the
functions map and filter. The two functions from the functional world
can be expressed in a comprehensive way with list comprehension.
>>> [x**2 for x in range(10)] == map(lambda x: x*x, range(10))
True
>>> [x for x in range(10) if x%2 == 0] == filter(lambda x: x%2 == 0, range(10))
True

Greetings
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: setattr() oddness

2010-01-19 Thread Dieter Maurer
Steven D'Aprano ste...@remove.this.cybersource.com.au writes on 18 Jan 2010 
06:47:59 GMT:
 On Mon, 18 Jan 2010 07:25:58 +0100, Dieter Maurer wrote:
 
  Lie Ryan lie.1...@gmail.com writes on Sat, 16 Jan 2010 19:37:29 +1100:
  On 01/16/10 10:10, Sean DiZazzo wrote:
   Interesting.  I can understand the would take time argument, but I
   don't see any legitimate use case for an attribute only accessible
   via getattr().  Well, at least not a pythonic use case.
  
  mostly for people (ab)using attributes instead of dictionary.
  
  Here is one use case:
  
   A query application. Queries are described by complex query objects.
   For efficiency reasons, query results should be cached. For this, it is
   not unnatural to use query objects as cache keys. Then, query objects
   must not get changed in an uncontrolled way. I use __setattr__ to
   control access to the objects.
 
 
 (1) Wouldn't it be more natural to store these query keys in a list or 
 dictionary rather than as attributes on an object?
 
 e.g. instead of:
 
 cache.__setattr__('complex query object', value)
 
 use:
 
 cache['complex query object'] = value

Few will use cache.__setattr__(...); most will write cache.attr = ..., which
is nicer than cache['attr'] = ...

Moreover, it is not the cache but the query of which I want to protect
modification. My cache indeed uses cache[query_object] = 
But I want to prevent query_object from being changed after a potential
caching.



 (2) How does __setattr__ let you control access to the object? If a user 
 wants to modify the cache, and they know the complex query object, what's 
 stopping them from using __setattr__ too?

In my specific case, __setattr__ prevents all modifications via attribute
assignment. The class uses __dict__ access to set attributes when
it knows it is still safe.

Of course, this is no real protection against attackers (which could
use __dict__ as well). It only protects against accidental change
of query objects.
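That pattern can be sketched in a few lines (the class and attribute names here are invented for illustration; as noted above, this only guards against accidental mutation, not attackers):

```python
class Query:
    """Block attribute rebinding once construction finishes."""

    def __init__(self, **fields):
        # Write via __dict__ while we know mutation is still safe,
        # bypassing our own __setattr__.
        self.__dict__.update(fields)
        self.__dict__["_frozen"] = True

    def __setattr__(self, name, value):
        if self.__dict__.get("_frozen"):
            raise AttributeError("query object is frozen: %s" % name)
        self.__dict__[name] = value


q = Query(text="spam", limit=10)
print(q.limit)        # 10
try:
    q.limit = 20      # raises AttributeError
except AttributeError as exc:
    print(exc)
```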


Meanwhile, I remembered a more important use case for __setattr__:
providing transparent persistence. The ZODB (Zope Object DataBase)
customizes __setattr__ in order to intercept object modifications
and register automatically that the change needs to be persisted at
the next transaction commit.
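The mechanism can be caricatured in a few lines. The `_p_changed` name mirrors the real ZODB attribute, but everything else here is a toy sketch, not ZODB's actual implementation:

```python
class Persistent:
    """Toy sketch: flag the object as dirty on every attribute write."""

    def __init__(self):
        self.__dict__["_p_changed"] = False   # bypass __setattr__

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        self.__dict__["_p_changed"] = True    # register for next commit


obj = Persistent()
print(obj._p_changed)   # False
obj.x = 42
print(obj._p_changed)   # True
```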


Dieter


Re: Py 3: How to switch application to Unicode strings?

2010-01-19 Thread Arnaud Delobelle
Gnarlodious gnarlodi...@gmail.com writes:

 Well, Python 3 is supposed to be all Unicode by default. I shouldn't
 even need to say
 # coding:UTF-8

 And, the file is saved as Unicode.


When a file is saved, shouldn't it be in a specific encoding?  I don't
see how you can save your file 'as unicode'.  You should save your file
with the UTF-8 encoding.

HTH

-- 
Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is a list compression in Python?

2010-01-19 Thread Stephen Hansen
On Tue, Jan 19, 2010 at 10:39 PM, Rainer Grimm r.gr...@science-computing.de
 wrote:

 Hallo,
 you can also look at list comprehension as syntactic sugar for the
 functions map and filter. The two functions from the functional world
 can be expressed in a comprehensive way with list comprehension.
  [x**2 for x in range(10) ] == map ( lambda x: x*x, range(10))
 True
  [ x for x in range(10) if x%2 == 0 ] == filter ( lambda x: x%2 == 0 ,
 range(10))
 True


I really don't think you can call comprehensions mere syntactic sugar for map
and filter, as there are measurable performance differences between the two.
For something to be syntactic sugar, it must be essentially equivalent. A list
comprehension is, IIUC, real syntactic sugar around a for loop: in a real way,
it simply creates a for-loop with special syntax.

A list comprehension can be /equivalent in effect/ to the functions map and
filter, but it is not syntactic sugar. It is generating entirely different
code-paths with entirely different performance characteristics to achieve
the same goals. But that's not sugar.

Syntactic sugar is a sweet, cute way to express a more complex thought in a
simpler or more concise form. But the -effect- and -result- of both thoughts
are the same: if you do it manually, or with the sugar, they're essentially
equivalent. That's why it's sugar... it's just something to make the
experience sweeter; it doesn't -really- add any value beyond that.

Consider your tests:

>>> Timer("[x**2 for x in range(10)]").timeit()
3.4986340999603271
>>> Timer("map(lambda x: x*x, range(10))").timeit()
4.5014309883117676
>>> Timer("[x for x in range(10) if x%2 == 0]").timeit()
3.3268649578094482
>>> Timer("filter(lambda x: x%2 == 0, range(10))").timeit()
5.3649170398712158

The list comprehension version performs distinctly better in each case.

The end result of the two are the same, but one is not sugar for the other.
A list comprehension is in essence a compact way of writing a for loop that
builds a list. A map (or filter) operation is a functional approach to build
a list: the difference is that the latter requires you to call a function a
lot, and function calls in Python are not cheap.

--S


Re: What is a list compression in Python?

2010-01-19 Thread Stephen Hansen
On Tue, Jan 19, 2010 at 11:32 PM, Stephen Hansen apt.shan...@gmail.comwrote:

 I really don't think you can call comprehensions as mere syntactic sugar,


 Err, I misspoke.

I don't really think you can call comprehensions mere syntactic sugar /for
map and filter/.

It IS mere syntactic sugar for a for loop.

But not for the functional equivalent.


--S


Best way to convert sequence of bytes to long integer

2010-01-19 Thread Steven D'Aprano
I have a byte string (Python 2.x string), e.g.:

s = "g%$f yg\n1\05"
assert len(s) == 10

I wish to convert it to a long integer, treating it as base-256. 
Currently I'm using:

def makelong(s):
n = 0
for c in s:
n *= 256
n += ord(c)
return n


which gives:

>>> makelong(s)
487088900085839492165893L


Is this the best way, or have I missed some standard library function?
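For comparison, the same loop can be cross-checked in Python 3 terms, where bytes iterate as ints and the conversion exists as a built-in (int.from_bytes); a sketch:

```python
def makelong(data):
    # Same loop as above, on a Python 3 bytes object
    # (iterating bytes yields ints, so no ord() is needed).
    n = 0
    for b in data:
        n = n * 256 + b
    return n

s = b"g%$f yg\n1\x05"
assert len(s) == 10

print(makelong(s))               # 487088900085839492165893
print(int.from_bytes(s, "big"))  # same value, via the built-in
```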


Thanks in advance,


-- 
Steven


Re: Performance of lists vs. list comprehensions

2010-01-19 Thread Stefan Behnel
Steven D'Aprano, 20.01.2010 07:12:
 On Wed, 20 Jan 2010 05:25:22 +0100, Alf P. Steinbach wrote:
 That is a good argument for not doing the expanding buffer thing.
 But such buffers may be generally present anyway, resulting from
 optimization of +.
 
 As near as I can determine, the CPython optimization you are referring to 
 doesn't use the double the buffer when needed technique. It operates on 
 a completely different strategy. As near as I can tell (as a non-C 
 speaker), it re-sizes the string in place to the size actually needed, 
 thus reducing the amount of copying needed.
 
 Using CPython 2.6.4 in Windows XP:
 [snip time measurements]
 Here the linear increase of times indicate that the + is being
 optimized using an expanding buffer for the string.
 
 Be careful of jumping to the conclusion from timings that a certain 
 algorithm is used. All you can really tell is that, whatever the 
 algorithm is, it has such-and-such big-oh behaviour.

Which is particularly tricky here since the algorithms depends more on the
OS than on the code in CPython. The better timings come from the fact that
the OS does *not* need to copy the buffer on each iteration, but does
smarter things when asked to enlarge the buffer. If you ran the benchmark
on an OS that *did* copy the buffer each time, the runtime would really be
quadratic.

BTW, I think it would actually be worth trying to apply the same approach
to str.join() if the argument is not a sequence (obviously followed by a
benchmark on different platforms).

Stefan


Re: Py 3: How to switch application to Unicode strings?

2010-01-19 Thread Mark Tolonen


Gnarlodious gnarlodi...@gmail.com wrote in message 
news:646ab38b-0710-4d31-b9e1-8a6ee7bfa...@21g2000yqj.googlegroups.com...

Well, Python 3 is supposed to be all Unicode by default. I shouldn't
even need to say
# coding:UTF-8


Yes, in Python 3, an absence of a 'coding' line assumes UTF-8.


And, the file is saved as Unicode.


There is no such thing as "saved as Unicode".  Unicode is not an encoding. 
For example, 'ñ' is the Unicode codepoint 241.  This can be stored in a file 
in a number of ways.  UTF-8 is the two bytes 0xc3 0xB1.  UTF-16LE is 0xF1 
0x00.  UTF-16BE is 0x00 0xF1.  latin-1 is the single byte 0xF1.  If your 
editor saves your file in the encoding latin-1, and you don't use a coding 
line to declare it, Python 3 will throw an error if it finds a non-UTF-8 
byte sequence in the file.
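The byte sequences listed above are easy to verify; a quick Python 3 sketch:

```python
ch = "\u00f1"   # 'ñ', Unicode codepoint 241

print(ch.encode("utf-8"))      # b'\xc3\xb1'
print(ch.encode("utf-16-le"))  # b'\xf1\x00'
print(ch.encode("utf-16-be"))  # b'\x00\xf1'
print(ch.encode("latin-1"))    # b'\xf1'
```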



There are many mentions of this error found by Google, but none seen
to clearly say what the problem is or how to fix it.



FYI, the problem line says:



cursor.execute('insert into Data values
(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)', frameTuple)


Is frameTuple a byte string or a Unicode string?  Check with print(repr(frameTuple)).


and one of the strings in the tuple contains a character like 'ñ'.
I have a version of the SQLite editor that works as expected in a
browser, I don't know why.


Post the simplest, complete source code that exhibits the problem.

-Mark




[issue7472] email.encoders.encode_7or8bit(): typo iso-2202. iso-2022 is correct.

2010-01-19 Thread Yukihiro Nakadaira

Yukihiro Nakadaira yukihiro.nakada...@gmail.com added the comment:

 In other words, I think the correct thing to do is to delete that if test.

I think so too.

 Do you have a case where the code produces incorrect behavior that your patch 
 turns into correct behavior?

No, I don't.  I just found a typo.

The code for iso-2022 was added by issue #804885.  But I don't know why it 
was requested.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7472
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7738] IDLE hang when tooltip comes up in Linux

2010-01-19 Thread Kent Yip

New submission from Kent Yip yes...@gmail.com:

IDLE will hang when a tooltip shows in a Linux system (Ubuntu).

do this:

t = (1,2,3)
len(t)

It will hang after the closing ')': when you press Return nothing happens, 
and any further keys you press won't show up.

However, you can work around this hangup by clicking on the IDLE menus at the 
top, or by clicking on a different application on your desktop; when you 
return to IDLE, pressing Enter will work again.

Tested with Python 2.5, Python 2.6, and Python 3.0 on a Linux machine.

However, on Windows Vista with Python 3.1 the tooltip hangup doesn't occur. Only 
on Linux does it behave like that.

--
components: IDLE
messages: 98048
nosy: yesk13
severity: normal
status: open
title: IDLE hang when tooltip comes up in Linux
type: behavior
versions: Python 2.5, Python 2.6, Python 3.1

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7738
___



[issue7739] time.strftime may hung while trying to open /etc/localtime but does not release GIL

2010-01-19 Thread Doron Tal

New submission from Doron Tal doron.tal.l...@gmail.com:

I've encountered a hang of a Python process for more than a second. It appears 
that the stall happens due to a time.strftime call, which internally opens a file 
('/etc/localtime'). I think it would be best if the GIL were released to 
allow other threads to continue working.

Snippet from strace on:
---
import time
time.strftime('%Z')
---
Note the line:
open(/etc/localtime, O_RDONLY)= 4


=== strace output starts here ===
stat(/usr/local/lib/python2.6/lib-old/time, 0x7fff871deff0) = -1 ENOENT (No 
such file or directory)
open(/usr/local/lib/python2.6/lib-old/time.so, O_RDONLY) = -1 ENOENT (No such 
file or directory)
open(/usr/local/lib/python2.6/lib-old/timemodule.so, O_RDONLY) = -1 ENOENT 
(No such file or directory)
open(/usr/local/lib/python2.6/lib-old/time.py, O_RDONLY) = -1 ENOENT (No such 
file or directory)
open(/usr/local/lib/python2.6/lib-old/time.pyc, O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat(/usr/local/lib/python2.6/lib-dynload/time, 0x7fff871deff0) = -1 ENOENT 
(No such file or directory)
open(/usr/local/lib/python2.6/lib-dynload/time.so, O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0755, st_size=46995, ...}) = 0
futex(0x393b4030ec, FUTEX_WAKE_PRIVATE, 2147483647) = 0
open(/usr/local/lib/python2.6/lib-dynload/time.so, O_RDONLY) = 4
read(4, \177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0\0\1\0\0\\30\0\0\0\0\0\0..., 
832) = 832
fstat(4, {st_mode=S_IFREG|0755, st_size=46995, ...}) = 0
mmap(NULL, 2115944, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 
0x2adf26fba000
mprotect(0x2adf26fbd000, 2097152, PROT_NONE) = 0
mmap(0x2adf271bd000, 8192, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x3000) = 0x2adf271bd000
close(4)= 0
time(NULL)  = 1263890749
open(/etc/localtime, O_RDONLY)= 4
fstat(4, {st_mode=S_IFREG|0644, st_size=2197, ...}) = 0
fstat(4, {st_mode=S_IFREG|0644, st_size=2197, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x2adf271bf000
read(4, TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0..., 4096) 
= 2197
lseek(4, -1394, SEEK_CUR)   = 803
read(4, TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\5\0\0\0\0..., 4096) 
= 1394
close(4)= 0
munmap(0x2adf271bf000, 4096)= 0
close(3)= 0
time(NULL)  = 1263890749
stat(/etc/localtime, {st_mode=S_IFREG|0644, st_size=2197, ...}) = 0
rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x393b60e930}, {0x4c4730, [], 
SA_RESTORER, 0x393b60e930}, 8) = 0
exit_group(0)   = ?

--
messages: 98049
nosy: dorontal
severity: normal
status: open
title: time.strftime may hung while trying to open /etc/localtime but does not 
release GIL
versions: Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7739
___


