ANN: trombi 0.9.1

2011-04-08 Thread Jyrki Pulliainen
Announcing Trombi version 0.9.1

Trombi is an asynchronous CouchDB client for Tornado, the asynchronous
web server by Facebook.

Version 0.9.1 fixes a major header handling bug present in previous versions.

Currently the only known problem the bug causes is a rare corner case:
when long-polling the changes feed with a filter document, CouchDB can
become unable to respond to the request.

Trombi is available at PyPI: http://pypi.python.org/pypi/trombi/
Sources and the issue tracker are available on GitHub:
https://github.com/inoi/trombi
Documentation for 0.9.1 is available at: https://github.com/inoi/trombi

Cheers,
Jyrki Pulliainen
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


ANN: Mahotas 0.6.4

2011-04-08 Thread Luis Pedro Coelho
Hello all,

I'm happy to announce a new release of mahotas, my computer vision
library for Python.

WHAT'S MAHOTAS
--

Mahotas is a library which includes several image processing algorithms.
It works over numpy arrays. All its computation-heavy functions are
implemented in C++ (using templates so that they work for every data
type without conversions).

The result is a fast, optimised image processing and computer vision
library.
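As a quick illustration, here is a minimal sketch of typical usage over
numpy arrays (my own example, assuming the 0.6.x API; the synthetic image
and seed positions are arbitrary):

import numpy as np
import mahotas
import mahotas.thresholding

# a synthetic 8-bit "image": two bright blobs on a dark background
img = np.zeros((64, 64), np.uint8)
img[10:20, 10:20] = 200
img[40:55, 40:55] = 150

T = mahotas.thresholding.otsu(img)        # Otsu threshold on a uint8 array
binary = img > T

# seeded watershed on the inverted image (cf. the cwatershed fix below)
markers = np.zeros((64, 64), np.uint8)
markers[15, 15] = 1                       # one seed per expected region
markers[47, 47] = 2
labeled = mahotas.cwatershed(img.max() - img, markers)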

It is under heavy development and has no known bugs. Reported bugs
almost always get fixed in a day.

WHAT'S NEW
--

Major change is that Mac OS compilation now works! Thanks to K-Michael
Aye for the patch that saved the day.

Minor changes include fixes to ``cwatershed`` and adding the tests to
the source distribution (use `python setup.py test` to run them).

INFO

*API Docs*: http://packages.python.org/mahotas/

*Mailing List*: Use the pythonvision mailing list
http://groups.google.com/group/pythonvision?pli=1 for questions, bug
submissions, etc.

*Author*: Luis Pedro Coelho (with code by Zachary Pincus [from
scikits.image], Peter J. Verveer [from scipy.ndimage], and Davis King
[from dlib]

*License*: GPLv2

Thank you,
-- 
Luis Pedro Coelho | Carnegie Mellon University | http://luispedro.org


-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


pyCologne Python User Group Cologne - Meeting, April 13, 2011, 6.30pm

2011-04-08 Thread Andi Albrecht
The next meeting of pyCologne will take place:

Wednesday, April, 13th
starting about 6.30 pm - 6.45 pm
at Room 0.14, Benutzerrechenzentrum (RRZK-B)
University of Cologne, Berrenrather Str. 136, 50937 Köln, Germany

On this month's agenda:

 - wxWidgets - a look at the demo (Ralf Schönian)
 - PythonCamp planning (everyone)

Any presentations, news, book reviews, etc. are welcome
at each of our meetings!

At about 8.30 pm we will as usual enjoy the rest of the evening in a
nearby restaurant.

Further information, including directions to the location,
can be found at:

http://www.pycologne.de


(Sorry, the web-links are in German only.)

Regards,

Andi
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Re: [OT] Free software versus software idea patents

2011-04-08 Thread harrismh777

Steven D'Aprano wrote:

The reason Mono gets hit (from others besides me) is that they are in
  partnership and collaboration with Microsoft, consciously and
  unconsciously. This must be punished.

Just like Python, Apache, and the Linux kernel. What are you going to do
to punish them?


What do you mean, 'just like'? They are nothing alike.

(which is why the community is upset by some, but not the others: hint)

The punishment? ... withdraw support and use of the project as well as 
continue to aggressively support activism in all areas of freedom for 
computer science, including but not limited to the following:


1) I, nor my family, will ever purchase another Microsoft anything, from 
anyone, connected in any way, nor for any reason. Microsoft gets no more 
money from me, ever.


2) I do not use non-free software at any level nor for any reason at any 
time regardless of pain or inconvenience.


3) I work hard to advocate for free software alternatives to proprietary 
software at all levels... including bios and firmware.


4) I help to fund free software initiatives, choices, and development; 
conversely I refuse to fund (in any way) proprietary software 
initiatives and projects.


5) I educate the public in my sphere of influence for freedom and 
freedom choices, including the software engineering venue, and whenever 
possible persuade users of non-free software and Microsoft collaborative 
software to use only free software and only frameworks that are removed 
from Microsoft direction and control.


6) I actively advocate with vendors of free software at all levels to 
drop non-free software from their distros and to similarly punish 
Microsoft collaborative software initiatives and projects. (networking, 
communication, education, awareness training)


7) I write lots and lots of letters... to computer distributors, 
software houses, etc., with the message of freedom; a plea for the 
openness of a free society and sanity in the marketplace of ideas. I 
make my marketing intentions clear to vendors who would like me to spend 
my dollars with them ( cards, drivers, printers, cameras, comp hardware 
desk, notebook, netbook, others) clearly indicating that their products 
must be open, not require Microsoft proprietary drivers, and be free 
from non-free software and firmware.



Freedom isn't free... you have to fight for it... always.

--
http://mail.python.org/mailman/listinfo/python-list


Tips on Speeding up Python Execution

2011-04-08 Thread Abhijeet Mahagaonkar
Dear Pythoners,
I have written a Python application which authenticates a user, reads a
webpage, searches for a pattern, and builds a database (in my case a
dictionary with a fixed set of keys).
Inputting the username and password for authentication and the final
display of the results are handled by a GUI.
I was able to isolate that the major chunk of run time is spent opening
webpages, reading from them, and extracting text.
I wanted to know if there is a way to call these functions concurrently.

here is my pseudo code:

database=[]
in a while loop:
   build the url by concatenating the query parameters
   temp=urlopen(url).read()
   dict['Key1']=get_key1_data(temp) ## passing the entire string obtained by .read()
   dict['key2']=get_key2_data(temp)
   .
   .
   .
   dict['keyn']=get_keyn_data(temp)
   database=database+[dict]   ## building an array of dictionaries

My question here is: can I have the functions get_key1_data through
get_keyn_data run concurrently?
I ask this because these functions are not dependent on one another. They
all have access to the same parsed URL and independently pull data in order
to populate the final database.

Appreciating your help in this one.

Warm Regards,
Abhijeet.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to re import a module

2011-04-08 Thread Rafael Durán Castañeda
That's really easy:

>>> import re
>>> reload(re)
<module 're' from '/usr/lib/python2.6/re.pyc'>


That works in Python 2.x; in Python 3.x it is different, and easy to
find if you search the Python docs.
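For illustration, a minimal sketch of the Python 3 equivalent (the module
name here is just an example):

import imp
import json              # stand-in for the module you want to refresh

json = imp.reload(json)  # re-executes the module; importlib.reload in later 3.x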

2011/4/8 hid...@gmail.com

 Hello, I want to know the best way to re-import a module. I have a
 web server with just one Apache session for all my domains and applications,
 and if I need to make changes to one application, restarting the server
 will affect the others. I was thinking of two ways: the first would be to
 're-import' the module, the second to use subprocess. The first seems the
 less insecure option, which in this case I think is re-importing the
 module; please give me your opinions.


 ---
 Thanks in advance
 Diego Hidalgo
 --
 http://mail.python.org/mailman/listinfo/python-list


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Amazon Simple Queue Service Worker

2011-04-08 Thread Joseph Ziegler
Thanks!

On 7 April 2011 10:13, Joseph Ziegler j...@lounginghound.com wrote:

 Hi all,

 Little new to the python world, please excuse the Noobness.

 We are writing a server which will subscribe to the Amazon Simple Queue
 Service.  I am looking for a good service container.  I saw Twisted and Zope
 out there. It's going to be a server which polls on a queue via the Boto
 api.  Do you have any suggestions for a framework to start?

 Best Regards,

 Joe

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tips on Speeding up Python Execution

2011-04-08 Thread Chris Angelico
On Fri, Apr 8, 2011 at 5:04 PM, Abhijeet Mahagaonkar
abhijeet.mano...@gmail.com wrote:
 I was able to isolate that major chunk of run time is eaten up in opening a
 webpages, reading from them and extracting text.
 I wanted to know if there is a way to concurrently calling the functions.

So, to clarify: you have code that's loading lots of separate pages,
and the time is spent waiting for the internet? If you're saturating
your connection, then this won't help, but if they're all small pages
and they're coming over the internet, then yes, you certainly CAN
fetch them concurrently. As the Perl folks say, There's More Than One
Way To Do It; one is to spawn a thread for each request, then collect
up all the results at the end. Look up the 'threading' module for
details:

http://docs.python.org/library/threading.html
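For illustration, a rough sketch of the thread-per-request approach
(Python 2 urllib2; fetch(), urls and the URLs themselves are made-up
names, not the original code):

import threading
import urllib2

def fetch(url, results, index):
    # each thread stores its page text in its own slot of a shared list
    results[index] = urllib2.urlopen(url).read()

urls = ['http://example.com/a', 'http://example.com/b']
results = [None] * len(urls)
threads = [threading.Thread(target=fetch, args=(u, results, i))
           for i, u in enumerate(urls)]
for t in threads:
    t.start()
for t in threads:
    t.join()    # wait for all downloads before building the database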

It should also be possible to directly use asynchronous I/O and
select(), but I couldn't see a way to do that with urllib/urllib2. If
you're using sockets directly, this ought to be an option.

I don't know what's the most Pythonesque option, but if you already
have specific Python code for each of your functions, it's probably
going to be easiest to spawn threads for them all.

Chris Angelico
Threading fan ever since he met OS/2 in 1993 or so
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Literate Programming

2011-04-08 Thread Hans Georg Schaathun
On Thu, 07 Apr 2011 16:21:52 -0500, Robert Kern
  robert.k...@gmail.com wrote:
:  http://sphinx.pocoo.org/markup/code.html
: 
:  As far as I can tell, it just works. See here for an example:
: 
:  http://ipython.scipy.org/doc/nightly/html/interactive/reference.html

Maybe I did not express myself clearly.  I don't have a problem with
highlighting or indentation within a single, complete and continuous
block of code.
I get trouble when I insert explaining text within nested blocks, 
especially when I do it at different levels of indentation.
In the pages you cite, I cannot find a single example which tries to 
do this.

So something like this is fine:

code
# This is a long and complex block::

if mytest():
    for i in myList():
        foobar(i)
else:
    for i in yourList():
        boofar(i)
/code

If I do the following, sphinx cannot keep up ...

code
# This block should be explained step by step::

if mytest():

# Bla, blah blah...
#
#   ::

    for i in myList():

# The :func:`foobar` function is an interesting choice here ... blah
#
#   ::

        foobar(i)

# Otherwise, myList might not be defined, so we need yours::

else:

# More blah...
#
#   ::

    for i in yourList():

# And here we go again::

        boofar(i)
/code

Highlighting tends to break starting from `else', and indentation breaks
at the second or third level.

-- 
:-- Hans Georg
-- 
http://mail.python.org/mailman/listinfo/python-list


How to get a PID of a child process from a process openden with Popen()

2011-04-08 Thread P.S.

Hello,

I am starting a GUI-application as another user with kdesu in my python
script:

import shlex, subprocess

p = subprocess.Popen(shlex.split("kdesu -u test program"))

How can I acquire the PID of the program which kdesu starts?
p.pid just returns the PID of kdesu, but I need the PID of the
child process that kdesu spawns.

My System: openSUSE 11.4 64-Bit, Python 2.7.

Regards
Pedro Santos
--
http://mail.python.org/mailman/listinfo/python-list


3.2: email.message.get_payload() delivers str, but send_message expect bytes

2011-04-08 Thread Axel Rau
Hi all,

I'm just starting with imaplib, email and smtplib and try to write a
SPAM reporter. I retrieve SPAM mails from an IMAP server and add them as
message/rfc822 attachments to a report mail.
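Roughly, the construction looks like this (a simplified sketch, not the
actual script; the placeholder raw message and addresses are made up):

import email
from email.mime.multipart import MIMEMultipart
from email.mime.message import MIMEMessage

fetched = [b"Subject: test spam\r\n\r\nbody\r\n"]   # placeholder for imaplib results

report = MIMEMultipart()
report['Subject'] = 'Spam report'
report['From'] = 'reporter@example.org'
report['To'] = 'abuse@example.org'

for raw in fetched:
    spam = email.message_from_bytes(raw)      # parse bytes back into a Message
    report.attach(MIMEMessage(spam))          # attached as message/rfc822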
Sometimes my call of smtplib.send_message works, sometimes, I get:
--
  File
/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/smtplib.py,
line 771, in send_message
rcpt_options)
  File
/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/smtplib.py,
line 739, in sendmail
(code,resp) = self.data(msg)
  File
/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/smtplib.py,
line 495, in data
q = _quote_periods(msg)
  File
/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/smtplib.py,
line 165, in _quote_periods
return re.sub(br'(?m)^\.', '..', bindata)
  File
/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/re.py,
line 167, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: sequence item 1: expected bytes, str found
--
When I query the class of my payloads, they always show up as strings.
The test case, which always fails is an oversized SPAM, which my script
must truncate. I do this by removing MIME parts from the end (just
deleting items from the list, describing the multipart structure).

Another problem comes up when I try to encode the payload of the whole
report mail; I always get:
---
  File erdb_bt.py, line 195, in flushReports
email.encoders.encode_base64(self.msg)
  File
/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/email/encoders.py,
line 32, in encode_base64
encdata = str(_bencode(orig), 'ascii')
  File
/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/base64.py,
line 56, in b64encode
raise TypeError(expected bytes, not %s % s.__class__.__name__)
TypeError: expected bytes, not list
---
What am I doing wrong?

Axel
-- 
http://mail.python.org/mailman/listinfo/python-list


NLP

2011-04-08 Thread Ranjith Kumar
Hi all,
Can anyone suggest a good Natural Language Processing library for
Python other than NLTK?

-- 
Cheers,
Ranjith Kumar K,
Chennai.

http://ranjithtenz.wordpress.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Literate Programming

2011-04-08 Thread Jim
On Apr 7, 2:09 pm, Hans Georg Schaathun h...@schaathun.net wrote:
 Has anyone found a good system for literate programming in python?

Are you aware of pyweb http://sourceforge.net/projects/pywebtool/ ?
-- 
http://mail.python.org/mailman/listinfo/python-list


[2.5.1] Script to download and rename bunch of files?

2011-04-08 Thread Gilles Ganault
Hello,

Before I go ahead and learn how to write this, I was wondering if
someone knew of some source code I could use to download and rename a
bunch of files, ie. the equivalent of wget's -O switch?

I would provide a two-column list where column 1 would contain the
full URL, and column 2 would contain the name to use to rename the
file once downloaded.

Thank you.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [2.5.1] Script to download and rename bunch of files?

2011-04-08 Thread Laurent Claessens

Le 08/04/2011 14:47, Gilles Ganault a écrit :

Hello,

Before I go ahead and learn how to write this, I was wondering if
someone knew of some source code I could use to download and rename a
bunch of files, ie. the equivalent of wget's -O switch?

I would provide a two-column list where column 1 would contain the
full URL, and column 2 would contain the name to use to rename the
file once downloaded.

Thank you.


The following puts the content of the page at urlBase into the string `a`:

a = urllib.urlopen(urlBase).read()

Then you have to write `a` to a file.

There could be a better way.
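A minimal sketch along those lines for Python 2.5, assuming a
whitespace-separated two-column input file (the name urls.txt is made up):

import urllib

for line in open('urls.txt'):
    line = line.strip()
    if not line:
        continue
    url, filename = line.split(None, 1)
    urllib.urlretrieve(url, filename)   # roughly the equivalent of wget -O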

Laurent
--
http://mail.python.org/mailman/listinfo/python-list


Re: [2.5.1] Script to download and rename bunch of files?

2011-04-08 Thread Gilles Ganault
On Fri, 08 Apr 2011 15:14:27 +0200, Laurent Claessens
moky.m...@gmail.com wrote:
The following puts in the string `a` the code of the page urlBase :

a = urllib.urlopen(urlBase).read()

Then you have to write `a` in a file.

There could be better way.

Thank you.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [OT] Free software versus software idea patents

2011-04-08 Thread Westley Martínez
On Fri, 2011-04-08 at 01:41 -0500, harrismh777 wrote:
 Steven D'Aprano wrote:
  Just like Python, Apache, and the Linux kernel. What are you going to do
  to punish them?
 
 What do you mean 'just like?They are nothing alike.
 
 (which is why the community is upset by sone, but not the others: hint)
 
 The punishment? ... withdraw support and use of the project as well as 
 continue to aggressively support activism in all areas of freedom for 
 computer science, including but not limited to the following:
 
 1) I, nor my family, will ever purchase another Microsoft anything, from 
 anyone, connected in any way, nor for any reason. Microsoft gets no more 
 money from me, ever.
 
 2) I do not use non-free software at any level nor for any reason at any 
 time regardless of pain or inconvenience.
 
 3) I work hard to advocate for free software alternatives to proprietary 
 software at all levels... including bios and firmware.
 
 4) I help to fund free software initiatives, choices, and development; 
 conversely I refuse to fund (in any way) proprietary software 
 initiatives and projects.
 
 5) I educate the public in my sphere of influence for freedom and 
 freedom choices, including the software engineering venue, and whenever 
 possible persuade users of non-free software and Microsoft collaborative 
 software to use only free software and only frameworks that are removed 
 from Microsoft direction and control.
 
 6) I actively advocate with vendors of free software at all levels to 
 drop non-free software from their distros and to similarly punish 
 Microsoft collaborative software initiatives and projects. (networking, 
 communication, education, awareness training)
 
 7) I write lots and lots of letters... to computer distributors, 
 software houses, etc., with the message of freedom; a plea for the 
 openness of a free society and sanity in the marketplace of ideas. I 
 make my marketing intentions clear to vendors who would like me to spend 
 my dollars with them ( cards, drivers, printers, cameras, comp hardware 
 desk, notebook, netbook, others) clearly indicating that their products 
 must be open, not require Microsoft proprietary drivers, and be free 
 from non-free software and firmware.
 
 
 Freedom isn't free... you have to fight for it... always.

Why should a business listen to you? You're not gonna buy any software
anyways.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fun python 3.2 one-liner

2011-04-08 Thread Lie Ryan
On 04/06/11 01:07, Steven D'Aprano wrote:
 On Tue, 05 Apr 2011 15:38:28 +0200, Daniel Fetchinson wrote:

 Personally, I find that the discipline of keeping to 80 characters is 
 good for me. It reduces the temptation of writing obfuscated Python one-
 liners when two lines would be better. The *only* time it is a burden is 
 when I write doc strings, and even then, only a small one.

Unless the editor I'm using has 80-char autowrapping or an 80-char
guide line, I tend to wrap docstrings at ~40-60 chars.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tips on Speeding up Python Execution

2011-04-08 Thread MRAB

On 08/04/2011 08:25, Chris Angelico wrote:
[snip]

I don't know what's the most Pythonesque option, but if you already
have specific Python code for each of your functions, it's probably
going to be easiest to spawn threads for them all.


Pythonesque refers to Monty Python's Flying Circus. The word you
want is Pythonic.
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to get a PID of a child process from a process openden with Popen()

2011-04-08 Thread Miki Tebeka
   p = subprocess.Popen(shlex.split(kdesu -u test program))
 
 How can I aquire the PID of the program which kdesu starts?
You can run ``ps --ppid`` with p.pid and get the line containing the program.
The first field there should be the child process id.
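Roughly, a sketch of that idea (assumes a Linux ps supporting --ppid and
-o, and Python 2.7 as in the original post; not robust if kdesu has
already exited):

import subprocess

def child_pids(parent_pid):
    out = subprocess.Popen(['ps', '--ppid', str(parent_pid), '-o', 'pid='],
                           stdout=subprocess.PIPE).communicate()[0]
    return [int(pid) for pid in out.split()]

# child_pids(p.pid) should then list the process(es) started by kdesu.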

HTH
--
Miki Tebeka miki.teb...@gmail.com
http://pythonwise.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-04-08 Thread Aahz
In article 4d8be3bb.4030...@v.loewis.de,
Martin v. Loewis mar...@v.loewis.de wrote:
Martin deleted the attribution for Carl Banks:

 The cmp argument doesn't depend in any way on an object's __cmp__
 method, so getting rid of __cmp__ wasn't any good readon to also get
 rid of the cmp argument

So what do you think about the cmp() builtin? Should have stayed,
or was it ok to remove it?

If it should have stayed: how should it's implementation have looked like?

If it was ok to remove it: how are people supposed to fill out the cmp=
argument in cases where they use the cmp() builtin in 2.x?

Actually, my take is that removing __cmp__ was a mistake.  (I already
argued about it back in python-dev before it happened, and I see little
point rehashing it.  My reason is strictly efficiency grounds: when
comparisons are expensive -- such as Decimal object -- __cmp__ is
faster.)
-- 
Aahz (a...@pythoncraft.com)   * http://www.pythoncraft.com/

Beware of companies that claim to be like a family.  They might not be
lying.  --Jill Lundquist
-- 
http://mail.python.org/mailman/listinfo/python-list


Python 3.2 vs Java 1.6

2011-04-08 Thread km
Hi All,

How does python 3.2 fare compared to Java 1.6 in terms of performance ?
any pointers or observations ?

regards,
KM
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tips on Speeding up Python Execution

2011-04-08 Thread Chris Angelico
On Sat, Apr 9, 2011 at 12:41 AM, MRAB pyt...@mrabarnett.plus.com wrote:
 On 08/04/2011 08:25, Chris Angelico wrote:
 [snip]

 I don't know what's the most Pythonesque option, but if you already
 have specific Python code for each of your functions, it's probably
 going to be easiest to spawn threads for them all.

 Pythonesque refers to Monty Python's Flying Circus. The word you
 want is Pythonic.

Whoops! Remind me not to post while sleep-deprived.

although Sleep-deprived is my normal state, so that may be a bit tricky.

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 vs Java 1.6

2011-04-08 Thread Chris Angelico
On Sat, Apr 9, 2011 at 1:21 AM, km srikrishnamo...@gmail.com wrote:
 Hi All,

 How does python 3.2 fare compared to Java 1.6 in terms of performance ?
 any pointers or observations ?

Hi All,

How do apples compare to oranges in terms of performance?

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 vs Java 1.6

2011-04-08 Thread Moises Alberto Lindo Gutarra
I have worked with Java since 1997 and with Python for the last three
years, and I really think that Python's performance is much better than
Java's. I have written the same applications in both, and Python always
responds better. Try building a small application that accesses databases,
uses FTP clients, etc., and you will see what I mean.

2011/4/8 km srikrishnamo...@gmail.com:
 Hi All,

 How does python 3.2 fare compared to Java 1.6 in terms of performance ?
 any pointers or observations ?

 regards,
 KM


 --
 http://mail.python.org/mailman/listinfo/python-list





-- 
Atte.
Moisés Alberto Lindo Gutarra
Asesor - Desarrollador Java / Open Source
Linux Registered User #431131 - http://counter.li.org/
Cel: (511) 995081720
EMail: mli...@gmail.com
MSN: mli...@tumisolutions.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tips on Speeding up Python Execution

2011-04-08 Thread Matt Chaput

On 08/04/2011 11:31 AM, Chris Angelico wrote:

On Sat, Apr 9, 2011 at 12:41 AM, MRABpyt...@mrabarnett.plus.com  wrote:

On 08/04/2011 08:25, Chris Angelico wrote:
[snip]


I don't know what's the most Pythonesque option, but if you already
have specific Python code for each of your functions, it's probably
going to be easiest to spawn threads for them all.


Pythonesque refers to Monty Python's Flying Circus. The word you
want is Pythonic.


And the word for referring to the actual snake is Pythonical :P
--
http://mail.python.org/mailman/listinfo/python-list


Generators and propagation of exceptions

2011-04-08 Thread r
I had a problem for which I've already found a satisfactory
work-around, but I'd like to ask you if there is a better/nicer
looking solution. Perhaps I'm missing something obvious.

The code looks like this:

stream-of-tokens = token-generator(stream-of-characters)
stream-of-parsed-expressions = parser-generator(stream-of-tokens)
stream-of-results = evaluator-generator(stream-of-parsed-expressions)

each of the above functions consumes and implements a generator:

def evaluator-generator(stream-of-tokens):
    for token in stream-of-tokens:
        try:
            yield token.evaluate()       # evaluate() returns a Result
        except Exception as exception:
            yield ErrorResult(exception) # ErrorResult is a subclass of Result

The problem is that, when I use the above mechanism, the errors
propagate to the output embedded in the data streams. This means, I
have to make them look like real data (in the example above I'm
wrapping the exception with an ErrorExpression object) and raise them
and intercept them again at each level until they finally trickle down
to the output. It feels a bit like writing in C (checking error codes
and propagating them to the caller).

OTOH, if I don't intercept the exception inside the loop, it will
break the for loop and close the generator. So the error no longer
affects a single token/expression but it kills the whole session. I
guess that's because the direction flow of control is sort of
orthogonal to the direction of flow of data.

Any idea for a working and elegant solution?

Thanks,

-r
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [OT] Free software versus software idea patents

2011-04-08 Thread Ethan Furman

Westley Martínez wrote:

On Fri, 2011-04-08 at 01:41 -0500, harrismh777 wrote:


Freedom isn't free... you have to fight for it... always.


Why should a business listen to you? You're not gonna buy any software
anyways.



From a thread a few months back I can say there are a couple companies 
with posters on this list that are successful in supporting *and 
selling* open-source software.


~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-04-08 Thread Lie Ryan
On 04/09/11 01:08, Aahz wrote:
 Actually, my take is that removing __cmp__ was a mistake.  (I already
 argued about it back in python-dev before it happened, and I see little
 point rehashing it.  My reason is strictly efficiency grounds: when
 comparisons are expensive -- such as Decimal object -- __cmp__ is
 faster.)

I don't get you... why would sorting a list using __cmp__ be faster when
comparisons are expensive?
-- 
http://mail.python.org/mailman/listinfo/python-list


Copy-on-write when forking a python process

2011-04-08 Thread John Connor
Hi all,
Long time reader, first time poster.

I am wondering if anything can be done about the COW (copy-on-write)
problem when forking a python process.  I have found several
discussions of this problem, but I have seen no proposed solutions or
workarounds.  My understanding of the problem is that an object's
reference count is stored in the ob_refcnt field of the PyObject
structure itself.  When a process forks, its memory is initially not
copied. However, if any references to an object are made or destroyed
in the child process, the page in which the object's ob_refcnt field
is located will be copied.

My first thought was the obvious one: make the ob_refcnt field a
pointer into an array of all object refcounts stored elsewhere.
However, I do not think that there would be a way of doing this
without adding a lot of complexity.  So my current thinking is that it
should be possible to disable refcounting for an object.  This could
be done by adding a field to PyObject named ob_optout.  If ob_optout
is true then py_INCREF and py_DECREF will have no effect on the
object:


from refcount import optin, optout

class Foo: pass

mylist = [Foo() for _ in range(10)]
optout(mylist)          # Sets ob_optout to true
for element in mylist:
    optout(element)     # Sets ob_optout to true

Fork_and_block_while_doing_stuff(mylist)

optin(mylist)           # Sets ob_optout to false
for element in mylist:
    optin(element)      # Sets ob_optout to false


Has anyone else looked into the COW problem?  Are there workarounds
and/or other plans to fix it?  Does the solution I am proposing sound
reasonable, or does it seem like overkill?  Does anyone foresee any
problems with it?

Thanks,
--jac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread Ian Kelly
On Fri, Apr 8, 2011 at 9:55 AM, r nbs.pub...@gmail.com wrote:
 I had a problem for which I've already found a satisfactory
 work-around, but I'd like to ask you if there is a better/nicer
 looking solution. Perhaps I'm missing something obvious.

 The code looks like this:

 stream-of-tokens = token-generator(stream-of-characters)
 stream-of-parsed-expressions = parser-generator(stream-of-tokens)
 stream-of-results = evaluator-generator(stream-of-parsed-expressions)

 each of the above functions consumes and implements a generator:

 def evaluator-generator(stream-of-tokens):
  for token in stream-of-tokens:
    try:
       yield token.evaluate()           # evaluate() returns a Result
    except Exception as exception:
       yield ErrorResult(exception) # ErrorResult is a subclass of Result

 The problem is that, when I use the above mechanism, the errors
 propagate to the output embedded in the data streams. This means, I
 have to make them look like real data (in the example above I'm
 wrapping the exception with an ErrorExpression object) and raise them
 and intercept them again at each level until they finally trickle down
 to the output. It feels a bit like writing in C (checking error codes
 and propagating them to the caller).

 OTOH, if I don't intercept the exception inside the loop, it will
 break the for loop and close the generator. So the error no longer
 affects a single token/expression but it kills the whole session. I
 guess that's because the direction flow of control is sort of
 orthogonal to the direction of flow of data.

 Any idea for a working and elegant solution?


I guess that depends on what you would want to do with the exceptions
instead.  Collect them out-of-band?  Revising your pseudo-code:

errors = []

stream-of-tokens = token-generator(stream-of-characters, errors)
stream-of-parsed-expressions = parser-generator(stream-of-tokens, errors)
stream-of-results = evaluator-generator(stream-of-parsed-expressions, errors)

def evaluator-generator(stream-of-tokens, errors):
    for token in stream-of-tokens:
        try:
            yield token.evaluate()   # evaluate() returns a Result
        except Exception as exception:
            errors.append(exception)
            # or:
            # errors.append(EvaluatorExceptionContext(exception, ...))


Cheers,
Ian
-- 
http://mail.python.org/mailman/listinfo/python-list


Argument of the bool function

2011-04-08 Thread candide
About the standard function bool(), Python's official documentation 
tells us the following :


bool([x])
Convert a value to a Boolean, using the standard truth testing procedure.


In this context, what exactly is "a value" referring to?


For instance,


>>> x=42
>>> bool(x=5)
True



but the expression:

x=42


has no value.






--
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread Terry Reedy

On 4/8/2011 11:55 AM, r wrote:

I had a problem for which I've already found a satisfactory
work-around, but I'd like to ask you if there is a better/nicer
looking solution. Perhaps I'm missing something obvious.

The code looks like this:

stream-of-tokens = token-generator(stream-of-characters)
stream-of-parsed-expressions = parser-generator(stream-of-tokens)
stream-of-results = evaluator-generator(stream-of-parsed-expressions)

each of the above functions consumes and implements a generator:

def evaluator-generator(stream-of-tokens):


According to the above, that should be stream-of-parsed-expressions.


   for token in stream-of-tokens:
 try:
yield token.evaluate()   # evaluate() returns a Result
 except Exception as exception:
yield ErrorResult(exception) # ErrorResult is a subclass of Result


The question which you do not answer below is what, if anything, you 
want to do with the errors? If nothing, just pass. You are now, in effect, 
treating them the same as normal results (at least sending them down the 
same path), but that does not seem satisfactory to you. If you want them 
treated separately, then send them down a different path. Append the 
error report to a list or queue or send to a consumer generator 
(consumer.send).



The problem is that, when I use the above mechanism, the errors
propagate to the output embedded in the data streams. This means, I
have to make them look like real data (in the example above I'm
wrapping the exception with an ErrorExpression object) and raise them
and intercept them again at each level until they finally trickle down
to the output. It feels a bit like writing in C (checking error codes
and propagating them to the caller).

OTOH, if I don't intercept the exception inside the loop, it will
break the for loop and close the generator. So the error no longer
affects a single token/expression but it kills the whole session. I
guess that's because the direction flow of control is sort of
orthogonal to the direction of flow of data.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread Benjamin Kaplan
On Fri, Apr 8, 2011 at 12:26 PM, candide candide@free.invalid wrote:
 About the standard function bool(), Python's official documentation tells us
 the following :

 bool([x])
 Convert a value to a Boolean, using the standard truth testing procedure.


 In this context, what exactly a value is referring to ?


 For instance,


 x=42
 bool(x=5)
 True



 but _expression_ :

 x=42


 has no value.


That's because bool(x=5) isn't doing what you think.

>>> bool(y=5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'y' is an invalid keyword argument for this function

bool(x=5) is just passing the value 5 as the argument x to the function.

"Value" means just what you'd think: any constant, or any name that a
value has been assigned to.






 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread Ian Kelly
On Fri, Apr 8, 2011 at 10:26 AM, candide candide@free.invalid wrote:
 x=42
 bool(x=5)
 True



 but _expression_ :

 x=42


 has no value.

x=42 is an assignment statement, not an expression.
In bool(x=5), x=5 is also not an expression.  It's passing the
expression 5 in as the parameter x, using a keyword argument.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread Mel
candide wrote:
 About the standard function bool(), Python's official documentation 
 tells us the following :

 bool([x])
 Convert a value to a Boolean, using the standard truth testing procedure.

 In this context, what exactly a value is referring to ?

 For instance,
   x=42
   bool(x=5)
 True
  

Cute.  What's happening here is that `x=5` isn't really an expression. 
It's passing a value to the named parameter `x`, specified in the
definition of `bool`.  Try it with something else:

Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) 
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> bool(y=5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'y' is an invalid keyword argument for this function



Mel.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Copy-on-write when forking a python process

2011-04-08 Thread Heiko Wundram
Am 08.04.2011 18:14, schrieb John Connor:
 Has anyone else looked into the COW problem?  Are there workarounds
 and/or other plans to fix it?  Does the solution I am proposing sound
 reasonable, or does it seem like overkill?  Does anyone foresee any
 problems with it?

Why'd you need a fix like this for something that isn't broken? COW
doesn't just refer to the object reference-count, but to the object
itself, too. _All_ memory of the parent (and, as such, all objects, too)
become unrelated to memory in the child once the fork is complete.

The initial object reference-count state of the child is guaranteed to
be sound for all objects (because the parent's final reference-count
state was, before the process image got cloned [remember, COW is just an
optimization for a complete clone, and it's up to the operating system to
make sure that you don't notice different semantics from a complete
copy]), and what you're proposing (opting in/out of reference counting)
breaks that.

-- 
--- Heiko.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tips on Speeding up Python Execution

2011-04-08 Thread Raymond Hettinger
On Apr 8, 12:25 am, Chris Angelico ros...@gmail.com wrote:
 On Fri, Apr 8, 2011 at 5:04 PM, Abhijeet Mahagaonkar

 abhijeet.mano...@gmail.com wrote:
  I was able to isolate that major chunk of run time is eaten up in opening a
  webpages, reading from them and extracting text.
  I wanted to know if there is a way to concurrently calling the functions.

 So, to clarify: you have code that's loading lots of separate pages,
 and the time is spent waiting for the internet? If you're saturating
 your connection, then this won't help, but if they're all small pages
 and they're coming over the internet, then yes, you certainly CAN
 fetch them concurrently. As the Perl folks say, There's More Than One
 Way To Do It; one is to spawn a thread for each request, then collect
 up all the results at the end. Look up the 'threading' module for
 details:

 http://docs.python.org/library/threading.html

The docs for Python3.2 have a nice example for downloading multiple
webpages in parallel:

http://docs.python.org/py3k/library/concurrent.futures.html#threadpoolexecutor-example
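Condensed, the pattern there looks roughly like this (a sketch in the
spirit of that docs example; the URLS list is illustrative):

import concurrent.futures
import urllib.request

URLS = ['http://www.python.org/', 'http://docs.python.org/']

def load(url, timeout=30):
    return urllib.request.urlopen(url, timeout=timeout).read()

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futures = dict((executor.submit(load, url), url) for url in URLS)
    for future in concurrent.futures.as_completed(futures):
        url = futures[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))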

Raymond
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 vs Java 1.6

2011-04-08 Thread geremy condra
On Fri, Apr 8, 2011 at 8:21 AM, km srikrishnamo...@gmail.com wrote:
 Hi All,

 How does python 3.2 fare compared to Java 1.6 in terms of performance ?
 any pointers or observations ?

Python and Java have overall very different performance profiles, but
for the vast majority of applications either will suffice. If you have
reason to believe that your application is very atypical in terms of
performance requirements, please post that so we can give you more
specific advice.

Geremy Condra
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread Raymond Hettinger
On Apr 8, 8:55 am, r nbs.pub...@gmail.com wrote:
 I had a problem for which I've already found a satisfactory
 work-around, but I'd like to ask you if there is a better/nicer
 looking solution. Perhaps I'm missing something obvious.

 The code looks like this:

 stream-of-tokens = token-generator(stream-of-characters)
 stream-of-parsed-expressions = parser-generator(stream-of-tokens)
 stream-of-results = evaluator-generator(stream-of-parsed-expressions)

 each of the above functions consumes and implements a generator:

 def evaluator-generator(stream-of-tokens):
   for token in stream-of-tokens:
     try:
        yield token.evaluate()           # evaluate() returns a Result
     except Exception as exception:
        yield ErrorResult(exception) # ErrorResult is a subclass of Result

 The problem is that, when I use the above mechanism, the errors
 propagate to the output embedded in the data streams. This means, I
 have to make them look like real data (in the example above I'm
 wrapping the exception with an ErrorExpression object) and raise them
 and intercept them again at each level until they finally trickle down
 to the output. It feels a bit like writing in C (checking error codes
 and propagating them to the caller).

You're right, checking and propagating at each level doesn't feel
right.

If you need an exception to break out of the loops and close the
generators, then it seems you should just ignore the exception on
every level except for the one where you really want to handle the
exception (possibly in an outer control-loop):

   while True:
       try:
           for result in evaluator-generator(stream-of-parsed-expressions):
               print(result)
       except Exception as e:
           handle_the_exception(e)   # let the exception float up to this level



 OTOH, if I don't intercept the exception inside the loop, it will
 break the for loop and close the generator. So the error no longer
 affects a single token/expression but it kills the whole session. I
 guess that's because the direction flow of control is sort of
 orthogonal to the direction of flow of data.

 Any idea for a working and elegant solution?

 Thanks,

 -r

-- 
http://mail.python.org/mailman/listinfo/python-list


argparse and default for FileType

2011-04-08 Thread Paolo Elvati
Hi,

I noticed a strange behavior of argparse.
When running a simple code like the following:

import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
  "-o",
  default = 'fake',
  dest = 'OutputFile',
  type = argparse.FileType('w')
 )
args = parser.parse_args()

I noticed that the default file ("fake") is created every time I run the
code, even when I explicitly set the -o flag, in which case both files
are produced.
My goal instead is to write the default file ONLY if the flag is not specified.
For the moment, I solved it simply by removing default='fake' and
adding the required=True keyword, but I was wondering what the
correct way of doing it is (or if it is simply a bug).
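One possible workaround (a sketch, not an official answer): keep the
default as a plain file name and open it only after parsing, so only the
file that is actually wanted gets created:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-o', dest='OutputFile', default='fake')
args = parser.parse_args()

out = open(args.OutputFile, 'w')   # opened once, after the flags are known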

Thank you,

Paolo
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread r
Terry, Ian, thank you for your answers.

On Sat, Apr 9, 2011 at 1:30 AM, Terry Reedy tjre...@udel.edu wrote:
[...]
 According to the above, that should be stream-of-parsed-expressions.

Good catch.

 The question which you do not answer below is what, if anything, you want to
 do with error? If nothing, just pass. You are now, in effect, treating them
 the same as normal results (at least sending them down the same path), but
 that does not seem satisfactory to you. If you want them treated separately,
 then send them down a different path. Append the error report to a list or
 queue or send to a consumer generator (consumer.send).

The code above implements an interactive session (a REPL). Therefore,
what I'd like is the error information printed to the output as soon
as it becomes available. Piping the errors together with the data
works fine (it does what I need) but it feels a bit clunky [1]. After
all, exceptions were specifically invented to simplify error handling
(delivering errors to the caller), and here they seem to choose the
wrong caller.

Ignoring the errors or collecting them out of band are both fine ideas
but they don't suit the interactive mode of operation.

[1] It's actually not _that_ bad. Exceptions still are very useful
inside of each of these procedures. Errors only have to be handled
manually in that main data path with generators.

Thanks,

-r
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread Raymond Hettinger
On Apr 8, 8:55 am, r nbs.pub...@gmail.com wrote:
 I had a problem for which I've already found a satisfactory
 work-around, but I'd like to ask you if there is a better/nicer
 looking solution. Perhaps I'm missing something obvious.

 The code looks like this:

 stream-of-tokens = token-generator(stream-of-characters)
 stream-of-parsed-expressions = parser-generator(stream-of-tokens)
 stream-of-results = evaluator-generator(stream-of-parsed-expressions)

 each of the above functions consumes and implements a generator:

 def evaluator-generator(stream-of-tokens):
   for token in stream-of-tokens:
     try:
        yield token.evaluate()           # evaluate() returns a Result
     except Exception as exception:
        yield ErrorResult(exception) # ErrorResult is a subclass of Result

 The problem is that, when I use the above mechanism, the errors
 propagate to the output embedded in the data streams. This means, I
 have to make them look like real data (in the example above I'm
 wrapping the exception with an ErrorExpression object) and raise them
 and intercept them again at each level until they finally trickle down
 to the output. It feels a bit like writing in C (checking error codes
 and propagating them to the caller).

You could just let the exception go up to an outermost control-loop
without handling it at all on a lower level.  That is what exceptions
for you: terminate all the loops, unwind the stacks, and propagate up
to some level where the exception is caught:

  while 1:
      try:
          results = evaluator-generator(stream-of-parsed-expressions)
          for result in results:
              print(result)
      except Exception as e:
          handle_the_exception(e)

OTOH, If you want to catch the exception at the lowest level and wrap
it up as data (the ErrorResult in your example), there is a way to
make it more convenient.  Give the ErrorResult object some identify
methods that correspond to the methods being called by upper levels.
This will let the object float through without you cluttering each
level with detect-and-reraise logic.

    class ErrorResult:
        def __iter__(self):
            # pass through an enclosing iterator
            yield self

Here's a simple demo of how the pass through works:

>>> from itertools import *
>>> list(chain([1,2,3], ErrorResult(), [4,5,6]))
[1, 2, 3, <__main__.ErrorResult object at 0x2250f70>, 4, 5, 6]


 Any idea for a working and elegant solution?

Hope these ideas have helped.


Raymond

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread Ethan Furman

r wrote:

The code above implements an interactive session (a REPL). Therefore,
what I'd like to get is an error information printed out at the output
as soon as it becomes available.


Couple ideas:

1) Instead of yielding the error, call some global print function, then 
continue on; or


2) Collect the errors, then have the top-most consumer check for errors 
and print them out before reading the next generator output.


~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: Copy-on-write when forking a python process

2011-04-08 Thread jac
Hi Heiko,
I just realized I should probably have put a clearer use-case in my
previous message.  A example use-case would be if you have a parent
process which creates a large dictionary (say several gigabytes).  The
process then forks several worker processes which access this
dictionary.  The worker processes do not add or remove objects from
the dictionary, nor do they alter the individual elements of the
dictionary.  They simply perform lookups on the dictionary and perform
calculations which are then written to files.
If I wrote the above program in C, neither the dictionary nor its
contents would be copied into the memory of the child processes, but
in python as soon as you pass the dictionary itself or any of its
contents into a function as an argument, its reference count is
changed and the page of memory on which its reference count resides is
copied into the child process' memory.  What I am proposing is to
allow the parent process to disable reference counting for this
dictionary and its contents so that the child processes can access
them in a readonly fashion without them having to be copied.
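As a concrete (if simplified) sketch of that use case, POSIX-only and
with made-up sizes:

import os

big_table = dict((i, str(i) * 10) for i in range(1000000))   # built before the fork

children = []
for _ in range(4):
    pid = os.fork()
    if pid == 0:
        # child: read-only lookups, yet each one touches ob_refcnt,
        # so the pages holding these objects still get copied
        total = sum(len(big_table[i]) for i in range(0, 1000000, 1000))
        os._exit(0)
    children.append(pid)

for pid in children:
    os.waitpid(pid, 0)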

I disagree with your statement that COW is an optimization for a
complete clone, it is an optimization that works at the memory page
level, not at the memory image level.  In other words, if I write to a
copy-on-write page, only that page is copied into my process' address
space, not the entire parent image.  To the best of my knowledge by
preventing the child process from altering an object's reference count
you can prevent the object from being copied (assuming the object is
not altered explicitly of course.)

Hopefully this clarifies my previous post,
--jac

On Apr 8, 12:26 pm, Heiko Wundram modeln...@modelnine.org wrote:
 Am 08.04.2011 18:14, schrieb John Connor:

  Has anyone else looked into the COW problem?  Are there workarounds
  and/or other plans to fix it?  Does the solution I am proposing sound
  reasonable, or does it seem like overkill?  Does anyone foresee any
  problems with it?

 Why'd you need a fix like this for something that isn't broken? COW
 doesn't just refer to the object reference-count, but to the object
 itself, too. _All_ memory of the parent (and, as such, all objects, too)
 become unrelated to memory in the child once the fork is complete.

 The initial object reference-count state of the child is guaranteed to
 be sound for all objects (because the parent's final reference-count
 state was, before the process image got cloned [remember, COW is just an
 optimization for a complete clone, and it's up the operating-system to
 make sure that you don't notice different semantics from a complete
 copy]), and what you're proposing (opting in/out of reference counting)
 breaks that.

 --
 --- Heiko.

-- 
http://mail.python.org/mailman/listinfo/python-list


Creating unit tests on the fly

2011-04-08 Thread Roy Smith
I've got a suite of unit tests for a web application.  There's an
(abstract) base test class from which all test cases derive:

class BaseSmokeTest(unittest.TestCase):

BaseSmokeTest.setUpClass() fetches a URL (from a class attribute
"route", which must be defined in the derived classes), and there are a
number of test methods which do basic tests like checking for
reasonable-looking HTML (parsed with lxml), scanning the server log
file to make sure there are no error messages or stack dumps, etc.

Many of the test cases are nothing more than running the base test
methods on a particular route, i.e.

class Test_Page_About(BaseSmokeTest):
route = 'page/about'

Now, I want to do something a little fancier.  I want to get a
particular page, parse the HTML to find anchor tags containing
additional URLs which I want to test.  It's easy enough to pull out
the anchors I'm interested in with lxml:

selector = CSSSelector('div.st_info .st_name > a')
for anchor in selector(self.tree):
    print anchor.get('href')

I can even create new test cases from these on the fly with something
like:

 newClass = type('newClass', (BaseSmokeTest,),
                 {'route': '/my/newly/discovered/anchor'})

(credit to 
http://jjinux.blogspot.com/2005/03/python-create-new-class-on-fly.html
for that neat little trick).  The only thing I don't see is how I can
now get unittest.main(), which is already running, to notice that a
new test case has been created and add it to the list of test cases to
run.  Any ideas on how to do that?

I suppose I don't strictly need to go the "create a new TestCase on
the fly" route, but I've already got a fair amount of infrastructure
set up around that, which I don't want to redo.
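One direction that might work (a sketch, not a tested answer, and it
assumes the BaseSmokeTest class from above): skip unittest.main()'s
up-front collection for these cases and run the dynamically created
classes through a loader and runner explicitly:

import unittest

def run_dynamic_case(route):
    cls = type('SmokeTest_' + route.replace('/', '_'),
               (BaseSmokeTest,), {'route': route})
    suite = unittest.TestLoader().loadTestsFromTestCase(cls)
    return unittest.TextTestRunner().run(suite)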
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Literate Programming

2011-04-08 Thread Hans Georg Schaathun
On Fri, 8 Apr 2011 05:22:01 -0700 (PDT), Jim
  jim.heffe...@gmail.com wrote:
:  On Apr 7, 2:09 pm, Hans Georg Schaathun h...@schaathun.net wrote:
:  Has anyone found a good system for literate programming in python?
: 
:  Are you aware of pyweb http://sourceforge.net/projects/pywebtool/ ?

Interesting tool, but it solves only part of the problem.
I could use it as a replacement for pylit, but I would then still
have the problem of commenting code within a block, which is a
reST/sphinx problem.

Alternatively, I could use pyweb directly with LaTeX.  However, then
I would need to find or create macro packages which provide the 
features of reST directly in LaTeX.  Do you know of a good candidate?

-- 
:-- Hans Georg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get a PID of a child process from a process openden with Popen()

2011-04-08 Thread Nobody
On Fri, 08 Apr 2011 07:43:41 -0700, Miki Tebeka wrote:

  p = subprocess.Popen(shlex.split(kdesu -u test program))
 
 How can I aquire the PID of the program which kdesu starts?
 
 You can run ps --ppid p.pid and get the line containing test program.
 The first field there should be the child process id.

This will fail if the kdesu process has terminated at that point (the
documentation doesn't say whether it waits for the child to terminate).
Once a process's parent has terminated, its PPID will become 1 (i.e. it
will be adopted by the init process).

There isn't a robust solution to the OP's problem. It's typically
impossible to determine whether one process is an ancestor of another if
any of the intermediate processes have terminated.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-04-08 Thread Aahz
In article 4d9f32a2$1...@dnews.tpgi.com.au,
Lie Ryan  lie.1...@gmail.com wrote:
On 04/09/11 01:08, Aahz wrote:

 Actually, my take is that removing __cmp__ was a mistake.  (I already
 argued about it back in python-dev before it happened, and I see little
 point rehashing it.  My reason is strictly efficiency grounds: when
 comparisons are expensive -- such as Decimal object -- __cmp__ is
 faster.)

I don't get you... why would sorting a list using __cmp__ be faster when
comparisons are expensive?

Not sorting (because sorting only requires one comparison), but any
operation involving multiple comparisons.  Consider this:

if a  b:
x()
elif a == b:
y()
else:
z()

For a >= b, you need to make two comparisons.  Now consider this:

r = cmp(a, b)
if r < 0:
x()
elif r == 0:
y()
else:
z()
-- 
Aahz (a...@pythoncraft.com)   * http://www.pythoncraft.com/

At Resolver we've found it useful to short-circuit any doubt and just
refer to comments in code as 'lies'. :-)
--Michael Foord paraphrases Christian Muirhead on python-dev, 2009-03-22
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread r
On Sat, Apr 9, 2011 at 3:22 AM, Raymond Hettinger pyt...@rcn.com wrote:

 You could just let the exception go up to an outermost control-loop
 without handling it at all on a lower level.  That is what exceptions do
 for you: terminate all the loops, unwind the stacks, and propagate up
 to some level where the exception is caught:

  while 1:
     try:
        results = evaluator-generator(stream-of-parsed-expressions)
        for result in results:
            print(result)
     except Exception as e:
        handle_the_exception(e)

I've been thinking about something like this but the problem with
shutting down the generators is that I lose all the state information
associated with them (line numbers, have to reopen files if in batch
mode etc.). It's actually more difficult than my current solution.

 OTOH, If you want to catch the exception at the lowest level and wrap
 it up as data (the ErrorResult in your example), there is a way to
 make it more convenient.  Give the ErrorResult object some identify
 methods that correspond to the methods being called by upper levels.
 This will let the object float through without you cluttering each
 level with detect-and-reraise logic.

I'm already making something like this (that is, if I understand you
correctly). In the example below (an almost real code this time, I
made too many mistakes before) all the Expressions (including the
Error one) implement an 'eval' method that gets called by one of the
loops. So I don't have to do anything to detect the error, just have
to catch it and reraise it at each stage so that it propagates to the
next level (what I have to do anyway as the next level generates
errors as well).

class Expression(object):
def eval(self):
pass

class Error(Expression):
def __init__(self, exception):
self.exception = exception

def eval(self):
raise self.exception

def parseTokens(self, tokens):
for token in tokens:
try:
yield Expression.parseToken(token, tokens)
except ExpressionError as e:
# here is where I wrap exceptions raised during parsing and embed them
yield Error(e)

def eval(expressions, frame):
for expression in expressions:
try:
# and here (.eval) is where they get unwrapped and raised again
yield unicode(expression.eval(frame))
except ExpressionError as e:
# here they are embedded again but because it is the last stage
# text representation is fine
yield unicode(e)


   class ErrorResult:
       def __iter__(self):
           # pass through an enclosing iterator
           yield self

 Here's a simple demo of how the pass through works:

     >>> from itertools import *
     >>> list(chain([1,2,3], ErrorResult(), [4,5,6]))
    [1, 2, 3, <__main__.ErrorResult object at 0x2250f70>, 4, 5, 6]

I don't really understand what you mean by this example. Why would
making the Error iterable help embed it into the data stream? I'm
currently using the yield statement and it seems to work well (?).

Anyway, thank you all for helping me out and bringing some ideas to
the table. I was hoping there might be some pattern specifically
designed for this kind of job (exception generators anyone?), which
I've overlooked. If nothing else, knowing that this isn't the
case, makes me feel better about the solution I've chosen.

Thanks again,

-r
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: argparse and default for FileType

2011-04-08 Thread Robert Kern

On 4/8/11 1:11 PM, Paolo Elvati wrote:

Hi,

I noticed a strange behavior of argparse.
When running a simple code like the following:

import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
   -o,
   default = 'fake',
   dest = 'OutputFile',
   type = argparse.FileType('w')
  )
args = parser.parse_args()

I noticed that the default file (fake) is created every time I run the
code, even when I explicitly set the -o flag, in which case it will
produce both files.
My goal instead is to write the default file ONLY if the flag is not specified.
For the moment, I solved it simply by removing the default='fake' and
adding the required=True keyword, but I was wondering what the
correct way of doing it is (or if it is simply a bug).


It's an unfortunate (and possibly unforeseen by the author) consequence of how 
specifying defaults interacts with specifying the type. I believe the way it is 
implemented is that the default values get injected into the namespace first. In 
order to do that, the default='fake' text that you specify needs to be passed 
through the type function to get the actual value that should be in the 
namespace. *Then* the arguments are parsed and override the defaults that are 
present.


It is possible that the order of these events could be switched around; i.e. the 
arguments get parsed first, then the defaults are injected into the namespace 
for those arguments which are not present. That might have side effects that I 
am not aware of, though, or it might just be too difficult to change at this 
point. Even if it can't be changed, you might be able to suggest what could be 
added to the argparse documentation that would have helped you realize this 
limitation sooner.
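
One possible workaround, sketched here rather than taken from argparse itself:
keep the default as a plain string (or None) and only open the file after
parsing, so nothing is created unless it is actually needed:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("-o", dest="OutputFile", default=None)
    args = parser.parse_args()

    # The file is opened here, after parsing, so the default output is only
    # created when -o was not given on the command line.
    outfile = open(args.OutputFile or "fake", "w")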


Open up a bug report on the Python bug tracker and assign it to the user 
bethard, who is the author of argparse. He's usually pretty responsive.


  http://bugs.python.org/

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: Generators and propagation of exceptions

2011-04-08 Thread Raymond Hettinger
On Apr 8, 12:47 pm, r nbs.pub...@gmail.com wrote:
 Anyway, thank you all for helping me out and bringing some ideas to
 the table. I was hoping there might be some pattern specifically
 designed for this kind of job (exception generators anyone?), which
 I've overlooked. If nothing else, knowing that this isn't the
 case, makes me feel better about the solution I've chosen.

Sounds like you've gathered a bunch of good ideas and can now be
pretty sure of your design choices.

While it doesn't help your current use case, you might be interested
in the iter_except() recipe in 
http://docs.python.org/py3k/library/itertools.html#itertools-recipes

Raymond

twitter: @raymondh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Copy-on-write when forking a python process

2011-04-08 Thread Heiko Wundram
On 08.04.2011 20:34, jac wrote:
 I disagree with your statement that COW is an optimization for a
 complete clone, it is an optimization that works at the memory page
 level, not at the memory image level.  In other words, if I write to a
 copy-on-write page, only that page is copied into my process' address
 space, not the entire parent image.  To the best of my knowledge by
 preventing the child process from altering an object's reference count
 you can prevent the object from being copied (assuming the object is
 not altered explicitly of course.)

As I said before: COW for sharing a process's forked memory is simply
an implementation detail, and an _optimization_ (and of course a
sensible one at that) for fork; there is no provision in the semantics
of fork that an operating system should use COW memory-pages for
implementing the copying (and early UNIXes didn't do that; they
explicitly copied the complete process image for the child). The only
semantic that is specified for fork is that the parent and the child
have independent process images, that are equivalent copies (except for
some details) immediately after the fork call has returned successfully
(see SUSv4).

What you're thinking of (and what's generally useful in the context
you're describing) is shared memory; Python supports putting objects
into shared memory using e.g. POSH (which is an extension that allows
you to place Python objects in shared memory, using the SysV
IPC-featureset that most UNIXes implement today).
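
As an illustration of the shared-memory idea (using the stdlib multiprocessing
module rather than POSH, so this is only a sketch of the concept, not of
POSH's API):

    from multiprocessing import Process, Value

    def worker(counter):
        # The child writes to memory that is genuinely shared with the
        # parent, instead of relying on copy-on-write pages staying shared.
        with counter.get_lock():
            counter.value += 1

    if __name__ == '__main__':
        counter = Value('i', 0)          # a shared C int
        p = Process(target=worker, args=(counter,))
        p.start()
        p.join()
        print(counter.value)             # prints 1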

-- 
--- Heiko.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread candide

On 08/04/2011 18:43, Ian Kelly wrote:


x=42 is an assignment statement, not an expression.


Right, I was confusing it with C ;)

In fact, with respect to this question, the documentation makes things 
unambiguous:



-
In contrast to many other languages, not all language constructs are 
expressions. There are also statements which cannot be used as 
expressions, such as print or if. Assignments are also statements, not 
expressions.

-







In bool(x=5), x=5 is also not an expression.  It's passing the
expression 5 in as the parameter x, using a keyword argument.



You are probably right, but how do you deduce this brilliant 
interpretation from the wording given in the documentation?





--
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread Ethan Furman

candide wrote:

On 08/04/2011 18:43, Ian Kelly wrote:

In bool(x=5), x=5 is also not an expression.  It's passing the
expression 5 in as the parameter x, using a keyword argument. 


You are probably right but how do you deduce this brilliant 
interpretation from the wording given in the documentation ?


Look at your original post, which contains the excerpt from the docs 
that you put there:


 bool([x])
 Convert a value to a Boolean, using the standard truth testing
 procedure.


As you can see, the parameter name is 'x'.

~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating unit tests on the fly

2011-04-08 Thread Raymond Hettinger
On Apr 8, 12:10 pm, Roy Smith r...@panix.com wrote:
 I can even create new test cases from these on the fly with something
 like:

  newClass = type(newClass, (BaseSmokeTest,), {'route': '/my/newly/
 discovered/anchor'})

 (credit 
 tohttp://jjinux.blogspot.com/2005/03/python-create-new-class-on-fly.html
 for that neat little trick).  The only thing I don't see is how I can
 now get unittest.main(), which is already running, to notice that a
 new test case has been created and add it to the list of test cases to
 run.  Any ideas on how to do that?

The basic unittest.main() runner isn't well suited to this task.  It
flows in a pipeline of discovery -> test_suite -> test_runner.

I think you're going to need a queue of tests, with your own test
runner consuming the queue, and your on-the-fly test creator running
as a producer thread.

Writing your own test runner isn't difficult.  1) wait on the queue
for a new test case. 2) invoke test_case.run() with a TestResult
object to hold the result 3) accumulate or report the results 4)
repeat forever.
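
A rough sketch of that queue-consuming runner (the names and the sentinel
convention here are illustrative only; the real pieces used are just
unittest.TestResult and TestCase.run()):

    import threading
    import unittest
    try:
        import queue                      # Python 3
    except ImportError:
        import Queue as queue             # Python 2

    test_queue = queue.Queue()

    def runner():
        result = unittest.TestResult()
        while True:
            case = test_queue.get()       # 1) wait for a new test case
            if case is None:              # sentinel meaning "no more tests"
                break
            case.run(result)              # 2) run it, collecting into result
            # 3) report (here just a one-line summary), then 4) repeat
            print("ran=%d failures=%d errors=%d"
                  % (result.testsRun, len(result.failures), len(result.errors)))

    threading.Thread(target=runner).start()
    # The discovery code then does test_queue.put(SomeTestCase('test_method'))
    # for each test it creates, and finally test_queue.put(None).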

Raymond

twitter: @raymondh

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to use optparse without the command line?

2011-04-08 Thread Karim

On 04/07/2011 10:37 AM, markolopa wrote:

Hello,

Is there support/idioms/suggestions for using optparse without a
command line?

I have a code which used to be called through subprocess. The whole
flow of the code is based on what 'options' object from optparse
contains.

Now I want to call this code without subprocessing. What I did first
was to build a fake command-line and use

options, args = parser.parse_args(fake_cmdline)

But I find it dumb to encode and decode a dictionary... So I would
like to know if there is a good way of passing a dictionary to
optparse and benefiting from its option management (check, error
detection, etc).

Thanks a lot!
Marko
Move to the best module for args parsing: argparse. It is way, way, way 
better; there is no equivalent in any other language.
No tuple when parsing, but a simple Namespace object, and it is very easy to 
port. Go have a look in the std lib (>=v2.7).


Regards
Karim
--
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread Ben Finney
candide candide@free.invalid writes:

 On 08/04/2011 18:43, Ian Kelly wrote:

  In bool(x=5), x=5 is also not an expression. It's passing the
  expression 5 in as the parameter x, using a keyword argument.

 You are probably right but how do you deduce this brilliant
 interpretation from the wording given in the documentation ?

By also learning the language syntax. ‘foo=bar’ within the parameters to
a function call will always mean binding a value to a keyword argument.

Just as the function docstring should not spend any space to explain
what the parens mean, it should not spend any space to explain how to
pass keyword arguments.
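
A two-line illustration of that syntax rule (my example, not from the docs;
to_bool is just a stand-in function):

    def to_bool(x=False):     # 'x' is the parameter name, as in bool([x])
        return bool(x)

    to_bool(5)                # positional: 5 is bound to x by position
    to_bool(x=5)              # keyword: 5 is bound to x by name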

-- 
 \  “When [science] permits us to see the far side of some new |
  `\  horizon, we remember those who prepared the way – seeing for |
_o__)  them also.” —Carl Sagan, _Cosmos_, 1980 |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get a PID of a child process from a process openden with Popen()

2011-04-08 Thread Chris Angelico
On Sat, Apr 9, 2011 at 5:28 AM, Nobody nob...@nowhere.com wrote:
 There isn't a robust solution to the OP's problem. It's typically
 impossible to determine whether one process is an ancestor of another if
 any of the intermediate processes have terminated.

Upstart and gdb can both detect forks and follow the child. But I
think that's getting into some serious esoterica that's unlikely to be
of much practical use here.

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: using python to post data to a form

2011-04-08 Thread Karim

On 04/04/2011 01:01 PM, Corey Richardson wrote:

On 04/04/2011 01:36 AM, Littlefield, Tyler wrote:

Hello:
I have some data that needs to be fed through an HTML form to get
validated and processed and the like. How can I use python to send data
through that form, given a specific url? The form says it uses POST, but
I'm not really sure what the difference is. Would it just be:
http://mysite.com/bla.php?foo=bar&bar=foo ?
If so, how do I do that with python?


import urllib
import urllib2

url = "http://www.foo.com/"
data = {"name": "Guido", "status": "BDFL"}

data = urllib.urlencode(data)
request = urllib2.Request(url, data)
response = urllib2.urlopen(request)

page = response.read()

So yeah, passing in a Request object to urlopen that has some
urlencode'ed data in it.


Real life example:
I query for bugs:

def make_form_data(query=None):
Factory function to create a post form query to submit on a html 
webpage.


@param query - the query string from the existing query list of the 
query webpage.


return {
   'init'   : EMPTY,
   'NextForm'   : EMPTY,
   'LastForm'   : FORM_QUERY,
   'ACTION' : ACTION,
   'class'  : CLASS,
   'personalQuery'  : query,
   'sharedQuery': EMPTY,
   '.cgifields' : CONFIG_QUERY,
   '.cgifields' : SHARED_QUERY
   }

def authentication_setup(username=None, password=None, url=None):
Setup an authentication for a super-url (root url) on a given 
securised web server.


@param username  - String
@param password  - String
@param url   - String

# Password Manager creation
pwd_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()

# As we set the first parameter to None
# the Password Manager will always use
# the same combination username/password
# for the urls for which 'url' is a super-url.
pwd_manager.add_password(None, url, username, password)

# Authentication Handler creation from the Password Manager.
auth_handler = urllib2.HTTPBasicAuthHandler(pwd_manager)

# Opener creation from the Authentication Handler.
opener = urllib2.build_opener(auth_handler)

# All calls to urllib2.urlopen will now use the handler.
# Do not put the protocol in the URL, or
# HTTPPasswordMgrWithDefaultRealm will get confused.
# You must (of course) use it when you fetch the page.
urllib2.install_opener(opener)
# Authentication is now handled automatically for us.

def post_request(url=None, data=None, headers=None):
Post a request form on a given url web server.

@param url   - String
@param data  - Dictionary
@param headers   - Dictionary

@return response The web page (file) object.

if headers is None:
headers = {
  'User-Agent'   : __file__,
  'Content-Type' : 'application/x-www-form-urlencoded',
  'Accept'   : 'text/html'
   }

query   = urllib.urlencode(data) if data is not None else ''
request = urllib2.Request(url, query, headers)

response = urllib2.urlopen(request)
#print('INFO', response.info())
return response

AND MAIN CODE:

 try:
webpage = post_request(url=MAIN_URL, data=make_form_data(query))
html= webpage.read()
print('Authentication Information: access granted.')

except URLError, e:
print('Authentication Information: {msg}.'.format(msg=e))
sys.exit(1)

That's all folks Authentication+Posting a request.
make_form_data() is the most important.
To find the CGI form data dict you can use ClientForm.py (google it!); it is a 
good helper for finding form data.


Regards
Karim
--
http://mail.python.org/mailman/listinfo/python-list


Re: Creating unit tests on the fly

2011-04-08 Thread Ben Finney
Raymond Hettinger pyt...@rcn.com writes:

 I think you're going to need a queue of tests, with your own test
 runner consuming the queue, and your on-the-fly test creator running
 as a producer thread.

I have found the ‘testscenarios’ library very useful for this: bind a
sequence of (name, dict) tuples to the test case class, and each tuple
represents a scenario of data fixtures that will be applied to every
test case function in the class.

URL:http://pypi.python.org/pypi/test-scenarios
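
From memory, usage looks roughly like this (the TestWithScenarios base class
name and the behaviour under the standard runner are my recollection of the
API -- check the package docs before relying on it):

    import unittest
    import testscenarios

    class TestRoutes(testscenarios.TestWithScenarios):
        # Each (name, dict) pair becomes a separate run of every test
        # method, with the dict's keys set as attributes on the instance.
        scenarios = [
            ('home',  {'route': '/'}),
            ('about', {'route': '/about'}),
        ]

        def test_route_is_absolute(self):
            self.assertTrue(self.route.startswith('/'))

    if __name__ == '__main__':
        unittest.main()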

You (the OP) will also find the ‘testing-in-python’ discussion forum
URL:http://lists.idyll.org/listinfo/testing-in-python useful for this
topic.

-- 
 \   “Never use a long word when there's a commensurate diminutive |
  `\available.” —Stan Kelly-Bootle |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread candide

On 09/04/2011 00:03, Ethan Furman wrote:


  bool([x])
  Convert a value to a Boolean, using the standard truth testing
  procedure.
 

As you can see, the parameter name is 'x'.



OK, your response clarifies my point ;)


I didn't realize that in the bool([x]) syntax, identifier x refers to a 
genuine argument [I was considering x as referring to a generic 
object having a boolean value].



Nevertheless, compare with the definition the doc provides for the 
builtin function dir():


dir([object])
[definition omitted, just observe the declaration syntax]

Now, let's give it a try:

>>> dir(object="Explicit is better than implicit")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: dir() takes no keyword arguments


Not very meaningful, is it?



--
http://mail.python.org/mailman/listinfo/python-list


Re: Tips on Speeding up Python Execution

2011-04-08 Thread Abhijeet Mahagaonkar
That's awesome. It's time I migrated to 3 :)

On Fri, Apr 8, 2011 at 11:29 PM, Raymond Hettinger pyt...@rcn.com wrote:

 On Apr 8, 12:25 am, Chris Angelico ros...@gmail.com wrote:
  On Fri, Apr 8, 2011 at 5:04 PM, Abhijeet Mahagaonkar
 
  abhijeet.mano...@gmail.com wrote:
   I was able to isolate that the major chunk of run time is eaten up in
 opening
   webpages, reading from them and extracting text.
   I wanted to know if there is a way to concurrently call the
 functions.
 
  So, to clarify: you have code that's loading lots of separate pages,
  and the time is spent waiting for the internet? If you're saturating
  your connection, then this won't help, but if they're all small pages
  and they're coming over the internet, then yes, you certainly CAN
  fetch them concurrently. As the Perl folks say, There's More Than One
  Way To Do It; one is to spawn a thread for each request, then collect
  up all the results at the end. Look up the 'threading' module for
  details:
 
  http://docs.python.org/library/threading.html

 The docs for Python3.2 have a nice example for downloading multiple
 webpages in parallel:


 http://docs.python.org/py3k/library/concurrent.futures.html#threadpoolexecutor-example

 Raymond
 --
 http://mail.python.org/mailman/listinfo/python-list
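
For the curious, a minimal sketch along the lines of that 3.2 example (the
URLs here are placeholders, not real pages):

    import concurrent.futures
    import urllib.request

    URLS = ['http://example.com/a', 'http://example.com/b']   # placeholders

    def load(url):
        # Fetch one page; this runs in a worker thread.
        return urllib.request.urlopen(url, timeout=10).read()

    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        for url, page in zip(URLS, executor.map(load, URLS)):
            print(url, len(page), 'bytes')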

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Creating unit tests on the fly

2011-04-08 Thread Roy Smith
In article 87fwpse4zt@benfinney.id.au,
 Ben Finney ben+pyt...@benfinney.id.au wrote:

 Raymond Hettinger pyt...@rcn.com writes:
 
  I think you're going to need a queue of tests, with your own test
  runner consuming the queue, and your on-the-fly test creator running
  as a producer thread.
 
 I have found the ‘testscenarios’ library very useful for this: bind a
 sequence of (name, dict) tuples to the test case class, and each tuple
 represents a scenario of data fixtures that will be applied to every
 test case function in the class.
 
 URL:http://pypi.python.org/pypi/test-scenarios
 
 You (the OP) will also find the ‘testing-in-python’ discussion forum
 URL:http://lists.idyll.org/listinfo/testing-in-python useful for this
 topic.

That link doesn't work, I assume you meant

http://pypi.python.org/pypi/testscenarios/0.2

This is interesting, and a bunch to absorb.  Thanks.  It might be what 
I'm looking for.  For the moment, I'm running the discovery then doing 
something like

class_name = 'Test_DiscoveredRoute_%s' % cleaned_route_name
g = globals()
g[class_name] = type(class_name, bases, new_dict)

on each discovered route, and calling unittest.main() after I'm done 
doing all that.  It's not quite what I need however, so something like 
testscenarios or raymondh's test queue idea might be where this needs to 
go.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Creating unit tests on the fly

2011-04-08 Thread Ben Finney
Roy Smith r...@panix.com writes:

 In article 87fwpse4zt@benfinney.id.au,
  Ben Finney ben+pyt...@benfinney.id.au wrote:

  I have found the ‘testscenarios’ library very useful for this:
  bind a sequence of (name, dict) tuples to the test case class, and
  each tuple represents a scenario of data fixtures that will be
  applied to every test case function in the class.
  
  URL:http://pypi.python.org/pypi/test-scenarios

 That link doesn't work, I assume you meant

 http://pypi.python.org/pypi/testscenarios/0.2

You're right, I gave the wrong URL. The right one for that project (for
the show-me-the-latest-version-whichever-that-is page) is
URL:http://pypi.python.org/pypi/testscenarios/.

-- 
 \   “An idea isn't responsible for the people who believe in it.” |
  `\  —Donald Robert Perry Marquis |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Literate Programming

2011-04-08 Thread Tim Arnold
Hans Georg Schaathun h...@schaathun.net wrote in message 
news:r7b178-602@svn.schaathun.net...
 Has anyone found a good system for literate programming in python?

 I have been trying to use pylit/sphinx/pdflatex to generate
 technical documentation.  The application is scientific/numerical
 programming, so discussing maths in maths syntax in between
 python syntax is important.

snip

Hi Hans,
If you already know LaTeX, you might experiment with the *.dtx docstrip 
capability.
It has some pain points if you're developing from scratch, but I use it once 
I've got a system in reasonable shape. You have full control over the 
display and you can make the code files go anywhere you like when you run 
pdflatex on your file.
--Tim Arnold


-- 
http://mail.python.org/mailman/listinfo/python-list


python on iPad (PyPad)

2011-04-08 Thread Jon Dowdall
Hi All,

Sorry for the blatant advertising, but I hope some of you may be interested 
to know that I've created an iPad application containing the python 
interpreter and a simple execution environment. It's available in iTunes 
at http://itunes.apple.com/us/app/pypad/id428928902?mt=8#

I wanted to have python available 'on the go' without carrying a laptop. 
The current implementation is based on my need to test simple python 
functions in an isolated environment. I hope to add more iOS specific 
capabilities if there is enough interest.

Enjoy...

Jon Dowdall
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argument of the bool function

2011-04-08 Thread Lie Ryan
On 04/09/11 08:59, candide wrote:
 On 09/04/2011 00:03, Ethan Furman wrote:
 
   bool([x])
   Convert a value to a Boolean, using the standard truth testing
   procedure.
  

 As you can see, the parameter name is 'x'.
 
 
 OK, your response is clarifying my point ;)
 
 
 I didn't realize that in the bool([x]) syntax, identifier x refers to a
 genuine argument [I was considering x as referring to a generic
 object having a boolean value].
 
 
 Nevertheless, compare with the definition the doc provides for the
 builtin function dir():
 
 dir([object])
 [definition omited, just observe the declaration syntax]
 
 Now, lets make a try
 
 >>> dir(object="Explicit is better than implicit")
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 TypeError: dir() takes no keyword arguments

 
 Not very meaningful, is it?

The error says it unambiguously, dir() does not take *keyword*
arguments; instead dir() takes a *positional* argument:

dir("Explicit is better than implicit")
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Swapnil Talekar

New submission from Swapnil Talekar swapnil...@gmail.com:

In the attached program, the total memory consumption of the process goes up
each time a new subinterpreter imports a bunch of modules. When the
subinterpreter is shut down with Py_EndInterpreter, the memory consumed by the
import of modules is not returned. Hence the amount of memory consumed
keeps increasing with each loop. It goes up from about 8MB to about 11MB after
a few loops. Strangely, it doesn't rise any further.

I have tested this only for Python 2.6.5

--
components: Interpreter Core
files: test_subinterpreter.c
messages: 133292
nosy: amaury.forgeotdarc, benjamin.peterson, christian.heimes, eric.araujo, 
grahamd, haypo, loewis, ncoghlan, pitrou, python-dev, swapnil
priority: normal
severity: normal
status: open
title: Memory leak in sub-interpreters
type: resource usage
versions: Python 2.6
Added file: http://bugs.python.org/file21577/test_subinterpreter.c

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Ezio Melotti

Ezio Melotti ezio.melo...@gmail.com added the comment:

Is this the same as #222684?

--
nosy: +ezio.melotti

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Swapnil Talekar

Swapnil Talekar swapnil...@gmail.com added the comment:

No. This is not the same as #222684.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Swapnil Talekar

Changes by Swapnil Talekar swapnil...@gmail.com:


Added file: http://bugs.python.org/file21579/large_import.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Ezio Melotti

Ezio Melotti ezio.melo...@gmail.com added the comment:

Indeed, the code looks similar, but #222684 seems to be fixed, and doesn't use 
PyImport_ImportModule, so maybe the leak is there.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue222684] Memory leak creating sub-interpreters

2011-04-08 Thread Amaury Forgeot d'Arc

Amaury Forgeot d'Arc amaur...@gmail.com added the comment:

For the record, the changeset was 433458651eb4

--
nosy: +amaury.forgeotdarc

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue222684
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11757] test_subprocess.test_communicate_timeout_large_ouput failure on select(): negative timeout?

2011-04-08 Thread STINNER Victor

STINNER Victor victor.stin...@haypocalc.com added the comment:

On Friday 08 April 2011 at 05:34 +, Charles-Francois Natali
wrote:
 Charles-Francois Natali neolo...@free.fr added the comment:
 
  You may also patch poll_poll().
 
 
 Poll accepts negative timeout values, since it's the only way to
 specify an infinite wait (contrary to select, which can be passed
 NULL).

Oh, I didn't know. In this case, is my commit 3664fc29e867 correct? I
think that it is, because without the patch, subprocess may call poll()
with a negative timeout, and so it is no longer a timeout at all.

If I am correct, it is a real bug. Should it be fixed in Python 2.7, 3.1
and 3.2? ... Hum, it looks like communicate() timeout was introduced in
Python 3.3: c4a0fa6e687c. This commit has no reference to an issue: it
is the issue #5673. And as it was already written in msg130851, the doc
is wrong: the doc indicates that the feature was introduced in 3.2, but
it is 3.3 only. The change is not documented in Misc/NEWS.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11757
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5673] Add timeout option to subprocess.Popen

2011-04-08 Thread STINNER Victor

STINNER Victor victor.stin...@haypocalc.com added the comment:

I fixed a bug in _communicate_with_poll(): raise an error if the endtime-time() 
is negative. If poll() is called with a negative timeout, it blocks until it 
gets an event.
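
A quick way to see why the check matters (select.poll() takes a timeout in
milliseconds, and treats a negative value as "wait indefinitely"):

    import select

    poller = select.poll()
    # No file descriptors registered, so nothing can ever become ready.
    poller.poll(100)     # returns [] after roughly 100 milliseconds
    # poller.poll(-1)    # would block forever: a negative timeout means
    #                    # "no timeout", which is exactly the bug fixed here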

New changeset 3664fc29e867 by Victor Stinner in branch 'default':
Issue #11757: subprocess ensures that select() and poll() timeout >= 0
http://hg.python.org/cpython/rev/3664fc29e867

By the way, the doc modified by [c4a0fa6e687c] is still wrong: the timeout 
feature was introduced in 3.3, not in 3.2. And the change is not documented in 
Misc/NEWS.

@Reid Kleckner: Can you fix that? (I reopen this issue to not forget)

--
nosy: +haypo
status: closed - open

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11629] Reference implementation for PEP 397

2011-04-08 Thread Gertjan Klein

Changes by Gertjan Klein gkl...@xs4all.nl:


--
nosy: +gklein

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11629
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11629] Reference implementation for PEP 397

2011-04-08 Thread anatoly techtonik

anatoly techtonik techto...@gmail.com added the comment:

Is it possible to add a tl;dr chapter to this document? I am especially 
interested in extensive logging (to debug problems etc.).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11629
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5996] abstract class instantiable when subclassing dict

2011-04-08 Thread Nadeem Vawda

Changes by Nadeem Vawda nadeem.va...@gmail.com:


--
nosy: +nadeem.vawda

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5996
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11802] filecmp.cmp needs a documented way to clear cache

2011-04-08 Thread Nadeem Vawda

Nadeem Vawda nadeem.va...@gmail.com added the comment:

I've looked at the code for Python 3, and there isn't anything there that
prevents this from happening there, either. So the fix should be applied
to 3.2 and 3.3 as well.

An alternative approach would be to limit the size of the cache, so that
the caller doesn't need to explicitly clear the cache. Something along
the lines of functools.lru_cache() should do the trick. I don't think
it'll be possible to use lru_cache() itself, though - it doesn't provide
a mechanism to invalidate cache entries when they become stale (and in
any case, it doesn't exist in 2.7).
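
A very rough sketch of what a size cap could look like (just the shape of the
idea, not a patch; the names here are made up for illustration):

    _cache = {}
    _CACHE_MAX = 100        # arbitrary cap, for illustration only

    def _cached(key, compute):
        # 'key' identifies the pair of files being compared;
        # 'compute' is the function doing the real comparison.
        if key not in _cache:
            if len(_cache) >= _CACHE_MAX:
                _cache.clear()          # crude, but it bounds memory use
            _cache[key] = compute()
        return _cache[key]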

--
nosy: +nadeem.vawda
stage:  - needs patch
type:  - resource usage
versions: +Python 3.2, Python 3.3

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11802
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11804] expat parser not xml 1.1 (breaks xmlrpclib)

2011-04-08 Thread Panos Christeas

New submission from Panos Christeas x...@hellug.gr:

The expat library (at the C level) is not XML 1.1 compliant, meaning that
it won't accept the characters \x01-\x08, \x0b, \x0c and \x0e-\x1f.
At the same time, ElementTree (or custom XML creation, such as in 
xmlrpclib.py:694) allows these characters to pass through. They will get blocked 
on the receiving side.
Since 2.7, the expat library is the default parser for xml-rpc, so
this is a regression, IMHO. According to the network principle, we should
accept these characters gracefully.

The attached test script demonstrates that we're not XML 1.1 compliant (but 
instead enforce the stricter 1.0 rules).
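
In the same spirit as the attached script (this is a stand-alone sketch, not
the attached file), expat's rejection of an XML-1.1-only control character can
be seen with:

    import xml.parsers.expat

    parser = xml.parsers.expat.ParserCreate()
    try:
        # \x08 is allowed by XML 1.1 but not by XML 1.0, and expat
        # only implements the 1.0 rules.
        parser.Parse('<r>\x08</r>', True)
    except xml.parsers.expat.ExpatError as e:
        print('rejected: %s' % e)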

References:
http://bugs.python.org/issue5166
http://en.wikipedia.org/wiki/Valid_characters_in_XML

--
components: XML
files: expat-test.py
messages: 133301
nosy: xrg
priority: normal
severity: normal
status: open
title: expat parser not xml 1.1 (breaks xmlrpclib)
type: behavior
versions: Python 2.7
Added file: http://bugs.python.org/file21580/expat-test.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11804
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11794] Backport new logging docs to 2.7

2011-04-08 Thread Vinay Sajip

Changes by Vinay Sajip vinay_sa...@yahoo.co.uk:


--
assignee: docs@python - vinay.sajip
nosy: +vinay.sajip

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11794
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11571] Turtle window pops under the terminal on OSX

2011-04-08 Thread Ronald Oussoren

Ronald Oussoren ronaldousso...@mac.com added the comment:

On 07 Apr, 2011,at 07:03 PM, Alexander Belopolsky rep...@bugs.python.org 
wrote:

Alexander Belopolsky belopol...@users.sourceforge.net added the comment:

While you are at it, can you also fix the same issue with python -m tkinter?
 
Sure, I can add a hack to that module as well.

Ronald

--
Added file: http://bugs.python.org/file21581/unnamed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11571
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11794] Backport new logging docs to 2.7

2011-04-08 Thread Roundup Robot

Roundup Robot devnull@devnull added the comment:

New changeset 6fb033af9310 by Vinay Sajip in branch '2.7':
Issue #11794: Reorganised logging documentation.
http://hg.python.org/cpython/rev/6fb033af9310

--
nosy: +python-dev

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11794
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11794] Backport new logging docs to 2.7

2011-04-08 Thread Vinay Sajip

Changes by Vinay Sajip vinay_sa...@yahoo.co.uk:


--
resolution:  - fixed
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11794
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue3136] [PATCH] logging.config.fileConfig() compulsivly disable all existing loggers

2011-04-08 Thread Matt Joiner

Changes by Matt Joiner anacro...@gmail.com:


--
nosy: +anacrolix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3136
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11802] filecmp.cmp needs a documented way to clear cache

2011-04-08 Thread R. David Murray

R. David Murray rdmur...@bitdance.com added the comment:

Putting in a size limit is reasonable.  We did this for fnmatch not that long 
ago (issue 7846).  That was in fact the inspiration for lru_cache.

--
nosy: +r.david.murray

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11802
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11800] regrtest --timeout: apply the timeout on a function, not on the whole file

2011-04-08 Thread STINNER Victor

STINNER Victor victor.stin...@haypocalc.com added the comment:

Simplify the test and set the default timeout to 15 min.

--
Added file: http://bugs.python.org/file21582/regrtest_timeout-3.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11800
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11800] regrtest --timeout: apply the timeout on a function, not on the whole file

2011-04-08 Thread STINNER Victor

Changes by STINNER Victor victor.stin...@haypocalc.com:


Removed file: http://bugs.python.org/file21575/regrtest_timeout-2.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11800
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11800] regrtest --timeout: apply the timeout on a function, not on the whole file

2011-04-08 Thread Michael Foord

Michael Foord mich...@voidspace.org.uk added the comment:

Victor's reasons for wanting per-test timeout rather than per-file seem sound. 
Need to review the patch to see how much extra complexity it actually 
introduces (although on a casual reading the new custom result object it 
introduces is trivially simple, so not a maintenance burden).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11800
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Nick Coghlan

Nick Coghlan ncogh...@gmail.com added the comment:

As a first guess, I would suspect that this is just badly fragmenting the heap 
and we aren't freeing up any arenas to pass back to the OS.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Jesús Cea Avión

Jesús Cea Avión j...@jcea.es added the comment:

The fact that the leak doesn't grow seems to confirm Nick's supposition.

Close as invalid?

--
nosy: +jcea

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Amaury Forgeot d'Arc

Amaury Forgeot d'Arc amaur...@gmail.com added the comment:

Most builtin modules keep static references to common objects: exceptions, 
types, etc.  These references are currently never freed, but are reused by all 
sub-interpreters.
If the memory usage stays stable, even after many calls to 
Py_NewInterpreter()/Py_EndInterpreter(), this is not a bug!

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11800] regrtest --timeout: apply the timeout on a function, not on the whole file

2011-04-08 Thread STINNER Victor

STINNER Victor victor.stin...@haypocalc.com added the comment:

With the current implementation of faulthandler.dump_tracebacks_later(), using a
timeout per function means creating a new thread for each function (the thread is
interrupted and exits when the test is done).

Quick benchmark with test_io on Linux (lowest real time of 3 runs):

- time ./python Lib/test/regrtest.py -v (with timeout): 1 min 3 sec
- time ./python Lib/test/regrtest.py -v, without timeout: 1 min 3 sec
- time ./python Lib/test/test_io.py (without timeout): 1 min 3 sec

test_io has 403 tests and 8 are skipped, so it looks like the creation of ~400
threads is really fast on Linux! test_io is one of the biggest test suites, I
think.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11800
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11804] expat parser not xml 1.1 (breaks xmlrpclib)

2011-04-08 Thread Antoine Pitrou

Changes by Antoine Pitrou pit...@free.fr:


--
nosy: +loewis

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11804
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Benjamin Peterson

Benjamin Peterson benja...@python.org added the comment:

Please don't add everyone in existence to the nosy list.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11803] Memory leak in sub-interpreters

2011-04-08 Thread Benjamin Peterson

Changes by Benjamin Peterson benja...@python.org:


--
nosy:  -benjamin.peterson

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11803
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


