lockfile 0.1 - platform-independent advisory locks for Python

2007-11-04 Thread skip
Python's distribution supports a few different ways to lock files.  None are
platform-independent though.  A number of implementations of file locks are
out there, but as an exploration of the possibilities I wrote the lockfile
module.  It offers these features:

* A platform-independent API
* Three classes implementing file locks
  - LinkFileLock - relies on atomic nature of the link(2) system call.
  - MkdirFileLock - relies on atomic nature of the mkdir(2) system call.
  - SQLiteFileLock - provides locks through a SQLite database.
* Locking is done on a per-thread basis by default, though locking can
  be done on a per-process basis instead.
* Context manager support.
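
For illustration, a minimal usage sketch (the constructor argument,
acquire/release spelling and context-manager use below are assumptions based
on the feature list, not the documented 0.1 API):

from __future__ import with_statement   # needed on Python 2.5
import lockfile

lock = lockfile.MkdirFileLock("somefile")   # or LinkFileLock / SQLiteFileLock
lock.acquire()
try:
    pass    # work with somefile while holding the lock
finally:
    lock.release()

# or, using the context manager support:
with lock:
    pass    # the lock is released when the block exits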

The module contains documentation in ReST format and a decent set of test
cases written using doctest.  Almost all the test cases are attached to the
_FileLock base class, so other implementations (e.g., using different
relational databases or object databases like ZODB) can be sure they satisfy
the same constraints.

The lockfile module is available from PyPI:

http://pypi.python.org/pypi/lockfile/

I welcome feedback from anyone who gives it a whirl.  The code is stored in
a Mercurial repository on my laptop, so, subject to occasional access
problems when I'm on the train, others are welcome to have access to the
source code.  If that's of interest to you, drop me a note.

-- 
Skip Montanaro - [EMAIL PROTECTED] - http://www.webfast.com/~skip/
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


Roundup Issue Tracker release 1.4.0

2007-11-04 Thread Richard Jones
I'm proud to release version 1.4.0 of Roundup.

The metakit backend has been removed due to lack of maintenance and the
presence of good alternatives (in particular, sqlite built into Python 2.5).

New Features in 1.4.0:

- Roundup has a new xmlrpc frontend that gives access to a tracker using
  XMLRPC (a usage sketch follows this list).
- Dates can now be in the year-range 1-
- Add simple anti-spam recipe to docs
- Allow customisation of regular expressions used in email parsing, thanks
  Bruno Damour
- Italian translation by Marco Ghidinelli
- Multilinks take any iterable
- config option: specify port and local hostname for SMTP connections
- Tracker index templating (i.e. when roundup_server is serving multiple
  trackers) (sf bug 1058020)
- config option: Limit nosy attachments based on size (Philipp Gortan)
- roundup_server supports SSL via pyopenssl
- templatable 404 not found messages (sf bug 1403287)
- Unauthorized email includes a link to the registration page for
  the tracker
- config options: control whether author info/email is included in email
  sent by roundup
- support for receiving OpenPGP MIME messages (signed or encrypted)
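
For illustration, a client could talk to the new frontend with the standard
library's xmlrpclib. This is only a sketch: the server URL and the method
names ('list', 'display') are assumptions, not the documented interface.

import xmlrpclib

roundup = xmlrpclib.ServerProxy('http://localhost:8080/demo/xmlrpc',
                                allow_none=True)
print roundup.list('issue')                         # ids of all issues
print roundup.display('issue1', 'title', 'status')  # a few properties of one issue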

There's also a ton of bugfixes.

If you're upgrading from an older version of Roundup you *must* follow
the Software Upgrade guidelines given in the maintenance documentation.

Roundup requires python 2.3 or later for correct operation.

To give Roundup a try, just download (see below), unpack and run::

roundup-demo

Release info and download page:
 http://cheeseshop.python.org/pypi/roundup
Source and documentation is available at the website:
 http://roundup.sourceforge.net/
Mailing lists - the place to ask questions:
 http://sourceforge.net/mail/?group_id=31577


About Roundup
=============

Roundup is a simple-to-use and -install issue-tracking system with
command-line, web and e-mail interfaces. It is based on the winning design
from Ka-Ping Yee in the Software Carpentry Track design competition.

Note: Ping is not responsible for this project. The contact for this
project is [EMAIL PROTECTED]

Roundup manages a number of issues (with flexible properties such as
description, priority, and so on) and provides the ability to:

(a) submit new issues,
(b) find and edit existing issues, and
(c) discuss issues with other participants.

The system will facilitate communication among the participants by managing
discussions and notifying interested parties when issues are edited. One of
the major design goals for Roundup is that it be simple to get going. Roundup
is therefore usable out of the box with any python 2.3+ installation. It
doesn't even need to be installed to be operational, though a
distutils-based install script is provided.

It comes with two issue tracker templates (a classic bug/feature tracker and
a minimal skeleton) and five database back-ends (anydbm, sqlite, metakit,
mysql and postgresql).

-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


ANN: DocIndexer 0.9.1.0 released

2007-11-04 Thread Stuart Rackham
DocIndexer now handles unicode (the previous release was only really
comfortable with ascii). A full list of changes is in the CHANGELOG.

What is it?
-----------
DocIndexer is a document indexer toolkit that uses the PyLucene search
engine for indexing and searching document files. DocIndexer includes
command-line utilities, Python index and search classes plus a Win32
COM server that can be used to integrate indexing and searching into
application software. The current version has parser support for
Microsoft Word, HTML, PDF and plain text documents.

Runtime Requisites
------------------
Win32: None (compiled binary distribution).
Linux: Python 2.5, PyLucene 2, antiword and poppler-utils.

License
-------
MIT

URLs
----
Homepage:http://www.methods.co.nz/docindexer/
SourceForge: http://sourceforge.net/projects/docindexer/


Cheers,
Stuart
---
Stuart Rackham
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


[ANN] Scope_Plot, another plot library for real time signals.

2007-11-04 Thread Stef Mientki
hello,

I just finished another plot library, called Scope_Plot, based on 
wxPython.

Scope_Plot is specially meant for displaying real time signals,
and therefore has some new functionalities:
- signal selection
- each signal has its own scale
- moving erase block
- measurement cursor
and it should be a lot faster than MatPlot, Plot and FloatCanvas,
at least for real time signals (because it only draws the changes).

An animated demo can be seen here (2 MB, 2:10):
  http://stef.mientki.googlepages.com/jalspy_scope.html

A description of the library, as used in an application, can be found here:
  http://oase.uci.kun.nl/~mientki/data_www/pic/jalspy/jalspy_scope.html

And finally the file (and a few necessary libs) can be found here:
  http://oase.uci.kun.nl/~mientki/download/Scope_Plot.zip
The library has a main section which contains a simple demo; the 
animated demo and the application description show a more complex 
signal organization.

cheers,
Stef Mientki
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


Re: how to iterate through each set

2007-11-04 Thread Tom_chicollegeboy
I figured out the problem. Here is my code. Now it works as it should!
Thank you everyone!

def speed():
    infile = open('speed.in', 'r')
    line = infile.readline()
    read = int(line)
    while '-1' not in line:
        i = 0
        t0 = 0
        v = 0
        if len(line.split()) == 1:
            read = int(line)
        while i < read:
            line = infile.readline()
            list1 = line.split()
            s = int(list1[0])      # speed over this segment
            t1 = int(list1[1])     # time at the end of the segment
            t = t1 - t0
            v += s * t             # accumulate distance
            i += 1
            t0 = t1
        print v
        line = infile.readline()
    infile.close()

speed()

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: bsddb in python 2.5.1

2007-11-04 Thread Martin v. Löwis
 I know that when you upgrade Berkeley DB you're supposed to go through
 steps solving this problem,but I wasn't expecting an upgrade. I've
 tried to use different versions bsddb3, 4.4 and 4.5, (instead of bsddb
 that comes with python 2.5.1) with different versions of Berkeley DB
 installs (4.5 and 4.4 - built from source into /usr/local).

There seems to be an important misconception here. Python 2.5.1 does
not come with any bsddb version whatsoever. If you have a Python binary
where the bsddb module is linked with a certain version of Python, that
was the choice of whoever made the Python binary.

For access to the database, the Python version does not matter at all.
What matters is the version of Berkeley DB.

So as the first step, you should find out what version of Berkeley DB
the old Python installation was using, and what version of Berkeley DB
the new version is using.
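
One way to check this from Python itself (a sketch; verify these names
against your own build of the bsddb wrapper):

import bsddb

print bsddb.__version__            # version of the bsddb/pybsddb wrapper
print bsddb.db.DB_VERSION_STRING   # version of the underlying Berkeley DB library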

I'm also not sure what you mean by "I've tried to use different versions
bsddb3, 4.4 and 4.5". What specifically did you do to try them? AFAICT,
Ubuntu Feisty Fawn linked its Python with bsddb 4.4, so you should
have no version issues if you really managed to use bsddb 4.4.

Can you please report the specific error you got? According to the
Berkeley DB documentation, there was no change to database formats
in Berkeley DB 4.5 (but there was a change to the log file format).

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to iterate through each set

2007-11-04 Thread Cameron Walsh
Tom_chicollegeboy wrote:
 I figured out problem. here is my code. now it works as it should!
 Thank you everyone!

I decided my 4th clue earlier was too much, so I removed it before 
posting.  It looks like you got it anyway =)

You've now solved it the way the course instructor intended you to solve 
it.  I would have done it this way:

def speed():
    journeys = []
    for line in open('speed.in', 'r').readlines():
        line = list(int(i) for i in line.split())
        if len(line) == 1:
            if line[0] == -1:
                break
            time = 0
            journeys.append(0)
        else:
            journeys[-1] += (line[1] - time) * line[0]
            time = line[1]
    for journey in journeys:
        print journey

speed()
-- 
http://mail.python.org/mailman/listinfo/python-list


How to know more about an exception?

2007-11-04 Thread Timmy
Hi, 
   I have a question about exceptions in python.
   I know that an exception can be re-raised. 
Is there any simple way, provided by python itself, to know whether the current 
exception has just occurred for the first time or has been re-raised by a 
previous handler? I need to know this because I want to display an error 
message only when the exception first occurs, and skip the error message if 
it was just re-raised by a previous handler.

Thanks!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Kay Schluehr
On 4 Nov., 03:07, Neil Cerutti [EMAIL PROTECTED] wrote:
 On 2007-11-04, Just Another Victim of the Ambient Morality



 [EMAIL PROTECTED] wrote:

  Neil Cerutti [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
  On 2007-11-03, Paul McGuire [EMAIL PROTECTED] wrote:
  On Nov 3, 12:33 am, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED] wrote:
  It has recursion in it but that's not sufficient to call it a
  recursive
  descent parser any more than having a recursive implementation of the
  factorial function.  The important part is what it recurses through...

 snip

  In my opinion, the rule set I mentioned in my original post:

  grammar = OneOrMore(Word(alphas)) + Literal('end')

  ...should be translated (internally) to something like this:

  word = Word(alphas)
  grammar = Forward()
  grammar << ((word + grammar) | (word + Literal("end")))

  This allows a recursive search to find the string correct without
  any
  need for backtracking, if I understand what you mean by that.
  Couldn't
  pyparsing have done something like this?

  Dear JAVotAM -

  This was a great post!  (Although I'm not sure the comment in the
  first paragraph was entirely fair - I know the difference between
  recursive string parsing and recursive multiplication.)

  You really got me thinking more about how this recursion actually
  behaves, especially with respect to elements such as OneOrMore.  Your
  original question is really quite common, and up until now, my stock
  answer has been to use negative lookahead.  The expression you posted
  is the first time I've seen this solution, and it *does* work.

  Is there not an ambiguity in the grammar?

  In EBNF:

   goal --> WORD { WORD } END

   WORD is '[a-zA-Z]+'
   END is 'end'

  I think it is fine that PyParsing can't guess what the composer
  of that grammar meant.

  First, I don't know if that constitutes an ambiguity in the
  grammar. 'end' is a word but it's unambiguous that this grammar
  must end in a literal 'end'.  You could interpret the input as
  just a sequence of words or you could interpret it as a
  sequence of words terminated by the word 'end'.

 Yeah. If it were a regex, it would be: '[ab]+b'. That is a fine
 regex, because a regex is generally just a recognizer; the
 ambiguity doesn't have to do with recognizing the language.  But
 PyParsing is parser. The ambiguity is in deciding what's a
 Word(alphas) and what's an 'end'.

  One interpretation conforms to the grammar while the other
  doesn't. You would assume that the interpretation that agrees
  with the grammar would be the preferable choice and so should
  the program. Secondly, even if it is an ambiguity... so what?
  pyparsing's current behaviour is to return a parse error,
  pretending that the string can't be parsed.  Ideally, perhaps
  it should alert you to the ambiguity but, surely, it's better
  to return _a_ valid parsing than to pretend that the string
  can't be parsed at all...

 I wouldn't characterize it as pretending. How would you parse:

   hello end hello end

 WORD END WORD END and WORD WORD WORD END are both valid
 interpretations, according to the grammar.

 As soon as you remove the ambiguity from the grammar, PyParsing
 starts to work correctly.

I think you are correct about this. But I'm not sure how much it shall
matter. Just take a look at Pythons Grammar

http://svn.python.org/view/python/trunk/Grammar/Grammar?rev=55446&view=markup

Without special keyword treatment this grammar would be ambiguous and
couldn't be parsed using an LL(1) parser. The grammar compiler which
builds the parser tables creates a special label for each keyword.
This label is filtered when a NAME token is fed into the parser.
With the label that belongs to e.g. 'if' or 'while' the correct
statement can be selected in constant time. Same happens when I use
the parser generator with your EBNF grammar. With a little more
adaption also NUMBER token could be filtered. But this would be
overdesign.

Theoretical beauty is compromised here using reasonable default
assumptions for keeping the grammar simple ( convention over
configuration to borrow a Rails slogan ).

Tokenization is another issue in Python. It is indeed somewhat special
due to significant whitespace and line continuation but tokenization
is conceptually much simpler and one would likely throw all kinds of
options and special case handling in the lexical analysis phase.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to know more about an exception?

2007-11-04 Thread Diez B. Roggisch
Timmy schrieb:
 Hi, 
I has a question about exception in python.
I know that an exception can be re-raised. 
 Is there any simple way provided by python itself that I can know the current 
 exception is 
 just firstly occurred or it is re-raised by previous exception?
 I need to know whether the exception is firstly occurred or not because I 
 want to just display an error message if it's
 firstly occur and skip display error message if it is just re-raised by 
 previous exception.

No, that's not possible. What you can do is either

 - wrap the exception instead of re-raising it

 - set some property on the exception before re-raising.
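
A minimal sketch of both options (the exception class, the work() function
and the attribute name are made up for illustration):

class WrappedError(Exception):
    """Raised in place of an exception that was already reported."""

def work():
    raise ValueError("boom")

def do_work_wrapping():
    # option 1: wrap the original exception instead of re-raising it
    try:
        work()
    except ValueError, exc:
        raise WrappedError(exc)

def do_work_marking():
    # option 2: mark the exception object before re-raising it
    try:
        work()
    except ValueError, exc:
        if not getattr(exc, 'already_reported', False):
            print "error:", exc          # report only on first occurrence
            exc.already_reported = True
        raise                            # re-raise the same exception object

try:
    do_work_wrapping()
except WrappedError, wrapped:
    print "caller sees the wrapper:", wrapped

try:
    do_work_marking()
except ValueError, exc:
    print "already reported?", getattr(exc, 'already_reported', False)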


Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Low-overhead GUI toolkit for Linux w/o X11?

2007-11-04 Thread Bjoern Schliessmann
Grant Edwards wrote:
 On 2007-11-04, Bjoern Schliessmann

 Erm, wxWidgets is implemented in C++
 
 Are you saying C++ software can't be large and slow?

No, but wxWidgets is quite mature and my experience is that it's
faster than Qt (partly, I think, because it always uses the native
widgets).
 
 and wxPython is just a wrapper.
 
 Yes, I know.  If we though Python was the problem, I wouldn't
 be asking about other toolkits that had Python bindings.

Ah, you know more than you wrote? If you've done measurements, I'd
find them quite interesting to see.
 
Regards,


Björn

-- 
BOFH excuse #295:

The Token fell out of the ring. Call us when you find it.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: (MAC) CoreGraphics module???

2007-11-04 Thread David C. Ullrich
On Fri, 02 Nov 2007 14:09:25 -0500, Robert Kern
[EMAIL PROTECTED] wrote:

David C. Ullrich wrote:
 [???]

Okay, which version of OS X do you have? In 10.3 and 10.4 it used to be here:
/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/plat-mac/CoreGraphics.py

I notice that in 10.5, it no longer exists, though.

Um, surely that doesn't mean that there's no CoreGraphics
available in 10.5?

[...]

For scripts executed from the terminal, you could start them with a hash-bang 
line:

  #!/usr/bin/python

Use chmod u+x on the script, and then you can execute it like any other
program from the terminal.

Sure enough: As I suspected yesterday morning, the reason that didn't
work the first time I tried it is that I'd already corrected the
import statement to Carbon.CoreGraphics. When I put it back the way
it's supposed to be everything's great.

Such a happy camper - I now have a pdf consisting of a blank white
page with one red circle in the middle! Something I've always 
wanted - thanks.





David C. Ullrich
-- 
http://mail.python.org/mailman/listinfo/python-list


how does google search in phrase

2007-11-04 Thread [EMAIL PROTECTED]
hi my friends;
Google can search within phrases, but that seems impossible: it has a huge
number of pages in its database and quadrillions of sentences, so it can't
do a full-text search over all of them. It must need a super algorithm. I
need that algorithm now. If you have an idea, please share it with me.
thanks
(sorry for my bad english :(  )

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Neil Cerutti
On 2007-11-04, Kay Schluehr [EMAIL PROTECTED] wrote:
 On 4 Nov., 03:07, Neil Cerutti [EMAIL PROTECTED] wrote:
 I wouldn't characterize it as pretending. How would you parse:

   hello end hello end

 WORD END WORD END and WORD WORD WORD END are both valid
 interpretations, according to the grammar.

 As soon as you remove the ambiguity from the grammar, PyParsing
 starts to work correctly.

 I think you are correct about this. But I'm not sure how much
 it shall matter. Just take a look at Pythons Grammar

 http://svn.python.org/view/python/trunk/Grammar/Grammar?rev=55446&view=markup

 Without special keyword treatment this grammar would be
 ambigous and couldn't be parsed using an LL(1) parser. 

I agree. I don't know how easy it is to create keywords using
PyParsing, or if the grammar in question would still be
considered correct by the author.

 The grammar compiler which builds the parser tables creates a
 special label for each keyword. This label is filtered when a
 NAME token is feeded into the parser. With the label that
 belongs to e.g. 'if' or 'while' the correct statement can be
 selected in constant time. Same happens when I use the parser
 generator with your EBNF grammar. With a little more adaption
 also NUMBER token could be filtered. But this would be
 overdesign.

 Theoretical beauty is compromised here using reasonable default
 assumptions for keeping the grammar simple ( convention over
 configuration to borrow a Rails slogan ).

Keywords are practically ubiquitous. I never thought of them as
unbeautiful before.

 Tokenization is another issue in Python. It is indeed somewhat
 special due to significant whitespace and line continuation but
 tokenization is conceptually much simpler and one would likely
 throw all kinds of options and special case handling in the
 lexical analysis phase.

It might be a quick fix in PyParsing, which includes a Keyword
type, but without the semantics that are needed in this case. You
have to use (as suggested earlier) negative lookahead in either a
Regex or with the NotAny type.

>>> goal = OneOrMore(NotAny(Literal('end')) + Word(alphas)) + Literal('end')
>>> goal.parseString('hello hello hello end')
(['hello', 'hello', 'hello', 'end'], {})
>>> goal.parseString('hello end hello end')
(['hello', 'end'], {})

No scanner/tokenizer needed! ;)

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python good for data mining?

2007-11-04 Thread Jens
  What if I were to use my Python libraries with a web site written in
  PHP, Perl or Java - how do I integrate with Python?

 Possibly the simplest way would be python .cgi files.  The cgi and cgitb
 modules allow form data to be read fairly easily.  Cookies are also
 fairly simple.  For a more complicated but more customisable approach,
 you could look in to the BaseHTTPServer module or a socket listener of
 some sort, running that alongside the webserver publicly or privately.
 Publicly you'd have links from the rest of your php/whatever pages to
 the python server.  Privately the php/perl/java backend would request
 data from the local python server before feeding the results back
 through the main server (apache?) to the client.

Thanks a lot! I'm not sure I completely understand your description of
how to integrate Python with, say, PHP. Could you please give a small
example? I have no experience with Python web development using CGI.
How easy is it compared to web development in PHP?

I still haven't made up my mind about the choice of programming
language for my data mining project. I think it's a difficult
decision. My heart tells me Python and my head tells me Java :-)

-- 
http://mail.python.org/mailman/listinfo/python-list


how to keep order key in a dictionary

2007-11-04 Thread azrael
I'm currently working on a project for which it would be great to use
a dictionary. At the beginning I have a list of strings that should
represent the keys in the dictionary. When I try to create a
dictionary it rearranges the keys. For this dictionary it is really
important to keep the right order. Is it possible to arrange them in a
specific order?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to keep order key in a dictionary

2007-11-04 Thread Jeff McNeil
See http://www.python.org/doc/faq/general/#how-are-dictionaries-implemented 
.  In short, keys() and items() return an arbitrary ordering.  I think  
that http://pypi.python.org/pypi/Ordered%20Dictionary/ will do what  
you want if key ordering is a necessity.
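
If pulling in a package is overkill, the usual workaround is to keep the
order in a separate list of keys. A minimal sketch (names are just for
illustration):

keys = []
data = {}

def ordered_put(key, value):
    if key not in data:
        keys.append(key)        # remember insertion order ourselves
    data[key] = value

ordered_put('first', 1)
ordered_put('second', 2)
ordered_put('third', 3)

for key in keys:                # iterate in the order the keys were added
    print key, data[key]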

Jeff
On Nov 4, 2007, at 8:19 AM, azrael wrote:

 I 'm currenty working on a project for which it would be great to use
 a dictionary. At the begining I have a list of strings that should
 represent the keys in the dictionary. When I try to create a
 dictionary it rearanges the keys. For this dictionary it is realy
 important to keep the right order. Is it possible to arange them in a
 specific order?

 -- 
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: setting variables in outer functions

2007-11-04 Thread Bruno Desthuilliers
Hrvoje Niksic a écrit :
 Chris Mellon [EMAIL PROTECTED] writes:
 
 
I have no idea why someone who already has a working, object system
would want to implement their own on top of closures.
 
 
 This subthread is getting ridiculous -- closures are *not* useful only
 for implementing object systems!  The object system thing is just a
 method of teaching abstractions popularized by SICP.  Abstractions
 normally taken for granted, such as objects, or even expression
 evaluation, are implemented from scratch, using only the most basic
 primitives available.  In a Lisp-like language with lexical scope, the
 most basic storage primitive is the lexical environment itself[1].
 
 In real-life code, closures are used to implement callbacks with
 automatic access to their lexical environment without the need for the
 bogus additional void * argument one so often sees in C callbacks,
 and without communication through global variables.  If the callbacks
 can access variables in the outer scope, it's only logical (and
 useful) for them to be able to change them.  Prohibiting modification
 reduces the usefulness of closures and causes ugly workarounds such as
 the avar[0] pattern.
 
 If closures were useful only for implementing bogus object systems,
 neither they nor nonlocal would have made it to Python in the first
 place.
 

Indeed. But please read carefully the classic piece of wisdom quoted by 
Diez in his answer. Then remember that in Python, callable objects 
don't need to be functions, and most of what can be done (and is 
usually done in FPLs) can be done with objects. Look at the sample 
Python implementation of partial evaluation, or at all the so-called 
decorators that are not functions but really classes.

I do use Python's closures quite a lot myself, because it's often 
simpler than doing it the OO way. But for anything slightly more 
involved, I prefer to write my own classes, and I'll still do so if and 
when Python grows full-blown closures.
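
For readers who haven't met the avar[0] pattern mentioned above, a small
sketch of the two styles (a counter, purely for illustration):

# closure style: mutate a one-element list, since Python 2 has no 'nonlocal'
def make_counter():
    count = [0]
    def increment():
        count[0] += 1       # rebinding 'count' itself wouldn't be visible outside
        return count[0]
    return increment

# class style: the state simply lives on an instance
class Counter(object):
    def __init__(self):
        self.count = 0
    def __call__(self):
        self.count += 1
        return self.count

tick = make_counter()
tock = Counter()
print tick(), tick()    # 1 2
print tock(), tock()    # 1 2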
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Bruno Desthuilliers
Bjoern Schliessmann a écrit :
 Bruno Desthuilliers wrote:
 
Bjoern Schliessmann a écrit :
 
 
You can't just declare in Python, you always define objects (and
bind a name to them).

def toto():
   global p
   p = 42

Here I declared 'x' as global without defining it.
 
 
 Ah well, someone had to notice it ...
 
 BTW, where's x? :)

Sorry. s/x/p/g, of course.

 
Yes, globals need to be defined before you
can access them using global.

For which definition of 'defined' ?
 
 
 to define a name: to bind an object to a name

Then - according to this definition - your assertion is false.

 
>>> p
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'p' is not defined
>>> toto()
>>> p
42
 
 
 
 Easy, p = 42 is a definition. 

Yes, but it came *after* the global statement. So the global statement 
*is* a kind of declaration, and the 'subject' of this declaration 
doesn't need to be previously defined.


 
But anyway: in Python, everything's an object, so the only 
thing that makes functions a bit specials is that they are
callable - as are classes, methods, and every instance of a class
implementing __call__ FWIW. So saying 'variables holds data,
functions do stuff' is unapplyiable to Python.
 
 
 I don't think so. Even if everything is an object, there is still a
 concept behind the words function and variable.

f = lambda x: x+2

def toto(c):
   return c(21)

toto(f)

What's f ? A function or a variable ?-)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python good for data mining?

2007-11-04 Thread Cameron Walsh
Jens wrote:
 
 Thanks a lot! I'm not sure I completely understand your description of
 how to integrate Python with, say PHP. Could you please give a small
 example? I have no experience with Python web development using CGI.
 How easy is it compared to web development in PHP?
 
 I still havent't made my mind up about the choice of programming
 language for my data mining project. I think it's a difficult
 decision. My heart tells me Python and my head tells me Java :-)
 

My C++ lecturer used to tell us 'C++ or Java?' is never the question. 
For that matter, Java is never the answer.

As for python and cgi, it's pretty simple.  Instead of a .php file to be 
handled by the php-handler, you have a .cgi file which is handled by the 
cgi-handler.  Set the action of your html form to the .cgi file.  At the 
top of the .cgi file, you'll need a line like:

#!/usr/bin/env python

Which tells it to use python as the interpreter.  You'll need a few imports:

import cgi
import cgitb; cgitb.enable()  # for debugging - it htmlises
                              # your exceptions and error messages.
print "Content-type: text/html; charset=iso-8859-1\n"
# You need that line or something similar so the browser knows what to
# do with the output of the script.

Everything that's printed by the python script goes straight to the 
client's browser, so the script will have to print html.  The cgi module 
handles form data, typically formdata = cgi.FieldStorage() will be 
filled when a form is sent to the script.  print it and see what's in it.
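
Putting those pieces together, a minimal sketch of a complete .cgi script
(the form field name 'query' is an assumption):

#!/usr/bin/env python
import cgi
import cgitb; cgitb.enable()

print "Content-type: text/html; charset=iso-8859-1\n"

form = cgi.FieldStorage()
query = form.getfirst("query", "")      # '' if the field was not submitted

print "<html><body>"
print "<p>You asked for: %s</p>" % cgi.escape(query)
print "</body></html>"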

 From here, there's a huge number of tutorials on python and cgi on the 
web and I'm tired.

Best of luck,

Cameron.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Low-overhead GUI toolkit for Linux w/o X11?

2007-11-04 Thread Grant Edwards
On 2007-11-04, Paul Rubin http wrote:
 Grant Edwards [EMAIL PROTECTED] writes:
 There is no mouse.  I'm not sure how many widgets are
 required.  Probably not very many.

 Back in the old days there were some lightweight toolkits for
 doing text mode GUI's using ANSI graphic characters for
 MS-DOS.  I did a few of them.  You could do quite usable and
 attractive gui's that way, as long as you didn't require too
 much bling.

I do require graphics.

-- 
Grant Edwards   grante Yow!  Join the PLUMBER'S
  at   UNION!!
   visi.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to keep order key in a dictionary

2007-11-04 Thread BartlebyScrivener
On Nov 4, 7:19 am, azrael [EMAIL PROTECTED] wrote:

  For this dictionary it is realy
 important to keep the right order. Is it possible to arange them in a
 specific order?

Not sure what order you want, but how about sorting the keys?

def printdict(dict):
    """print sorted key:value pairs"""
    keys = dict.keys()
    keys.sort()
    for key in keys:
        print key, ":", dict[key]

from Python Visual Quickstart, Chris Fehily p. 157

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Low-overhead GUI toolkit for Linux w/o X11?

2007-11-04 Thread Grant Edwards
On 2007-11-04, Bjoern Schliessmann [EMAIL PROTECTED] wrote:

 and wxPython is just a wrapper.
 
 Yes, I know.  If we though Python was the problem, I wouldn't
 be asking about other toolkits that had Python bindings.

 Ah, you know more than you wrote? If you've done measurements,
 I'd find them quite interesting to see.

We know that wxWidgets is slow.  That's why we're looking for
alternatives.

-- 
Grant Edwards   grante Yow!  Did you find a
  at   DIGITAL WATCH in YOUR box
   visi.comof VELVEETA??
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Just Another Victim of the Ambient Morality

Neil Cerutti [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On 2007-11-04, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED] wrote:

 Neil Cerutti [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]

 Is there not an ambiguity in the grammar?

 In EBNF:

  goal --> WORD { WORD } END

  WORD is '[a-zA-Z]+'
  END is 'end'

 I think it is fine that PyParsing can't guess what the composer
 of that grammar meant.

 One interpretation conforms to the grammar while the other
 doesn't. You would assume that the interpretation that agrees
 with the grammar would be the preferable choice and so should
 the program. Secondly, even if it is an ambiguity... so what?
 pyparsing's current behaviour is to return a parse error,
 pretending that the string can't be parsed.  Ideally, perhaps
 it should alert you to the ambiguity but, surely, it's better
 to return _a_ valid parsing than to pretend that the string
 can't be parsed at all...

 I wouldn't characterize it as pretending. How would you parse:

  hello end hello end

 WORD END WORD END and WORD WORD WORD END are both valid
 interpretations, according to the grammar.

...and it would be nice if the parser were to parse one of them since 
they are both right.  Having more than one right answer is not the same as 
having no answer, which is what pyparsing claims...


 As soon as you remove the ambiguity from the grammar, PyParsing
 starts to work correctly.

This is simply not true.  Try this:


grammar = OneOrMore(Word(alphas)) + Literal('end') + Literal('.')
grammar.parseString('First Second Third end.')


...again, this will fail to parse.  Where's the ambiguity?
Besides, parsing ambiguous grammars is a useful feature.  Not all 
grammars being parsed are designed by those doing the parsing...


 Consider writing a recursive decent parser by hand to parse the
 language '[ab]+b'.

  goal --> ab_list 'b'
  ab_list --> 'a' list_tail
  ab_list --> 'b' list_tail
  list_tail --> 'a' list_tail
  list_tail --> 'b' list_tail
  list_tail --> null


 The above has the exact same bug (and probably some others--I'm
 sorry unable to test it just now) as the PyParsing solution.

 The error is in the grammar. It might be fixed by specifying that
 'b' must be followed by EOF, and then it could be coded by using
 more than one character of lookahead.

I don't exactly understand the syntax you used to describe the 
productions of your recursive descent parser so not only did I not follow it 
but I couldn't make out the rest of your post.  Could you explain in a 
little more detail?  The last part that points to 'null' is especially 
confusing...
As demonstrated earlier, it's not just the grammar.  There are 
situations that are unambiguous that pyparsing can't parse simply and 
there's no reason for it.
Besides, ambiguous grammars are a fact of life and some of us need to 
parse them.  It's usually okay, too.  Consider a previous example:


grammar = OneOrMore(Word(alphas)) + Literal('end')


While you may consider this inherently ambiguous, it's usually not. 
That is to say, as long as it is rare that 'end' is used not at the end of 
the string, this will simply parse and, yet, pyparsing will consistently 
fail to parse it...





-- 
http://mail.python.org/mailman/listinfo/python-list


modify a file

2007-11-04 Thread tech user
Hello,

I have a file which is large, about 3.5G.
I need to modify some lines in it, but I don't want to create another file
for the result.
How can i do it? thanks.





  


-- 
http://mail.python.org/mailman/listinfo/python-list


pygresql

2007-11-04 Thread JD
Hi there.

I'm trying to use python with postgresql. I decided to use psycopg to
interact with the postgresql server.  When installing psycopg it
appeared that I needed mxDateTime. So I decided to install the mxbase
package.

I received the following error message (the interesting bit seems to
be at the end):


[EMAIL PROTECTED]:/var/lib/postgresql/mxbase$ sudo python setup.py
install
running install
running build
running mx_autoconf
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-
prototypes -fPIC -D_GNU_SOURCE=1 -I/usr/local/include -I/usr/include -
c _configtest.c -o _configtest.o
_configtest.c: In function 'main':
_configtest.c:4: warning: statement with no effect
gcc -pthread _configtest.o -L/usr/local/lib -o _configtest
success!
removing: _configtest.c _configtest.o _configtest
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-
prototypes -fPIC -D_GNU_SOURCE=1 -I/usr/include/python2.5 -I/usr/local/
include -I/usr/include -c _configtest.c -o _configtest.o
success!
removing: _configtest.c _configtest.o
macros to define: [('HAVE_STRPTIME', '1')]
macros to undefine: []
running build_ext

building extension mx.DateTime.mxDateTime.mxDateTime (required)
building 'mx.DateTime.mxDateTime.mxDateTime' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-
prototypes -fPIC -DUSE_FAST_GETCURRENTTIME -DHAVE_STRPTIME=1 -Imx/
DateTime/mxDateTime -I/usr/include/python2.5 -I/usr/local/include -I/
usr/include -c mx/DateTime/mxDateTime/mxDateTime.c -o build/temp.linux-
i686-2.5_ucs4/mx-DateTime-mxDateTime-mxDateTime/mx/DateTime/mxDateTime/
mxDateTime.o
gcc: mx/DateTime/mxDateTime/mxDateTime.c: No such file or directory
gcc: no input files
error: command 'gcc' failed with exit status 1


I googled "error: command 'gcc' failed with exit status 1" and
interestingly a lot of the results seemed to be linked with python. I
can confirm that I do have gcc installed. One post seemed to suggest
that I may be using too new a version of gcc. Do you think this is the
problem or am I going astray somewhere else?

Thank you very much in advance for any assistance,
James.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pygresql

2007-11-04 Thread JD
Btw apologies for naming the post 'pygresql'! That was the module I
was attempting to use before.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: modify a file

2007-11-04 Thread Marc 'BlackJack' Rintsch
On Mon, 05 Nov 2007 01:55:50 +1100, tech user wrote:

 I have a file which is large about 3.5G.
 I need to modify some lines in it,but I don't like to create another file
 for the result.
 How can i do it? thanks.

In general not a good idea unless the modification does not change the
length of the lines.  If it does, everything behind that line must be
copied within the file, either up before the modification to make room
or down after the modification to close a gap.  If anything goes wrong
while moving/copying the data you are left with a 3.5G file that may be
broken.
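
If the replacement really is the same length, an in-place edit with
seek/write looks roughly like this (file name, offset and contents are
made up):

f = open('huge.dat', 'r+b')     # read/write, no truncation
f.seek(1024)                    # byte offset of the text to change
old = f.read(9)
if old == 'OLDVALUE1':
    f.seek(1024)
    f.write('NEWVALUE1')        # must be exactly the same number of bytes
f.close()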

Ciao,
Marc 'BlackJack' Rintsch
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Bruno Desthuilliers
Hendrik van Rooyen a écrit :
 Bruno Desthuilliers wrote:
 
 
functions are *not* methods of their module.
 
 
 Now I am confused - if I write:
 
 result = foo.bar(param)
 
 Then if foo is a class, we probably all agree that bar is
 a method of foo.

We probably agree that it's an attribute of foo, and that this attribute 
is callable. Chances are that the object returned by the lookup is 
either a classmethod or a staticmethod, but not necessarily, and nothing 
in the above snippet can tell us - IOW, you have to either inspect 
foo.bar or have a look at the implementation to know what foo.bar yields.

 But the same syntax would work if I had imported some 
 module as foo.

It would also work if foo was an instance and bar a function:

class Foo(object):
    def bar(self):
        print "bar", self

def baaz(param):
    return param * 2

foo = Foo()
foo.baaz = baaz

result = foo.baaz(21)

While we're at it, notice that modules are *not* classes. They are 
instances of class 'module'.

 So what's the difference ?  Why can't bar be called a method
 of foo,

Depends. If
   type(foo.bar) is types.MethodType
yields True, then you can obviously call bar a method of foo. Else, it's 
a callable attribute. FWIW, if foo is a module and Bar a class defined 
in this module, would you call foo.Bar a method of foo ?

 or is it merely a convention that classes have
 methods and modules have functions?

It's not a convention.

Ok, time for some in-depth (even if greatly simplified) explanation.

First point is that Python objects have two special attributes: __dict__ 
and __class__. The first stores the object's attributes, and the second 
a reference to the object's class (which is itself an object...). Class 
objects also have an __mro__ special attributes that stores the 
superclasses. Attributes are first looked up in the object's __dict__, 
then in the object's class __dict__, then in the superclasses __dict__'s.

Second point: there's something in Python named the descriptor 
protocol. This protocol mandates that, when an attribute lookup is 
performed, if
1/ the attribute belongs to a class object, and
2/ it has a (callable) attribute named __get__,

then this callable attribute is called with the instance (or None if the 
attribute is directly looked up on the class) as first argument and the 
class as second argument, and the lookup mechanism yields the result of 
this call instead of the attribute itself.

Third point: function objects actually implement the descriptor 
protocol, in such a way that their __get__ method returns a method 
object - in fact, either a bound (if the attribute was looked up on the 
instance) or unbound instancemethod (if the attribute was looked up on 
the class), wrapping the class, the instance (for bound instancemethods) 
and the function itself. It is this method object that is responsible 
for passing the instance as first argument to the function when called.

There are also the classmethod and staticmethod types - used as function 
decorators-, which themselves implements the descriptor protocol in a 
somewhat different way, but I won't say more about this, since what they 
do should be somewhat obvious now.

Now if you re-read all this carefully, you'll note that (classmethods 
and staticmethods set aside), the real attribute *is* a function object. 
It's only at lookup time that the method object is created, and only if 
it's an attribute of the class. You can easily check this by yourself:

print Foo.bar
print Foo.__dict__['bar']
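
A self-contained version of that check (a sketch; the exact repr strings
and addresses will vary):

class Foo(object):
    def bar(self):
        return 42

print Foo.bar                  # <unbound method Foo.bar>
print Foo().bar                # <bound method Foo.bar of <__main__.Foo object at 0x...>>
print Foo.__dict__['bar']      # <function bar at 0x...> -- the plain function object
print Foo.__dict__['bar'].__get__(Foo(), Foo)   # invoking the descriptor by hand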

So if you set a function as an attribute of a non-class object, the 
descriptor protocol won't be invoked, and you'll get the function object 
itself. Which is what happens with modules (which are instances, not 
types) and functions.

As a last note: all this only applies to the so-called new-style 
object model introduced in Python 2.2. The old object model 
(usually refered to as 'old-style' or 'classic' classes) works somewhat 
differently - but with mostly similar results.

 Note that I am purposely refraining from mentioning a module
 that has a class that has a method.

You shouldn't. Every element must be observed to solve a puzzle. And 
while we're at it, you also forgot to mention the methods of the class's 
class - since classes are themselves instances of another class !-)

HTH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Bruno Desthuilliers
Paul Rubin a écrit :
 Paul Hankin [EMAIL PROTECTED] writes:
 
I'm intrigued - when would you want a callable module?
 
 
 I think it would be nice to be able to say
 
import StringIO
buf = StringIO('hello')
 
 instead of
 
   import StringIO
   buf = StringIO.StringIO('hello')

What's wrong with:

from StringIO import StringIO
buf = StringIO('hello')

???
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Bruno Desthuilliers
Bjoern Schliessmann a écrit :
 Hendrik van Rooyen wrote:
 
 
So what's the difference ?  Why can't bar be called a method
of foo, or is it merely a convention that classes have
methods and modules have functions?
 
 
 In depends on which terminology you use. As Steven told, Python
 methods are special functions.

Nope. They are callable objects wrapping a function, a class and 
(usually) an instance of the class.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to keep order key in a dictionary

2007-11-04 Thread azrael
thanks, the links where successfull

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: problem with iteration

2007-11-04 Thread Bruno Desthuilliers
Panagiotis Atmatzidis a écrit :
 Hello,
 
 I managed to write some code in order to do what I wanted: Inject code
 in the right place, in some html files. I developed the program using
 small functions, one at the time in order to see how they work. When I
 tried to put these pieces of code together I got this error:
 TypeError: iteration over non-sequence
 
 Here is the code snippet that has the issue
 
 --
 def injectCode(path, statcode):
     for root, dir, files in os.walk(path, topdown=True):
         for name in files:
             html_files = re.search("html", name, flags=0)
             if html_files == None:

if html_files is None:

                 print "No html files found in your path."
             else:
                 for oldfile in html_files: <-- HERE IS THE ERROR
                     [rest of code here]

 
 I'm learning through practice and this is my first program. The error
 may seem easy for you.

Indeed !-)

 However except from the explanation I'd like to
 know how can I handle situations like the one above. I tought that
 itering was valid there :-(

Obviously not. You may want to learn more about the re module, and what 
re.search returns. Anyway, if what you want is to find out if some file 
name contains the string 'html', you don't need regexps:

if 'html' in 'index.html':
   print "hurray"

Now I'm not sure you quite get how os.walk works - here, 'files' is a 
list of file *names*, so I don't see how you could expect one of its 
elements to itself become a list of file names !-)

IOW, assuming you want to detect file names ending with '.html' in a 
given part of your directory tree, the following code might do:

def injectCode(path, statcode):
    html_files = []
    for root, dir, files in os.walk(path, topdown=True):
        for name in files:
            if name.endswith('.html'):
                html_files.append(os.path.join(root, name))
    if not html_files:
        print "No html files found in your path."
    else:
        for html_file in html_files:
            [rest of code here]

HTH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to keep order key in a dictionary

2007-11-04 Thread Bruno Desthuilliers
azrael a écrit :
 I 'm currenty working on a project for which it would be great to use
 a dictionary. At the begining I have a list of strings that should
 represent the keys in the dictionary. When I try to create a
 dictionary it rearanges the keys. For this dictionary it is realy
 important to keep the right order. Is it possible to arange them in a
 specific order?
 
No (not with the builtin dict type at least). You have to keep the order 
by yourself. The nice thing is that you already have this : your list of 
strings. Then you just have to iterate over this list and get to the dict:

for key in list_of_strings:
   print dic[key]

HTH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to do the mapping btw numpy arrayvalues and matrix columns

2007-11-04 Thread [EMAIL PROTECTED]
CW,
thanx for the reply..but i was looking for a mapping BTW each item of
a numpy.ndarray and the corresponding column of a numpy.matrix ,after
some struggle :-) i came up with this

#a function to return a column from a matrix
def getcol(data, colindex):
return data[:,colindex]#returns a matrix obj

#function to set a column of a matrix with a given matrix
def setcol(data, inmat,colindex):
data[:,colindex]=inmat  #both data and inmat are matrix objs

#now i have an ndarray with 5 elements
evalarray=array(([11.0,33.0,22.0,55.0,44.0]))

#and a matrix with 5 columns
evectmat=matrix(([1.3,2.5,3.2,6.7,3.1],
                 [9.7,5.6,4.8,2.5,2.2],
                 [5.1,3.7,9.6,3.1,6.7],
                 [5.6,3.3,1.5,2.4,8.5]
                 ))

the first column of evectmat corresponds to first element of evalarray
and so on..
then i did this

mydict=dict(
[(evalarray[x],getcol(evectmat,x)) for x in range(len(evalarray))]
)
klst=mydict.keys()

klst.sort()
klst.reverse() #because i want the largest value as first

newevectmat=matrix(zeros((4,5)))

for x in range(len(klst)):
newcol=mydict[klst[x]]
setcol(newevectmat,newcol,x)

print "newevectmat:"
print newevectmat

this gives me the desired result..now i have a new matrix with columns
arranged corresponding to the values of evalarray

i don't know if this is a good way(or even pythonish ) to do it..i am
a beginner afterall.!. any suggestions most welcome
TIA
dn
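
For what it's worth, the same reordering can be written more compactly with
numpy's argsort and fancy indexing (a sketch, reusing the data from above):

from numpy import array, matrix

evalarray = array([11.0, 33.0, 22.0, 55.0, 44.0])
evectmat = matrix([[1.3, 2.5, 3.2, 6.7, 3.1],
                   [9.7, 5.6, 4.8, 2.5, 2.2],
                   [5.1, 3.7, 9.6, 3.1, 6.7],
                   [5.6, 3.3, 1.5, 2.4, 8.5]])

order = evalarray.argsort()[::-1]   # column indices, largest value first
newevectmat = evectmat[:, order]    # fancy indexing pulls the columns in that order
print newevectmat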

 from numpy import matrix, asarray
 obj = matrix(([1.3,2.5,3.2,6.7,3.1],
[9.7,5.6,4.8,2.5,2.2],
[5.1,3.7,9.6,3.1,6.7],
[5.6,3.3,1.5,2.4,8.5]))
 ar = asarray(obj)
 val_to_col_list = []
 for row in ar:
  for ind,val in enumerate(row):
  val_to_col_list.append((val,ind))
 val_to_col_list.sort()

 If instead you require a map, such that each value maps to a list of the
 columns it appears in, you could try the following:

 val_col_map = {}
 for row in ar:
  for col,val in enumerate(row):
  tmplist=val_col_map.get(val,[])
  tmplist.append(col)
  val_col_map[val]=tmplist
 val_keys = val_col_map.keys()
 val_keys.sort()

 val_keys is now a sorted list of unique values from your original
 matrix.  Use these values as keys for val_col_map.

 Eww... but it works.  You'll want to be really careful with floating
 point numbers as keys, see http://docs.python.org/tut/node16.html for
 more details.

 Best of luck,

 Cameron.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Paul Rubin
Bruno Desthuilliers [EMAIL PROTECTED] writes:
 What's wrong with:
 
 from StringIO import StringIO
 buf = StringIO('hello')

The other functions in the module aren't available then.  E.g.

  from random import random
  x = random()
  y = random.choice((1,2,3))   # oops
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pygresql

2007-11-04 Thread JD
Apologies for essentially talking to myself out loud!

I've switched back to pygresql. I think a lot of my problems were
caused by not having installed postgresql-server-dev-8.2 which
contains a lot of header files etc. I'm sure this was part of the
problem with the psycopg modules as well.

postgresql-server-dev can easily be installed of course by using:

   sudo apt-get install postgresql-server-dev

I hope my ramblings have been of help to someone!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pygresql

2007-11-04 Thread rustom
On Nov 4, 8:45 pm, JD [EMAIL PROTECTED] wrote:
 Hi there.

 I'm trying to use python with postgresql. I decided to use psycopg to
 interact with the postgresql server.  When installing psycopg it
 appeared that I needed mxDateTime. So I decided to install the mxbase
 package.

 I received the following error message (the interesting bit seems to
 be at the end):

snipped

 Thank you very much in advance for any assistance,
 James.

Why are you trying to install it yourself instead of using apt? On my
debian etch box the psycopg packages have dependencies on the python-
egenix-mx* packages and installing psycopg pulls in the others without
problems. And I guess ubuntu should be similar.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Neil Cerutti
On 2007-11-04, Just Another Victim of the Ambient Morality
[EMAIL PROTECTED] wrote:
 Consider writing a recursive decent parser by hand to parse
 the language '[ab]+b'.

  goal --> ab_list 'b'
  ab_list --> 'a' list_tail
  ab_list --> 'b' list_tail
  list_tail --> 'a' list_tail
  list_tail --> 'b' list_tail
  list_tail --> null


 The above has the exact same bug (and probably some others--I'm
 sorry unable to test it just now) as the PyParsing solution.

 The error is in the grammar. It might be fixed by specifying that
 'b' must be followed by EOF, and then it could be coded by using
 more than one character of lookahead.

 I don't exactly understand the syntax you used to describe the 
 productions of your recursive descent parser so not only did I not follow it 
 but I couldn't make out the rest of your post.  Could you explain in a 
 little more detail?  The last part that points to 'null' is especially 
 confusing...

It's the BNF spelling of
  
  goal --> ab_list 'b'
  ab_list --> ab { ab }
  ab --> 'a' | 'b'

The null is to say that list_tail can match nothing, i.e, an
empty string.

Then, in the Parser class, every method (except for match, which
is used as a central place to consume characters) corresponds to
one of the productions in the BNF. Breaking things down into 
BNF-based productions often makes implementation, debugging and
code generation easier.

PyParsing saves me that stop, since I can often directly
implement the EBNF using PyParsing.

 As demonstrated earlier, it's not just the grammar.  There are
 situations that are unambiguous that pyparsing can't parse
 simply and there's no reason for it.

Yes, many parser generators have many more limitations than just
the requirement of an unambiguous grammar.

 Besides, ambiguous grammars are a fact of life and some of us
 need to parse them.  It's usually okay, too.  Consider a
 previous example:

 grammar = OneOrMore(Word(alphas)) + Literal('end')

 While you may consider this inherently ambiguous, it's usually
 not. That is to say, as long as it is rare that 'end' is used
 not at the end of the string, this will simply parse and, yet,
 pyparsing will consistently fail to parse it...

I believe there's no cure for the confusion you're having except
for implementing a parser for your proposed grammar.
Alternatively, try implementing your grammar in one of your other
favorite parser generators.

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python good for data mining?

2007-11-04 Thread paul
Jens schrieb:
 What about user interfaces? How easy is it to use Tkinter for
 developing a user interface without an IDE? And with an IDE? (which
 IDE?)
Tkinter is easy but looks ugly (yeah folks, I know it doesn't matter in 
you mission critical flight control system). Apart from ActiveStates 
Komodo I'm not aware of any GUI builders. Very likely you don't need one.

 
 What if I were to use my Python libraries with a web site written in
 PHP, Perl or Java - how do I intergrate with Python?
How do you integrate Perl and PHP? The usual methods are calling 
external programs (slow) or using some IPC method (socket, xmlrpc, corba).

 
 I really like Python for a number of reasons, and would like to avoid
 Java.
Have you looked at jython?

cheers
  Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Bruno Desthuilliers
Paul Rubin a écrit :
 Bruno Desthuilliers [EMAIL PROTECTED] writes:
 
What's wrong with:

from StringIO import StringIO
buf = StringIO('hello')
 
 
 The other functions in the module aren't available then.  E.g.
 
   from random import random
   x = random()
   y = random.choice((1,2,3))   # oops

from random import random, choice

x = random()
y = choice((1, 2, 3))
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pygresql

2007-11-04 Thread Erik Jones
On Nov 4, 2007, at 9:45 AM, JD wrote:

 Hi there.

 I'm trying to use python with postgresql. I decided to use psycopg to
 interact with the postgresql server.  When installing psycopg it
 appeared that I needed mxDateTime. So I decided to install the mxbase
 package.

 I received the following error message (the interesting bit seems to
 be at the end):


 [EMAIL PROTECTED]:/var/lib/postgresql/mxbase$ sudo python setup.py
 install
 running install
 running build
 running mx_autoconf
 gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-
 prototypes -fPIC -D_GNU_SOURCE=1 -I/usr/local/include -I/usr/include -
 c _configtest.c -o _configtest.o
 _configtest.c: In function 'main':
 _configtest.c:4: warning: statement with no effect
 gcc -pthread _configtest.o -L/usr/local/lib -o _configtest
 success!
 removing: _configtest.c _configtest.o _configtest
 gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-
 prototypes -fPIC -D_GNU_SOURCE=1 -I/usr/include/python2.5 -I/usr/ 
 local/
 include -I/usr/include -c _configtest.c -o _configtest.o
 success!
 removing: _configtest.c _configtest.o
 macros to define: [('HAVE_STRPTIME', '1')]
 macros to undefine: []
 running build_ext

 building extension mx.DateTime.mxDateTime.mxDateTime (required)
 building 'mx.DateTime.mxDateTime.mxDateTime' extension
 gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-
 prototypes -fPIC -DUSE_FAST_GETCURRENTTIME -DHAVE_STRPTIME=1 -Imx/
 DateTime/mxDateTime -I/usr/include/python2.5 -I/usr/local/include -I/
 usr/include -c mx/DateTime/mxDateTime/mxDateTime.c -o build/ 
 temp.linux-
 i686-2.5_ucs4/mx-DateTime-mxDateTime-mxDateTime/mx/DateTime/ 
 mxDateTime/
 mxDateTime.o
 gcc: mx/DateTime/mxDateTime/mxDateTime.c: No such file or directory
 gcc: no input files
 error: command 'gcc' failed with exit status 1


 I googled error: command 'gcc' failed with exit status 1 and
 interestingly a lot of the results seemed to be linked with python. I
 can confirm that I do have gcc installed. One post seemed to suggest
 that I may be using too new a version of gcc. Do you think this is the
 problem or am I going astray somewhere else?

 Thank you very much in advance for any assistance,
 James.

You shouldn't be using psycopg, it's not supported anymore.  Use  
psycopg2, which is in active development and has no dependencies on any  
of the mx libraries.
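
A minimal psycopg2 sketch (connection parameters are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=james password=secret host=localhost")
cur = conn.cursor()
cur.execute("SELECT version()")
print cur.fetchone()[0]
cur.close()
conn.close()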

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate  market in style.
Visit us online at http://www.myemma.com


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Paul Rubin
Bruno Desthuilliers [EMAIL PROTECTED] writes:
from random import random
x = random()
y = random.choice((1,2,3))   # oops
 
 from random import random, choice
 
 x = random()
 y = choice((1, 2, 3))

Really, a lot of these modules exist primarily to export a single
class or function, but have other classes or functions of secondary
interest.  I'd like to be able to get to the primary function without
needing to use a qualifier, and be able to get to the secondary ones
by using a qualifier instead of having to import explicitly and
clutter up the importing module's name space like that.  It just seems
natural.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: bsddb in python 2.5.1

2007-11-04 Thread BjornT
On Nov 4, 1:04 am, Martin v. Löwis [EMAIL PROTECTED] wrote:
  I know that when you upgrade Berkeley DB you're supposed to go through
  steps solving this problem,but I wasn't expecting an upgrade. I've
  tried to use different versions bsddb3, 4.4 and 4.5, (instead of bsddb
  that comes with python 2.5.1) with different versions of Berkeley DB
  installs (4.5 and 4.4 - built from source into /usr/local).

 There seems to be an important misconception here. Python 2.5.1 does
 not come with any bsddb version whatsoever. If you have a Python binary
 where the bsddb module is linked with a certain version of Python, that
 was the choice of whoever made the Python binary.

 For acccess to the database, the Python version does not matter at all.
 What matters is the version of Berkeley DB.

 So as the first step, you should find out what version of Berkeley DB
 the old Python installation was using, and what version of Berkeley DB
 the new version is using.

 I'm also not sure what you mean by I've tried to use different version
 bsddb3, 4.4 and 4.5. What specifically did you do to try them? AFAICT,
 Ubuntu Feisty Fawn linked its Python with bsddb 4.4, so you should
 have no version issues if you really managed to use bsddb 4.4.

 Can you please report the specific error you got? According to the
 Berkeley DB documentation, there was no change to database formats
 in Berkeley DB 4.5 (but there was a change to the log file format).

 Regards,
 Martin

Hi,

Thank you for your reply. My exact error (when I use bsddb that came
with Ubuntu's python 2.5.1) is as follows:


2007-11-04 13:23:05: (mod_fastcgi.c.2588) FastCGI-stderr: Traceback
(most recent call last):
  File /usr/lib/python2.5/site-packages/web/webapi.py, line 304, in
wsgifunc
result = func()
  File /usr/lib/python2.5/site-packages/web/request.py, line 129, in
lambda
func = lambda: handle(getattr(mod, name), mod)
  File /usr/lib/python2.5/site-packages/web/request.py, line 61, in
handle
return tocall(*([x and urllib.unquote(x) for x in args] + fna))
  File /home/bjorn/karmerd-svn/karmerd.py, line 6, in GET
self.handleRequest(GET, path)
  File /home/bjorn/karmerd-svn/karmerd.py, line 14, in handleRequest
request_api = api.Api(path, input, web.ctx, type)
  File /home/bjorn/karmerd-svn/api.py, line 14, in __init__
self.a = kapilib.abuse.Abuse(input, identity, self.pathParts[1],
apilib);
  File /home/bjorn/karmerd-svn/kapilib/abuse.py, line 26, in
__init__
d.addEntry(identity.ip, self.target, time.time())
  File /home/bjorn/karmerd-svn/dal/abusedal.py, line 11, in addEntry
env = dal.dalbase.Dal._getOpenEnv(self)
  File /home/bjorn/karmerd-svn/dal/dalbase.py, line 94, in
_getOpenEnv
env.open(path, flags)
DBError: (-30972, DB_VERSION_MISMATCH: Database environment version
mismatch -- Program version 4.5 doesn't match environment version
4.4)

As I have said I have two versions of Berkeley DB installed in /usr/
local:

drwxr-xr-x  6 root root 4096 2007-11-03 13:51 BerkeleyDB.4.4
drwxr-xr-x  6 root root 4096 2007-06-09 14:04 BerkeleyDB.4.5

I have two versions of bsddb3 installed (only one is active) this is
from /usr/lib/python2.5/site-packages:
drwxr-xr-x  3 root root   4096 2007-11-03 15:01 bsddb3
-rw-r--r--  1 root root905 2007-11-03 15:39 bsddb3-4.4.2.egg-info
-rw-r--r--  1 root root905 2007-11-03 15:49 bsddb3-4.5.0.egg-info

 And of course I have this, which was just in Python 2.5 as it came in
Ubuntu:
drwxr-xr-x  2 root root   4096 2007-11-02 19:32 bsddb

As per:

 Ubuntu Feisty Fawn linked its Python with bsddb 4.4, so you
should
 have no version issues if you really managed to use bsddb 4.4.

I tried to use bsddb3, which I downloaded from sourceforge to link to
the different versions of berkeley db I have, but trying to open the
database just stalls, there's never a response. In the interactive
interpreter, I never get the prompt back, I have to kill they python
process. I'm really at a loss of what I could do, except for reverting
back to Ubuntu 7.05. In the future I plan on not using what ships with
the OS, but from source or binaries that I install so I can have
control over the update process.

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Python good for data mining?

2007-11-04 Thread Bruno Desthuilliers
Jens a écrit :
 I'm starting a project in data mining, and I'm considering Python and
 Java as possible platforms.
 
 I'm conserned by performance. Most benchmarks report that Java is
 about 10-15 times faster than Python,

Benchmarking is difficult, and most benchmarks are easily 'oriented'. 
(pure) Python is slower than Java for some tasks, and as fast as C for 
some others. In the first case, it's quite possible that a C-based 
package exists.

 and my own experiments confirms
 this. 

<bis mode="Benchmarking is difficult">
If you go that way, Java is way slower than C++ - and let's not talk 
about resources...
</bis>

 I could imagine this to become a problem for very large
 datasets.

If you have very large datasets, you're probably using a serious RDBMS, 
that will do most of the job.

 How good is the integration with MySQL in Python?

Pretty good - but I wouldn't call MySQL a serious RDBMS.
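
The usual route is the MySQLdb (MySQL-python) package, which follows the
DB-API - a minimal sketch, with purely hypothetical credentials and table:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="miner",
                           passwd="secret", db="warehouse")
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM events")
    print cur.fetchone()[0]
    conn.close()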

 What about user interfaces? How easy is it to use Tkinter for
 developing a user interface without an IDE? And with an IDE? (which
 IDE?)

If your GUI is complex and important enough to need a GUI builder (which 
I guess is what you mean by IDE), then forget about Tkinter, and go for 
either pyGTK, pyQT or wxPython.

 What if I were to use my Python libraries with a web site written in
 PHP, Perl or Java - how do I integrate with Python?

HTTP is language-agnostic.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python IDE

2007-11-04 Thread Bruno Desthuilliers
Simon Pickles a écrit :
 Hi,
 
 I have recently moved from Windows XP to Ubuntu Gutsy.
 
 I need a Python IDE and debugger, but have yet to find one as good as 
 Pyscripter for Windows. Can anyone recommend anything? What are you all 
 using?

I'm not sure we're all using the same solutions. As far as I'm 
concerned, it's emacs, which is just *great* when it comes to Python 
programming.

 Coming from a Visual Studio background,

Yuck. Sorry...

 editing text files and using the 
 terminal to execute them offends my sensibilities :)

Probably because you don't quite get yet the difference between a 
unix-like command line interface and what one can get with Windows.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Just Another Victim of the Ambient Morality

Neil Cerutti [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On 2007-11-04, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED] wrote:
 Consider writing a recursive decent parser by hand to parse
 the language '[ab]+b'.

   goal --> ab_list 'b'
   ab_list --> 'a' list_tail
   ab_list --> 'b' list_tail
   list_tail --> 'a' list_tail
   list_tail --> 'b' list_tail
   list_tail --> null


 The above has the exact same bug (and probably some others--I'm
 sorry unable to test it just now) as the PyParsing solution.

 The error is in the grammar. It might be fixed by specifying that
 'b' must be followed by EOF, and then it could be coded by using
 more than one character of lookahead.

 I don't exactly understand the syntax you used to describe the
 productions of your recursive descent parser so not only did I not follow 
 it
 but I couldn't make out the rest of your post.  Could you explain in a
 little more detail?  The last part that points to 'null' is especially
 confusing...

 It's the BNF spelling of

  goal --> ab_list 'b'
  ab_list --> ab { ab }
  ab --> 'a' | 'b'

 The null is to say that list_tail can match nothing, i.e, an
 empty string.

 Then, in the Parser class, every method (except for match, which
 is used as a central place to consume characters) corresponds to
 one of the productions in the BNF. Breaking things down into
 BNF-based productions often makes implementation, debugging and
 code generation easier.

 PyParsing saves me that stop, since I can often directly
 implement the EBNF using PyParsing.

Okay, I see that now, thank you.
Your statement from the previous post:


 Consider writing a recursive decent parser by hand to parse
 the language '[ab]+b'.

   goal --> ab_list 'b'
   ab_list --> 'a' list_tail
   ab_list --> 'b' list_tail
   list_tail --> 'a' list_tail
   list_tail --> 'b' list_tail
   list_tail --> null


 The above has the exact same bug (and probably some others--I'm
 sorry unable to test it just now) as the PyParsing solution.


...merely demonstrates that this grammar is similarly ambiguous.  There 
are many ways to parse this correctly and pyparsing chooses none of these! 
Instead, it returns the same error it does when the string has no 
solutions...


 As demonstrated earlier, it's not just the grammar.  There are
 situations that are unambiguous that pyparsing can't parse
 simply and there's no reason for it.

 Yes, many parser generators have many more limitations than just
 the requirement of an unambiguous grammar.

Yes, but a recursive descent parser?  I expect such things from LALR and 
others, but not only do I expect a recursive descent parser to correctly 
parse grammars but I expect it to even parse ambiguous ones, in that it is 
the only technique prepared to find more than one solution...


 Besides, ambiguous grammars are a fact of life and some of us
 need to parse them.  It's usually okay, too.  Consider a
 previous example:

 grammar = OneOrMore(Word(alphas)) + Literal('end')

 While you may consider this inherently ambiguous, it's usually
 not. That is to say, as long as it is rare that 'end' is used
 not at the end of the string, this will simply parse and, yet,
 pyparsing will consistently fail to parse it...

 I believe there's no cure for the confusion you're having except
 for implementing a parser for your proposed grammar.
 Alternatively, try implementing your grammar in one of your other
 favorite parser generators.

    I believe there is a cure and it's called recursive descent parsing. 
It's slow, obviously, but it's correct and, sometimes (arguably, often), 
that's more important than execution speed.

    I spent this morning whipping up a proof of concept parser whose 
interface greatly resembles pyparsing but, barring unknown bugs, works and 
works as I'd expect a recursive descent parser to work.  I don't know Python 
very well so the parser is pretty simple.  It only lexes single characters 
as tokens.  It only supports And, Or, Optional, OneOrMore and ZeroOrMore 
rules but I already think this is a rich set of rules.  I'm sure others can 
be added.  Finally, I'm not sure it's safely copying all its parameter input 
the same way pyparsing does but surely those bugs can be worked out.  It's 
merely a proof of concept to demonstrate a point.
Everyone, please look it over and tell me what you think. 
Unfortunately, my news client is kind of poor, so I can't simply cut and 
paste the code into here.  All the tabs get turned into single spacing, so I 
will post this link, instead:


http://theorem.ca/~dlkong/new_pyparsing.zip


I hope you can all deal with .zip files.  Let me know if this is a 
problem.
Thank you...



-- 
http://mail.python.org/mailman/listinfo/python-list


cgi undefined?

2007-11-04 Thread Tyler Smith
Hi,

I'm trying to learn how to use python for cgi scripting. I've got
apache set up on my laptop, and it appears to be working correctly.
I can run a basic cgi script that just outputs a new html page,
without reading in any form data, so I know that the basics are ok.
But when I try and use cgi.FieldStorage() I get the following errors
from cgitb:

A problem occurred in a Python script. Here is the sequence of
function calls leading up to the error, in the order they occurred. 
 /home/tyler/public_html/cgi-bin/cgi.py
    2 
    3 import cgitb; cgitb.enable()
    4 import cgi
    5 
    6 print """Content-type: text/html
cgi undefined
 /home/tyler/public_html/cgi-bin/cgi.py
   12 <H1>Input values</H1>
   13 """
   14 form = cgi.FieldStorage()
   15 for key in form.keys():
   16    print '<p>key=%s, value=%s' % (key, form[key])
form undefined, cgi = <module 'cgi' from 
'/home/tyler/public_html/cgi-bin/cgi.py'>, cgi.FieldStorage undefined

AttributeError: 'module' object has no attribute 'FieldStorage'
  args = ('module' object has no attribute 'FieldStorage',)

My form is:

<html>
<head><title>Tyler's webform</title></head>
<body>
<h1>Basic CGI form</h1>
<p>Here comes the form:
<form action="http://localhost/tycgi-bin/cgi.py">
  <p>Enter some text here: <input type="text" name="text1"><br>
</form></body></html>

And the cgi.py script is:

#! /usr/bin/python

import cgitb; cgitb.enable()
import cgi

print """Content-type: text/html

<HTML><HEAD>
<TITLE>output of HTML from Python CGI script</TITLE>
</HEAD>
<BODY>
<H1>Input values</H1>
"""
form = cgi.FieldStorage()
for key in form.keys():
   print '<p>key=%s, value=%s' % (key, form[key])

print "</BODY></HTML>"


What am I doing wrong?

Thanks,

Tyler
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Copy database with python..

2007-11-04 Thread Bruno Desthuilliers
Abandoned a écrit :
 Hi.
 I want to copy my database but python give me error when i use this
 command.
 cursor.execute("pg_dump mydata > old.dump")
 What is the problem ? 

Could it have to do with the fact that cursor.execute expects a valid 
SQL query - not a bash command line ?

 And how can i copy the database with python ?

import os
help(os.system)
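
That is, run pg_dump as a shell command rather than as SQL - a minimal
sketch (no error handling beyond checking the exit status):

    import os

    status = os.system("pg_dump mydata > old.dump")
    if status != 0:
        print "pg_dump failed with status", status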
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: (MAC) CoreGraphics module???

2007-11-04 Thread Robert Kern
David C. Ullrich wrote:
 On Fri, 02 Nov 2007 14:09:25 -0500, Robert Kern
 [EMAIL PROTECTED] wrote:
 
 David C. Ullrich wrote:
 [???]
 Okay, which version of OS X do you have? In 10.3 and 10.4 it used to be here:
 /System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/plat-mac/CoreGraphics.py

 I notice that in 10.5, it no longer exists, though.
 
 Um, surely that doesn't mean that there's no CoreGraphics
 available in 10.5?

The CoreGraphics C library still exists, of course. The Python module they
provided does not.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: simple question on dictionary usage

2007-11-04 Thread Bruno Desthuilliers
[EMAIL PROTECTED] a écrit :
 On Oct 27, 6:42 am, Karthik Gurusamy [EMAIL PROTECTED] wrote:
 
On Oct 26, 9:29 pm, Frank Stutzman [EMAIL PROTECTED] wrote:




My apologies in advance, I'm new to python

Say, I have a dictionary that looks like this:

record={'BAT': '14.4', 'USD': '24', 'DIF': '45', 'OAT': '16',
'FF': '3.9', 'C3': '343', 'E4': '1157', 'C1': '339',
'E6': '1182', 'RPM': '996', 'C6': '311', 'C5': '300',
'C4': '349', 'CLD': '0', 'E5': '1148', 'C2': '329',
'MAP': '15', 'OIL': '167', 'HP': '19', 'E1': '1137',
'MARK': '', 'E3': '1163', 'TIME': '15:43:54',
'E2': '1169'}

From this dictionary I would like to create another dictionary calld
'egt') that has all of the keys that start with the letter 'E'.  In
otherwords it should look like this:

egt = {'E6': '1182','E1': '1137','E4': '1157','E5': '1148',
   'E2': '1169','E3': '1163'}

This should be pretty easy, but somehow with all my googling I've
not found a hint.

One possible solution (read list-comprehension if you not familiar
with it):


>>> record={'BAT': '14.4', 'USD': '24', 'DIF': '45', 'OAT': '16',
... 'FF': '3.9', 'C3': '343', 'E4': '1157', 'C1': '339',
... 'E6': '1182', 'RPM': '996', 'C6': '311', 'C5': '300',
... 'C4': '349', 'CLD': '0', 'E5': '1148', 'C2': '329',
... 'MAP': '15', 'OIL': '167', 'HP': '19', 'E1': '1137',
... 'MARK': '', 'E3': '1163', 'TIME': '15:43:54',
... 'E2': '1169'}
>>> egt = dict([(k, record[k]) for k in record if k.startswith('E')])
>>> egt
{'E5': '1148', 'E4': '1157', 'E6': '1182', 'E1': '1137', 'E3': '1163',
'E2': '1169'}

Karthik




Thanks in advance

--
Frank Stutzman
 
 
 Hallo,
 a functional and concise

and not necessarily efficient

 way.
 
 egt = dict( filter( lambda item: item[0][0] == "E",
 record.iteritems() ))

List comprehensions and generator expressions are just as 'functional' 
as lambdas (list comps comes from Haskell FWIW).
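
For comparison, the same filtering written as a generator expression feeding
dict() (a sketch using the record dict from above):

    egt = dict((k, v) for k, v in record.iteritems() if k.startswith('E'))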

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cgi undefined?

2007-11-04 Thread Bruno Desthuilliers
Tyler Smith a écrit :
 Hi,
 
 I'm trying to learn how to use python for cgi scripting. I've got
 apache set up on my laptop, and it appears to be working correctly.
 I can run a basic cgi script that just outputs a new html page,
 without reading in any form data, so I know that the basics are ok.
 But when I try and use cgi.FieldStorage() I get the following errors
 from cgitb:
 
 A problem occurred in a Python script. Here is the sequence of
 function calls leading up to the error, in the order they occurred. 
  /home/tyler/public_html/cgi-bin/cgi.py
(snip)
 form undefined, cgi = module 'cgi' from 
 '/home/tyler/public_html/cgi-bin/cgi.py', cgi.FieldStorage undefined
(snip)
 AttributeError: 'module' object has no attribute 'FieldStorage'
(snip)

 What am I doing wrong?

Not paying enough attention to the error message, perhaps? Your script 
is named cgi.py. Rename it and you should be fine.
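
A quick way to confirm this kind of shadowing from inside the script itself:

    import cgi
    print cgi.__file__   # if this prints your own cgi.py, the stdlib
                         # module is being shadowed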
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python newbie

2007-11-04 Thread Steven D'Aprano
On Sun, 04 Nov 2007 12:05:35 -0800, Paul Rubin wrote:

 Bruno Desthuilliers [EMAIL PROTECTED] writes:
from random import random
x = random()
y = random.choice((1,2,3))   # oops
 
 from random import random, choice
 
 x = random()
 y = choice((1, 2, 3))
 
 Really, a lot of these modules exist primarily to export a single class
 or function, but have other classes or functions of secondary interest. 
 I'd like to be able to get to the primary function without needing to
 use a qualifier, and be able to get to the secondary ones by using a
 qualifier instead of having to import explicitly and clutter up the
 importing module's name space like that.  It just seems natural.

+1 on Paul's suggestion. It's a style thing: it is so much more elegant 
and obvious than the alternatives.

A module is a single conceptual unit. It might very well have a rich 
internal API, but many modules also have a single main function. It might 
be a function or class with the same name as the module, or it might 
literally be called main. It might be designed to be called from the 
shell, as a shell script, but it need not be.

A good clue that you're dealing with such a module is if you find 
yourself usually calling the same object from the module, or if the 
module has a main class or function with the same name as the module. The 
module is, in effect, a single functional unit.

Possible candidates, in no particular order: StringIO, random, glob, 
fnmatch, bisect, doctest, filecmp, ...

Note that they're not *quite* black boxes: at times it is useful to crack 
that functional unit open to access the internals: the module's non-main 
functions. But *much of the time*, you should be able to use a 
module as a black box, without ever caring about anything inside it:

import module
module(data) # do something with the module


The model I have in mind is that of shell-scripting languages, like Bash. 
For the purpose of syntax, Bash doesn't distinguish between built-in 
commands and other Bash scripts, but Python does. When you call a script 
(module) in Bash, you don't need to qualify it to use it:

ls | myscript --options

instead of ls | myscript.myscript --options

If you think of modules (at least sometimes) as being the equivalent of 
scripts, only richer and more powerful, then the ugliness of the current 
behaviour is simply obvious.

This model of modules as scripts isn't appropriate for all modules, but 
it is backwards compatible with the current way of doing things (except 
for code that assumes that calling a module will raise an exception). For 
those modules that don't have a single main function, simply don't define 
__call__.



The natural way to treat a module as a functional whole *and* still be 
able to crack it open to access the parts currently is some variation of:

from module import main
import module

That's inelegant for at least six reasons:

(1) There's no standard interface for module authors. Should the main 
function of the module be called main or __main__ or __call__ or 
something derived from the name of the module? The same as the module 
name?

(2) Namespace pollution. There are two names added to the namespace just 
to deal with a single conceptual unit.

(3) The conceptual link between the module and its main function is now 
broken. There is no longer any obvious connection between main() and the 
module. The two are meant to be joined at the hip, and we're treating 
them as independent things.

(4) What happens if you have two modules you want to treat this way, both 
with a main function with the same name? You shouldn't have to say from 
module import main as main2 or similar.

(5) You can't use the module as a black box. You *must* care about the 
internals, if only to find out the name of the main function you wish to 
import.

(6) The obvious idiom uses two imports (although there is an almost as 
obvious idiom only using one). Python caches imports, and the second is 
presumably much faster than the first, but it would be nice to avoid the 
redundant import.


As far as I know, all it would take to allow modules to be callable would 
be a single change to the module type, the equivalent of:

def __call__(self, *args, **kwargs):
    try:
        # Special methods are retrieved from the class, not
        # from the instance, so we need to see if the
        # instance has the right method.
        callable = self.__dict__['__call__']
    except KeyError:
        return None # or raise an exception?
    return callable(self, *args, **kwargs)
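
For what it's worth, something close to this can already be faked today by
swapping the module's entry in sys.modules for a callable proxy -- a rough
sketch (module and function names are hypothetical), not part of the
proposal itself:

    # at the bottom of mymodule.py
    import sys

    class _CallableModule(object):
        def __init__(self, module):
            self._module = module          # keep the real module alive
        def __getattr__(self, name):
            return getattr(self._module, name)
        def __call__(self, *args, **kwargs):
            return self._module.main(*args, **kwargs)

    def main(data):
        return data.upper()

    sys.modules[__name__] = _CallableModule(sys.modules[__name__])

After that, client code can do "import mymodule; mymodule(data)" while
mymodule.main and friends remain reachable as attributes.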



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] Scope_Plot, another plot library for real time signals.

2007-11-04 Thread Stef Mientki
hello,

I justed finished, another plot library, called Scope_Plot, based on 
wxPython.

Scope_Plot is specifically meant for displaying real time signals,
and therefore has some new functionalities:
- signal selection
- each signal has its own scale,
- moving erase block
- measurement cursor
and should be a lot faster than MatPlot, Plot and FloatCanvas,
at least for real time signals (because it only draws the changes).

An animated demo can be seen here (2 MB, 2:10):
  http://stef.mientki.googlepages.com/jalspy_scope.html

A description of the library, as used in an application, can be found here:
  http://oase.uci.kun.nl/~mientki/data_www/pic/jalspy/jalspy_scope.html

And finally the file (and a few necessary libs) can be found here:
  http://oase.uci.kun.nl/~mientki/download/Scope_Plot.zip
The library has a main section which contains a simple demo,
the animated demo and the application description shows a more complex 
signal organization.

cheers,
Stef Mientki
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cgi undefined?

2007-11-04 Thread Tyler Smith
On 2007-11-04, Dennis Lee Bieber [EMAIL PROTECTED] wrote:

 '/home/tyler/public_html/cgi-bin/cgi.py'
  ^^

   Very simple -- you named YOUR handler cgi. So when it does import
 cgi it is importing itself...


Of course. I knew it must be something dumb. Thanks!

Tyler
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Neil Cerutti
On 2007-11-04, Just Another Victim of the Ambient Morality
[EMAIL PROTECTED] wrote:
 Neil Cerutti [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 I believe there's no cure for the confusion you're having except
 for implementing a parser for your proposed grammar.
 Alternatively, try implementing your grammar in one of your other
 favorite parser generators.

 I believe there is a cure and it's called recursive descent
 parsing. It's slow, obviously, but it's correct and, sometimes
 (arguably, often), that's more important the execution speed.

 I spent this morning whipping up a proof of concept parser
 whose interface greatly resembles pyparsing but, baring unknown
 bugs, works and works as I'd expect a recursive descent parser
 to work.  I don't know Python very well so the parser is pretty
 simple.  It only lexes single characters as tokens.  It only
 supports And, Or, Optional, OneOrMore and ZeroOrMore rules but
 I already think this is a rich set of rules.  I'm sure others
 can be added.  Finally, I'm not sure it's safely copying all
 its parameter input the same way pyparsing does but surely
 those bugs can be worked out.  It's merely a proof of concept
 to demonstrate a point.
 Everyone, please look it over and tell me what you think. 
 Unfortunately, my news client is kind of poor, so I can't
 simply cut and paste the code into here.  All the tabs get
 turned into single spacing, so I will post this link, instead:

 http://theorem.ca/~dlkong/new_pyparsing.zip

Your program doesn't necessarily address the ambiguity in the
grammar in question, since right now it is only a recognizer.
Will it be hard to get it to return a parse tree?

The grammar in your implementation is:

>>> goal = OneOrMore(RuleAnd('a') | RuleAnd('b')) + RuleAnd('b')
>>> goal.parse(0, 'ab')
True
>>> goal.parse(0, 'ba')
False
>>> goal.parse(0, 'b')
False
>>> goal.parse(0, 'aaab')
True
>>> goal.parse(0, 'abc')
True

So far so good. :)

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Low-overhead GUI toolkit for Linux w/o X11?

2007-11-04 Thread Nick Craig-Wood
Grant Edwards [EMAIL PROTECTED] wrote:
  I'm looking for GUI toolkits that work with directly with the
  Linux frambuffer (no X11).  It's an embedded device with
  limited resources, and getting X out of the picture would be a
  big plus.
 
  The toolkit needs to be free and open-source.
 
  So far, I've found two options that will work without X11:
 
   1) QTopia (nee QT/Embedded).  I assume that I can probably get
  PyQT to work with the embedded version of QT?
 
   2) PySDL or PyGame.

We did a similar project recently.  We ended up using pygame and
writing our own GUI.  For an embedded device you don't really want a
general purpose GUI which needs a mouse, you want something specific
which knows how many buttons the device has, what resolution the
screen is etc.  It is not too difficult to make your own GUI in pygame
to do exactly what you want for your embedded device.
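
A minimal sketch of that kind of setup (the framebuffer driver name and the
resolution below are illustrative and platform-dependent):

    import os
    import pygame

    # point SDL at the Linux framebuffer instead of X11
    os.environ["SDL_VIDEODRIVER"] = "fbcon"   # or "directfb", "svgalib"

    pygame.init()
    screen = pygame.display.set_mode((320, 240))
    screen.fill((0, 0, 0))
    pygame.display.flip()

From there the "GUI" is just blitting your own widgets and polling the
device's buttons through pygame's event queue.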

 I'm not really sure what the differences are between those two.  The
 latter seems to be a little more active.

Pygame is the way I've always done SDL stuff in python - never even
heard of PySDL!

-- 
Nick Craig-Wood [EMAIL PROTECTED] -- http://www.craig-wood.com/nick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Low-overhead GUI toolkit for Linux w/o X11?

2007-11-04 Thread Grant Edwards
On 2007-11-04, Nick Craig-Wood [EMAIL PROTECTED] wrote:

  So far, I've found two options that will work without X11:
 
   1) QTopia (nee QT/Embedded).  I assume that I can probably get
  PyQT to work with the embedded version of QT?
 
   2) PySDL or PyGame.

 We did a similar project recently.  We ended up using pygame
 and writing our own GUI.  For an embedded device you don't
 really want a general purpose GUI which needs a mouse, you
 want something specific which knows how many buttons the
 device has, what resolution the screen is etc.  It is not too
 difficult to make your own GUI in pygame to do exactly what
 you want for your embedded device.

Thanks for the pointer.  I'm starting to like pygame for this
project.  One of the application features needs to support some
simple animated graphics, and pygame's sprite support looks
like it might be a decent fit. I've been looking at the
different libraries listed on the pygame web site, and it looks
like there might be several that could make good starting GUI
points.

 I'm not really sure what the differences are between those
 two.  The latter seems to be a little more active.

 Pygame is the way I've always done SDL stuff in python - never
 even heard of PySDL!

PySDL doesn't seem to be nearly as active or high-profile as
pygame.

-- 
Grant Edwards
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python at command prompt

2007-11-04 Thread [david]
Tim Roberts wrote:
 Ton van Vliet [EMAIL PROTECTED] wrote:
 There's could also be an issue with entering 'python' at the command
 line, and not 'python.exe'. Once the PATH is setup correctly, try to
 enter 'python.exe', and check whether that works.

 IMHO, to get any 'program-name' (without the .exe extension) to work,
 one needs to:
 1. register the executable with windows (doesn't work for python) or
 2. make sure the the PATHEXT environment variable is set correctly,
 and includes the .EXE extension (on my w2k system it looks like:
 .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH)
 
 You're confusing two things here.  Executables (.exe) are always available,
 and do not need to be registered to be run without the extension.
 
 It is possible to have Windows execute abc.py when you type abc, and
 that DOES require registering the .py extension and adding .py to the
 PATHEXT environment variable.
 
 A very useful thing to do, by the way.  I have many command line tools for
 which I have forgotten whether they are batch files, small executables, or
 Python scripts.  And that's how it should be.

That is,

Executables (.exe) are always available,
... provided that the PATHEXT environment variable
has not been set incorrectly...

and
Executables ... do not need to be registered


[david]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __file__ vs __FILE__

2007-11-04 Thread Giampaolo Rodola'
On 3 Nov, 15:46, Gabriel Genellina [EMAIL PROTECTED] wrote:
 En Sat, 03 Nov 2007 10:07:10 -0300, Giampaolo Rodola' [EMAIL PROTECTED]  
 escribió:

  On 3 Nov, 04:21, klenwell [EMAIL PROTECTED] wrote:
  In PHP you have the __FILE__ constant which gives you the value of the
  absolute path of the file you're in (as opposed to the main script
  file.)
  This is not really 'one-line' since you have to import two modules
  first, but it looks nicer...:

  import sys, os
  print sys.argv[0] # absolute file name
  print os.path.dirname(sys.argv[0]) # absolute dir name

 Note that this returns the location of the *main* script, not the current  
 module, as the OP explicitely asked for.

 --
 Gabriel Genellina

Whoops! You're right.
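
For the record, the module-relative equivalent is short once os is
imported (a sketch; it works in any module that defines __file__):

    import os

    print os.path.abspath(__file__)                   # this module's file
    print os.path.dirname(os.path.abspath(__file__))  # its directory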

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Just Another Victim of the Ambient Morality

Neil Cerutti [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On 2007-11-04, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED] wrote:
 Neil Cerutti [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 I believe there's no cure for the confusion you're having except
 for implementing a parser for your proposed grammar.
 Alternatively, try implementing your grammar in one of your other
 favorite parser generators.

 I believe there is a cure and it's called recursive descent
 parsing. It's slow, obviously, but it's correct and, sometimes
 (arguably, often), that's more important the execution speed.

 I spent this morning whipping up a proof of concept parser
 whose interface greatly resembles pyparsing but, baring unknown
 bugs, works and works as I'd expect a recursive descent parser
 to work.  I don't know Python very well so the parser is pretty
 simple.  It only lexes single characters as tokens.  It only
 supports And, Or, Optional, OneOrMore and ZeroOrMore rules but
 I already think this is a rich set of rules.  I'm sure others
 can be added.  Finally, I'm not sure it's safely copying all
 its parameter input the same way pyparsing does but surely
 those bugs can be worked out.  It's merely a proof of concept
 to demonstrate a point.
 Everyone, please look it over and tell me what you think.
 Unfortunately, my news client is kind of poor, so I can't
 simply cut and paste the code into here.  All the tabs get
 turned into single spacing, so I will post this link, instead:

 http://theorem.ca/~dlkong/new_pyparsing.zip

 Your program doesn't necessarily address the ambiguity in the
 grammar in question, since right now it is only a recognizer.
 Will it be hard to get it to return a parse tree?

Hey, it's only a proof of concept.  If you can parse the tree, surely 
you can record what you parsed, right?
    Did you notice that the parse() functions have the rather serious bug of 
not returning how much of the string they could parse?  It just so happens 
that the constructions that I made only ever had to increment the matches 
by one, so they just happen to work.  That's an easy bug to fix but a pretty 
major one to have overlooked.  Hence, my enthusiasm for input...


 The grammar in your implementation is:

 >>> goal = OneOrMore(RuleAnd('a') | RuleAnd('b')) + RuleAnd('b')
 >>> goal.parse(0, 'ab')
 True
 >>> goal.parse(0, 'ba')
 False
 >>> goal.parse(0, 'b')
 False
 >>> goal.parse(0, 'aaab')
 True
 >>> goal.parse(0, 'abc')
 True

 So far so good. :)

Good!  Keep hammering at it!
More importantly, study it to understand the idea I'm trying to convey. 
This is what I thought a recursive descent parser would do...



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Kay Schluehr
On Nov 4, 10:44 pm, Just Another Victim of the Ambient Morality
[EMAIL PROTECTED]

 I believe there is a cure and it's called recursive descent parsing.
 It's slow, obviously, but it's correct and, sometimes (arguably, often),
 that's more important the execution speed.

Recursive descent parsing is not necessarily slow, but from your
remarks above I infer you want a general RD parser with backtracking:
when one rule doesn't match, try another one to derive the current
symbol in the input stream.
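
To make "with backtracking" concrete, here is an illustrative sketch (not
pyparsing's internals): each combinator's parse() returns every end position
it can reach, so when one alternative dead-ends the caller simply continues
with the others.

    class Lit(object):
        def __init__(self, ch):
            self.ch = ch
        def parse(self, s, pos):
            # every position reachable after matching one literal here
            if pos < len(s) and s[pos] == self.ch:
                return [pos + 1]
            return []

    class Or(object):
        def __init__(self, *alts):
            self.alts = alts
        def parse(self, s, pos):
            return [p for alt in self.alts for p in alt.parse(s, pos)]

    class And(object):
        def __init__(self, *parts):
            self.parts = parts
        def parse(self, s, pos):
            positions = [pos]
            for part in self.parts:
                positions = [q for p in positions for q in part.parse(s, p)]
            return positions

    class OneOrMore(object):
        def __init__(self, expr):
            self.expr = expr
        def parse(self, s, pos):
            results, frontier = [], self.expr.parse(s, pos)
            while frontier:
                results.extend(frontier)
                frontier = [q for p in frontier
                            for q in self.expr.parse(s, p)]
            return results

    # the '[ab]+b' grammar from this thread: accept iff some parse
    # consumes the whole string
    goal = And(OneOrMore(Or(Lit('a'), Lit('b'))), Lit('b'))
    print len('abb') in goal.parse('abb', 0)   # True
    print len('b') in goal.parse('b', 0)       # False

The cost is that the set of candidate positions can grow with the input,
which is exactly the exponential risk I'm worried about below.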

I'm not sure one needs to start again with a naive approach just to
avoid any parser theory. For a user of a parser it is quite important
whether she has to wait 50 seconds for a parse to run or 50
milliseconds. I don't like to compromise speed for implementation
simplicity here.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Just Another Victim of the Ambient Morality

Kay Schluehr [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On Nov 4, 10:44 pm, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED]

 I believe there is a cure and it's called recursive descent parsing.
 It's slow, obviously, but it's correct and, sometimes (arguably, often),
 that's more important the execution speed.

 Recursive decendent parsing is not necessarily slow but from your
 remarks above I infer you want a general RD parser with backtracking:
 when one rule doesn't match, try another one to derive the current
 symbol in the input stream.

    I think I've just discovered a major hurdle in my understanding of the 
problem.
    You keep saying "with backtracking."  Why?  Isn't backtracking 
inherent in recursion?  So, why can't these alleged recursive descent 
parsers find valid parsings?  How are they not already backtracking?  What 
was the point of being recursive if not to take advantage of the inherent 
backtracking in it?
    Obviously, these parsers aren't recursing through what I think they 
should be recursing.  The question is why not?

Correct me if I'm wrong but I'm beginning to think that pyparsing 
doesn't typically use recursion, at all.  It only employs it if you create 
one, using the Forward class.  Otherwise, it does everything iteratively, 
hence the lack of backtracking.


 I'm not sure one needs to start again with a naive approach just to
 avoid any parser theory. For a user of a parser it is quite important
 whether she has to wait 50 seconds for a parse to run or 50
 milliseconds. I don't like to compromise speed for implementation
 simplicity here.

    This attitude is all too prevalent among computer professionals...  Of 
course it's a useful thing to shield users from the intricacies of parser 
theory!  Just as much as it is useful to shield drivers from needing 
automotive engineering or software users from programming.  How many people 
have come to this newsgroup asking about anomalous pyparsing behaviour, 
despite their grammars being mathematically correct?
    Think of it this way.  You can force all the clients of pyparsing to 
duplicate work on figuring out how to massage pyparsing to their grammars, 
or you can do the work of getting pyparsing to solve people's problems, 
once.  That's what a library is supposed to do...
Finally, I can't believe you complain about potential speed problems. 
First, depending on the size of the string, it's likely to be the difference 
between 2ms and 200ms.  Secondly, if speed were an issue, you wouldn't go 
with a recursive descent parser.  You'd go with LALR or the many other 
parsing techniques available.  Recursive descent parsing is for those 
situations where you need correctness, regardless of execution time.  These 
situations happen...
I've said this before, albeit for a different language, but it applies 
to Python just as well.  I don't use Python to write fast code, I use it to 
write code fast.
    If _you_ don't like to compromise speed for implementation simplicity, 
then you have a plethora of choices available to you.  What about the guy who 
needs to parse correctly and is unconcerned about speed?




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Neil Cerutti
On 2007-11-05, Just Another Victim of the Ambient Morality [EMAIL PROTECTED] 
wrote:

 Neil Cerutti [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 On 2007-11-04, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED] wrote:
 Neil Cerutti [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 I believe there's no cure for the confusion you're having except
 for implementing a parser for your proposed grammar.
 Alternatively, try implementing your grammar in one of your other
 favorite parser generators.

 I believe there is a cure and it's called recursive descent
 parsing. It's slow, obviously, but it's correct and, sometimes
 (arguably, often), that's more important the execution speed.

 I spent this morning whipping up a proof of concept parser
 whose interface greatly resembles pyparsing but, baring unknown
 bugs, works and works as I'd expect a recursive descent parser
 to work.  I don't know Python very well so the parser is pretty
 simple.  It only lexes single characters as tokens.  It only
 supports And, Or, Optional, OneOrMore and ZeroOrMore rules but
 I already think this is a rich set of rules.  I'm sure others
 can be added.  Finally, I'm not sure it's safely copying all
 its parameter input the same way pyparsing does but surely
 those bugs can be worked out.  It's merely a proof of concept
 to demonstrate a point.
 Everyone, please look it over and tell me what you think.
 Unfortunately, my news client is kind of poor, so I can't
 simply cut and paste the code into here.  All the tabs get
 turned into single spacing, so I will post this link, instead:

 http://theorem.ca/~dlkong/new_pyparsing.zip

 Your program doesn't necessarily address the ambiguity in the
 grammar in question, since right now it is only a recognizer.
 Will it be hard to get it to return a parse tree?

 Hey, it's only a proof of concept.  If you can parse the tree, surely 
 you can record what you parsed, right?
 Did you notice that the parse() functions have the rather serious bug of 
 not returning how much of the string they could parse?  

Unfortunately I haven't had much time to play with it today; just
barely enough to put it through a very few paces.

 It just so happens that the contstructions that I made only
 ever had to increment the matches by one, so they just happen
 to work.  That's an easy bug to fix but a pretty major one to
 have overlooked.  Hence, my enthusiasm for input...

 The grammar in your implementation is:

 >>> goal = OneOrMore(RuleAnd('a') | RuleAnd('b')) + RuleAnd('b')
 >>> goal.parse(0, 'ab')
 True
 >>> goal.parse(0, 'ba')
 False
 >>> goal.parse(0, 'b')
 False
 >>> goal.parse(0, 'aaab')
 True
 >>> goal.parse(0, 'abc')
 True

 So far so good. :)

 Good!  Keep hammering at it!
 More importantly, study it to understand the idea I'm
 trying to convey. This is what I thought a recursive descent
 parser would do...

Kay has pointed out how it works. Strangely enough, I've never
studied a backtracking RDP before (trying to teach yourself a
subject like parsing can be tricky--I've had to somehow avoid all
the texts that overuse Greek letters--those incomprehensible
symbols confuse the hell out of me). It does simplify the job of
the grammar designer, but Kay's message makes it sound like it
won't scale very well.

It might, perhaps, be an interesting feature for PyParsing to
entertain by setting a 'backtracking' option, for when you're
writing a quick script and don't want to fuss too much with a
non-conformant grammar.

I'll have more time to look at it tomorrow.

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: IDLE

2007-11-04 Thread Russ P.
On Nov 3, 1:43 am, [EMAIL PROTECTED] wrote:
 Just curious: What makes you wish to move from emacs to idle?

I don't necessarily want to move from xemacs to idle. I'm just getting
tired of using print statements to debug, and I figure I'm well past
the stage where I should still be doing that. If I can use xemacs
*with* idle, I'll try that. Then again, if gdb works with python,
maybe I should be using that with xemacs.

The local Python interest group had a meeting to discuss these
development environments, but unfortunately I missed it. I hope they
do another one soon.






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Neil Cerutti
On 2007-11-05, Just Another Victim of the Ambient Morality [EMAIL PROTECTED] 
wrote:

 Kay Schluehr [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 On Nov 4, 10:44 pm, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED]

 I believe there is a cure and it's called recursive descent parsing.
 It's slow, obviously, but it's correct and, sometimes (arguably, often),
 that's more important the execution speed.

 Recursive decendent parsing is not necessarily slow but from your
 remarks above I infer you want a general RD parser with backtracking:
 when one rule doesn't match, try another one to derive the current
 symbol in the input stream.

 I think I've just discovered a major hurdle in my understand of the 
 problem.
 You keep saying with backtracking.  Why?  Isn't backtracking 
 inherent in recursion?  So, why can't these alleged recursive descent 
 parsers find valid parsings?  How are they not already backtracking?  What 
 was the point of being recursive if not to take advantage of the inherent 
 backtracking in it?
 Obviously, these parsers aren't recursing through what I think they 
 should be recursing.  The question is why not?

There are different kinds of recursion. Compare:

  def fac1(x, y=1):
      """ Compute factorials with a recursive function (it calls
      itself), but the stack is not actually used for storing
      anything important, i.e., it is tail-recursive. """
      if x < 0:
          raise ValueError('non-negative integer')
      elif x == 0:
          return y
      else:
          return fac1(x-1, y*x)

to

  def fac2(x):
      """ Computes factorials with a recursive process, keeping
      the state of the calculation on the stack. """
      if x < 0:
          raise ValueError('non-negative integer')
      if x == 0:
          return 1
      else:
          return fac2(x-1) * x

to

  def Ack(x, y):
      """ The Ackermann function. Creates a humongous mess even
      with quite tiny numbers. """
      if x < 0 or y < 0:
          raise ValueError('non-negative integer')
      elif x == 0:
          return y + 1
      elif y == 0:
          return foo3(x-1, 1)
      else:
          return foo3(x-1, foo3(x, y-1))

There's probably a word for the type of recursive process built
by fac2; the RDP's I'm most familiar with create a fac2 sort of
process, which stores valuable info on the stack.

And even though fac1 defines an iterative process, the code
itself is recursive, and you can call it a recursive function if
you wish (and in Python you might as well).

 Correct me if I'm wrong but I'm beginning to think that
 pyparsing doesn't typically use recursion, at all.  It only
 employs it if you create one, using the Forward class.
 Otherwise, it does everything iteratively, hence the lack of
 backtracking.

It's recursive because each production rule calls other
production rules to define itself. A rule regularly ends up
calling itself. Consider the Parser class I built earlier.
list_tail keeps calling itself to continue consuming characters
in an ab_list. The stack is used to keep track of where we are in
the grammar; at any time you can look up the stack and see how
you got where you are--you 'descend' down from the topmost
productions to the most primitive productions, and then back up
once everything has been sorted out. Take another look
at the exception raised in my Parsing class example for an
illustrative traceback.

 I'm not sure one needs to start again with a naive approach just to
 avoid any parser theory. For a user of a parser it is quite important
 whether she has to wait 50 seconds for a parse to run or 50
 milliseconds. I don't like to compromise speed for implementation
 simplicity here.

 Finally, I can't believe you complain about potential speed
 problems. First, depending on the size of the string, it's
 likely to be the difference between 2ms and 200ms.  Secondly,
 if speed were an issue, you wouldn't go with a recursive
 descent parser.  You'd go with LALR or the many other parsing
 techniques available.  Recursive descent parsing is for those
 situations where you need correctness, regardless of execution
 time.  These situations happen...

RDP is plenty fast; speed has never been one of its
disadvantages, as far as I know. Today there are many
excellent parser generators and compiler builders that compose an
RDP under the hood, e.g., Antlr and Gentle.

 I've said this before, albeit for a different language, but
 it applies to Python just as well.  I don't use Python to write
 fast code, I use it to write code fast.
 If _you_ don't like to compromise speed for implementation
 simplicity then you have a plethora choices available to you.
 What about the guy who needs to parse correctly and is
 unconcerned about speed?

You have to be concerned about speed when something runs so
slowly in common circumstances compared to other well-known
algorithms that you can't practically wait for an answer. Would
you consider bubble-sort a suitable general-purpose sorting
algorithm for Python?

-- 
Neil 

Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Neil Cerutti
On 2007-11-05, Neil Cerutti [EMAIL PROTECTED] wrote:
   def Ack(x, y):
       """ The Ackermann function. Creates a humongous mess even
       with quite tiny numbers. """
       if x < 0 or y < 0:
           raise ValueError('non-negative integer')
       elif x == 0:
           return y + 1
       elif y == 0:
           return foo3(x-1, 1)
       else:
           return foo3(x-1, foo3(x, y-1))

Urk! Of course those foo3 calls should have been Ack calls.

-- 
Neil Cerutti
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: IDLE

2007-11-04 Thread Russ P.
Thanks for the information on IDLE.

 As for your question, I couldn't quite understand what you're trying
 to do. In general, you can have the script use os.chdir() to go to the
 relevant directory and then open() the file, or you can use open()
 directly with a relative/full path to it. (This question isn't IDLE
 specific in any way, unless I misunderstood...)

I should have been clearer about what I'm trying to do. I have
approximately 100 directories, each corresponding to an incident (I
won't say what type of incident here). Each directory has many data
files on the incident, and the structure of all the directories is the
same (i.e., each has files of the same name). One of those files is an
input file that I wish to "replay" through my program, and the
results are recorded in an output data file.

To replay one incident, I would normally go to the directory for that
case and execute my program. I can specify the input and output files
explicitly, but I virtually never need to do so because they default
to the same file name in each directory. I also have a script that can
replay all 100 cases automatically.

I would like to do the same sort of thing with IDLE. I don't want to
have to specify the input and output files explicitly using absolute
pathnames from outside the directory for the incident. That would be
horrendously cumbersome after a few times.

I want to just cd to a directory and execute my program. But in my
first few tries, it seems that I need to be in the directory that
contains the source code -- not the directory that contains the data.
Am I missing something? Thanks.
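
(For concreteness, the kind of thing I'm after, with made-up paths --
change to the data directory first, then run the source from wherever it
lives:

    import os

    os.chdir('/data/incidents/case_042')   # directory with the data files
    execfile('/home/me/src/replay.py')     # program source is elsewhere

but done from within IDLE rather than a terminal.)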

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python good for data mining?

2007-11-04 Thread D.Hering
On Nov 3, 9:02 pm, Jens [EMAIL PROTECTED] wrote:
 I'm starting a project indatamining, and I'm considering Python and
 Java as possible platforms.

 I'm conserned by performance. Most benchmarks report that Java is
 about 10-15 times faster than Python, and my own experiments confirms
 this. I could imagine this to become a problem for very large
 datasets.

 How good is the integration with MySQL in Python?

 What about user interfaces? How easy is it to use Tkinter for
 developing a user interface without an IDE? And with an IDE? (which
 IDE?)

 What if I were to use my Python libraries with a web site written in
 PHP, Perl or Java - how do I integrate with Python?

 I really like Python for a number of reasons, and would like to avoid
 Java.

 Sorry - lot of questions here - but I look forward to your replies!


All of my programming is data centric. Data mining is foundational
therein. I started learning computer science via Python in 2003. I
too was concerned about its performance, especially considering my
need for literally trillions of iterations of financial data tables
with mathematical algorithms.

I then learned C and then C++. I am now coming home to Python, realizing
after my self-education that programming in Python is truly a pleasure
and the performance is not the concern I first considered it to be.
Here's why:

Python is very easily extended to near C speed. The idea that FINALLY
sunk in was that I should first program my ideas in Python WITHOUT
CONCERN FOR PERFORMANCE. Then, profile the application to find the
bottlenecks and extend those blocks of code to C or C++. Cython/
Pyrex/Sip are my preferences for python extension frameworks.
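
A minimal sketch of that profiling step with the stdlib profiler (the
function below is just a stand-in for a real hot spot):

    import cProfile, pstats

    def hot_loop(data):
        return sum(x * x for x in data)

    cProfile.run('hot_loop(range(1000000))', 'prof.out')
    pstats.Stats('prof.out').sort_stats('cumulative').print_stats(10)

Whatever shows up at the top of that listing is what gets rewritten in C,
Pyrex, etc.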

Numpy/Scipy are excellent libraries for optimized mathematical
operations. Pytables is my preferential python database because of
its excellent API to the acclaimed HDF5 database (used by very many
scientists and government organizations).

As for GUI framework, I have studied Qt intensely and would therefore,
very highly recommend PyQt.

After four years of intense study, I can say that with out a doubt,
Python is most certainly the way to go. I personally don't understand
why, generally, there is any attraction to Java, though I have yet to
study it further.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Just Another Victim of the Ambient Morality

Neil Cerutti [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On 2007-11-05, Just Another Victim of the Ambient Morality 
 [EMAIL PROTECTED] wrote:

 Kay Schluehr [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 On Nov 4, 10:44 pm, Just Another Victim of the Ambient Morality
 [EMAIL PROTECTED]

 I believe there is a cure and it's called recursive descent parsing.
 It's slow, obviously, but it's correct and, sometimes (arguably, 
 often),
 that's more important the execution speed.

 Recursive decendent parsing is not necessarily slow but from your
 remarks above I infer you want a general RD parser with backtracking:
 when one rule doesn't match, try another one to derive the current
 symbol in the input stream.

 I think I've just discovered a major hurdle in my understand of the
 problem.
 You keep saying with backtracking.  Why?  Isn't backtracking
 inherent in recursion?  So, why can't these alleged recursive descent
 parsers find valid parsings?  How are they not already backtracking? 
 What
 was the point of being recursive if not to take advantage of the inherent
 backtracking in it?
 Obviously, these parsers aren't recursing through what I think they
 should be recursing.  The question is why not?

 There are different kinds of recursion. Compare:

  def fac1(x, y=1):
 Compute factorials with a recursive function (it calls
itself), but the stack is not actually used for storing
anything important, i.e., it is tail-recursive. 
if x  0:
  raise ValueError('non-negative integer')
elif x == 0:
  return y
else:
  return fac1(x-1, y*x)

 to

  def fac2(x):
 Computes factorials with a recursive process, keeping
the state of the calculation on the stack. 
if x  0:
  raise ValueError('non-negative integer')
if x == 0:
  return 1
else:
  return fac2(x-1) * x

 to

  def Ack(x, y):
 The Ackermann function. Creates a humongous mess even
with quite tiny numbers. 
if x  0 or y  0:
  raise ValueError('non-negative integer')
elif x == 0:
  return y + 1
elif y == 0:
  return foo3(x-1, 1)
else:
  return foo3(x-1, foo3(x, y-1))

 There's probably a word for the type of recursive process built
 by fac2; the RDP's I'm most familiar with create a fac2 sort of
 process, which stores valuable info on the stack.

 And even though fac1 defines an iterative process, the code
 itself is recursive, and you can call it a recursive function if
 you wish (and in Python you might as well).

While interesting, none of this actually addresses the point I was 
making.  I wasn't saying that there was no recursion (at least, not in this 
paragraph), I was saying that it wasn't recursing through what I thought it 
should be recursing through.  It recurses through a set of rules without any 
regard to how these rules interact with each other.  That's why it fails to 
parse valid strings.  In my opinion, it should recurse through appropriate 
combinations of rules to determine validity, rather than by arbitrary 
categorization...


 Correct me if I'm wrong but I'm beginning to think that
 pyparsing doesn't typically use recursion, at all.  It only
 employs it if you create one, using the Forward class.
 Otherwise, it does everything iteratively, hence the lack of
 backtracking.

 It's recursive because each production rule calls other
 production rules to define itself. A rule regularly ends up
 calling itself. Consider the Parser class I built earlier.
 list_tail keeps calling itself to continue consuming characters
 in an ab_list. The stack is used to keep track of where we are in
 the grammar; at any time you can look up the stack and see how
 you got where you are--you 'descend' down from the topmost
 productions to the most primitive productions, and then back up
 once everything has been sorted out. Take another look
 at the exception raised in my Parsing class example for an
 illustrative traceback.

    I guess that the And and Or classes in pyparsing do call methods of 
each other, even if they do so from different instantiations.  I still say 
they're not recursing through the right things...


 I'm not sure one needs to start again with a naive approach just to
 avoid any parser theory. For a user of a parser it is quite important
 whether she has to wait 50 seconds for a parse to run or 50
 milliseconds. I don't like to compromise speed for implementation
 simplicity here.

 Finally, I can't believe you complain about potential speed
 problems. First, depending on the size of the string, it's
 likely to be the difference between 2ms and 200ms.  Secondly,
 if speed were an issue, you wouldn't go with a recursive
 descent parser.  You'd go with LALR or the many other parsing
 techniques available.  Recursive descent parsing is for those
 situations where you need correctness, regardless of execution
 time.  These situations happen...

 RDP is plenty 

Funny quote

2007-11-04 Thread John Salerno
Hi all. Thought you might get a kick out of this if you haven't heard it 
before. I have to admit, not being either, I don't quite fully 
understand it, but intuitively I do. :)

---
André Bensoussan once explained to me the difference between a 
programmer and a designer:

If you make a general statement, a programmer says, 'Yes, but...'
while a designer says, 'Yes, and...'
---
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: bsddb in python 2.5.1

2007-11-04 Thread Martin v. Löwis
 I have two versions of bsddb3 installed (only one is active) this is
 from /usr/lib/python2.5/site-packages:
 drwxr-xr-x  3 root root   4096 2007-11-03 15:01 bsddb3
 -rw-r--r--  1 root root905 2007-11-03 15:39 bsddb3-4.4.2.egg-info
 -rw-r--r--  1 root root905 2007-11-03 15:49 bsddb3-4.5.0.egg-info

This makes two installations, but how do you know that they really link
with two different versions of Berkeley DB?

 I'm really at a loss of what I could do, except for reverting
 back to Ubuntu 7.05. 

What's wrong with upgrading the database files to 4.5? Just run the
proper db_upgrade binary.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


achieving performance using C/C++

2007-11-04 Thread sandipm
I did a fair amount of programming in Python, but I have never used C/C++ as
mentioned below.
Any good tutorials on using C/C++ to optimize a Python codebase for
performance?
How widely are such mixed-language coding practices used?

sandip

-- Forwarded message --
From: D.Hering
.
.
.
.
Python is very easily extended to near C speed. The idea that FINALLY
sunk in was that I should first program my ideas in Python WITHOUT
CONCERN FOR PERFORMANCE. Then, profile the application to find the
bottlenecks and extend those blocks of code to C or C++. Cython/
Pyrex/Sip are my preferences for Python extension frameworks.
.
.
.
.
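
A rough sketch of the profile-first step described above (main() is a
placeholder for the real entry point; cProfile and pstats are in the
standard library):

import cProfile
import pstats

def main():
    # placeholder for the real application entry point
    pass

cProfile.run('main()', 'app.prof')
stats = pstats.Stats('app.prof')
stats.sort_stats('cumulative').print_stats(10)   # the ten biggest time sinks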

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: opinion - file.readlines blemish

2007-11-04 Thread Gabriel Genellina
En Sat, 03 Nov 2007 15:00:46 -0300, Ken Seehart [EMAIL PROTECTED] escribió:

 *newlines*

 If Python was built with the --with-universal-newlines option to
 configure (the default) this read-only attribute exists, and for
 files opened in universal newline read mode it keeps track of the
 types of newlines encountered while reading the file. The values it
 can take are '\r', '\n', '\r\n', None (unknown, no newlines read yet)
 or a tuple containing all the newline types seen, to indicate that
 multiple newline conventions were encountered. For files not opened
 in universal newline read mode the value of this attribute will be
 None.


 It seems immediately obvious to me that the return value should always
 be a tuple, consisting of zero or more strings.  If built with universal
 newlines, it should return ('\n',) if a newline was found.

 Perhaps there was some blurry contemplation that the None return value
 could identify that *universal-newlines* is enabled, but this is blurry
 because None could also just mean that no newlines have been read.
 Besides, it's not a good idea to stuff in extra semantics like that.
 Better would be a separate way to identify *universal-newlines* mode.

I don't fully understand your concerns.
If Python was compiled with universal newlines support, the newlines
attribute exists; if not, it doesn't exist.
If it exists, and a certain particular file was opened in universal newline
mode ('rU' for example), None means no end of line read yet; any other
value represents the end-of-line markers already seen. If the file was not
opened in universal newline mode, that attribute will always be None. You
usually know whether the file was opened in universal newline mode or not,
but in any case, you can always look at the file's mode attribute.

As to why it is not always a tuple, even with 0 or 1 elements: I can guess
that it's some kind of optimization. Most programs don't even care about this
attribute, and most files will contain only one kind of end-of-line
marker, so None and a single string would be the most common values. Why
build a tuple (it's not free) when it's not needed most of the time?
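
A short illustration of the behaviour described above (a hedged sketch;
'data.txt' is a hypothetical file and the printed values depend on its
actual line endings):

f = open('data.txt', 'rU')   # universal newline read mode
print f.newlines             # None: nothing has been read yet
line = f.readline()
print f.newlines             # e.g. '\n', '\r\n', or a tuple if several kinds were seen
print f.mode                 # 'rU', so you can also tell how the file was opened
f.close()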

-- 
Gabriel Genellina

-- 
http://mail.python.org/mailman/listinfo/python-list

PyObjC with Xcode 3.0 Leopard

2007-11-04 Thread flyfree
I got an error while building a Python application with Xcode 3.0 on OS
X Leopard.

(KeyError: 'NSUnknownKeyException - [NSObject 0xd52fe0
valueForUndefinedKey:]: this class is not key value coding-compliant
for the key calculatedMean.')

The application is the simple example of how to use PyObjC with
Xcode 2.0:
http://developer.apple.com/cocoa/pyobjc.html

Could you give me any hints on where to start digging into it?

-- 
http://mail.python.org/mailman/listinfo/python-list


Descriptors and side effects

2007-11-04 Thread mrkafk
Hello everyone,

I'm trying to do a seemingly trivial thing with descriptors: have
another attribute updated on dot access in an object defined using
descriptors.

For example, let's take a simple case where you set an attribute s
to a string and have another attribute l set automatically to its
length.

class Desc(str):
    def __init__(self, val):
        self.s = val
        self.l = len(val)
        print "creating value: ", self.s
        print "id(self.l)", id(self.l)
    def __set__(self, obj, val):
        self.s = val
        self.l = len(val)
        print "setting value:", self.s, "length:", self.l
    def __get__(self, obj, type=None):
        print "getting value:", self.s, "length:", self.l
        return self.l


class some(str):
    m = Desc('abc')
    l = m.l


creating value:  abc
id(self.l) 10049688
>>> ta = some()
>>> ta.m = 'test string'
setting value: test string length: 11

However, the attribute ta.l didn't get updated:

>>> ta.l
3

What is even weirder is that the object id of ta.l is the same as the id of
the l attribute on the descriptor instance:

>>> id(ta.l)
10049688

A setter function should have updated self.l just like it updated
self.s:

def __set__(self, obj, val):
    self.s = val
    self.l = len(val)
    print "setting value:", self.s, "length:", self.l

Yet it didn't happen.

From my POV, the main benefit of a descriptor lies in its side effects:
on dot access (getting/setting) I can get other attributes updated
automatically: say, in a class of Squares I get the area automatically
updated when the side is updated, etc.

Yet I'm struggling with getting it done in Python. Descriptors are a
great idea, but I would like to see them implemented in Python in a
way that makes it easier to get desirable side effects.
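
For what it's worth, a hedged sketch of one way to get the intended side
effect: keep the state on the owner instance rather than on the descriptor,
which is shared by every instance of the class (Tracked, _s and _l are
invented names):

class Tracked(object):
    """Descriptor that stores the string on the owner instance and
    keeps a length attribute in sync as a side effect of __set__."""
    def __set__(self, obj, val):
        obj._s = val
        obj._l = len(val)          # the side effect: keep the length updated
    def __get__(self, obj, type=None):
        if obj is None:
            return self
        return obj._s

class Some(object):
    m = Tracked()
    l = property(lambda self: self._l)

ta = Some()
ta.m = 'test string'
print ta.l    # 11
print ta.m    # 'test string'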

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is pyparsing really a recursive descent parser?

2007-11-04 Thread Kay Schluehr
On Nov 5, 3:05 am, Just Another Victim of the Ambient Morality
[EMAIL PROTECTED] wrote:
 Kay Schluehr [EMAIL PROTECTED] wrote in message

 news:[EMAIL PROTECTED]

  On Nov 4, 10:44 pm, Just Another Victim of the Ambient Morality
  [EMAIL PROTECTED]

  I believe there is a cure and it's called recursive descent parsing.
  It's slow, obviously, but it's correct and, sometimes (arguably, often),
  that's more important than execution speed.

  Recursive descent parsing is not necessarily slow but from your
  remarks above I infer you want a general RD parser with backtracking:
  when one rule doesn't match, try another one to derive the current
  symbol in the input stream.

 I think I've just discovered a major hurdle in my understanding of the
 problem.
 You keep saying "with backtracking."  Why?  Isn't backtracking
 inherent in recursion?  So, why can't these alleged recursive descent
 parsers find valid parsings?  How are they not already backtracking?  What
 was the point of being recursive if not to take advantage of the inherent
 backtracking in it?
 Obviously, these parsers aren't recursing through what I think they
 should be recursing.  The question is why not?

Backtracking and RD parsers are two different issues. An RD parser
keeps a grammar which is structured tree-like. So one feeds a start
symbol into the parser and the parser tries to derive a symbol of
the input stream by descending down along the tree of non-terminals.
For each non-terminal the parser calls itself because it is
essentially a recursive datastructure it iterates through.

Backtracking comes into play when the grammar is not properly left
factored. This doesn't necessarily mean it is ambiguous. Take for
example the following grammar:

S: A A 'foo' | A A 'bar'

The parser cannot immediately decide whether to take the left or the
right branch of the RHS.  So it will check out the left branch and when
it fails to derive 'foo' in the input stream it tries to select the
right branch. But the grammar is *not* ambiguous: there is always a
unique parse tree that can be derived ( or none ).

Parser theory is mostly concerned with finding strategies to avoid
backtracking ( and resolving ambiguities ) because it slows parsers
down. One always wants to have parser effort that depends linearly on
the size of the input stream.
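
To make that concrete, a rough sketch (not from the thread) of a backtracking
recursive descent parser for the toy grammar above, assuming A matches a
single 'a' token:

def parse_A(tokens, pos):
    # A matches one 'a' token (an assumption made for this sketch)
    if pos < len(tokens) and tokens[pos] == 'a':
        return pos + 1
    return None

def parse_S(tokens, pos=0):
    # S: A A 'foo' | A A 'bar' -- try each alternative, rewinding on failure
    for terminal in ('foo', 'bar'):
        p = parse_A(tokens, pos)
        if p is not None:
            p = parse_A(tokens, p)
        if p is not None and p < len(tokens) and tokens[p] == terminal:
            return p + 1
        # this alternative failed: fall through and retry from the saved pos
    return None

print parse_S(['a', 'a', 'bar'])   # 3: parsed, after the 'foo' branch was rejected
print parse_S(['a', 'a', 'baz'])   # None: no branch matches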


  I'm not sure one needs to start again with a naive approach just to
  avoid any parser theory. For a user of a parser it is quite important
  whether she has to wait 50 seconds for a parse to run or 50
  milliseconds. I don't like to compromise speed for implementation
  simplicity here.

 This attitude is all too prevalent among computer professionals...  Of
 course it's a useful thing to shield users from the intricacies of parser
 theory!

Yes, but people who develop parser generators should know at least a bit
about it  ;)

 Just as much as it is useful to shield drivers from needing
 automotive engineering or software users from programming.  How many people
 have come to this newsgroup asking about anomalous pyparsing behaviour,
 despite their grammars being mathematically correct?
 Think of it this way.  You can force all the clients of pyparsing to
 duplicate work on figuring out how to massage pyparsing to their grammars,
 or you can do the work of getting pyparsing to solve people's problems,
 once.  That's what a library is supposed to do...
 Finally, I can't believe you complain about potential speed problems.
 First, depending on the size of the string, it's likely to be the difference
 between 2ms and 200ms.  Secondly, if speed were an issue, you wouldn't go
 with a recursive descent parser.

ANTLR is recursive descent, and so are packrat parsers. Python uses an
extremely optimized recursive descent, table-based parser for
parsing the language. The parser flies and it is not LALR. Grammars
are more accessible in EBNF style and simpler to write, so many people
( like me ) prefer RD parsers and seek ways to optimize them. As
always there is a separation of concerns. The software engineering
aspects of modularization, interface abstraction, library / framework
implementation and the algorithmic aspects. I'd like to see both being
addressed by a parser generator architecture.


-- 
http://mail.python.org/mailman/listinfo/python-list


[issue1377] test_import breaks on Linux

2007-11-04 Thread Christian Heimes

Christian Heimes added the comment:

I'm going to disable the test for now.

--
keywords: +py3k
resolution:  -> accepted
superseder:  -> Crash on Windows if Python runs from a directory with umlauts

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1377
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1342] Crash on Windows if Python runs from a directory with umlauts

2007-11-04 Thread Christian Heimes

Christian Heimes added the comment:

I've checked in part of the patch in r58837. It doesn't solve the
problem but at least it prevents Python from seg faulting on Windows.

--
keywords: +py3k, rfe
priority:  -> high
resolution:  -> accepted

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1342
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1379] reloading imported modules sometimes fail with 'parent not in sys.modules' error

2007-11-04 Thread Paul Pogonyshev

Paul Pogonyshev added the comment:

Thank you for the commit.

I just had a problem with my package, and since I was not sure if it was
a bug in Py3k or the package, I went to debugging the former and found
this.  I just didn't know how to work with Unicode strings properly.

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1379
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1772916] xmlrpclib crash when PyXML installed - sgmlop is available

2007-11-04 Thread Grzegorz Makarewicz

Grzegorz Makarewicz added the comment:

Minimalistic test crash (python 2.5 cvs, sgmlop cvs(pyxml)) - compiled
with msvc 2005, where after 10 loops ms-debugger is invoked:
data='''\
<?xml version="1.0"?>
<methodCall>
  <methodName>mws.ScannerLogout</methodName>
  <params>
    <param>
      <value>
        <i4>7</i4>
      </value>
    </param>
  </params>
</methodCall>
'''

import xmlrpclib

def main():
    i = 1
    while 1:
        print i
        params, method = xmlrpclib.loads(data)
        i += 1
main()

--
components: +Extension Modules -Library (Lib)
nosy: +mak -alanmcintyre, effbot, ldeller, loewis
title: xmlrpclib crash when PyXML installed -> xmlrpclib crash when PyXML 
installed - sgmlop is available
versions: +3rd party -Python 2.5

_
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1772916
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1378] fromfd() and dup() for _socket on WIndows

2007-11-04 Thread roudkerk

Changes by roudkerk:


--
versions: +Python 2.6 -Python 2.5

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1378
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1210] imaplib does not run under Python 3

2007-11-04 Thread Christian Heimes

Changes by Christian Heimes:


--
keywords: +py3k
priority:  -> normal
resolution:  -> accepted

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1210
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1381] cmath is numerically unsound

2007-11-04 Thread Mark Dickinson

Mark Dickinson added the comment:

I took a look at this a while back, and got as far as writing a pure 
Python drop-in replacement for cmath, based on Kahan's branch cuts for 
elementary functions paper.  This fixes a variety of problems in cmath, 
including the buggy branch cuts for asinh.  File attached, in case it's 
of any use.

As Tim Peters pointed out, the real problem here is a lack of decent 
unit tests for these functions.  I have tests for the file above, but 
they assume IEEE754 floats, which is probably not an acceptable 
assumption in general.

--
nosy: +marketdickinson
Added file: http://bugs.python.org/file8685/cmath_py.py

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1381
__
This module provides exponential, logarithmic, trigonometric,
inverse trigonometric, hyperbolic and inverse hyperbolic functions for
complex numbers.  It is intended as a drop-in replacement for the
cmath module in the standard library.  For the most part, it uses
formulas and methods described by W. Kahan in his `Branch cuts for
elementary functions' paper.

Design goals
------------

 - make all functions numerically sound;  both the real part and the
   imaginary part should be within a few ulps of the true result,
   where this is reasonable.  (But see the note on accuracy below.)
 - avoid unnecessary overflows in intermediate expressions, when
   the final result is representable.
 - fix buggy branch cuts in asinh
 - do the 'right thing' with respect to signed zeros on platforms that
   follow IEEE754 and C99/IEC 60559.  In particular, all branch cuts
   should be continuous from both sides in this case.
 - don't do anything unreasonable on platforms that don't support
   signed zeros, or have signed zero support that doesn't comply fully
   with the above standards.  With no signed zeros, continuity at branch cuts
   should match the documentation.  Behaviour here is untested.
 
Non-design goals
----------------

 - do the right thing with NaNs and infinities.  It's hard to do this portably.
   I believe that many of the routines below do actually do the right thing
   in the presence of NaNs, but this is mostly accidental.
 - (related to the above): give sensible and consistent exceptions.  Again,
   it seems difficult to do this across platforms.
 - produce results that are provably accurate to within 1ulp.

Note on accuracy
----------------

In an ideal world, the complex-valued function f(z) would return the
closest representable complex number to the true mathematical value of
f(z): that is, both the real and imaginary part of the result would be
accurate to within <= 0.5ulp.  Achieving this level of accuracy is
very hard---most C math libraries don't manage it.  (But see the
crlibm and MPFR libraries.)  A slightly more realistic goal is 1ulp.

In practice, one might hope that the returned real and imaginary parts
are always within a few ulps (say 10 or 20) of those of the closest
representable value.  But even this is an unrealistic goal in some
situations.  For example, let z be the complex number:

0.79993338661852249060757458209991455078125 +
0.600088817841970012523233890533447265625j

which is exactly representable, assuming IEEE doubles are being used
for the real and imaginary parts.  The nearest representable complex
number to the natural logarithm of z is:

6.16297582203915472977912941627176741932192527428924222476780414581298828125e-33
+ 2.4980915447965088560522417537868022918701171875j

It would take a lot of effort to get the real part anywhere near
correct here.  Other problems occur in computing trigonometric
functions for complex numbers z with large real part, or hyperbolic
trig functions for z with large imaginary part.

Notes on signed zeros
---------------------

There are many subtle difficulties here: for example, the expression
1 + z should add 1 to the real part of z and leave the imaginary
part untouched.  But in Python, if the imaginary part of z is -0.
then the imaginary part of 1+z will be +0: 1 gets coerced to the
complex number 1 + 0j, and then the imaginary parts get added to give
-0j + 0j = 0j.

But z - 1 always does the right thing:  subtracting +0. won't
change the sign of zero.  Similarly, z + (-1.) is fine.

So we can either work with the underlying reals directly, or rewrite
the erroneous 1+z as -(-z-1), which works.  Similarly, 1-z should be
rewritten as -(z-1).  An alternative fix is to use the complex
number 1 - 0j (note negative sign) in place of 1 in the expressions
1+z and 1-z.
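
A quick check of the coercion behaviour just described (a hedged sketch,
assuming a platform with IEEE-754 signed zeros; atan2 is used only to make
the sign of the zero visible):

import math

z = complex(1.0, -0.0)
print math.atan2((1 + z).imag, -1.0)       # +pi: 1 coerces to 1+0j, the -0 is lost
print math.atan2((-(-z - 1)).imag, -1.0)   # -pi: the rewritten form keeps the -0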

Similarly, the expression i*z is special-cased so that
 i*(x+i*y) = -y + i*x;  see the function mul_by_j.

The code below should `do the right thing'
regardless of whether signed zeros are present.  In particular:

- on a platform (hardware + C math library) that supports signed
zeros, so that for example:

  atan2(-0., positive) gives -0.
  atan2(0., positive) gives 0.
  atan2(-0., 

[issue1382] py3k-pep3137: patch for test_ctypes

2007-11-04 Thread Amaury Forgeot d'Arc

New submission from Amaury Forgeot d'Arc:

This patch corrects test_ctypes in the py3k-pep3137 branch.
Replacing PyBytes_* by PyString_* was 99% of the task.

Also had to modify binascii, which used to return buffers instead of
bytes strings.

Tested on winXP.

--
components: Tests
files: ctypes3.diff
messages: 57099
nosy: amaury.forgeotdarc, gvanrossum, tiran
severity: normal
status: open
title: py3k-pep3137: patch for test_ctypes
versions: Python 3.0
Added file: http://bugs.python.org/file8686/ctypes3.diff

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1382
__

ctypes3.diff
Description: Binary data
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1744580] cvs.get_dialect() return a class object

2007-11-04 Thread Skip Montanaro

Skip Montanaro added the comment:

I changed the documentation for 2.5 and 2.6 to reflect the change in 
semantics.  r58840 and r58841.  Have a look and let me know if that looks 
reasonable.

--
status: open -> pending
title: cvs.get_dialect() return a class object  -> cvs.get_dialect() return a 
class object

_
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1744580
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1431091] CSV Sniffer fails to report mismatch of column counts

2007-11-04 Thread Skip Montanaro

Skip Montanaro added the comment:

This appears to work better in 2.5 and 2.6 (it doesn't crash, though it 
gets the delimiter wrong) but does indeed fail in 2.4.

--
nosy: +skip.montanaro

_
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1431091
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1382] py3k-pep3137: patch for test_ctypes

2007-11-04 Thread Christian Heimes

Christian Heimes added the comment:

Applied in r58843.

Thank you very much!

--
keywords: +patch, py3k
resolution:  -> fixed
status: open -> closed

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1382
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1378] fromfd() and dup() for _socket on WIndows

2007-11-04 Thread Guido van Rossum

Changes by Guido van Rossum:


--
nosy: +gvanrossum

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1378
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1381] cmath is numerically unsound

2007-11-04 Thread Martin v. Löwis

Martin v. Löwis added the comment:

It would be ok if a test is only run on a system with IEEE floats, and
skipped elsewhere. For all practical purposes, Python assumes that all
systems have IEEE float.
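
A minimal sketch of such a guard (assuming CPython 2.5 or later, where
float.__getformat__ reports the C double layout; the test class and its
assertion are placeholders):

import unittest

IEEE_DOUBLES = float.__getformat__('double').startswith('IEEE')

class CmathTests(unittest.TestCase):       # hypothetical test case
    def test_branch_cuts(self):
        if not IEEE_DOUBLES:
            return                         # effectively skip on non-IEEE systems
        self.assertEqual(complex(1.0, 2.0).real, 1.0)   # placeholder assertion

if __name__ == '__main__':
    unittest.main()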

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1381
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1384] Windows fix for inspect tests

2007-11-04 Thread Christian Heimes

New submission from Christian Heimes:

The patch lower()s the file names on Windows. The tests break on my
system because C:\\... != c:\\...

--
files: py3k_inspect.patch
keywords: patch, py3k
messages: 57105
nosy: tiran
severity: normal
status: open
title: Windows fix for inspect tests
versions: Python 3.0
Added file: http://bugs.python.org/file8688/py3k_inspect.patch

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1384
__
Index: Lib/test/test_inspect.py
===
--- Lib/test/test_inspect.py	(revision 58843)
+++ Lib/test/test_inspect.py	(working copy)
@@ -1,3 +1,4 @@
+import os
 import sys
 import types
 import unittest
@@ -4,6 +5,7 @@
 import inspect
 import datetime
 import collections
+import __builtin__
 
 from test.test_support import TESTFN, run_unittest
 
@@ -21,8 +23,16 @@
 if modfile.endswith(('c', 'o')):
     modfile = modfile[:-1]
 
-import __builtin__
+# On Windows some functions may return C:\\path\\to\\file with a lower case
+# 'c:\\'.
+if os.name == 'nt':
+    modfile = modfile.lower()
 
+def revise(*args):
+    if os.name == 'nt':
+        return (args[0].lower(),) + args[1:]
+    return args
+
 try:
     1/0
 except:
@@ -88,22 +98,22 @@
 
 def test_stack(self):
 self.assert_(len(mod.st) >= 5)
-self.assertEqual(mod.st[0][1:],
+self.assertEqual(revise(*mod.st[0][1:]),
  (modfile, 16, 'eggs', ['st = inspect.stack()\n'], 0))
-self.assertEqual(mod.st[1][1:],
+self.assertEqual(revise(*mod.st[1][1:]),
  (modfile, 9, 'spam', ['eggs(b + d, c + f)\n'], 0))
-self.assertEqual(mod.st[2][1:],
+self.assertEqual(revise(*mod.st[2][1:]),
  (modfile, 43, 'argue', ['spam(a, b, c)\n'], 0))
-self.assertEqual(mod.st[3][1:],
+self.assertEqual(revise(*mod.st[3][1:]),
  (modfile, 39, 'abuse', ['self.argue(a, b, c)\n'], 0))
 
 def test_trace(self):
 self.assertEqual(len(git.tr), 3)
-self.assertEqual(git.tr[0][1:], (modfile, 43, 'argue',
+self.assertEqual(revise(*git.tr[0][1:]), (modfile, 43, 'argue',
  ['spam(a, b, c)\n'], 0))
-self.assertEqual(git.tr[1][1:], (modfile, 9, 'spam',
+self.assertEqual(revise(*git.tr[1][1:]), (modfile, 9, 'spam',
  ['eggs(b + d, c + f)\n'], 0))
-self.assertEqual(git.tr[2][1:], (modfile, 18, 'eggs',
+self.assertEqual(revise(*git.tr[2][1:]), (modfile, 18, 'eggs',
  ['q = y / 0\n'], 0))
 
 def test_frame(self):
@@ -198,8 +208,8 @@
 self.assertSourceEqual(mod.StupidGit, 21, 46)
 
 def test_getsourcefile(self):
-self.assertEqual(inspect.getsourcefile(mod.spam), modfile)
-self.assertEqual(inspect.getsourcefile(git.abuse), modfile)
+self.assertEqual(inspect.getsourcefile(mod.spam).lower(), modfile)
+self.assertEqual(inspect.getsourcefile(git.abuse).lower(), modfile)
 
 def test_getfile(self):
 self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__)
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1385] hmac module violates RFC for some hash functions, e.g. sha512

2007-11-04 Thread Joachim Wagner

New submission from Joachim Wagner:

(First time submitting a patch to this system.)
The hmac module uses a fixed blocksize of 64 bytes. This is fine for 
many hash functions like md5, sha1 and sha256, but not for sha512 or 
in the general case. The RFC referenced in the python documentation 
specifies that the blocksize has to match the hash function. The 
attached patch is the first of three proposed solutions:

1. use the undocumented block_size attribute of the hashing objects 
provided in the hashlib modules and fallback to 64 bytes if the 
attribute is missing (maybe a depreciated warning would be better); in 
this case it would be a good idea to document to block_size attribute 
(not included in the patch attached); performance could be improved by 
making block_size a class attribute

2. document that the blocksize is 64 and that the RFC is only 
correctly implemented if the hash function also has a blocksize of 64 
bytes; optionally include the workaround of subclassing hmac.HMAC and 
overwriting the blocksize, as sketched after this list (this is documented 
in the source code, but unfortunately not in the Python docs)

3. make the blocksize a keyword argument to the constructor and 
document that it has to match the hash function's blocksize for full 
RFC compliance
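
A minimal sketch of the subclassing workaround mentioned in option 2
(assuming Python 2.x's hmac.HMAC, which reads a class-level blocksize
attribute; the key and message are placeholders):

import hmac
import hashlib

class HMAC_SHA512(hmac.HMAC):
    # sha512 processes 128-byte blocks, so override the module default of 64
    blocksize = 128

mac = HMAC_SHA512('secret key', 'a message', hashlib.sha512)
print mac.hexdigest()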

Regards,
Joachim

--
components: None
files: hmac_1.patch
messages: 57106
nosy: jowagner
severity: normal
status: open
title: hmac module violates RFC for some hash functions, e.g. sha512
type: behavior
versions: Python 3.0
Added file: http://bugs.python.org/file8689/hmac_1.patch

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1385
__
--- hmac.orig	2007-11-04 17:44:46.0 +
+++ hmac.py	2007-11-04 18:31:39.0 +
@@ -48,7 +48,15 @@
         self.inner = self.digest_cons()
         self.digest_size = self.inner.digest_size
 
-        blocksize = self.blocksize
+        try:
+            blocksize = self.digest_cons().block_size
+            if blocksize < 16:
+                # very low blocksize
+                # probably a legacy value like in Lib/sha.py
+                blocksize = self.blocksize
+        except AttributeError:
+            blocksize = self.blocksize
+
         if len(key) > blocksize:
             key = self.digest_cons(key).digest()
 
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1383] Backport abcoll to 2.6

2007-11-04 Thread Martin v. Löwis

Changes by Martin v. Löwis:


--
keywords: +patch

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1383
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1385] hmac module violates RFC for some hash functions, e.g. sha512

2007-11-04 Thread Martin v. Löwis

Changes by Martin v. Löwis:


--
keywords: +patch

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1385
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1383] Backport abcoll to 2.6

2007-11-04 Thread Georg Brandl

Georg Brandl added the comment:

Is this a successor or a companion to #1026?

--
nosy: +georg.brandl

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1383
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1383] Backport abcoll to 2.6

2007-11-04 Thread Benjamin Aranguren

Benjamin Aranguren added the comment:

This is a companion to #1026.

On 11/4/07, Georg Brandl [EMAIL PROTECTED] wrote:

 Georg Brandl added the comment:

 Is this a successor or a companion to #1026?

 --
 nosy: +georg.brandl

 __
 Tracker [EMAIL PROTECTED]
 http://bugs.python.org/issue1383
 __


__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1383
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1386] py3k-pep3137: patch to ensure that all codecs return bytes

2007-11-04 Thread Amaury Forgeot d'Arc

New submission from Amaury Forgeot d'Arc:

Most codecs return buffer objects, whereas the rule is now to return bytes.
This patch adds a test, and corrects the failing codecs.
(more PyBytes_* -> PyString_* replacements)

--
components: Unicode
files: codecs.diff
messages: 57109
nosy: amaury.forgeotdarc, tiran
severity: normal
status: open
title: py3k-pep3137: patch to ensure that all codecs return bytes
versions: Python 3.0
Added file: http://bugs.python.org/file8690/codecs.diff

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1386
__

codecs.diff
Description: Binary data
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1387] py3k-pep3137: patch for hashlib on Windows

2007-11-04 Thread Amaury Forgeot d'Arc

New submission from Amaury Forgeot d'Arc:

On Windows, openssl is not always available; in this case Python uses
its own implementation of md5, sha1 & co.
This patch corrects the failing tests (test_hashlib and test_uuid) by
returning bytes instead of buffers.

--
components: Windows
files: hashlib.diff
messages: 57110
nosy: amaury.forgeotdarc, tiran
severity: normal
status: open
title: py3k-pep3137: patch for hashlib on Windows
versions: Python 3.0
Added file: http://bugs.python.org/file8691/hashlib.diff

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1387
__

hashlib.diff
Description: Binary data
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1386] py3k-pep3137: patch to ensure that all codecs return bytes

2007-11-04 Thread Christian Heimes

Christian Heimes added the comment:

Applied in r58848. Thanks for removing the annoying warnings!

A small request: Please use self.assert_() and its friends instead of
assert() in unit tests.

--
keywords: +patch, py3k
resolution:  -> fixed
status: open -> closed

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1386
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1387] py3k-pep3137: patch for hashlib on Windows

2007-11-04 Thread Christian Heimes

Christian Heimes added the comment:

Thanks!

Applied in r58847.

--
keywords: +patch, py3k
resolution:  -> fixed
status: open -> closed

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1387
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


