[ANNOUNCE] Pygtksourceview 2.6.0

2009-03-22 Thread Gian Mario Tagliaretti
I am pleased to announce version 2.6.0 of the Gtksourceview Python bindings.

Once the mirrors have synced correctly it will be available at:

http://ftp.gnome.org/pub/GNOME/sources/pygtksourceview/2.6/

The bindings are updated for the new Gtksourceview API.

News in 2.6.0
=============

  o new stable release

Blurb:
======

gtksourceview is a library that provides a widget for source code display
and editing, derived from Gtk's TextView, and used by gedit and nemiver,
among others.

gtksourceview 2.6.0 has recently been released.

PyGtksourceview requires:
=========================

 o Gtksourceview >= 2.3.0
 o PyGObject >= 2.15.2
 o PyGTK >= 2.8.0

Bug reports should go to:
http://bugzilla.gnome.org/browse.cgi?product=pygtksourceview

cheers
-- 
Gian Mario Tagliaretti
GNOME Foundation member
gia...@gnome.org
--
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


circuits 1.1 - a Lightweight, Event driven Framework with a strong Component Architecture.

2009-03-22 Thread James Mills
Hi,

I'm pleased to announce the 1.1 release
of circuits: http://trac.softcircuit.com.au/circuits/

== About ==

circuits is a Lightweight, Event driven Framework
with a strong Component Architecture.

== Quick Examples ==

=== Hello World! ===
{{{
#!python
>>> from circuits import Event, Component
>>> class MyApp(Component):
...     def hello(self):
...         print "Hello World!"
>>> app = MyApp()
>>> app.start()
>>> app.push(Event(), "hello")
Hello World!
}}}

=== Hello World! (Web) ===
{{{
#!python
from circuits.web import Server, Controller

class Root(Controller):

    def index(self):
        return "Hello World!"

(Server(8000) + Root()).run()
}}}

== Enhancements ==

Aside from bug fixes, circuits 1.1 includes
the following enhancements:
  * New drivers package containing drivers for pygame and inotify
  * New and improved web package (circuits.web) providing an HTTP
1.0/1.1 and WSGI compliant web server.
  * New developer tools
  * python-2.5 compatibility fixes
  * Runnable Components
  * Improved Debugger

Plus heaps more...

== Links ==
 Home Page:: http://trac.softcircuit.com.au/circuits/
 Mailing list:: http://groups.google.com.au/group/circuits-users/
 Download:: http://trac.softcircuit.com.au/circuits/downloads/
 Library Reference::
http://trac.softcircuit.com.au/circuits/export/tip/docs/html/index.html

cheers

James

--
-- Problems are solved by method


Re: pyconfig on 64-bit machines with distutils vs 32-bit legacy code

2009-03-22 Thread Martin v. Löwis
 /data/home/nwagner/local/lib/python2.5/pyport.h:734:2: #error
 LONG_BIT definition appears wrong for platform (bad gcc/glibc
 config?).
 
 
 Can anyone offer any advice as to what I might be missing or
 misunderstanding? 

You need to understand where the error comes from:
1. what is the *actual* value of SIZEOF_LONG (it should be 4)?
2. what is the actual value of LONG_BIT, and where does it come
   from? (it should be 32)

To understand that better, I recommend editing the gcc command line
in the following way (assuming you use gcc 4.x):
1. replace -c with -E -dD
2. remove the -o file option

This will generate preprocessor output to stdout, which you then
need to search for SIZEOF_LONG and LONG_BIT. Searching back for the
"# <number>" preprocessor line markers will tell you where each
definition was made.

If that doesn't make it clear what the problem is, post your
findings here.

Regards,
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Re: Async serial communication/threads sharing data

2009-03-22 Thread Hendrik van Rooyen
Nick Timkovich prom@gmail.com wrote:


 I've been working on a program that will talk to an embedded device
 over the serial port, using some basic binary communications with
 messages 4-10 bytes long or so.  Most of the nuts and bolts problems
 I've been able to solve, and have learned a little about the threading
 library to avoid blocking all action while waiting for responses
 (which can take 50 ms to 10 s).  Ultimately, this program will test
 the device on the COM port by sending it messages and collecting
 responses for 10k-100k cycles; a cycle being:
  1. tell it to switch a relay,
 2. get its response from the event,
  3. ask it for some measurements,
  4. get measurements,
  5. repeat.
 Later I would like to develop a GUI as well, but not a big issue now
 (another reason to use threads? not sure).
 
 The overall structure of what I should do is not very apparent to me
 on how to efficiently deal with the communications.  Have the main
 loop handle the overall timing of how often to run the test cycle,
 then have a thread deal with all the communications, and within that
 another thread that just receives data?  My main issue is with how to
 exchange data between different threads; can I just do something like
 have a global list of messages, appending, modifying, and removing as
 needed?  Does the threading.Lock object just prevent every other
 thread from running, or is it bound somehow to a specific object (like
 my list)?

What I have been doing for similar stuff is to modularize using processes
and pipes instead of queues.
This allows you to have the communications in one process, in which
you hide all the protocol issues, and allows you to deal with the resulting
data at a slightly higher level of abstraction.
Such a structure also makes it easier to have different user interfaces -
command line or GUI, it does not matter as the commands and responses
to the comms module can be standardised via an input and output
pipe to and from the comms module.
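A minimal sketch of that structure with the stdlib multiprocessing module (the `comms_worker` name and the fake "ack:" protocol below are made-up stand-ins for the real serial logic, not anything from the thread):

```python
from multiprocessing import Process, Pipe

def comms_worker(conn):
    # Comms process: all protocol details live here.  For this sketch we
    # just fake a device response for every command we receive.
    while True:
        cmd = conn.recv()
        if cmd == "quit":
            break
        conn.send("ack:" + cmd)  # stand-in for a real serial transaction
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    worker = Process(target=comms_worker, args=(child_end,))
    worker.start()
    parent_end.send("switch_relay")
    print(parent_end.recv())
    parent_end.send("quit")
    worker.join()
```

The front end (command line or GUI) only ever touches `parent_end`, so the protocol stays confined to the worker process.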

HTH
- Hendrik



Re: [regex] How to check for non-space character?

2009-03-22 Thread Gilles Ganault
On Sat, 21 Mar 2009 08:53:10 -0500, Tim Chase
python.l...@tim.thechases.com wrote:
It looks like it's these periods that are throwing you off.  Just 
remove them.  For a 3rd syntax:

(\S)(\d{5})

the \S (capital, instead of \s) is any NON-white-space character

Thanks guys for the tips.


Re: default shelve on linux corrupts, does different DB system help?

2009-03-22 Thread Paul Sijben
Thanks very much for a clear and concise explanation of the problem and
the solution!

I am implementing it now in my system. Luckily we caught this one during
testing so no important data has been lost.

Unfortunately windows does not seem to support gdbm. But in our case,
everything that is on the windows client is also available on the linux
server, so we can recreate the DB at the expense of some bandwidth in
case of failures.

Paul

s...@pobox.com wrote:
 Paul I have the problem that my shelve(s) sometimes corrupt (looks like
 Paul it has after python has run out of threads).

 Paul I am using the default shelve so on linux I get the dbhash
 Paul version.  Is there a different DB type I can choose that is known
 Paul to be more resilient? And if so, what is the elegant way of doing
 Paul that?

 You don't say what version of Python you're using or what version of the
 Berkeley DB library underpins your installation, but I am going to guess it
 is 1.85.  This has been known to have serious bugs for over a decade.  (Just
 in the hash file implementation.  The btree and recnum formats are ok.
 Unfortunately, the hash file implementation is what everybody has always
 gravitated to.  Sort of like moths to a flame...)

 If that's the case, simply pick some other dbm file format for your shelves,
 e.g.:

  >>> import gdbm
  >>> import shelve
  >>> f = gdbm.open("/tmp/trash.db", "c")
  >>> f.close()
  >>> db = shelve.open("/tmp/trash.db")
  >>> db["mike"] = "sharon"
  >>> db["4"] = 5
  >>> db.keys()
  ['4', 'mike']
  >>> db.close()
  >>> f = gdbm.open("/tmp/trash.db", "c")
  >>> f.keys()
  ['4', 'mike']
  >>> f['4']
  'I5\n.'
  >>> f['mike']
  "S'sharon'\np1\n."

 As for uncorrupting your existing database, see if your Linux distribution
 has a db_recover program.  If it does, you might be able to retrieve your
 data, though in the case of BerkDB 1.85's hash file I'm skeptical that can
 be done.  I hope you weren't storing something valuable in it like your bank
 account passwords.

   


Re: [ANN] lxml 2.2 released

2009-03-22 Thread Francesco Guerrieri
On Sat, Mar 21, 2009 at 5:22 PM, Stefan Behnel stefan...@behnel.de wrote:

 Hi all,

 I'm proud to announce the release of lxml 2.2 final.

 http://codespeak.net/lxml/
 http://pypi.python.org/pypi/lxml/2.2

 Changelog:
 http://codespeak.net/lxml/changes-2.2.html


Great news! I have relied on lxml in many occasions and it has always been
excellent :-)

Francesco


Re: simplejson: alternate encoder not called enough

2009-03-22 Thread Chris Rebert
On Sat, Mar 21, 2009 at 2:11 PM, Pierre Hanser han...@club-internet.fr wrote:
 hello

 I'm trying to use simplejson to encode some
 python objects using simplejson dumps method.

 The dumps method accept a cls parameter to specify
 an alternate encoder. But it seems that this alternate
 encoder is called only as a last resort, if object type
 is not int, string, and all other basic types.

 My problem is that i wanted to specify an encoding
 for a dbus.Boolean object which inherits from int.

 As int is handled by the standard encoder, the alternate
 encoder is never called and a dbus.Boolean is converted
 to 0 or 1 instead of true or false.

 Could someone knowledgeable enough confirm my diagnosis?

Indeed, this does appear to be the case:

import json

class Foo(int):
    def __init__(self, val):
        int.__init__(self, val)
        self.val = val

class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, Foo):
            return ["foo", obj.val]
        return json.JSONEncoder.default(self, obj)

print "Expect:", json.dumps(["foo", 42])
print "Got:   ", json.dumps(Foo(42), cls=MyEncoder)

#output:
# Expect: ["foo", 42]
# Got:    42

The docs are not entirely clear as to whether this should be expected or not.
I'd file a bug. If this behavior is deemed correct, then at the least
the docs should be clarified to indicate that the default encodings
apply to certain builtin types *and their subclasses*.
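One possible workaround, assuming the diagnosis above holds, is to convert the offending int subclasses in a pre-pass before handing the structure to json.dumps (here `Flag` and `convert` are hypothetical stand-ins, not part of dbus or simplejson):

```python
import json

class Flag(int):
    """Hypothetical stand-in for an int subclass like dbus.Boolean."""

def convert(obj):
    # JSONEncoder.default() is never consulted for int subclasses, so we
    # replace them with plain bools before encoding.
    if isinstance(obj, Flag):
        return bool(obj)
    if isinstance(obj, list):
        return [convert(item) for item in obj]
    if isinstance(obj, dict):
        return {key: convert(value) for key, value in obj.items()}
    return obj

print(json.dumps(convert([Flag(1), Flag(0)])))  # -> [true, false]
```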

Cheers,
Chris

-- 
I have a blog:
http://blog.rebertia.com


Re: Async serial communication/threads sharing data

2009-03-22 Thread Nick Timkovich
On Mar 21, 9:19 pm, Jean-Paul Calderone exar...@divmod.com wrote:
 On Sat, 21 Mar 2009 13:52:21 -0700 (PDT), Nick Timkovich 
 prometheus...@gmail.com wrote:
 I've been working on a program that will talk to an embedded device
 over the serial port, using some basic binary communications with
 messages 4-10 bytes long or so.  Most of the nuts and bolts problems
 I've been able to solve, and have learned a little about the threading
 library to avoid blocking all action while waiting for responses
 (which can take 50 ms to 10 s).  Ultimately, this program will test
 the device on the COM port by sending it messages and collecting
 responses for 10k-100k cycles; a cycle being:
  1. tell it to switch a relay,
 2. get its response from the event,
  3. ask it for some measurements,
  4. get measurements,
  5. repeat.
 Later I would like to develop a GUI as well, but not a big issue now
 (another reason to use threads? not sure).

 Twisted includes serial port support and will let you integrate with a
 GUI toolkit.  Since Twisted encourages you to write programs which deal
 with things asynchronously in a single thread, if you use it, your
 concerns about data exchange, locking, and timing should be addressed
 as a simple consequence of your overall program structure.

 Jean-Paul

I've looked at Twisted a little bit because of some searches on serial
port comm turning up advice for it.  However, there seems to be no/
minimal documentation for the serial portions, like they are some old
relic that nobody uses from this seemingly massive package.  Do you
have any examples or somewhere in particular you could point me?

Thanks for the responses,
Nick


Re: Safe to call Py_Initialize() frequently?

2009-03-22 Thread Graham Dumpleton
On Mar 21, 2:35 pm, roschler robert.osch...@gmail.com wrote:
 On Mar 20, 7:27 pm, Mark Hammond skippy.hamm...@gmail.com wrote:

  On 21/03/2009 4:20 AM, roschler wrote:

  Calling Py_Initialize() multiple times has no effect.  Calling
  Py_Initialize and Py_Finalize multiple times does leak (Python 3 has
  mechanisms so this need not always be true in the future, but it is
  true now for non-trivial apps).

   If it is not a safe approach, is there another way to get what I want?

  Start a new process each time?

  Cheers,

  Mark

 Hello Mark,

 Thank you for your reply.  I didn't know that Py_Initialize worked
 like that.

 How about using Py_NewInterpreter() and Py_EndInterpreter() with each
 job?  Any value in that approach?  If not, is there at least a
 reliable way to get a list of all active threads and terminate them so
 before starting the next job?  Starting a new process each time seems
 a bit heavy handed.

Using Py_EndInterpreter() is even more fraught with danger. The first
problem is that some third party C extension modules will not work in
sub interpreters because they use simplified GIL state API. The second
problem is that third party C extensions often don't cope well with
the idea that an interpreter may be destroyed that it was initialised
in, with the module then being subsequently used again in a new sub
interpreter instance.

Given that it is one operation per second, creating a new process, be
it a completely fresh one or one forked from existing Python process,
would be simpler.

Graham



Re: __init__ vs. __del__

2009-03-22 Thread David Stanek
2009/3/21 Randy Turner rtms...@yahoo.com:
 There are a number of use-cases for object cleanup that are not covered by
 a generic garbage collector...

 For instance, if an object is caching data that needs to be flushed to
 some persistent resource, then the GC has
 no idea about this.

 It seems to be that for complex objects, clients of these objects will need
 to explictly call the objects cleanup routine
 in some type of finally clause, that is if the main thread of execution
 is some loop that can terminate either expectedly or
 unexpectedly

 Relying on a generic GC is only for simple object cleanup...at least based
 on what I've read so far.

 Someone mentioned a context manager earlier...I may see what this is about
 as well, since I'm new to the language.


If you add a .close method to your class you can use
contextlib.closing[0]. I have used this to clean up distributed locks
and other non-collectable resources.

0. http://docs.python.org/library/contextlib.html#contextlib.closing
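A minimal sketch of that pattern, with a hypothetical `Resource` class standing in for whatever holds the non-collectable resource:

```python
from contextlib import closing

class Resource(object):
    """Hypothetical object holding something the GC can't reclaim."""
    def __init__(self):
        self.closed = False
    def close(self):
        # flush caches, release distributed locks, etc.
        self.closed = True

with closing(Resource()) as res:
    pass  # work with res here
print(res.closed)  # close() ran even though Resource is no context manager
```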

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


Re: script files with python (instead of tcsh/bash)?

2009-03-22 Thread Nick Craig-Wood
Esmail ebo...@hotmail.com wrote:
  I am wondering if anyone is using python to write script files?

Yes!

  Right now I have a bigg'ish bash/tcsh script that contain some grep/awk
  command plus various files are processed and created, renamed and
  moved to specific directories. I also write out some gnuplot scripts
  that later get executed to generate .jpg images.

Almost any script that contains a loop I convert into python.

  In any case, the scripts are starting to look pretty hairy and I was
  wondering if it would make sense to re-write them in Python. I am not
  sure how suitable it would be for this.

With python you get the advantages of a language with a very clear
philosophy which is exceptionally easy to read and write.

Add a few classes and unit tests to your scripts and they will be
better than you can ever achieve with bash (IMHO).

  I've looked around the web w/o much luck for some examples but come
  short. Any comments/suggestions?

Here is a short script I wrote to recover my slrn spool when my disk
fills up.  I could have written it in bash but I think it is much
clearer as python, and doesn't shell out to any external commands.

It searches through the spool directory looking for empty .minmax
files.  It then finds the lowest and highest numeric file name (all
the files are numeric) and writes the minmax file with that.

#!/usr/bin/python

"""rebuild all the .minmax files for slrnpull after the disk gets full"""

import os

spool = "/var/spool/slrnpull"

def main():
    for dirpath, dirnames, filenames in os.walk(spool):
        if ".minmax" not in filenames:
            continue
        minmax_path = os.path.join(dirpath, ".minmax")
        if os.path.getsize(minmax_path) != 0:
            print "Skipping non empty %r" % minmax_path
            continue
        print dirpath
        digits = [ int(f) for f in filenames if f.isdigit() ]
        if not digits:
            digits = [0]
        digits.sort()
        start = digits[0]
        end = digits[-1]
        f = open(minmax_path, "w")
        f.write("%s %s" % (start, end))
        f.close()
    print "done"

if __name__ == "__main__": main()


-- 
Nick Craig-Wood n...@craig-wood.com -- http://www.craig-wood.com/nick


Generator

2009-03-22 Thread mattia
Can you explain this behaviour to me:

>>> s = [1, 2, 3, 4, 5]
>>> g = (x for x in s)
>>> next(g)
1
>>> s
[1, 2, 3, 4, 5]
>>> del s[0]
>>> s
[2, 3, 4, 5]
>>> next(g)
3

Why doesn't next(g) give me 2?


Re: Async serial communication/threads sharing data

2009-03-22 Thread Jean-Paul Calderone

On Sun, 22 Mar 2009 03:13:36 -0700 (PDT), Nick Timkovich 
prometheus...@gmail.com wrote:

On Mar 21, 9:19 pm, Jean-Paul Calderone exar...@divmod.com wrote:

On Sat, 21 Mar 2009 13:52:21 -0700 (PDT), Nick Timkovich 
prometheus...@gmail.com wrote:
I've been working on a program that will talk to an embedded device
over the serial port, using some basic binary communications with
messages 4-10 bytes long or so.  Most of the nuts and bolts problems
I've been able to solve, and have learned a little about the threading
library to avoid blocking all action while waiting for responses
(which can take 50 ms to 10 s).  Ultimately, this program will test
the device on the COM port by sending it messages and collecting
responses for 10k-100k cycles; a cycle being:
 1. tell it to switch a relay,
 2. get it's response from the event,
 3. ask it for some measurements,
 4. get measurements,
 5. repeat.
Later I would like to develop a GUI as well, but not a big issue now
(another reason to use threads? not sure).

Twisted includes serial port support and will let you integrate with a
GUI toolkit.  Since Twisted encourages you to write programs which deal
with things asynchronously in a single thread, if you use it, your
concerns about data exchange, locking, and timing should be addressed
as a simple consequence of your overall program structure.

Jean-Paul


I've looked at Twisted a little bit because of some searches on serial
port comm turning up advice for it.  However, there seems to be no/
minimal documentation for the serial portions, like they are some old
relic that nobody uses from this seemingly massive package.  Do you
have any examples or somewhere in particular you could point me?


It's true that the serial port support in Twisted isn't the most used
feature. :)  These days, serial ports are on the way out, I think.  That
said, much of the way a serial port is used in Twisted is the same as the
way a TCP connection is used.  This means that the Twisted documentation
for TCP connections is largely applicable to using serial ports.  The only
major difference is how you set up the connection.  You can see examples
of using the serial port support here (one of them seems to suggest that
it won't work on Windows, but I think this is a mistake):

 http://twistedmatrix.com/projects/core/documentation/examples/gpsfix.py
 http://twistedmatrix.com/projects/core/documentation/examples/mouse.py

The 2nd to last line of each example shows how to connect to the serial
port.  These basic documents describe how to implement a protocol.  Though
in the context of TCP, the protocol implementation ideas apply to working
with serial ports as well:

 http://twistedmatrix.com/projects/core/documentation/howto/servers.html
 http://twistedmatrix.com/projects/core/documentation/howto/clients.html

You can ignore the parts about factories, since they're not used with serial
ports.

Hope this helps,

Jean-Paul


Re: script files with python (instead of tcsh/bash)?

2009-03-22 Thread Esmail

Nick Craig-Wood wrote:

Esmail ebo...@hotmail.com wrote:

 I am wondering if anyone is using python to write script files?


Yes!


..


Almost any script that contains a loop I convert into python.


 In any case, the scripts are starting to look pretty hairy and I was
 wondering if it would make sense to re-write them in Python. I am not
 sure how suitable it would be for this.


With python you get the advantages of a language with a very clear
philosophy which is exceptionally easy to read and write.

Add a few classes and unit tests to your scripts and they will be
better than you can ever achieve with bash (IMHO).



Hi Nick,

thanks for including the script, that really helps. Nice way of
finding files.

Two quick questions:

As a replacement for grep I would use the re module and its methods?

What about awk which I regularly use to extract fields based on position
but not column number, what should I be using in Python to do the same?

The other things I need to do consist of moving files, manipulating file
names and piping outputs of one command to the next, so I'm digging into
the documentation as much as I can.

So much to learn, so little time (but so much fun!)

Esmail



Re: Generator

2009-03-22 Thread MRAB

mattia wrote:

Can you explain this behaviour to me:


>>> s = [1, 2, 3, 4, 5]
>>> g = (x for x in s)
>>> next(g)
1
>>> s
[1, 2, 3, 4, 5]
>>> del s[0]
>>> s
[2, 3, 4, 5]
>>> next(g)
3

Why doesn't next(g) give me 2?


First it yields s[0] (which is 1), then you delete s[0], then it yields
s[1] (which is now 3). It doesn't yield 2 because 2 is now in s[0], and
the generator has already yielded s[0].

In general you shouldn't modify what you're iterating over because the
behaviour depends on how it happens to be implemented.
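If the list really must be mutated while the generator is alive, one workaround is to snapshot the list when the generator is created:

```python
s = [1, 2, 3, 4, 5]
g = (x for x in list(s))  # iterate over a copy, not the live list

print(next(g))  # -> 1
del s[0]        # mutate the original list
print(next(g))  # -> 2, unaffected by the deletion
```

Note that the outermost iterable of a generator expression is evaluated immediately, so list(s) is copied at creation time, not at first next().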


Re: script files with python (instead of tcsh/bash)?

2009-03-22 Thread MRAB

Esmail wrote:

Nick Craig-Wood wrote:

Esmail ebo...@hotmail.com wrote:

 I am wondering if anyone is using python to write script files?


Yes!


..


Almost any script that contains a loop I convert into python.


 In any case, the scripts are starting to look pretty hairy and I was
 wondering if it would make sense to re-write them in Python. I am not
 sure how suitable it would be for this.


With python you get the advantages of a language with a very clear
philosophy which is exceptionally easy to read and write.

Add a few classes and unit tests to your scripts and they will be
better than you can ever achieve with bash (IMHO).



Hi Nick,

thanks for including the script, that really helps. Nice way of
finding files.

Two quick questions:

As a replacement for grep I would use the re module and its methods?

What about awk which I regularly use to extract fields based on position
but not column number, what should I be using in Python to do the same?


Just use string slicing.
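For example, to pull fixed-position fields the way awk's substr() would (the sample line is made up):

```python
line = "2009-03-22 10:58:03 INFO  disk almost full"

date = line[0:10]         # characters 1-10, like awk's substr($0, 1, 10)
time_of_day = line[11:19] # characters 12-19

print(date)          # -> 2009-03-22
print(time_of_day)   # -> 10:58:03
```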


The other things I need to do consist of moving files, manipulating file
names and piping outputs of one command to the next, so I'm digging into
the documentation as much as I can.


The 'os' and 'shutil' modules.
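A short sketch of moving and renaming files with those modules, done in a throwaway temp directory so it is safe to run anywhere:

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "data.txt")
with open(src, "w") as f:
    f.write("hello\n")

# move + rename in one step, creating the target directory first
dst = os.path.join(workdir, "archive", "data-old.txt")
os.makedirs(os.path.dirname(dst))
shutil.move(src, dst)
print(os.path.exists(dst))  # -> True

shutil.rmtree(workdir)  # clean up the whole tree
```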


So much to learn, so little time (but so much fun!)





Re: default shelve on linux corrupts, does different DB system help?

2009-03-22 Thread skip

Paul Unfortunately windows does not seem to support gdbm.

That is a known issue, but one that can be solved I think by getting rid of
the old 1.85 version of BerkDB and using something more modern.  I believe
the current bsddb module in recent Python versions supports BerkDB 3.x and
4.x.  Sleepycat was bought by Oracle a while ago.  I believe you can download
4.7 from here:

http://www.oracle.com/technology/software/products/berkeley-db/index.html

They have a Windows installer as well as links to older versions.

-- 
Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/


Using python 3 for scripting?

2009-03-22 Thread Timo Myyrä

Hi,

I'll have to do some scripting in the near future and I was 
thinking on using the Python for it. I would like to know which 
version of Python to use? Is the Python 3 ready for use or should 
I stick with older releases?


Timo


Re: garbage collection / cyclic references

2009-03-22 Thread Aaron Brady
On Mar 21, 11:59 am, andrew cooke and...@acooke.org wrote:
 Aaron Brady wrote:
  My point is, that garbage collection is able to detect when there are
  no program-reachable references to an object.  Why not notify the
  programmer (the programmer's objects) when that happens?  If the
  object does still have other unreachable references, s/he should be
  informed of that too.

 i think we're mixing python-specific and more general / java details, but,
 as far as i understand things, state of the art (and particularly
 generational) garbage collectors don't guarantee that objects will ever be
 reclaimed.  this is a trade for efficiency, and it's a trade that seems to
 be worthwhile and popular.

It's at best worthless, but so is politics.  I take it back; you can
reclaim memory in large numbers with a probabilistic finalizer.  The
expected value of reclaiming a KB with a 90% chance of call is .9 KB.

The allocation structure I am writing will have a very long up-time.
I can forcibly reclaim the memory of an object involved in a cycle,
but lingering references it has will never be detected.  Though, if I
can't guarantee 100% reclamation, I'll have to be anticipating a
buffer dump eventually anyway, which makes, does it not, 90% about the
same as 99%.

 furthermore, you're mixing responsibilities for two logically separate
 ideas just because a particular implementation happens to associate them,
 which is not a good idea from a design pov.

I think a silent omission of finalization is the only alternative.  If
so they're mixed, one way or the other.  I argue it is closer to
associating your class with a hash table: they are logically separate
ideas.  Perhaps implementation is logically separate from design
altogether.

 i can remember, way back in the mists of time

I understand they were having a fog problem there yesterday... not to
mention a sale on sand.  Today: Scattered showers and thunderstorms
before 1pm, then a slight chance of showers.

 using java finalizers for
 doing this kind of thing.  and then learning that it was a bad idea.  once
 i got over the initial frustration, it really hasn't been a problem.  i
 haven't met a situation

I don't suppose I imagine one.  So, you could argue that it's a low
priority.  Washing your hands of the rare, though, disqualifies you
from the associate's in philosophy.  I bet you want to meet my
customers, too.

 where i needed to tie resource management and
 memory management together (except for interfacing with c code that does
 not use the host language's gc - and i can imagine that for python this is
 a very strong (perhaps *the*) argument for reference counting).

I'm using a specialized mapping type to implement the back end of user-
defined classes.  Since I know the implementation, which in particular
maps strings to objects, I can always just break cycles by hand; that
is, until someone wants a C extension.  Then they will want tp_clear
and tp_traverse methods.

 as an bonus example, consider object caching - a very common technique
 that immediately breaks anything that associates other resources with
 memory use.

I assume your other processes are notified of the cache state.  From
what I understand, Windows supports /named/ caching.  Collaborators
can check the named cache, in the process creating it if it doesn't
exist, and read and write at will there.

 just because, in one limited case, you can do something, doesn't mean it's
 a good idea.

 andrew

But you're right.  I haven't talked this over much on the outside, so
I might be missing something huge, and serialized two-step
finalization (tm) is the secret least of my worries.


Re: Tkinter book on current versions

2009-03-22 Thread Mike Driscoll
Hi,

On Fri, Mar 20, 2009 at 10:14 PM, Paul Watson
paul.hermeneu...@gmail.com wrote:
 Has anyone tried the Grayson book, Python and Tkinter Programming,
 with a recent version of Python?

 The first example code (calculator) generates a single row of buttons.
 Perhaps I have not applied the errata correctly.  Has anyone been
 successful?

 I am using:

 Python 2.5.2 (r252:60911, Dec  1 2008, 17:47:46)

 --

I used that book a couple of years ago. I don't recall having any
major issues with it, but I ended up going with wxPython since I
needed some specialized widgets that I couldn't find in Tkinter at the
time.

Mike


How Get Name Of Working File

2009-03-22 Thread Victor Subervi
Hi;
If I am writing a script that generates HTML, how do I grab the name of the
actual file in which I am working? For example, let us say I am working in
test.py. I can have the following code:

import os
dir = os.getcwd()

and that will give me the working dir. But what about test.py?
TIA,
Victor


Re: How Get Name Of Working File

2009-03-22 Thread Christian Heimes
Victor Subervi schrieb:
 Hi;
 If I am writing a script that generates HTML, how do I grab the name of the
 actual file in which I am working? For example, let us say I am working in
 test.py. I can have the following code:
 
 import os
 dir = os.getcwd()
 
 and that will give me the working dir. But what about test.py?

The module variable __file__ contains the file name of the current
Python module.

Christian



Script for a project inside own directory

2009-03-22 Thread Filip Gruszczyński
I am having a project built like this:

project/
    module1.py
    module2.py
    packages1/
        module3.py

etc.

I have script that uses objects from those modules/packages. If I keep
this script inside project directory it's ok and it works. But I would
like to move it to own scripts directory and from there it doesn't
work. I think I understand why it doesn't work (the root of local
packages and modules is there and it can't see what it above it), but
I would like to ask if there is any workaround? I would like to keep
all my scripts in separate dir instead of main dir. I like to keep it
clean.

-- 
Filip Gruszczyński


Re: Script for a project inside own directory

2009-03-22 Thread Maxim Khitrov
2009/3/22 Filip Gruszczyński grusz...@gmail.com:
 I am having a project built like this:

 project/
     module1.py
     module2.py
     packages1/
         module3.py

 etc.

 I have script that uses objects from those modules/packages. If I keep
 this script inside project directory it's ok and it works. But I would
 like to move it to own scripts directory and from there it doesn't
 work. I think I understand why it doesn't work (the root of local
 packages and modules is there and it can't see what it above it), but
 I would like to ask if there is any workaround? I would like to keep
 all my scripts in separate dir instead of main dir. I like to keep it
 clean.

import sys
sys.path.append('path to project')

If project directory is one level up, you can do something like this:

import os
import sys
sys.path.append(os.path.realpath('..'))

- Max


Re: How Get Name Of Working File

2009-03-22 Thread Maxim Khitrov
On Sun, Mar 22, 2009 at 10:58 AM, Christian Heimes li...@cheimes.de wrote:
 Victor Subervi schrieb:
 Hi;
 If I am writing a script that generates HTML, how do I grab the name of the
 actual file in which I am working? For example, let us say I am working in
 test.py. I can have the following code:

 import os
 dir = os.getcwd()

 and that will give me the working dir. But what about test.py?

 The module variable __file__ contains the file name of the current
 Python module.

Keep in mind that __file__ may be set to test.pyc or test.pyo. If you
always want the .py extension, do this:

from os.path import splitext
file = splitext(__file__)[0] + '.py'

- Max


3.0 - bsddb removed

2009-03-22 Thread Sean
Anyone got any thoughts about what to use as a replacement.  I need 
something (like bsddb) which uses dictionary syntax to read and write an 
underlying (fast!) btree or similar.


Thanks.

Sean


Re: Using python 3 for scripting?

2009-03-22 Thread mahesh

python 2.5 is preferred;

On Mar 22, 7:22 pm, timo.my...@gmail.com (Timo Myyrä) wrote:
 Hi,

 I'll have to do some scripting in the near future and I was
 thinking on using the Python for it. I would like to know which
 version of Python to use? Is the Python 3 ready for use or should
 I stick with older releases?

 Timo



Re: 3.0 - bsddb removed

2009-03-22 Thread Albert Hopkins
On Sun, 2009-03-22 at 15:55 +, Sean wrote:
 Anyone got any thoughts about what to use as a replacement.  I need 
 something (like bsddb) which uses dictionary syntax to read and write an 
 underlying (fast!) btree or similar.
 
gdbm



Script for a project inside own directory

2009-03-22 Thread R. David Murray
Filip Gruszczyński grusz...@gmail.com wrote:
 I am having a project built like this:
 
 project
module1.py
module2.py
packages1/
  module3.py
 
 etc.
 
 I have script that uses objects from those modules/packages. If I keep
 this script inside project directory it's ok and it works. But I would
 like to move it to own scripts directory and from there it doesn't
 work. I think I understand why it doesn't work (the root of local
 packages and modules is there and it can't see what it above it), but
 I would like to ask if there is any workaround? I would like to keep
 all my scripts in separate dir instead of main dir. I like to keep it
 clean.

You need to put your project directory on the PYTHONPATH one way
or another.  I would suggest reading up on how the module search
path works in python (PYTHONPATH, sys.path) so that you can decide
which of the many possible ways to make this work will serve you
best.
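One common way to do that from inside the script itself (a sketch; it assumes the scripts directory sits one level below the project directory, and falls back to the current directory when __file__ is unset):

```python
import os
import sys

# locate this script, then step up to the (assumed) project root
script_path = globals().get('__file__', '.')        # fallback for interactive use
script_dir = os.path.dirname(os.path.abspath(script_path))
project_root = os.path.dirname(script_dir)

# make the project importable without setting PYTHONPATH externally
if project_root not in sys.path:
    sys.path.insert(0, project_root)
```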

--
R. David Murray   http://www.bitdance.com



Re: 3.0 - bsddb removed

2009-03-22 Thread Benjamin Peterson
Sean seandc at att.net writes:

 
 Anyone got any thoughts about what to use as a replacement.  I need 
 something (like bsddb) which uses dictionary syntax to read and write an 
 underlying (fast!) btree or similar.

pybsddb is just not included in the core. It's still distributed separately.
http://www.jcea.es/programacion/pybsddb.htm





Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Gabriel Genellina

En Sat, 21 Mar 2009 23:58:07 -0300, lamber...@corning.com escribió:


'''
A python 3 question.
Presume this code is in file p.py.
The program fails.

$ python3 p.py
...
ValueError: I/O operation on closed file.

Removing the comment character to increase the stream
reference count fixes the program, at the expense of
an extra TextIOWrapper object.

Please, what is a better way to write the class with
regard to this issue?
'''

import re
import io

class file(io.TextIOWrapper):

'''
Enhance TextIO.  Streams have many sources,
a file name is insufficient.
'''

def __init__(self,stream):
#self.stream = stream
super().__init__(stream.buffer)


print(file(open('p.py')).read())


You're taking a shortcut (the open() builtin) that isn't valid here.

open() creates a raw FileIO object, then a BufferedReader, and finally  
returns a TextIOWrapper. Each of those has a reference to the previous  
object, and delegates many calls to it. In particular, close() propagates  
down to FileIO to close the OS file descriptor.


In your example, you call open() to create a TextIOWrapper object that is  
discarded as soon as the open() call finishes - because you only hold a  
reference to the intermediate buffer. The destructor calls close(), and  
the underlying OS file descriptor is closed.


So, if you're not interested in the TextIOWrapper object, don't create it  
in the first place. That means, don't use the open() shortcut and build  
the required pieces yourself.
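Assembling the pieces by hand looks roughly like this (a sketch; it writes a throwaway file first so the example is self-contained):

```python
import io
import os
import tempfile

# create a small file to read back
fd, path = tempfile.mkstemp(suffix='.py')
os.write(fd, b"print('hello')\n")
os.close(fd)

# the three layers that open() would otherwise build and hide from you
raw = io.FileIO(path, 'r')          # OS-level file descriptor
buffered = io.BufferedReader(raw)   # buffering layer
text = io.TextIOWrapper(buffered)   # text decoding layer

content = text.read()
text.close()   # close() propagates down through all three layers
os.remove(path)
```

Because you hold references to every layer, nothing is garbage-collected behind your back.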


---

There is another alternative that relies on undocumented behaviour: use  
open to create a *binary* file and wrap the resulting BufferedReader  
object in your own TextIOWrapper.


import io

class file(io.TextIOWrapper):
 def __init__(self, buffer):
 super().__init__(buffer)

print(file(open('p.py','rb')).read())

--
Gabriel Genellina



Re: Lambda forms and scoping

2009-03-22 Thread Gabriel Genellina

En Fri, 20 Mar 2009 23:16:00 -0300, alex goretoy
aleksandr.gore...@gmail.com escribió:


i looks at lambdas as unbound functions(or super function), in the case
above we create the functions in a list places it in memory unboud, once
binding a call to the memory address space it returns the value

it is basically same as doing this:
def f():
print f

a=f #unbound function, same as rename function
a() #bind call to address space


Mmm, I don't quite understand what you said. lambda creates functions that
are no different from functions created by def: apart from the name,
they're really the same thing.

And if you imply that *where* you call a function does matter, it does
not. A function carries its own local namespace, its own closure, and its
global namespace. At call time, no additional binding is done (except
parameters - arguments).

(and the address space is always the one of the running process)
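The equivalence is easy to check (a minimal sketch):

```python
# a lambda and a def producing the same function
f1 = lambda x: x * 2

def f2(x):
    return x * 2

# both are plain function objects; apart from __name__ they behave identically
same_type = type(f1) is type(f2)
same_result = f1(21) == f2(21)
names = (f1.__name__, f2.__name__)
```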

--
Gabriel Genellina



Re: Script for a project inside own directory

2009-03-22 Thread Benjamin Peterson
Filip Gruszczyński gruszczy at gmail.com writes:
 I would like to ask if there is any workaround?

Use the runpy module.






Using python 3 for scripting?

2009-03-22 Thread R. David Murray
timo.my...@gmail.com (Timo Myyrä) wrote:
 Hi,
 
 I'll have to do some scripting in the near future and I was 
 thinking on using the Python for it. I would like to know which 
 version of Python to use? Is the Python 3 ready for use or should 
 I stick with older releases?

If you are using it for scripting that doesn't need anything not included
in the standard library (which is true of a lot of scripting tasks), then
I would say Python 3 is very ready.  You'll have some I/O performance
issues if you do lots of I/O with 3.0, but 3.1 is almost out the door and
that fixes the I/O performance issue.

--
R. David Murray   http://www.bitdance.com



Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Benjamin Peterson
 lambertdw at corning.com writes:

 Please, what is a better way to write the class with
 regard to this issue?

Set the original TextIOWrapper's buffer to None. 






Re: Using python 3 for scripting?

2009-03-22 Thread Chris Rebert
2009/3/22 Timo Myyrä timo.my...@gmail.com:
 Hi,

 I'll have to do some scripting in the near future and I was thinking on using 
 the Python for it. I would like to know which version of Python to use? Is 
 the Python 3 ready for use or should I stick with older releases?

2.6.1, the latest non-3.x release, is probably best. Most libraries
haven't been ported to 3.x yet, so Python 3 has yet to become
widespread.

Cheers,
Chris

-- 
I have a blog:
http://blog.rebertia.com


Re: script files with python (instead of tcsh/bash)?

2009-03-22 Thread Gabriel Genellina
En Sun, 22 Mar 2009 11:05:22 -0300, MRAB goo...@mrabarnett.plus.com  
escribió:

Esmail wrote:

Nick Craig-Wood wrote:

Esmail ebo...@hotmail.com wrote:

 I am wondering if anyone is using python to write script files?



 Two quick questions:
 As a replacement for grep I would use the re module and its methods?


Perhaps; but strings have methods too ('abc' in line is easier to read,  
and faster, than the corresponding regular expression)
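For a fixed substring the two spellings compare like this (a small sketch):

```python
import re

line = "the abc of scripting"

# plain substring test: short, readable, no regexp machinery
found_in = 'abc' in line

# regexp version of the same fixed-string search
found_re = re.search(re.escape('abc'), line) is not None
```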



The other things I need to do consist of moving files, manipulating file
names and piping outputs of one command to the next, so I'm digging into
the documentation as much as I can.


The 'os' and 'shutil' modules.


And for executing external commands with piping, I'd add the subprocess  
module.


--
Gabriel Genellina



regular expressions, stack and nesting

2009-03-22 Thread Aaron Brady
Hi,

Every so often the group gets a request for parsing an expression.  I
think it would be significantly easier to do if regular expressions
could modify a stack.  However, since you might nearly as well write
Python, maybe there is a compromise.

Could the Secret Labs' regular expression engine be modified to
operate on lists, for example, or a mutable non-string type?

Details (juicy and otherwise):

One of the alternatives is to reconstruct a new string on every match,
removing the expression and replacing it with a tag.  (This by the way
takes at least one out-of-band character.)  The running time on it
involves constructing a string from at least three parts, maybe five:
the lead, the opening marker, the inside of the match, the closing
marker, and the tail.  If it used ropes, it's still constant time, but
is O( string length * number of matches ) with just normal strings.

Another alternative is to create a new unicode object API,
PyUnicode_FROM_DATA, which creates a string object from an existing
buffer, but does not copy it.  I expect this would receive -1 from
many people, not least because it breaks immutability of strings.

ctypes character arrays, arrays, and buffer objects are additional
possibilities.


Re: Script for a project inside own directory

2009-03-22 Thread Filip Gruszczyński
Works great. Thanks a lot.

2009/3/22 Maxim Khitrov mkhit...@gmail.com:
 2009/3/22 Filip Gruszczyński grusz...@gmail.com:
 I am having a project built like this:

 project
   module1.py
   module2.py
   packages1/
     module3.py

 etc.

 I have script that uses objects from those modules/packages. If I keep
 this script inside project directory it's ok and it works. But I would
 like to move it to own scripts directory and from there it doesn't
 work. I think I understand why it doesn't work (the root of local
 packages and modules is there and it can't see what it above it), but
 I would like to ask if there is any workaround? I would like to keep
 all my scripts in separate dir instead of main dir. I like to keep it
 clean.

 import sys
 sys.path.append('path to project')

 If project directory is one level up, you can do something like this:

 import os
 import sys
 sys.path.append(os.path.realpath('..'))

 - Max




-- 
Filip Gruszczyński


Generator

2009-03-22 Thread R. David Murray
mattia ger...@gmail.com wrote:
 Can you explain me this behaviour:
 
  >>> s = [1,2,3,4,5]
  >>> g = (x for x in s)
  >>> next(g)
  1
  >>> s
  [1, 2, 3, 4, 5]
  >>> del s[0]
  >>> s
  [2, 3, 4, 5]
  >>> next(g)
  3
 
 
 Why next(g) doesn't give me 2?

Think of it this way:  the generator is exactly equivalent to
the following generator function:

def g(s):
for x in s:
yield x

Now, if you look at the documentation for the 'for' statement, there is
a big warning box that talks about what happens when you mutate an
object that is being looped over:

 There is a subtlety when the sequence is being modified by the loop (this
 can only occur for mutable sequences, i.e. lists). An internal counter is
 used to keep track of which item is used next, and this is incremented on
 each iteration. When this counter has reached the length of the sequence
 the loop terminates. This means that if the suite deletes the current (or
 a previous) item from the sequence, the next item will be skipped (since
 it gets the index of the current item which has already been treated).
 Likewise, if the suite inserts an item in the sequence before the current
 item, the current item will be treated again the next time through the
 loop. 

As you can see, your case is covered explicitly there.

If you want next(g) to yield 2, you'd have to do something like:

g = (x for x in s[:])

where s[:] makes a copy of s that is then iterated over.
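A sketch showing both behaviours side by side (mirroring the two cases above):

```python
# iterating the list itself: deleting an item shifts the internal index
s = [1, 2, 3, 4, 5]
g = (x for x in s)
first = next(g)        # 1
del s[0]               # s is now [2, 3, 4, 5]
skipped_to = next(g)   # 3 -- the 2 was skipped

# iterating a copy: later mutation of s has no effect
t = [1, 2, 3, 4, 5]
h = (x for x in t[:])
next(h)                # 1
del t[0]
unaffected = next(h)   # 2
```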

--
R. David Murray   http://www.bitdance.com



Re: Using python 3 for scripting?

2009-03-22 Thread Timo Myyrä
Ok, I think I'll stick with the 2.6 then. I recall it gave 
warnings about things that are deprecated in 3.0 so it will make 
porting the scripts to 3.0 easier. 


I might try 3.0 once I know what kind of scripts are needed.


Re: Rough draft: Proposed format specifier for a thousands separator

2009-03-22 Thread rzed
Raymond Hettinger pyt...@rcn.com wrote in
news:e35271b9-7623-4845-bcb9-d8c33971f...@w24g2000prd.googlegroups.com:

 If anyone here is interested, here is a proposal I posted on the
 python-ideas list.
 
 The idea is to make number formatting a little easier with the
 new format() builtin
 in Py2.6 and Py3.0: 
 http://docs.python.org/library/string.html#formatspec 
 
[...]
 Comments and suggestions are welcome but I draw the line at
 supporting Mayan numbering conventions ;-)

Is that inclusive or exclusive?

-- 
rzed


Re: regular expressions, stack and nesting

2009-03-22 Thread Chris Rebert
2009/3/22 Aaron Brady castiro...@gmail.com:
 Hi,

 Every so often the group gets a request for parsing an expression.  I
 think it would be significantly easier to do if regular expressions
 could modify a stack.  However, since you might nearly as well write
 Python, maybe there is a compromise.

If you need to parse something of decent complexity, you ought to use
an actual parser generator, e.g. PLY, pyparsing, ANTLR, etc.
Abusing regular expressions like that to kludge jury-rigged parsers
together can only lead to pain when special cases and additional
grammar complexity emerge and start breaking the parser in difficult
ways. I'm not seeing the use case for your suggestion.

Cheers,
Chris

-- 
I have a blog:
http://blog.rebertia.com


Re: RSS feed issues, or how to read each item exactly once

2009-03-22 Thread Gabriel Genellina
En Sat, 21 Mar 2009 17:12:45 -0300, John Nagle na...@animats.com  
escribió:



I've been using the feedparser module, and it turns out that
some RSS feeds don't quite do RSS right. [...]
It's
something that feedparser should perhaps do.


Better to ask the author than post here, I think. And even if feedparser  
were a standard module, it's better to file a feature request in the  
tracker (http://bugs.python.org)


--
Gabriel Genellina



Re: 3.0 - bsddb removed

2009-03-22 Thread Nick Craig-Wood
Sean sea...@att.net wrote:
  Anyone got any thoughts about what to use as a replacement.  I need 
  something (like bsddb) which uses dictionary syntax to read and write an 
  underlying (fast!) btree or similar.

sqlite.

bsddb gave me no end of trouble with threads, but sqlite worked
brilliantly.

You would need to make a dictionary interface to sqlite, eg

  http://code.activestate.com/recipes/576638/

Or do something a bit simpler yourself.
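A bare-bones version of such a wrapper might look like this (a sketch, not the linked recipe; SqliteDict is an illustrative name, and it only handles text keys and values):

```python
import sqlite3

class SqliteDict(object):
    """Minimal dict-style interface over a single sqlite table."""

    def __init__(self, path=':memory:'):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)')

    def __setitem__(self, key, value):
        self.conn.execute('REPLACE INTO kv (k, v) VALUES (?, ?)', (key, value))

    def __getitem__(self, key):
        row = self.conn.execute(
            'SELECT v FROM kv WHERE k = ?', (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

    def __contains__(self, key):
        try:
            self[key]
            return True
        except KeyError:
            return False

d = SqliteDict()
d['answer'] = '42'
```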

-- 
Nick Craig-Wood n...@craig-wood.com -- http://www.craig-wood.com/nick


Re: script files with python (instead of tcsh/bash)?

2009-03-22 Thread Nick Craig-Wood
Esmail ebo...@hotmail.com wrote:
  thanks for including the script, that really helps. Nice way of
  finding files.

Python has lots of useful stuff like that!

  Two quick questions:
 
  As a replacement for grep I would use the re module and its
  methods?

The re module works on strings not files, but basically yes.

Note that the re module uses a superset of the grep -E regexps, and
they are almost identical to those used in perl and php.

Here is a simple grep in python

#!/usr/bin/python
import re
import sys
import fileinput
pattern = sys.argv.pop(1)
for line in fileinput.input():
if re.search(pattern, line):
print line.rstrip()

Save in a file called grep.py then you can do

$ ./grep.py dizzy *.py
self.dizzy = 0
if self.dizzy:

$ ls | ./grep.py '\d{2}'
linux-image-2.6.24-21-eeepc_2.6.24-21.39eeepc1_i386.deb
linux-ubuntu-modules-2.6.24-21-eeepc_2.6.24-21.30eeepc6_i386.deb

  What about awk which I regularly use to extract fields based on position
  but not column number, what should I be using in Python to do the
  same?

I presume you mean something like this

   ... | awk '{print $2}'

In Python, an equivalent of the above would be:

import fileinput
for line in fileinput.input():
print line.split()[1]

Note that the fileinput module is really useful for making shell
command replacements!

  The other things I need to do consist of moving files, manipulating file
  names and piping outputs of one command to the next, so I'm digging into
  the documentation as much as I can.

Read up on the os module and the subprocess module.  You'll find you
need to do much less piping with python as with shell because it has
almost everything you'll need built in.

Using built in functions is much quicker than fork()-ing an external
command too.
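When an external command is unavoidable, subprocess handles the piping (a sketch; assumes a Unix-like system where echo and tr are available):

```python
import subprocess

# rough equivalent of the shell pipeline: echo hello | tr a-z A-Z
p1 = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['tr', 'a-z', 'A-Z'],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # lets p1 get SIGPIPE if p2 exits early
output = p2.communicate()[0].decode().strip()
```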

  So much to learn, so little time (but so much fun!)

;-)

-- 
Nick Craig-Wood n...@craig-wood.com -- http://www.craig-wood.com/nick


Re: Async serial communication/threads sharing data

2009-03-22 Thread Nick Craig-Wood
Jean-Paul Calderone exar...@divmod.com wrote:
  It's true that the serial port support in Twisted isn't the most used
  feature. :)  These days, serial ports are on the way out, I think.  That
  said, much of the way a serial port is used in Twisted is the same as the
  way a TCP connection is used.  This means that the Twisted documentation
  for TCP connections is largely applicable to using serial ports.  The only
  major difference is how you set up the connection.  You can see examples
  of using the serial port support here (one of them seems to suggest that
  it won't work on Windows, but I think this is a mistake):
 
http://twistedmatrix.com/projects/core/documentation/examples/gpsfix.py
http://twistedmatrix.com/projects/core/documentation/examples/mouse.py
 
  The 2nd to last line of each example shows how to connect to the serial
  port.  These basic documents describe how to implement a protocol.  Though
  in the context of TCP, the protocol implementation ideas apply to working
  with serial ports as well:
 
http://twistedmatrix.com/projects/core/documentation/howto/servers.html
http://twistedmatrix.com/projects/core/documentation/howto/clients.html
 
  You can ignore the parts about factories, since they're not used with serial
  ports.

I wrote a serial port to TCP proxy (with logging) with twisted.  The
problem I had was that twisted serial ports didn't seem to have any
back pressure.  By that I mean I could pump data into a 9600 baud
serial port at 10 Mbit/s.  Twisted would then buffer the data for me
using 10s or 100s or Megabytes of RAM.  No data would be lost, but
there would be hours of latency and my program would use up all my RAM
and explode.

What I wanted to happen was for twisted to stop taking the data when
the serial port buffer was full and to only take the data at 9600
baud.

I never did solve that problem :-(

-- 
Nick Craig-Wood n...@craig-wood.com -- http://www.craig-wood.com/nick


Re: regular expressions, stack and nesting

2009-03-22 Thread Aaron Brady
On Mar 22, 12:18 pm, Chris Rebert c...@rebertia.com wrote:
 2009/3/22 Aaron Brady castiro...@gmail.com:

  Hi,

  Every so often the group gets a request for parsing an expression.  I
  think it would be significantly easier to do if regular expressions
  could modify a stack.  However, since you might nearly as well write
  Python, maybe there is a compromise.

 If you need to parse something of decent complexity, you ought to use
 an actual parser generator, e.g. PLY, pyparsing, ANTLR, etc.
 Abusing regular expressions like that to kludge jury-rigged parsers
 together can only lead to pain when special cases and additional
 grammar complexity emerge and start breaking the parser in difficult
 ways. I'm not seeing the use case for your suggestion.

 Cheers,
 Chris

 --
 I have a blog:http://blog.rebertia.com

Hey, I don't see the use case either, but that doesn't stop everyone
and their pet snake from asking about it.  /snippity

I guess I'm looking at something on the scale of a recipe.  Farewell,
dreams and glory.  What do you think anyway?

P.S.  What if the topics were, "kludge jury-rigged parsers"?


Re: python php/html file upload issue

2009-03-22 Thread Gabriel Genellina
En Sat, 21 Mar 2009 11:47:36 -0300, S.Selvam Siva s.selvams...@gmail.com  
escribió:



I want to upload a file from python to php/html form using urllib2,and my
code is below


See the Python Cookbook: http://code.activestate.com/recipes/langs/python/

--
Gabriel Genellina



Re: Using python 3 for scripting?

2009-03-22 Thread andrew cooke
mahesh wrote:
 python 2.5 is preferred;

no it is not.

andrew



Re: Using python 3 for scripting?

2009-03-22 Thread R. David Murray
timo.my...@gmail.com (Timo Myyrä) wrote:
 Ok, I think I'll stick with the 2.6 then. I recall it gave 
 warnings about things that are deprecated in 3.0 so it will make 
 porting the scripts to 3.0 easier. 
 
 I might try 3.0 once I know what kind of scripts are needed.

In case you don't recall, running your scripts under 2.6 with -3 will
give you useful info.

Someone else recommended 2.5, and that is a valid recommendation if you
are planning to ship your scripts off to a variety of target hosts.
Some linux distributions are still shipping with 2.5 as standard.
You'll run into some systems that haven't been updated from 2.4 yet,
for that matter.

But if this is for your own local use, personally I'd do (will be doing
new stuff) everything possible in 3, and only dropping back to 2.6 when
I have to.  Unfortunately in some cases I _do_ have to support a number
of servers that are still running 2.5 :(

--
R. David Murray   http://www.bitdance.com



loading program's global variables in ipython

2009-03-22 Thread per
hi all,

i have a file that declares some global variables, e.g.

myglobal1 = 'string'
myglobal2 = 5

and then some functions. i run it using ipython as follows:

[1] %run myfile.py

i notice then that myglobal1 and myglobal2 are not imported into
python's interactive namespace. i'd like them too -- how can i do
this?

 (note my file does not contain a __name__ == '__main__' clause.)

thanks.


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread R. David Murray
Gabriel Genellina gagsl-...@yahoo.com.ar wrote:
 En Sat, 21 Mar 2009 23:58:07 -0300, lamber...@corning.com escribió:
  import re
  import io
 
  class file(io.TextIOWrapper):
 
  '''
  Enhance TextIO.  Streams have many sources,
  a file name is insufficient.
  '''
 
  def __init__(self,stream):
  #self.stream = stream
  super().__init__(stream.buffer)
 
 
  print(file(open('p.py')).read())
 
 You're taking a shortcut (the open() builtin) that isn't valid here.
 
 open() creates a raw FileIO object, then a BufferedReader, and finally  
 returns a TextIOWrapper. Each of those has a reference to the previous  
 object, and delegates many calls to it. In particular, close() propagates  
 down to FileIO to close the OS file descriptor.
 
 In your example, you call open() to create a TextIOWrapper object that is  
 discarded as soon as the open() call finishes - because you only hold a  
 reference to the intermediate buffer. The destructor calls close(), and  
 the underlying OS file descriptor is closed.
 
 So, if you're not interested in the TextIOWrapper object, don't create it  
 in the first place. That means, don't use the open() shortcut and build  
 the required pieces yourself.
 
 ---
 
 There is another alternative that relies on undocumented behaviour: use  
 open to create a *binary* file and wrap the resulting BufferedReader  
 object in your own TextIOWrapper.
 
 import io
 
 class file(io.TextIOWrapper):
   def __init__(self, buffer):
   super().__init__(buffer)
 
 print(file(open('p.py','rb')).read())

I'm wondering if what we really need here is either some way to tell open
to use a specified subclass(s) instead of the default ones, or perhaps
an 'open factory' function that would yield such an open function that
otherwise is identical to the default open.

What's the standard python idiom for when consumer code should be
able to specialize the classes used to create objects returned from
a called package?  (I'm tempted to say monkey patching the module,
but that can't be optimal :)

--
R. David Murray   http://www.bitdance.com



Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Scott David Daniels

lamber...@corning.com wrote:

...  Removing the comment character to increase the stream
reference count fixes the program, at the expense of
an extra TextIOWrapper object.


But you do create that extra TextIOWrapper, so there should
be no crying about its existence.  If you rely on the data structure
of another object, it is a good idea to hold onto that object, rather
than simply hold onto a field or two of its class.

So, at the least, you need these changes:


...
class file(io.TextIOWrapper):
...
def __init__(self, buffered_reader): # was ..., stream)
#self.stream = stream
super().__init__(buffered_reader) # was ...(stream.buffer)
...


print(file(open('p.py', 'rb')).read()) # was ...('p.py')...


But, I'd say the right way to write this last is:

   with file(open('p.py', 'rb')) as src:
   print(src.read())




Re: pyconfig on 64-bit machines with distutils vs 32-bit legacy code

2009-03-22 Thread Rob Clewley
Thanks for replying, Martin.

I got my colleague (Nils) to run exactly the gcc call you described in
your post (see below for what he ran) but it only returns the
following:

/home/nwagner/svn/PyDSTool/PyDSTool/tests/dopri853_temp/dop853_HHnet_vf_wrap.c:124:20:
error: Python.h: Datei oder Verzeichnis nicht gefunden
/home/nwagner/svn/PyDSTool/PyDSTool/tests/dopri853_temp/dop853_HHnet_vf_wrap.c:2495:4:
error: #error This python version requires swig to be run with the
'-classic' option
In file included from
/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:61,
from
/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/include/numpy/arrayobject.h:14,
from
/home/nwagner/local/lib64/python2.6/site-packages/numpy/numarray/numpy/libnumarray.h:7,
from
/home/nwagner/svn/PyDSTool/PyDSTool/tests/dopri853_temp/dop853_HHnet_vf_wrap.c:2758:
/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/include/numpy/npy_common.h:71:2:
error: #error Must use Python with unicode enabled.

The command was

gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
-Wstrict-prototypes -fPIC
-I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include
-I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/numarray
-I/data/home/nwagner/svn/PyDSTool/PyDSTool/tests
-I/data/home/nwagner/svn/PyDSTool/PyDSTool/integrator
-I/data/home/nwagner/svn/PyDSTool/PyDSTool/tests/dopri853_temp
-I/data/home/nwagner/local/include/python2.5 -E -dD
/data/home/nwagner/svn/PyDSTool/PyDSTool/tests/dopri853_temp/dop853_HHnet_vf_wrap.c
-w -m32 -D__DOPRI__

Maybe Nils can pick up the thread from here.
Thanks,
Rob

On Sun, Mar 22, 2009 at 2:36 AM, Martin v. Löwis mar...@v.loewis.de wrote:
 /data/home/nwagner/local/lib/python2.5/pyport.h:734:2: #error
 LONG_BIT definition appears wrong for platform (bad gcc/glibc
 config?).


 Can anyone offer any advice as to what I might be missing or
 misunderstanding?

 You need to understand where the error comes from:
 1. what is the *actual* value of SIZEOF_LONG (it should be 4)?
 2. what is the actual value of LONG_BIT, and where does it come
   from? (it should be 32)

 To understand that better, I recommend to edit the command line
 of gcc in the following way (assuming you use gcc 4.x):
 1. replace -c with -E -dD
 2. remove the -o file option

 This will generate preprocessor output to stdout, which you then
 need to search for SIZEOF_LONG and LONG_BIT. Searching back for
 # number lines will tell you where the definition was made.

 If that doesn't make it clear what the problem is, post your
 findings here.

 Regards,
 Martin


Re: pyconfig on 64-bit machines with distutils vs 32-bit legacy code

2009-03-22 Thread Martin v. Löwis
Rob Clewley wrote:
 I got my colleague (Nils) to run exactly the gcc call you described in
 your post (see below for what he ran) but it only returns the
 following:

Very strange. Which gcc version is that? (gcc -v)

 /home/nwagner/svn/PyDSTool/PyDSTool/tests/dopri853_temp/dop853_HHnet_vf_wrap.c:124:20:
 error: Python.h: Datei oder Verzeichnis nicht gefunden

Also very strange. Is Python.h present on the search path?

If you write a file containing only

#include "Python.h"

and then run

gcc -E <all the -I options> datei.c

do you get any output? And what happens when you then add -dD?

Good luck,

Martin


Re: Using python 3 for scripting?

2009-03-22 Thread Timo Myyrä
I might get summer job in doing some 2nd tier support and doing 
some scripting besides that in Solaris environment. I gotta see 
what kind of scripts are needed but I'd guess the 2.6 would be the 
safest option. 


Timo


Re: loading program's global variables in ipython

2009-03-22 Thread Peter Otten
per wrote:

 i have a file that declares some global variables, e.g.
 
 myglobal1 = 'string'
 myglobal2 = 5

These aren't declarations, this is executable code.
 
 and then some functions. i run it using ipython as follows:
 
 [1] %run myfile.py
 
 i notice then that myglobal1 and myglobal2 are not imported into
 python's interactive namespace. i'd like them too -- how can i do
 this?
 
  (note my file does not contain a __name__ == '__main__' clause.)
 
 thanks.

Use

%run -i myfile.py

or

execfile("myfile.py")  # not ipython-specific

Peter
--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Gabriel Genellina
On Sun, 22 Mar 2009 15:11:37 -0300, R. David Murray
rdmur...@bitdance.com wrote:

Gabriel Genellina gagsl-...@yahoo.com.ar wrote:

On Sat, 21 Mar 2009 23:58:07 -0300, lamber...@corning.com wrote:

 class file(io.TextIOWrapper):

 '''
 Enhance TextIO.  Streams have many sources,
 a file name is insufficient.
 '''

 def __init__(self,stream):
 #self.stream = stream
 super().__init__(stream.buffer)


 print(file(open('p.py')).read())


[...] So, if you're not interested in the TextIOWrapper object, don't
create it in the first place. That means, don't use the open() shortcut;
build the required pieces yourself.


I'm wondering if what we really need here is either some way to tell open
to use a specified subclass(s) instead of the default ones, or perhaps
an 'open factory' function that would yield such an open function that
otherwise is identical to the default open.

What's the standard python idiom for when consumer code should be
able to specialize the classes used to create objects returned from
a called package?  (I'm tempted to say monkey patching the module,
but that can't be optimal :)


I've seen:
- pass the desired subclass as an argument to the class constructor /
factory function.
- set the desired subclass as an instance attribute of the factory object.
- replace the f_globals attribute of the factory function (I wouldn't
recommend this! but sometimes it's the only way)


In the case of builtin open(), I'm not convinced it would be a good idea  
to allow subclassing. But I have no rational arguments - just don't like  
the idea :(
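For what it's worth, the "build the required pieces yourself" approach above can be sketched with an in-memory buffer standing in for a real file; the subclass name and its extra method here are purely illustrative:

```python
import io

class MyText(io.TextIOWrapper):
    # Illustrative subclass: adds one convenience method on top of
    # the normal text-stream behaviour.
    def shout(self):
        return self.read().upper()

# io.BytesIO stands in for the buffered binary stream that
# open(path, 'rb') would give you.
raw = io.BytesIO(b"hello world\n")
f = MyText(raw, encoding="utf-8")
assert f.shout() == "HELLO WORLD\n"
```

The point is that TextIOWrapper's constructor takes any buffered binary stream, so nothing forces you to go through open() to get a customized text stream.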


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Benjamin Peterson
Gabriel Genellina gagsl-py2 at yahoo.com.ar writes:
 
 There is another alternative that relies on undocumented behaviour: use  
 open to create a *binary* file and wrap the resulting BufferedReader  
 object in your own TextIOWrapper.

How is that undocumented behavior? TextIOWrapper can wrap any buffer which
follows the io.BufferedIOBase ABC. BufferedReader is a subclass of
io.BufferedIOBase.



--
http://mail.python.org/mailman/listinfo/python-list


what features would you like to see in 2to3?

2009-03-22 Thread Benjamin Peterson
It's GSoC time again, and I've had lots of interested students asking about
doing a project on improving 2to3. What kinds of improvements and features
would you like to see in it that student programmers could accomplish?

--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda forms and scoping

2009-03-22 Thread R. David Murray
Gabriel Genellina gagsl-...@yahoo.com.ar wrote:
 On Fri, 20 Mar 2009 23:16:00 -0300, alex goretoy
 aleksandr.gore...@gmail.com wrote:
 
  i looks at lambdas as unbound functions(or super function), in the case
  above we create the functions in a list places it in memory unboud, once
  binding a call to the memory address space it returns the value
 
  it is basically same as doing this:
  def f():
  print f
 
  a=f #unbound function, same as rename function
  a() #bind call to address space
 
 Mmm, I don't quite understand what you said. lambda creates functions that
 aren't different than functions created by def: apart from the name,
 they're really the same thing.

Oh, good, I'm not the only one for whom the above didn't make sense :)
I feel a little less dense now.

 And if you imply that *where* you call a function does matter, it does
 not. A function carries its own local namespace, its own closure, and its
 global namespace. At call time, no additional binding is done (except
 parameters -> arguments).
 
 (and the address space is always the one of the running process)

I poked around in the API docs and experimented with func_closure and
related attributes, and after bending my brain for a while I think I
understand this.  The actual implementation of the closure is a single
list of 'cell' objects which represent namespace slots in the nested
scopes in which the closed-over function is defined.

But the fact that it is a single list is an implementation detail, and
the implementation is in fact carefully designed so that conceptually
we can think of the closure as giving the function access to those
nested-scope namespaces in almost(*) the same sense that it has a
reference to the global and local namespaces.  That is, if what a name
in _any_ of those namespaces points to is changed, then the closed-over
function sees those changes.

In this way, we understand the original example:  when defining a
lambda having a 'free variable' (that is, one not defined in either the
local or global scope) that was a name in the surrounding function's
local namespace, the lambda is going to see any changes made by the
surrounding function with regards to what that name points to.  Thus,
the final value that the lambda uses is whatever the final value of the
for loop variable was when the surrounding function finished executing.
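A minimal sketch of that late-binding behaviour, together with the usual default-argument workaround, is:

```python
def make_lambdas():
    fs = []
    for i in range(3):
        fs.append(lambda: i)        # each lambda closes over the *name* i
    return fs

# Every lambda sees the final value i had when the loop finished.
assert [f() for f in make_lambdas()] == [2, 2, 2]

def make_bound():
    fs = []
    for i in range(3):
        fs.append(lambda i=i: i)    # default argument captures i's value now
    return fs

# The default-argument idiom freezes the value per iteration.
assert [f() for f in make_bound()] == [0, 1, 2]
```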

However, I think that a Python closure is not quite the same thing as a
'computer science' closure, for the same reason that people coming from a
language with variables-and-values as opposed to namespaces get confused
when dealing with Python function call semantics.  Consider:

http://en.wikipedia.org/wiki/Closure_(computer_science)

That says that a closure can be used to provide a function with a private
set of variables that persist from one invocation to the next, so that
a value established in one call can be accessed in the next.  The last
part of that sentence is not true in Python, since any assignment inside
a function affects only the local (per-invocation) namespace or (given
a global statement) the global namespace.  A function cannot change the
thing pointed to by a name in the closure.  Only the outer function,
for whom that name is in its local namespace, can do that.

(*) That last sentence in the previous paragraph is why I said '_almost_
the same sense' earlier: a function can modify what names point to in
its local and global namespaces, but cannot modify what names point to
in the closure namespace.

Of course, we can produce the same _effect_ as a computer science closure
in Python by using mutable objects...which is exactly parallel to the
difference between passing mutable or immutable objects in a function
call.
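A small sketch of that mutable-object technique:

```python
def make_counter():
    count = [0]              # a one-element list emulates an assignable
                             # closed-over variable
    def increment():
        count[0] += 1        # mutate the shared object; no rebinding needed
        return count[0]
    return increment

c = make_counter()
assert (c(), c(), c()) == (1, 2, 3)   # state persists across calls
```

The inner function never rebinds the name count, so no "cannot assign to closure variable" problem arises; it only mutates the object that name points to.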

--
R. David Murray   http://www.bitdance.com

--
http://mail.python.org/mailman/listinfo/python-list


Re: Generator

2009-03-22 Thread mattia
On Sun, 22 Mar 2009 16:52:02 +0000, R. David Murray wrote:

 mattia ger...@gmail.com wrote:
 Can you explain me this behaviour:
 
 >>> s = [1,2,3,4,5]
 >>> g = (x for x in s)
 >>> next(g)
 1
 >>> s
 [1, 2, 3, 4, 5]
 >>> del s[0]
 >>> s
 [2, 3, 4, 5]
 >>> next(g)
 3
 
 
 Why next(g) doesn't give me 2?
 
 Think of it this way:  the generator is exactly equivalent to the
 following generator function:
 
 def g(s):
 for x in s:
 yield x
 
 Now, if you look at the documentation for the 'for' statement, there is
 a big warning box that talks about what happens when you mutate an
 object that is being looped over:
 
  There is a subtlety when the sequence is being modified by the loop
  (this can only occur for mutable sequences, i.e. lists). An
  internal counter is used to keep track of which item is used next,
  and this is incremented on each iteration. When this counter has
  reached the length of the sequence the loop terminates. This means
  that if the suite deletes the current (or a previous) item from the
  sequence, the next item will be skipped (since it gets the index of
  the current item which has already been treated). Likewise, if the
  suite inserts an item in the sequence before the current item, the
  current item will be treated again the next time through the loop.
 
 As you can see, your case is covered explicitly there.
 
 If you want next(g) to yield 3, you'd have to do something like:
 
 g = (x for x in s[:])
 
 where s[:] makes a copy of s that is then iterated over.

Ok, thanks. Yes, I had the idea that a counter was used in order to 
explain my problem. Now I know that my intuition was correct. Thanks.
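The behaviour described in the quoted reply, and the copy-based fix, can be checked with plain assertions:

```python
s = [1, 2, 3, 4, 5]
g = (x for x in s)
assert next(g) == 1
del s[0]
assert next(g) == 3    # 2 is skipped: the internal index moved past it

s = [1, 2, 3, 4, 5]
g = (x for x in s[:])  # iterate over a copy; later deletes don't affect it
assert next(g) == 1
del s[0]
assert next(g) == 2
```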
--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread R. David Murray
Gabriel Genellina gagsl-...@yahoo.com.ar wrote:
 On Sun, 22 Mar 2009 15:11:37 -0300, R. David Murray
 rdmur...@bitdance.com wrote:
  Gabriel Genellina gagsl-...@yahoo.com.ar wrote:
  On Sat, 21 Mar 2009 23:58:07 -0300, lamber...@corning.com wrote:
  
   class file(io.TextIOWrapper):
  
   '''
   Enhance TextIO.  Streams have many sources,
   a file name is insufficient.
   '''
  
   def __init__(self,stream):
   #self.stream = stream
   super().__init__(stream.buffer)
  
  
   print(file(open('p.py')).read())
 
 
  [...] So, if you're not interested in the TextIOWrapper object, don't  
  create it in the first place. That means, don't use the open() shortcut  
  and build
  the required pieces yourself.
 
  I'm wondering if what we really need here is either some way to tell open
  to use a specified subclass(s) instead of the default ones, or perhaps
  an 'open factory' function that would yield such an open function that
  otherwise is identical to the default open.
 
  What's the standard python idiom for when consumer code should be
  able to specialize the classes used to create objects returned from
  a called package?  (I'm tempted to say monkey patching the module,
  but that can't be optimal :)
 
 I've seen:
 - pass the desired subclass as an argument to the class constructor /  
 factory function.
 - set the desired subclass as an instance attribute of the factory object.
  - replace the f_globals attribute of the factory function (I wouldn't
  recommend this! but sometimes it's the only way)
 
 In the case of builtin open(), I'm not convinced it would be a good idea  
 to allow subclassing. But I have no rational arguments - just don't like  
 the idea :(

When 'file' was just a wrapper around C I/O, that probably made as much
sense as anything.  But now that IO is more Pythonic, it would be nice
to have Pythonic methods for using a subclass of the default classes
instead of the default classes.  Why should a user have to reimplement
'open' just in order to use their own TextIOWrapper subclass?

I should shift this thread to Python-ideas, except I'm not sure I'm
ready to take ownership of it (yet?). :)
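One way such an "open factory" might look; every name here is hypothetical, and this is a sketch of the idea rather than a proposal for the actual API:

```python
import io
import os
import tempfile

def make_open(text_cls=io.TextIOWrapper):
    # Hypothetical factory: returns an open() variant that uses text_cls
    # for text-mode files instead of the default TextIOWrapper.
    def my_open(path, mode="r", encoding="utf-8"):
        if "b" in mode:
            return open(path, mode)
        raw = open(path, mode.replace("t", "") + "b")  # binary layer
        return text_cls(raw, encoding=encoding)        # custom text layer
    return my_open

class UpperReader(io.TextIOWrapper):
    # Illustrative subclass a user might want open() to produce.
    def read_upper(self):
        return self.read().upper()

custom_open = make_open(UpperReader)

fd, path = tempfile.mkstemp()
os.write(fd, b"hi\n")
os.close(fd)
f = custom_open(path)
assert f.read_upper() == "HI\n"
f.close()
os.remove(path)
```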

--
R. David Murray   http://www.bitdance.com

--
http://mail.python.org/mailman/listinfo/python-list


loading program's global variables in ipython

2009-03-22 Thread R. David Murray
per perfr...@gmail.com wrote:
 hi all,
 
 i have a file that declares some global variables, e.g.
 
 myglobal1 = 'string'
 myglobal2 = 5
 
 and then some functions. i run it using ipython as follows:
 
 [1] %run myfile.py
 
 i notice then that myglobal1 and myglobal2 are not imported into
 python's interactive namespace. i'd like them too -- how can i do
 this?
 
  (note my file does not contain a __name__ == '__main__' clause.)

I'm not familiar with IPython, but perhaps 'from myfile import *'
would do what you want?

--
R. David Murray   http://www.bitdance.com

--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Gabriel Genellina
On Sun, 22 Mar 2009 16:37:31 -0300, Benjamin Peterson
benja...@python.org wrote:

Gabriel Genellina gagsl-py2 at yahoo.com.ar writes:


There is another alternative that relies on undocumented behaviour: use
open to create a *binary* file and wrap the resulting BufferedReader
object in your own TextIOWrapper.


How is that undocumented behavior? TextIOWrapper can wrap any buffer  
which

follows the io.BufferedIOBase ABC. BufferedReader is a subclass of
io.BufferedIOBase.


The undocumented behavior is relying on the open() builtin to return a  
BufferedReader for a binary file.


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: what features would you like to see in 2to3?

2009-03-22 Thread Daniel Fetchinson
 It's GSoC time again, and I've had lots of interested students asking about
 doing on project on improving 2to3. What kinds of improvements and features
 would you like to see in it which student programmers could accomplish?

Last time I used 2to3 (maybe not the latest version) it didn't know
what to do with string exceptions. I'd suggest converting

# py 2
raise 'my string'

to

# py 3
raise Exception( 'my string' )

And notifying the user that this conversion has been done so that
he/she can take appropriate action, i.e. this conversion should not
pass silently.

Cheers,
Daniel



-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
--
http://mail.python.org/mailman/listinfo/python-list


Re: Generator

2009-03-22 Thread R. David Murray
mattia ger...@gmail.com wrote:
 On Sun, 22 Mar 2009 16:52:02 +0000, R. David Murray wrote:
 
  mattia ger...@gmail.com wrote:
  Can you explain me this behaviour:
  
  >>> s = [1,2,3,4,5]
  >>> g = (x for x in s)
  >>> next(g)
  1
  >>> s
  [1, 2, 3, 4, 5]
  >>> del s[0]
  >>> s
  [2, 3, 4, 5]
  >>> next(g)
  3
  
  
  Why next(g) doesn't give me 2?
  
  Think of it this way:  the generator is exactly equivalent to the
  following generator function:
  
  def g(s):
  for x in s:
  yield x
  
  Now, if you look at the documentation for the 'for' statement, there is
  a big warning box that talks about what happens when you mutate an
  object that is being looped over:
  
   There is a subtlety when the sequence is being modified by the loop
   (this can only occur for mutable sequences, i.e. lists). An
   internal counter is used to keep track of which item is used next,
   and this is incremented on each iteration. When this counter has
   reached the length of the sequence the loop terminates. This means
   that if the suite deletes the current (or a previous) item from the
   sequence, the next item will be skipped (since it gets the index of
   the current item which has already been treated). Likewise, if the
   suite inserts an item in the sequence before the current item, the
   current item will be treated again the next time through the loop.
  
  As you can see, your case is covered explicitly there.
  
  If you want next(g) to yield 3, you'd have to do something like:
  
  g = (x for x in s[:])
  
  where s[:] makes a copy of s that is then iterated over.
 
 Ok, thanks. Yes, I had the idea that a counter was used in order to 
 explain my problem. Now I know that my intuition was correct. Thanks.

By the way, it's not the 'for' loop that maintains the counter.  It's the
code that implements the iteration protocol for the thing being looped
over.  So theoretically you could make a subclass of list that would
somehow handle items being deleted or added in a sensible fashion.
But I doubt it is worth doing that, since the implementation would be
pretty idiosyncratic to the use-case.

--
R. David Murray   http://www.bitdance.com

--
http://mail.python.org/mailman/listinfo/python-list


Re: what features would you like to see in 2to3?

2009-03-22 Thread Benjamin Kaplan
On Sun, Mar 22, 2009 at 4:49 PM, Daniel Fetchinson 
fetchin...@googlemail.com wrote:

  It's GSoC time again, and I've had lots of interested students asking
 about
  doing on project on improving 2to3. What kinds of improvements and
 features
  would you like to see in it which student programmers could accomplish?

 Last time I used 2to3 (maybe not the latest version) it didn't know
 what to do with string exceptions. I'd suggest converting

 # py 2
 raise 'my string'

 to

 # py 3
 raise Exception( 'my string' )

 And notifying the user that this conversion has been done so that
 he/she can take appropriate action, i.e. this conversion should not
 pass silently.



String exceptions raised a DeprecationWarning in Python 2.5 and were removed
in Python 2.6 (
http://docs.python.org/whatsnew/2.6.html#porting-to-python-2-6) and 2to3 was
meant for valid 2.6+ code, so this isn't really a problem.

In terms of potential enhancements, a fairly simple one would be a warning
for classes that implement __cmp__ but not the rich comparisons. Even better
would be auto-generating the rich comparison methods for these classes.
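As a point of reference, functools.total_ordering (added in Python 2.7/3.2) already fills in the missing rich comparisons from __eq__ and __lt__, so a fixer could generate code in this spirit; the class here is a made-up example:

```python
import functools

@functools.total_ordering
class Version(object):
    # Only __eq__ and __lt__ are written by hand; total_ordering
    # derives __le__, __gt__, and __ge__ from them.
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        return self.n == other.n
    def __lt__(self, other):
        return self.n < other.n

assert Version(1) < Version(2)
assert Version(2) >= Version(1)
assert Version(3) == Version(3)
```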
--
http://mail.python.org/mailman/listinfo/python-list


Re: Tkinter book on current versions

2009-03-22 Thread Paul Watson
On Sat, 2009-03-21 at 08:10 -0700, W. eWatson wrote:
 Paul Watson wrote:
  Has anyone tried the Grayson book, Python and Tkinter Programming,
  with a recent version of Python?
  
  The first example code (calculator) generates a single row of buttons.
  Perhaps I have not applied the errata correctly.  Has anyone been
  successful?
  
  I am using:
  
  Python 2.5.2 (r252:60911, Dec  1 2008, 17:47:46)
  
 If you mean calc1.py, I had no trouble with calc1.py under 2.5, but calc2.py 
 uses Pmw, which I do not have. calc2 has a few problems with mixing tabs and 
 blanks.

Ok.  It works.  I did not type it in correctly.  Thanks for your
confirmation that it does work.

--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Benjamin Peterson
Gabriel Genellina gagsl-py2 at yahoo.com.ar writes:
 
 The undocumented behavior is relying on the open() builtin to return a  
 BufferedReader for a binary file.

I don't see the problem. open() will return some BufferedIOBase implementor,
and that's all that TextIOWrapper needs.




--
http://mail.python.org/mailman/listinfo/python-list


Re: Using python 3 for scripting?

2009-03-22 Thread Paul Watson
On Sun, 2009-03-22 at 17:00 +, Timo Myyrä wrote:
 Ok, I think I'll stick with the 2.6 then. I recall it gave 
 warnings about things that are deprecated in 3.0 so it will make 
 porting the scripts to 3.0 easier. 
 
 I might try 3.0 once I know what kind of scripts are needed.

Yes.  Develop your code using 2.6, then use the '2to3' utility to port
it to 3.0.

As others have mentioned, there are -many- third party packages which
are not available for 3.0 yet.

--
http://mail.python.org/mailman/listinfo/python-list


Re: what features would you like to see in 2to3?

2009-03-22 Thread Daniel Fetchinson
  It's GSoC time again, and I've had lots of interested students asking
 about
  doing on project on improving 2to3. What kinds of improvements and
 features
  would you like to see in it which student programmers could accomplish?

 Last time I used 2to3 (maybe not the latest version) it didn't know
 what to do with string exceptions. I'd suggest converting

 # py 2
 raise 'my string'

 to

 # py 3
 raise Exception( 'my string' )

 And notifying the user that this conversion has been done so that
 he/she can take appropriate action, i.e. this conversion should not
 pass silently.



 String exceptions raised a DeprecationWarning in Python 2.5 and were removed
 in Python 2.6 (
 http://docs.python.org/whatsnew/2.6.html#porting-to-python-2-6) and 2to3 was
 meant for valid 2.6+ code, so this isn't really a problem.

Thanks, I overlooked the fact that 2to3 should only be used on 2.6 code.

Cheers,
Daniel

 In terms of potential enhancements, a fairly simple one would be a warning
 for classes that implement __cmp__ but not the rich comparisons. Even better
 would be auto-generating the rich comparison methods for these classes.



-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
--
http://mail.python.org/mailman/listinfo/python-list


safely rename a method with a decorator

2009-03-22 Thread Daniel Fetchinson
I'd like to implement a decorator that would rename the method which
it decorates. Since it's a tricky thing in general involving all sorts
of __magic__ I thought I would ask around first before writing
something buggy :)

It should work something like this:

class myclass( object ):
@rename( 'hello' )
def method( self ):
print 'ok'

# tests

inst = myclass( )
inst.method( )   # raises an AttributeError
inst.hello( )    # prints 'ok'
myclass.method   # raises an AttributeError
myclass.hello    # <unbound method myclass.hello>
assert 'method' not in dir( myclass )
assert 'hello' in dir( myclass )

Any ideas?
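One possible sketch, assuming Python 2.6+ class decorators are acceptable; both decorator names below are hypothetical, and the method returns rather than prints so the asserts are self-checking:

```python
def rename(new_name):
    # Hypothetical marker decorator: records the desired name on the function.
    def mark(func):
        func._new_name = new_name
        return func
    return mark

def apply_renames(cls):
    # Hypothetical class decorator: rebinds marked methods under their
    # new names and removes the original attributes.
    for attr, value in list(vars(cls).items()):
        new = getattr(value, "_new_name", None)
        if new is not None:
            setattr(cls, new, value)
            delattr(cls, attr)
    return cls

@apply_renames
class myclass(object):
    @rename("hello")
    def method(self):
        return "ok"

inst = myclass()
assert inst.hello() == "ok"
assert not hasattr(inst, "method")
assert "hello" in dir(myclass) and "method" not in dir(myclass)
```

The two-step design is needed because a method decorator alone runs before the class exists and cannot remove the `method` binding that the class body creates.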

Cheers,
Daniel

-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
--
http://mail.python.org/mailman/listinfo/python-list


Re: Another of those is issues.

2009-03-22 Thread Emanuele D'Arrigo
Thank you all for the replies!

Manu
--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Gabriel Genellina
On Sun, 22 Mar 2009 19:12:13 -0300, Benjamin Peterson
benja...@python.org wrote:

Gabriel Genellina gagsl-py2 at yahoo.com.ar writes:


The undocumented behavior is relying on the open() builtin to return a
BufferedReader for a binary file.


I don't see the problem. open() will return some BufferedIOBase  
implmentor, and

that's all that TextIOWrapper needs.


How do you know? AFAIK, the return value of open() is completely  
undocumented:

http://docs.python.org/3.0/library/functions.html#open
And if you open the file in text mode, the return value isn't a
BufferedIOBase.


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: safely rename a method with a decorator

2009-03-22 Thread MRAB

Daniel Fetchinson wrote:

I'd like to implement a decorator that would rename the method which
it decorates. Since it's a tricky thing in general involving all sorts
of __magic__ I thought I would ask around first before writing
something buggy :)

It should work something like this:

class myclass( object ):
@rename( 'hello' )
def method( self ):
print 'ok'

# tests

inst = myclass( )
inst.method( )   # raises an AttributeError
inst.hello( )    # prints 'ok'
myclass.method   # raises an AttributeError
myclass.hello    # <unbound method myclass.hello>
assert 'method' not in dir( myclass )
assert 'hello' in dir( myclass )

Any ideas?


What is your use case? Why don't you just give the method the right name
in the first place? :-)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda forms and scoping

2009-03-22 Thread alex goretoy
Sorry to have confused y'all. What I meant was that you can do something
like this, where the function isn't called until it is bound to () with the
right params:

>>> def a():
...     print "inside a"
...
>>> def b():
...     print "inside b"
...
>>> def c(a,b):
...     a()
...     b()
...
>>> d={c:(a,b)}
>>> d[c][0]()
inside a
>>> d[c][1]()
inside b
>>> d[c(d[c][0],d[c][1])]
inside a
inside b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: None

where function a and b are bound in function c

-Alex Goretoy
http://www.goretoy.com

Samuel Beckett  - Birth was the death of him.

On Sun, Mar 22, 2009 at 2:42 PM, R. David Murray rdmur...@bitdance.comwrote:

 Gabriel Genellina gagsl-...@yahoo.com.ar wrote:
  On Fri, 20 Mar 2009 23:16:00 -0300, alex goretoy
  aleksandr.gore...@gmail.com wrote:
 
   i looks at lambdas as unbound functions(or super function), in the case
   above we create the functions in a list places it in memory unboud,
 once
   binding a call to the memory address space it returns the value
  
   it is basically same as doing this:
   def f():
   print f
  
   a=f #unbound function, same as rename function
   a() #bind call to address space
 
  Mmm, I don't quite understand what you said. lambda creates functions
 that
  aren't different than functions created by def: apart from the name,
  they're really the same thing.

 Oh, good, I'm not the only one for whom the above didn't make sense :)
 I feel a little less dense now.

  And if you imply that *where* you call a function does matter, it does
  not. A function carries its own local namespace, its own closure, and its
  global namespace. At call time, no additional binding is done (except
  parameters - arguments).
 
  (and the address space is always the one of the running process)

 I poked around in the API docs and experimented with func_closure and
 related attributes, and after bending my brain for a while I think I
 understand this.  The actual implementation of the closure is a single
 list of 'cell' objects which represent namespace slots in the nested
 scopes in which the closed-over function is defined.

 But the fact that it is a single list is an implementation detail, and
 the implementation is in fact carefully designed so that conceptually
 we can think of the closure as giving the function access to those
 nested-scope namespaces in almost(*) the same sense that it has a
 reference to the global and local namespaces.  That is, if what a name
 in _any_ of those namespaces points to is changed, then the closed-over
 function sees those changes.

 In this way, we understand the original example:  when defining a
 lambda having a 'free variable' (that is, one not defined in either the
 local or global scope) that was a name in the surrounding function's
 local namespace, the lambda is going to see any changes made by the
 surrounding function with regards to what that name points to.  Thus,
 the final value that the lambda uses is whatever the final value of the
 for loop variable was when the surrounding function finished executing.

 However, I think that a Python closure is not quite the same thing as a
 'computer science' closure, for the same reason that people coming from a
 language with variables-and-values as opposed to namespaces get confused
 when dealing with Python function call semantics.  Consider:


 http://en.wikipedia.org/wiki/Closure_(computer_science)

 That says that a closure can be used to provide a function with a private
 set of variables that persist from one invocation to the next, so that
 a value established in one call can be accessed in the next.  The last
 part of that sentence is not true in Python, since any assignment inside
 a function affects only the local (per-invocation) namespace or (given
 a global statement) the global namespace.  A function cannot change the
 thing pointed to by a name in the closure.  Only the outer function,
 for whom that name is in its local namespace, can do that.

 (*) That last sentence in the previous paragraph is why I said '_almost_
 the same sense' earlier: a function can modify what names point to in
 its local and global namespaces, but cannot modify what names point to
 in the closure namespace.

 Of course, we can produce the same _effect_ as a computer science closure
 in Python by using mutable objects...which is exactly parallel to the
 difference between passing mutable or immutable objects in a function
 call.

 --
 R. David Murray   http://www.bitdance.com

 --
 http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Benjamin Peterson
Gabriel Genellina gagsl-py2 at yahoo.com.ar wrote:
 
 How do you know? AFAIK, the return value of open() is completely  
 undocumented:
 http://docs.python.org/3.0/library/functions.html#open
 And if you open the  file in text mode, the return value isn't a  
 BufferedIOBase.

Oh, I see. I should change that. :) That open() will return an object
implementing RawIOBase, BufferedIOBase, or TextIOBase depending on the mode
is part of the API.



--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread lambertdw
Return value of open undocumented?

The return value of open() is a stream, according to
http://docs.python.org/dev/py3k/library/io.html#module-io

Seems like time for a bug report.
--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Scott David Daniels

Gabriel Genellina wrote:
On Sun, 22 Mar 2009 19:12:13 -0300, Benjamin Peterson
benja...@python.org wrote:

Gabriel Genellina gagsl-py2 at yahoo.com.ar writes:

The undocumented behavior is relying on the open() builtin to return a
BufferedReader for a binary file.


I don't see the problem. open() will return some BufferedIOBase 
implmentor, and

that's all that TextIOWrapper needs.


How do you know? AFAIK, the return value of open() is completely 
undocumented:

http://docs.python.org/3.0/library/functions.html#open
And if you open the  file in text mode, the return value isn't a 
BufferedIOBase.


OK, it is documented, but not so clearly.  I went first to the io
module, rather than the open function documentation, and looked at
what io.TextIOWrapper should get as its first arg:

The io module provides the Python interfaces to stream handling. The
builtin open() function is defined in this module

    class io.TextIOWrapper(buffer[, encoding[, errors[, newline[,
        line_buffering]]]])
A buffered text stream over a BufferedIOBase raw stream, buffer

So, we need a BufferedIOBase constructor.  Back at the introduction to
the io module, we see:
BufferedIOBase deals with buffering on a raw byte stream
(RawIOBase). Its subclasses, BufferedWriter, BufferedReader, and
BufferedRWPair buffer streams that are readable, writable, and both
readable and writable. BufferedRandom provides a buffered interface
to random access streams. BytesIO is a simple stream of in-memory
bytes.

In the Buffered Streams section:
class io.BufferedIOBase
Base class for streams that support buffering. It inherits
IOBase.  There is no public constructor.

OK, that was daunting.
But back in the io.open description, some ways down, we read:
The type of file object returned by the open() function depends on
the mode. When open() is used to open a file in a text mode ('w',
'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to
open a file in a binary mode, the returned class varies: in read
binary mode, it returns a BufferedReader; in write binary and append
binary modes, it returns a BufferedWriter, and in read/write mode,
it returns a BufferedRandom.

Aha! it is documented.  If you have some good ideas on how to make
this more obvious, I'm sure we'd be happy to fix the documentation.
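The mode-to-class mapping quoted above can be verified directly:

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Each text/binary mode maps to the documented stream class.
for mode, cls in [("r", io.TextIOWrapper),
                  ("rb", io.BufferedReader),
                  ("ab", io.BufferedWriter),
                  ("r+b", io.BufferedRandom)]:
    f = open(path, mode)
    assert isinstance(f, cls), (mode, type(f))
    f.close()

os.remove(path)
```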

--Scott David Daniels
scott.dani...@acm.org
--
http://mail.python.org/mailman/listinfo/python-list


Re: safely rename a method with a decorator

2009-03-22 Thread Daniel Fetchinson
 I'd like to implement a decorator that would rename the method which
 it decorates. Since it's a tricky thing in general involving all sorts
 of __magic__ I thought I would ask around first before writing
 something buggy :)

 It should work something like this:

 class myclass( object ):
 @rename( 'hello' )
 def method( self ):
 print 'ok'

 # tests

 inst = myclass( )
 inst.method( )   # raises an AttributeError
 inst.hello( )    # prints 'ok'
 myclass.method   # raises an AttributeError
 myclass.hello    # <unbound method myclass.hello>
 assert 'method' not in dir( myclass )
 assert 'hello' in dir( myclass )

 Any ideas?

 What is your use case? Why don't you just give the method the right name
 in the first place? :-)

The use case is that I'm writing a turbogears application in which the
URLs are determined by the method names. People might want to change
these names if they want to change the URLs. One way would be to put
the method names into a turbogears configuration file and the @rename
decorator could fetch it from there.

Cheers,
Daniel



-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda forms and scoping

2009-03-22 Thread andrew cooke
alex goretoy wrote:
 Sorry to have confused yall. What I meant was that you can do something
 like this, where the function isn't called until it is bound to () with
 the right params

 >>> def a():
 ...     print "inside a"
 ...
 >>> def b():
 ...     print "inside b"
 ...
 >>> def c(a,b):
 ...     a()
 ...     b()
 ...
 >>> d={c:(a,b)}
 >>> d[c][0]()
 inside a
 >>> d[c][1]()
 inside b
 >>> d[c(d[c][0],d[c][1])]
 inside a
 inside b
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 KeyError: None

 where function a and b are bound in function c


what on earth are you talking about?  you would get exactly the same
result with:

>>> def alpha():
...   print('in alpha')
...
>>> def beta():
...   print('in beta')
...
>>> def c(a, b):
...   a()
...   b()
...
>>> bollox=42
>>> d={bollox:(alpha, beta)}
>>> d[bollox][0]()
in alpha
>>> d[bollox][1]()
in beta
>>> d[c(d[bollox][0](),d[bollox][1]())]
in alpha
in beta
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in c
TypeError: 'NoneType' object is not callable

which is just the combination of:

>>> c(d[bollox][0],d[bollox][1])
in alpha
in beta
>>> d[None]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: None

d is only used, in d[c(d[bollox][0](),d[bollox][1]())] as a completely
normal lookup via bollox, and then to give an error when looking-up None
(the return value from c).

there is no special binding process.

andrew


--
http://mail.python.org/mailman/listinfo/python-list


Re: python 3, subclassing TextIOWrapper.

2009-03-22 Thread Gabriel Genellina
En Sun, 22 Mar 2009 21:03:38 -0300, Scott David Daniels  
scott.dani...@acm.org escribió:

Gabriel Genellina wrote:
En Sun, 22 Mar 2009 19:12:13 -0300, Benjamin Peterson  
benja...@python.org escribió:

Gabriel Genellina gagsl-py2 at yahoo.com.ar writes:

The undocumented behavior is relying on the open() builtin to return a
BufferedReader for a binary file.


I don't see the problem. open() will return some BufferedIOBase
implementor, and

that's all that TextIOWrapper needs.


How do you know? AFAIK, the return value of open() is completely  
undocumented:

http://docs.python.org/3.0/library/functions.html#open
And if you open the  file in text mode, the return value isn't a  
BufferedIOBase.


OK, it is documented, but not so clearly.  I went first to the io
module, rather than the open function documentation, and looked at
what io.TextIOWrapper should get as its first arg:
[...]
 The type of file object returned by the open() function depends on
 the mode. When open() is used to open a file in a text mode ('w',
 'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to
 open a file in a binary mode, the returned class varies: in read
 binary mode, it returns a BufferedReader; in write binary and append
 binary modes, it returns a BufferedWriter, and in read/write mode,
 it returns a BufferedRandom.

Aha! it is documented.  If you have some good ideas on how to make
this more obvious, I'm sure we'd be happy to fix the documentation.


Ah, yes. Hmm, so the same description appears in three places: the open()  
docstring, the docs for the builtin functions, and the docs for the io  
module. And all three are different :(
Perhaps open.__doc__ == documentation for io.open, and documentation for  
builtin.open should just tell the basic things and refer to io.open for  
details...


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Safe to call Py_Initialize() frequently?

2009-03-22 Thread Graham Dumpleton
On Mar 21, 10:27 am, Mark Hammond skippy.hamm...@gmail.com wrote:
 Calling
 Py_Initialize and Py_Finalize multiple times does leak (Python 3 has
 mechanisms so this need to always be true in the future, but it is true
 now for non-trivial apps.

Mark, can you please clarify this statement you are making. The
grammar used makes it a bit unclear.

Are you saying, that effectively by design, Python 3.0 will always
leak memory upon Py_Finalize() being called, or that it shouldn't leak
memory and that problems with older versions of Python have been fixed
up?

I know that some older versions of Python leaked memory on Py_Finalize
(), but if this is now guaranteed to always be the case and nothing
can be done about it, then the final death knell will have been rung
on mod_python and also embedded mode of mod_wsgi. This is because both
those systems rely on being able to call Py_Initialize()/Py_Finalize()
multiple times. At best they would have to change how they handle
initialisation of Python and defer it until sub processes have been
forked, but this will have some impact on performance and memory
usage.

So, more information appreciated.

Related link on mod_wsgi list about this at:

  
http://groups.google.com/group/modwsgi/browse_frm/thread/65305cfc798c088c?hl=en

Graham


--
http://mail.python.org/mailman/listinfo/python-list


Re: safely rename a method with a decorator

2009-03-22 Thread R. David Murray
Daniel Fetchinson fetchin...@googlemail.com wrote:
  I'd like to implement a decorator that would rename the method which
  it decorates. Since it's a tricky thing in general involving all sorts
  of __magic__ I thought I would ask around first before writing
  something buggy :)
 
  It should work something like this:
 
  class myclass( object ):
  @rename( 'hello' )
  def method( self ):
  print 'ok'
 
  # tests
 
  inst = myclass( )
  inst.method( )   # raises an AttributeError
  inst.hello( )   # prints 'ok'
  myclass.method   # raise an AttributeError
  myclass.hello   # prints <unbound method myclass.hello>
  assert 'method' in dir( myclass ) is False
  assert 'hello' in dir( myclass ) is True
 
  Any ideas?
 
  What is your use case? Why don't you just give the method the right name
  in the first place? :-)
 
 The use case is that I'm writing a turbogears application in which the
 URLs are determined by the method names. People might want to change
 these names if they want to change the URLs. One way would be to put
 the method names into a turbogears configuration file and the @rename
 decorator could fetch it from there.

Use a WSGI routing engine instead.

--
R. David Murray   http://www.bitdance.com

--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda forms and scoping

2009-03-22 Thread alex goretoy
I'm talking about in function c, where we bind the function call, kinda same
thing with lambdas too, exactly same

def func1(a):
    return a
def func2(a="", b=0):
    return "%s has %d apples" % (a, b)
def c(f1, f2, **kwargs):
    # bind call to function 1 and return from a bound function 2
    print f2(kwargs['name'], f1(kwargs['apple']))

bollox = 42
# call c, which is bound, passing in func1 and func2 unbound, function pointer
d = {bollox: (c, (func1, func2))}
print d[bollox][0](func1, func2, name="fred flinstone", apple=bollox)
fred flinstone has 42 apples

-Alex Goretoy
http://www.goretoy.com

Fred Allen  - The first time I sang in the church choir; two hundred people
changed their religion.
--
http://mail.python.org/mailman/listinfo/python-list


Re: safely rename a method with a decorator

2009-03-22 Thread andrew cooke

there was discussion related to this same problem earlier in the week.

http://groups.google.com/group/comp.lang.python/browse_thread/thread/ad08eb9eb83a4e61/d1906cbc26e16d15?q=Mangle+function+name+with+decorator%3F

andrew


Daniel Fetchinson wrote:
 I'd like to implement a decorator that would rename the method which
 it decorates. Since it's a tricky thing in general involving all sorts
 of __magic__ I thought I would ask around first before writing
 something buggy :)

 It should work something like this:

 class myclass( object ):
 @rename( 'hello' )
 def method( self ):
 print 'ok'

 # tests

 inst = myclass( )
 inst.method( )   # raises an AttributeError
 inst.hello( )   # prints 'ok'
 myclass.method   # raise an AttributeError
 myclass.hello   # prints <unbound method myclass.hello>
 assert 'method' in dir( myclass ) is False
 assert 'hello' in dir( myclass ) is True

 Any ideas?

 Cheers,
 Daniel

 --
 Psss, psss, put it down! - http://www.cafepress.com/putitdown
 --
 http://mail.python.org/mailman/listinfo/python-list




--
http://mail.python.org/mailman/listinfo/python-list


Re: Generator

2009-03-22 Thread John Posner

[snip]
  If you want next(g) to yield 3, you'd have to do something like:
  
  g = (x for x in s[:])
  
  where s[:] makes a copy of s that is then iterated over.
 

BTW, this simpler statement works, too:

   g = iter(s[:])
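
A quick demonstration of why the copy matters (illustrative values):

```python
s = [1, 2, 3]
g = (x for x in s)   # generator expression over the live list
h = iter(s[:])       # iterator over a snapshot copy

s.insert(0, 0)       # mutate the list after creating both

live = next(g)       # the genexp sees the mutation -> 0
snap = next(h)       # the snapshot copy is unaffected -> 1
```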

--
http://mail.python.org/mailman/listinfo/python-list


Re: Async serial communication/threads sharing data

2009-03-22 Thread Jean-Paul Calderone

On Sun, 22 Mar 2009 12:30:04 -0500, Nick Craig-Wood n...@craig-wood.com wrote:

[snip]

I wrote a serial port to TCP proxy (with logging) with twisted.  The
problem I had was that twisted serial ports didn't seem to have any
back pressure.  By that I mean I could pump data into a 9600 baud
serial port at 10 Mbit/s.  Twisted would then buffer the data for me
using 10s or 100s of Megabytes of RAM.  No data would be lost, but
there would be hours of latency and my program would use up all my RAM
and explode.

What I wanted to happen was for twisted to stop taking the data when
the serial port buffer was full and to only take the data at 9600
baud.

I never did solve that problem :-(



This is what Twisted's producers and consumers let you do.  There's a
document covering these features:

http://twistedmatrix.com/projects/core/documentation/howto/producers.html

In the case of a TCP to serial forwarder, you don't actually have to
implement either a producer or a consumer, since both the TCP connection
and the serial connection are already both producers and consumers.  All
you need to do is hook them up to each other so that when the send buffer
of one fills up, the other one gets paused, and when the buffer is empty
again, it gets resumed.
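
Outside Twisted, the pause/resume idea can be sketched in a few lines of plain Python (toy model with hypothetical names; real Twisted code would use registerProducer/pauseProducing/resumeProducing on the transports):

```python
class BoundedBuffer:
    # Toy model of flow control: the writer is "paused" whenever the
    # buffer reaches capacity, and "resumed" once it is drained.
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.paused = False

    def write(self, item):
        if self.paused:
            return False          # back pressure: producer must wait
        self.items.append(item)
        if len(self.items) >= self.capacity:
            self.paused = True    # buffer full: pause the producer
        return True

    def drain(self):
        self.items.clear()
        self.paused = False       # buffer empty: resume the producer

buf = BoundedBuffer(capacity=3)
accepted = [buf.write(n) for n in range(5)]  # last two writes refused
buf.drain()
resumed = buf.write(99)                      # accepted again
```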

Hope this helps!

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda forms and scoping

2009-03-22 Thread Gabriel Genellina
En Sun, 22 Mar 2009 20:43:02 -0300, alex goretoy  
aleksandr.gore...@gmail.com escribió:


Sorry to have confused yall. What I meant was that you can do something  
like
this, where the function isn't called until it is bound to () with the  
right

params


>>> def a():
...     print "inside a"
...
>>> def b():
...     print "inside b"
...
>>> def c(a,b):
...     a()
...     b()
...
>>> d={c:(a,b)}
>>> d[c][0]()
inside a
>>> d[c][1]()
inside b
>>> d[c(d[c][0],d[c][1])]
inside a
inside b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: None

where function a and b are bound in function c


Ah, so this is a terminology issue. I'd say that a and b are *called* in  
function c, not *bound*. I've never seen bind used in this sense before,  
but as Humpty Dumpty said to Alice:


- "When I use a word, it means just what I choose it to mean -- neither
more nor less."
- "The question is, whether you can make words mean so many different
things."

- "The question is, which is to be master -- that's all."

(Lewis Carroll, Through the Looking Glass, ch. VI)

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda forms and scoping

2009-03-22 Thread alex goretoy

 Ah, so this is a terminology issue. I'd say that a and b are *called* in
 function c, not *bound*. I've never seen bind used in this sense before,
 but as Humpty Dumpty said to Alice:


i use the word expressively

-Alex Goretoy
http://www.goretoy.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda forms and scoping

2009-03-22 Thread Gabriel Genellina
En Sun, 22 Mar 2009 16:42:21 -0300, R. David Murray  
rdmur...@bitdance.com escribió:

Gabriel Genellina gagsl-...@yahoo.com.ar wrote:



And if you imply that *where* you call a function does matter, it does
not. A function carries its own local namespace, its own closure, and  
its

global namespace. At call time, no additional binding is done (except
parameters - arguments).


I poked around in the API docs and experimented with func_closure and
related attributes, and after bending my brain for a while I think I
understand this.  The actual implementation of the closure is a single
list of 'cell' objects which represent namespace slots in the nested
scopes in which the closed-over function is defined.

But the fact that it is a single list is an implementation detail, and
the implementation is in fact carefully designed so that conceptually
we can think of the closure as giving the function access to those
nested-scope namespaces in almost(*) the same sense that it has a
reference to the global and local namespaces.  That is, if what a name
in _any_ of those namespaces points to is changed, then the closed-over
function sees those changes.

In this way, we understand the original example:  when defining a
lambda having a 'free variable' (that is, one not defined in either the
local or global scope) that was a name in the surrounding function's
local namespace, the lambda is going to see any changes made by the
surrounding function with regards to what that name points to.  Thus,
the final value that the lambda uses is whatever the final value of the
for loop variable was when the surrounding function finished executing.


Exactly.


However, I think that a Python closure is not quite the same thing as a
'computer science' closure, for the same reason that people coming from a
language with variables-and-values as opposed to namespaces get confused
when dealing with Python function call semantics.  Consider:

http://en.wikipedia.org/wiki/Closure_(computer_science)

That says that a closure can be used to provide a function with a private
set of variables that persist from one invocation to the next, so that
a value established in one call can be accessed in the next.  The last
part of that sentence is not true in Python, since any assignment inside
a function affects only the local (per-invocation) namespace or (given
a global statement) the global namespace.  A function cannot change the
thing pointed to by a name in the closure.  Only the outer function,
for whom that name is in its local namespace, can do that.


That's true in Python 2.x, but 3.x has the nonlocal keyword - so you can  
modify variables in outer scopes too:


p3> z = 1
p3> def o():
...   z = 2
...   def i():
...     nonlocal z
...     print("z in i:", z)
...     z = 5
...   print("z in o:", z)
...   i()
...   print("z in o:", z)
...   z=3
...   print("z in o at exit:", z)
...   return i
...
p3> i=o()
z in o: 2
z in i: 2
z in o: 5
z in o at exit: 3
p3> z
1
p3> i()
z in i: 3
p3> i()
z in i: 5

(Anyway I think the inability to modify a variable doesn't invalidate  
the closure concept...)
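
For completeness, the usual Python 2 idiom when nonlocal isn't available is to close over a mutable object instead of rebinding the name (a sketch; it also runs on 3.x):

```python
def make_counter():
    # no 'nonlocal': mutate a list the closure captures,
    # rather than rebinding a name in the enclosing scope
    count = [0]
    def inc():
        count[0] += 1
        return count[0]
    return inc

c = make_counter()
first, second = c(), c()   # state persists across calls: 1, then 2
```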


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


splitting a large dictionary into smaller ones

2009-03-22 Thread per
hi all,

i have a very large dictionary object that is built from a text file
that is about 800 MB -- it contains several million keys.  ideally i
would like to pickle this object so that i wouldnt have to parse this
large file to compute the dictionary every time i run my program.
however currently the pickled file is over 300 MB and takes a very
long time to write to disk - even longer than recomputing the
dictionary from scratch.

i would like to split the dictionary into smaller ones, containing
only hundreds of thousands of keys, and then try to pickle them. is
there a way to easily do this? i.e. is there an easy way to make a
wrapper for this such that i can access this dictionary as just one
object, but underneath it's split into several? so that i can write
my_dict[k] and get a value, or set my_dict[m] to some value without
knowing which sub dictionary it's in.

if there aren't known ways to do this, i would greatly apprciate any
advice/examples on how to write this data structure from scratch,
reusing as much of the dict() class as possible.

thanks.

large_dict[a]
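
A minimal sketch of the wrapper described above: one dict-like front end over several smaller dicts chosen by hashing the key, so each shard can later be pickled separately (all names hypothetical):

```python
class ShardedDict:
    """Dict-like wrapper that spreads keys across several sub-dicts."""

    def __init__(self, nshards=16):
        self.shards = [dict() for _ in range(nshards)]

    def _shard(self, key):
        # pick the sub-dict that owns this key
        return self.shards[hash(key) % len(self.shards)]

    def __getitem__(self, key):
        return self._shard(key)[key]

    def __setitem__(self, key, value):
        self._shard(key)[key] = value

    def __contains__(self, key):
        return key in self._shard(key)

    def __len__(self):
        return sum(len(s) for s in self.shards)

d = ShardedDict(nshards=4)
d['spam'] = 1
d['eggs'] = 2
```

Each entry of d.shards could then be pickled to its own file, though as the replies note, a real database avoids re-parsing and re-pickling entirely.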
--
http://mail.python.org/mailman/listinfo/python-list


Re: splitting a large dictionary into smaller ones

2009-03-22 Thread Paul Rubin
per perfr...@gmail.com writes:
 i would like to split the dictionary into smaller ones, containing
 only hundreds of thousands of keys, and then try to pickle them. 

That already sounds like the wrong approach.  You want a database.
--
http://mail.python.org/mailman/listinfo/python-list


Re: splitting a large dictionary into smaller ones

2009-03-22 Thread odeits
On Mar 22, 7:32 pm, per perfr...@gmail.com wrote:
 hi all,

 i have a very large dictionary object that is built from a text file
 that is about 800 MB -- it contains several million keys.  ideally i
 would like to pickle this object so that i wouldnt have to parse this
 large file to compute the dictionary every time i run my program.
 however currently the pickled file is over 300 MB and takes a very
 long time to write to disk - even longer than recomputing the
 dictionary from scratch.

 i would like to split the dictionary into smaller ones, containing
 only hundreds of thousands of keys, and then try to pickle them. is
 there a way to easily do this? i.e. is there an easy way to make a
 wrapper for this such that i can access this dictionary as just one
 object, but underneath it's split into several? so that i can write
 my_dict[k] and get a value, or set my_dict[m] to some value without
 knowing which sub dictionary it's in.

 if there aren't known ways to do this, i would greatly apprciate any
 advice/examples on how to write this data structure from scratch,
 reusing as much of the dict() class as possible.

 thanks.

 large_dict[a]

So that I understand the goal, the reason you wish to split the
dictionary into smaller ones is so that you dont have to write all of
the dictionaries to disk if they haven't changed? Or are you trying to
speed up the initial load time?

If you are trying to speed up the initial load time I don't think this
will help. If the save time is what you are after maybe you want to
check out memory mapped files.

--
http://mail.python.org/mailman/listinfo/python-list


Re: splitting a large dictionary into smaller ones

2009-03-22 Thread per
On Mar 22, 10:51 pm, Paul Rubin http://phr...@nospam.invalid wrote:
 per perfr...@gmail.com writes:
  i would like to split the dictionary into smaller ones, containing
  only hundreds of thousands of keys, and then try to pickle them.

 That already sounds like the wrong approach.  You want a database.

fair enough - what native python database would you recommend? i
prefer not to install anything commercial or anything other than
python modules
--
http://mail.python.org/mailman/listinfo/python-list


Re: splitting a large dictionary into smaller ones

2009-03-22 Thread Armin
On Monday 23 March 2009 00:01:40 per wrote:
 On Mar 22, 10:51 pm, Paul Rubin http://phr...@nospam.invalid wrote:
  per perfr...@gmail.com writes:
   i would like to split the dictionary into smaller ones, containing
   only hundreds of thousands of keys, and then try to pickle them.
 
  That already sounds like the wrong approach.  You want a database.

 fair enough - what native python database would you recommend? i
 prefer not to install anything commercial or anything other than
 python modules
 --
 http://mail.python.org/mailman/listinfo/python-list

sqlite3 module -- read more about it in the Python documentation.
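
A quick sketch of that suggestion using only the stdlib sqlite3 module (in-memory here; pass a filename instead for a persistent on-disk store):

```python
import sqlite3

# Build a small disk-backed key/value store instead of one huge dict.
conn = sqlite3.connect(':memory:')  # use a filename for persistence
conn.execute('CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)')
conn.executemany('INSERT INTO kv VALUES (?, ?)',
                 [('spam', '1'), ('eggs', '2')])
conn.commit()

# lookups hit the primary-key index instead of loading everything
row = conn.execute('SELECT v FROM kv WHERE k = ?', ('spam',)).fetchone()
```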

-- 
Armin Moradi
--
http://mail.python.org/mailman/listinfo/python-list


Re: splitting a large dictionary into smaller ones

2009-03-22 Thread Paul Rubin
per perfr...@gmail.com writes:
 fair enough - what native python database would you recommend? i
 prefer not to install anything commercial or anything other than
 python modules

I think sqlite is the preferred one these days.
--
http://mail.python.org/mailman/listinfo/python-list


  1   2   >