ANN: wxPython 2.8.10.1

2009-05-17 Thread Robin Dunn

Announcing
----------

The 2.8.10.1 release of wxPython is now available for download at
http://wxpython.org/download.php.  This release fixes the problem with 
using Python 2.6's default manifest, and updates wxcairo to work with 
the latest PyCairo.  A summary of changes is listed below and also at
http://wxpython.org/recentchanges.php.

Source code is available as a tarball and a source RPM, as well as
binaries for Python 2.4, 2.5 and 2.6 for Windows and Mac, and some
packages for various Linux distributions.



What is wxPython?
-----------------

wxPython is a GUI toolkit for the Python programming language. It
allows Python programmers to create programs with a robust, highly
functional graphical user interface, simply and easily. It is
implemented as a Python extension module that wraps the GUI components
of the popular wxWidgets cross platform library, which is written in
C++.

wxPython is a cross-platform toolkit. This means that the same program
will usually run on multiple platforms without modifications.
Currently supported platforms are 32-bit and 64-bit Microsoft Windows,
most Linux or other Unix-like systems using GTK2, and Mac OS X 10.4+.
In most cases the native widgets are used on each platform to provide
a 100% native look and feel for the application.



Changes in 2.8.10.1
-------------------

wx.grid.Grid:  Added methods CalcRowLabelsExposed,
CalcColLabelsExposed, CalcCellsExposed, DrawRowLabels, DrawRowLabel,
DrawColLabels, and DrawColLabel to the Grid class.

Added the wx.lib.mixins.gridlabelrenderer module.  It enables the use
of label renderers for Grids that work like the cell renderers do.  See
the demo for a simple sample.

Solved the manifest problem with Python 2.6 on Windows.  wxPython now
programmatically creates its own activation context and loads a
manifest in that context that specifies the use of the themed common
controls on Windows XP and beyond.  This also means that the external
manifest files are no longer needed for the other versions of Python.

wx.Colour: Updated the wx.Colour typemaps and also the wx.NamedColour
constructor to optionally allow an alpha value to be passed in the
color string, using these syntaxes:  #RRGGBBAA or ColourName:AA

wx.lib.wxcairo:  Fixed a problem resulting from PyCairo changing the
layout of their C API structure in a non-binary compatible way.  The
new wx.lib.wxcairo is known to now work with PyCairo 1.6.4 and 1.8.4,
and new binaries for Windows are available online at
http://wxpython.org/cairo/

--
Robin Dunn
Software Craftsman
http://wxPython.org

--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations.html


Re: Swapping superclass from a module

2009-05-17 Thread Terry Reedy

Steven D'Aprano wrote:

On Sat, 16 May 2009 09:55:39 -0700, Emanuele D'Arrigo wrote:


Hi everybody,

let's assume I have a module with loads of classes inheriting from one
class, from the same module, i.e.:

[...]

Now, let's also assume that myFile.py cannot be changed or it's
impractical to do so. Is there a way to replace the SuperClass at
runtime, so that when I instantiate one of the subclasses NewSuperClass
is used instead of the original SuperClass provided by the first module
module?


That's called monkey patching or duck punching.

http://en.wikipedia.org/wiki/Monkey_patch

http://wiki.zope.org/zope2/MonkeyPatch

http://everything2.com/title/monkey%2520patch


If the names of superclasses are resolved when classes are instantiated, 
the patching is easy.  If, as I would suspect, the names are resolved 
when the classes are created, before the module becomes available to the 
importing code, then much more careful and extensive patching would be 
required, if it is even possible.  (Objects in tuples cannot be 
replaced, and some attributes are not writable.)
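A quick illustration of the distinction Terry describes (hypothetical class names): the base is looked up once, when the class statement runs, so rebinding the module-level name afterwards changes nothing, while assigning to `__bases__` does take effect:

```python
# The base class is captured at class-creation time.
class OldBase(object):
    def who(self):
        return "old"

class Child(OldBase):
    pass

class NewBase(object):
    def who(self):
        return "new"

# Rebinding the name the class statement used has no effect on Child:
OldBase = NewBase
assert Child().who() == "old"

# Patching __bases__ directly does take effect:
Child.__bases__ = (NewBase,)
assert Child().who() == "new"
```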


tjr

--
http://mail.python.org/mailman/listinfo/python-list


Re: Swapping superclass from a module

2009-05-17 Thread Michele Simionato
Try this:

class Base(object):
    pass

class C(Base):
    pass

class NewBase(object):
    pass

C.__bases__ = (NewBase,)

help(C)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: using urlretrive/urlopen

2009-05-17 Thread rustom
On May 16, 6:30 am, Gabriel Genellina gagsl-...@yahoo.com.ar
wrote:
 En Fri, 15 May 2009 12:03:09 -0300, Rustom Mody rustompm...@gmail.com  
 escribió:

  I am trying to talk to a server that runs on localhost
  The server runs on http://localhost:7000/ and that opens alright in a
  web browser.

  However if I use urlopen or urlretrieve what I get is this 'file' --
  obviously not the one that the browser gets:

  <html><body bgcolor=#ff>
  Query 'http://localhost:7000/' not implemented
  </body></html>
  Any tips/clues?

 Please post the code you're using to access the server.
 Do you have any proxy set up?

 --
 Gabriel Genellina

Thanks Gabriel!

urlopen("http://localhost:7000", proxies={})
seems to be what I needed.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Swapping superclass from a module

2009-05-17 Thread Peter Otten
Terry Reedy wrote:

 Steven D'Aprano wrote:
 On Sat, 16 May 2009 09:55:39 -0700, Emanuele D'Arrigo wrote:
 
 Hi everybody,

 let's assume I have a module with loads of classes inheriting from one
 class, from the same module, i.e.:
 [...]
 Now, let's also assume that myFile.py cannot be changed or it's
 impractical to do so. Is there a way to replace the SuperClass at
 runtime, so that when I instantiate one of the subclasses NewSuperClass
 is used instead of the original SuperClass provided by the first module
 module?
 
 That's called monkey patching or duck punching.
 
 http://en.wikipedia.org/wiki/Monkey_patch
 
 http://wiki.zope.org/zope2/MonkeyPatch
 
 http://everything2.com/title/monkey%2520patch
 
 If the names of superclasses are resolved when classes are instantiated,
 the patching is easy.  If, as I would suspect, the names are resolved
 when the classes are created, before the module becomes available to the
 importing code, then much more careful and extensive patching would be
 required, if it is even possible.  (Objects in tuples cannot be
 replaced, and some attributes are not writable.)

It may be sufficient to patch the subclasses:

$ cat my_file.py
class Super(object):
    def __str__(self):
        return "old"

class Sub(Super):
    def __str__(self):
        return "Sub(%s)" % super(Sub, self).__str__()

class Other(object):
    pass

class SubSub(Sub, Other):
    def __str__(self):
        return "SubSub(%s)" % super(SubSub, self).__str__()

if __name__ == "__main__":
    print Sub()

$ cat main2.py
import my_file
OldSuper = my_file.Super

class NewSuper(OldSuper):
    def __str__(self):
        return "new" + super(NewSuper, self).__str__()

my_file.Super = NewSuper
for n, v in vars(my_file).iteritems():
    if v is not NewSuper:
        try:
            bases = v.__bases__
        except AttributeError:
            pass
        else:
            if OldSuper in bases:
                print "patching", n
                v.__bases__ = tuple(NewSuper if b is OldSuper else b
                                    for b in bases)

print my_file.Sub()
print my_file.SubSub()
$ python main2.py
patching Sub
Sub(newold)
SubSub(Sub(newold))

Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get Exif data from a jpeg file

2009-05-17 Thread Arnaud Delobelle
Daniel Fetchinson fetchin...@googlemail.com writes:

 I need to get the creation date from a jpeg file in Python.  Googling
 brought up a several references to apparently defunct modules.  The best
 way I have been able to find so far is something like this:

 from PIL import Image
 img = Image.open('img.jpg')
 exif_data = img._getexif()
 creation_date = exif_data[36867]

 Where 36867 is the exif tag for the creation date data (which I found by
 looking at PIL.ExifTags.TAGS).  But this doesn't even seem to be
 documented in the PIL docs.  Is there a more natural way to do this?


 Have you tried http://sourceforge.net/projects/exif-py/ ?

 HTH,
 Daniel

I will have a look - thank you.

-- 
Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: KeyboardInterrupt catch does not shut down the socketserver

2009-05-17 Thread Igor Katson

Gabriel Genellina wrote:
En Sat, 16 May 2009 04:04:03 -0300, Igor Katson descent...@gmail.com 
escribió:

Gabriel Genellina wrote:

En Fri, 15 May 2009 09:04:05 -0300, Igor Katson escribió:

Lawrence D'Oliveiro wrote:
In message mailman.185.1242375959.8015.python-l...@python.org, 
Igor Katson wrote:

Lawrence D'Oliveiro wrote:
In message mailman.183.1242371089.8015.python-l...@python.org, 
Igor Katson wrote:



I have problems in getting a SocketServer to shutdown.

Shutdown implies closing the listening socket, doesn't it?


No (perhaps it should, but that is another issue). There is a
documentation bug; BaseServer.shutdown is documented as "Tells the
serve_forever() loop to stop and waits until it does." [1]
The docstring is much more explicit: "Stops the serve_forever loop.
Blocks until the loop has finished. This must be called while
serve_forever() is running in another thread, or it will deadlock."

So, if you have a single-threaded server, *don't* use shutdown(). 
And, to orderly close the listening socket, use server_close() 
instead. Your


Hmm. Gabriel, could you please show the same for the threaded 
version? This one deadlocks:

[code removed]


The shutdown method should *only* be called while serve_forever is 
running. If called after serve_forever exited, shutdown() blocks 
forever.


[code removed]
But, what are you after, exactly? I think I'd use the above code only 
in a GUI application with a background server.

There are other alternatives, like asyncore or Twisted.
For now, I am just using server.server_close() and it works. The server 
itself is an external transaction manager for PostgreSQL, when a client 
connects to it, serialized data interchange between the server and the 
client starts, e.g. first the client sends data, then the server sends 
data, then again the client, then the server and so on.
I haven't used asyncore or Twisted yet, and didn't know about their 
possible usage while writing the project. I'll research in that direction.
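Gabriel's point can be condensed into a small sketch (module names per Python 3's `socketserver`; in 2.x the module is `SocketServer`). The key is that shutdown() runs from a different thread than serve_forever(), and server_close() is what actually releases the listening socket:

```python
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # echo back whatever the client sent
        self.request.sendall(self.request.recv(1024))

server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)  # port 0: pick any free port
worker = threading.Thread(target=server.serve_forever)
worker.start()
# ... clients may connect here ...
server.shutdown()       # safe only because serve_forever runs in another thread
worker.join()
server.server_close()   # this is what actually closes the listening socket
```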


--
http://mail.python.org/mailman/listinfo/python-list


Re: Concurrency Email List

2009-05-17 Thread David M. Besonen
On 5/16/2009 5:26 PM, Aahz wrote:

 On Sat, May 16, 2009, Pete wrote:

 python-concurre...@googlegroups.com is a new email list
 for discussion of concurrency issues in python.

 Is there some reason you chose not to create a list on
 python.org?  I'm not joining the list because Google
 requires that you create a login.

i too would join if it was hosted at python.org, and will not
if it's hosted at google for the same reason.


  -- david

-- 
http://mail.python.org/mailman/listinfo/python-list



Re: Concurrency Email List

2009-05-17 Thread James Matthews
I second this. Google groups are annoying! Just request that it be added to
python.org

James

On Sun, May 17, 2009 at 12:22 PM, David M. Besonen dav...@panix.com wrote:

 On 5/16/2009 5:26 PM, Aahz wrote:

  On Sat, May 16, 2009, Pete wrote:
 
  python-concurre...@googlegroups.com is a new email list
  for discussion of concurrency issues in python.
 
  Is there some reason you chose not to create a list on
  python.org?  I'm not joining the list because Google
  requires that you create a login.

 i too would join if it was hosted at python.org, and will not
 if it's hosted at google for the same reason.


  -- david

 --
 http://mail.python.org/mailman/listinfo/python-list




-- 

http://www.goldwatches.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Your Favorite Python Book

2009-05-17 Thread James Matthews
For me it's any book on Django, Core Python 2nd Edition (which I will buy if
updated) and Python Power.



On Fri, May 15, 2009 at 7:05 PM, Lou Pecora pec...@anvil.nrl.navy.milwrote:

 In article
 e1db4ac7-4997-401b-9a1f-112787a9e...@r3g2000vbp.googlegroups.com,
  Mike Driscoll kyoso...@gmail.com wrote:

  On May 11, 4:45 pm, Chris Rebert c...@rebertia.com wrote:

  
   I like Python in a Nutshell as a reference book, although it's now
   slightly outdated given Python 3.0's release (the book is circa 2.5).
  
   Cheers,
   Chris

 Python in a Nutshell -- Absolutely!  Covers a lot in an easily
 accessible way.  The first book I reach for.  I hope Martelli updates it
 to 3.0.

 --
 -- Lou Pecora

 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
http://www.goldwatches.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generic web parser

2009-05-17 Thread James Matthews
I don't see the issue of using urllib and SQLite for everything you mention
here.

On Sat, May 16, 2009 at 4:18 PM, S.Selvam s.selvams...@gmail.com wrote:

 Hi all,

 I have to design web parser which will visit the given list of websites and
 need to fetch a particular set of details.
 It has to be so generic that even if we add new websites, it must fetch
 those details if available anywhere.
 So it must be something like a framework.

 Though I have done some parsers, they will only parse for a given
 format (e.g. getting the data from the title tag). But here each website
 may have a different format and the information may be available within any tags.

 I know its a tough task for me,but i feel with python it should be
 possible.
 My request is, if such thing is already available please let me know ,also
 your suggestions are welcome.

 Note: I planned to use BeautifulSoup for parsing.

 --
 Yours,
 S.Selvam

 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
http://www.goldwatches.com
-- 
http://mail.python.org/mailman/listinfo/python-list


os.path.split gets confused with combined \\ and /

2009-05-17 Thread Stef Mientki

hello,

just wonder how others solve this problem:
I've to distribute both python files and data files.
Everything is developed under Windows and now the data files contain 
paths with mixed \\ and /.

Under Windows everything is working well,
but under Ubuntu / Fedora sometimes strange errors occur.
Now I was thinking that using os.path.split would solve all problems,
but if I've the following relative path

   path1/path2\\filename.dat

split will deliver the following under windows
  path = path1 / path2
  filename = filename.dat

while under Linux it will give me
  path = path1
  filename = path\\filename.dat

So I'm now planning to replace all occurrences of os.path.split with a 
call to the following function


def path_split(filename):
    # under Ubuntu a filename with both
    # forward and backward slashes seems to give trouble
    # already in os.path.split
    filename = filename.replace('\\', '/')

    return os.path.split(filename)

how do others solve this problem ?
Are there better ways to solve this problem ?

thanks,
Stef Mientki
--
http://mail.python.org/mailman/listinfo/python-list


Re: os.path.split gets confused with combined \\ and /

2009-05-17 Thread Chris Rebert
On Sun, May 17, 2009 at 3:11 AM, Stef Mientki stef.mien...@gmail.com wrote:
 hello,

 just wonder how others solve this problem:
 I've to distribute both python files and data files.
 Everything is developed under Windows and now the data files contain paths
 with mixed \\ and /.
 Under Windows everything is working well,
 but under Ubuntu / Fedora sometimes strange errors occur.
 Now I was thinking that using os.path.split would solve all problems,
 but if I've the following relative path

   path1/path2\\filename.dat

 split will deliver the following under windows
  path = path1 / path2
  filename = filename.dat

 while under Linux it will give me
  path = path1
  filename = path\\filename.dat

 So I'm now planning to replace all occurrences of os.path.split with a call
 to the following function

 def path_split(filename):
     # under Ubuntu a filename with both
     # forward and backward slashes seems to give trouble
     # already in os.path.split
     filename = filename.replace('\\', '/')

     return os.path.split(filename)

 how do others solve this problem ?
 Are there better ways to solve this problem ?

Just always use forward-slashes for paths in the first place since
they work on both platforms.
But your technique seems a reasonable way of dealing with mixed-up
datafiles, since you're in that situation.
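For illustration, the stdlib exposes both platform flavours directly (`ntpath` and `posixpath`), so the behaviour Stef describes — and the normalizing workaround — can be reproduced on any platform:

```python
import ntpath
import posixpath

mixed = "path1/path2\\filename.dat"

# Windows rules (ntpath) treat both separators as valid:
assert ntpath.split(mixed) == ("path1/path2", "filename.dat")

# Linux rules (posixpath) only split on '/':
assert posixpath.split(mixed) == ("path1", "path2\\filename.dat")

# Normalizing backslashes first gives the same result everywhere:
def path_split(filename):
    return posixpath.split(filename.replace("\\", "/"))

assert path_split(mixed) == ("path1/path2", "filename.dat")
```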

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


python and samba

2009-05-17 Thread Fatih Tumen
Hi,
I am working on a directory synchronisation tool for Linux.

$ gcc --version
gcc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu3)
$ python -V
Python 2.5.2
$ smbclient --version
Version 3.0.28a

I first designed it to work on the local filesystem. I am using filecmp.py
(distributed with Python) for comparing the files. Of course I had to
modify it in a way that it will return object rather than stdout. I have
this compare method in my main class which is os.walk()-ing through
the directories and labeling the files on the GUI according to this returned

object by filecmp.py.
Now I want add support for syncing between (unix and NT) network
shares without having to change my abstraction much. At first googling
I hit a samba discussion that appeared on this list ages ago. It did not help me
much though. I also found this pysmbc, libsmbclient bindings, written by
Tim Waugh, but it requires libsmbclient-3.2.x, which requires
libc6 >= 2.8~20080505, which comes with Intrepid, but I have reasons
not to upgrade my Hardy.
Another thing I found was pysamba, by Juan M. Casillas,  but it needs
a bit of hack; samba needs to be configured with python.
I am looking for something that won't bother the end user nor me. I want
to keep it as simple as possible. I will distribute the libraries myself
if I have to and if the licence is not an issue.

If I have to summarise, I need to get m_time and size (and preferably
stick with os.walk and filecmp.py if possible) and copy files back and
forth using samba (unix - NT).  I don't wanna reinvent the wheel or
overengineer anything but if I can do this without any additional
libraries, great!

I would appreciate any advice on this.
Thanks..
-- 
   Fatih
-- 
http://mail.python.org/mailman/listinfo/python-list


filecmp.py licensing

2009-05-17 Thread Fatih Tumen
Hi,
As I mentioned on the other thread about samba, I am working on a
synchronisation project and using filecmp.py for comparing files. I
modified it according to my needs and planning to distribute it with my
package. At first glance it seems that filecmp.py is a part of Python
package. Though I don't see a licence header on the file I assume that
it is licensed under PSFL. I will distribute my project with GNU GPL or
Creative Commons BY-NC-SA. My question is if I renamed it and
put the Python attribution on the header, would it be alright?

What is the proper way of doing this?
-- 
   Fatih
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Swapping superclass from a module

2009-05-17 Thread Emanuele D'Arrigo
Wow, thank you all. Lots of ideas and things to try! I wish I knew
which one is going to work best. The module I'm trying to (monkey!)
patch is pxdom, and as it is a bit long (5700 lines of code in one
file!) I'm not quite sure if the simplest patching method will work or
the more complicated ones are necessary. Let me chew on it for a
little while. Luckily pxdom has a test suite and I should be able to
find out quite quickly if a patch has broken something. Let me try and
chew on it, I'll report back.

Thank you all!

Manu
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Circular relationship: object - type

2009-05-17 Thread Chris Rebert
On Thu, May 14, 2009 at 3:34 PM, Mohan Parthasarathy surut...@gmail.com wrote:
 Hi,

 I have read several articles and emails:

 http://www.cafepy.com/article/python_types_and_objects/python_types_and_objects.html#relationships-transitivity-figure
 http://mail.python.org/pipermail/python-list/2007-February/600128.html

  I understand how type serves to be the default metaclass when an object is
 created and it also can be changed. I also read a few examples on why this
 metaclass can be a powerful concept. What I fail to understand is the
 circular relationship between object and type. Why does type have to be
 subclassed from object ? Just to make "Everything is an object" and all
 objects are inherited from the object class?

Yes, essentially. It makes the system nice and axiomatic, so one
doesn't have to deal with special-cases when writing introspective
code.

Axiom 1. All classes ultimately subclass the class `object`.
Equivalently, `issubclass(X, object) and X.__mro__[-1] is object` is
true for any class `X`, and `isinstance(Y, object)` is true for all
objects `Y`.

Axiom 2. All (meta)classes are ultimately instances of the (meta)class `type`.
Equivalently, repeated application of type() to any object will
eventually result in `type`.

Any other formulation besides Python's current one would break these
handy axioms. The canonical object-oriented language, Smalltalk, had a
nearly identical setup with regard to its meta-objects.
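Both axioms, and the circularity itself, can be checked directly at the interpreter:

```python
class X(object):
    pass

# Axiom 1: every class subclasses object, and object ends every MRO.
assert issubclass(X, object) and X.__mro__[-1] is object
assert isinstance(X(), object)

# Axiom 2: repeated application of type() reaches the fixed point `type`.
assert type(X) is type and type(type) is type

# The circular relationship: type is an object, and object is a type.
assert isinstance(type, object)
assert isinstance(object, type)
```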

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Conceptual flaw in pxdom?

2009-05-17 Thread Emanuele D'Arrigo
Hi everybody,

I'm looking at pxdom and in particular at its foundation class
DOMObject (source code at the end of the message). In it, the author
attempts to allow the establishment of readonly and readwrite
attributes through the special methods __getattr__ and __setattr__. In
so doing is possible to create subclasses such as:

class MyClass(DOMObject):

    def __init__(self):
        DOMObject.__init__(self)
        self._anAttribute = "im_a_readonly_attribute"

    ## The presence of the following method allows
    ## read-only access to the attribute without the
    ## underscore, i.e.: aVar = myClassInstance.anAttribute
    def _get_anAttribute(self): return self._anAttribute

    ## Uncommenting the following line allows the setting of anAttribute.
    ## Commented, the same action would raise an exception.
    ## def _set_anAttribute(self, value): self._anAttribute = value

This is all good and dandy and it works, mostly. However, if you look
at the code below for the method __getattr__, it appears to be
attempting to prevent direct access to -any- variable starting with an
underscore.

def __getattr__(self, key):
    if key[:1]=='_':
        raise AttributeError, key

But access isn't actually prevented because __getattr__ is invoked
-only- if an attribute is not found by normal means. So, is it just me
or does that little snippet of code either have another purpose or
simply not do the intended job?
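The lookup order is easy to demonstrate with a stripped-down class (names here are invented for the illustration): `__getattr__` fires only after normal lookup fails, so it cannot hide an attribute that really exists in the instance dict:

```python
class Demo(object):
    def __init__(self):
        self.__dict__["_x"] = 42   # stored directly in the instance dict

    def __getattr__(self, key):
        # Called ONLY when normal lookup fails, so it cannot block
        # access to attributes that actually exist.
        if key[:1] == '_':
            raise AttributeError(key)
        return "computed"

d = Demo()
assert d._x == 42                 # found normally: __getattr__ never runs
assert d.anything == "computed"   # lookup failed: __getattr__ runs
try:
    d._missing                    # fails lookup, then __getattr__ raises
except AttributeError:
    pass
```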

Manu

-
class DOMObject:
    """Base class for objects implementing DOM interfaces

    Provide properties in a way compatible with old versions of Python:
    subclass should provide method _get_propertyName to make a read-only
    property, and also _set_propertyName for a writable. If the readonly
    property is set, all other properties become immutable.
    """

    def __init__(self, readonly= False):
        self._readonly= readonly

    def _get_readonly(self):
        return self._readonly

    def _set_readonly(self, value):
        self._readonly= value

    def __getattr__(self, key):
        if key[:1]=='_':
            raise AttributeError, key
        try:
            getter= getattr(self, '_get_'+key)
        except AttributeError:
            raise AttributeError, key
        return getter()

    def __setattr__(self, key, value):
        if key[:1]=='_':
            self.__dict__[key]= value
            return

        # When an object is readonly, there are a few attributes that can
        # be set regardless. Readonly is one (obviously), but due to a wart
        # in the DOM spec it must also be possible to set nodeValue and
        # textContent to anything on nodes where these properties are
        # defined to be null (with no effect). Check specifically for these
        # property names as a nasty hack to conform exactly to the spec.
        #
        if self._readonly and key not in ('readonly', 'nodeValue', 'textContent'):
            raise NoModificationAllowedErr(self, key)
        try:
            setter= getattr(self, '_set_'+key)
        except AttributeError:
            if hasattr(self, '_get_'+key):
                raise NoModificationAllowedErr(self, key)
            raise AttributeError, key
        setter(value)
-- 
http://mail.python.org/mailman/listinfo/python-list


Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
Any attempt to do anything with Tkinter (save import) raises the
following show-stopping error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk/Tkinter.py", line 1645, in __init__
    self._loadtk()
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk/Tkinter.py", line 1659, in _loadtk
    % (_tkinter.TK_VERSION, tk_version)
RuntimeError: tk.h version (8.4) doesn't match libtk.a version (8.5)

As you can see, I'm running the vanilla install python on OS X 10.5.7.
Does anyone know how I can fix this? Google searches have yielded
results ranging from suggestions it has been fixed (not for me) to
recommendations that the user rebuild python against a newer version
of libtk (which I have no idea how to do).

I would greatly appreciate any assistance the community can provide on
the matter.

Best,
Edward
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best practice for operations on streams of text

2009-05-17 Thread Beni Cherniavsky
On May 8, 12:07 am, MRAB goo...@mrabarnett.plus.com wrote:
 def compound_filter(token_stream):
      stream = lowercase_token(token_stream)
      stream = remove_boring(stream)
      stream = remove_dupes(stream)
      for t in stream:
          yield t

The last loop is superfluous.  You can just do::

def compound_filter(token_stream):
 stream = lowercase_token(token_stream)
 stream = remove_boring(stream)
 stream = remove_dupes(stream)
 return stream

which is simpler and slightly more efficient.  This works because from
the caller's perspective, a generator is just a function that returns
an iterator.  It doesn't matter whether it implements the iterator
itself by containing ``yield`` statements, or shamelessly passes on an
iterator implemented elsewhere.
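Putting Beni's point into a runnable form (the three filter implementations here are assumptions, invented only so the pipeline is self-contained):

```python
# Assumed implementations of the filters named in the thread:
def lowercase_token(stream):
    for t in stream:
        yield t.lower()

def remove_boring(stream, boring=("the", "a", "an")):
    for t in stream:
        if t not in boring:
            yield t

def remove_dupes(stream):
    seen = set()
    for t in stream:
        if t not in seen:
            seen.add(t)
            yield t

def compound_filter(token_stream):
    stream = lowercase_token(token_stream)
    stream = remove_boring(stream)
    stream = remove_dupes(stream)
    return stream  # just pass the inner iterator through -- no yield loop needed

tokens = ["The", "Cat", "the", "cat", "sat"]
assert list(compound_filter(tokens)) == ["cat", "sat"]
```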
-- 
http://mail.python.org/mailman/listinfo/python-list


Fwd: [python-win32] Fwd: Autosizing column widths in Excel using win32com.client ?

2009-05-17 Thread James Matthews
-- Forwarded message --
From: Tim Golden m...@timgolden.me.uk
Date: Sun, May 17, 2009 at 1:00 PM
Subject: Re: [python-win32] Fwd: Autosizing column widths in Excel using
win32com.client ?
To:
Cc: Python-Win32 List python-wi...@python.org


James Matthews wrote:

 -- Forwarded message --
 From: nonse...@mynonsense.net
 Date: Fri, May 15, 2009 at 7:45 PM
 Subject: Autosizing column widths in Excel using win32com.client ?
 To: python-list@python.org


 Is there a way to autosize the widths of the excel columns as when you
 double click them manually?


Usual answer to this kind of question: record a macro
in Excel to do what you want, and then use COM to
automate that. On my Excel 2007, this is the VBA result
of recording:

<vba>
Sub Macro2()
'
' Macro2 Macro
'
'
  Columns("A:A").Select
  Range(Selection, Selection.End(xlToRight)).Select
  Columns("A:D").EntireColumn.AutoFit
End Sub
</vba>


You then just fire up win32com.client or comtypes,
according to taste and go from there. If you need
help with the COM side of things, post back here.

TJG
___
python-win32 mailing list
python-wi...@python.org
http://mail.python.org/mailman/listinfo/python-win32



-- 
http://www.goldwatches.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Adding a Par construct to Python?

2009-05-17 Thread jeremy
From a user point of view I think that adding a 'par' construct to
Python for parallel loops would add a lot of power and simplicity,
e.g.

par i in list:
    updatePartition(i)

There would be no locking and it would be the programmer's
responsibility to ensure that the loop was truly parallel and correct.

The intention of this would be to speed up Python execution on multi-
core platforms. Within a few years we will see 100+ core processors as
standard and we need to be ready for that.

There could also be parallel versions of map, filter and reduce
provided.
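A parallel map is already approximable today with the multiprocessing module, which sidesteps the GIL by using separate processes rather than threads (a sketch only — the per-partition work here is a stand-in):

```python
from multiprocessing import Pool

def update_partition(i):
    return i * i          # stand-in for real per-partition work

if __name__ == "__main__":
    # Each call runs in one of four worker processes, so the GIL of the
    # parent interpreter does not serialize the work.
    with Pool(processes=4) as pool:
        results = pool.map(update_partition, range(8))
    print(results)
```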

BUT...none of this would be possible with the current implementation
of Python with its Global Interpreter Lock, which effectively rules
out true parallel processing.

See: 
http://jessenoller.com/2009/02/01/python-threads-and-the-global-interpreter-lock/

What do others think?

Jeremy Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread jeremy
On 17 May, 13:05, jer...@martinfamily.freeserve.co.uk wrote:
 From a user point of view I think that adding a 'par' construct to
 Python for parallel loops would add a lot of power and simplicity,
 e.g.

 par i in list:
     updatePartition(i)

...actually, thinking about this further, I think it would be good to
add a 'sync' keyword which causes a thread rendezvous within a
parallel loop. This would allow parallel loops to run for longer in
certain circumstances without having the overhead of stopping and
restarting all the threads, e.g.

par i in list:
    for j in iterations:
        updatePartition(i)
        sync
        commitBoundaryValues(i)
        sync

This example is a typical iteration over a grid, e.g. finite elements,
calculation, where the boundary values need to be read by neighbouring
partitions before they are updated. It assumes that the new values of
the boundary values are stored in temporary variables until they can
be safely updated.
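The proposed `sync` rendezvous exists in today's standard library as `threading.Barrier` — a minimal sketch, with the partition update and boundary commit replaced by stand-in arithmetic:

```python
import threading

N = 4
barrier = threading.Barrier(N)   # the 'sync' rendezvous point
results = []
lock = threading.Lock()

def worker(i):
    partial = i * 2              # stand-in for updatePartition(i)
    barrier.wait()               # 'sync': every thread pauses here until all arrive
    with lock:
        results.append(partial)  # stand-in for commitBoundaryValues(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(results) == [0, 2, 4, 6]
```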

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Piet van Oostrum
 Edward Grefenstette egre...@gmail.com (EG) wrote:

EG Any attempt to do anything with Tkinter (save import) raises the
EG following show-stopping error:

EG Traceback (most recent call last):
EG   File stdin, line 1, in module
EG   File /Library/Frameworks/Python.framework/Versions/2.6/lib/
EG python2.6/lib-tk/Tkinter.py, line 1645, in __init__
EG self._loadtk()
EG   File /Library/Frameworks/Python.framework/Versions/2.6/lib/
EG python2.6/lib-tk/Tkinter.py, line 1659, in _loadtk
EG % (_tkinter.TK_VERSION, tk_version)
EG RuntimeError: tk.h version (8.4) doesn't match libtk.a version (8.5)

EG As you can see, I'm running the vanilla install python on OS X 10.5.7.
EG Does anyone know how I can fix this? Google searches have yielded
EG results ranging from suggestions it has been fixed (not for me) to
EG recommendations that the user rebuild python against a newer version
EG of libtk (which I have no idea how to do).

EG I would greatly appreciate any assistance the community can provide on
EG the matter.

Have you installed Tk version 8.5?

If so, remove it. You might also install the latest 8.4 version.
-- 
Piet van Oostrum p...@cs.uu.nl
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: p...@vanoostrum.org
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Photoimage on button appears pixelated when button is disabled

2009-05-17 Thread Dustan
On May 15, 2:59 pm, Dustan dustangro...@gmail.com wrote:
 In tkinter, when I place a photoimage on a button and disable the
 button, the image has background dots scattered through the image.
 Searching the web, I wasn't able to find any documentation on this
 behavior, nor how to turn it off. So here I am. How do I keep this
 from happening?

 Also, how can I extract the base-64 encoding of a GIF, so I can put
 the image directly into the code instead of having to keep a separate
 file for the image?

 All responses appreciated,
 Dustan

At the very least, someone ought to be able to provide an answer to
the second question.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:

 From a user point of view I think that adding a 'par' construct to
 Python for parallel loops would add a lot of power and simplicity, e.g.
 
 par i in list:
 updatePartition(i)
 
 There would be no locking and it would be the programmer's
 responsibility to ensure that the loop was truly parallel and correct.

What does 'par' actually do there?

Given that it is the programmer's responsibility to ensure that 
updatePartition was actually parallelized, couldn't that be written as:

for i in list:
    updatePartition(i)

and save a keyword?



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Photoimage on button appears pixelated when button is disabled

2009-05-17 Thread Tim Golden

Dustan wrote:

On May 15, 2:59 pm, Dustan dustangro...@gmail.com wrote:

In tkinter, when I place a photoimage on a button and disable the
button, the image has background dots scattered through the image.
Searching the web, I wasn't able to find any documentation on this
behavior, nor how to turn it off. So here I am. How do I keep this
from happening?

Also, how can I extract the base-64 encoding of a GIF, so I can put
the image directly into the code instead of having to keep a separate
file for the image?

All responses appreciated,
Dustan


At the very least, someone ought to be able to provide an answer to
the second question.


Well I know nothing about Tkinter, but to do base64 encoding,
you want to look at the base64 module.
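For instance, a minimal sketch of the round trip (the `gif_to_base64` helper and the stand-in bytes are mine, not part of any library; Tkinter's `PhotoImage` accepts base64 text via its `data` argument):

```python
import base64

def gif_to_base64(path):
    """Return the base64 text of a GIF file, ready to paste into source code."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read())

# In-memory round trip with stand-in data (a real script would pass a .gif path):
sample = b"GIF89a"                      # the first bytes of any GIF89a file
encoded = base64.b64encode(sample)
assert base64.b64decode(encoded) == sample
print(encoded.decode("ascii"))          # R0lGODlh
# Tkinter can then rebuild the image without the external file:
#   photo = Tkinter.PhotoImage(data=encoded)
```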

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's the use of the else in try/except/else?

2009-05-17 Thread Beni Cherniavsky
[Long mail.  You may skip to the last paragraph to get the summary.]

On May 12, 12:35 pm, Steven D'Aprano wrote:
 To really be safe, that should become:

 try:
     rsrc = get(resource)
 except ResourceError:
     log('no more resources available')
     raise
 else:
     try:
         do_something_with(rsrc)
     finally:
         rsrc.close()

 which is now starting to get a bit icky (but only a bit, and only because
 of the nesting, not because of the else).

Note that this example doesn't need ``else``, because the ``except``
clause re-raises the exception.  It could as well be::

try:
    rsrc = get(resource)
except ResourceError:
    log('no more resources available')
    raise
try:
    do_something_with(rsrc)
finally:
    rsrc.close()

``else`` is relevant only if your ``except`` clause(s) may quietly
suppress the exception::

try:
    rsrc = get(resource)
except ResourceError:
    log('no more resources available, skipping do_something')
else:
    try:
        do_something_with(rsrc)
    finally:
        rsrc.close()

And yes, it's icky - not because of the ``else`` but because
acquisition-release done correctly is always an icky pattern.  That's
why we now have the ``with`` statement - assuming `get()` implements a
context manager, you should be able to write::

with get(resource) as rsrc:
    do_something_with(rsrc)

But wait, what if get() fails?  We get an exception!  We wanted to
suppress it::

try:
    with get(resource) as rsrc:
        do_something_with(rsrc)
except ResourceError:
    log('no more resources available, skipping do_something')

But wait, that catches ResourceError in ``do_something_with(rsrc)`` as
well!  Which is precisely what we tried to avoid by using
``try..else``!
Sadly, ``with`` doesn't have an else clause.  If somebody really
believes it should support this pattern, feel free to write a PEP.

I think this is a bad example of ``try..else``.  First, why would you
silently suppress out-of-resource exceptions?  If you don't suppress
them, you don't need ``else``.  Second, such runtime problems are
normally handled uniformly at some high level (log / abort / show a
message box / etc.), wherever they occur - if ``do_something_with(rsrc)``
raises `ResourceError` you'd want it handled the same way.

So here is another, more practical example of ``try..else``:

try:
    bar = foo.get_bar()
except AttributeError:
    quux = foo.get_quux()
else:
    quux = bar.get_quux()

assuming ``foo.get_bar()`` is optional but ``bar.get_quux()`` isn't.
If we had put ``bar.get_quux()`` inside the ``try``, it could mask a
bug.  In fact to be precise, we don't want to catch an AttributeError
that may happen during the call to ``get_bar()``, so we should move
the call into the ``else``::

try:
    get_bar = foo.get_bar
except AttributeError:
    quux = foo.get_quux()
else:
    quux = get_bar().get_quux()

Ick!

The astute reader will notice that cases where it's important to
localize exception catching involve frequent exceptions like
`AttributeError` or `IndexError` -- and that these cases are already
handled by `getattr` and `dict.get` (courtesy of Guido's Time
Machine).

Bottom line(s):
1. ``try..except..else`` is syntactically needed only when ``except``
might suppress the exception.
2. Minimal scope of ``try..except`` doesn't always apply (for
`AttributeError` it probably does, for `MemoryError` it probably
doesn't).
3. It *is* somewhat awkward to use, which is why the important use
cases - exceptions that are frequently raised and caught - deserve
wrapping by functions like `getattr()` with default arguments.
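That last `getattr()` idiom can be made concrete; the classes below are hypothetical stand-ins for `foo` and `bar`:

```python
class Bar(object):
    def get_quux(self):
        return "quux via bar"

class Foo(object):
    """A foo without the optional get_bar()."""
    def get_quux(self):
        return "quux via foo"

class FooWithBar(Foo):
    """A foo that does provide get_bar()."""
    def get_bar(self):
        return Bar()

def quux_of(foo):
    # getattr() with a default replaces the try/except AttributeError dance,
    # and cannot mask an AttributeError raised *inside* get_bar() itself.
    get_bar = getattr(foo, "get_bar", None)
    if get_bar is None:
        return foo.get_quux()
    return get_bar().get_quux()

print(quux_of(Foo()))         # quux via foo
print(quux_of(FooWithBar()))  # quux via bar
```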
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Grant Edwards
On 2009-05-17, Steven D'Aprano st...@remove-this-cybersource.com.au wrote:
 On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:

 From a user point of view I think that adding a 'par' construct to
 Python for parallel loops would add a lot of power and simplicity, e.g.
 
 par i in list:
     updatePartition(i)
 
 There would be no locking and it would be the programmer's
 responsibility to ensure that the loop was truly parallel and correct.

 What does 'par' actually do there?

My reading of the OP is that it tells the interpreter that it
can execute any/all iterations of updatePartition(i) in parallel
(or presumably serially, in any order) rather than serially in a
strict sequence.

 Given that it is the programmer's responsibility to ensure
 that updatePartition was actually parallelized, couldn't that
 be written as:

 for i in list:
     updatePartition(i)

 and save a keyword?

No, because a for loop is defined to execute its iterations
serially in a specific order.  OTOH, a par loop is required
to execute once for each value, but those executions could
happen in parallel or in any order.

At least that's how I understood the OP.

-- 
Grant

-- 
http://mail.python.org/mailman/listinfo/python-list


pushback iterator

2009-05-17 Thread Matus
Hallo pylist,

I searched the web and the Python documentation for an implementation of a
pushback iterator but found none in the stdlib.

problem:

when you parse a file, you often have to read a line from the parsed file
before you can decide whether you want that line or not. If not, it would
be a nice feature to be able to push the line back into the iterator, so
next time you pull from the iterator you get this 'unused' line.

solution:
=
I found a nice and fast solution somewhere on the net:

-
class Pushback_wrapper( object ):
    def __init__( self, it ):
        self.it = it
        self.pushed_back = [ ]
        self.nextfn = it.next

    def __iter__( self ):
        return self

    def __nonzero__( self ):
        if self.pushed_back:
            return True

        try:
            self.pushback( self.nextfn( ) )
        except StopIteration:
            return False
        else:
            return True

    def popfn( self ):
        lst = self.pushed_back
        res = lst.pop( )
        if not lst:
            self.nextfn = self.it.next
        return res

    def next( self ):
        return self.nextfn( )

    def pushback( self, item ):
        self.pushed_back.append( item )
        self.nextfn = self.popfn
-
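A shorter sketch of the same idea, without the nextfn switching (the class and names here are mine, shown in modern spelling with a Python 2 alias):

```python
class Pushback(object):
    """Wrap any iterator so unwanted items can be pushed back onto it."""

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cache = []            # last item pushed back is yielded first

    def __iter__(self):
        return self

    def __next__(self):
        if self._cache:
            return self._cache.pop()
        return next(self._it)

    next = __next__                 # Python 2 spelling

    def pushback(self, item):
        self._cache.append(item)

it = Pushback("abc")
first = next(it)                    # read 'a'...
it.pushback(first)                  # ...decide we did not want it yet
print(list(it))                     # ['a', 'b', 'c']
```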

proposal:
=
as this is (I suppose) a common problem, would it be possible to extend
the stdlib of Python (i.e. the itertools module) with a similar solution, so
one does not have to reinvent the wheel every time pushback is needed?


thx, Matus
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread bearophileHUGS
Jeremy Martin, nowadays a parallel for can be useful, and in future
I'll try to introduce similar things in D too, but syntax isn't
enough. You need a way to run things in parallel, but Python has the
GIL.
To implement a good parallel for, your language may also need more
immutable data structures (think about finger trees), and pure
functions can improve the safety of your code a lot, and so on.

The multiprocessing module in Python 2.6 already does something like what
you are talking about. For example, I have used the parallel map of
that module to almost double the speed of a small program of mine.
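The parallel map referred to here is multiprocessing.Pool.map; a minimal sketch (the `square` and `parallel_map` names are mine):

```python
from multiprocessing import Pool

def square(x):                      # must be a top-level function (picklable)
    return x * x

def parallel_map(fn, seq, processes=4):
    """map() semantics, but each call runs in a worker process."""
    pool = Pool(processes)
    try:
        return pool.map(fn, seq)    # preserves input order in the result
    finally:
        pool.close()
        pool.join()

if __name__ == "__main__":
    print(parallel_map(square, range(10)))
```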

Bye,
bearophile
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Mike Kazantsev
On Sun, 17 May 2009 16:39:38 +0200
Matus mat...@gmail.com wrote:

 I searches web and python documentation for implementation of pushback
 iterator but found none in stdlib.
 
 problem:
 
 when you parse a file, often you have to read a line from parsed file
 before you can decide if you want that line it or not. if not, it would
 be a nice feature to be able po push the line back into the iterator, so
 nest time when you pull from iterator you get this 'unused' line.
  
...
 
 proposal:
 =
 as this is (as I suppose) common problem, would it be possible to extend
 the stdlib of python (ie itertools module) with a similar solution so
 one do not have to reinvent the wheel every time pushback is needed?  

Sounds to me more like an iterator with a cache - you can't really pull
the line from a real iterable like a generator function and then just push
it back.
If this iterator is really a list then you can use it as such w/o
unnecessary in-out operations.

And if you're pushing back the data for later use you might just as
well push it to dict with the right indexing, so the next pop won't
have to roam thru all the values again but instantly get the right one
from the cache, or just get on with that iterable until it depletes.

What real-world scenario am I missing here?

-- 
Mike Kazantsev // fraggod.net


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Benjamin Kaplan
On Sun, May 17, 2009 at 8:42 AM, Piet van Oostrum p...@cs.uu.nl wrote:

  Edward Grefenstette egre...@gmail.com (EG) wrote:

 EG Any attempt to do anything with Tkinter (save import) raises the
 EG following show-stopping error:

 EG Traceback (most recent call last):
 EG   File stdin, line 1, in module
 EG   File /Library/Frameworks/Python.framework/Versions/2.6/lib/
 EG python2.6/lib-tk/Tkinter.py, line 1645, in __init__
 EG self._loadtk()
 EG   File /Library/Frameworks/Python.framework/Versions/2.6/lib/
 EG python2.6/lib-tk/Tkinter.py, line 1659, in _loadtk
 EG % (_tkinter.TK_VERSION, tk_version)
 EG RuntimeError: tk.h version (8.4) doesn't match libtk.a version (8.5)

 EG As you can see, I'm running the vanilla install python on OS X 10.5.7.
 EG Does anyone know how I can fix this? Google searches have yielded
 EG results ranging from suggestions it has been fixed (not for me) to
 EG recommendations that the user rebuild python against a newer version
 EG of libtk (which I have no idea how to do).

 EG I would greatly appreciate any assistance the community can provide on
 EG the matter.

 Have you installed Tk version 8.5?

 If so, remove it. You might also install the latest 8.4 version.


There were a couple of bugs in the 2.6.0 installer that stopped Tkinter from
working, and this error message was given by one of them[1]. The Python
installer looked in /System/Library before /Library, so it used the System
Tk. The linker looks in /Library first, so it found the user-installed Tk and
used that instead. You then get a version mismatch. Try reinstalling Python
(use 2.6.2 if you're not already). That should get it to link with the
proper Tk.

[1] http://bugs.python.org/issue4017

--
 Piet van Oostrum p...@cs.uu.nl
 URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
 Private email: p...@vanoostrum.org
 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 09:26:35 -0500, Grant Edwards wrote:

 On 2009-05-17, Steven D'Aprano st...@remove-this-cybersource.com.au
 wrote:
 On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:

 From a user point of view I think that adding a 'par' construct to
 Python for parallel loops would add a lot of power and simplicity,
 e.g.
 
 par i in list:
 updatePartition(i)
 
 There would be no locking and it would be the programmer's
 responsibility to ensure that the loop was truly parallel and correct.

 What does 'par' actually do there?
 
 My reading of the OP is that it tells the interpreter that it can
 execute any/all iterations of updatePartion(i) in parallel (or
 presumably serially in any order) rather than serially in a strict
 sequence.
 
 Given that it is the programmer's responsibility to ensure that
 updatePartition was actually parallelized, couldn't that be written as:

 for i in list:
 updatePartition(i)

 and save a keyword?
 
 No, because a for loop is defined to execute it's iterations serially
 in a specific order.  OTOH, a par loop is required to execute once for
 each value, but those executions could happen in parallel or in any
 order.
 
 At least that's how I understood the OP.

I can try guessing what the OP is thinking just as well as anyone else, 
but in the face of ambiguity, refuse the temptation to guess :)

It isn't clear to me what the OP expects the par construct to 
actually do. Does it create a thread for each iteration? A process? 
Something else? Given that the rest of Python will be sequential (apart 
from explicitly parallelized functions), and that the OP specifies that 
updatePartition still needs to handle its own parallelization, does it 
really matter if the calls to updatePartition happen sequentially?

If it's important to make the calls in arbitrary order, random.shuffle 
will do that. If there's some other non-sequential and non-random order 
to the calls, the OP should explain what it is. What else, if anything, 
does par do, that it needs to be a keyword and statement rather than a 
function? What does it do that (say) a parallel version of map() wouldn't 
do?

The OP also suggested:

There could also be parallel versions of map, filter and reduce
provided.

It makes sense to talk about parallelizing map(), because you can 
allocate a list of the right size to slot the results into as they become 
available. I'm not so sure about filter(), unless you give up the 
requirement that the filtered results occur in the same order as the 
originals.

But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it has operated on the (n-1)th item.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Mike Kazantsev
Somehow, I got the message off the list.

On Sun, 17 May 2009 17:42:43 +0200
Matus mat...@gmail.com wrote:

  Sounds to me more like an iterator with a cache - you can't really pull
  the line from a real iterable like generator function and then just push
  it back.
 
 true, that is why you have to implement this iterator wrapper

I fail to see much point in such a dumb cache; in most cases you
shouldn't iterate again and again thru the same sequence, so what
good will hardcoding (and thus encouraging) such a thing do?

Besides, this wrapper breaks iteration order, since its cache is LIFO
instead of FIFO; it should rather be implemented with a deque instead
of a list.

  If this iterator is really a list then you can use it as such w/o
  unnecessary in-out operations.
 
of course, it is not a list. You can wrap a 'real' iterator using this
wrapper, and voila, you can use the pushback method to 'push back' an item
received from the next method. By calling next again, you will get the
pushed-back item again; that is actually the point.

The wrapper differs from list(iterator) in only one thing: it might not
make it to the end of the iterable. But if pushing back is a common
operation, there's a good chance you'll reach the end of the
iterator during execution, dragging the whole thing along as a burden each
time.

  And if you're pushing back the data for later use you might just as
  well push it to dict with the right indexing, so the next pop won't
  have to roam thru all the values again but instantly get the right one
  from the cache, or just get on with that iterable until it depletes.
  
  What real-world scenario am I missing here?
  
 
 ok, I admit that the file was not a good example. A better example
 would be just any iterator you use in your code.

Somehow I've always managed to avoid such re-iteration scenarios, but
of course, it could be just my luck ;)

-- 
Mike Kazantsev // fraggod.net


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread MRAB

Steven D'Aprano wrote:

On Sun, 17 May 2009 09:26:35 -0500, Grant Edwards wrote:


On 2009-05-17, Steven D'Aprano st...@remove-this-cybersource.com.au
wrote:

On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:


From a user point of view I think that adding a 'par' construct to
Python for parallel loops would add a lot of power and simplicity,
e.g.

par i in list:
updatePartition(i)

There would be no locking and it would be the programmer's
responsibility to ensure that the loop was truly parallel and correct.

What does 'par' actually do there?

My reading of the OP is that it tells the interpreter that it can
execute any/all iterations of updatePartion(i) in parallel (or
presumably serially in any order) rather than serially in a strict
sequence.


Given that it is the programmer's responsibility to ensure that
updatePartition was actually parallelized, couldn't that be written as:

for i in list:
updatePartition(i)

and save a keyword?

No, because a for loop is defined to execute it's iterations serially
in a specific order.  OTOH, a par loop is required to execute once for
each value, but those executions could happen in parallel or in any
order.

At least that's how I understood the OP.


I can try guessing what the OP is thinking just as well as anyone else, 
but in the face of ambiguity, refuse the temptation to guess :)


It isn't clear to me what the OP expects the par construct is supposed 
to actually do. Does it create a thread for each iteration? A process? 
Something else? Given that the rest of Python will be sequential (apart 
from explicitly parallelized functions), and that the OP specifies that 
updatePartition still needs to handle its own parallelization, does it 
really matter if the calls to updatePartition happen sequentially?


If it's important to make the calls in arbitrary order, random.shuffle 
will do that. If there's some other non-sequential and non-random order 
to the calls, the OP should explain what it is. What else, if anything, 
does par do, that it needs to be a keyword and statement rather than a 
function? What does it do that (say) a parallel version of map() wouldn't 
do?


The OP also suggested:

There could also be parallel versions of map, filter and reduce
provided.

It makes sense to talk about parallelizing map(), because you can 
allocate a list of the right size to slot the results into as they become 
available. I'm not so sure about filter(), unless you give up the 
requirement that the filtered results occur in the same order as the 
originals.


But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it is operated on the (n-1)th item.



It can calculate the items in parallel, but the final result must be
calculated in sequence, although if the final operation is commutative then
some of them could be done in parallel.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Diez B. Roggisch
But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it is operated on the (n-1)th item.


That depends on the operation in question. Addition for example would 
work. My math-skills are a bit too rusty to qualify the exact nature of 
the operation, commutativity springs to my mind.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Luis Alberto Zarrabeitia Gomez

Quoting Mike Kazantsev mk.frag...@gmail.com:

 And if you're pushing back the data for later use you might just as
 well push it to dict with the right indexing, so the next pop won't
 have to roam thru all the values again but instantly get the right one
 from the cache, or just get on with that iterable until it depletes.
 
 What real-world scenario am I missing here?

Other than the one he described in his message? Neither of your proposed
solutions solves the OP's problem. He doesn't have a list (he /could/ build a
list, and thus defeat the purpose of having an iterator). He /could/ use
alternative data structures, like the dictionary you are suggesting... and he
is - he is using his pushback iterator, but he has to include it over and over.

Currently there is no good pythonic way of building a function that decides to
stop consuming from an iterator when the first invalid input is encountered:
that last, invalid input is lost from the iterator. You can't just abstract the
whole logic inside the function; something must leak.

Consider, for instance, the itertools.dropwhile (and takewhile). You can't just
use it like

i = iter(something)
itertools.dropwhile(condition, i)
# now consume the rest

Instead, you have to do this:

i = iter(something)
i = itertools.dropwhile(condition, i) 
# and now i contains _another_ iterator
# and the first one still exists[*], but shouldn't be used
# [*] (assume it was a parameter instead of the iter construct)

For parsing files, for instance (similar to the OP's example), it could be nice
to do:

f = file(something)
lines = iter(f)
parse_headers(lines)
parse_body(lines)
parse_footer(lines)

which is currently impossible.

To the OP: if you don't mind doing instead:

f = file(something)
rest = parse_headers(f)
rest = parse_body(rest)
rest = parse_footer(rest)

you could return itertools.chain([pushed_back], iterator) from your parsing
functions. Unfortunately, this approach adds another layer of itertools.chain
on top of the iterator; you will have to hope this will not cause a
performance/memory penalty.
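The chain() trick can be sketched like this (`parse_headers` and the header format are made up for illustration):

```python
from itertools import chain

def parse_headers(lines):
    """Consume 'Name: value' lines; return (headers, rest-iterator).

    The line that ends the headers is pushed back onto the returned
    iterator with itertools.chain, so the next parser still sees it.
    """
    headers = []
    for line in lines:
        if ":" not in line:                    # read one line too many
            return headers, chain([line], lines)
        headers.append(line)
    return headers, lines                      # input was all headers

data = iter(["A: 1", "B: 2", "first body line", "second body line"])
headers, rest = parse_headers(data)
print(headers)      # ['A: 1', 'B: 2']
print(list(rest))   # ['first body line', 'second body line']
```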

Cheers,

-- 
Luis Zarrabeitia
Facultad de Matemática y Computación, UH
http://profesores.matcom.uh.cu/~kyrie

-- 
Participe en Universidad 2010, del 8 al 12 de febrero de 2010
La Habana, Cuba 
http://www.universidad2010.cu

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Roy Smith
In article 0220260f$0$20645$c3e8...@news.astraweb.com,
 Steven D'Aprano st...@remove-this-cybersource.com.au wrote:

 But reduce()? I can't see how you can parallelize reduce(). By its 
 nature, it has to run sequentially: it can't operate on the nth item 
 until it is operated on the (n-1)th item.

Well, if you're willing to impose the additional constraint that f() must 
be associative, then you could load the items into a tree, and work your 
way up from the bottom of the tree, applying f() pairwise to the left and 
right child of each node, propagating upward.

It would take k1 * O(n) to create the (unsorted) tree, and if all the pairs 
in each layer really could be done in parallel, k2 * O(lg n) to propagate 
the intermediate values.  As long as k2 is large compared to k1, you win.

Of course, if the items are already in some random-access container (such 
as a list), you don't even need to do the first step, but in the general 
case of generating the elements on the fly with an iterable, you do.  Even 
with an iterable, you could start processing the first elements while 
you're still generating the rest of them, but that gets a lot more 
complicated and, assuming k2 >> k1, of limited value.

If k2 is about the same as k1, then the whole thing is pointless.

But, this would be something to put in a library function, or maybe a 
special-purpose Python derivative, such as numpy.  Not in the core language.
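The bottom-up scheme described here can be sketched sequentially; each level's pairs are independent, which is where the parallelism would go (`tree_reduce` is my name, and correctness requires f to be associative):

```python
from operator import add

def tree_reduce(f, items):
    """Pairwise reduction of items with f.

    Gives the same answer as reduce(f, items) only when f is associative;
    the f() calls within one level are independent and could run in parallel.
    """
    level = list(items)
    if not level:
        raise TypeError("tree_reduce() of empty sequence")
    while len(level) > 1:
        # Combine neighbours pairwise; an odd element is carried up as-is.
        nxt = [f(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

print(tree_reduce(add, range(10)))   # 45, same as sum(range(10))
```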
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Gary Herron

MRAB wrote:

Steven D'Aprano wrote:

On Sun, 17 May 2009 09:26:35 -0500, Grant Edwards wrote:


On 2009-05-17, Steven D'Aprano st...@remove-this-cybersource.com.au
wrote:

On Sun, 17 May 2009 05:05:03 -0700, jeremy wrote:


From a user point of view I think that adding a 'par' construct to
Python for parallel loops would add a lot of power and simplicity,
e.g.

par i in list:
updatePartition(i)

There would be no locking and it would be the programmer's
responsibility to ensure that the loop was truly parallel and 
correct.

What does 'par' actually do there?

My reading of the OP is that it tells the interpreter that it can
execute any/all iterations of updatePartion(i) in parallel (or
presumably serially in any order) rather than serially in a strict
sequence.


Given that it is the programmer's responsibility to ensure that
updatePartition was actually parallelized, couldn't that be written 
as:


for i in list:
updatePartition(i)

and save a keyword?

No, because a for loop is defined to execute its iterations serially
in a specific order.  OTOH, a par loop is required to execute once for
each value, but those executions could happen in parallel or in any
order.

At least that's how I understood the OP.


I can try guessing what the OP is thinking just as well as anyone 
else, but in the face of ambiguity, refuse the temptation to guess :)


It isn't clear to me what the OP expects the par construct is 
supposed to actually do. Does it create a thread for each iteration? 
A process? Something else? Given that the rest of Python will be 
sequential (apart from explicitly parallelized functions), and that 
the OP specifies that updatePartition still needs to handle its own 
parallelization, does it really matter if the calls to 
updatePartition happen sequentially?


If it's important to make the calls in arbitrary order, 
random.shuffle will do that. If there's some other non-sequential and 
non-random order to the calls, the OP should explain what it is. What 
else, if anything, does par do, that it needs to be a keyword and 
statement rather than a function? What does it do that (say) a 
parallel version of map() wouldn't do?


The OP also suggested:

There could also be parallel versions of map, filter and reduce
provided.

It makes sense to talk about parallelizing map(), because you can 
allocate a list of the right size to slot the results into as they 
become available. I'm not so sure about filter(), unless you give up 
the requirement that the filtered results occur in the same order as 
the originals.


But reduce()? I can't see how you can parallelize reduce(). By its 
nature, it has to run sequentially: it can't operate on the nth item 
until it is operated on the (n-1)th item.



It can calculate the items in parallel, but the final result must be
calculated in sequence, although if the final operation is commutative then
some of them could be done in parallel.


That should read associative, not commutative.

For instance A+B+C+D could be calculated sequentially as implied by
 ((A+B)+C)+D
or with some parallelism as implied by
 (A+B)+(C+D)
That's an application of the associativity of addition.

Gary Herron


--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 18:24:34 +0200, Diez B. Roggisch wrote:

 But reduce()? I can't see how you can parallelize reduce(). By its
 nature, it has to run sequentially: it can't operate on the nth item
 until it is operated on the (n-1)th item.
 
 That depends on the operation in question. Addition for example would
 work. 

You'd think so, but you'd be wrong. You can't assume addition is always 
commutative.

>>> reduce(operator.add, (1.0, 1e57, -1e57))
0.0
>>> reduce(operator.add, (1e57, -1e57, 1.0))
1.0



 My math-skills are a bit too rusty to qualify the exact nature of
 the operation, commutativity springs to my mind.

And how is reduce() supposed to know whether or not some arbitrary 
function is commutative?


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 17:19:15 +0100, MRAB wrote:

 But reduce()? I can't see how you can parallelize reduce(). By its
 nature, it has to run sequentially: it can't operate on the nth item
 until it is operated on the (n-1)th item.
 
 It can calculate the items in parallel, 

I don't understand what calculation you are talking about. Let's take a 
simple example:

reduce(operator.sub, [100, 50, 25, 5])  = 100-50-25-5 = 20

What calculations do you expect to do in parallel?


 but the final result must be
 calculated sequence, although if the final operation is commutative then
 some of them could be done in parallel.

But reduce() can't tell whether the function being applied is commutative 
or not. I suppose it could special-case a handful of special cases (e.g. 
operator.add for int arguments -- but not floats!) or take a caller-
supplied argument that tells it whether the function is commutative or 
not. But in general, you can't assume the function being applied is 
commutative or associative, so unless you're willing to accept undefined 
behaviour, I don't see any practical way of parallelizing reduce().


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread MRAB

Steven D'Aprano wrote:

On Sun, 17 May 2009 17:19:15 +0100, MRAB wrote:


But reduce()? I can't see how you can parallelize reduce(). By its
nature, it has to run sequentially: it can't operate on the nth item
until it is operated on the (n-1)th item.

It can calculate the items in parallel, 


I don't understand what calculation you are talking about. Let's take a 
simple example:


reduce(operator.sub, [100, 50, 25, 5])  = 100-50-25-5 = 20

What calculations do you expect to do in parallel?



but the final result must be
calculated sequence, although if the final operation is commutative then
some of them could be done in parallel.


But reduce() can't tell whether the function being applied is commutative 
or not. I suppose it could special-case a handful of special cases (e.g. 
operator.add for int arguments -- but not floats!) or take a caller-
supplied argument that tells it whether the function is commutative or 
not. But in general, you can't assume the function being applied is 
commutative or associative, so unless you're willing to accept undefined 
behaviour, I don't see any practical way of parallelizing reduce().



I meant associative not commutative.

I was thinking about calculating the sum of a list of expressions, where
the expressions could be calculated in parallel.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Diez B. Roggisch



My math-skills are a bit too rusty to qualify the exact nature of
the operation, commutativity springs to my mind.


And how is reduce() supposed to know whether or not some arbitrary 
function is commutative?


I don't recall anybody saying it should know that - do you? The OP wants 
to introduce parallel variants, not replace the existing ones.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Diez B. Roggisch
But reduce() can't tell whether the function being applied is commutative 
or not. I suppose it could special-case a handful of special cases (e.g. 
operator.add for int arguments -- but not floats!) or take a caller-
supplied argument that tells it whether the function is commutative or 
not. But in general, you can't assume the function being applied is 
commutative or associative, so unless you're willing to accept undefined 
behaviour, I don't see any practical way of parallelizing reduce().


def reduce(operation, sequence, startitem=None, parallelize=False)

should be enough. Approaches such as OpenMP also don't guess, they use 
explicit annotations.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


threading issue

2009-05-17 Thread anusha k
hi,

I am using pygtk and glade on the front end and postgresql and
python-twisted (xmlrpc) on the back end. My issue is that when I add a
progress bar to my application, it blocks the backend process, so I
started using threading. The threading works, but now I am not able to
destroy the window that holds the progress bar, which is part of the
main window.

my code is

class ledger(threading.Thread):
    """This class sets the fraction of the progressbar"""

    # Thread event, stops the thread if it is set.
    stopthread = threading.Event()
    ...
    wTree = gtk.glade.XML('gnukhata/main_window.glade', 'window_progressbar')
    window = wTree.get_widget('window_progressbar')
    window.set_size_request(300, 50)
    progressbar = wTree.get_widget('progressbar')
    #window.connect('destroy', self.main_quit)
    window.show_all()

    def run(self):
        """Run method, this is the code that runs while the thread is alive."""
        #Importing the progressbar widget from the global scope
        #global progressbar
        course = True
        # While the stopthread event isn't set, the thread keeps going on
        while course:
            # Acquiring the gtk global mutex
            gtk.gdk.threads_enter()
            # Setting a random value for the fraction
            l = 0.1
            while l < 1:
                self.progressbar.pulse()
                time.sleep(0.1)
                l = l + 0.1

            queryParams = []
            res1 = self.x.account.getAllAccountNamesByLedger(queryParams)
            for l in range(0, len(res1)):
                ...
            # Releasing the gtk global mutex
            gtk.gdk.threads_leave()

            # Delaying 100ms until the next iteration
            time.sleep(0.1)
            course = False
            #gtk.main_quit()
            global fs
            print 'anu'
            # Stopping the thread and the gtk's main loop
            fs.stop()

            #window.destroy()

            self.ods.save("Ledger.ods")
            os.system("ooffice Ledger.ods")

    def stop(self):
        """Stop method, sets the event to terminate the thread's main loop."""
        self.stopthread.set()

*
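One likely culprit in the code above: stop() sets the stopthread event, but run() never checks it, so the thread can never be asked to shut down cleanly. Below is a minimal, GUI-free sketch of the cooperative-stop pattern (the gtk calls are omitted; with PyGTK the usual advice is to schedule widget updates from the worker via gobject.idle_add rather than threads_enter/threads_leave):

```python
import threading
import time

class Worker(threading.Thread):
    """Background task that can be stopped cooperatively."""

    def __init__(self):
        threading.Thread.__init__(self)
        self.stopthread = threading.Event()  # set() to request shutdown
        self.ticks = 0

    def run(self):
        # Check the event on every iteration instead of looping unconditionally.
        while not self.stopthread.is_set():
            self.ticks += 1          # here you would pulse the progress bar
            time.sleep(0.01)

    def stop(self):
        self.stopthread.set()

w = Worker()
w.start()
time.sleep(0.05)
w.stop()                             # run() sees the event and returns
w.join()
```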



njoy the share of freedom,
Anusha Kadambala
-- 
http://mail.python.org/mailman/listinfo/python-list


Seeking old post on developers who like IDEs vs developers who like simple languages

2009-05-17 Thread Steve Ferg
A few years ago someone, somewhere on the Web, posted a blog in which
he observed that developers, by general temperament, seem to fall into
two groups.

On the one hand, there are developers who love big IDEs with lots of
features (code generation, error checking, etc.), and rely on them to
provide the high level of support needed to be reasonably productive
in heavy-weight languages (e.g. Java).

On the other hand there are developers who much prefer to keep things
light-weight and simple.  They like clean high-level languages (e.g.
Python) which are compact enough that you can keep the whole language
in your head, and require only a good text editor to be used
effectively.

The author wasn't saying that one was better than the other: only that
there seemed to be this recognizable difference in preferences.

I periodically think of that blog, usually in circumstances that make
me also think "Boy, that guy really got it right."  But despite
repeated and prolonged bouts of googling I haven't been able to find
the article again.  I must be using the wrong search terms or
something.

Does anybody have a link to this article?

Thanks VERY MUCH in advance,
-- Steve Ferg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pushback iterator

2009-05-17 Thread Matus


Luis Alberto Zarrabeitia Gomez wrote:
 Quoting Mike Kazantsev mk.frag...@gmail.com:
 
 And if you're pushing back the data for later use you might just as
 well push it to dict with the right indexing, so the next pop won't
 have to roam thru all the values again but instantly get the right one
 from the cache, or just get on with that iterable until it depletes.

 What real-world scenario am I missing here?
 
 Other than one he described in his message? Neither of your proposed solutions
 solves the OP's problem. He doesn't have a list (he /could/ build a list, and
 thus defeat the purpose of having an iterator). He /could/ use alternative 
 data
 structures, like the dictionary you are suggesting... and he is, he is using 
 his
 pushback iterator, but he has to include it over and over.
 
 Currently there is no good pythonic way of building functions that decide to
 stop consuming from an iterator when the first invalid input is encountered:
 that last, invalid input is lost from the iterator. You can't just abstract 
 the
 whole logic inside the function, something must leak.
 
 Consider, for instance, the itertools.dropwhile (and takewhile). You can't 
 just
 use it like
 
 i = iter(something)
 itertools.dropwhile(condition, i)
 # now consume the rest
 
 Instead, you have to do this:
 
 i = iter(something)
 i = itertools.dropwhile(condition, i) 
 # and now i contains _another_ iterator
 # and the first one still exists[*], but shouldn't be used
 # [*] (assume it was a parameter instead of the iter construct)
 
 For parsing files, for instance (similar to the OP's example), it could be 
 nice
 to do:
 
 f = file(something)
 lines = iter(f)
 parse_headers(lines)
 parse_body(lines)
 parse_footer(lines)
 

that is basically one of many possible scenarios I was referring to.
other example would be:


it = Pushback_wrapper(open('my.file').readlines())
for line in it:
    if is_outer_scope(line):
        '''
        do some processing for this logical scope of the file; there are
        only a few outer-scope lines
        '''
        continue

    for line in it:
        '''
        here we expect 1000 - 2000 lines of inner scope, and we do not
        want to run is_outer_scope() for every line as it is expensive,
        so we decided to reiterate
        '''
        if is_inner_scope(line):
            '''
            do some processing for this logical scope of the file until
            the outer-scope condition occurs
            '''
        elif is_outer_scope(line):
            it.pushback(line)
            break
        else:
            '''flush line'''
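For reference, Pushback_wrapper is not a standard class; a minimal implementation along the lines assumed above (the class and method names are just the ones used in the example) could be:

```python
class Pushback_wrapper(object):
    """Iterator wrapper that lets the consumer push items back."""

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._pushed = []           # pushed-back items, yielded LIFO

    def __iter__(self):
        return self

    def __next__(self):
        if self._pushed:
            return self._pushed.pop()
        return next(self._it)

    next = __next__                 # Python 2 spelling of the same method

    def pushback(self, item):
        self._pushed.append(item)

it = Pushback_wrapper([1, 2, 3])
first = next(it)                    # consume 1
it.pushback(first)                  # put it back; iterating yields 1, 2, 3
```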


 which is currently impossible.
 
 To the OP: if you don't mind doing instead:
 
 f = file(something)
 rest = parse_headers(f)
 rest = parse_body(rest)
 rest = parse_footer(rest)
 
 you could return itertools.chain([pushed_back], iterator) from your parsing
 functions. Unfortunately, this way will add another layer of itertools.chain 
 on
 top of the iterator, so you will have to hope this does not cause a
 performance/memory penalty.
 
 Cheers,
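The chain-based suggestion is easy to make concrete. The helper below (parse_while is a hypothetical name, not part of itertools) consumes matching lines and returns the remainder with the one rejected line stitched back on the front:

```python
import itertools

def parse_while(pred, lines):
    """Consume lines while pred holds; return (consumed, rest), where rest
    still yields the first non-matching line before the untouched tail."""
    consumed = []
    for line in lines:
        if not pred(line):
            # Push the rejected line back by chaining it in front of the rest.
            return consumed, itertools.chain([line], lines)
        consumed.append(line)
    return consumed, iter([])       # input exhausted, nothing to push back

lines = iter(["h1", "h2", "body", "more"])
headers, rest = parse_while(lambda s: s.startswith("h"), lines)
# headers == ["h1", "h2"]; rest yields "body" first, then "more"
```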
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Swapping superclass from a module

2009-05-17 Thread Terry Reedy

Peter Otten wrote:

Terry Reedy wrote:




If the names of superclasses are resolved when classes are instantiated,
the patching is easy.  If, as I would suspect, the names are resolved
when the classes are created, before the module becomes available to the
importing code, then much more careful and extensive patching would be
required, if it is even possible.  (Objects in tuples cannot be
replaced, and some attributes are not writable.)


It may be sufficient to patch the subclasses:


I was not sure if __bases__ is writable or not.  There is also __mro__ 
to consider.
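For what it's worth, __bases__ on ordinary heap classes is writable, and CPython recomputes __mro__ automatically when it is reassigned, so no separate MRO patching is needed. A quick sketch:

```python
class Old(object):
    pass

class New(Old):
    pass

class Sub(Old):
    pass

# Rebind the superclass; the MRO is recomputed as a side effect.
Sub.__bases__ = (New,)
assert Sub.__mro__ == (Sub, New, Old, object)
```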



$ cat my_file.py
class Super(object):
    def __str__(self):
        return "old"

class Sub(Super):
    def __str__(self):
        return "Sub(%s)" % super(Sub, self).__str__()

class Other(object):
    pass

class SubSub(Sub, Other):
    def __str__(self):
        return "SubSub(%s)" % super(SubSub, self).__str__()

if __name__ == "__main__":
    print Sub()

$ cat main2.py
import my_file
OldSuper = my_file.Super

class NewSuper(OldSuper):
    def __str__(self):
        return "new" + super(NewSuper, self).__str__()

my_file.Super = NewSuper
for n, v in vars(my_file).iteritems():
    if v is not NewSuper:
        try:
            bases = v.__bases__
        except AttributeError:
            pass
        else:
            if OldSuper in bases:
                print "patching", n
                v.__bases__ = tuple(NewSuper if b is OldSuper else b
                                    for b in bases)

print my_file.Sub()
print my_file.SubSub()
$ python main2.py
patching Sub
Sub(newold)
SubSub(Sub(newold))

Peter



--
http://mail.python.org/mailman/listinfo/python-list


PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
Thomas Wouters reminded me of a long-standing idea; I finally
found the time to write it down.

Please comment!

Regards,
Martin

PEP: 384
Title: Defining a Stable ABI
Version: $Revision: 72754 $
Last-Modified: $Date: 2009-05-17 21:14:52 +0200 (So, 17. Mai 2009) $
Author: Martin v. Löwis mar...@v.loewis.de
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 17-May-2009
Python-Version: 3.2
Post-History:

Abstract


Currently, each feature release introduces a new name for the
Python DLL on Windows, and may cause incompatibilities for extension
modules on Unix. This PEP proposes to define a stable set of API
functions which are guaranteed to be available for the lifetime
of Python 3, and which will also remain binary-compatible across
versions. Extension modules and applications embedding Python
can work with different feature releases as long as they restrict
themselves to this stable ABI.

Rationale
=

The primary source of ABI incompatibility is changes to the layout
of in-memory structures. For example, the way in which string interning
works, or the data type used to represent the size of an object, have
changed during the life of Python 2.x. As a consequence, extension
modules making direct access to fields of strings, lists, or tuples,
would break if their code is loaded into a newer version of the
interpreter without recompilation: offsets of other fields may have
changed, making the extension modules access the wrong data.

In some cases, the incompatibilities only affect internal objects of
the interpreter, such as frame or code objects. For example, the way
line numbers are represented has changed in the 2.x lifetime, as has
the way in which local variables are stored (due to the introduction
of closures). Even though most applications probably never used these
objects, changing them required changing the PYTHON_API_VERSION.

On Linux, changes to the ABI are often not much of a problem: the
system will provide a default Python installation, and many extension
modules are already provided pre-compiled for that version. If additional
modules are needed, or additional Python versions, users can typically
compile them themselves on the system, resulting in modules that use
the right ABI.

On Windows, multiple simultaneous installations of different Python
versions are common, and extension modules are compiled by their
authors, not by end users. To reduce the risk of ABI incompatibilities,
Python currently introduces a new DLL name pythonXY.dll for each
feature release, whether or not ABI incompatibilities actually exist.

With this PEP, it will be possible to reduce the dependency of binary
extension modules on a specific Python feature release, and applications
embedding Python can be made to work with different releases.

Specification
=

The ABI specification falls into two parts: an API specification,
specifying what function (groups) are available for use with the
ABI, and a linkage specification specifying what libraries to link
with. The actual ABI (layout of structures in memory, function
calling conventions) is not specified, but implied by the
compiler. For selected platforms, a specific ABI is recommended.

During evolution of Python, new ABI functions will be added.
Applications using them will then have a requirement on a minimum
version of Python; this PEP provides no mechanism for such
applications to fall back when the Python library is too old.

Terminology
---

Applications and extension modules that want to use this ABI
are collectively referred to as "applications" from here on.

Header Files and Preprocessor Definitions
-

Applications shall only include the header file Python.h (before
including any system headers), or, optionally, include pyconfig.h, and
then Python.h.

During the compilation of applications, the preprocessor macro
Py_LIMITED_API must be defined. Doing so will hide all definitions
that are not part of the ABI.

Structures
--

Only the following structures and structure fields are accessible to
applications:

- PyObject (ob_refcnt, ob_type)
- PyVarObject (ob_base, ob_size)
- Py_buffer (buf, obj, len, itemsize, readonly, ndim, shape,
  strides, suboffsets, smalltable, internal)
- PyMethodDef (ml_name, ml_meth, ml_flags, ml_doc)
- PyMemberDef (name, type, offset, flags, doc)
- PyGetSetDef (name, get, set, doc, closure)

The accessor macros to these fields (Py_REFCNT, Py_TYPE, Py_SIZE)
are also available to applications.

The following types are available, but opaque (i.e. incomplete):

- PyThreadState
- PyInterpreterState

Type Objects


The structure of type objects is not available to applications;
declaration of static type objects is not possible anymore
(for applications using this ABI).
Instead, type objects get created dynamically. To allow an
easy creation of types (in particular, to be able to fill out
function pointers 

Re: Adding a Par construct to Python?

2009-05-17 Thread Paul Boddie
On 17 Mai, 14:05, jer...@martinfamily.freeserve.co.uk wrote:
 From a user point of view I think that adding a 'par' construct to
 Python for parallel loops would add a lot of power and simplicity,
 e.g.

 par i in list:
     updatePartition(i)

You can do this right now with a small amount of work to make
updatePartition a callable which works in parallel, and without the
need for extra syntax. For example, with the pprocess module, you'd
use boilerplate like this:

  import pprocess
  queue = pprocess.Queue(limit=ncores)
  updatePartition = queue.manage(pprocess.MakeParallel
(updatePartition))

(See http://www.boddie.org.uk/python/pprocess/tutorial.html#Map for
details.)

At this point, you could use a normal for loop, and you could then
sync for results by reading from the queue. I'm sure it's a similar
story with the multiprocessing/processing module.
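For comparison, the stdlib version of the same boilerplate-then-plain-loop pattern might look like this. The thread-backed Pool from multiprocessing.dummy is used only so the sketch is self-contained; CPU-bound work would need the process-based multiprocessing.Pool to get around the GIL, and updatePartition here is a stand-in for the OP's function:

```python
from multiprocessing.dummy import Pool   # thread-backed; same API as
                                         # the process-based multiprocessing.Pool

def updatePartition(i):
    return i * i                         # stand-in for the real per-item work

pool = Pool(4)                           # roughly the limit=ncores equivalent
results = pool.map(updatePartition, range(10))   # parallel map, order preserved
pool.close()
pool.join()
# results == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```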

 There would be no locking and it would be the programmer's
 responsibility to ensure that the loop was truly parallel and correct.

Yes, that's the idea.

 The intention of this would be to speed up Python execution on multi-
 core platforms. Within a few years we will see 100+ core processors as
 standard and we need to be ready for that.

In what sense are we not ready? Perhaps the abstractions could be
better, but it's definitely possible to run Python code on multiple
cores today and get decent core utilisation.

 There could also be parallel versions of map, filter and reduce
 provided.

Yes, that's what pprocess.pmap is for, and I imagine that other
solutions offer similar facilities.

 BUT...none of this would be possible with the current implementation
 of Python with its Global Interpreter Lock, which effectively rules
 out true parallel processing.

 See:http://jessenoller.com/2009/02/01/python-threads-and-the-global-inter...

 What do others think?

That your last statement is false: true parallel processing is
possible today. See the Wiki for a list of solutions:

http://wiki.python.org/moin/ParallelProcessing

In addition, Jython and IronPython don't have a global interpreter
lock, so you have the option of using threads with those
implementations, too.

Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Dirkjan Ochtman
On Sun, May 17, 2009 at 10:54 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 Excluded Functions
 --

 Functions declared in the following header files are not part
 of the ABI:
 - cellobject.h
 - classobject.h
 - code.h
 - frameobject.h
 - funcobject.h
 - genobject.h
 - pyarena.h
 - pydebug.h
 - symtable.h
 - token.h
 - traceback.h

What kind of effect does this have on optimization efforts, for
example all the stuff done by Antoine Pitrou over the last few months,
and the first few results from unladen? Will it mean we won't get to
the good optimizations until 4.0? Or does it just mean unladen swallow
takes longer to come back to trunk (until 4.0) and every extension
author who wants to be compatible with it will basically have the same
burden as now?

Cheers,

Dirkjan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
 Functions declared in the following header files are not part
 of the ABI:
 - cellobject.h
 - classobject.h
 - code.h
 - frameobject.h
 - funcobject.h
 - genobject.h
 - pyarena.h
 - pydebug.h
 - symtable.h
 - token.h
 - traceback.h
 
 What kind of effect does this have on optimization efforts, for
 example all the stuff done by Antoine Pitrou over the last few months,
 and the first few results from unladen? 

I fail to see the relationship, so: no effect that I can see.

Why do you think that optimization efforts could be related to
the PEP 384 proposal?

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Dirkjan Ochtman
On Mon, May 18, 2009 at 12:07 AM, Martin v. Löwis mar...@v.loewis.de wrote:
 I fail to see the relationship, so: no effect that I can see.

 Why do you think that optimization efforts could be related to
 the PEP 384 proposal?

It would seem to me that optimizations are likely to require data
structure changes, for exactly the kind of core data structures that
you're talking about locking down. But that's just a high-level view,
I might be wrong.

Cheers,

Dirkjan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Creating temperory files for a web application

2009-05-17 Thread sserrano
I would use a js plotting library, like http://code.google.com/p/flot/

On 8 mayo, 06:26, koranthala koranth...@gmail.com wrote:
 Hi,
    I am doing web development using Django. I need to create an image
 (chart) and show it to the users - based on some data which user
 selects.
    My question is - how do I create a temporary image for the user? I
 thought of tempfile, but I think it will be deleted once the process
 is done - which would happen by the time user starts seeing the image.
 I can think of no other option other than to have another script which
 will delete all images based on time of creation.
    Since python is extensively used for web development, I guess this
 should be a common scenario for many people here. How do you usually
 handle this?
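One common pattern, sketched below under the assumption that generated charts live in a directory the web server serves: write each image with a unique name via tempfile, hand the name to the template, and run a periodic sweep that deletes files older than some cutoff. MEDIA_DIR, MAX_AGE, save_chart and cleanup are illustrative names, not Django API:

```python
import os
import tempfile
import time

MEDIA_DIR = tempfile.mkdtemp()       # stand-in for the server's media directory
MAX_AGE = 3600                       # delete generated charts after an hour

def save_chart(png_bytes):
    """Write the image where the web server can serve it; return its path."""
    fd, path = tempfile.mkstemp(suffix=".png", dir=MEDIA_DIR)
    with os.fdopen(fd, "wb") as f:
        f.write(png_bytes)
    return path

def cleanup(now=None):
    """Remove generated images older than MAX_AGE seconds."""
    now = time.time() if now is None else now
    for name in os.listdir(MEDIA_DIR):
        path = os.path.join(MEDIA_DIR, name)
        if now - os.path.getmtime(path) > MAX_AGE:
            os.remove(path)

p = save_chart(b"\x89PNG fake bytes")      # pretend this is the rendered chart
cleanup(now=time.time() + 2 * MAX_AGE)     # simulate the sweep an hour later
chart_removed = not os.path.exists(p)      # True: the sweep deleted it
```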

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
Dirkjan Ochtman wrote:
 On Mon, May 18, 2009 at 12:07 AM, Martin v. Löwis mar...@v.loewis.de 
 wrote:
 I fail to see the relationship, so: no effect that I can see.

 Why do you think that optimization efforts could be related to
 the PEP 384 proposal?
 
 It would seem to me that optimizations are likely to require data
 structure changes, for exactly the kind of core data structures that
 you're talking about locking down. But that's just a high-level view,
 I might be wrong.

Ah. It's exactly the opposite: The purpose of the PEP is not to lock
the data structures down, but to allow more flexible evolution of
them - by completely hiding them from extension modules.

Currently, any data structure change must be weighed for its impact
on binary compatibility. With the PEP, changing structures can
be done fairly freely - with the exception of the very few structures
that do get locked down. In particular, the list of header files
that you quoted precisely contains the structures that can be
modified with no impact on the ABI.

I'm not aware that any of the structures that I propose to lock
would be relevant for optimization - but I might be wrong. If so,
I'd like to know, and it would be possible to add accessor functions
in cases where extension modules might still legitimately want to
access certain fields.

Certain changes to the VM would definitely be binary-incompatible,
such as removal of reference counting. However, such a change would
probably have a much wider effect, breaking not just binary
compatibility, but also source compatibility. It would be justified
to call a Python release that makes such a change 4.0.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
I thought of this. I uninstalled Tk from macports, but the same error
crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
don't know where. How can I find out?

Best,
Edward



 Have you installed Tk version 8.5?

 If so, remove it. You might also install the latest 8.4 version.
 --
 Piet van Oostrum p...@cs.uu.nl
 URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
 Private email: p...@vanoostrum.org

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Martin v. Löwis
Dino Viehland wrote:
 Dirkjan Ochtman wrote:
 It would seem to me that optimizations are likely to require data
 structure changes, for exactly the kind of core data structures that
 you're talking about locking down. But that's just a high-level view,
 I might be wrong.

 
 
 In particular I would guess that ref counting is the biggest issue here.
 I would think not directly exposing the field and having inc/dec ref
 functions (real methods, not macros) for it would give a lot more
 ability to change the API in the future.

In the context of optimization, I'm skeptical that introducing functions
for the reference counting would be useful. Making the INCREF/DECREF
macros functions just in case the reference counting goes away is IMO
an unacceptable performance cost.

Instead, such a change should go through the regular deprecation
procedure and/or cause the release of Python 4.0.

 It also might make it easier for alternate implementations to support
 the same API so some modules could work cross implementation - but I
 suspect that's a non-goal of this PEP :).

Indeed :-) I'm also skeptical that this would actually allow
cross-implementation modules to happen. The list of functions that
an alternate implementation would have to provide is fairly long.

The memory management APIs in particular also assume a certain layout
of Python objects in general, namely that they start with a header
whose size is a compile-time constant. Again, making this more flexible
just in case would also impact performance, and probably fairly badly
so.

 Other fields directly accessed (via macros or otherwise) might have similar
 problems but they don't seem as core as ref counting.

Access to the type object reference is probably similar. All the other
structs are used directly in C code, with no accessor macros.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Dino Viehland
Dirkjan Ochtman wrote:

 It would seem to me that optimizations are likely to require data
 structure changes, for exactly the kind of core data structures that
 you're talking about locking down. But that's just a high-level view,
 I might be wrong.



In particular I would guess that ref counting is the biggest issue here.
I would think not directly exposing the field and having inc/dec ref
functions (real methods, not macros) for it would give a lot more
ability to change the API in the future.

It also might make it easier for alternate implementations to support
the same API so some modules could work cross implementation - but I
suspect that's a non-goal of this PEP :).

Other fields directly accessed (via macros or otherwise) might have similar
problems but they don't seem as core as ref counting.
-- 
http://mail.python.org/mailman/listinfo/python-list


http://orbited.org/ - anybody using it?

2009-05-17 Thread Aljosa Mohorovic
can anybody comment on http://orbited.org/ ?
is it an active project? does it work?

Aljosa Mohorovic
-- 
http://mail.python.org/mailman/listinfo/python-list


how to verify SSL certificate chain - M2 Crypto library?

2009-05-17 Thread skrobul
Hi,

is there any simple way to do SSL certificate chain validation using
M2Crypto or any other library ?

Basically, what I want to achieve is to be able to say whether the
certificate chain contained in an 'XYZ.pem' file is issued by a known CA
(the list of common root-CA certs should be loaded from a separate
directory).  Right now I do it by spawning the command 'openssl verify
-CApath ca_certs_path XYZ.pem' and it works.  However, I think there
must be a simpler way.

I've spent the last few hours trying to go through the M2Crypto sources
and API documentation, but the only possible way that I've found is
spawning a separate server thread listening on some port and connecting
to it just to verify whether the cert chain is valid, which is at best
not right.  The other approach I've tried is the low-level function
m2.X509_verify(), but it does not work as I expect: it returns 0 (which
means valid) even if the CA certificate is not known.

Any suggestions / tips ?

thanks,
Marek Skrobacki
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 20:34:00 +0200, Diez B. Roggisch wrote:

 My math-skills are a bit too rusty to qualify the exact nature of the
 operation, commutativity springs to my mind.
 
 And how is reduce() supposed to know whether or not some arbitrary
 function is commutative?
 
 I don't recall anybody saying it should know that - do you? The OP wants
 to introduce parallel variants, not replace the existing ones.

Did I really need to spell it out? From context, I'm talking about the 
*parallel version* of reduce().



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread Michael Foord

Martin v. Löwis wrote:

Dino Viehland wrote:
  

Dirkjan Ochtman wrote:


It would seem to me that optimizations are likely to require data
structure changes, for exactly the kind of core data structures that
you're talking about locking down. But that's just a high-level view,
I might be wrong.

  

In particular I would guess that ref counting is the biggest issue here.
I would think not directly exposing the field and having inc/dec ref
functions (real methods, not macros) for it would give a lot more
ability to change the API in the future.



In the context of optimization, I'm skeptical that introducing functions
for the reference counting would be useful. Making the INCREF/DECREF
macros functions just in case the reference counting goes away is IMO
an unacceptable performance cost.

Instead, such a change should go through the regular deprecation
procedure and/or cause the release of Python 4.0.

  

It also might make it easier for alternate implementations to support
the same API so some modules could work cross implementation - but I
suspect that's a non-goal of this PEP :).



Indeed :-) I'm also skeptical that this would actually allow
cross-implementation modules to happen. The list of functions that
an alternate implementation would have to provide is fairly long.

  


Just in case you're unaware of it; the company I work for has an open 
source project called Ironclad. This *is* a reimplementation of the 
Python C API and gives us binary compatibility with [some subset of] 
Python C extensions for use from IronPython.


http://www.resolversystems.com/documentation/index.php/Ironclad.html

It's an ambitious project but it is now at the stage where 1000s of the 
Numpy and Scipy tests pass when run from IronPython. I don't think this 
PEP impacts the project, but it is not completely unfeasible for the 
alternative implementations to do this.


In particular we have had to address the issue of the GIL and extensions 
(IronPython has no GIL) and reference counting (which IronPython also 
doesn't use).


Michael Foord



--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog


--
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Kevin Walzer

Edward Grefenstette wrote:

I thought of this. I uninstalled Tk from macports, but the same error
crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
don't know where. How can I find out?

Best,
Edward



Look in /Library/Frameworks...

Kevin Walzer
Code by Kevin
http://www.codebykevin.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 384: Defining a Stable ABI

2009-05-17 Thread James Y Knight


On May 17, 2009, at 4:54 PM, Martin v. Löwis wrote:

Currently, each feature release introduces a new name for the
Python DLL on Windows, and may cause incompatibilities for extension
modules on Unix. This PEP proposes to define a stable set of API
functions which are guaranteed to be available for the lifetime
of Python 3, and which will also remain binary-compatible across
versions. Extension modules and applications embedding Python
can work with different feature releases as long as they restrict
themselves to this stable ABI.



It seems like a good ideal to strive for.

But I think this is too strong a promise. IMO it would be better to  
say that ABI compatibility across releases is a goal. If someone does  
make a change that breaks the ABI, I'd expect whoever is proposing it  
to put forth a fairly strong argument towards why it's a worthwhile  
change. But it should be possible and allowed, given the right  
circumstances. Because I think it's pretty much inevitable that it  
*will* need to happen, sometime.


(of course there will need to be ABI tests, so that any potential ABI  
breakages are known about when they occur)


Python is much more defined by its source language than its C  
extension API, so tying the python major version number to the C ABI  
might not be the best idea from a marketing standpoint. (I can see  
it now... "Python 4.0 major new features: we changed the C method  
definition struct layout incompatibly" :)


James
--
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread Steven D'Aprano
On Sun, 17 May 2009 20:36:36 +0200, Diez B. Roggisch wrote:

 But reduce() can't tell whether the function being applied is
 commutative or not. I suppose it could special-case a handful of
 special cases (e.g. operator.add for int arguments -- but not floats!)
 or take a caller- supplied argument that tells it whether the function
 is commutative or not. But in general, you can't assume the function
 being applied is commutative or associative, so unless you're willing
 to accept undefined behaviour, I don't see any practical way of
 parallelizing reduce().
 
 def reduce(operation, sequence, startitem=None, parallelize=False)
 
 should be enough. Approaches such as OpenMP also don't guess, they use
 explicit annotations.

It would be nice if the OP would speak up and tell us what he intended, 
so we didn't have to guess what he meant. We're getting further and 
further away from his original suggestion of a par loop.

If you pass parallize=True, then what? Does it assume that operation is 
associative, or take some steps to ensure that it is? Does it guarantee 
to perform the operations in a specific order, or will it potentially 
give non-deterministic results depending on the order that individual 
calculations come back?

As I said earlier, parallelizing map() sounds very plausible to me, but 
the approaches that people have talked about for parallelizing reduce() 
so far sound awfully fragile and magically to me. But at least I've 
learned one thing: given an associative function, you *can* parallelize 
reduce using a tree. (Thanks Roy!)
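For the curious, the tree-shaped reduction mentioned above can be sketched in a few lines. This sequential sketch only shows the shape of the computation (the pairwise calls at each level are what a parallel implementation would farm out to workers), and `tree_reduce` is a hypothetical name, not an existing API:

```python
import operator

def tree_reduce(func, items):
    """Reduce a sequence by combining adjacent pairs, tree-style.

    Assumes func is associative; each pass halves the number of
    items, and the pairwise calls within one pass are independent,
    so they could run in parallel.
    """
    items = list(items)
    if not items:
        raise TypeError("tree_reduce() of empty sequence")
    while len(items) > 1:
        pairs = [items[i:i + 2] for i in range(0, len(items), 2)]
        items = [func(*p) if len(p) == 2 else p[0] for p in pairs]
    return items[0]

print(tree_reduce(operator.add, range(10)))  # 45, same as reduce()
```

Note that for a non-associative function such as operator.sub the tree grouping gives a different (and essentially arbitrary) answer than a left fold, which is exactly the fragility discussed above.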



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Ned Deily
In article 
f9435b73-5001-48d9-b2e8-6fd866339...@l32g2000vba.googlegroups.com,
 Edward Grefenstette egre...@gmail.com wrote:
 I thought of this. I uninstalled Tk from macports, but the same error
 crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
 don't know where. How can I find out?

Look in /Library/Frameworks for Tcl.framework and Tk.framework.  You can 
safely delete those if you don't need them.  But also make sure you 
update to the latest 2.6 (currently 2.6.2) python.org version; as noted, 
the original 2.6 python.org release had issues with user-installed Tcl 
and Tk frameworks.

-- 
 Ned Deily,
 n...@acm.org

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
Thanks to Kevin and Ned for the pointers.
The question is now this. Running find tells me I have tk.h in the
following locations:
===
/Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/Headers/tk.h
/Developer/SDKs/MacOSX10.4u.sdk/usr/include/tk.h
/Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/Headers/tk.h
/Developer/SDKs/MacOSX10.5.sdk/usr/include/tk.h
/Library/Frameworks/Tk.framework/Versions/8.5/Headers/tk.h
/System/Library/Frameworks/Tk.framework/Versions/8.4/Headers/tk.h
/usr/include/tk.h
/usr/local/WordNet-3.0/include/tk/tk.h
===

This seems to indicate that the Tk 8.4 framework is installed
in
===
/Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/
/Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Tk.framework/
Versions/8.4/Headers/tk.h
/System/Library/Frameworks/Tk.framework/Versions/8.4/Headers/tk.h
===

Whereas Tk 8.5 is installed in:
===
/Library/Frameworks/Tk.framework/Versions/8.5/
===

Which ones should I delete? Should I remove all the other tk.h files?

Sorry if these are rather dumb questions, but I really do appreciate
the help.

Best,
Edward

On May 18, 1:09 am, Ned Deily n...@acm.org wrote:
 In article
 f9435b73-5001-48d9-b2e8-6fd866339...@l32g2000vba.googlegroups.com,
  Edward Grefenstette egre...@gmail.com wrote:

  I thought of this. I uninstalled Tk from macports, but the same error
  crops up. Evidently, Tk 8.5 remains installed somewhere else, but I
  don't know where. How can I find out?

 Look in /Library/Frameworks for Tcl.framework and Tk.framework.  You can
 safely delete those if you don't need them.  But also make sure you
 update to the latest 2.6 (currently 2.6.2) python.org version; as noted,
 the original 2.6 python.org release had issues with user-installed Tcl
 and Tk frameworks.

 --
  Ned Deily,
  n...@acm.org

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Kevin Walzer

Edward Grefenstette wrote:



Whereas Tk 8.5 is installed in:
===
/Library/Frameworks/Tk.framework/Versions/8.5/
===



Delete this one if you want to ensure that Python sees 8.4.

--
Kevin Walzer
Code by Kevin
http://www.codebykevin.com
--
http://mail.python.org/mailman/listinfo/python-list


Generating Tones With Python

2009-05-17 Thread Adam Gaskins
I am pretty sure this shouldn't be as hard as I'm making it to be, but 
how does one go about generating tones of specific frequency, volume, and 
L/R pan? I've been digging around the internet for info, and found a few 
examples. One was with gstreamer, but I can't find much in the 
documentation to explain how to do this. Also some people said to use 
tkSnack (snack), but the one example I found to do this didn't do anything 
on my machine, no error or sounds. I'd like this to be cross-platform, 
but at this point I just want to do it any way I can.

Thanks,
-Adam
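For what it's worth, a tone with a given frequency, volume, and L/R pan can be generated with nothing but the standard library by writing samples to a WAV file. `write_tone` is a hypothetical helper, and the simple linear pan law is an assumption (real mixers often use constant-power panning):

```python
import math
import struct
import wave

def write_tone(path, freq=440.0, seconds=1.0, volume=0.5, pan=0.0,
               rate=44100):
    """Write a stereo sine tone to a WAV file.

    pan ranges from -1.0 (full left) to 1.0 (full right).  The
    linear pan law used here is an assumption for simplicity.
    """
    left_gain = volume * (1.0 - pan) / 2.0
    right_gain = volume * (1.0 + pan) / 2.0
    frames = bytearray()
    for n in range(int(rate * seconds)):
        sample = math.sin(2.0 * math.pi * freq * n / rate)
        frames += struct.pack('<hh',
                              int(32767 * left_gain * sample),
                              int(32767 * right_gain * sample))
    with wave.open(path, 'wb') as w:
        w.setnchannels(2)    # stereo
        w.setsampwidth(2)    # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_tone('tone.wav', freq=440.0, pan=-0.3)
```

The resulting file can be played with any system audio player; for live playback rather than file output you would still need a library such as pygame or a gstreamer binding.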
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Ned Deily
In article 
c094cd74-78a3-46dc-9a53-6da08c19e...@o30g2000vbc.googlegroups.com,
 Edward Grefenstette egre...@gmail.com wrote:
 Bingo! Updating to Python 2.6.2 did the trick (I had 2.6). I just had
 to relink the /usr/bin/python to the Current directory in /Library/
 Frameworks/Python.framework/Versions/ and everything worked without
 deletions etc. Thanks for your help, everyone!

Glad that helped but beware: changing /usr/bin/python is not 
recommended.  That link (and everything else in /usr/bin) is maintained 
by Apple and should always point to the OSX-supplied python at
/System/Library/Python.framework/Versions/2.5/bin/python

By default, the python.org installers create links at 
/usr/local/bin/python and /usr/local/bin/python2.6; use one of those 
paths to get to the python.org 2.6 or ensure /usr/local/bin comes before 
/usr/bin on your $PATH.

-- 
 Ned Deily,
 n...@acm.org

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with Tkinter on OS X --- driving me insane!

2009-05-17 Thread Edward Grefenstette
Bingo! Updating to Python 2.6.2 did the trick (I had 2.6). I just had
to relink the /usr/bin/python to the Current directory in /Library/
Frameworks/Python.framework/Versions/ and everything worked without
deletions etc. Thanks for your help, everyone!

Best,
Edward
-- 
http://mail.python.org/mailman/listinfo/python-list


Python mail truncate problem

2009-05-17 Thread David
Hi,

I am writing a Python script to process e-mails in a user's mail
account. What I want to do is update an e-mail's Status to 'R'
after processing it; however, the following script truncates old
e-mails even though it updates the e-mail's Status correctly. Does
anybody know how to fix this?

Thanks so much.

fp = '/var/spool/mail/' + user
mbox = mailbox.mbox(fp)

for key, msg in mbox.iteritems():
    flags = msg.get_flags()

    if 'R' not in flags:
        # now process the e-mail
        # now update status
        msg.add_flag('R' + flags)
        mbox[key] = msg
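A likely culprit is rewriting the mbox without locking and flushing. Here is a hedged sketch of the same loop with mbox.lock()/flush()/unlock() added (`mark_read` is a hypothetical name; also note add_flag only needs the flags being added, and it is items() rather than iteritems() on Python 3):

```python
import mailbox

def mark_read(path):
    """Mark unread messages in an mbox file as read ('R' status flag).

    Locking before iterating and flushing once at the end avoids the
    partial rewrites that can appear to truncate older messages.
    """
    mbox = mailbox.mbox(path)
    mbox.lock()
    try:
        for key, msg in mbox.items():   # iteritems() on Python 2
            if 'R' not in msg.get_flags():
                # ... process the e-mail here ...
                msg.add_flag('R')
                mbox[key] = msg
        mbox.flush()                    # rewrite the file once
    finally:
        mbox.unlock()
```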

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: http://orbited.org/ - anybody using it?

2009-05-17 Thread alex23
On May 18, 9:14 am, Aljosa Mohorovic aljosa.mohoro...@gmail.com
wrote:
 can anybody comment on http://orbited.org/?
 is it an active project? does it work?

I have no idea about your second question, but looking at PyPI, the
module was last updated on the 9th of this month, so I'd say it's very
much an active project:

http://pypi.python.org/pypi/orbited/0.7.9
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generating Tones With Python

2009-05-17 Thread Matus
try http://www.pygame.org; as far as I remember there is a way to
generate sound arrays, though I'm not sure about the pan

m

Adam Gaskins wrote:
 I am pretty sure this shouldn't be as hard as I'm making it to be, but 
 how does one go about generating tones of specific frequency, volume, and 
 L/R pan? I've been digging around the internet for info, and found a few 
 examples. One was with gstreamer, but I can't find much in the 
 documentation to explain how to do this. Also some people said to use 
 tkSnack (snack), but the one example I found to do this didn't do anything 
 on my machine, no error or sounds. I'd like this to be cross-platform, 
 but at this point I just want to do it any way I can.
 
 Thanks,
 -Adam
-- 
http://mail.python.org/mailman/listinfo/python-list


Which C compiler?

2009-05-17 Thread Jive Dadson
I am using Python 2.4.  I need to make a native Python extension for 
Windows XP.  I have both VC++ 6.0 and Visual C++ 2005 Express Edition. 
Will VC++ 6.0 do the trick?  That would be easier for me, because the 
project is written for that one.  If not, will the 2005 compiler do it?


Thanks much,
Jive
--
http://mail.python.org/mailman/listinfo/python-list


[issue808164] socket.close() doesn't play well with __del__

2009-05-17 Thread test...@smail.ee

test...@smail.ee test...@smail.ee added the comment:

The same happens when you try to close a pycurl handler in a
__del__ method.

--
nosy: +test157
versions: +Python 2.5 -Python 2.6, Python 2.7, Python 3.0, Python 3.1

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue808164
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue4144] 3 tutorial documentation errors

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

I fixed the three docs issues in r72703 and r72704.  The doctest issue
is not an issue; the single backslash is already removed by Python's
tokenizer, so that doctest only sees 'doesn't' which is obviously a
syntax error.  You need to duplicate the backslash, or use a raw docstring.

--
resolution:  - fixed
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4144
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6002] test_urllib2_localnet DigestAuthHandler leaks nonces

2009-05-17 Thread Senthil

Changes by Senthil orsent...@gmail.com:


--
nosy: +orsenthil

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6002
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6017] Dict fails to notice addition and deletion of keys during iteration

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

OK, I now changed it to "may raise ... or fail to iterate over all
entries" in r72708.
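In CPython the size change is what trips the check; a minimal demonstration (the subtler delete-then-add case, which keeps the size constant and may silently skip entries, is what the issue title refers to):

```python
d = {'a': 1, 'b': 2}
try:
    for key in d:
        d[key + key] = 0   # adding a key changes the dict's size
except RuntimeError as exc:
    print(exc)             # dictionary changed size during iteration
```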

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6017
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5951] email.message : get_payload args's documentation is confusing

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

Will be fixed along with all other such instances.

--
resolution:  - postponed
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5951
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5942] Ambiguity in dbm.open flag documentation

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

I think you meant anydbm?  It's already documented well for dbm.open --
I've copied over that table to anydbm in r72710.
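For reference, the flag semantics from that table, shown here against the Python 3 `dbm` module (which took over anydbm's role); a minimal sketch writing to a temporary path:

```python
import dbm
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'example_db')

# 'r': read-only (default); 'w': read/write, must already exist;
# 'c': read/write, created if missing; 'n': always a new, empty db.
with dbm.open(path, 'c') as db:
    db['key'] = 'value'          # keys and values are stored as bytes

with dbm.open(path, 'r') as db:
    print(db[b'key'])            # b'value'
```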

--
resolution:  - fixed
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5942
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6012] enhance getargs O to accept cleanup function

2009-05-17 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

 Modifying convert_to_unicode is incorrect; this function is not an O
 converter. Instead, PyUnicode_FSConverter needs to change.

Well, convert_to_unicode used to use O before, and I fixed the memory
leak with the O formatter in the current way. I used this function
because I know more about it than PyUnicode_FSConverter. Sorry for the
confusion. ;-)

 Attached is a patch that does that, and also uses the O approach. It
 also adjusts the patch to use capsules.

Oh, so this is how the new capsule functions are used. I noticed
capsule.c was added, but I didn't know how to use it.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6012
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6045] Fix dbm interfaces

2009-05-17 Thread Georg Brandl

New submission from Georg Brandl ge...@python.org:

All the dbm.* modules currently have different interfaces, and different
levels of supporting the Python3-style dictionary interface -- while the
docs claim they all have (most of) the dict interface.

For example, both dbm.gnu and dbm.ndbm only have keys() methods, and
they return a list.  Etc. for other dict-style methods.

So, either we remove the claim that they have a dict-style interface
beyond __*item__() and keys(), or we do something about it.

--
components: Library (Lib)
messages: 87968
nosy: georg.brandl
priority: release blocker
severity: normal
stage: needs patch
status: open
title: Fix dbm interfaces
type: behavior
versions: Python 3.1

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6045
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5937] Problems with dbm documentation

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

Superseded by #6045.

--
resolution:  - duplicate
status: open - closed
superseder:  - Fix dbm interfaces

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5937
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6046] test_distutils.py fails on VC6(Windows)

2009-05-17 Thread Hirokazu Yamamoto

New submission from Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp:

test_distutils (test_get_outputs) fails on VC6. I cannot tell whether
this happens on VC9 too because the buildbot is down. :-(

/

test_get_outputs (distutils.tests.test_build_ext.BuildExtTestCase) ... foo.c
   Creating library
c:\docume~1\whiter~1\locals~1\temp\tmpzdhkyv\tempt\docume~1\whiter~1\locals~1\temp\tmpkhvw2m\foo.lib
and object
c:\docume~1\whiter~1\locals~1\temp\tmpzdhkyv\tempt\docume~1\whiter~1\locals~1\temp\tmpkhvw2m\foo.exp
is being created
CVPACK : fatal error CK1003: cannot open file
c:\docume~1\whiter~1\locals~1\temp\tmpb4w8fe\foo.exe
LINK : warning LNK4027: CVPACK error

/

I looked into the directory tmpkhvw2m before it was deleted, but
I could not find foo.exe. There was only foo.

--
assignee: tarek
components: Distutils
messages: 87970
nosy: ocean-city, tarek
severity: normal
status: open
title: test_distutils.py fails on VC6(Windows)
versions: Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6046
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6046] test_distutils.py fails on VC6(Windows)

2009-05-17 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

Here is translated version.

test_get_outputs (distutils.tests.test_build_ext.BuildExtTestCase) ... foo.c
   Library
c:\docume~1\whiter~1\locals~1\temp\tmpzdhkyv\tempt\docume~1\whiter~1\l
ocals~1\temp\tmpkhvw2m\foo.lib and object
c:\docume~1\whiter~1\locals~1\temp\tmp
zdhkyv\tempt\docume~1\whiter~1\locals~1\temp\tmpkhvw2m\foo.exp is being created
CVPACK : fatal error CK1003: File
c:\docume~1\whiter~1\locals~1\temp\tmpb4w8fe\f
oo.exe cannot be opened
LINK : warning LNK4027: CVPACK error

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6046
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5935] Better documentation of use of BROWSER environment variable

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

Fixed in r72712.

--
resolution:  - fixed
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6046] test_distutils.py fails on VC6(Windows)

2009-05-17 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

Here is translated version.

test_get_outputs (distutils.tests.test_build_ext.BuildExtTestCase) ... foo.c
   Library
c:\docume~1\whiter~1\locals~1\temp\tmpzdhkyv\tempt\docume~1\whiter~1\l
ocals~1\temp\tmpkhvw2m\foo.lib and object
c:\docume~1\whiter~1\locals~1\temp\tmp
zdhkyv\tempt\docume~1\whiter~1\locals~1\temp\tmpkhvw2m\foo.exp is being
created
CVPACK : fatal error CK1003: File
c:\docume~1\whiter~1\locals~1\temp\tmpb4w8fe\f
oo.exe cannot be opened
LINK : warning LNK4027: CVPACK error

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6046
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6046] test_distutils.py fails on VC6(Windows)

2009-05-17 Thread Hirokazu Yamamoto

Changes by Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp:


--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6046
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6044] Exception message in int() when trying to convert a complex

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

That no unambiguous conversion between complex and int is defined is
exactly the reason for this error message.  You could want the absolute
value, the real part, the imaginary part, or even the polar angle...

int(abs(z)) works as intended, giving you the absolute value of the
complex number, which is 1 for 1j**2.
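Each of the plausible conversions Georg lists can be spelled explicitly; a small sketch (the TypeError message text varies across Python versions, so it is printed rather than assumed):

```python
import cmath

z = 1j ** 2                 # == (-1+0j)

try:
    int(z)                  # ambiguous, so it raises
except TypeError as exc:
    print(exc)              # message text varies by Python version

print(int(abs(z)))          # magnitude: 1
print(int(z.real))          # real part: -1
print(int(z.imag))          # imaginary part: 0
print(cmath.phase(z))       # polar angle: pi
```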

--
nosy: +georg.brandl
resolution:  - invalid
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6044
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6042] Document and slightly simplify lnotab tracing

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

Jeffrey, while you're at lnotab stuff, could you have a look at #1689458
as well?

--
nosy: +georg.brandl

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6042
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6044] Exception message in int() when trying to convert a complex

2009-05-17 Thread Mark Dickinson

Mark Dickinson dicki...@gmail.com added the comment:

I always found the "use int(abs(z))" part of that message odd,
as well.  As Georg points out, there are many possible ways
one might want to convert complex to int; it seems strange to
give advice for one particular conversion when it may well not match
what the user wanted.

I'd suggest just dropping the "use int(abs(z))" from the error message.
I think it's more likely to be confusing than helpful.

The same applies to float(complex).

--
nosy: +marketdickinson

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6044
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6023] Search does not intelligently handle module.function queries on docs.python.org

2009-05-17 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

This is already done in Sphinx trunk, and will be used for Python as
soon as it is released.

--
resolution:  - fixed
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6023
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6047] install target in python 3.x makefile should be fullinstall

2009-05-17 Thread Ronald Oussoren

New submission from Ronald Oussoren ronaldousso...@mac.com:

The default install target in the toplevel makefile for python 3.x 
behaves like the altinstall target in python 2.x. This behaviour was 
chosen to avoid conflicts between python 3.x and python 2.x 
installations.

IMO this is no longer needed: make fullinstall can coexist nicely with 
Python 2.x because the binaries that get installed for Python 3.x are 
named differently than those in Python 2.x.  Furthermore the 
fullinstall target is what most users will actually want to use: it 
installs the 'unversioned' filenames.

I therefore propose to rename the install target in Python 3.x to 
altinstall and to rename the fullinstall target to install.

--
assignee: benjamin.peterson
messages: 87980
nosy: benjamin.peterson, ronaldoussoren
severity: normal
status: open
title: install target in python 3.x makefile should be fullinstall
type: behavior
versions: Python 3.1

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6047
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6046] test_distutils.py fails on VC6(Windows)

2009-05-17 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

Ah, well, if this works on VC9, I think you don't have to worry about
VC6. (VC6 is too old.) Of course, I'm happy if it works on VC6 too.

Anyway, there is msvc9compiler.py, so maybe different code runs for
VC9 than for VC8 (or older)?

I don't know about the buildbot slave.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6046
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5514] Darwin framework libpython3.0.a is not a normal static library

2009-05-17 Thread Ronald Oussoren

Ronald Oussoren ronaldousso...@mac.com added the comment:

Jack: could you please explain what the issue is? Unless you do so I 
will close this issue as "won't fix".

In particular: will linking with -lpython3.0 work with this 
hypothetical future version of Xcode you're talking about? 

As mentioned before, libpythonX.Y.a is present for compatibility with a 
number of popular third-party tools that have manually written makefiles 
that expect this library to be present. Making the file a symlink to the 
actual dylib in the framework seems to work as intended at the moment: 
the executable gets linked to the framework (just as if it was linked 
with -framework Python).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5514
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5514] Darwin framework libpython3.0.a is not a normal static library

2009-05-17 Thread Ronald Oussoren

Changes by Ronald Oussoren ronaldousso...@mac.com:


--
assignee:  - ronaldoussoren

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5514
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5766] Mac/scripts/BuildApplet.py reset of sys.executable during install can cause it to use wrong modules

2009-05-17 Thread Ronald Oussoren

Ronald Oussoren ronaldousso...@mac.com added the comment:

I haven't looked into this particular problem yet, but please note that the 
Mac-specific libraries do not work with a UCS4 build of python.

--
nosy: +ronaldoussoren

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5766
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6012] enhance getargs O to accept cleanup function

2009-05-17 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

Well, please see r65745. That was O before.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6012
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


