[Python-Dev] Some new additions to functools

2007-04-15 Thread SevenInchBread

So I've cooked up some very simple functions to add to functools - to expand
it into a more general-purpose module.


def cat(x): return x

class nullfunc(object):
   def __call__(self, *args, **kargs): return self
   def __getattr__(self, name): return getattr(None, name)

def multimap(func, seq, n=2):
   assert n > 0, "n must be positive"
   if n == 1: return map(func, seq)
   else:   return map(lambda x: multimap(func, x, n-1), seq)

def multifilter(func, seq, n=2):
   return multimap(lambda x: filter(func, x), seq, n-1)

def multireduce(func, seq, n=2):
   return multimap(lambda x: reduce(func, x), seq, n-1)

In an expression, cat achieves the effect of doing nothing - which makes it
a nice default value for some filter-like functions. It's not a huge
improvement - lambda x: x is almost as trivial to define, but I think it
would be nice to have a standard identity function that all of Python could
recognize.
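For illustration (a usage sketch only - the pick helper is made up for the
example), cat works as a do-nothing default for a key-style parameter:

def pick(seq, key=cat):
    "Return the largest element; by default compare the values themselves."
    return max(seq, key=key)

pick([3, 1, 2])              # 3  - cat leaves each value untouched
pick(["aa", "b"], key=len)   # 'aa'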

nullfunc is a function that *chomps* away at its input while otherwise
retaining the properties of None - much like the recently proposed callable
None - except not as disastrous to existing practices and potentially more
useful as an optional behavior. This is something that cannot be as quickly
implemented as the cat function.
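To make the intent concrete, a hedged usage sketch (the process/on_progress
names are invented for the example, not part of the proposal):

def process(items, on_progress=nullfunc()):
    for i, item in enumerate(items):
        on_progress(i, item)    # silently swallowed by the default nullfunc
        # ... real work on item here ...

process(["a", "b", "c"])        # no callback supplied; nullfunc absorbs every call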

multimap is a multi-dimensional mapping function that recursively descends a
sequence and applies a function to the data within.

multifilter is the analogous multi-dimensional filter: it recursively descends
a sequence and filters the innermost data.

multireduce - well... you get the idea
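Roughly, with the default n=2 these behave like this on a nested list (my own
examples, assuming Python 2 map/filter/reduce semantics):

matrix = [[1, 2, 3], [4, 5, 6]]

multimap(lambda x: x * 10, matrix)       # [[10, 20, 30], [40, 50, 60]]
multifilter(lambda x: x % 2, matrix)     # [[1, 3], [5]]
multireduce(lambda a, b: a + b, matrix)  # [6, 15] - one reduction per inner list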

A better place to put some of these functions might be __builtin__, so you
don't have to waste time importing something as basic as cat.









--
What's money? A man is a success if he gets up in the morning and goes to
bed at night and in between does what he wants to do. ~ Bob Dylan


[Python-Dev] functools additions

2007-04-15 Thread SevenInchBread

So if it's alright with the privileged folk - I'd like to commit these
minor (and probably non-controversial) additions to the functools module.



def cat(x): return x

def multimap(func, seq, n=2):
   assert n > 0, "n must be positive"
   return (map(func, seq)
           if n == 1 else
           map(lambda x: multimap(func, x, n-1), seq))

def multifilter(func, s, n=2):
   return multimap(lambda x: filter(func, x), s, n-1)

def multireduce(func, s, n=2):
   return multimap(lambda x: reduce(func, x), s, n-1)


class nullfunc(object):
   def __call__(self, *a, **k): return self
   def __getattr__(self, name): return getattr(None, name)



cat is a generic identity function - useful for some higher-order functions
to specify a function that does nothing. multimap, multifilter, and
multireduce, are all multi-dimensional versions of map, filter, and reduce.
nullfunc is a special callable object that emulates the failed callable None
proposal - which wasn't really a good idea, but would have been more useful
as a different object apart from None.

you could probably even put cat in __builtins__ - so you don't need to waste
effort importing such a trivial function.

--
What's money? A man is a success if he gets up in the morning and goes to
bed at night and in between does what he wants to do. ~ Bob Dylan


Re: [Python-Dev] new subscriber looking for grunt work

2007-04-15 Thread Martin v. Löwis
 I'll be as brief as possible.  The welcome message to this list suggested
 that I post a brief introduction of myself, so here it goes.  I've been
 programming for about 10 years now (7 professionally).  I would rank
 myself as a moderate programmer always looking to improve, and
 would like to help on this project in whatever way I can.  I'll be trolling
 for grunt work and any documentation that needs to be done while
 I get the hang of things around here.

Hi Sergio,

an activity that is always worthwhile is bug and patch review. Pick a
patch or a bug report that hasn't seen any feedback (there are,
unfortunately, plenty of them), and try to analyse it.

See

http://mail.python.org/pipermail/python-dev/2002-November/029831.html

for an older article on what these activities involve.

Regards,
Martin


Re: [Python-Dev] Extended Buffer Interface/Protocol

2007-04-15 Thread Travis Oliphant
Greg Ewing wrote:

 But since the NumPy object has to know about the provider,
 it can simply pass the release call on to it if appropriate.
 I don't see how this case necessitates making the release call
 on a different object.

 I'm -1 on involving any other objects or returning object
 references from the buffer interface, unless someone can
 come up with a use case which actually *requires* this
 (as opposed to it just being something which might be
 nice to have). The buffer interface should be Blazingly
 Fast(tm), and messing with PyObject*s is not the way to
 get that.

The current proposal would still be fast, but it would also be more flexible
for objects whose memory representation cannot be shared directly: such
objects could create their own sharing object that, for example, copies the
data into a contiguous chunk first.   Objects whose memory can be shared
perfectly through the interface would simply pass themselves as the return
value (after incrementing their extant buffers count by one).


 Seems to me the lock should apply to *everything* returned
 by getbuffer. If the requestor is going to iterate over the
 data, and there are multiple dimensions, surely it's going to
 want to refer to the shape and stride info at regular intervals
 while it's doing that. Requiring it to make its own copy
 would be a burden.


There are two use cases that seem to be under discussion.

1) When you want to apply an algorithm to an arbitrary object that 
exposes the buffer interface

2) When you want to create an object that shares memory with another 
object exposing the buffer interface.

These two use cases have slightly different needs.  What I want to avoid 
is forcing the exporting object to be unable to change its shape and 
strides just because an object is using the memory for use case #2. 

I think the solution that states the shape and strides information are 
only guaranteed valid until the GIL is released is sufficient.

Alternatively, one could release the shape and strides and format 
separately from the memory with a flag as a second argument to 
releasebuffer.

-Travis







 -- 
 Greg





Re: [Python-Dev] Extended Buffer Interface/Protocol

2007-04-15 Thread Travis Oliphant
Carl Banks wrote:

 ITSM that we are using the word view very differently.  Consider 
 this example:

 A = zeros((100,100))
 B = A.transpose()


You are thinking of NumPy's particular use case.  I'm thinking of a 
generic use case.  So, yes I'm using the word view in two different 
contexts.

In this scenario, NumPy does not even use the buffer interface.  It 
knows how to transpose its own objects and does so by creating a new 
NumPy object (with its own shape and strides space) with a data buffer 
pointed to by A.

Yes, I use the word view for this NumPy usage, but only in the context 
of NumPy.   In the PEP, I've been using the word view quite a bit more 
generically.

So, I don't think this is a good example because A.transpose() will 
never call getbuffer of the A object (it will instead use the known 
structure of NumPy directly).  So, let's talk about the generic 
situation instead of the NumPy specific one.


 I'd suggest the object returned by A.getbuffer should be called the 
 buffer provider or something like that.

I don't care what we call it.  I've been using the word view because 
of the obvious analogy to my use of view in NumPy.  When I had 
envisioned returning an actual object very similar to a NumPy array from 
the buffer interface it made a lot of sense to call it a view.  Now, I'm 
fine to call it "buffer provider".


 For the sake of discussion, I'm going to avoid the word view 
 altogether.  I'll call A the exporter, as before.  B I'll refer to as 
 the requestor.  The object returned by A.getbuffer is the provider.

Fine.  Let's use that terminology since it is new and not cluttered by 
other uses in other contexts.

 Having thought quite a bit about it, and having written several 
 abortive replies, I now understand it and see the importance of it.  
 getbuffer returns the object that you are to call releasebuffer on.  
 It may or may not be the same object as exporter.  Makes sense, is 
 easy to explain.

Yes, that's exactly all I had considered it to be.   Only now, I'm 
wondering if we need to explicitly release a lock on the shape, strides, 
and format information as well as the buffer location information.


 It's easy to see possible use cases for returning a different object.  
 A  hypothetical future incarnation of NumPy might shift the 
 responsibility of managing buffers from NumPy array object to a hidden 
 raw buffer object.  In this scenario, the NumPy object is the 
 exporter, but the raw buffer object the provider.

 Considering this use case, it's clear that getbuffer should return the 
 shape and stride data independently of the provider.  The raw buffer 
 object wouldn't have that information; all it does is store a pointer 
 and keep a reference count.  Shape and stride is defined by the exporter.


So, who manages the memory for the shape, strides, and isptr arrays?
When a provider is created, do these need to be created so that the shape
and strides arrays are never deallocated while the provider is in use?

The situation I'm considering is this: you have a NumPy array of shape
(2,3,3), you obtain a provider for it (presumably from another package),
and that provider retains a lock on the memory for a while.  Should it
also retain a lock on the shape and strides arrays?   Can the NumPy array
re-assign the shape and strides while the provider has still not been
released?

I would like to say yes, which means that the provider must supply its 
own copy of the shape and strides arrays.  This could be the policy.  
Namely, that the provider must supply the memory for the shape, strides, 
and format arrays, which is guaranteed for as long as a lock is held.  In 
the case of NumPy, that provider could create its own copy of the shape 
and strides arrays (or do it when the shape and strides arrays are 
re-assigned).
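A rough Python-level sketch of that policy (all names here are illustrative
only; the real interface is at the C level):

class Provider(object):
    # Snapshot shape/strides at export time so the exporter is free to
    # re-assign its own shape/strides later without invalidating this view.
    def __init__(self, exporter):
        self.buf = exporter.data                # shared memory, not copied
        self.shape = tuple(exporter.shape)      # private copies, valid for
        self.strides = tuple(exporter.strides)  # the lifetime of the lock
        exporter.extant_buffers += 1

    def release(self, exporter):
        exporter.extant_buffers -= 1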


 Second question: what happens if a view wants to re-export the 
 buffer? Do the views of the buffer ever change?  Example, say you 
 create a transposed view of a Numpy array.  Now you want a slice of 
 the transposed array.  What does the transposed view's getbuffer 
 export?


 Basically, you could not alter the internal representation of the 
 object while views which depended on those values were around.

 In NumPy, a transposed array actually creates a new NumPy object that 
 refers to the same data but has its own shape and strides arrays.

 With the new buffer protocol, the NumPy array would not be able to 
 alter it's shape/strides/or reallocate its data areas while views 
 were being held by other objects.


 But requestors could alter their own copies of the data, no?  Back to 
 the transpose example: B itself obviously can't use the same strides 
 array as A uses.  It would have to create its own strides, right?


I don't like this example because B does have its own strides, because 
it is a complete NumPy array.   I think we are talking about the same 
thing and that is who manages the memory for the shape and strides 
(and format). 

I think the 

[Python-Dev] Summary of Tracker Issues

2007-04-15 Thread Tracker

ACTIVITY SUMMARY (03/20/07 - 03/27/07)
Tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue 
number.  Do NOT respond to this message.


 1646 open ( +2) /  8583 closed ( +1) / 10229 total ( +3)

Average duration of open issues: 747 days.
Median duration of open issues: 695 days.

Open Issues Breakdown
   open  1646 ( +2)
pending 0 ( +0)

Issues Created Or Reopened (3)
______________________________

Buy Tramadol online                                          03/25/07
       http://bugs.python.org/issue1023    created   conor2k

Testing bugs.python.org domain                               03/26/07
CLOSED http://bugs.python.org/issue1024    created   izak

Test issue                                                   03/26/07
       http://bugs.python.org/issue1025    created   loewis


Issues Now Closed (2)
_____________________

Personal loans. Bad credit personal loans. Unsecured personal l   1 days
       http://bugs.python.org/issue1002    dubois

Testing bugs.python.org domain                                    0 days
       http://bugs.python.org/issue1024    izak





[Python-Dev] Extended buffer PEP

2007-04-15 Thread Travis Oliphant


Here is my final draft of the extended buffer interface PEP. 

For those who have been following the discussion, I eliminated the 
releaser object and the lock-buffer function.   I decided that there is 
enough to explain with the new striding and sub-offsets without the 
added confusion of releasing buffers, especially when it is not clear 
what is to be gained by such complexity except a few saved lines of code.


The striding and sub-offsets, however, allow extension module writers to 
write code (say video and image processing code or scientific computing 
code or data-base processing code) that works on any object exposing the 
buffer interface.  I think this will be of great benefit and so is worth 
the complexity.


This will take some work to get implemented for Python 3k.  I could use 
some help with this in order to speed up the process.  I'm working right 
now on the extensions to the struct module until the rest is approved.


Thank you for any and all comments:

-Travis


:PEP: XXX
:Title: Revising the buffer protocol
:Version: $Revision: $
:Last-Modified: $Date:  $
:Authors: Travis Oliphant [EMAIL PROTECTED], Carl Banks [EMAIL PROTECTED]
:Status: Draft
:Type: Standards Track
:Content-Type: text/x-rst
:Created: 28-Aug-2006
:Python-Version: 3000

Abstract
========

This PEP proposes re-designing the buffer interface (PyBufferProcs
function pointers) to improve the way Python allows memory sharing
in Python 3.0.

In particular, it is proposed that the character buffer portion 
of the API be eliminated and the multiple-segment portion be 
re-designed in conjunction with allowing for strided memory
to be shared.   In addition, the new buffer interface will 
allow sharing of the multi-dimensional nature of the memory
and of the data format the memory contains.

This interface will allow any extension module to either 
create objects that share memory or create algorithms that
use and manipulate raw memory from arbitrary objects that 
export the interface. 


Rationale
=========

The Python 2.X buffer protocol allows different Python types to
exchange a pointer to a sequence of internal buffers.  This
functionality is *extremely* useful for sharing large segments of
memory between different high-level objects, but it is too limited and
has issues:

1. There is the little used sequence-of-segments option
   (bf_getsegcount) that is not well motivated. 

2. There is the apparently redundant character-buffer option
   (bf_getcharbuffer).

3. There is no way for a consumer to tell the buffer-API-exporting
   object it is finished with its view of the memory and
   therefore no way for the exporting object to be sure that it is
   safe to reallocate the pointer to the memory that it owns (for
   example, the array object reallocating its memory after sharing
   it with the buffer object which held the original pointer led
   to the infamous buffer-object problem).

4. Memory is just a pointer with a length. There is no way to
   describe what is in the memory (float, int, C-structure, etc.)

5. There is no shape information provided for the memory.  But,
   several array-like Python types could make use of a standard
   way to describe the shape-interpretation of the memory
   (wxPython, GTK, pyQT, CVXOPT, PyVox, Audio and Video
   Libraries, ctypes, NumPy, data-base interfaces, etc.)

6. There is no way to share discontiguous memory (except through
   the sequence of segments notion).  

   There are two widely used libraries that use the concept of
   discontiguous memory: PIL and NumPy.  Their view of discontiguous
   arrays is different, though.  The proposed buffer interface allows
   sharing of either memory model.  Exporters will use only one
   approach and consumers may choose to support discontiguous 
   arrays of each type however they choose. 

   NumPy uses the notion of constant striding in each dimension as its
   basic concept of an array. With this concept, a simple sub-region
   of a larger array can be described without copying the data.
   Thus, stride information is the additional information that must be
   shared. 

   The PIL uses a more opaque memory representation. Sometimes an
   image is contained in a contiguous segment of memory, but sometimes
   it is contained in an array of pointers to the contiguous segments
   (usually lines) of the image.  The PIL is where the idea of multiple
   buffer segments in the original buffer interface came from.   

   NumPy's strided memory model is used more often in computational
   libraries and because it is so simple it makes sense to support
   memory sharing using this model.  The PIL memory model is sometimes 
   used in C-code where a 2-d array can be then accessed using double
   pointer indirection:  e.g. image[i][j].  

   The buffer interface should allow the object to export either of these
   memory models.  Consumers are free to either require contiguous memory
   or write code to handle one or both of these 

Re: [Python-Dev] Extended buffer PEP

2007-04-15 Thread Travis Oliphant
Greg Ewing wrote:

 Travis Oliphant wrote:

 Carl Banks wrote:
  I'd like to see it accept a flags argument over what kind of buffer 
  it's allowed to return.  I'd rather not burden the user to check 
 all  the entries in bufferinfo to make sure it doesn't get something 
  unexpected.
 Yes, I agree. We had something like that at one point.


 Maybe this could be handled in an intermediate layer
 between the user and implementor of the interface,
 i.e. the user calls

   PyBuffer_GetBuffer(obj, info, flags);

 the object's tp_getbufferinfo just gets called as

   getbufferinfo(self, info)

 and PyBuffer_GetBuffer then checks that the result
 conforms to the requested feature set. This would
 relieve users of the interface from having to check
 that themselves, while not requiring implementors
 to be burdened with it either.

I like this strategy.  Then, any intermediate buffering (that prompted 
the killed release-buffer object in the protocol) could be handled in 
this layer as well.

I also like the idea of passing something to the getbuffer call so that 
exporters can do less work if some things are not being requested, but 
that the exporter should be free to ignore the flag and always produce 
everything.

-Travis



Re: [Python-Dev] Some new additions to functools

2007-04-15 Thread Josiah Carlson

SevenInchBread [EMAIL PROTECTED] wrote:
 So I've cooked up some very simple functions to add to functools - to expand
 it into a more general-purpose module.
[snip]

I don't use a functional approach very often, but I haven't ever had a
case where I would want or need to use any of the functions you provide.
Does anyone else think that any or all of these would be useful?

 A better place to put some of these functions might be __builtin__, so you
 don't have to waste time importing something as basic as cat.

We do not arbitrarily add functions to builtins.  The general approach
that most people have is to *reduce* the number of builtins, not
increase them.

Generally, -.5 on the functions/objects you provide (I haven't used them,
but I'm reserving a -1 or -0 for if people have other opinions).  -1000
on adding them to the builtins.

 - Josiah



Re: [Python-Dev] functools additions

2007-04-15 Thread Martin v. Löwis
 So if it's alright with the privileged folk - I'd like to commit
 these minor (and probably non-controversial) additions to the functools
 module.

Do you have commit access? What's your real name?

-1 on these additions. If lambda x: x were added, it should be named
"identity", not "cat" (if that name is meant to raise associations with
the Unix cat(1) utility, please understand that "cat" is short for
"concatenate").

Regards,
Martin


Re: [Python-Dev] Some new additions to functools

2007-04-15 Thread Paul Hankin
On 4/15/07, SevenInchBread [EMAIL PROTECTED] wrote:
 So I've cooked up some very simple functions to add to functools - to expand
 it into a more general-purpose module.

 def cat(x): return x

 class nullfunc(object):
 def __call__(self, *args, **kargs): return self
 def __getattr__(self, name):return getattr(None, name)

 def multimap(func, seq, n=2):
 assert n > 0, "n must be positive"
 if n == 1: return map(func, seq)
 else:   return map(lambda x: multimap(func, x, n-1), seq)

 def multifilter(func, seq, n=2):
 return multimap(lambda x: filter(func, x), seq, n-1)

 def multireduce(func, seq, n=2):
 return multimap(lambda x: reduce(func, x), seq, n-1)

-0 for cat (renamed to 'identity'), -1 for the rest.

I've occasionally wanted multimap but only in the default
case where it is simpler to be explicit.

-- 
Paul Hankin


[Python-Dev] Py3: function signatures, type checking, and related crap

2007-04-15 Thread SevenInchBread

People seem to be pushing for a consistent method for checking the x-ness
of objects (that is, interfaces that the object implements).

So I present an idea for a simple and straightforward type that provides a
way to construct descriptions of object structures, and I'd like some help
expanding it into a useful extension to Python's standard set of utilities.

It's a basic constructor that produces callable interface-checking predicates
(which can be used in things such as list comprehensions, filter, if
statements, or even a new syntax for function signatures that allows for
automatic interface-checking). These predicates check that an object matches
the behavior described by the constructor. Since I can't think of a name
for this constructor, and because I've never liked the term "interface",
I'll just call it "can".

Can takes an arbitrary number of keyword arguments and produces a callable
object. The keys represent object attributes, while the values are
behavior-checking predicates like the ones produced by can. Since the can
constructor produces an object that can in turn be used in other can
constructors, using previously defined interfaces in new constructions is
fairly straightforward:

   callable = can(__call__=object)   # returns an object that describes
                                     # objects with a __call__ attribute
   readable = can(read=callable)     # ...with a callable read attribute
   writable = can(write=callable)    # ...with a callable write attribute


   # a join operator can be used to combine can objects...
   # ...for now I'll just use "and" and "or" to represent them.

   isfilelike = readable and writable   # matches any type that is described
                                        # by both readable and writable

   IOable = readable or writable        # any type that is readable or writable

Objects that are constructed with can, when called, return True or False
based on whether or not the passed object matches the behavior described.

   callable(hash)  #returns True - as it would in the current version
of Python.


Here's some more nifty examples:

   iterable = can(__iter__=callable) or can(next=callable)
   hashable = can(__hash__=callable)

   completefile = isfilelike and iterable and can(fileno=callable,
                                                  close=callable)

   def outlines(f, seq):
       "Outputs a sequence of lines to a file-like object"
       assert isfilelike(f), "%r is not a file-like object." % f
       assert iterable(seq), "%r must be iterable" % seq
       f.write("\n".join(seq))

   # a trivial example... you'd get similar error messages from Python
   # builtins even without the assertions.

As it stands, I don't think that this deserves to be in Python - but I think
the basic premise could be used as a foundation for better things.
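For what it's worth, a minimal sketch of one possible implementation (my own,
under the assumption that a check of object means "any value is acceptable"):

class can(object):
    def __init__(self, **attrs):
        self.attrs = attrs
    def __call__(self, obj):
        for name, check in self.attrs.items():
            if not hasattr(obj, name):
                return False
            if check is not object and not check(getattr(obj, name)):
                return False
        return True

callable = can(__call__=object)   # shadows the builtin, as in the examples above
readable = can(read=callable)

callable(hash)                    # True
readable(4)                       # False - ints have no read method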


--
What's money? A man is a success if he gets up in the morning and goes to
bed at night and in between does what he wants to do. ~ Bob Dylan


Re: [Python-Dev] Python 2.5.1c1 pickle problem

2007-04-15 Thread Ralf W. Grosse-Kunstleve
Hi Raymond,

Thanks for the detailed explanation!

 I'm not sure what your code was doing where the bugfix would cause
 breakage.  If its __getitem__() override returned a meaningful value
 for each element in obj.keys(), then it should have worked fine.  Of
 course, if it was raising an exception or triggering a side-effect,
 then one could argue that the bugfix was working as intended by
 allowing  the subclasser to affect how the base class goes about its
 business.

It wasn't actually my own code, but that of a colleague. His
__getitem__() override is below. It fails because __getitem__() is
called with the wrong type of argument (a plain int instead of a list
of atoms). I'm not sure it presents a convincing use case. I would have
used composition instead of inheritance in the same situation.

I don't feel competent to join the discussion about the changes in the
pickling behavior, but I feel quite strongly that such a change should
not be introduced in a bug-fix release.

Ralf


  def __getitem__(self, atoms):
    LIST = []
    for atom in atoms:
      attr = 'index'
      if hasattr(atom, 'unique_index'):
        attr = 'unique_index'
      LIST.append(getattr(atom, attr))
    key1 = tuple(LIST)
    LIST.reverse()
    key2 = tuple(LIST)

    rc = None
    try:
      rc = dict.__getitem__(self, key1)
    except KeyError, e:
      pass
    try:
      rc = dict.__getitem__(self, key2)
    except KeyError, e:
      pass
    return rc






Re: [Python-Dev] functools additions

2007-04-15 Thread SevenInchBread

Do you have commit access? What's your real name?

I prefer to remain pseudonymous, and I don't have commit access.

Yeah... they're not terribly useful - more or less there for the sake of
being there. Batteries included and all that


...but now I've got a more useful idea for a function wrapper around
predicate functions to make them a little more functionally inclined.
Something like...

@predicate
def hasnext(x): return hasattr(x, "next")

@predicate
def hasiter(x): return hasattr(x, "__iter__")

isiterable = hasnext or hasiter   # or/and/not operators construct new
                                  # predicate functions

isEmptyIterable = isiterable and not predicate(bool)

isgenerator = isiterable and (lambda x: hasattr(x, "send") and
                              hasattr(x, "throw"))

filter(isgenerator or callable, SomeSequence)
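A hedged sketch of such a wrapper (mine, not part of the proposal). Note that
Python's and/or/not keywords cannot be overloaded, so this version spells the
combinators as |, & and ~; with the keywords as written above, "hasnext or
hasiter" would simply evaluate to hasnext:

class predicate(object):
    def __init__(self, func):
        self.func = func
    def __call__(self, x):
        return bool(self.func(x))
    def __or__(self, other):
        return predicate(lambda x: self(x) or other(x))
    def __and__(self, other):
        return predicate(lambda x: self(x) and other(x))
    def __invert__(self):
        return predicate(lambda x: not self(x))

hasnext = predicate(lambda x: hasattr(x, "next"))
hasiter = predicate(lambda x: hasattr(x, "__iter__"))
isiterable = hasnext | hasiter
isgenerator = isiterable & predicate(lambda x: hasattr(x, "send") and
                                               hasattr(x, "throw"))

isiterable([1, 2, 3])             # True
isgenerator(x for x in "abc")     # True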

--
What's money? A man is a success if he gets up in the morning and goes to
bed at night and in between does what he wants to do. ~ Bob Dylan


Re: [Python-Dev] functools additions

2007-04-15 Thread BJörn Lindqvist
 def cat(x): return x

 def multimap(func, seq, n=2):
     assert n > 0, "n must be positive"
     return (map(func, seq)
             if n == 1 else
             map(lambda x: multimap(func, x, n-1), seq))

 def multifilter(func, s, n=2):
     return multimap(lambda x: filter(func, x), s, n-1)

 def multireduce(func, s, n=2):
     return multimap(lambda x: reduce(func, x), s, n-1)

 class nullfunc(object):
     def __call__(self, *a, **k): return self
     def __getattr__(self, name): return getattr(None, name)

Could you describe what these functions do? Preferably with examples
that demonstrates that they are useful.

-- 
mvh Björn


Re: [Python-Dev] functools additions

2007-04-15 Thread Jean-Paul Calderone
On Sun, 15 Apr 2007 18:18:16 -0400, SevenInchBread [EMAIL PROTECTED] wrote:
Do you have commit access? What's your real name?
I prefer to remain pseudonymous, and I don't have commit access.

Yeah... they're not terribly useful - more or less there for the sake of
being there. Batteries included and all that


Please discuss this on the python-ideas list before bringing it up on
python-dev.

Jean-Paul


Re: [Python-Dev] Some new additions to functools

2007-04-15 Thread Guido van Rossum
On 4/15/07, SevenInchBread [EMAIL PROTECTED] wrote:
[...]
 --
 What's money? A man is a success if he gets up in the morning and goes to
 bed at night and in between does what he wants to do. ~ Bob Dylan

If you ask me, SevenInchBread (not her real name) is a Troll. Let her
prove she isn't.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] my 2.5 checkins

2007-04-15 Thread Anthony Baxter
On Saturday 14 April 2007 10:07, Kristján Valur Jónsson wrote:
 Hello all.
 I made two checkins to the 2.5 maintenance branch before Martin
 kindly pointed out to me that it is frozen. These are quite
 simple fixes to real crashes I have experienced.  The fix in
 frameobject.c will be necessary if you work with opcodes > 128,
 which we routinely do at CCP.  Security through opcode
 randomization.  Anyway, just let me know if you'd like me to roll
 them back.

I really, really, really don't want to cut another release 
candidate. These fixes don't strike me as critical enough to need 
that - and I'm not happy to do the release and just hope. I'll roll 
them all back.

Anthony
-- 
Anthony Baxter [EMAIL PROTECTED]
It's never too late to have a happy childhood.