[Numpy-discussion] Interrupting long running calculations ....

2008-08-18 Thread Sebastian Haase
Hi,
Could someone remind me of the current state of numpy with regards to
honoring KeyboardInterrupts !?
I think KeyboardInterrupt is the exception that would have to be used
to do this kind of thing - right !? E.g. by pressing Ctrl-C

Another question, and not quite numpy specific, is how to generate
this exception from another thread -- effectively injecting it into
the calling stack !? I ask this, because I'm using wxPython, which
makes things even more complicated  :-(


Thanks for any help,

Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Interrupting long running calculations ....

2008-08-18 Thread Fernando Perez
On Mon, Aug 18, 2008 at 12:09 AM, Sebastian Haase [EMAIL PROTECTED] wrote:


 Another question, and not quite numpy specific, is how to generate
 this exception from another thread -- effectively injecting it into
 the calling stack !? I ask this, because I'm using wxPython, which
 makes things even more complicated  :-(


An undocumented, unsupported, private hack is needed.   Have a look at
shell.py in ipython, around line 291:

http://bazaar.launchpad.net/~ipython-dev/ipython/trunk/annotate/1107?file_id=shell.py-20080216095032-xb0is4a97lmosv2z-38

for how to do it.
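For reference, the hack in question can be sketched with ctypes in a few lines (a guess at the mechanism based on the well-known recipe; the ipython shell.py linked above is the authoritative version, and the names raise_in_thread/worker here are made up for illustration):

```python
import ctypes
import threading
import time

def raise_in_thread(thread, exc_type):
    """Ask the interpreter to raise exc_type inside another thread.

    Relies on the private C API PyThreadState_SetAsyncExc; the
    exception is only delivered once the target thread runs Python
    bytecode again, so a thread blocked inside a C extension (e.g. a
    long numpy computation) will not see it until it returns.
    """
    n = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_ulong(thread.ident), ctypes.py_object(exc_type))
    if n != 1:
        # 0 means no such thread; >1 means we hit more than one, so undo
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_ulong(thread.ident), None)
        raise RuntimeError("PyThreadState_SetAsyncExc failed")

result = {}

def worker():
    try:
        while True:             # stand-in for a long-running calculation
            time.sleep(0.01)
    except KeyboardInterrupt:
        result["interrupted"] = True

t = threading.Thread(target=worker)
t.start()
time.sleep(0.1)                 # let the worker get going
raise_in_thread(t, KeyboardInterrupt)
t.join(timeout=5)
print(result.get("interrupted", False))
```

This is exactly why it is undocumented and unsupported: the injected exception can land at an arbitrary bytecode boundary, and it never interrupts code that is sitting inside a C call.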

Cheers,

f


Re: [Numpy-discussion] Possible new multiplication operators for Python

2008-08-18 Thread Ondrej Certik
On Mon, Aug 18, 2008 at 1:50 AM, Andrew Dalke [EMAIL PROTECTED] wrote:
 On Aug 18, 2008, at 12:00 AM, Ondrej Certik wrote:
 There is some inconsistency though, for example one can override A() +
 A(), but one cannot override 1 + 1. This could (should) be fixed
 somehow.

 That will never, ever change in Python.  There's no benefit
 to being able to redefine int.__add__ and doing so will break
 entirely too many assumptions.

 Here's one assumption - the C implementation does some
 simple constant folding:

 >>> def f():
 ...     print 1+1
 ...
 >>> import dis
 >>> dis.dis(f)
   2           0 LOAD_CONST               2 (2)
               3 PRINT_ITEM
               4 PRINT_NEWLINE
               5 LOAD_CONST               0 (None)
               8 RETURN_VALUE
 

 With what you want that's not possible.


 Just think of the implementation difficulty.  Are changes on the
 per-module or per-scope or per-thread level?  And performance
 would be lousy (or require deep inferencing analysis) because
 every operation in C would need to go through the Python API
 just in case some fundamental definition like this was changed.

 Such a language is possible.  I wouldn't be surprised if
 you could do it in Smalltalk and mostly do it in Ruby.  But
 the general belief is that good design follows the
 open/closed principle:

   software entities (classes, modules, functions, etc.)
   should be open for extension, but closed for modification
  - Bertrand Meyer, quoted by
 http://en.wikipedia.org/wiki/Open/closed_principle

 In Python, all types are closed for modification, and
 while classes are open for modification it's highly frowned
 upon to do so.  The name monkey-patch sounds somewhat
 derisive for a reason.

Yeah, I understand the reasoning. My reason is that sometimes you want
1/2 to do something else rather than return 0 (or 0.5). I
would like to get my own class called, but if it's again Python, then
I am perfectly happy with Python as it is now. No changes needed.

Anyway, this is off topic.

Ondrej


Re: [Numpy-discussion] Please volunteer a Mac OS X buildbot slave

2008-08-18 Thread Barry Wark
Stéfan,

Again, thanks to you and Thomas.

cheers,
Barry

On Tue, Aug 12, 2008 at 10:08 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 2008/8/12 Barry Wark [EMAIL PROTECTED]:
 Stefan,

 I'm sorry I dropped the ball on this one. I didn't have time to get
 things working again before I left town for a month and, obviously,
 there it sat. Again, sorry.

 No worries, Barry.  That machine was troublesome, and I didn't want to
 bother you every week.  Thomas Heller's machine should do the job, if
 we can get it set up.

 Cheers
 Stéfan



Re: [Numpy-discussion] Generalized ufuncs?

2008-08-18 Thread Engel, Hans-Andreas


Robert Kern [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 On Sun, Aug 17, 2008 at 21:55, Anne Archibald
 [EMAIL PROTECTED] wrote:
  2008/8/17 Robert Kern [EMAIL PROTECTED]:
 
  I suggested that we move it to a branch for the time being so we can
  play with it and come up with examples of its use. If you have
  examples that you have already written, I would love to see them. I,
  for one, am amenable to seeing this in 1.2.0, but only if we push back
  the release by at least a few weeks.
 
  This is not a worked example, but this is exactly what is needed to
  make possible the arrays of matrices functions that were discussed
  some time ago. For example, users frequently want to do something like
  multiply an array of matrices by an array of matrices; that is not
  currently feasible without a very large temporary array (or a loop).
 
  But I think you were looking for examples of code using the interface,
  to see whether it worked well.
 
 I'll take what I can get. In order of increasing utility:
 
   1. Descriptions of use cases (as above).
   2. Mockups of the Python code demonstrating the use case (e.g.
  nonfunctional Python code showing a potential generalized ufunc
  with inputs and outputs).
   3. The C code implementing the actual functionality for the use case.
 
 -- 
 Robert Kern
 
 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco


Please find an example of inner1d in the following.

1.  One use case for an inner function is described on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions and
http://thread.gmane.org/gmane.comp.python.numeric.general/17694.
(Another one would be the array of matrices usage mentioned above; we
have called this dot2d with signature (m,n),(n,p)->(m,p) on
http://scipy.org/scipy/numpy/ticket/887: here the matrix multiplication
would occur with respect to the last two dimensions.)


2.  The mockup python code would be:
   >>> from numpy import *
   >>> N = 10
   >>> a = random.randn(3, 5, N)
   >>> b = random.randn(5, N)
   >>> # standard inner function
   ... inner(a, b).shape
   (3, 5, 5)
   >>> # new ufunc inner1d with signature (i),(i)->(), satisfying
   >>> # the GeneralLoopingFunctions use case
   ... inner1d(a, b).shape
   (3, 5)


3. Concrete implementation of inner1d in C:

/*
 *  This implements the function  out = inner1d(in1, in2)  with
 * out[K] = sum_i { in1[K, i] * in2[K, i] }
 *  and multi-index K, as described on
 *  http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions
 *  and on http://projects.scipy.org/scipy/numpy/ticket/887.
 */

static void
DOUBLE_inner1d(char **args, intp *dimensions, intp *steps, void *func)
{
    /* Standard ufunc loop length and strides. */
    intp dn = dimensions[0];
    intp s0 = steps[0];
    intp s1 = steps[1];
    intp s2 = steps[2];

    intp n;

    /* Additional loop length and strides for dimension i in the
       elementary function. */
    intp di = dimensions[1];
    intp i_s1 = steps[3];
    intp i_s2 = steps[4];
    intp i;

    /* Outer loop: equivalent to standard ufuncs */
    for (n = 0; n < dn; n++, args[0] += s0, args[1] += s1, args[2] += s2) {
        char *ip1 = args[0], *ip2 = args[1], *op = args[2];

        /* Implement elementary function:  out = sum_i { in1[i] * in2[i] } */
        npy_double sum = 0;
        for (i = 0; i < di; i++) {
            sum += (*(npy_double *)ip1) * (*(npy_double *)ip2);
            ip1 += i_s1;
            ip2 += i_s2;
        }
        *(npy_double *)op = sum;
    }
}


/* Actually create the ufunc object */

static PyUFuncGenericFunction inner1d_functions[] = { DOUBLE_inner1d };
static void *inner1d_data[] = { (void *)NULL };
static char inner1d_sigs[] = { PyArray_DOUBLE, PyArray_DOUBLE,
                               PyArray_DOUBLE };

static void
addInner1d(PyObject *dictionary) {
    PyObject *f = PyUFunc_FromFuncAndDataAndSignature(inner1d_functions,
                          inner1d_data, inner1d_sigs, 1,
                          2, 1, PyUFunc_None, "inner1d",
                          "inner on the last dimension and "
                          "broadcast on the other dimensions",
                          0,
                          "(i),(i)->()");
    PyDict_SetItemString(dictionary, "inner1d", f);
}
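To make the pairing between the two C loops and the intended semantics concrete, here is a pure-Python reference version of the same computation (a sketch, not part of the patch; it assumes the loop dimensions have already been collapsed into one flat batch axis, which is effectively what the outer C loop iterates over):

```python
def inner1d_ref(in1, in2):
    """Reference for what DOUBLE_inner1d computes: for each pair of
    rows (the outer loop over n), reduce over the core dimension i
    (the inner loop), producing one scalar per row pair."""
    out = []
    for row1, row2 in zip(in1, in2):
        s = 0.0
        for x, y in zip(row1, row2):   # elementary (i),(i)->() part
            s += x * y
        out.append(s)
    return out

print(inner1d_ref([[1.0, 2.0], [3.0, 4.0]],
                  [[5.0, 6.0], [7.0, 8.0]]))   # [17.0, 53.0]
```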


Re: [Numpy-discussion] Generalized ufuncs?

2008-08-18 Thread Engel, Hans-Andreas


Charles R Harris [EMAIL PROTECTED] wrote in message news:[EMAIL 
PROTECTED]...
On Sun, Aug 17, 2008 at 7:56 PM, Stéfan van der Walt [EMAIL PROTECTED]wrote:

 2008/8/17 Robert Kern [EMAIL PROTECTED]:
  I suggested that we move it to a branch for the time being so we can
  play with it and come up with examples of its use.

 That branch is here:

 http://[EMAIL PROTECTED]/svn/numpy/branches/gen_ufuncs


For an earlier thread about using vector valued ufuncs for sorts and such --
and negative reactions to the suggestion -- go here:
http://thread.gmane.org/gmane.comp.python.numeric.general/20552/focus=20560.
One of the major objections was how to call such functions with the ufunc
machinery and the needed machinery for type promotions, sub classes, and all
that nonsense. Are these dealt with in the patch? The current numpy code for
all that is a bit of a mess anyway, and it would be nice to figure out some
unified interface to call through and clean up the current code in the
process. In fact, I've been making some preliminary efforts in that
direction by formatting the current code and working through it. Also, do we
also want reduce methods and such? I suspect there is still a lot of work to
do to get this whole thing up and running.

Chuck

--


The good news is that the patch just uses the existing code to deal with all 
the tricky issues (this is why the patch is so short).  By the way, sort could 
be implemented with the proposed specifications; its signature would be 
(i)->(i).  I agree that it would be nice if that code could be made somewhat 
clearer; however, I think that this task is orthogonal to the generalized 
ufuncs patch, because there is no code overlap.

The way the suggested implementation basically works is to remove the core 
dimensions from the input/output arrays, and then have the existing code 
handle all the intricacies over the loop dimensions.
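The dimension bookkeeping described above can be sketched in a few lines of Python (a hypothetical helper, not the patch's actual code): strip each input's core dimensions, then broadcast the remaining loop dimensions together.

```python
def loop_shape(shapes, core_ndims):
    """Sketch of the described shape logic: remove each input's core
    dimensions, then broadcast the leftover loop dimensions together
    using the usual numpy rules (pad with 1s, sizes must match or be 1)."""
    loop_shapes = [s[:len(s) - c] if c else s
                   for s, c in zip(shapes, core_ndims)]
    ndim = max(len(s) for s in loop_shapes)
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in loop_shapes]
    out = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError("shapes are not broadcastable")
        out.append(sizes.pop() if sizes else 1)
    return tuple(out)

# inner1d with signature (i),(i)->(): a is (3, 5, 10), b is (5, 10)
print(loop_shape([(3, 5, 10), (5, 10)], [1, 1]))   # (3, 5)
```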

Reduce methods are currently not supported (an error is raised).  Therefore, 
the current patch does not forestall anything and the desired functionality can 
be added whenever it is clear what would be best.

I do not think that it would make sense to specify/implement all possible 
extensions, optimizations, concrete ufuncs, morphing of existing numpy 
functions to ufuncs, etc. at once; presumably it is much better to start with a 
small but extremely flexible specification of generalized ufuncs first.

Best,
Hansres




Re: [Numpy-discussion] indexing (compared to matlab)

2008-08-18 Thread Alan G Isaac
 this should definitely be in the Numpy for Matlab users 
 page, http://www.scipy.org/NumPy_for_Matlab_Users, right 
 after the line:
 Matlab Numpy Notes 


Good form is to make that change yourself when you get
useful advice.  But I did it this time.

Cheers,
Alan Isaac




Re: [Numpy-discussion] Generalized ufuncs?

2008-08-18 Thread Travis E. Oliphant

 The good news is that the patch just uses the existing code to deal with 
 all the tricky issues (this is why the patch is so short).  By the way, sort 
 could be implemented with the proposed specifications; its signature would be 
 (i)->(i).  I agree that it would be nice if that code could be made 
 somewhat clearer; however, I think that this task is orthogonal to the 
 generalized ufuncs patch, because there is no code overlap.  
   
I agree with this. 
 The way the suggested implementation basically works is to remove the core 
 dimensions from the input/output arrays, and then have the existing code 
 handle all the intricacies over the loop dimensions.

 Reduce methods are currently not supported (an error is raised).  Therefore, 
 the current patch does not forestall anything and the desired functionality 
 can be added whenever it is clear what would be best.

 I do not think that it would make sense to specify/implement all possible 
 extensions, optimizations, concrete ufuncs, morphing of existing numpy 
 functions to ufuncs, etc. at once; presumably it is much better to start with 
 a small but extremely flexible specification of generalized ufuncs first.
   
One of the key reasons I'm enthused about the patch is because it's so 
small.   By enhancing the ufunc object and without changing the 
signature of the underlying function, the patch is able to implement the 
general description of a generalized ufunc.   

I think it is useful to evaluate whether or not a few more changes will 
allow more functionality with little cost, but I don't think it is worth 
holding up the patch hoping that the code will get cleaned-up (which 
all code needs according to somebody's definition of cleaning).

-Travis



Re: [Numpy-discussion] C-API change for 1.2

2008-08-18 Thread Travis E. Oliphant
David Cournapeau wrote:
 On Sat, Aug 16, 2008 at 11:59 PM, David Cournapeau [EMAIL PROTECTED] wrote:
   
 On Sat, Aug 16, 2008 at 11:16 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
 I'm slowly coming to the conviction that there should be no C-ABI changes in
 1.2.
   
 It does not make sense to revert those changes anymore,
 

 Actually, I did not follow the discussion when this change happened,
 but it does not look difficult to change the code such as we do not
 break the ABI. Instead of replacing the flag, we can put it at the
 end, and deprecate (but not remove) the old one.

 Would anyone be strongly against that ?
   
No, we could do that.

-Travis



Re: [Numpy-discussion] C-API change for 1.2

2008-08-18 Thread Travis E. Oliphant
Charles R Harris wrote:


 On Sat, Aug 16, 2008 at 11:21 PM, David Cournapeau [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:

 On Sat, Aug 16, 2008 at 11:59 PM, David Cournapeau
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:
  On Sat, Aug 16, 2008 at 11:16 PM, Charles R Harris
  [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 wrote:
 
  I'm slowly coming to the conviction that there should be no
 C-ABI changes in
  1.2.
 
  It does not make sense to revert those changes anymore,

 Actually, I did not follow the discussion when this change happened,
 but it does not look difficult to change the code such as we do not
 break the ABI. Instead of replacing the flag, we can put it at the
 end, and deprecate (but not remove) the old one.

 Would anyone be strongly against that ?


 I have nothing against extensions when they can be made to serve. If a 
 dictionary gets added to ndarrays I hope it is done that way, likewise 
 for generalized ufuncs. In the present case I think Travis wants to 
 preserve the functionality while changing the name and type, and that 
 doesn't really fit the extension model. But I might be wrong about that.
The problem was that I didn't break ABI compatibility at 1.0.  I knew the 
char was too small to hold what the field had really become (a flag 
field).  I had intended for 1.1 to be a potential ABI breakage, but 
this was changed when the release strategy changed.

But, there is no real functionality added by changing the ABI at this 
point.  I'm just looking to the future, but I can be convinced that it's 
too late.  

What about the version number changes?

-Travis



Re: [Numpy-discussion] Generalized ufuncs?

2008-08-18 Thread Charles R Harris
On Mon, Aug 18, 2008 at 10:13 AM, Travis E. Oliphant [EMAIL PROTECTED]
 wrote:


  The good news is that the patch just uses the existing code to deal
 with all the tricky issues (this is why the patch is so short).  By the way,
 sort could be implemented with the proposed specifications; its signature
 would be (i)->(i).  I agree that it would be nice if that code could be
 made somewhat clearer; however, I think that this task is orthogonal to the
 generalized ufuncs patch, because there is no code overlap.
 
 I agree with this.
  The way the suggested implementation basically works is to remove the
 core dimensions from the input/output arrays, and then have the existing
 code handle all the intricacies over the loop dimensions.
 
  Reduce methods are currently not supported (an error is raised).
  Therefore, the current patch does not forestall anything and the desired
 functionality can be added whenever it is clear what would be best.
 
  I do not think that it would make sense to specify/implement all
 possible extensions, optimizations, concrete ufuncs, morphing of existing
 numpy functions to ufuncs, etc. at once; presumably it is much better to
 start with a small but extremely flexible specification of generalized
 ufuncs first.
 
 One of the key reasons I'm enthused about the patch is because it's so
 small.   By enhancing the ufunc object and without changing the
 signature of the underlying function, the patch is able to implement the
 general description of a generalized ufunc.

 I think it is useful to evaluate whether or not a few more changes will
 allow more functionality with little cost, but I don't think it is worth
 holding up the patch hoping that the code will get cleaned-up (which
 all code needs according to somebody's definition of cleaning).


I think the plan is that 1.2.1 will come out before the end of the year and
it would be reasonable to put the patch in there. As gen_ufuncs are
currently unused there is no practical effect to waiting until after the 1.2
release.

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-18 Thread Charles R Harris
On Mon, Aug 18, 2008 at 10:26 AM, Travis E. Oliphant [EMAIL PROTECTED]
 wrote:

 Charles R Harris wrote:
 
 
  On Sat, Aug 16, 2008 at 11:21 PM, David Cournapeau [EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED] wrote:
 
  On Sat, Aug 16, 2008 at 11:59 PM, David Cournapeau
  [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:
   On Sat, Aug 16, 2008 at 11:16 PM, Charles R Harris
   [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
  wrote:
  
   I'm slowly coming to the conviction that there should be no
  C-ABI changes in
   1.2.
  
   It does not make sense to revert those changes anymore,
 
  Actually, I did not follow the discussion when this change happened,
  but it does not look difficult to change the code such as we do not
  break the ABI. Instead of replacing the flag, we can put it at the
  end, and deprecate (but not remove) the old one.
 
  Would anyone be strongly against that ?
 
 
  I have nothing against extensions when they can be made to serve. If a
  dictionary gets added to ndarrays I hope it is done that way, likewise
  for generalized ufuncs. In the present case I think Travis wants to
  preserve the functionality while changing the name and type, and that
  doesn't really fit the extension model. But I might be wrong about that.
  The problem was that I didn't break ABI compatibility at 1.0.  I knew the
  char was too small to hold what the field had really become (a flag
  field).  I had intended for 1.1 to be a potential ABI breakage, but
  this was changed when the release strategy changed.
 
  But, there is no real functionality added by changing the ABI at this
  point.  I'm just looking to the future, but I can be convinced that it's
  too late.
 
  What about the version number changes?


You mean the version number tracking ABI changes? I think it will be useful
if/when we do break the ABI so that people will be informed; until then, we
could leave it out. If you could figure out how to add a new flags field
without affecting the old one or requiring existing applications to be
recompiled, that would be good. We also need to distinguish between internal
and external ABIs.  One way to avoid problems like this is to put some
spares in the original structure to reserve space for future enhancements.
It can also be useful to use getters and setters, a la C++, in the interface.
These would be actual functions instead of macros and would hide changes to
the structures, just accepting pointers that can be downcast.

Chuck


Re: [Numpy-discussion] Possible new multiplication operators for Python

2008-08-18 Thread Grégory Lielens
On Sat, 2008-08-16 at 22:03 -0700, Fernando Perez wrote:
 Hi all,
 
 [ please keep all replies to this only on the numpy list.  I'm cc'ing
 the scipy ones to make others aware of the topic, but do NOT reply on
 those lists so we can have an organized thread for future reference]
 
 In the Python-dev mailing lists, there were recently two threads
 regarding the possibility of adding to the language new multiplication
 operators (amongst others).  This would allow one to define things
 like an element-wise and a matrix product for numpy arrays, for
 example:
 
 http://mail.python.org/pipermail/python-dev/2008-July/081508.html
 http://mail.python.org/pipermail/python-dev/2008-July/081551.html
 
 It turns out that there's an old pep on this issue:
 
 http://www.python.org/dev/peps/pep-0225/
 
 which hasn't been ruled out, simply postponed.  At this point it seems
 that there is room for some discussion, and obviously the input of the
 numpy/scipy crowd would be very welcome.  I volunteered to host a BOF
 next week at scipy so we could collect feedback from those present,
 but it's important that those NOT present at the conference can
 equally voice their ideas/opinions.
 
 So I wanted to open this thread here to collect feedback.  We'll then
 try to have the bof next week at the conference, and I'll summarize
 everything for python-dev.  Obviously this doesn't mean that we'll get
 any changes in, but at least there's interest in discussing a topic
 that has been dear to everyone here.
 
 Cheers,
 
 f


As one of the original authors behind PEP 225, I think this is an
excellent idea.
(BTW, thanks for resurrecting this old PEP :-) and considering it
useful :-) :-) ).

I think I do not need to speak too much for the PEP; telling you that I did
not change my mind should be enough ;-)... but still, I can not resist
adding a few considerations:

Demands for elementwise operators and/or a matrix product operator are
likely to resurface from time to time on Python-dev or Python-ideas,
given that this is a central feature of Matlab, and Matlab is a de-facto
standard when it comes to numeric-oriented interpreted languages (well,
at least in the engineering world, it is in my experience the biggest
player by far).

At the time of the original discussion on python-dev and the drafting of
PEP 225, I was new to Python and fresh from Matlab, and the default
behavior of the elementwise product annoyed me a lot. Since then, I have
learned to appreciate the power of numpy broadcasting (I use it extensively
in my code :-) ), so the default behavior does not annoy me anymore...
But I still feel that two sets of operators would be very useful
(especially in code which directly implements heavy linear algebra
formulas); the only thing where my point of view has changed is that I
now think that the Matlab way (defining * as matrix product and .* as
elementwise product) is not necessarily the best choice; the reverse
choice is as valid...

Given the increasing success of Python as a viable alternative, I think
that settling the elementwise operator issue is probably a good idea.
Especially as the Python 3000 transition is maybe a good time to
investigate syntax changes/extensions.


 
 I don't think so, but given that pep 225 exists and is fully fleshed
 out, I guess it should be considered the starting point of the
 discussion for reference.  This doesn't mean that modifications to it
 can't be suggested, but that I'm assuming python-dev will want that as
 the reference point.  For something as big as this, they would
 definitely want to work off a real pep.
 
 Having said that, I think all ideas are fair game at this point. I
 personally would like to see it happen, but if not I'd like to see a
 final pronouncement on the matter rather than seeing pep 225 deferred
 forever.
 

I agree 100%. Keeping PEP 225 in limbo is the worst situation imho,
given that the discussion about elementwise operators (or a matrix product
operator) keeps coming up again and again; having a final decision (even if
negative) would be better. And as I said above, I feel the timing is right for
this final decision...

Best regards,

Greg.



Re: [Numpy-discussion] NumPy 1.2.0b2 released

2008-08-18 Thread Andrew Dalke
Andrew Dalke:
 Any chance of someone reviewing my suggestions for
 making the import somewhat faster still?

http://scipy.org/scipy/numpy/ticket/874



Travis E. Oliphant:
 In sum:  I think 2, 3, 6, 7, 8, and 9 can be done immediately.  1) and
 4) could be O.K. but 1) does break code and 4  I'm not sure  
 about.5
 seems like it's too much code duplication for too little savings  
 for my
 taste.

Since no one else has said yea or nay, and 2.1 release draws nigh[*],
the simplest solution is to do 2, 3, 6, 7, 8, and 9.  I showed that
1 will break existing code.  As for #4 - as far as I can tell the code
in 'doc' is recent, so no user code depends on it.  Plus, the
documentation that's there is effectively unusable, with files like:



==
Jargon
==

Placeholder for computer science, engineering and other jargon.



so I still want to remove the import doc in numpy/__init__.py .

As for #5, that should probably be tied to the nosetests
migration, so it will be done soonish, but not for this release.

Is there a lack of disagreement on this?  Should I construct
a patch accordingly?  Or wait longer?



Andrew
[EMAIL PROTECTED]

[*] nigh: a word I don't use often except after draw.
   http://en.wiktionary.org/wiki/nigh
Interesting.  English once used nigh, near, next instead of
near, nearer, nearest.


[Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Ondrej Certik
Hi,

with Andrew's permission, I am starting a new thread, where our
discussion is ontopic. :)

My original question was that I would like to override 1+1 to return
MyClass(1, 1) or something.
Robert said it would break other libraries, and Andrew said this:

On Mon, Aug 18, 2008 at 9:23 PM, Andrew Dalke [EMAIL PROTECTED] wrote:
 There are basically two possible options:

 1) global overloading of +
 2) allow to tell Python that 1 is not its int, but our Integer.

 BTW, Sage just preparses Python for exactly this reason and
 substitutes all numbers like 1 with Integer(1). This really sucks
 imho.

 How would you like to do it?  Any solution I can think of
 would cause huge problems.  For example, being able to change
 int.__add__ would cause systemic problems throughout the
 code base and would prevent, or very much hinder, migration
 of Python code to C.

Agree.


 Changes to support a different constructor for basic types,
 on a per-module basis has its own problems.  When are the
 new types specified?  In the module itself or by the code
 which imports the module?  Can the definitions be changed
 on a scope-by-scope level?

 Now, I can imagine a Python with builtin multimethod
 dispatch defined via static scoping, so that

 def __add__(left:int, right:int):
  return __builtin__.__add__(__builtin__.__add__(left, right), 2)

 def __int__(s):
  return Decimal(s)

 might work, but even here there's a problem because the 2 in
 __add__ gets replaced by a Decimal, when I really want it to
 be an integer.

I'm not getting why 2 is replaced by Decimal if you don't want it to be. If
you don't want it, you can just reimplement __int__(s). This would of course
only be active in the module where it was defined.


 So I don't see how you can do it in the context of Python.
 In the context of a very different programming language,
 sure, it's possible.


 But that's what Sage is doing, they just preparse the input. But in
 SymPy we'd like to use it as a regular Python library in users
 programs, so imho preparsing, or custom compiling using your package
 is not really an option, is it?

 Use a different extension, like .sympy and an import hook
 which does the .sympy - .pyc conversion.

 But it's not one I recommend for real code, at least not
 without a lot of personal experience to see if it's worthwhile
 in the face of the downsides.

Yes, this is a mess, this is just like preparsing. Well, not like --
this is preparsing.

Ok, in the current state, you don't know either what's going to
happen. If you write

In [1]: x/2*3/4

you have no idea what the result is going to be, you need to analyze
x.__div__() and start from there. But if you write

In [2]: 1/2*3/4

currently you know it will be 0. But imho you could as well analyze
the global __mul__ (or global __int__, depending on how this would be
technically implemented) to see what's going to happen.

I mean what is the difference between [1] and [2]?
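Since the literals themselves cannot be made to dispatch differently, the closest you can get today is to wrap the first operand, after which ordinary operator dispatch keeps control of the whole chain. A minimal sketch (a hypothetical Integer built on the stdlib Fraction, not SymPy's actual class; shown with Python 3 style true division, whereas under current Python 2 rules the unwrapped [2] gives 0 as noted above):

```python
from fractions import Fraction

class Integer:
    """Hypothetical exact-integer wrapper (not SymPy's actual class).
    Once one operand is an Integer, normal operator dispatch keeps
    the arithmetic exact, so only the first literal needs wrapping."""
    def __init__(self, value):
        self.value = Fraction(value)
    def __truediv__(self, other):
        other = other.value if isinstance(other, Integer) else other
        return Integer(self.value / other)
    def __mul__(self, other):
        other = other.value if isinstance(other, Integer) else other
        return Integer(self.value * other)
    def __repr__(self):
        return str(self.value)

print(Integer(1) / 2 * 3 / 4)   # 3/8  (exact)
print(1 / 2 * 3 / 4)            # 0.375 under true division; 0 in Python 2
```

The annoyance the whole thread is about is precisely that Integer(1) must be written explicitly; no hook lets the plain literal expression produce the wrapped result.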

Ondrej


Re: [Numpy-discussion] NumPy 1.2.0b2 released

2008-08-18 Thread Travis E. Oliphant
Andrew Dalke wrote:
 Andrew Dalke:
   
 Any chance of someone reviewing my suggestions for
 making the import somewhat faster still?

http://scipy.org/scipy/numpy/ticket/874

 


 Travis E. Oliphant:
   
 In sum:  I think 2, 3, 6, 7, 8, and 9 can be done immediately.  1) and
 4) could be O.K. but 1) does break code and 4  I'm not sure  
 about.5
 seems like it's too much code duplication for too little savings  
 for my
 taste.
 

 Since no one else has said yea or nay, and 2.1 release draws nigh[*],
 the simplest solution is to do 2, 3, 6, 7, 8, and 9.  I showed that
 1 will break existing code.  As for #4 - as far as I can tell the code
 in 'doc' is recent, so no user code depends on it.  Plus, the
 documentation that's there is effectively unusable, with files like:

   

I say go ahead including changing #1 and #4.  Let's leave 5 for the moment.

-Travis



Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Andrew Dalke
On Aug 18, 2008, at 10:01 PM, Ondrej Certik wrote:

 with Andrew's permission, I am starting a new thread, where our
 discussion is ontopic. :)

Though I want to point out that without specific proposals
of how the implementation might look, this thread will
not go anywhere as it will be too distant from usable code.

I sent examples to show how such a system might look, as
the basis for getting a feel if it was practical.  I do
not think my examples are practical, but they were meant
as an example of how such a proposal might look.

Since I know that the Python implementation will not change
to support per-module or per-scope redefinitions for 1+2
and builtin object constructors, the only feasible mechanism
is through some sort of alternate grammar that compiles to
either Python or directly to the Python virtual machine.

One such way is through import hooks.
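The literal rewriting such a hook would perform can be sketched with the
stdlib ast module (Frac and the helper names below are invented for the
example; a real import hook would additionally install a loader on
sys.meta_path):

```python
import ast

class Frac:
    """Toy rational type, standing in for SymPy's Integer/Rational."""
    def __init__(self, num, den=1):
        self.num, self.den = num, den
    def __truediv__(self, other):
        other = other if isinstance(other, Frac) else Frac(other)
        return Frac(self.num * other.den, self.den * other.num)
    def __mul__(self, other):
        other = other if isinstance(other, Frac) else Frac(other)
        return Frac(self.num * other.num, self.den * other.den)
    def __repr__(self):
        return "%d/%d" % (self.num, self.den)

class WrapIntLiterals(ast.NodeTransformer):
    """Rewrite every integer literal N into Frac(N)."""
    def visit_Constant(self, node):
        if isinstance(node.value, int) and not isinstance(node.value, bool):
            return ast.Call(func=ast.Name(id="Frac", ctx=ast.Load()),
                            args=[node], keywords=[])
        return node

def eval_with_frac_literals(expr):
    # Parse, rewrite the literals, then compile and evaluate as usual.
    tree = WrapIntLiterals().visit(ast.parse(expr, mode="eval"))
    tree = ast.fix_missing_locations(tree)
    return eval(compile(tree, "<ast>", "eval"), {"Frac": Frac})

print(eval_with_frac_literals("1/2*3/4"))  # prints 3/8 instead of 0 (or 0.375)
```

The expression never sees Python ints, so integer division can't truncate.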

 Yes, this is a mess, this is just like preparsing. Well, not like --
 this is preparsing.


It's not preparsing.  It's parsing.  There's no pre about it.
It's not a macro language.  My ply4python tutorial compiles
various Python-like languages to the Python virtual machine
bytecode.


 I mean what is the difference between [1] and [2]?

I want to see how you would extend Python to support such
a mechanism before I worried about how to interpret it.

Or in other words, the difference between [1] and [2]
is that [2] can be fully evaluated through simple static
analysis, while [1] cannot.

BTW, this is unexpected.  Python does constant folding
of that expression, but only with specific settings.

>>> import dis
>>> def f():
...     print 1/2*3/4
...
>>> dis.dis(f)
  2           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 BINARY_DIVIDE
              7 LOAD_CONST               3 (3)
             10 BINARY_MULTIPLY
             11 LOAD_CONST               4 (4)
             14 BINARY_DIVIDE
             15 PRINT_ITEM
             16 PRINT_NEWLINE
             17 LOAD_CONST               0 (None)
             20 RETURN_VALUE

>>> from __future__ import division
>>> def f():
...     print 1/2*3/4
...
>>> dis.dis(f)
  2           0 LOAD_CONST               7 (0.375)
              3 PRINT_ITEM
              4 PRINT_NEWLINE
              5 LOAD_CONST               0 (None)
              8 RETURN_VALUE



The only way I can see to do what you want requires
multimethods, which don't currently exist in Python
except as third-party extensions.  The one I know about,
from Philip J. Eby, works on a global-level, not module
level, because of how registration happens, so it does
not support what you would like.

Andrew
[EMAIL PROTECTED]

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Ondrej Certik
On Mon, Aug 18, 2008 at 10:45 PM, Andrew Dalke
[EMAIL PROTECTED] wrote:
 On Aug 18, 2008, at 10:01 PM, Ondrej Certik wrote:

 with Andrew's permission, I am starting a new thread, where our
 discussion is ontopic. :)

 Though I want to point out that without specific proposals
 of how the implementation might look, this thread will
 not go anywhere as it will be too distant from usable code.

 I sent examples to show how such a system might look, as
 the basis for getting a feel if it was practical.  I do
 not think my examples are practical, but they were meant
 as an example of how such a proposal might look.

 Since I know that the Python implementation will not change
 to support per-module or per-scope redefinitions for 1+2
 and builtin object constructors, the only feasible mechanism
 is through some sort of alternate grammar that compiles to
 either Python or directly to the Python virtual machine.

 One such way is through import hooks.

 Yes, this is a mess, this is just like preparsing. Well, not like --
 this is preparsing.


 It's not preparsing.  It's parsing.  There's no pre about it.
 It's not a macro language.  My ply4python tutorial compiles
 various Python-like languages to the Python virtual machine
 bytecode.


 I mean what is the difference between [1] and [2]?

 I want to see how you would extend Python to support such
 a mechanism before I worried about how to interpret it.

 Or in other words, the difference between [1] and [2]
 is that [2] can be fully evaluated through simple static
 analysis, while [1] cannot.

 BTW, this is unexpected.  Python does constant folding
 of that expression, but only with specific settings.

  >>> import dis
  >>> def f():
  ...     print 1/2*3/4
  ...
  >>> dis.dis(f)
    2           0 LOAD_CONST               1 (1)
                3 LOAD_CONST               2 (2)
                6 BINARY_DIVIDE
                7 LOAD_CONST               3 (3)
               10 BINARY_MULTIPLY
               11 LOAD_CONST               4 (4)
               14 BINARY_DIVIDE
               15 PRINT_ITEM
               16 PRINT_NEWLINE
               17 LOAD_CONST               0 (None)
               20 RETURN_VALUE

  >>> from __future__ import division
  >>> def f():
  ...     print 1/2*3/4
  ...
  >>> dis.dis(f)
    2           0 LOAD_CONST               7 (0.375)
                3 PRINT_ITEM
                4 PRINT_NEWLINE
                5 LOAD_CONST               0 (None)
                8 RETURN_VALUE



 The only way I can see to do what you want requires
 multimethods, which don't currently exist in Python
 except as third-party extensions.  The one I know about,
 from Philip J. Eby, works on a global-level, not module
 level, because of how registration happens, so it does
 not support what you would like.

One way to fix that would be to use the trick that Travis Oliphant
told me at EuroSciPy -- hack the while (or if) statement and do the
preparsing in there.

So clearly the language seems to support that, so imho it could be
made more official. E.g. you would write:

while sympy:
e = 1/2

and e would be Integer(1)/Integer(2).

But anyway, it's kind of hackish and I don't know what to say more
about it, besides what I already said. The problem is that I don't
have time to dig more into Python internals and without it it seems I
cannot provide more constructive answer besides I want some way to
avoid Python reducing 1/2 to 0.

Generally I believe that if there is will, it can always be done. :)

Ondrej
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Christian Heimes
Ondrej Certik wrote:
   Ok, in the current state, you don't know either what's going to
 happen. If you write
 
 In [1]: x/2*3/4
 
 you have no idea what the result is going to be, you need to analyze
 x.__div__() and start from there. But if you write
 
 In [2]: 1/2*3/4
 
 currently you know it will be 0. But imho you could as well analyze
 the global __mul__ (or global __int__, depending on how this would be
 technically implemented) to see what's going to happen.
 
 I mean what is the difference between [1] and [2]?

Andrew has already pointed it out very well. I'd like to comment on your 
proposal from the perspective of a Python core developer, as well as the 
perspective of somebody who has worked with Guido for more than a year.

I'd bet my life that Guido is never ever going to allow it. The core 
types are fundamental to the Python interpreter. Even the possibility of 
pluggable type methods would make the implementation slower, more 
fragile and much more complicated. We'd have to remove several speed 
tricks and special cases for e.g. ints and replace them with slower 
implementations.

But don't give up hope yet! During the alpha phase of Python 3.0 and the 
revamping of the decimal module, some core developers had an even better 
idea. We were discussing the possibility of making decimals the default 
for float literals. The idea was rejected eventually, but it gave birth 
to yet another idea. What about making the *result* of a literal 
pluggable? Python creates a float for the literal 1.0. Some header 
in a module could replace the default target 'float' with e.g. 
decimal.Decimal.

Example syntax (rough idea):

>>> type(1.0)
<type 'float'>
>>> with float as from decimal import Decimal
>>> type(1.0)
<class 'decimal.Decimal'>

Wouldn't that solve your general problem more elegantly without breaking 
other modules?
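For what it's worth, the stdlib json module already exposes exactly this
kind of pluggable literal result on a small scale: the caller chooses the
type that numeric literals in the parsed text produce.

```python
import json
from decimal import Decimal

# The caller, not the parser, decides what type the literal 1.1 maps to.
print(type(json.loads("1.1")).__name__)              # float
print(repr(json.loads("1.1", parse_float=Decimal)))  # Decimal('1.1')
```

The proposal above would do the same for literals in Python source itself.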

Christian

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Ondrej Certik
Hi Christian,

On Mon, Aug 18, 2008 at 11:22 PM, Christian Heimes [EMAIL PROTECTED] wrote:
 Ondrej Certik wrote:
   Ok, in the current state, you don't know either what's going to
 happen. If you write

 In [1]: x/2*3/4

 you have no idea what the result is going to be, you need to analyze
 x.__div__() and start from there. But if you write

 In [2]: 1/2*3/4

 currently you know it will be 0. But imho you could as well analyze
 the global __mul__ (or global __int__, depending on how this would be
 technically implemented) to see what's going to happen.

 I mean what is the difference between [1] and [2]?

 Andrew has already pointed it out very well. I'd like to comment on your
 proposal from the perspective of a Python core developer, as well as the
 perspective of somebody who has worked with Guido for more than a year.

 I'd bet my life that Guido is never ever going to allow it. The core
 types are fundamental to the Python interpreter. Even the possibility of
 pluggable type methods would make the implementation slower, more
 fragile and much more complicated. We'd have to remove several speed
 tricks and special cases for e.g. ints and replace them with slower
 implementations.

 But don't give up hope yet! During the alpha phase of Python 3.0 and the
 revamping of the decimal module, some core developers had an even better
 idea. We were discussing the possibility of making decimals the default
 for float literals. The idea was rejected eventually, but it gave birth
 to yet another idea. What about making the *result* of a literal
 pluggable? Python creates a float for the literal 1.0. Some header
 in a module could replace the default target 'float' with e.g.
 decimal.Decimal.

 Example syntax (rough idea):

   >>> type(1.0)
   <type 'float'>
   >>> with float as from decimal import Decimal
   >>> type(1.0)
   <class 'decimal.Decimal'>

 Wouldn't that solve your general problem more elegantly without breaking
 other modules?

It absolutely would. Thanks very much for the email.  How is your
proposal (redefine literals) different to just saying to Python --
hey, just call my class when someone writes 1, e.g. proposition 2)
from my first email? Or am I missing something.


I agree with the technical reasoning about why some particular solution
is not good. I.e. I didn't make any actual proposal, I am just trying to
find a way so that we don't have to always type

In [3]: Integer(1)/2 * x

sometimes, but

In [4]: x/2

some other times, if you know what I mean. Both do the same thing, but
[3] is very annoying to write and a source of common mistakes that
people make with SymPy -- writing 1/2 instead simply returns 0.

Ondrej
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy 1.2.0b2 released

2008-08-18 Thread Robert Kern
On Mon, Aug 18, 2008 at 15:04, Travis E. Oliphant
[EMAIL PROTECTED] wrote:
 I say go ahead including changing #1 and #4.  Let's leave 5 for the moment.

I think we can just delete all of the test() and bench() functions
except for numpy.{bench,test}(). That way, there is no code
duplication.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Andrew Dalke
On Aug 18, 2008, at 11:22 PM, Christian Heimes wrote:
 Example syntax (rough idea):

 type(1.0)
 type 'float'
 with float as from decimal import Decimal
 type(1.0)
 class 'decimal.Decimal'

When would this "with float ..." be considered valid?


For example, could I define things before asking
for a redefinition?


def f(a=1.0):
   print a+2.0

with float as from decimal import Decimal


In this case would the 1.0 be a float and
the 2.0 be a decimal?  Or would they both
be floats?

Would the following be allowed?


def f(x):
   with float as from decimal import Decimal
   return 1.0

and would that affect things in the function
scope or the entire module scope?  What if
I stuck in a global int in that function?


What about the following, which uses gmpy
if it's available, otherwise uses decimal.

try:
   import gmpy
   # Pretend there's a bug in some versions
   # of the library, and don't use gmpy
   # if that bug is present.
   if gmpy.mpf(3.0) == 3.0:
 with float as from gmpy import gmpy.mpf
   else:
 # work around the hypothetical bug
 raise ImportError
except ImportError:
   with float as from decimal import Decimal

Hmm.. though this could be handled with
some support library, making the result be

with float as support_library.the_float_to_use


The simplest is that if that statement type appears
in the module at all then int/float/complex/string/
dict/list creation goes through a module function,
and the statement simply redefines that function.
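That module-function indirection can be sketched in a few lines (the
default_int/new_int/use_int_factory names are invented for illustration):

```python
# Literal construction routed through a replaceable module-level factory.
def default_int(value):
    return value               # plain Python int

_int_factory = default_int

def use_int_factory(factory):
    """What a 'with int as ...' statement would effectively do."""
    global _int_factory
    _int_factory = factory

def new_int(value):
    return _int_factory(value)

print(new_int(1) + new_int(2))   # 3   -- what '1 + 2' would compile to
use_int_factory(float)
print(new_int(1) + new_int(2))   # 3.0 -- after redefining the factory
```

Every literal then pays the cost of one extra function call, which is
the slowdown discussed below.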

This would slow down the entire module, but if
it's what the developer wanted...  But I really
don't like the unexpected slow down and I think
that will be a constant gotcha, with people
rewriting code from

   sum(1 for x in data if x < 100)

into something like

   one = 1
   one_hundred = 100
   # or possibly: one = __builtin__.int(1)
   sum(one for x in data if x < one_hundred)

in order to eke out performance by not
calling new_int(1) so many times.


Also, in the use-case for SymPy it means that
all modules would start with:

   with float as from SymPy import float
   with int as from SymPy import int
   with complex as from SymPy import complex

which makes for tedious boilerplate.  Better might be

   with (float, int, complex) as from SymPy import (float, int, complex)



If there's support for

   with str as 

then how does one create an actual Python string
in a module where string is redefined?  For example,

   with str as from MyModule import string
   open(__builtins__.str("/path/to/nowhere"))

wouldn't necessarily work because /path/to gets
converted to something else, and that something else
might not be convertible back to a Python string.

Blowing my own horn again, I wrote python4ply
precisely to support experimentation like this.
We can discuss pros and cons but with a change
this extensive it's hard to judge the usefulness
without some real-world experience.  And I
didn't want to wait for PyPy. ;)


Andrew
[EMAIL PROTECTED]


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Possible new multiplication operators for Python

2008-08-18 Thread Robert Kern
On Mon, Aug 18, 2008 at 12:21, Charles R Harris
[EMAIL PROTECTED] wrote:
 Tim Hochberg proposed using the call operator for matrix multiplication,
 i.e.,

 A(B(C))

 Which has the advantage of using an existing operator. It looks like
 function composition, which isn't that far off the mark if matrices are
 looked at as mappings, but might take a bit of getting used to.

It's certainly worth exploring. My personal opinion is that I could
just use a single operator for doing matrix multiplication. I don't
want to see two variants of every single operator.
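For the record, the call-operator variant quoted above can be tried with
a few lines of plain Python (this Matrix class is a toy written for the
example, not NumPy API):

```python
class Matrix:
    """Toy matrix where A(B) performs matrix multiplication, so the
    product A*B*C is written A(B(C)), as in Tim Hochberg's suggestion."""
    def __init__(self, rows):
        self.rows = [list(r) for r in rows]
    def __call__(self, other):
        # Ordinary row-by-column product of self and other.
        return Matrix([[sum(a * b for a, b in zip(row, col))
                        for col in zip(*other.rows)]
                       for row in self.rows])

A = Matrix([[1, 0], [0, 1]])   # identity
B = Matrix([[1, 2], [3, 4]])
C = Matrix([[1], [0]])
print(A(B(C)).rows)            # [[1], [3]]
```

It reads like function composition, which is the point of contention.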

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Dag Sverre Seljebotn
Ondrej Certik wrote:
 Hi Christian,

 On Mon, Aug 18, 2008 at 11:22 PM, Christian Heimes [EMAIL PROTECTED]
 wrote:
 Ondrej Certik wrote:
   Ok, in the current state, you don't know either what's going to
 happen. If you write

 In [1]: x/2*3/4

 you have no idea what the result is going to be, you need to analyze
 x.__div__() and start from there. But if you write

 In [2]: 1/2*3/4

 currently you know it will be 0. But imho you could as well analyze
 the global __mul__ (or global __int__, depending on how this would be
 technically implemented) to see what's going to happen.

 I mean what is the difference between [1] and [2]?

 Andrew has already pointed it out very well. I'd like to comment on your
 proposal from the perspective of a Python core developer, as well as the
 perspective of somebody who has worked with Guido for more than a year.

 I'd bet my life that Guido is never ever going to allow it. The core
 types are fundamental to the Python interpreter. Even the possibility of
 pluggable type methods would make the implementation slower, more
 fragile and much more complicated. We'd have to remove several speed
 tricks and special cases for e.g. ints and replace them with slower
 implementations.

 But don't give up hope yet! During the alpha phase of Python 3.0 and the
 revamping of the decimal module, some core developers had an even better
 idea. We were discussing the possibility of making decimals the default
 for float literals. The idea was rejected eventually, but it gave birth
 to yet another idea. What about making the *result* of a literal
 pluggable? Python creates a float for the literal 1.0. Some header
 in a module could replace the default target 'float' with e.g.
 decimal.Decimal.

 Example syntax (rough idea):

   >>> type(1.0)
   <type 'float'>
   >>> with float as from decimal import Decimal
   >>> type(1.0)
   <class 'decimal.Decimal'>

 Wouldn't that solve your general problem more elegantly without breaking
 other modules?

 It absolutely would. Thanks very much for the email.  How is your
 proposal (redefine literals) different to just saying to Python --
 hey, just call my class when someone writes 1, e.g. proposition 2)
 from my first email? Or am I missing something.


 I agree with the technical reasoning about why some particular solution
 is not good. I.e. I didn't make any actual proposal, I am just trying to
 find a way so that we don't have to always type

 In [3]: Integer(1)/2 * x

 sometimes, but

 In [4]: x/2

 some other times, if you know what I mean. Both do the same thing, but
 [3] is very annoying to write and a source of common mistakes that
 people make with SymPy -- writing 1/2 instead simply returns 0.

It should be mentioned here that this is exactly what the SAGE preparser
does -- every "2" turns into Integer(2), and that idea definitely seems to
work for them. (And so if you want Python behaviour in SAGE, you simply do
Integer = int, while normally 1/2 becomes a rational object in QQ
because one object in ZZ is divided by another one).
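The preparsing step described here can be sketched in a few lines
(fractions.Fraction stands in below for SAGE's Integer; a real preparser
must also skip floats, string contents, and attribute access):

```python
import re
from fractions import Fraction as Integer   # stand-in for SAGE's Integer

def preparse(source):
    # Toy SAGE-style preparser: wrap each bare integer literal in
    # Integer(...).  The real one handles many more cases.
    return re.sub(r"\b(\d+)\b", r"Integer(\1)", source)

print(preparse("1/2*3/4"))        # Integer(1)/Integer(2)*Integer(3)/Integer(4)
print(eval(preparse("1/2*3/4")))  # 3/8, not 0
```

With the literals wrapped, division produces exact rationals instead of
truncated ints.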

Pluggable literals seems like a much better idea to me, not from pragmatic
but from design reasons -- I would prefer them even in the (very unlikely)
case that they were slower and harder to implement than overriding the
operators for the builtin types.

(But I'm just a bystander here.)

Dag Sverre

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Christian Heimes
Andrew Dalke wrote:
 When would this "with float ..." be considered valid?

[long posting]

Oh h... what have I done ... *g*

Slow down, please. For now there are no concrete plans what-so-ever to 
implement the feature in the near future. Some developers have expressed 
their interest in a way to alter the resulting type of a literal. It was 
my intention to show you that we have discussed the idea, too.

Now for the with type as from import syntax. I came up with the syntax 
idea about an hour ago. I tried to come up with some nice syntax that 
reuses existing keywords. IMHO it has a nice ring. Other possibilities I 
came up with:

   def float as factory
   def float as from module import factory
   with float yield factory
   with float yield from module import factory

After some careful thinking I'm in favor of "with ... yield ...". It's 
less ambiguous and can't be mistaken for "with open(filename) as fh".

The ideas needs a good PEP. You are definitely up to something. You also 
came up with a list of possible issues and corner cases. Are you 
interested in pursuing the proposal? *wink*

Christian

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Ondrej Certik
On Tue, Aug 19, 2008 at 1:06 AM, Christian Heimes [EMAIL PROTECTED] wrote:
 Andrew Dalke wrote:
 When would this with float ...  considered valid?

 [long posting]

 Oh h... what have I done ... *g*

 Slow down, please. For now there are no concrete plans what-so-ever to
 implement the feature in the near future. Some developers have expressed
 their interest in a way to alter the resulting type of a literal. It was
 my intention to show you that we have discussed the idea, too.

 Now for the with type as from import syntax. I came up with the syntax
 idea about an hour ago. I tried to come up with some nice syntax that
 reuses existing keywords. IMHO it has a nice ring. Other possibilities I
 came up with:

   def float as factory
   def float as from module import factory
   with float yield factory
   with float yield from module import factory

 After some careful thinking I'm in favor of "with ... yield ...". It's
 less ambiguous and can't be mistaken for "with open(filename) as fh".

 The ideas needs a good PEP. You are definitely up to something. You also
 came up with a list of possible issues and corner cases. Are you
 interested in pursuing the proposal? *wink*

Are we able to provide an actual patch to Python that implements this?
If so, then I am.
Imho the proposal should come with an actual patch, otherwise it's
difficult to judge it.

Ondrej
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy 1.2.0b2 released

2008-08-18 Thread Stéfan van der Walt
2008/8/18 Travis E. Oliphant [EMAIL PROTECTED]:
 I say go ahead including changing #1 and #4.  Let's leave 5 for the moment.

I ran several benchmarks and made sure that these imports take a
minimal amount of time.  Wouldn't we want users to have access to
the doc framework without doing anything special?  And, yes, some of
the documents are empty, but a number of them have already been
written.

I still think we are going about this the wrong way.  We have two
different sets of expectations, and we can't satisfy both by ripping
everything apart.  I'd much prefer two entry points into NumPy: one
for people who need speed, and one for those who need the convenience
of everything being at hand.

Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Andrew Dalke
On Aug 19, 2008, at 1:06 AM, Christian Heimes wrote:

 [long posting]

 Oh h... what have I done ... *g*

*shrug* I write long emails.  I've been told that
by several people.  It's probably a bad thing.

 The ideas needs a good PEP. You are definitely up to something. You  
 also
 came up with a list of possible issues and corner cases. Are you
 interested in pursuing the proposal? *wink*

No.  I'm doing this to head off endless discussion about
some proposed hypothetical future Python.  If someone wants
to experiment with new operators, new ways of handling
int/float/whatever constructions, new lexical forms, or
whatever then IMNSHO the best thing is to implement it and
see if it's actually useful.

In most cases there are seriously hard questions that
are hand-waved away, because it's more fun to talk about
language design than to implement one and because
it's hard to tweak CPython to try out new things.

Hence I wrote python4ply and an extensive tutorial
specifically so people could easily jump into the
implementation details, find the sticky points, then
either quickly reject a bad idea or iterate to get a
good solution.  Many of these changes can be done
with less than a couple of days of work.

Then to show that there are sticky points, I listed a few.

BTW, it's *fun* to modify an existing language and
afterwards you know a secret - that programming
languages are just flimsy facades held together by
a shared hallucination.  Like in a dream, change
things too much or leave gaps and people notice the
illogic, wake up, and look elsewhere for refuge from
harsh digital machine reality.

Hmmm, it really is too late for me.

Andrew
[EMAIL PROTECTED]


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy 1.2.0b2 released

2008-08-18 Thread Travis E. Oliphant
Stéfan van der Walt wrote:
 2008/8/18 Travis E. Oliphant [EMAIL PROTECTED]:
   
 I say go ahead including changing #1 and #4.  Let's leave 5 for the moment.
 

 I ran several benchmarks and made sure that these imports take a
 minimal amount of time.  Wouldn't we want users to have access with
 the doc framework without doing anything special?  And, yes, some of
 the documents are empty, but a number of them have already been
 written.

 I still think we are going about this the wrong way.  We have two
 different sets of expectations, and we can't satisfy both by ripping
 everything apart.  I'd much prefer two entry points into NumPy: one
 for people who need speed, and one for those who need the convenience
 of everything being at hand.

   
I think you are right Stefan.   It would be great to have another 
name-space that is lighter from which numpy imports.   But there is no 
reason to hold up these useful speed increases waiting for a better 
solution.

-Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Scott Ransom
On Tue, Aug 19, 2008 at 01:57:20AM +0200, Andrew Dalke wrote:

 BTW, it's *fun* to modify an existing language and
 afterwards you know a secret - that programming
 languages are just flimsy facades held together by
 a shared hallucination.  Like in a dream, change
 things too much or leave gaps and people notice the
 illogic, wake up, and look elsewhere for refuge from
 harsh digital machine reality.

That is the quote of the month for sure.

Scott

-- 
Scott M. RansomAddress:  NRAO
Phone:  (434) 296-0320   520 Edgemont Rd.
email:  [EMAIL PROTECTED] Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B  FFCA 9BFA B6FF FFD3 2989
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Possible new multiplication operators for Python

2008-08-18 Thread Gael Varoquaux
On Sun, Aug 17, 2008 at 04:28:55PM -0400, Alan G Isaac wrote:
 That said, what kind of problems do you have in mind?

A lot of software still doesn't deal well with unicode (wxPython's
unicode situation under Windows, for instance, is interesting). But what
I am most worried about is not being able to enter the symbol, because I
am in an editor I don't know, and the symbol is not on my keyboard. This
has happened to me more than once with unicode. So I see the loss, and
not the gain.

I actually think PEP 225 is pretty good. I have no big opinion about ~
vs @.

Gaël
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] global overloading of 1+1 - MyClass(1, 1)

2008-08-18 Thread Travis Vaught

On Aug 18, 2008, at 6:57 PM, Andrew Dalke wrote:
 ...

 BTW, it's *fun* to modify an existing language and
 afterwards you know a secret - that programming
 languages are just flimsy facades held together by
 a shared hallucination.  Like in a dream, change
 things too much or leave gaps and people notice the
 illogic, wake up, and look elsewhere for refuge from
 harsh digital machine reality.


I think you meant to say shared abstraction, right Andrew ... (call  
the guards!)

Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy 1.2.0b2 released

2008-08-18 Thread Stéfan van der Walt
2008/8/18 Andrew Dalke [EMAIL PROTECTED]:
 How do users know that those are present?  How do users
 view those docs?  You're the one who added that directory, yes?,
 so you've probably got the most experience with it.  I
 couldn't figure out it, and the README in the doc/ directory
 wasn't helpful.

The numpy/doc directory existed before I implemented this, which may
explain some odd design decisions.  Usage is meant to happen via
help or IPython's ?:

In [2]: np.doc?
Type:           module
Base Class:     <type 'module'>
String Form:    <module 'numpy.doc' from
'/Users/stefan/lib/python2.5/site-packages/numpy/doc/__init__.pyc'>
Namespace:      Interactive
File:           /Users/stefan/lib/python2.5/site-packages/numpy/doc/__init__.py
Docstring:
The following topics are available:

- basics
- broadcasting
- creation
- glossary
- howtofind
- indexing
- internals
- io
- jargon
- methods_vs_functions
- misc
- performance
- structured_arrays
- ufuncs

In [3]: np.doc.broadcasting?
Type:           module
Base Class:     <type 'module'>
String Form:    <module 'numpy.doc.reference.broadcasting' from
'/Users/stefan/lib/python2.5/site-packages/numpy/doc/reference/broadcasting.pyc'>
Namespace:      Interactive
File:
/Users/stefan/lib/python2.5/site-packages/numpy/doc/reference/broadcasting.py
Docstring:

Broadcasting over arrays


[...]

 I agree, the load time is very small.  But with all my patches
 in place the import time goes down from about 0.18 second to
 about 0.10 seconds.  Times add up.

Here are some of the timings I did, for interest's sake:

For each trial, I included N copies of the NumPy documentation
guide as topics under numpy.doc, and took the best of 3 trials.  The
current number of topics is 14.

Without numpy.doc:

real0m0.259s
user0m0.082s
sys 0m0.169s



200 files

real0m0.341s
user0m0.095s
sys 0m0.232s

-

100

real0m0.282s
user0m0.087s
sys 0m0.190s

-

50

real0m0.273s
user0m0.085s
sys 0m0.179s
[EMAIL PROTECTED]:/tmp$ time python -c 'import numpy'

-

20

real0m0.262s
user0m0.083s
sys 0m0.173s



 I still think we are going about this the wrong way.  We have two
 different sets of expectations, and we can't satisfy both by ripping
 everything apart.  I'd much prefer two entry points into NumPy: one
 for people who need speed, and one for those who need the convenience
 of everything being at hand.

 I thought I was very careful to not rip things apart. :(

 Everything I did was API compatible except for the
 proposed removals of numpy.ctypeslib and numpy.doc.  I
 chose ctypeslib because importing ctypes takes 10% of
 the total load time on my box.  I chose numpy.doc because
 I couldn't figure out how it's used.

Sorry, I did not mean to make you sound like a back-yard surgeon!
Maybe hyperbole is best avoided.  I am quite happy with the non-API
changing modifications you propose, and probably with the others too:
I just want us to get our heads together and decide on a policy before
we proceed (see my reply to Travis).

 It seems like every time I use num* (which isn't often) I
 need to learn a new library.  I don't want to switch again
 for a few years.

Sure, we all need to get work done.  But in we I include those who
already wrote apps using numpy.ctypeslib.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Possible new multiplication operators for Python

2008-08-18 Thread Stéfan van der Walt
2008/8/18 Gael Varoquaux [EMAIL PROTECTED]:
 I actually think PEP 225 is pretty good. I have not big opinion about ~
 vs @.

Both of these already have meanings (bitwise not and decorator),
so it's pretty much a toss-up for me.  In a way, the concept of a
decorator could still apply:

@* takes the function `mul` and returns a new function, array_mul.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion