Re: [Numpy-discussion] float32 to float64 casting

2012-11-17 Thread Benjamin Root
On Saturday, November 17, 2012, Charles R Harris wrote:

>
>
> On Sat, Nov 17, 2012 at 1:00 PM, Olivier Delalleau wrote:
>
>> 2012/11/17 Gökhan Sever:
>>
>>>
>>>
>>> On Sat, Nov 17, 2012 at 9:47 AM, Nathaniel Smith wrote:
>>>
 On Fri, Nov 16, 2012 at 9:53 PM, Gökhan Sever wrote:
 > Thanks for the explanations.
 >
 > In either case, I was expecting to get float32 as the resulting data
 > type, since float32 is large enough to contain the result. I am wondering
 > if changing the casting rule this way would require a lot of modification
 > in the NumPy code. Maybe as an alternative to the current casting
 > mechanism?
 >
 > I like the way that NumPy can convert to float64, as if these data types
 > were a continuation of each other. But the conversion might just happen
 > too early -- at least in my opinion, as demonstrated in my example.
 >
 > For instance, comparing this example to IDL surprises me:
 >
 > I16 np.float32()*5e38
 > O16 2.77749998e+42
 >
 > I17 (np.float32()*5e38).dtype
 > O17 dtype('float64')

 In this case, what's going on is that 5e38 is a Python float object,
 and Python float objects have double-precision, i.e., they're
 equivalent to np.float64's. So you're multiplying a float32 and a
 float64. I think most people will agree that in this situation it's
 better to use float64 for the output?

 -n
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

>>>
>>> OK, I see your point. Mixed operations between Python numeric objects and
>>> NumPy data objects require more attention.
>>>
>>> The following causes a float32 overflow -- rather than casting to float64
>>> as in the Python float multiplication case -- and behaves like IDL.
>>>
>>> I3 (np.float32()*np.float32(5e38))
>>> O3 inf
>>>
>>> However, these two still surprise me:
>>>
>>> I5 (np.float32()*1).dtype
>>> O5 dtype('float64')
>>>
>>> I6 (np.float32()*np.int32(1)).dtype
>>> O6 dtype('float64')
>>>
>>
>> That's because the current way of finding out the result's dtype is based
>> on input dtypes only (not on numeric values), and numpy.can_cast('int32',
>> 'float32') is False, while numpy.can_cast('int32', 'float64') is True (and
>> same for int64).
>> Thus it decides to cast to float64.
>>
>
> It might be nice to revisit all the casting rules at some point, but
> current experience suggests that any changes will lead to cries of pain and
> outrage ;)
>
> Chuck
>
>
Can we at least put these examples into the tests?  Also, I think the
bigger issue was that, unlike the deprecation of a function, particular
operations are much harder to grep for, especially in a dynamic language
like Python. What were intended as minor bug fixes ended up having a much
larger impact.

Has the casting table been added to the tests?  I think that would bring
much more confidence and assurance for future changes.
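
For reference, here is a small sketch (not from the actual NumPy test suite)
of what pinning down the promotion table for the scalar products discussed in
this thread could look like. Note that the behavior for Python int/float
operands has changed across NumPy versions, so the printed dtypes are
version-dependent:

import numpy as np

# Sketch only: enumerate the mixed-type products from this thread and print
# the resulting dtypes.
cases = [
    ("float32 * float32", np.float32(1) * np.float32(1)),
    ("float32 * float64", np.float32(1) * np.float64(1)),
    ("float32 * Python float", np.float32(1) * 1.0),
    ("float32 * Python int", np.float32(1) * 1),
    ("float32 * int32", np.float32(1) * np.int32(1)),
]
for label, result in cases:
    print("%-24s -> %s" % (label, result.dtype))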

Cheers!
Ben Root
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] float32 to float64 casting

2012-11-17 Thread Charles R Harris
On Sat, Nov 17, 2012 at 1:00 PM, Olivier Delalleau  wrote:

> 2012/11/17 Gökhan Sever 
>
>>
>>
>> On Sat, Nov 17, 2012 at 9:47 AM, Nathaniel Smith  wrote:
>>
>>> On Fri, Nov 16, 2012 at 9:53 PM, Gökhan Sever 
>>> wrote:
>>> > Thanks for the explanations.
>>> >
>>> > In either case, I was expecting to get float32 as the resulting data
>>> > type, since float32 is large enough to contain the result. I am
>>> > wondering if changing the casting rule this way would require a lot of
>>> > modification in the NumPy code. Maybe as an alternative to the current
>>> > casting mechanism?
>>> >
>>> > I like the way that NumPy can convert to float64, as if these data
>>> > types were a continuation of each other. But the conversion might just
>>> > happen too early -- at least in my opinion, as demonstrated in my
>>> > example.
>>> >
>>> > For instance, comparing this example to IDL surprises me:
>>> >
>>> > I16 np.float32()*5e38
>>> > O16 2.77749998e+42
>>> >
>>> > I17 (np.float32()*5e38).dtype
>>> > O17 dtype('float64')
>>>
>>> In this case, what's going on is that 5e38 is a Python float object,
>>> and Python float objects have double-precision, i.e., they're
>>> equivalent to np.float64's. So you're multiplying a float32 and a
>>> float64. I think most people will agree that in this situation it's
>>> better to use float64 for the output?
>>>
>>> -n
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>
>> OK, I see your point. Mixed operations between Python numeric objects and
>> NumPy data objects require more attention.
>>
>> The following causes a float32 overflow -- rather than casting to float64
>> as in the Python float multiplication case -- and behaves like IDL.
>>
>> I3 (np.float32()*np.float32(5e38))
>> O3 inf
>>
>> However, these two still surprise me:
>>
>> I5 (np.float32()*1).dtype
>> O5 dtype('float64')
>>
>> I6 (np.float32()*np.int32(1)).dtype
>> O6 dtype('float64')
>>
>
> That's because the current way of finding out the result's dtype is based
> on input dtypes only (not on numeric values), and numpy.can_cast('int32',
> 'float32') is False, while numpy.can_cast('int32', 'float64') is True (and
> same for int64).
> Thus it decides to cast to float64.
>

It might be nice to revisit all the casting rules at some point, but
current experience suggests that any changes will lead to cries of pain and
outrage ;)

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] float32 to float64 casting

2012-11-17 Thread Olivier Delalleau
2012/11/17 Gökhan Sever 

>
>
> On Sat, Nov 17, 2012 at 9:47 AM, Nathaniel Smith  wrote:
>
>> On Fri, Nov 16, 2012 at 9:53 PM, Gökhan Sever 
>> wrote:
>> > Thanks for the explanations.
>> >
>> > In either case, I was expecting to get float32 as the resulting data
>> > type, since float32 is large enough to contain the result. I am
>> > wondering if changing the casting rule this way would require a lot of
>> > modification in the NumPy code. Maybe as an alternative to the current
>> > casting mechanism?
>> >
>> > I like the way that NumPy can convert to float64, as if these data
>> > types were a continuation of each other. But the conversion might just
>> > happen too early -- at least in my opinion, as demonstrated in my
>> > example.
>> >
>> > For instance, comparing this example to IDL surprises me:
>> >
>> > I16 np.float32()*5e38
>> > O16 2.77749998e+42
>> >
>> > I17 (np.float32()*5e38).dtype
>> > O17 dtype('float64')
>>
>> In this case, what's going on is that 5e38 is a Python float object,
>> and Python float objects have double-precision, i.e., they're
>> equivalent to np.float64's. So you're multiplying a float32 and a
>> float64. I think most people will agree that in this situation it's
>> better to use float64 for the output?
>>
>> -n
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> OK, I see your point. Mixed operations between Python numeric objects and
> NumPy data objects require more attention.
>
> The following causes a float32 overflow -- rather than casting to float64
> as in the Python float multiplication case -- and behaves like IDL.
>
> I3 (np.float32()*np.float32(5e38))
> O3 inf
>
> However, these two still surprise me:
>
> I5 (np.float32()*1).dtype
> O5 dtype('float64')
>
> I6 (np.float32()*np.int32(1)).dtype
> O6 dtype('float64')
>

That's because the current way of finding out the result's dtype is based
on input dtypes only (not on numeric values), and numpy.can_cast('int32',
'float32') is False, while numpy.can_cast('int32', 'float64') is True (and
same for int64).
Thus it decides to cast to float64.
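
A quick way to check the can_cast relationships described above (the default
rule is "safe" casting, which is what drives this promotion decision):

import numpy as np

print(np.can_cast('int32', 'float32'))    # False: float32 has only a 24-bit mantissa
print(np.can_cast('int32', 'float64'))    # True
print(np.can_cast('int64', 'float32'))    # False
print(np.can_cast('int64', 'float64'))    # True
print(np.can_cast('float32', 'float64'))  # True, hence float32 * float64 -> float64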

-=- Olivier
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] float32 to float64 casting

2012-11-17 Thread Gökhan Sever
On Sat, Nov 17, 2012 at 9:47 AM, Nathaniel Smith  wrote:

> On Fri, Nov 16, 2012 at 9:53 PM, Gökhan Sever 
> wrote:
> > Thanks for the explanations.
> >
> > In either case, I was expecting to get float32 as the resulting data
> > type, since float32 is large enough to contain the result. I am wondering
> > if changing the casting rule this way would require a lot of modification
> > in the NumPy code. Maybe as an alternative to the current casting
> > mechanism?
> >
> > I like the way that NumPy can convert to float64, as if these data types
> > were a continuation of each other. But the conversion might just happen
> > too early -- at least in my opinion, as demonstrated in my example.
> >
> > For instance, comparing this example to IDL surprises me:
> >
> > I16 np.float32()*5e38
> > O16 2.77749998e+42
> >
> > I17 (np.float32()*5e38).dtype
> > O17 dtype('float64')
>
> In this case, what's going on is that 5e38 is a Python float object,
> and Python float objects have double-precision, i.e., they're
> equivalent to np.float64's. So you're multiplying a float32 and a
> float64. I think most people will agree that in this situation it's
> better to use float64 for the output?
>
> -n
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

OK, I see your point. Mixed operations between Python numeric objects and
NumPy data objects require more attention.

The following causes a float32 overflow -- rather than casting to float64 as
in the Python float multiplication case -- and behaves like IDL.

I3 (np.float32()*np.float32(5e38))
O3 inf

However, these two still surprise me:

I5 (np.float32()*1).dtype
O5 dtype('float64')

I6 (np.float32()*np.int32(1)).dtype
O6 dtype('float64')
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] float32 to float64 casting

2012-11-17 Thread Nathaniel Smith
On Fri, Nov 16, 2012 at 9:53 PM, Gökhan Sever  wrote:
> Thanks for the explanations.
>
> In either case, I was expecting to get float32 as the resulting data type,
> since float32 is large enough to contain the result. I am wondering if
> changing the casting rule this way would require a lot of modification in
> the NumPy code. Maybe as an alternative to the current casting mechanism?
>
> I like the way that NumPy can convert to float64, as if these data types
> were a continuation of each other. But the conversion might just happen too
> early -- at least in my opinion, as demonstrated in my example.
>
> For instance, comparing this example to IDL surprises me:
>
> I16 np.float32()*5e38
> O16 2.77749998e+42
>
> I17 (np.float32()*5e38).dtype
> O17 dtype('float64')

In this case, what's going on is that 5e38 is a Python float object,
and Python float objects have double-precision, i.e., they're
equivalent to np.float64's. So you're multiplying a float32 and a
float64. I think most people will agree that in this situation it's
better to use float64 for the output?
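
To make that concrete, here is a tiny illustration that a Python float really
is double precision, i.e. the same thing as np.float64:

import sys
import numpy as np

print(sys.float_info.mant_dig)          # 53 mantissa bits, just like float64
print(np.finfo(np.float64).nmant + 1)   # 53 as well (nmant counts stored bits)
print(np.float64(5e38) == 5e38)         # True: the literal is already a double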

-n
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Assignment function with a signature similar to take?

2012-11-17 Thread Eric Moore
Is there a function that operates like 'take' but does assignment?
Specifically, one that takes indices and an axis?  As far as I can tell no
such function exists.  Is there any particular reason?

One can fake such a thing by doing (code untested):

s = len(a.shape) * [np.s_[:]]    # one full slice per dimension
s[axis] = np.s_[1::2]            # odd positions along the chosen axis
a[tuple(s)] = b.take(np.arange(1, b.shape[axis], 2), axis)

Or by using np.rollaxis:

a = np.rollaxis(a, axis, len(a.shape))   # move the target axis to the end
a[..., 1::2] = np.rollaxis(b, axis, len(b.shape))[..., 1::2]   # roll b the same way
a = np.rollaxis(a, len(a.shape)-1, axis)  # move the axis back

I don't really think that either of these is particularly clear, though I
would probably prefer the rollaxis solution.
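
For what it's worth, here is a self-contained version of the slice-tuple
approach; the shapes and the axis value are just illustrative choices:

import numpy as np

a = np.zeros((3, 6))
b = np.arange(18, dtype=float).reshape(3, 6)
axis = 1

s = [np.s_[:]] * a.ndim        # one full slice per dimension
s[axis] = np.s_[1::2]          # odd positions along the chosen axis
a[tuple(s)] = b.take(np.arange(1, b.shape[axis], 2), axis=axis)

print(a)                       # columns 1, 3, 5 now hold b's odd columns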

Also, while I'm here, what about having take also be able to use a slice 
object in lieu of a collection of indices?

-Eric
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] the fast way to loop over ndarray elements?

2012-11-17 Thread Chao YUE
Yes, both the "base" and "target" are ascending.  Thanks!

Chao

On Sat, Nov 17, 2012 at 2:40 PM, Benjamin Root  wrote:

>
>
> On Saturday, November 17, 2012, Chao YUE wrote:
>
>> Dear all,
>>
>> I need to linearly rescale the 2D numpy array "data" from one set of
>> intervals to another. The approach is: I have two other lists, "base" and
>> "target". For each ndarray element "data[i,j]", if
>> base[m] <= data[i,j] <= base[m+1], it is linearly mapped into the interval
>> (target[m], target[m+1]) using another function called "lintrans".
>>
>>
>> # The way I do it is to loop over each row and column of the 2D array, and
>> # finally over the intervals defined by the base list:
>>
>> for row in range(data.shape[0]):
>>     for col in range(data.shape[1]):
>>         for i in range(len(base)-1):
>>             if data[row,col] >= base[i] and data[row,col] <= base[i+1]:
>>                 data[row,col] = lintrans(data[row,col],
>>                                          (base[i], base[i+1]),
>>                                          (target[i], target[i+1]))
>>                 break  # each element must be transformed only once
>>
>>
>> Now the profiling result shows that most of the time is spent in this loop
>> over the array ("plot_array_transg"), and less in calling the linear
>> transformation function "lintrans":
>>
>>    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
>>     18047    0.110    0.000    0.110    0.000  mathex.py:132(lintrans)
>>         1   12.495   12.495   19.061   19.061  mathex.py:196(plot_array_transg)
>>
>>
>> So, is there any way I can speed up this loop? Thanks for any suggestions!
>>
>> best,
>>
>> Chao
>>
>>
> If the values in base are ascending, you can use searchsorted() to find
> out where values from data can be placed into base while maintaining order.
> I don't know if it is faster, but it would certainly be easier to read.
>
> Cheers!
> Ben Root
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


-- 
***
Chao YUE
Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL)
UMR 1572 CEA-CNRS-UVSQ
Batiment 712 - Pe 119
91191 GIF Sur YVETTE Cedex
Tel: (33) 01 69 08 29 02; Fax:01.69.08.77.16

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] the fast way to loop over ndarray elements?

2012-11-17 Thread Benjamin Root
On Saturday, November 17, 2012, Chao YUE wrote:

> Dear all,
>
> I need to linearly rescale the 2D numpy array "data" from one set of
> intervals to another. The approach is: I have two other lists, "base" and
> "target". For each ndarray element "data[i,j]", if
> base[m] <= data[i,j] <= base[m+1], it is linearly mapped into the interval
> (target[m], target[m+1]) using another function called "lintrans".
>
>
> # The way I do it is to loop over each row and column of the 2D array, and
> # finally over the intervals defined by the base list:
>
> for row in range(data.shape[0]):
>     for col in range(data.shape[1]):
>         for i in range(len(base)-1):
>             if data[row,col] >= base[i] and data[row,col] <= base[i+1]:
>                 data[row,col] = lintrans(data[row,col],
>                                          (base[i], base[i+1]),
>                                          (target[i], target[i+1]))
>                 break  # each element must be transformed only once
>
>
> Now the profiling result shows that most of the time is spent in this loop
> over the array ("plot_array_transg"), and less in calling the linear
> transformation function "lintrans":
>
>    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
>     18047    0.110    0.000    0.110    0.000  mathex.py:132(lintrans)
>         1   12.495   12.495   19.061   19.061  mathex.py:196(plot_array_transg)
>
>
> So, is there any way I can speed up this loop? Thanks for any suggestions!
>
> best,
>
> Chao
>
>
If the values in base are ascending, you can use searchsorted() to find out
where values from data can be placed into base while maintaining order.
I don't know if it is faster, but it would certainly be easier to read.
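
Roughly, the searchsorted idea could look like this (just a sketch; the base,
target and data values below are made up, and the per-interval linear map is
written out explicitly instead of calling lintrans):

import numpy as np

base   = np.array([0.0, 1.0, 2.0, 5.0])
target = np.array([0.0, 10.0, 20.0, 50.0])
data   = np.array([[0.2, 1.5],
                   [4.0, 2.0]])

# For each element, find the interval i with base[i] <= x <= base[i+1].
idx = np.clip(np.searchsorted(base, data, side='right') - 1, 0, len(base) - 2)

# Apply the per-interval linear map in one vectorized pass.
x0, x1 = base[idx], base[idx + 1]
y0, y1 = target[idx], target[idx + 1]
out = y0 + (data - x0) * (y1 - y0) / (x1 - x0)
print(out)

Since base and target are both ascending and the map is piecewise linear,
np.interp(data.ravel(), base, target).reshape(data.shape) gives essentially
the same result in a single call (it clamps values outside base rather than
leaving them untouched).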

Cheers!
Ben Root
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] the fast way to loop over ndarray elements?

2012-11-17 Thread Chao YUE
Dear all,

I need to linearly rescale the 2D numpy array "data" from one set of
intervals to another. The approach is: I have two other lists, "base" and
"target". For each ndarray element "data[i,j]", if
base[m] <= data[i,j] <= base[m+1], it is linearly mapped into the interval
(target[m], target[m+1]) using another function called "lintrans".


# The way I do it is to loop over each row and column of the 2D array, and
# finally over the intervals defined by the base list:

for row in range(data.shape[0]):
    for col in range(data.shape[1]):
        for i in range(len(base)-1):
            if data[row,col] >= base[i] and data[row,col] <= base[i+1]:
                data[row,col] = lintrans(data[row,col],
                                         (base[i], base[i+1]),
                                         (target[i], target[i+1]))
                break  # each element must be transformed only once
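
(The lintrans function itself is not shown here; from the call above it is
presumably something along these lines:)

def lintrans(x, src, dst):
    """Map x linearly from the interval src = (x0, x1) onto dst = (y0, y1)."""
    (x0, x1), (y0, y1) = src, dst
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)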


Now the profiling result shows that most of the time is spent in this loop
over the array ("plot_array_transg"), and less in calling the linear
transformation function "lintrans":

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
    18047    0.110    0.000    0.110    0.000  mathex.py:132(lintrans)
        1   12.495   12.495   19.061   19.061  mathex.py:196(plot_array_transg)


So, is there any way I can speed up this loop? Thanks for any suggestions!

best,

Chao

-- 
***
Chao YUE
Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL)
UMR 1572 CEA-CNRS-UVSQ
Batiment 712 - Pe 119
91191 GIF Sur YVETTE Cedex
Tel: (33) 01 69 08 29 02; Fax:01.69.08.77.16

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion