Re: [Numpy-discussion] fpower ufunc

2016-10-21 Thread Charles R Harris
On Fri, Oct 21, 2016 at 1:45 AM, Sebastian Berg wrote:

> On Thu, 2016-10-20 at 21:38 -0600, Charles R Harris wrote:
> > On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith wrote:
> > > On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris wrote:
> > > > Hi All,
> > > >
> > > > I've put up a preliminary PR for the proposed fpower ufunc. Apart
> > > > from adding more tests and documentation, I'd like to settle a few
> > > > other things. The first is the name; two names have been proposed
> > > > and we should settle on one:
> > > >
> > > > fpower (short)
> > > > float_power (obvious)
> > >
> > > +0.6 for float_power
> > >
> > > > The second thing is the minimum precision. In the preliminary
> > > > version I have used float32, but perhaps it makes more sense for
> > > > the intended use to make the minimum precision float64 instead.
> > >
> > > Can you elaborate on what you're thinking? I guess this is because
> > > float32 has limited range compared to float64, so is more likely to
> > > see overflow? float32 still goes up to 10**38, which is >
> > > int64_max**2, FWIW. Or maybe there's some subtlety with the
> > > int->float casting here?
> >
> > logical, (u)int8, (u)int16, and float16 get converted to float32,
> > which is probably sufficient to avoid overflow and such. My thought
> > was that float32 is something of a "specialized" type these days,
> > while float64 is the standard floating point precision for everyday
> > computation.
>
> Isn't that the behaviour we already have (e.g. for functions such as
> mean)?
>
> ints -> float64
> inexacts do not get upcast?

Hmm... The best way to do that would be to put the function in
`fromnumeric` and do it in Python rather than as a ufunc, then for
integer types call power with `dtype=float64`. I like that idea better
than the current implementation; my mind was stuck in the ufunc
universe.
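
A minimal sketch of that idea (a hypothetical wrapper, not the code in
the PR): bool/integer inputs are computed in float64, while inexact
inputs keep their precision, mirroring mean:

    import numpy as np

    def float_power(x1, x2):
        # bool/int inputs: force the float64 loop of np.power
        if not np.issubdtype(np.result_type(x1, x2), np.inexact):
            return np.power(x1, x2, dtype=np.float64)
        # float/complex inputs: keep their native precision
        return np.power(x1, x2)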

Chuck


Re: [Numpy-discussion] fpower ufunc

2016-10-21 Thread Sebastian Berg
On Fri, 2016-10-21 at 09:45 +0200, Sebastian Berg wrote:
> On Thu, 2016-10-20 at 21:38 -0600, Charles R Harris wrote:
> > [...]
> > logical, (u)int8, (u)int16, and float16 get converted to float32,
> > which is probably sufficient to avoid overflow and such. My thought
> > was that float32 is something of a "specialized" type these days,
> > while float64 is the standard floating point precision for everyday
> > computation.
>
> Isn't that the behaviour we already have (e.g. for functions such as
> mean)?
>
> ints -> float64
> inexacts do not get upcast?

Ah, on the other hand, some/most of the float-only ufuncs probably
already do it the way you made it work?


> - Sebastian
>
> > Chuck



Re: [Numpy-discussion] fpower ufunc

2016-10-21 Thread Sebastian Berg
On Thu, 2016-10-20 at 21:38 -0600, Charles R Harris wrote:
> On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith wrote:
> > On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris wrote:
> > > Hi All,
> > >
> > > I've put up a preliminary PR for the proposed fpower ufunc. Apart
> > > from adding more tests and documentation, I'd like to settle a few
> > > other things. The first is the name; two names have been proposed
> > > and we should settle on one:
> > >
> > > fpower (short)
> > > float_power (obvious)
> >
> > +0.6 for float_power
> >
> > > The second thing is the minimum precision. In the preliminary
> > > version I have used float32, but perhaps it makes more sense for
> > > the intended use to make the minimum precision float64 instead.
> >
> > Can you elaborate on what you're thinking? I guess this is because
> > float32 has limited range compared to float64, so is more likely to
> > see overflow? float32 still goes up to 10**38, which is >
> > int64_max**2, FWIW. Or maybe there's some subtlety with the
> > int->float casting here?
>
> logical, (u)int8, (u)int16, and float16 get converted to float32,
> which is probably sufficient to avoid overflow and such. My thought
> was that float32 is something of a "specialized" type these days,
> while float64 is the standard floating point precision for everyday
> computation.


Isn't that the behaviour we already have (e.g. for functions such as
mean)?

ints -> float64
inexacts do not get upcast?
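
For reference, that is exactly what mean does today:

    >>> import numpy as np
    >>> np.arange(5, dtype=np.int32).mean().dtype
    dtype('float64')
    >>> np.arange(5, dtype=np.float32).mean().dtype
    dtype('float32')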

- Sebastian


> Chuck 



Re: [Numpy-discussion] fpower ufunc

2016-10-20 Thread Charles R Harris
On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith wrote:

> On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris wrote:
> > Hi All,
> >
> > I've put up a preliminary PR for the proposed fpower ufunc. Apart from
> > adding more tests and documentation, I'd like to settle a few other
> > things. The first is the name; two names have been proposed and we
> > should settle on one:
> >
> > fpower (short)
> > float_power (obvious)
>
> +0.6 for float_power
>
> > The second thing is the minimum precision. In the preliminary version
> > I have used float32, but perhaps it makes more sense for the intended
> > use to make the minimum precision float64 instead.
>
> Can you elaborate on what you're thinking? I guess this is because
> float32 has limited range compared to float64, so is more likely to
> see overflow? float32 still goes up to 10**38, which is > int64_max**2,
> FWIW. Or maybe there's some subtlety with the int->float casting here?

logical, (u)int8, (u)int16, and float16 get converted to float32, which is
probably sufficient to avoid overflow and such. My thought was that float32
is something of a "specialized" type these days, while float64 is the
standard floating point precision for everyday computation.
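
That promotion amounts to putting a float32 floor under each input
type; a quick way to see the resulting dtypes (a model of the
preliminary PR's behaviour, not its actual code):

    >>> import numpy as np
    >>> for dt in (np.bool_, np.uint8, np.int16, np.float16, np.int64):
    ...     print(np.dtype(dt), '->', np.promote_types(dt, np.float32))
    bool -> float32
    uint8 -> float32
    int16 -> float32
    float16 -> float32
    int64 -> float64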

Chuck


Re: [Numpy-discussion] fpower ufunc

2016-10-20 Thread Nathaniel Smith
On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris wrote:
> Hi All,
>
> I've put up a preliminary PR for the proposed fpower ufunc. Apart from
> adding more tests and documentation, I'd like to settle a few other things.
> The first is the name; two names have been proposed and we should
> settle on one:
>
> fpower (short)
> float_power (obvious)

+0.6 for float_power

> The second thing is the minimum precision. In the preliminary version I have
> used float32, but perhaps it makes more sense for the intended use to make
> the minimum precision float64 instead.

Can you elaborate on what you're thinking? I guess this is because
float32 has limited range compared to float64, so is more likely to
see overflow? float32 still goes up to 10**38, which is > int64_max**2,
FWIW. Or maybe there's some subtlety with the int->float casting here?
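
A quick check of the ranges (float32's maximum does exceed
int64_max**2):

    >>> import numpy as np
    >>> float(np.iinfo(np.int64).max) ** 2
    8.507059173023462e+37
    >>> float(np.finfo(np.float32).max)
    3.4028234663852886e+38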

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Numpy-discussion] fpower ufunc

2016-10-20 Thread Charles R Harris
Hi All,

I've put up a preliminary PR for the proposed fpower ufunc. Apart from
adding more tests and documentation, I'd like to settle a few other
things. The first is the name; two names have been proposed and we
should settle on one:

   - fpower (short)
   - float_power (obvious)

The second thing is the minimum precision. In the preliminary version I
have used float32, but perhaps it makes more sense for the intended use to
make the minimum precision float64 instead.
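
To illustrate the motivation (the first line is current np.power
behaviour; the second shows the float result the proposed ufunc would
return):

    >>> import numpy as np
    >>> np.power(np.int64(10), 20)    # wraps: 10**20 overflows int64
    7766279631452241920
    >>> np.power(np.float64(10), 20)
    1e+20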

Thoughts?

Chuck