On Thu, 2016-10-20 at 21:38 -0600, Charles R Harris wrote:
> On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith <n...@pobox.com>
> wrote:
> > On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris
> > <charlesr.har...@gmail.com> wrote:
> > > Hi All,
> > >
> > > I've put up a preliminary PR for the proposed fpower ufunc. Apart
> > > from adding more tests and documentation, I'd like to settle a few
> > > other things. The first is the name; two names have been proposed,
> > > and we should settle on one:
> > >
> > > fpower (short)
> > > float_power (obvious)
> >
> > +0.6 for float_power
> >
> > > The second thing is the minimum precision. In the preliminary
> > > version I have used float32, but perhaps it makes more sense for
> > > the intended use to make the minimum precision float64 instead.
> >
> > Can you elaborate on what you're thinking? I guess this is because
> > float32 has limited range compared to float64, so is more likely to
> > see overflow? float32 still goes up to 10**38, which is >
> > int64_max**2, FWIW. Or maybe there's some subtlety with the
> > int->float casting here?
>
> logical, (u)int8, (u)int16, and float16 get converted to float32,
> which is probably sufficient to avoid overflow and such. My thought
> was that float32 is something of a "specialized" type these days,
> while float64 is the standard floating-point precision for everyday
> computation.
>
> Chuck
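The promotion rule Chuck describes can be sketched with NumPy's own
type-promotion helpers. This is only a rough emulation: the helper name
promoted_dtype is made up here, and the float32 floor mirrors the
preliminary PR, with float64 being the alternative under discussion.

import numpy as np

def promoted_dtype(x, y, minimum=np.float32):
    # Hypothetical helper mirroring the rule described above: find the
    # common input type, then promote it to at least `minimum`.
    common = np.result_type(x, y)
    return np.promote_types(common, minimum)

# logical, (u)int8, (u)int16 (and float16) all land on float32:
print(promoted_dtype(np.dtype(np.int16), np.dtype(np.int8)))    # float32
# wider integer types promote past float32 to float64:
print(promoted_dtype(np.dtype(np.int64), np.dtype(np.int64)))   # float64
# inexact inputs are never downcast below their own precision:
print(promoted_dtype(np.dtype(np.float64), np.dtype(np.int8)))  # float64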
Isn't that the behaviour we already have elsewhere (e.g. in functions
such as mean)? ints -> float64; inexacts do not get upcast.

- Sebastian
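Both halves of Sebastian's observation, and the silent overflow that
motivates the new ufunc in the first place, are easy to check:

import numpy as np

# Precedent: np.mean upcasts every integer input to float64, while
# inexact inputs keep their own precision.
print(np.mean(np.arange(5, dtype=np.int16)).dtype)    # float64
print(np.mean(np.arange(5, dtype=np.float32)).dtype)  # float32

# Motivation: integer np.power wraps around silently once the result
# exceeds the integer range (int64_max is about 9.2e18).
a = np.array([10], dtype=np.int64)
print(np.power(a, 19))                     # wrapped, meaningless value
print(np.power(a.astype(np.float64), 19))  # [1.e+19]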