I can also confirm that, at least on NumPy 1.5.1:

integer array * (literal Python float scalar)   --- produces a double result.

So my memory was incorrect on that (unless it changed in an earlier release,
but I don't think so).
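A quick way to check this (a minimal sketch; the exact result dtype for this mix has varied across NumPy releases, so the comment reflects current behavior):

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int16)  # integer array
result = a * 2.5                         # literal Python float scalar
print(result.dtype)                      # float64, i.e. a C double
```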

-Travis



On Feb 13, 2012, at 9:40 PM, Mark Wiebe wrote:

> I believe the main lessons to draw from this are just how incredibly 
> important a complete test suite and staying on top of code reviews are. I'm 
> of the opinion that any explicit design choice of this nature should be 
> reflected in the test suite, so that if someone changes it years later, they 
> get immediate feedback that they're breaking something important. NumPy has 
> gradually increased its test suite coverage, and when I dealt with the type 
> promotion subsystem, I added fairly extensive tests:
> 
> https://github.com/numpy/numpy/blob/master/numpy/core/tests/test_numeric.py#L345
> 
> Another subsystem in a state similar to where the type promotion subsystem 
> was is the subscript operator and how regular/fancy indexing works. 
> What this means is that any attempt to improve it that doesn't coincide with 
> the original intent years ago can easily break things that were originally 
> intended without them being caught by a test. I believe this subsystem needs 
> improvement, and the transition to new/improved code will probably be 
> trickier to manage than for the dtype promotion case.
> 
> Let's try to learn from the type promotion case as best we can, and use it to 
> improve NumPy's process. I believe Charles and Ralph have been doing a great 
> job of enforcing high standards in new NumPy code, and managing the release 
> process in a way that has resulted in very few bugs and regressions in the 
> release. Most of these quality standards are still informal, however, and 
> it's probably a good idea to write them down in a canonical location. It will 
> be especially helpful for newcomers, who can treat the standards as a 
> checklist before submitting pull requests.
> 
> Thanks,
> -Mark
> 
> On Mon, Feb 13, 2012 at 7:11 PM, Travis Oliphant <tra...@continuum.io> wrote:
> The problem is that these sorts of things take a while to emerge.  The 
> original system was more consistent than I think you give it credit for.  What 
> you are seeing is that most people get NumPy from distributions and are 
> relying on us to keep things consistent. 
> 
> The scalar coercion rules were deterministic and based on the idea that a 
> scalar does not determine the output dtype unless it is of a different kind. 
> The new code unfortunately changes that. 
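The rule described above can be illustrated with Python scalars, whose behavior on this point has stayed stable across releases (a small sketch, not an exhaustive test):

```python
import numpy as np

a32 = np.ones(3, dtype=np.float32)
# Same kind (float scalar with float array): the scalar's precision
# does not upcast the array's dtype.
print((a32 + 2.0).dtype)   # float32

i32 = np.ones(3, dtype=np.int32)
# Different kind (float scalar with integer array): the scalar
# determines the result kind, and the result becomes float64.
print((i32 + 2.0).dtype)   # float64
```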
> 
> Another thing I noticed is that I thought that int16 <op> scalar float would 
> produce float32 originally.  This seems to have changed, but I need to check 
> on an older version of NumPy.
> 
> Changing the scalar coercion rules is an unfortunate substantial change in 
> semantics and should not have happened in the 1.X series.
> 
> I understand you did not get a lot of feedback and spent a lot of time on the 
> code, which we all appreciate.  I worked to stay true to the Numeric casting 
> rules incorporating the changes to prevent scalar upcasting due to the 
> absence of single precision Numeric literals in Python.
> 
> We will need to look in detail at what has changed.  I will write a test to 
> do that. 
> 
> Thanks,
> 
> Travis 
> 
> --
> Travis Oliphant
> (on a mobile)
> 512-826-7480
> 
> 
> On Feb 13, 2012, at 7:58 PM, Mark Wiebe <mwwi...@gmail.com> wrote:
> 
>> On Mon, Feb 13, 2012 at 5:00 PM, Travis Oliphant <tra...@continuum.io> wrote:
>> Hmmm.   This seems like a regression.  The scalar casting API was fairly 
>> intentional.
>> 
>> What is the reason for the change?
>> 
>> In order to make 1.6 ABI-compatible with 1.5, I basically had to rewrite 
>> this subsystem. There were virtually no tests in the test suite specifying 
>> what the expected behavior should be, and there were clear inconsistencies 
>> where for example "a+b" could result in a different type than "b+a". I 
>> recall there being some bugs in the tracker related to this as well, but I 
>> don't remember those details.
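The commutativity property mentioned here ("a+b" vs "b+a") can be spot-checked mechanically; on releases since 1.6 the two agree (a small sketch):

```python
import numpy as np

a = np.array([1], dtype=np.int8)
b = np.array([1], dtype=np.uint16)
# Type promotion should be symmetric: a+b and b+a must have the same dtype.
assert (a + b).dtype == (b + a).dtype
print((a + b).dtype)   # int32 (smallest signed type holding both ranges)
```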
>> 
>> This change felt like an obvious extension of an existing behavior for 
>> eliminating overflow, where the promotion changed unsigned -> signed based 
>> on the value of the scalar. This change introduced minimal upcasting only in 
>> a set of cases where an overflow was guaranteed to happen without that 
>> upcasting.
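The value-based mechanism described here can be sketched with np.min_scalar_type and np.promote_types (these helpers still exist today; the actual ufunc behavior for this case has since changed again, so this only illustrates the computation):

```python
import numpy as np

# A negative Python int cannot be represented in any unsigned type,
# so its minimal scalar type is signed.
print(np.min_scalar_type(-1))                              # int8
# Combining that with a uint8 array dtype forces a signed type wide
# enough to hold both ranges: int16.
print(np.promote_types(np.uint8, np.min_scalar_type(-1)))  # int16
```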
>> 
>> During the 1.6 beta period, I signaled that this subsystem had changed, as 
>> the bullet point starting "The ufunc uses a more consistent algorithm for 
>> loop selection.":
>> 
>> http://mail.scipy.org/pipermail/numpy-discussion/2011-March/055156.html
>> 
>> The behavior Matthew has observed is a direct result of how I designed the 
>> minimization function mentioned in that bullet point, and the algorithm for 
>> it is documented in the 'Notes' section of the result_type page:
>> 
>> http://docs.scipy.org/doc/numpy/reference/generated/numpy.result_type.html
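For reference, np.result_type exposes this promotion directly. A few dtype-level examples that have stayed stable (the value-based handling of scalars is a separate, version-dependent layer on top of this):

```python
import numpy as np

print(np.result_type(np.int8, np.int16))     # int16
print(np.result_type(np.uint8, np.int8))     # int16 (needs a sign bit plus 8 unsigned bits)
print(np.result_type(np.int32, np.float32))  # float64 (float32 cannot hold every int32)
```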
>> 
>> Hopefully that explains it well enough. I made the change intentionally and 
>> carefully, tested its impact on SciPy and other projects, and advocated for 
>> it during the release cycle.
>> 
>> Cheers,
>> Mark
>> 
>> --
>> Travis Oliphant
>> (on a mobile)
>> 512-826-7480
>> 
>> 
>> On Feb 13, 2012, at 6:25 PM, Matthew Brett <matthew.br...@gmail.com> wrote:
>> 
>> > Hi,
>> >
>> > I recently noticed a change in the upcasting rules in numpy 1.6.0 /
>> > 1.6.1 and I just wanted to check it was intentional.
>> >
>> > For all versions of numpy I've tested, we have:
>> >
>> >>>> import numpy as np
>> >>>> Adata = np.array([127], dtype=np.int8)
>> >>>> Bdata = np.int16(127)
>> >>>> (Adata + Bdata).dtype
>> > dtype('int8')
>> >
>> > That is - adding an integer scalar of a larger dtype does not result
>> > in upcasting of the output dtype, if the data in the scalar type fits
>> > in the smaller.
>> >
>> > For numpy < 1.6.0 we have this:
>> >
>> >>>> Bdata = np.int16(128)
>> >>>> (Adata + Bdata).dtype
>> > dtype('int8')
>> >
>> > That is - even if the data in the scalar does not fit in the dtype of
>> > the array to which it is being added, there is no upcasting.
>> >
>> > For numpy >= 1.6.0 we have this:
>> >
>> >>>> Bdata = np.int16(128)
>> >>>> (Adata + Bdata).dtype
>> > dtype('int16')
>> >
>> > There is upcasting...
>> >
>> > I can see why the numpy 1.6.0 way might be preferable but it is an API
>> > change I suppose.
>> >
>> > Best,
>> >
>> > Matthew
>> > _______________________________________________
>> > NumPy-Discussion mailing list
>> > NumPy-Discussion@scipy.org
>> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> 
