On Thu, Jul 17, 2014 at 4:21 PM, <josef.p...@gmail.com> wrote:

>
>
>
> On Thu, Jul 17, 2014 at 4:07 PM, <josef.p...@gmail.com> wrote:
>
>>
>>
>>
>> On Wed, Jul 16, 2014 at 9:52 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>>> On 16 Jul 2014 10:26, "Tony Yu" <tsy...@gmail.com> wrote:
>>> >
>>> > Is there any reason why the defaults for `allclose` and
>>> `assert_allclose` differ? This makes debugging a broken test much more
>>> difficult. More importantly, using an absolute tolerance of 0 causes
>>> failures for some common cases. For example, if two values are very close
>>> to zero, a test will fail:
>>>
>>
And one more comment: I debug "broken tests" pretty often. My favorite
checks in pdb are

np.max(np.abs(x - y))

and

np.max(np.abs(x / y - 1))

to see how much I would have to adjust atol and rtol in the assert_allclose
calls in the tests to make them pass, and to decide whether the difference is
an acceptable numerical discrepancy or a bug.
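
For concreteness, a minimal sketch of that kind of check (the arrays x and y
here are made-up stand-ins for the actual and expected results of a failing
test):

import numpy as np

# Made-up stand-ins for the actual and expected arrays in a failing test.
x = np.array([1.0000001, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0000002])

# Largest absolute difference -- roughly the atol the test would need.
print(np.max(np.abs(x - y)))      # on the order of 1e-7

# Largest relative difference -- roughly the rtol the test would need.
print(np.max(np.abs(x / y - 1)))  # on the order of 1e-7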

allclose only returns True or False, so it doesn't tell me any of this, and I
almost never use it.

Josef



>>> >
>>> >     np.testing.assert_allclose(0, 1e-14)
>>> >
>>> > Git blame suggests the change was made in the following commit, but I
>>> guess that change only reverted to the original behavior.
>>> >
>>> >
>>> https://github.com/numpy/numpy/commit/f43223479f917e404e724e6a3df27aa701e6d6bf
>>> >
>>> > It seems like the defaults for `allclose` and `assert_allclose`
>>> should match, and an absolute tolerance of 0 is probably not ideal. I guess
>>> this is a pretty big behavioral change, but the current default for
>>> `assert_allclose` doesn't seem ideal.
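
For concreteness, the mismatch looks like this with the documented defaults
(rtol=1e-05, atol=1e-08 for allclose; rtol=1e-07, atol=0 for assert_allclose):

import numpy as np

# With atol=1e-08, two values this close to zero compare equal:
np.allclose(0, 1e-14)                 # True

# With atol=0, the check becomes 1e-14 <= rtol * 1e-14, so it raises:
np.testing.assert_allclose(0, 1e-14)  # AssertionError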
>>>
>>> What you say makes sense to me, and loosening the default tolerances
>>> won't break any existing tests. (And I'm not too worried about people who
>>> were counting on getting 1e-7 instead of 1e-5 or whatever... if it matters
>>> that much to you exactly what tolerance you test, you should be setting the
>>> tolerance explicitly!) I vote that unless someone comes up with some
>>> terrible objection in the next few days then you should submit a PR :-)
>>>
>>
>> If you mean by this adding atol=1e-8 as the default, then I'm against it.
>>
>> At least it will change the meaning of many of our tests in statsmodels.
>>
>> I'm using rtol to check that values on the order of 1e-15 or 1e-30 are
>> correct, and those checks would be completely swamped if the default atol
>> were changed away from 0.
>> Adding an explicit atol=0 to every assert_allclose call that currently
>> uses only rtol would be a lot of work.
>> I think I almost never rely on the default rtol, but I often leave atol at
>> its default of 0.
>>
>> Where we do have zeros, I don't think it's too much work to decide whether
>> the tolerance should be atol=1e-20 or 1e-8.
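
To make that concrete, a small sketch with made-up numbers in the range of
our tail tests: with atol left at 0 the relative check catches a value that
is wrong by a factor of two, while a default atol=1e-8 would let it pass
silently.

import numpy as np

computed = 1e-30   # made-up tail probability from the code under test
expected = 2e-30   # made-up reference value -- the computed one is off by 2x

# Current default (atol=0): the relative check correctly fails.
#   np.testing.assert_allclose(computed, expected, rtol=1e-10)  # AssertionError

# Proposed default (atol=1e-8): the absolute tolerance swamps the comparison
# and the wrong value passes.
np.testing.assert_allclose(computed, expected, rtol=1e-10, atol=1e-8)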
>>
>
> Just to explain: p-values and the sf of the distributions are usually
> accurate down to around 1e-30 or 1e-50. And when we test the tails of the
> distributions, we rely on the relative error being small while the absolute
> error is "tiny".
>
> We would need to grep to see how many such cases there actually are in
> scipy and statsmodels before changing it, because for some use cases we
> only get atol around 1e-5 or 1e-7 (e.g. nonlinear optimization).
> Linear algebra is usually atol or rtol of 1e-11 to 1e-14 in my cases, AFAIR.
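
As a sketch of the tail-testing pattern (norm.sf here just stands in for
whatever tail probability a test checks, and the identity sf(x) == cdf(-x)
plays the role of an independently computed reference value):

import numpy as np
from scipy import stats

# A far-tail probability, on the order of 1e-23; the symmetric cdf value
# stands in for an independently computed reference.
computed = stats.norm.sf(10)
reference = stats.norm.cdf(-10)

# rtol does the real work; atol stays at its default of 0, so a value that is
# tiny but wrong by orders of magnitude would still fail the check.
np.testing.assert_allclose(computed, reference, rtol=1e-12)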
>
> Josef
>
>
>>
>> Josef
>>
>>
>>
>>> -n
>>>
>>
>
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
