Le mar. 18 sept. 2018 à 11:53, Guillaume Larcheveque <
[email protected]> a écrit :

> Maybe #to:by: should convert its parameters in Fraction to avoid Floats
> problems (not sure, just an idea)
>
>
Hi Guillaume,
Yes possibly...
But if the author explicitly requested a loop on Float, why not honour the
request, as bad as it can be?
If the answer is: well, it could be legitimate in some cases, but we just
don't know how to recognize these legitimate cases, then it's better not to
alter the request at all, and let the author deal with the responsibility
(and all the possible consequences).

You could argue for an intermediate solution:
maybe we could perform the increment arithmetic with Fraction but convert
back asFloat just before evaluating the block...
But in that case, (0 to: 1 by: 0.1 asFraction) collect: #asFloat would
probably be more surprising than (0 to: 1 by: 0.1) asArray w.r.t. naive
expectations:

(0 to: 1 by: 0.1 asFraction) collect: #asFloat
=> #(0.0 0.1 0.2 0.30000000000000004 0.4 0.5 0.6000000000000001
0.7000000000000001 0.8 0.9)

(0 to: 1 by: 0.1) asArray
=> #(0.0 0.1 0.2 0.30000000000000004 0.4 0.5 0.6000000000000001
0.7000000000000001 0.8 0.9 1.0)
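As a minimal sketch, the two behaviours can be reproduced with Python (used
here only as a stand-in operating on the same IEEE 754 doubles, and assuming
interval elements are computed as start + (index * step), which is what the
arrays above suggest):

```python
from fractions import Fraction

# Exact-rational stepping, converting to Float only when "evaluating the
# block". Fraction(0.1) is the exact binary value of the double 0.1,
# like `0.1 asFraction` in Pharo.
step = Fraction(0.1)
frac_version = []
k = 0
while k * step <= 1:                  # exact: 10 * step > 1, so 1.0 is excluded
    frac_version.append(float(k * step))
    k += 1

# Plain Float interval, element i computed as start + (i * step):
float_version = [i * 0.1 for i in range(11)]

print(frac_version)   # stops at 0.9
print(float_version)  # reaches 1.0
```

Both lists show the same rounding artifacts (0.30000000000000004 etc.); the
only difference is whether the exact comparison against the stop value lets
1.0 in.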

We could try to work around this by converting start/step
asMinimalDecimalFraction, just as a wild guess at the author's intentions...
But then again, if we are not sure of the intentions, it's better not to
alter the request at all.
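A sketch of that wild guess (Python again as a stand-in;
`limit_denominator` is used merely as a convenient approximation of what
asMinimalDecimalFraction does, not the same algorithm):

```python
from fractions import Fraction

# Fraction(0.1) is the exact binary value of the double 0.1;
# limit_denominator recovers the simplest nearby rational, here 1/10,
# i.e. a guess at the decimal the author probably meant.
step = Fraction(0.1).limit_denominator(1000)   # Fraction(1, 10)

# Stepping exactly by 1/10 and rounding each element once gives the
# "expected" doubles, including 1.0.
cleaned = [float(k * step) for k in range(11)]
print(cleaned)
```

Every element is now the nearest double to k/10, so the surprises
disappear, but only because we second-guessed the request.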

We have Renraku in Pharo, so we could use it to flag the trivial cases
(usage of a literal Float as start/step).
And with the instruction reification capability of the Pharo compiler, we
could even instrument some code and implement runtime checks for the cases
where static analysis cannot infer the types, like those submitted by
Guillermo, if it really matters.
It would be very much like clang's undefined-behavior runtime checks, for
example, and a nice subject for an advanced student.

2018-09-18 11:25 GMT+02:00 Esteban Lorenzano <[email protected]>:
>
>>
>>
>> On 18 Sep 2018, at 11:13, Guillermo Polito <[email protected]>
>> wrote:
>>
>>
>>
>> On Tue, Sep 18, 2018 at 11:06 AM Julien <[email protected]>
>> wrote:
>>
>>> Hello,
>>>
>>> I realised that it is possible to create an interval of floats.
>>>
>>> I think this is bad: since interval elements are computed by
>>> successively adding a number, it can result in precision errors.
>>>
>>> (0.0 to: 1.0 by: 0.1) asArray >>> #(0.0 0.1 0.2 0.30000000000000004 0.4
>>> 0.5 0.6000000000000001 0.7000000000000001 0.8 0.9 1.0)
>>>
>>> The correct (precise) way to do it would be to use ScaledDecimal:
>>>
>>> (0.0s1 to: 1.0s1 by: 0.1s1) asArray >>> #(0.0s1 0.1s1 0.2s1 0.3s1 0.4s1
>>> 0.5s1 0.6s1 0.7s1 0.8s1 0.9s1 1.0s1)
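The exact-decimal behaviour of the ScaledDecimal loop above can be
sketched outside Pharo as well; a minimal Python analogue, using the
decimal module as a stand-in for ScaledDecimal:

```python
from decimal import Decimal

# Decimal arithmetic is exact for these values, like Pharo's
# (0.0s1 to: 1.0s1 by: 0.1s1): repeated addition of 0.1 never drifts.
step = Decimal("0.1")
values, x = [], Decimal("0.0")
while x <= 1:
    values.append(x)
    x += step
print(values)  # Decimal('0.0') .. Decimal('1.0'), no rounding error
```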
>>>
>>> I opened an issue about it:
>>> https://pharo.fogbugz.com/f/cases/22467/Float-should-not-implement-to-to-by-etc
>>>
>>> And I’d like to discuss this with you.
>>>
>>> What do you think?
>>>
>>
>> Well, I think it's a matter of balance :)
>>
>> #to:by: is defined in Number. So we could, for example, cancel it in
>> Float.
>> However, people would still be able to do
>>
>> 1 to: 1.0 by: 0.1
>>
>> Which would still show problems.
>>
>>
>> Nevertheless, I have seen this a lot of times.
>>
>> 0.0 to: 1.0 by: 0.1
>>
>> Is a common use case.
>>
>>
>> And moreover, we could try to do
>>
>> 1 to: 7 by: (Margin fromNumber: 1)
>>
>> And even worse
>>
>> 1 to: Object new by: (Margin fromNumber: 1)
>>
>> I think adding type-validations all over the place is not a good
>> solution, and is kind of opposite to our philosophy...
>>
>> So we should
>>  - document the good usages
>>  - document the bad ones
>>  - and live with the fact that we have a relaxed type system that will
>> fail at runtime :)
>>
>>
>> yup.
>> But not cancel.
>>
>> Esteban
>>
>>
>> Guille
>>
>>
>>
>
>
> --
> *Guillaume Larcheveque*
>
>
