My latest thoughts; please advise if I have misunderstood anything.

On Jan 24, 2025, at 3:11 AM, Jan Kowalski <jan7...@gmail.com> wrote:

I'd say that, if possible, we should reduce the arithmetic artifacts rather than 
introduce them through type conversions that aren't really needed and aren't 
visible at first sight.

… Do you think introducing such a change would be beneficial to simplify the 
code, or would it rather introduce a minor precision improvement while we still 
don’t have 100% decimal precision?

Okay, so what we’re looking for is a way to convert floats to BigDecimals in 
such a way that `0.1f` comes out the same as `new BigDecimal("0.1")`.
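For reference, here's a small sketch of the behaviors in play (the class name is just mine for illustration; the comments show what I'd expect a current JDK to print):

```java
import java.math.BigDecimal;

public class CurrentBehavior {
    public static void main(String[] args) {
        // The float argument is widened to double, so the result reflects
        // Double.toString of the widened value:
        System.out.println(BigDecimal.valueOf(0.1f));   // 0.10000000149011612

        // The exact binary value of the float, via the double constructor:
        System.out.println(new BigDecimal(0.1f));       // 0.100000001490116119384765625

        // What we'd like 0.1f to come out the same as:
        System.out.println(new BigDecimal("0.1"));      // 0.1
    }
}
```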

This thread is characterizing that outcome as “reducing artifacts” and 
“improving precision”, which seems fair on the surface, but I believe it’s more 
of an illusion. I think the reason this looks like an obvious “improvement” to 
us is only because we happen to be using literals in our examples. For a float 
value that isn’t a literal, like our friend `0.1f + 0.2f`, the illusion is 
shattered. I think this “exposes” that the scale chosen by 
`BigDecimal.valueOf(double)` is an “artifact” of that value, not really 
“information carried by” the value. (If we did treat it as information carried 
by the value, we would have to regard a float-to-double cast as losing 
information, which feels like nonsense.)
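For concreteness, here's what I'd expect the two conversion routes to give for that non-literal value (again just a sketch, class name mine):

```java
import java.math.BigDecimal;

public class ScaleArtifact {
    public static void main(String[] args) {
        float sum = 0.1f + 0.2f;   // the float sum happens to round to exactly 0.3f

        // Widened to double, the scale comes from Double.toString of that double:
        System.out.println(BigDecimal.valueOf(sum));              // 0.30000001192092896

        // Converted via the float's own shortest decimal string, the scale
        // happens to be 1, not because the arithmetic was exact but because
        // of how the resulting float prints:
        System.out.println(new BigDecimal(Float.toString(sum)));  // 0.3
    }
}
```

Either way, the scale is a property of how the value happens to print, not of anything the float itself carries.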

I think the fact that a new overload would affect current behavior means we 
need to rule that option out: existing calls that pass a float currently widen 
to double and resolve to `valueOf(double)`, and a `valueOf(float)` overload 
would silently change what those call sites return. I don’t think this is a 
case that can justify that cost. So it would at best have to be a new, 
separately-named method like `valueOfFloat` (sketched below). Even then, this 
issue will still bite users of `valueOf`, and we would still want that method’s 
documentation to advise users on what to do instead.
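To make it concrete what such a separately-named method would boil down to, here's a minimal sketch (the wrapper class is purely illustrative, and `valueOfFloat` is just the name floated above, not an existing API):

```java
import java.math.BigDecimal;

public final class BigDecimalFloats {
    // Hypothetical: what a valueOfFloat-style method would amount to.
    public static BigDecimal valueOfFloat(float val) {
        // Float.toString(val) is the shortest decimal string that round-trips
        // back to val, so the resulting BigDecimal takes its scale from it.
        return new BigDecimal(Float.toString(val));
    }

    public static void main(String[] args) {
        System.out.println(valueOfFloat(0.1f));         // 0.1
        System.out.println(BigDecimal.valueOf(0.1f));   // 0.10000000149011612
    }
}
```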

My feeling is that all we need that documentation to do is advise the user to 
call `new BigDecimal(Float.toString(val))`. (There is no `valueOf(String)` 
overload; the string constructor is the way in.) This is very transparent about 
what’s really happening: the user is intentionally choosing the 
representation/scale.

I personally don’t see this as a case where a fast-enough benchmark result 
would justify adding a new method.

Your thoughts?
