Re: checking whether a double is in the range of long

2023-04-23 Thread Ben Pfaff
On Sat, Apr 22, 2023 at 10:40 PM Paul Eggert  wrote:
> On 2023-04-22 16:34, Ben Pfaff wrote:
> > determine whether converting 'd' to 'long' would
> > yield a 'long' with the same value as 'd'
>
> LONG_MIN - 1.0 < d && d < LONG_MAX + 1.0 && d == (long) d
>
> On all practical platforms this should avoid undefined behavior and
> works correctly even if rounding occurs. You can replace "d == (long) d"
> with "d == floor (d)" if you prefer.

That's much simpler. Thank you.



Re: checking whether a double is in the range of long

2023-04-22 Thread Paul Eggert

On 2023-04-22 16:34, Ben Pfaff wrote:

determine whether converting 'd' to 'long' would
yield a 'long' with the same value as 'd'


LONG_MIN - 1.0 < d && d < LONG_MAX + 1.0 && d == (long) d

On all practical platforms this should avoid undefined behavior and 
works correctly even if rounding occurs. You can replace "d == (long) d" 
with "d == floor (d)" if you prefer.




Re: checking whether a double is in the range of long

2023-04-22 Thread Ben Pfaff
On Sat, Apr 22, 2023 at 5:52 PM Bruno Haible  wrote:
>
> Ben Pfaff wrote:
> > determine whether converting 'd' to 'long' would
> > yield a 'long' with the same value as 'd'.
>
> Maybe
>   d == (double) (long) d
> ?
>
> Just a wild guess. I haven't tested it.

I don't trust the undefined behavior in conversions that go outside the
valid range. The program I showed gave me different output with and
without GCC optimization turned on, for example.



Re: checking whether a double is in the range of long

2023-04-22 Thread Bruno Haible
Ben Pfaff wrote:
> determine whether converting 'd' to 'long' would
> yield a 'long' with the same value as 'd'.

Maybe
  d == (double) (long) d
?

Just a wild guess. I haven't tested it.

Bruno






Re: checking whether a double is in the range of long

2023-04-22 Thread Ben Pfaff
On Sat, Apr 22, 2023 at 4:34 PM Ben Pfaff  wrote:
> Before this afternoon, I thought that a check like this for a double 'd':
> d == floor (d) && d >= LONG_MIN && d <= LONG_MAX
> was sufficient to determine whether converting 'd' to 'long' would
> yield a 'long' with the same value as 'd'.
>
> Now I realize that this is wrong. In particular, take a look at the
> following program:
>
> #include <limits.h>
> #include <stdio.h>
>
> int main (void)
> {
>   long i = LONG_MAX;
>   double d = i;
>   long j = d;
>   printf ("%ld, %f, %ld\n", i, d, j);
>   return 0;
> }
>
> On my system, this prints:
>
> 9223372036854775807, 9223372036854775808.00, -9223372036854775808
>
> In other words, LONG_MAX gets rounded up to 2**63 when it's converted to
> 'double', which makes sense because 'double' only has 53 bits of precision,
> but this also means that 'd <= LONG_MAX' can hold even though 'd' doesn't
> fit in 'long', as one can see from it getting converted to a wrong answer
> (-2**63 instead of 2**63) when converted back to 'long'. And of course any
> answer is OK there, since this out-of-range conversion yields undefined
> behavior.
>
> Can anyone suggest a correct way to check whether a 'double' is in the
> range of 'long'?

I figured out a solution to the problem I wanted to solve. After thinking
about it for a while longer, I realized that what I really wanted was the
range of integers that 'double' represents without loss of precision, that
is, most commonly -2**53...2**53. And then I wanted the intersection of
this range with the range of 'long'. Typically that's going to be
-2**53...2**53 also, of course.

I came up with the following. Feedback welcomed!

#include <float.h>
#include <limits.h>
#include <stdio.h>

/* Maximum positive integer 'double' represented with no loss of precision
   (that is, with unit precision).

   The maximum negative integer with this property is -DBL_UNIT_MAX. */
#if DBL_MANT_DIG == 53  /* 64-bit double */
#define DBL_UNIT_MAX 9007199254740992.0
#elif DBL_MANT_DIG == 64    /* 80-bit double */
#define DBL_UNIT_MAX 18446744073709551616.0
#elif DBL_MANT_DIG == 113   /* 128-bit double */
#define DBL_UNIT_MAX 10384593717069655257060992658440192.0
#else
#error "Please define DBL_UNIT_MAX for your system (as 2**DBL_MANT_DIG)."
#endif

/* Intersection of ranges [LONG_MIN,LONG_MAX] and [-DBL_UNIT_MAX,DBL_UNIT_MAX],
   as a range of 'long's.  This range is the (largest contiguous) set of
   integer values that can be safely converted between 'long' and 'double'
   without loss of precision. */
#if DBL_MANT_DIG < LONG_WIDTH - 1
#define DBL_UNIT_LONG_MIN ((long) -DBL_UNIT_MAX)
#define DBL_UNIT_LONG_MAX ((long) DBL_UNIT_MAX)
#else
#define DBL_UNIT_LONG_MIN LONG_MIN
#define DBL_UNIT_LONG_MAX LONG_MAX
#endif

int main (void)
{
  long i = DBL_UNIT_LONG_MAX;
  double d = i;
  long j = d;
  printf ("%ld, %f, %ld\n", i, d, j);

  printf ("%f\n", DBL_UNIT_MAX);
  printf ("%d, %d\n", DBL_MANT_DIG, LONG_WIDTH);
  printf ("%ld, %ld\n", DBL_UNIT_LONG_MIN, DBL_UNIT_LONG_MAX);
  printf ("%ld, %ld\n", LONG_MIN, LONG_MAX);

  return 0;
}
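
One way the range macros above could be used (a hypothetical helper added
for illustration, not part of the original program; it assumes the macros
above are in scope and that <math.h> and <stdbool.h> are also included):

/* True if 'd' is an integer in the intersection of the 'double'
   unit-precision range and the range of 'long', i.e. 'd' converts to
   'long' and back without changing value.  Both bound macros convert to
   double exactly, so the comparisons below involve no rounding. */
static bool
double_in_long_unit_range (double d)
{
  return d == floor (d)
         && d >= (double) DBL_UNIT_LONG_MIN
         && d <= (double) DBL_UNIT_LONG_MAX;
}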



checking whether a double is in the range of long

2023-04-22 Thread Ben Pfaff
Before this afternoon, I thought that a check like this for a double 'd':
d == floor (d) && d >= LONG_MIN && d <= LONG_MAX
was sufficient to determine whether converting 'd' to 'long' would
yield a 'long' with the same value as 'd'.

Now I realize that this is wrong. In particular, take a look at the
following program:

#include <limits.h>
#include <stdio.h>

int main (void)
{
  long i = LONG_MAX;
  double d = i;
  long j = d;
  printf ("%ld, %f, %ld\n", i, d, j);
  return 0;
}

On my system, this prints:

9223372036854775807, 9223372036854775808.00, -9223372036854775808

In other words, LONG_MAX gets rounded up to 2**63 when it's converted to
'double', which makes sense because 'double' only has 53 bits of precision,
but this also means that 'd <= LONG_MAX' can hold even though 'd' doesn't fit
in 'long', as one can see from it getting converted to a wrong answer (-2**63
instead of 2**63) when converted back to 'long'. And of course any answer is
OK there, since this out-of-range conversion yields undefined behavior.

Can anyone suggest a correct way to check whether a 'double' is in the
range of 'long'?

One workaround would be to check for the range of 'int' (or int32_t, I guess
really), since there's not going to be any loss of precision converting
INT(32)_MAX to 'double'. But I'd rather support a wider range in the code
I'm working on.
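
That narrower workaround might look something like the following sketch
(added for illustration, assuming <stdint.h>, <math.h>, and <stdbool.h>);
every value in [INT32_MIN, INT32_MAX] is exactly representable in 'double',
so the bounds themselves involve no rounding, only a smaller range:

#include <math.h>
#include <stdbool.h>
#include <stdint.h>

/* True if 'd' is an integer within the range of int32_t. */
static bool
double_fits_int32 (double d)
{
  return d == floor (d) && d >= INT32_MIN && d <= INT32_MAX;
}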

Thanks,

Ben.