On 9/28/20 12:30 AM, Tomas Hajny via fpc-devel wrote:
On 2020-09-27 18:27, Nikolay Nikolov via fpc-devel wrote:
On 9/27/20 7:21 PM, Florian Klämpfl via fpc-devel wrote:
On 27.09.20 at 18:03, Martin Frb via fpc-devel wrote:
On 27/09/2020 09:34, Sven Barth via fpc-devel wrote:
Ben Grasset via fpc-devel <fpc-devel@lists.freepascal.org> wrote on
Sun., 27 Sep 2020, 07:50:
That last quote is absolute BS, to be very frank. There is no
reason whatsoever not to use a natively-64-bit copy of FPC if
running a natively-64-bit copy of Windows, and there hasn't been
for well over half a decade at this point.
Yes, there is a reason: you cannot build an i8086 or i386 cross
compiler with the Win64 compiler (or with any non-x86 compiler, to be
fair) due to missing Extended support. Thus the majority of the FPC
Core team considers the Win64 compiler inferior and also unnecessary,
because the 32-bit one works just as well on that platform.
Just my 2 cents.
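(For what it's worth, the practical meaning of "missing Extended
support" can be checked with a tiny test program; the sizes in the
comment reflect the usual FPC mapping and are my assumption, not
something stated above:)

program CheckExtended;
{ On i386 hosts, Extended is the real 80-bit x87 type (SizeOf = 10).
  On x86_64-win64 and other non-x87 hosts, FPC maps Extended onto
  Double (SizeOf = 8), so such a hosting compiler has no 80-bit type
  to fold constants in. }
begin
  Writeln('SizeOf(Extended) = ', SizeOf(Extended));
  Writeln('SizeOf(Double)   = ', SizeOf(Double));
end.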
Well, on the one hand, native 64 bit is only really important if it
can do something that 32 bit cannot do (faster, bigger sources, ...).
On the other hand, not everyone needs a win64 to win32 cross
compiler. And if they do, a native 32-bit compiler can be renamed and
will happily serve as such a cross compiler. (That is not to say such
a workaround must be included; it may not be wanted, especially since
it might cause repeated extra work.)
So the questions here are, imho, about the work it takes to amend the
release-build process (i.e. update the scripts), and then about the
amount of extra time needed for each release (build and testing).
The thing is: we would distribute a compiler (the x86_64-win64 one)
which claims to be able to compile to, e.g., x86_64-linux, but it
would generate programs which might behave differently than natively
compiled ones, because float constants are handled differently internally.
And in this particular case, "different" means "less accurate", due to
rounding errors caused by compile-time conversion of 80-bit extended
float constants to 64-bit double-precision constants. And "less
accurate" is bad. :)
Sorry for a silly question, but is it really the case that higher
precision is good (or at least that it doesn't matter)? I assume that
performing compile-time calculations in higher precision than the
calculations performed at run time may still result in differences,
and, in spite of the fact that the calculations are more precise,
those differences may still confuse our users (if not something
worse), especially since it may not always be clear which part will be
computed at compile time and which part at run time. Is my
understanding correct? Or is there some solution that allows achieving
a specific precision with a higher-precision library?
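A hypothetical sketch of exactly that kind of mismatch (the numbers
are only an illustration, assuming the host folds constants in its
widest native real type):

program CompileVsRuntime;
{ 0.1 and 0.2 are not exactly representable in binary. If the compiler
  folds 0.1 + 0.2 in 80-bit Extended and the result is then stored as
  a Double, you get the Double nearest to 0.3; the run-time Double sum
  is the slightly larger neighbour (the well-known
  0.30000000000000004). The folded constant is the more accurate one,
  yet it differs from what the same expression yields at run time. }
const
  CSum = 0.1 + 0.2;               // folded by the compiler
var
  a, b, runtimeSum, foldedSum: Double;
begin
  a := 0.1;
  b := 0.2;
  runtimeSum := a + b;            // evaluated at run time in Double
  foldedSum  := CSum;             // compile-time value, stored as Double
  Writeln(runtimeSum = foldedSum);  // FALSE with Extended folding,
                                    // TRUE with Double folding
end.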
I don't have an exact answer, but I think higher precision is better
than lower. You cannot expect bitwise-identical results when using
floating-point calculations anyway. For example, AMD and Intel FPUs
perform calculations with some very slight variations, so the same
calculation doesn't always produce the same bitwise-identical
floating-point number; the results are practically very close, so it
doesn't matter. I don't know of any program that breaks on, e.g., AMD
FPUs because it was designed for Intel, or vice versa.
In theory this matters if we implement 128-bit soft-float support in
the compiler, or if we encounter an FPU that supports 128-bit floating
point. The question is whether it's safe to implement 80-bit x87 FPU
floating-point support on host targets with 128-bit FPU support. I
think it's safe, but I'm not an expert on floating point.
But compile-time calculations having lower precision than the run-time
precision is definitely bad.
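As a side note on the "is it safe" part: a hypothetical sketch, using
the formats we already have (Extended as the wide format, Single as
the narrow one) instead of binary128 and Extended, of the
double-rounding corner case such an implementation would have to take
care of; the constants are made up for illustration:

program DoubleRoundingDemo;
{ Assumes an x86 target where Extended really is the 80-bit type
  (on win64 both results collapse to the same value).
  x = 1 + 2^-24 + 2^-54 fits in Extended, but not in Double or Single.
  Rounding x straight to Single gives 1 + 2^-23; rounding it to Double
  first and then to Single gives exactly 1.0. Rounding down in two
  steps via a wider intermediate format can therefore differ from a
  single correctly rounded step. }
var
  x, p24, p54: Extended;
  viaDouble: Double;
  direct, twice: Single;
begin
  p24 := 1.0 / 16777216.0;        // 2^-24 (exact)
  p54 := p24 * p24 / 64.0;        // 2^-54 (exact)
  x := 1.0 + p24 + p54;           // 55 significant bits

  direct := x;                    // one rounding:  Extended -> Single
  viaDouble := x;                 // first rounding: Extended -> Double
  twice := viaDouble;             // second rounding: Double -> Single

  Writeln(direct = twice);        // expected FALSE with 80-bit Extended
end.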
Nikolay
_______________________________________________
fpc-devel maillist - fpc-devel@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-devel