On 2020-09-27 18:27, Nikolay Nikolov via fpc-devel wrote:
On 9/27/20 7:21 PM, Florian Klämpfl via fpc-devel wrote:
On 27.09.20 at 18:03, Martin Frb via fpc-devel wrote:
On 27/09/2020 09:34, Sven Barth via fpc-devel wrote:
Ben Grasset via fpc-devel <fpc-devel@lists.freepascal.org> wrote on Sun, 27 Sep 2020, 07:50:

    That last quote is absolute BS, to be very frank. There is no
    reason whatsoever not to use a natively-64-bit copy of FPC if
    running a natively-64-bit copy of Windows, and there hasn't been
    for well over half a decade at this point.


Yes, there is a reason: you cannot build an i8086 or i386 cross compiler with the Win64 compiler (or any non-x86 compiler, to be fair) due to missing Extended support. Thus the majority of the FPC Core team considers the Win64 compiler inferior and also unnecessary, because the 32-bit one works just as well on that platform.

Just my 2 cents.
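The missing Extended support is easy to observe from a program's point of view. A minimal sketch, assuming current FPC defaults (Extended is the 80-bit x87 type on i386 but an alias for Double on x86_64-win64; other targets may differ):

program ExtSize;
begin
  { On i386 this typically prints 10 (the 80-bit x87 format); on
    x86_64-win64, where Extended is an alias for Double, it prints 8.
    A compiler hosted on a platform without the 80-bit type therefore
    has no native way to represent the target's Extended values exactly. }
  WriteLn('SizeOf(Extended) = ', SizeOf(Extended));
end.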

Well, on the one hand, native 64-bit is only really important if it can do something that 32-bit cannot do (faster, bigger sources, ...).

On the other hand, not everyone needs a win64 to win32 cross compiler. And if they do, a native 32-bit compiler can be renamed and will happily serve as such a cross compiler. (But that does not have to be included; such workarounds may not be wanted, especially since they might cause repeated extra work.)

So the questions here are, IMHO, about the work it takes to amend the release-build process (i.e. update the scripts), and then about the amount of extra time needed for each release (building and testing).

The thing is: we would distribute a compiler (the x86_64-win64 one) which claims to be able to compile to, e.g., x86_64-linux, but it would generate programs which might behave differently than natively compiled ones, because float constants are handled differently internally.

And in this particular case, "different" means "less accurate", due to rounding errors caused by compile-time conversion of 80-bit extended float constants to 64-bit double precision constants. And "less accurate" is bad. :)
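A minimal sketch of that loss, assuming a host compiler whose internal "best real" type is the 80-bit extended versus one limited to 64-bit doubles (the digit counts in the comments are the usual decimal equivalents, stated as assumptions rather than guarantees):

program ConstPrecision;
const
  { A literal with more significant digits than a 64-bit Double can hold. }
  Third: Extended = 0.333333333333333333333333333333;
begin
  { A compiler that folds and stores constants in the 80-bit extended
    format keeps roughly 19 significant decimal digits of this literal;
    a host compiler that only has 64-bit doubles internally rounds the
    same literal to about 15-16 digits before it ever reaches the
    generated program. }
  WriteLn(Third : 0 : 25);
end.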

Sorry for a silly question, but is it really the case that a higher precision is good (or that it at least doesn't matter)? I assume that performing compile-time calculations in higher precision than the calculations performed at run-time may still result in differences and, even though the calculations are more precise, the differences may still confuse our users (if not something worse), especially if it may not always be clear which part will be computed at compile-time and which part at run-time. Is my understanding correct? Or is there some solution that allows achieving a specific precision with a higher-precision library?

Tomas
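Tomas's scenario can be sketched in a few lines. The comments below are assumptions about typical behaviour (the actual outcome depends on the target, the FPU/SSE code generation and the compiler version), not guarantees:

program FoldVsRun;
const
  Folded = 1.0 / 3.0;    { evaluated at compile time, in the compiler's
                           internal "best real" type }
var
  A, B, AtRunTime: Double;
begin
  A := 1.0;
  B := 3.0;
  AtRunTime := A / B;    { evaluated at run time in Double precision }

  { On a target where the folded constant is kept in 80-bit Extended,
    this comparison can be False, because the run-time Double result is
    rounded before the compare; on a Double-only target both sides are
    the same value and the comparison is True. }
  WriteLn(Folded = AtRunTime);
end.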
_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-devel
