Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-16 Thread Martok
On 14.07.2018 at 13:42, Florian Klämpfl wrote:
>> Still, the Delphi(32) compiler always works with Extended precision at compile
>> time. See :
>> "If constantExpression is a real, its type is Extended" - that type then
>> propagates.
> 
> Yes, this is for sure x87 inheritance. On x87 it is in practice always the 
> case that operations are carried out with 
> extended precision.
Probably. All their current compilers are still pure Win32 applications... I
suspect that the fact that they put many x87-specific things in the reference
might be part of the reason.

This is impossible to replicate in FPC (short of using the same softfloat on all
platforms), so the current situation is the only consistent solution.
It is missing documentation, however: AFAICS there is currently no reference for
the types of real literals, and a user inferring from Delphi will have the wrong
expectation.
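For reference, a minimal sketch of that expectation gap, pieced together from the
examples quoted later in this thread (behaviour as reported there for current trunk;
not re-verified here):

const
  d1: Double   = 1.0 / 3.0;            // literals are minimised to Single, so this is a Single division
  e1: Extended = Extended(1.0) / 3.0;  // the explicit cast forces the whole constant expression to the wider type
begin
  writeln(d1:30:16);   // reported to show only Single precision on trunk
  writeln(e1:30:16);
end.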


Regards,

Martok


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-14 Thread Florian Klämpfl

On 13.07.2018 at 16:36, Martok wrote:

> On 12.07.2018 at 23:38, Florian Klämpfl wrote:
>> This will result in different results for runtime and compile time calculated
>> expressions => bad idea.
>
> Aye, doing the same at runtime and compile time would be the sane idea.
>
> Still, the Delphi(32) compiler always works with Extended precision at compile
> time. See :
> "If constantExpression is a real, its type is Extended" - that type then
> propagates.

Yes, this is for sure x87 inheritance. On x87 it is in practice always the case
that operations are carried out with extended precision.

> The other links were about intermediates of runtime calculations, so this change
> is correct:
>> I have added support for the directive $EXCESSPRECISION: it forces that all
>> binary float operations are executed with the highest available precision
>> available for the currently selected FPU
> On that commit, am I blind or is this the same expression twice?






Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-13 Thread Sven Barth via fpc-pascal
Martok wrote on Fri., 13 July 2018, 16:37:

> The other links were about intermediates of runtime calculations, so this
> change
> is correct:
> > I have added support for the directive $EXCESSPRECISION: it forces that
> all binary float operations are executed with
> > the highest available precision available for the currently selected FPU
> On that commit, am I blind or is this the same expression twice?
> <https://github.com/graemeg/freepascal/blob/340c0b3b/compiler/nadd.pas#L159>
>

It was already this way before that commit, but you're right that it
should be "t1" in one of the two. ;)

Regards,
Sven

>

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-13 Thread Martok
On 12.07.2018 at 23:38, Florian Klämpfl wrote:
> This will result in different results for runtime and compile time calculated 
> expressions => bad idea.

Aye, doing the same at runtime and compile time would be the sane idea.

Still, the Delphi(32) compiler always works with Extended precision at compile
time. See :
"If constantExpression is a real, its type is Extended" - that type then
propagates.
So Real literal minimisation (Jonas' original change only exposed it) is another
one for the portability notes.


Speaking of which, who is the admin for the wiki (I couldn't find any contact
information)? It would be really useful to have the "Cite" extension available.
It should already be bundled; it just needs to be enabled.


The other links were about intermediates of runtime calculations, so this change
is correct:
> I have added support for the directive $EXCESSPRECISION: it forces that all 
> binary float operations are executed with 
> the highest available precision available for the currently selected FPU
On that commit, am I blind or is this the same expression twice?



-- 
Regards,
Martok


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-12 Thread Florian Klämpfl

On 10.07.2018 at 19:35, Martok wrote:

>>> I seem to remember that this was once documented somewhere for Delphi. Can't
>>> seem to find it though, so maybe it was a 3rd-party book? There is no hint of it
>>> in the FPC documentation, anyway.
>> As already noted, it is not necessary in Delphi
> Ah sorry, I was wrong: I misremembered Integer const evaluation, where you should
> cast the first part of the expression to Int64 if you want your expression to be
> evaluated as that instead of Integer. Completely different issue.
>
> The fix for #25121 doesn't seem like the best solution. The reported issue was
> with explicit casts, I think the truncation should rather be in
> ttypeconvnode.{typecheck_int_to_real, typecheck_real_to_real,
> typecheck_real_to_currency} ?
>
> That would give bestreal-precision back to const evaluation *unless* the
> programmer explicitly casts to a lower precision?

This will result in different results for runtime and compile time calculated
expressions => bad idea.

> During Codegen, casting down
> happens anyway because of the storage requirements.

I have added support for the directive $EXCESSPRECISION: it forces that all binary
float operations are executed with the highest available precision available for
the currently selected FPU.
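To make the practical effect of excess precision concrete, here is a small
illustration (not from the original mails, and it does not use the directive itself,
since its exact ON/OFF form is not spelled out in this thread): keeping an
intermediate at Double precision preserves a bit that per-step Single rounding loses.

var
  a, b, t: Single;
  d: Double;
begin
  a := 16777216.0;       // 2^24: the next representable Single above it is 2^24 + 2
  b := 1.0;

  t := a + b;            // rounded to Single after the addition: the +1 is lost
  writeln(t - a:0:1);    // prints 0.0

  d := a;                // widen first, so the intermediate keeps the +1
  d := d + b;
  writeln(d - a:0:1);    // prints 1.0
end.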


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-10 Thread Martok
>> I seem to remember that this was once documented somewhere for Delphi. Can't
>> seem to find it though, so maybe it was a 3rd-party book? There is no hint of it
>> in the FPC documentation, anyway.
> As already noted, it is not necessary in Delphi
Ah sorry, I was wrong: I misremembered Integer const evaluation, where you should
cast the first part of the expression to Int64 if you want your expression to be
evaluated as that instead of Integer. Completely different issue.


The fix for #25121 doesn't seem like the best solution. The reported issue was
with explicit casts, I think the truncation should rather be in
ttypeconvnode.{typecheck_int_to_real, typecheck_real_to_real,
typecheck_real_to_currency} ?

That would give bestreal-precision back to const evaluation *unless* the
programmer explicitly casts to a lower precision? During Codegen, casting down
happens anyway because of the storage requirements.

-- 
Regards,
Martok



Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-09 Thread gtt


Quoting Florian Klaempfl:


I am happy to implement this in delphi mode, but it requires some  
documentation references how delphi evaluates such expressions  
(which precision is used when and why) and how they are handled in  
expressions.


These links may be of interest:

http://docwiki.embarcadero.com/RADStudio/Tokyo/en/About_Floating-Point_Arithmetic#Understand_the_Data_Flow

and

http://docwiki.embarcadero.com/RADStudio/Tokyo/en/Floating_point_precision_control_%28Delphi_for_x64%29




Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-09 Thread Florian Klaempfl

On 09.07.2018 at 19:55, g...@wolfgang-ehrhardt.de wrote:


Quoting Martok:
To make sure this works, one has to manually make the const expression be of the
type required:
const
  e: Extended = Extended(1.0) / 3.0;

I seem to remember that this was once documented somewhere for Delphi. Can't
seem to find it though, so maybe it was a 3rd-party book? There is no hint of it
in the FPC documentation, anyway.


As already noted, it is not necessary in Delphi

D:\Work\TMP>D:\DMX\M18\DCC32 -b exttest.dpr
Embarcadero Delphi for Win32 compiler version 25.0
Copyright (c) 1983,2013 Embarcadero Technologies, Inc.
exttest.dpr(14)
15 lines, 0.00 seconds, 20816 bytes code, 13864 bytes data.
D:\Work\TMP>exttest
   0.333433  0.333433
     0.    0.
     0.    0.

And with Delphi Tokyo 10.2.3 Starter

G:\TMP>bds -b exttest.dpr
G:\TMP>exttest.exe
   0.333433  0.333433
     0.    0.
     0.    0.


I am happy to implement this in Delphi mode, but it requires some
documentation references on how Delphi evaluates such expressions (which
precision is used when and why) and how they are handled in expressions.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-09 Thread gtt


Quoting Martok:
To make sure this works, one has to manually make the const expression be of the
type required:
const
  e: Extended = Extended(1.0) / 3.0;

I seem to remember that this was once documented somewhere for Delphi. Can't
seem to find it though, so maybe it was a 3rd-party book? There is no hint of it
in the FPC documentation, anyway.


As already noted, it is not necessary in Delphi

D:\Work\TMP>D:\DMX\M18\DCC32 -b exttest.dpr
Embarcadero Delphi for Win32 compiler version 25.0
Copyright (c) 1983,2013 Embarcadero Technologies, Inc.
exttest.dpr(14)
15 lines, 0.00 seconds, 20816 bytes code, 13864 bytes data.
D:\Work\TMP>exttest
  0.333433  0.333433
0.0.
0.0.

And with Delphi Tokyo 10.2.3 Starter

G:\TMP>bds -b exttest.dpr
G:\TMP>exttest.exe
  0.333433  0.333433
0.0.
0.0.

And I also wrote that the explicit type cast can only be used with
very new Delphi; e.g. compiler version 25 gives an error for

{$apptype console}
const
  x1: extended = extended(1.0)/3.0;
begin
  writeln(x1:30:16);
end.

D:\Work\TMP>b18 exttest2.dpr
D:\Work\TMP>D:\DMX\M18\DCC32 -b exttest2.dpr
Embarcadero Delphi for Win32 compiler version 25.0
Copyright (c) 1983,2013 Embarcadero Technologies, Inc.
exttest2.dpr(3) Error: E2089 Invalid typecast
exttest2.dpr(7)

Tokyo compiles without error; I do not know which was the
first version that allows the type cast.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-09 Thread Martok
On 03.07.2018 at 23:10, Florian Klämpfl wrote:
>> OK, then two questions remain: Why does it occur/apply only for (newer?)
>> 3.1.1 versions?
> I dug a little bit deeper, the reason is:
> https://bugs.freepascal.org/view.php?id=25121
I tried figuring this out, sharing what I found here for reference and a workaround.

In the example of 1.0 / 3.0, both numbers are represented exactly in IEEE-754,
so there is no difference caused by this "range-rounding". Typedef passing has
not changed, and the "minimal" types found in the parser at
compiler/pexpr.pas:1711 have not changed for years either.

So the expression is s32real(1.0) / s32real(3.0) {as before}, which then gets
evaluated by taddnode(typ=slashn) at bestreal precision {as before}, and
returned at s32real precision {as before} and "range rounded" to single {this is
new}. It is then stored in a typed const of whatever type was specified {as 
before}.

To make sure this works, one has to manually make the const expression be of the
type required:
const
  e: Extended = Extended(1.0) / 3.0;

I seem to remember that this was once documented somewhere for Delphi. Can't
seem to find it though, so maybe it was a 3rd-party book? There is no hint of it
in the FPC documentation, anyway.


Regards,
Martok




Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-07 Thread Martok
No official answer?
Well, that's an answer too I guess.

On 03.07.2018 at 22:25, Ralf Quint wrote:
> However, too many people
> just turn all the checks off, because they feel bothered by the all
> warnings that the compiler gives them where they are programming in a
> very ambiguous and possibly dangerous way.
You do realize runtime checks are done at runtime?

-- 
Regards,
Martok



Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Paul Nance
It's hard to make things foolproof; fools are too ingenious.


On Jul 2, 2018 6:14 PM, "Wolf"  wrote:

> Not so long ago, Florian was proudly bragging about "Pascal does not allow
> you to shoot yourself in the foot
> "
>
> What about this little program:
>
> program Project1;
>
> var a,b: byte;
> begin
>   a:=1;
>   b:=a*(-1);
>   writeln(b);// result: 255
> end.
>
>
> The result is obviously correct, given how the variables are declared. But
> there are no compiler warnings / errors that the assignment b:=a*(-1) is
> fishy, to put it mildly. And if you are serious about strong typing, it
> ought to be illegal, with a suitable complaint from the compiler.
>
> Who is shooting whom in the foot?
>
> Wolf
>
> On 02/07/2018 20:22, Santiago A. wrote:
>
> On 01/07/2018 at 10:27, C Western wrote:
>
> On 29/06/18 21:55, Sven Barth via fpc-pascal wrote:
>
> More confusingly, if a single variable is used, the expected Max(Double,
> Double) is called:
>
> function Max(a, b: Double): Double; overload;
> begin
>   WriteLn('Double');
>   if a > b then Result := a else Result := b;
> end;
>
> function Max(a, b: Single): Single; overload;
> begin
>   WriteLn('Single');
>   if a > b then Result := a else Result := b;
> end;
>
> var
>   v1: Double;
>   v2: Single;
> begin
>   v1 := Pi;
>   v2 := 0;
>   WriteLn(v1);
>   WriteLn(Max(v1,0));
>   WriteLn(Max(v1,0.0));
>   WriteLn(Max(v1,v2));
> end.
>
> Prints:
>  3.1415926535897931E+000
> Single
>  3.141592741E+00
> Double
>  3.1415926535897931E+000
> Double
>  3.1415926535897931E+000
>
> If this is not a bug, it would be very helpful if the compiler could print
> a warning whenever a value is implicitly converted from double to single.
>
> Well, pascal is a hard typed language, but not that hard in numeric
> issues. I think it is a little inconsistent that it implicitly converts
> '0.0' to double but '0 to single.
>
> Nevertheless, I think it is a bug. It doesn't choose the right overloaded
> function
>
> But the main is this:
> you have several overload options for max
> 1 extended, extended
> 2 double, double
> 3 single, single
> 4 int64, int64
> 5 integer, integer
>
> When it finds (double, single), why does  it choose (single, single)
> instead of (double, double)?
> The natural behavior should be to widen to the greater parameter, like it
> does in expressions.
>
>
>

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Florian Klämpfl

On 03.07.2018 at 22:11, g...@wolfgang-ehrhardt.de wrote:


Quoting Florian Klämpfl:


 But actually, I just found out that we have something like this already for 
years:

{$minfpconstprec 64}


OK, then two questions remain: Why does is occur/apply only for (newer?) 3.1.1 
versions?


I dug a little bit deeper, the reason is:
https://bugs.freepascal.org/view.php?id=25121


And why is there no option to select the maximum precision for the target FPU?
E.g. if extended has 10 bytes (no only an alias for double with 64-bit), it
should be possible to select {$minfpconstprec 80} or {$minfpconstprec max}.



From the FPC sources:

  { adding support for 80 bit here is tricky, since we can't really }
  { check whether the target cpu+OS actually supports it}

(comment not by me). I am not sure if this is still valid; it is >10 years old.

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Ralf Quint
On 7/3/2018 10:41 AM, Martok wrote:
>>> If you compile with range checks on, you get a runtime error.
>> why are so many folks NOT developing with all the checks (range, heap, 
>> stack, 
>> etc) turned ON and then turning them off for production builds???
> Actually, while we're at it - it seems to me that for FPC, "all runtime checks
> enabled" is the "defined" way to use the language, and disabling them is more 
> of
> an optimization that the programmer may choose?
> In books about TP and Delphi, it is usually presented the other way around, 
> the
> checks being a debugging tool for edge cases and not essential.
>
> If so, that'd explain some of the issues people have.
>
Exactly, Pascal by and large assumes that you develop with all checks
enabled. Which is something that people also should do in other
languages, like C, which do have those options. However, too many people
just turn all the checks off, because they feel bothered by all the
warnings that the compiler gives them when they are programming in a
very ambiguous and possibly dangerous way.

Speed kills...

Ralf



Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Ralf Quint
On 7/3/2018 10:33 AM, wkitt...@windstream.net wrote:
> On 07/03/2018 12:41 PM, Ralf Quint wrote:
>> And no "new language" can absolve the programmer from properly doing
>> their
>> work. Everything else is just a quick hack, not a properly designed
>> program...
>
>
> Welcome Back, Ralf!  we've missed you O:) O:) O:) 
Thanks, Mark, well, I have always been here (as in FreePascal), it was
just "the other place" where I had to take a prolonged time off... ;-)

Ralf


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Ralf Quint
On 7/3/2018 10:09 AM, Marco van de Voort wrote:
> In our previous episode, Santiago A. said:
>> Pascal needs to break backward compatibility to advance, that is, in 
>> fact, a new language. But if pascal is struggling to survive, let alone 
>> a new language if you are not mozilla, google...
> I think to advance Pascal needs less discussion about language and more
> about libraries.
+1

It seems that there are more and more people trying to make Pascal into
a second coming of whatever other language they are used to/have used
before, instead of properly solving problems with the language tools and
features that are already provided.

Ralf


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread gtt


Quoting Florian Klämpfl:

 But actually, I just found out that we have something like this  
already for years:


{$minfpconstprec 64}


OK, then two questions remain: Why does it occur/apply only for
(newer?) 3.1.1 versions?

And why is there no option to select the maximum precision for the target FPU?
E.g. if extended has 10 bytes (not just an alias for double as on 64-bit), it
should be possible to select {$minfpconstprec 80} or {$minfpconstprec max}.


But thanks anyhow.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Florian Klämpfl

On 03.07.2018 at 21:43, g...@wolfgang-ehrhardt.de wrote:


Quoting Florian Klämpfl:



So you want float constants being evaluated always with full precision (which would be required for consistency) 
causing any floating point expression containing a constant being evaluated with full precision as well?


Yes, at least as default or selectable per option (like FASTMATH etc),
and AFAIK it is default for all compilers I know except 3.1.1.


I am not sure if people would like this; it means, for example, that

single2:=single1/3.0;

results in two type conversions and a double division (which is more expensive than a single one). But actually, I just 
found out that we have something like this already for years:


{$minfpconstprec 64}
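A minimal sketch of that existing directive in use, adapted from the constant
examples earlier in the thread (assuming the value 64 as quoted above; output not
re-verified here):

{$minfpconstprec 64}
const
  d1: double   = 1.0/3.0;   // the constant expression is now evaluated with at least 64-bit precision
  x1: extended = 1.0/3.0;   // still capped at 64 bits; 80-bit is not selectable, as discussed in this thread
begin
  writeln(d1:30:16);
  writeln(x1:30:16);
end.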

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread gtt


Quoting Florian Klämpfl:



So you want float constants being evaluated always with full  
precision (which would be required for consistency) causing any  
floating point expression containing a constant being evaluated with  
full precision as well?


Yes, at least as default or selectable per option (like FASTMATH etc),
and AFAIK it is default for all compilers I know except 3.1.1.


2. This is a regression from 3.0.4 (here the 32-bit version works as
expected for both double and extended, and same for 64-bit except that
here extended=double) and to 3.1.1 (both under Win7/64).


This was probably changed for consistency. I tried to find the  
commit which changes this, but I cannot currently find it.


Thank you for your effort.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Florian Klämpfl

On 03.07.2018 at 20:33, g...@wolfgang-ehrhardt.de wrote:


Quoting Florian Klämpfl:


In pascal, the rule applies that *the resulting type of an operation
does not depend on the type a value is assigned too*. So: 1.0 fits
perfectly into a single, 3.0 as well, they are single constants (you
didn't tell the compiler otherwise). Nobody wants that unnecessarily
larger types are used. So for the compiler this is a single division
and later on the result is converted into extended. The result for
integer is indeed a little bit strange, here probably the default
rule applies that any /-operand is converted to the best available
real type if it is not a real type, thus the result differs.
Question is, if the compiler should look the operand types and
choose more carefully the types, but I tend more to say no.


In any case there two facts.
1. The constants are evaluated at compile time, so there is no
speed penalty.


So you want float constants to always be evaluated with full precision (which would be required for consistency), causing
any floating point expression containing a constant to be evaluated with full precision as well? Or how do you want to
change the rules to overcome the mentioned fact? I see only rules which require additional exceptions or cause full
precision calculations always. The fact that the result type of an operation depends only on the operands is not
changeable; changing that would end in devil's kitchen.



2. This is a regression from 3.0.4 (here the 32-bit version works as
expected for both double and extended, and same for 64-bit except that
here extended=double) and to 3.1.1 (both under Win7/64).


This was probably changed for consistency. I tried to find the commit which
changed this, but I cannot currently find it.






Is there a definite statement, that is will remain so.


Insert type casts for the constants to get proper results on all
archs, see below.

This is problematic because for portable code because not all compilers
support type casts here.


(Delphi works as expected).


The reason why delphi behaves different is probably due to it's
roots in being x87 only: x87 does all calculations with extended
precision.

This is also as expected for 64-Bit Delphis using SSE.



As said, (very strong) roots in x87.

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread gtt


Quoting Florian Klämpfl:


In pascal, the rule applies that *the resulting type of an operation
does not depend on the type a value is assigned too*. So: 1.0 fits
perfectly into a single, 3.0 as well, they are single constants (you
didn't tell the compiler otherwise). Nobody wants that unnecessarily
larger types are used. So for the compiler this is a single division
and later on the result is converted into extended. The result for
integer is indeed a little bit strange, here probably the default
rule applies that any /-operand is converted to the best available
real type if it is not a real type, thus the result differs.
Question is, if the compiler should look the operand types and
choose more carefully the types, but I tend more to say no.


In any case there are two facts.
1. The constants are evaluated at compile time, so there is no
speed penalty.
2. This is a regression from 3.0.4 (here the 32-bit version works as
expected for both double and extended, and same for 64-bit except that
here extended=double) and to 3.1.1 (both under Win7/64).




Is there a definite statement, that is will remain so.


Insert type casts for the constants to get proper results on all
archs, see below.

This is problematic for portable code because not all compilers
support type casts here.


(Delphi works as expected).


The reason why delphi behaves different is probably due to it's
roots in being x87 only: x87 does all calculations with extended
precision.

This is also as expected for 64-Bit Delphis using SSE.



Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Florian Klämpfl

On 03.07.2018 at 19:42, g...@wolfgang-ehrhardt.de wrote:


Quoting Vojtěch Čihák:

will not give any warning even if correct result is 2.5.
It is simply absurd because it is not about shooting your own foot. Compiler is not a crystal ball, it does what you 
tell him.
If you need floating point, use floating point types and floating point division (my example) and if you need signed 
results, use


Really?

There are other source of loss of precision (in the trunk version) and the 
compiler does
not what I tell him. Example:

const
   d1: double = 1.0/3.0;
   d2: double = 1/3;
   x1: extended = 1.0/3.0;
   x2: extended = 1/3;
   s1: single   = 1.0/3.0;
   s2: single   = 1/3;
begin
   writeln(s1:30:10,  s2:30:10);
   writeln(d1:30:16,  d2:30:16);
   writeln(x1:30:16,  x2:30:16);
end.

and the result

   0.333433  0.333433
     0.333432674408    0.
     0.333432674408    0.

The single result is expected. But the  double and extended constants d1, x1 are
evaluated as single, even if I told the compiler to use the floating point 
quotient 1.0/3.0.


In Pascal, the rule applies that *the resulting type of an operation does not depend on the type a value is assigned
to*. So: 1.0 fits perfectly into a single, 3.0 as well; they are single constants (you didn't tell the compiler
otherwise). Nobody wants unnecessarily large types to be used. So for the compiler this is a single division, and
later on the result is converted into extended. The result for integer is indeed a little bit strange; here probably the
default rule applies that any /-operand is converted to the best available real type if it is not a real type, thus the
result differs. The question is whether the compiler should look at the operand types and choose the types more
carefully, but I tend to say no.




If I use the integer quotient the values are as expected. This is against intuition 


True, but intuition is bad in programming :)


and gives
no warning. And even if I can adapt to this and live with this quirk: 


What exactly is the quirk?


Is there a definite
statement, that is will remain so. 


Insert type casts for the constants to get proper results on all archs, see 
below.


(Delphi works as expected).


The reason why Delphi behaves differently is probably due to its roots in being x87-only: x87 does all calculations with
extended precision. You get similar trouble if you do multiple single operations in one statement: on x87, the result
is normally different from all other FPUs because x87 does not round in between. This is different from most other FPU
implementations. This can be fixed only at the price of doing all floating point operations on non-x87 FPUs
unnecessarily precisely.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Florian Klämpfl

On 03.07.2018 at 14:33, Wolf wrote:


Maybe we do get some views from the key authors of Free Pascal.


Not from me. These topics have been discussed a zillion times.

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread gtt


Quoting Vojtěch Čihák:

will not give any warning even if correct result is 2.5.
It is simply absurd because it is not about shooting your own foot.  
Compiler is not a crystal ball, it does what you tell him.
If you need floating point, use floating point types and floating  
point division (my example) and if you need signed results, use


Really?

There are other sources of loss of precision (in the trunk version) and the
compiler does not do what I tell it. Example:

const
  d1: double = 1.0/3.0;
  d2: double = 1/3;
  x1: extended = 1.0/3.0;
  x2: extended = 1/3;
  s1: single   = 1.0/3.0;
  s2: single   = 1/3;
begin
  writeln(s1:30:10,  s2:30:10);
  writeln(d1:30:16,  d2:30:16);
  writeln(x1:30:16,  x2:30:16);
end.

and the result

  0.333433  0.333433
  0.333432674408    0.
  0.333432674408    0.

The single result is expected. But the  double and extended constants  
d1, x1 are
evaluated as single, even if I told the compiler to use the floating  
point quotient 1.0/3.0.


If I use the integer quotient the values are as expected. This is against
intuition and gives no warning. And even if I can adapt to this and live with
this quirk: is there a definite statement that it will remain so? (Delphi works
as expected.)



Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Martok
>> If you compile with range checks on, you get a runtime error.
> why are so many folks NOT developing with all the checks (range, heap, stack, 
> etc) turned ON and then turning them off for production builds???
Actually, while we're at it - it seems to me that for FPC, "all runtime checks
enabled" is the "defined" way to use the language, and disabling them is more of
an optimization that the programmer may choose?
In books about TP and Delphi, it is usually presented the other way around, the
checks being a debugging tool for edge cases and not essential.

If so, that'd explain some of the issues people have.

-- 
Regards,
Martok



Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread wkitty42

On 07/03/2018 12:41 PM, Ralf Quint wrote:

And no "new language" can absolve the programmer from properly doing their
work. Everything else is just a quick hack, not a properly designed
program...



Welcome Back, Ralf!  we've missed you O:) O:) O:)




Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Jim Lee



On 07/03/18 05:33, Wolf wrote:


The major shortcoming of this thread, the way I see it, is that the 
answer provided explains what the compiler does, but not why the key 
authors of Free Pascal have made these choices. What their choices 
achieve is a substantial watering-down of what is supposedly Pascal's 
most significant paradigm: strong typing. As Jim Lee points out, 
strong typing does limit utility - but if utility is first concern, a 
weakly typed language such as C would be more appropriate.


When looking at the (partial) disassembly of my little program, we see 
to what degree the compiler writers have sacrificed strong typing:


a:=1;
  movb   $0x1,0x22de87(%rip)    # move 1 as a single byte into the 8 bit wide variable A

b:=a*(-1);
  movzbl 0x22de80(%rip),%eax    # move A into register %EAX and convert to 32 bit

  neg    %rax                   # negate what is in %EAX
  mov    %al,0x22de87(%rip)     # extract the low 8 bit from %EAX and store it in variable B

writeln(b);    // result: 255
. . .

This was compiled without any optimizations. As you can see, the 
brackets are ignored, as is the fact that variables A and B were 
supposed to be multiplied. In other words, the compiler has optimized 
the code, where it was not supposed to do so. It has also replaced 
byte typed values with longint typed values. It has taken my code and 
translated it as if I had written

  var
    a: byte;
    b: longint;
  begin
    a:=1;
    b:=-longint(a);       // convert A to a longint and negate it, then save result in B
    writeln( Lower(b) );  // 'Lower' is a fictional typecast to denote that I only use
                          // the %AL portion of the %EAX register for the result
  end.
Which is quite a bit different from what I did program. Sorry if I am 
picky here, but this is the type of bug you can expect in software if 
you test using examples, and not through rigorous reasoning. And this 
is the reason why the original Borland Pascal had range checking 
built-in. If you activate it, the compiler does complain, both on my 
little program and on Jim's.
But by now, range checking is optional, and Lazarus at least does not 
even activate it by default.
But range checking is not the same as type checking, so I regard it as 
a crutch, a work-around that needs to be taken because the compiler 
does not adhere to (the spirit of) strong typing. And in this sense, 
what I submit here represents the same issue as what is given in the 
subject string if the whole thread:


Strong typing, and also readability, has been sacrificed on the altar 
of utility, by using implicit type conversions.


Maybe we do get some views from the key authors of Free Pascal.


Wolf



I didn't fully understand the intent of your first post, but now I get 
what you're saying.


I tend to agree.  Strict typing is the main thing that separates Pascal 
from C, conceptually.  I'd rather not see them converge.


-Jim



Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Marco van de Voort
In our previous episode, Santiago A. said:
> 
> Pascal needs to break backward compatibility to advance, that is, in 
> fact, a new language. But if pascal is struggling to survive, let alone 
> a new language if you are not mozilla, google...

I think to advance Pascal needs less discussion about language and more
about libraries.

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread wkitty42

On 07/03/2018 03:26 AM, C Western wrote:

On 02/07/18 23:13, Wolf wrote:

Who is shooting whom in the foot?

Wolf



If you compile with range checks on, you get a runtime error.



why are so many folks NOT developing with all the checks (range, heap, stack, 
etc) turned ON and then turning them off for production builds??? that is 
another one of ""the Pascal ways"" AFAIK... we've (TINW) been doing this since 
Pascal came out with these checks... it just makes no sense not to use the tools 
at hand and these checks are important tools... even moreso in today's world...




Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Vojtěch Čihák

Hi,

I agree with the original topic that the overload Math.Max(Double, Double)
should be chosen instead of Math.Max(Single, Single) when one of the parameters
is an Integer.

But your criticism makes no sense to me. You could just as well point out that this code:

var a, b: Byte;
begin
  a:=5;
  b:=a div 2;
  writeln(b);  // result: 2
end;

will not give any warning even though the correct result is 2.5.
It is simply absurd because it is not about shooting your own foot. The compiler is
not a crystal ball; it does what you tell it.
If you need floating point, use floating point types and floating point
division (my example) and if you need signed results, use signed types (your
example).
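A minimal sketch of that point (illustrative only, not from the original mail):

var
  a: Byte;
  r: Single;
begin
  a := 5;
  r := a / 2;          // floating point division: r = 2.5
  writeln(r:0:1);
  writeln(a div 2);    // integer division: 2, exactly as written
end.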
 
Vojtěch
__

From: Wolf
To: fpc-pascal@lists.freepascal.org
Date: 03.07.2018 14:33
Subject: Re: [fpc-pascal] Loss of precision when using math.Max()


On 03/07/2018 11:26, Jim Lee wrote:
 

On 07/02/18 15:13, Wolf wrote:
Not so long ago, Florian was proudly bragging about "Pascal does not allow you to
shoot yourself in the foot
<http://www.toodarkpark.org/computers/humor/shoot-self-in-foot.html>"
What about this little program:
program Project1;

var a,b: byte;
begin
  a:=1;
  b:=a*(-1);
  writeln(b);    // result: 255
end.
    
The result is obviously correct, given how the variables are declared. But there are no compiler warnings / errors that the assignment b:=a*(-1) is fishy, to put it mildly. And if you are serious about strong typing, it ought to be illegal, with a suitable complaint from the compiler.

Who is shooting whom in the foot?
Wolf


Should the compiler balk at this as well?

program Project1;

var a,b,c: byte;
begin
  a:=5;
  b:=6;
  c:=a-b;
  writeln(c);    // result: 255
end.

Without the implicit conversion of signed/unsigned values, the utility of the 
language is greatly diminished.

-Jim




Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Ralf Quint
On 7/3/2018 5:01 AM, Santiago A. wrote:
> On 03/07/2018 at 01:26, Jim Lee wrote:
>>
>> Without the implicit conversion of signed/unsigned values, the
>> utility of the language is greatly diminished.
Bollocks! Just learn to program in Pascal, instead of trying to have Pascal act
just like another compiler/language.
>
> Let's be honest, compared to C and many other languages (included C++,
> that is a suicide without extra-language analyzer tools), Pascal is
> very type secure. For instance, many languages allow assigning a float
> to an integer without any problem. Moreover without being clearly
> specified by language definition what the compiler should do, truncate
> or round.
>
> Pascal needs to break backward compatibility to advance, that is, in
> fact, a new language. But if pascal is struggling to survive, let
> alone a new language if you are not mozilla, google...
Again bollocks!

If you have range checks properly enabled, you get an exception for those
code samples. For such code samples (all variables of the same type,
Byte in the examples), the compiler cannot make a determination at
compile time (and we have a compiler here, not just another interpreter,
like Python, etc.).

And no "new language" can absolve the programmer from properly doing
their work. Everything else is just a quick hack, not a properly
designed program...

Ralf


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Santiago A.

On 03/07/2018 at 14:33, Wolf wrote:



PS.: while composing this mail, Santiago wrote:  Pascal needs to break 
backward compatibility to advance, that is, in fact, a new language. 
But if pascal is struggling to survive, let alone a new language if 
you are not mozilla, google...


In which direction should Free Pascal move - lower type (range, 
overflow, memory) checking demands, with the implied additional 
sources for bugs, but also better speed and shorter code, a la C, or 
should Free Pascal rather take the lead and move towards safer, and 
more trustworthy, code, a la Rust?


Well, I am more for safer. But the problem is not that Pascal is not
safe enough (some parts could be improved, but it gets a good mark); it
is about new features that need convoluted workarounds or libraries and
should be part of the language syntax.
For instance: some functional programming, closures, anonymous functions,
concurrency, a clear use of character sets, different types of pointers.


And there are things that I would change in the current syntax, but I 
suppose it is a matter of  taste.


This is a topic for fpc-other ;-)

--
Saludos

Santiago A.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Wolf
The major shortcoming of this thread, the way I see it, is that the 
answer provided explains what the compiler does, but not why the key 
authors of Free Pascal have made these choices. What their choices 
achieve is a substantial watering-down of what is supposedly Pascal's 
most significant paradigm: strong typing. As Jim Lee points out, strong 
typing does limit utility - but if utility is first concern, a weakly 
typed language such as C would be more appropriate.


When looking at the (partial) disassembly of my little program, we see 
to what degree the compiler writers have sacrificed strong typing:


a:=1;
  movb   $0x1,0x22de87(%rip)    # move 1 as a single byte into the 8 bit wide variable A

b:=a*(-1);
  movzbl 0x22de80(%rip),%eax    # move A into register %EAX and convert to 32 bit

  neg    %rax                   # negate what is in %EAX
  mov    %al,0x22de87(%rip)     # extract the low 8 bit from %EAX and store it in variable B

writeln(b);    // result: 255
. . .

This was compiled without any optimizations. As you can see, the 
brackets are ignored, as is the fact that variables A and B were 
supposed to be multiplied. In other words, the compiler has optimized 
the code, where it was not supposed to do so. It has also replaced byte 
typed values with longint typed values. It has taken my code and 
translated it as if I had written

  var
    a: byte;
    b: longint;
  begin
    a:=1;
    b:=-longint(a);       // convert A to a longint and negate it, then save result in B
    writeln( Lower(b) );  // 'Lower' is a fictional typecast to denote that I only use
                          // the %AL portion of the %EAX register for the result
  end.
Which is quite a bit different from what I did program. Sorry if I am 
picky here, but this is the type of bug you can expect in software if 
you test using examples, and not through rigorous reasoning. And this is 
the reason why the original Borland Pascal had range checking built-in. 
If you activate it, the compiler does complain, both on my little 
program and on Jim's.
But by now, range checking is optional, and Lazarus at least does not 
even activate it by default.
But range checking is not the same as type checking, so I regard it as a 
crutch, a work-around that needs to be taken because the compiler does 
not adhere to (the spirit of) strong typing. And in this sense, what I 
submit here represents the same issue as what is given in the subject
string of the whole thread:


Strong typing, and also readability, has been sacrificed on the altar of 
utility, by using implicit type conversions.


Maybe we do get some views from the key authors of Free Pascal.


Wolf


PS.: while composing this mail, Santiago wrote:  Pascal needs to break 
backward compatibility to advance, that is, in fact, a new language. But 
if pascal is struggling to survive, let alone a new language if you are 
not mozilla, google...


In which direction should Free Pascal move - lower type (range, 
overflow, memory) checking demands, with the implied additional sources 
for bugs, but also better speed and shorter code, a la C, or should Free 
Pascal rather take the lead and move towards safer, and more 
trustworthy, code, a la Rust?


W.



On 03/07/2018 11:26, Jim Lee wrote:




On 07/02/18 15:13, Wolf wrote:


Not so long ago, Florian was proudly bragging about "Pascal does not 
allow you to shoot yourself in the foot 
"


What about this little program:

program Project1;

var a,b: byte;
begin
  a:=1;
  b:=a*(-1);
  writeln(b);    // result: 255
end.

The result is obviously correct, given how the variables are 
declared. But there are no compiler warnings / errors that the 
assignment b:=a*(-1) is fishy, to put it mildly. And if you are 
serious about strong typing, it ought to be illegal, with a suitable 
complaint from the compiler.


Who is shooting whom in the foot?

Wolf





Should the compiler balk at this as well?

program Project1;

var a,b,c: byte;
begin
  a:=5;
  b:=6;
  c:=a-b;
  writeln(c);    // result: 255
end.

Without the implicit conversion of signed/unsigned values, the utility 
of the language is greatly diminished.


-Jim




Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread Santiago A.

On 03/07/2018 at 01:26, Jim Lee wrote:




On 07/02/18 15:13, Wolf wrote:


Not so long ago, Florian was proudly bragging about "Pascal does not 
allow you to shoot yourself in the foot 
"


What about this little program:

program Project1;

var a,b: byte;
begin
  a:=1;
  b:=a*(-1);
  writeln(b);    // result: 255
end.

The result is obviously correct, given how the variables are 
declared. But there are no compiler warnings / errors that the 
assignment b:=a*(-1) is fishy, to put it mildly. And if you are 
serious about strong typing, it ought to be illegal, with a suitable 
complaint from the compiler.


Who is shooting whom in the foot?

Wolf





Should the compiler balk at this as well?

program Project1;

var a,b,c: byte;
begin
  a:=5;
  b:=6;
  c:=a-b;
  writeln(c);    // result: 255
end.

Without the implicit conversion of signed/unsigned values, the utility 
of the language is greatly diminished.


Let's be honest: compared to C and many other languages (including C++,
which is suicide without extra-language analyzer tools), Pascal is very
type secure. For instance, many languages allow assigning a float to an
integer without any problem, moreover without the language definition
clearly specifying what the compiler should do, truncate or round.


Pascal needs to break backward compatibility to advance, that is, in 
fact, a new language. But if pascal is struggling to survive, let alone 
a new language if you are not mozilla, google...




-Jim






--
Saludos

Santiago A.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-03 Thread C Western

On 02/07/18 23:13, Wolf wrote:
Not so long ago, Florian was proudly bragging about "Pascal does not 
allow you to shoot yourself in the foot 
"


What about this little program:

program Project1;

var a,b: byte;
begin
   a:=1;
   b:=a*(-1);
   writeln(b);    // result: 255
end.

The result is obviously correct, given how the variables are declared. 
But there are no compiler warnings / errors that the assignment 
b:=a*(-1) is fishy, to put it mildly. And if you are serious about 
strong typing, it ought to be illegal, with a suitable complaint from 
the compiler.


Who is shooting whom in the foot?

Wolf



If you compile with range checks on, you get a runtime error.
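For reference, a minimal sketch of the same program with range checking switched on
(the directive is equivalent to compiling with -Cr):

program Project1;
{$R+}              // range checking on

var
  a, b: byte;
begin
  a := 1;
  b := a * (-1);   // -1 does not fit into a byte: run-time error 201 (range check) instead of printing 255
  writeln(b);
end.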

Colin

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-02 Thread Jim Lee



On 07/02/18 15:13, Wolf wrote:


Not so long ago, Florian was proudly bragging about "Pascal does not 
allow you to shoot yourself in the foot 
"


What about this little program:

program Project1;

var a,b: byte;
begin
  a:=1;
  b:=a*(-1);
  writeln(b);    // result: 255
end.

The result is obviously correct, given how the variables are declared. 
But there are no compiler warnings / errors that the assignment 
b:=a*(-1) is fishy, to put it mildly. And if you are serious about 
strong typing, it ought to be illegal, with a suitable complaint from 
the compiler.


Who is shooting whom in the foot?

Wolf





Should the compiler balk at this as well?

program Project1;

var a,b,c: byte;
begin
  a:=5;
  b:=6;
  c:=a-b;
  writeln(c);    // result: 255
end.

Without the implicit conversion of signed/unsigned values, the utility 
of the language is greatly diminished.


-Jim


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-02 Thread Wolf
Not so long ago, Florian was proudly bragging about "Pascal does not 
allow you to shoot yourself in the foot 
"


What about this little program:

program Project1;

var a,b: byte;
begin
  a:=1;
  b:=a*(-1);
  writeln(b);    // result: 255
end.

The result is obviously correct, given how the variables are declared. 
But there are no compiler warnings / errors that the assignment 
b:=a*(-1) is fishy, to put it mildly. And if you are serious about 
strong typing, it ought to be illegal, with a suitable complaint from 
the compiler.


Who is shooting whom in the foot?

Wolf


On 02/07/2018 20:22, Santiago A. wrote:

On 01/07/2018 at 10:27, C Western wrote:

On 29/06/18 21:55, Sven Barth via fpc-pascal wrote:

More confusingly, if a single variable is used, the expected 
Max(Double, Double) is called:


function Max(a, b: Double): Double; overload;
begin
  WriteLn('Double');
  if a > b then Result := a else Result := b;
end;

function Max(a, b: Single): Single; overload;
begin
  WriteLn('Single');
  if a > b then Result := a else Result := b;
end;

var
  v1: Double;
  v2: Single;
begin
  v1 := Pi;
  v2 := 0;
  WriteLn(v1);
  WriteLn(Max(v1,0));
  WriteLn(Max(v1,0.0));
  WriteLn(Max(v1,v2));
end.

Prints:
 3.1415926535897931E+000
Single
 3.141592741E+00
Double
 3.1415926535897931E+000
Double
 3.1415926535897931E+000

If this is not a bug, it would be very helpful if the compiler could 
print a warning whenever a value is implicitly converted from double 
to single.
Well, pascal is a hard typed language, but not that hard in numeric 
issues. I think it is a little inconsistent that it implicitly 
converts '0.0' to double but '0 to single.


Nevertheless, I think it is a bug. It doesn't choose the right 
overloaded function


But the main is this:
you have several overload options for max
1 extended, extended
2 double, double
3 single, single
4 int64, int64
5 integer, integer

When it finds (double, single), why does  it choose (single, single) 
instead of (double, double)?
The natural behavior should be to widen to the greater parameter, like 
it does in expressions.





Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-02 Thread Santiago A.

On 01/07/2018 at 10:27, C Western wrote:

On 29/06/18 21:55, Sven Barth via fpc-pascal wrote:

More confusingly, if a single variable is used, the expected 
Max(Double, Double) is called:


function Max(a, b: Double): Double; overload;
begin
  WriteLn('Double');
  if a > b then Result := a else Result := b;
end;

function Max(a, b: Single): Single; overload;
begin
  WriteLn('Single');
  if a > b then Result := a else Result := b;
end;

var
  v1: Double;
  v2: Single;
begin
  v1 := Pi;
  v2 := 0;
  WriteLn(v1);
  WriteLn(Max(v1,0));
  WriteLn(Max(v1,0.0));
  WriteLn(Max(v1,v2));
end.

Prints:
 3.1415926535897931E+000
Single
 3.141592741E+00
Double
 3.1415926535897931E+000
Double
 3.1415926535897931E+000

If this is not a bug, it would be very helpful if the compiler could 
print a warning whenever a value is implicitly converted from double 
to single.
Well, Pascal is a strongly typed language, but not that strict in numeric
issues. I think it is a little inconsistent that it implicitly converts
'0.0' to double but '0' to single.


Nevertheless, I think it is a bug. It doesn't choose the right 
overloaded function


But the main point is this:
you have several overload options for max
1 extended, extended
2 double, double
3 single, single
4 int64, int64
5 integer, integer

When it finds (double, single), why does  it choose (single, single) 
instead of (double, double)?
The natural behavior should be to widen to the greater parameter, like 
it does in expressions.
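Until that changes, a minimal sketch of the workarounds that follow from this thread
(force both arguments to Double, either with a real literal or by widening the Single
value first; names are illustrative):

program maxdemo;
uses math;
var
  v1, wide: Double;
  v2: Single;
begin
  v1 := Pi;
  v2 := 0;
  writeln(Max(v1, 0.0));   // 0.0 instead of 0: the Double overload is picked
  wide := v2;              // widen the Single argument by assignment
  writeln(Max(v1, wide));  // both arguments are Double: no precision is lost
end.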


--
Saludos

Santiago A.


Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-02 Thread C Western

On 01/07/18 09:27, C Western wrote:

On 29/06/18 21:55, Sven Barth via fpc-pascal wrote:

On 29.06.2018 at 18:45, Alan Krause wrote:
I stumbled upon something the other day that was causing numerical 
differences between compiled Delphi and FPC code. Executing the 
following sample console application illustrates the issue clearly:


program test;

uses
  math, SysUtils;

var
  arg1 : double;
  arg2 : double;
  res  : double;
begin
  arg1 := 10.00;
  arg2 := 72500.51;
  writeln( 'arg1 = ' + FormatFloat( '0.', arg1 ) );
  writeln( 'arg2 = ' + FormatFloat( '0.', arg2 ) );

  res := arg1 - arg2;
  writeln( 'arg1 - arg2 = ' + FormatFloat( '0.', res ) );
  writeln( 'Max( arg1 - arg2, 0 ) = ' + FormatFloat( '0.', 
Max( res, 0 ) ) );
  writeln( 'Max( arg1 - arg2, 0.0 ) = ' + FormatFloat( '0.', 
Max( res, 0.0 ) ) );

end.

--- begin output (Linux x86_64) ---

arg1 = 10.
arg2 = 72500.5100
arg1 - arg2 = 27499.4900
*Max( res, 0 ) = 27499.49023438*
Max( res, 0.0 ) = 27499.4900

--- end output ---

I am guessing that the integer value of zero is causing the wrong 
overloaded function to be called? I was able to solve the problem in 
my code by replacing the 0 with 0.0.


The compiler converts the 0 to the type with the lowest precision that 
can hold the value (or the largest if none can hold it exactly). For 0 
this is already satisfied by Single, so the compiler essentially has 
the parameter types Double and Single. For some reason (I don't know 
whether it's due to a bug or by design) it picks the Single overload 
instead of the Double one.
Someone who knows more about the compiler's overload handling would 
need to answer why it favors (Single, Single) over (Double, Double) 
for (Double, Single) parameters (or (Single, Double), the order 
doesn't matter here).


Regards,
Sven



More confusingly, if a Single variable is used, the expected Max(Double, 
Double) is called:


function Max(a, b: Double): Double; overload;
begin
   WriteLn('Double');
   if a > b then Result := a else Result := b;
end;

function Max(a, b: Single): Single; overload;
begin
   WriteLn('Single');
   if a > b then Result := a else Result := b;
end;

var
   v1: Double;
   v2: Single;
begin
   v1 := Pi;
   v2 := 0;
   WriteLn(v1);
   WriteLn(Max(v1,0));
   WriteLn(Max(v1,0.0));
   WriteLn(Max(v1,v2));
end.

Prints:
  3.1415926535897931E+000
Single
  3.141592741E+00
Double
  3.1415926535897931E+000
Double
  3.1415926535897931E+000

If this is not a bug, it would be very helpful if the compiler could 
print a warning whenever a value is implicitly converted from double to 
single.


Colin



And if an Integer variable is used, Max(Single, Single) is called:

var
  v1: Double;
  v2: Single;
  v3: Integer;
begin
  v1 := Pi;
  v2 := 0;
  WriteLn(v1);
  WriteLn(Max(v1,0));
  WriteLn(Max(v1,0.0));
  WriteLn(Max(v1,v2));
  WriteLn(Max(v1,v3));
end.

Prints:

 3.1415926535897931E+000
Single
 3.141592741E+00
Double
 3.1415926535897931E+000
Double
 3.1415926535897931E+000
Single
 3.141592741E+00

Delphi prints Double for all 4. I think the last case is definitely a bug.
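
Until that is resolved, a minimal sketch of a workaround for the 
Integer-argument case (names are illustrative): widening the Integer 
through a Double intermediate means both arguments are already Double, 
so the (Double, Double) overload is selected.

program widen_integer_arg;

uses
  math;

var
  v1, tmp: Double;
  v3: Integer;
begin
  v1 := Pi;
  v3 := 0;
  tmp := v3;               { widen the Integer by assignment }
  WriteLn(Max(v1, tmp));   { both arguments Double: the Double overload is selected }
end.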

Colin

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-07-01 Thread C Western

On 29/06/18 21:55, Sven Barth via fpc-pascal wrote:

On 29.06.2018 at 18:45, Alan Krause wrote:
I stumbled upon something the other day that was causing numerical 
differences between compiled Delphi and FPC code. Executing the 
following sample console application illustrates the issue clearly:


program test;

uses
  math, SysUtils;

var
  arg1 : double;
  arg2 : double;
  res  : double;
begin
  arg1 := 10.00;
  arg2 := 72500.51;
  writeln( 'arg1 = ' + FormatFloat( '0.', arg1 ) );
  writeln( 'arg2 = ' + FormatFloat( '0.', arg2 ) );

  res := arg1 - arg2;
  writeln( 'arg1 - arg2 = ' + FormatFloat( '0.', res ) );
  writeln( 'Max( arg1 - arg2, 0 ) = ' + FormatFloat( '0.', 
Max( res, 0 ) ) );
  writeln( 'Max( arg1 - arg2, 0.0 ) = ' + FormatFloat( '0.', 
Max( res, 0.0 ) ) );

end.

--- begin output (Linux x86_64) ---

arg1 = 10.
arg2 = 72500.5100
arg1 - arg2 = 27499.4900
*Max( res, 0 ) = 27499.49023438*
Max( res, 0.0 ) = 27499.4900

--- end output ---

I am guessing that the integer value of zero is causing the wrong 
overloaded function to be called? I was able to solve the problem in 
my code by replacing the 0 with 0.0.


The compiler converts the 0 to the type with the lowest precision that 
can hold the value (or the largest if none can hold it exactly). For 0 
this is already satisfied by Single, so the compiler essentially has the 
parameter types Double and Single. For some reason (I don't know whether 
it's due to a bug or by design) it picks the Single overload instead of 
the Double one.
Someone who knows more about the compiler's overload handling would need 
to answer why it favors (Single, Single) over (Double, Double) for 
(Double, Single) parameters (or (Single, Double), the order doesn't 
matter here).


Regards,
Sven



More confusingly, if a Single variable is used, the expected Max(Double, 
Double) is called:


function Max(a, b: Double): Double; overload;
begin
  WriteLn('Double');
  if a > b then Result := a else Result := b;
end;

function Max(a, b: Single): Single; overload;
begin
  WriteLn('Single');
  if a > b then Result := a else Result := b;
end;

var
  v1: Double;
  v2: Single;
begin
  v1 := Pi;
  v2 := 0;
  WriteLn(v1);
  WriteLn(Max(v1,0));
  WriteLn(Max(v1,0.0));
  WriteLn(Max(v1,v2));
end.

Prints:
 3.1415926535897931E+000
Single
 3.141592741E+00
Double
 3.1415926535897931E+000
Double
 3.1415926535897931E+000

If this is not a bug, it would be very helpful if the compiler could 
print a warning whenever a value is implicitly converted from double to 
single.
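
Until such a warning exists, one defensive pattern is a strictly typed 
wrapper; a sketch below (MaxDouble is a made-up name, not part of the 
Math unit): with only a Double signature in scope there is no narrower 
overload to fall into, and Single or Integer arguments are widened at 
the call site.

program strict_max;

{$mode objfpc}

function MaxDouble(const a, b: Double): Double;
begin
  if a > b then
    Result := a
  else
    Result := b;
end;

var
  v1: Double;
  v2: Single;
  v3: Integer;
begin
  v1 := Pi;
  v2 := 0;
  v3 := 0;
  WriteLn(MaxDouble(v1, v2));  { Single argument widened to Double }
  WriteLn(MaxDouble(v1, v3));  { Integer argument widened to Double }
end.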


Colin


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

Re: [fpc-pascal] Loss of precision when using math.Max()

2018-06-29 Thread Sven Barth via fpc-pascal

On 29.06.2018 at 18:45, Alan Krause wrote:
I stumbled upon something the other day that was causing numerical 
differences between compiled Delphi and FPC code. Executing the 
following sample console application illustrates the issue clearly:


program test;

uses
  math, SysUtils;

var
  arg1 : double;
  arg2 : double;
  res  : double;
begin
  arg1 := 10.00;
  arg2 := 72500.51;
  writeln( 'arg1 = ' + FormatFloat( '0.', arg1 ) );
  writeln( 'arg2 = ' + FormatFloat( '0.', arg2 ) );

  res := arg1 - arg2;
  writeln( 'arg1 - arg2 = ' + FormatFloat( '0.', res ) );
  writeln( 'Max( arg1 - arg2, 0 ) = ' + FormatFloat( '0.', 
Max( res, 0 ) ) );
  writeln( 'Max( arg1 - arg2, 0.0 ) = ' + FormatFloat( '0.', 
Max( res, 0.0 ) ) );

end.

--- begin output (Linux x86_64) ---

arg1 = 10.
arg2 = 72500.5100
arg1 - arg2 = 27499.4900
*Max( res, 0 ) = 27499.49023438*
Max( res, 0.0 ) = 27499.4900

--- end output ---

I am guessing that the integer value of zero is causing the wrong 
overloaded function to be called? I was able to solve the problem in 
my code by replacing the 0 with 0.0.


The compiler converts the 0 to the type with the lowest precision that 
can hold the value (or the largest if none can hold it exactly). For 0 
this is already satisfied by Single, so the compiler essentially has the 
parameter types Double and Single. For some reason (I don't know whether 
it's due to a bug or by design) it picks the Single overload instead of 
the Double one.
Someone who knows more about the compiler's overload handling would need 
to answer why it favors (Single, Single) over (Double, Double) for 
(Double, Single) parameters (or (Single, Double), the order doesn't 
matter here).


Regards,
Sven
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

[fpc-pascal] Loss of precision when using math.Max()

2018-06-29 Thread Alan Krause
I stumbled upon something the other day that was causing numerical
differences between compiled Delphi and FPC code. Executing the following
sample console application illustrates the issue clearly:

program test;

uses
  math, SysUtils;

var
  arg1 : double;
  arg2 : double;
  res  : double;
begin
  arg1 := 10.00;
  arg2 := 72500.51;
  writeln( 'arg1 = ' + FormatFloat( '0.', arg1 ) );
  writeln( 'arg2 = ' + FormatFloat( '0.', arg2 ) );

  res := arg1 - arg2;
  writeln( 'arg1 - arg2 = ' + FormatFloat( '0.', res ) );
  writeln( 'Max( arg1 - arg2, 0 ) = ' + FormatFloat( '0.', Max(
res, 0 ) ) );
  writeln( 'Max( arg1 - arg2, 0.0 ) = ' + FormatFloat( '0.', Max(
res, 0.0 ) ) );
end.

--- begin output (Linux x86_64) ---

arg1 = 10.
arg2 = 72500.5100
arg1 - arg2 = 27499.4900
*Max( res, 0 ) = 27499.49023438*
Max( res, 0.0 ) = 27499.4900

--- end output ---

I am guessing that the integer value of zero is causing the wrong
overloaded function to be called? I was able to solve the problem in my
code by replacing the 0 with 0.0.
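
For reference, a minimal sketch of the fix in isolation (the constant 
and the format string are illustrative):

program fixed_call;

uses
  math, SysUtils;

var
  res: Double;
begin
  res := 27499.49;  { illustrative value, taken from the output above }
  writeln( FormatFloat( '0.00000000', Max( res, 0 ) ) );    { integer literal: Single overload }
  writeln( FormatFloat( '0.00000000', Max( res, 0.0 ) ) );  { real literal: stays in Double }
end.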

Thanks,
  Alan
-- 
Alan Krause
*President @ Sherman & Associates, Inc.*
Office: (760) 634-1700    Web: https://www.shermanloan.com/
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal