Re: Accuracy of floating point calculations

2019-10-31 Thread Robert M. Münch via Digitalmars-d-learn

On 2019-10-31 16:07:07 +, H. S. Teoh said:


Maybe you might be interested in this:


https://stackoverflow.com/questions/6769881/emulate-double-using-2-floats


Thanks, I already know the second paper mentioned there.


Maybe switch to PPC? :-D


Well, our customers don't use PPC Laptops ;-) otherwise that would be cool.

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster



Re: Accuracy of floating point calculations

2019-10-31 Thread H. S. Teoh via Digitalmars-d-learn
On Thu, Oct 31, 2019 at 09:52:08AM +0100, Robert M. Münch via 
Digitalmars-d-learn wrote:
> On 2019-10-30 15:12:29 +, H. S. Teoh said:
[...]
> > Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit
> > doubles), or do you mean actual IEEE 128-bit reals?
> 
> Simulated, because HW support is lacking on x86. And PPC is not that
> mainstream. I expect Apple to move to ARM, but I've never heard about
> 128-bit support for ARM.

Maybe you might be interested in this:


https://stackoverflow.com/questions/6769881/emulate-double-using-2-floats

It's mostly talking about simulating 64-bit floats where the hardware
only supports 32-bit floats, but the same principles apply to
simulating 128-bit floats on 64-bit hardware.


[...]
> > In the meantime, I've been looking into arbitrary-precision float
> > libraries like libgmp instead. It's software-simulated, and
> > therefore slower, but for certain applications where I want very
> > high precision, it's currently the only option.
> 
> Yes, but it's way too slow for our product.

Fair point.  In my case I'm mainly working with batch-oriented
processing, so a slight slowdown is an acceptable tradeoff for higher
accuracy.


> Maybe one day we need to deliver an FPGA based co-processor PCI card
> that can run 128-Bit based calculations... but that will be a pretty
> hard way to go.
[...]

Maybe switch to PPC? :-D


T

-- 
If you want to solve a problem, you need to address its root cause, not just 
its symptoms. Otherwise it's like treating cancer with Tylenol...


Re: Accuracy of floating point calculations

2019-10-31 Thread Robert M. Münch via Digitalmars-d-learn

On 2019-10-30 15:12:29 +, H. S. Teoh said:


It wasn't a wrong *decision* per se, but a wrong *prediction* of where
the industry would be headed.


Fair point...

Walter was expecting that people would move towards higher precision, 
but what with SSE2 and other such trends, and the general neglect of 
x87 in hardware developments, it appears that people have been moving 
towards 64-bit doubles rather than 80-bit extended.


Yes, which puzzles me as well... but all the AI stuff seems to 
dominate the game, and following the hype is still a frequently used 
management strategy.



Though TBH, my opinion is that it's not so much neglecting higher
precision, but a general sentiment of the recent years towards
standardization, i.e., to be IEEE-compliant (64-bit floating point)
rather than work with a non-standard format (80-bit x87 reals).


I see it more as "let's sell what people want". The CPU vendors don't 
seem able to market higher precision. Better to implement a 
highly specific and ever-growing instruction set...



Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit
doubles), or do you mean actual IEEE 128-bit reals?


Simulated, because HW support is lacking on x86. And PPC is not that 
mainstream. I expect Apple to move to ARM, but I've never heard about 
128-bit support for ARM.



I'm still longing for 128-bit reals (i.e., actual IEEE 128-bit format)
to show up in x86, but I'm not holding my breath.


Me too.

In the meantime, I've been looking into arbitrary-precision float 
libraries like libgmp instead. It's software-simulated, and therefore 
slower, but for certain applications where I want very high precision, 
it's currently the only option.


Yes, but it's way too slow for our product.

Maybe one day we need to deliver an FPGA based co-processor PCI card 
that can run 128-Bit based calculations... but that will be a pretty 
hard way to go.


--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster



Re: Accuracy of floating point calculations

2019-10-30 Thread H. S. Teoh via Digitalmars-d-learn
On Wed, Oct 30, 2019 at 09:03:49AM +0100, Robert M. Münch via 
Digitalmars-d-learn wrote:
> On 2019-10-29 17:43:47 +, H. S. Teoh said:
> 
> > On Tue, Oct 29, 2019 at 04:54:23PM +, ixid via Digitalmars-d-learn 
> > wrote:
> > > On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
> > > > On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
> > > > 
> > > > AFAIK dmd uses real for floating point operations instead of
> > > > double
> > > 
> > > Given x87 is deprecated and has been recommended against since
> > > 2003 at the latest it's hard to understand why this could be seen
> > > as a good idea.
> > 
> > Walter talked about this recently as one of the "misses" in D (one
> > of the things he predicted wrongly when he designed it).
> 
> Why should the real type be a wrong decision? Maybe the code
> generation should be optimized to avoid x87 if all terms are double,
> but overall, more precision is better for some use-cases.

It wasn't a wrong *decision* per se, but a wrong *prediction* of where
the industry would be headed.  Walter was expecting that people would
move towards higher precision, but what with SSE2 and other such trends,
and the general neglect of x87 in hardware developments, it appears that
people have been moving towards 64-bit doubles rather than 80-bit
extended.

Though TBH, my opinion is that it's not so much neglecting higher
precision, but a general sentiment of the recent years towards
standardization, i.e., to be IEEE-compliant (64-bit floating point)
rather than work with a non-standard format (80-bit x87 reals).  I also
would prefer to have higher precision, but it would be nicer if that
higher precision was a standard format with guaranteed semantics that
isn't dependent upon a single vendor or implementation.


> I'm very happy it exists, and x87 too, because our app really needs
> this extended precision. I'm not sure what we would do if we only had
> doubles.
> 
> I'm not aware of any 128 bit real implementations done using SIMD
> instructions which get good speed. Anyone?
[...]

Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit
doubles), or do you mean actual IEEE 128-bit reals?  'cos the two are
different, semantically.

I'm still longing for 128-bit reals (i.e., actual IEEE 128-bit format)
to show up in x86, but I'm not holding my breath.  In the meantime, I've
been looking into arbitrary-precision float libraries like libgmp
instead. It's software-simulated, and therefore slower, but for certain
applications where I want very high precision, it's currently the only
option.


T

-- 
If Java had true garbage collection, most programs would delete themselves upon 
execution. -- Robert Sewell


Re: Accuracy of floating point calculations

2019-10-30 Thread berni44 via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 20:15:13 UTC, kinke wrote:
Note that there's at least one bugzilla for these float/double 
math overloads already. For a start, one could simply wrap the 
corresponding C functions.


I guess that this issue: 
https://issues.dlang.org/show_bug.cgi?id=20206 boils down to the 
same problem. I already found out that it's highly likely that 
the bug is not inside std.complex but probably in the log 
function.


Re: Accuracy of floating point calculations

2019-10-30 Thread Robert M. Münch via Digitalmars-d-learn

On 2019-10-29 17:43:47 +, H. S. Teoh said:


On Tue, Oct 29, 2019 at 04:54:23PM +, ixid via Digitalmars-d-learn wrote:

On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:

On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:

AFAIK dmd uses real for floating point operations instead of double


Given x87 is deprecated and has been recommended against since 2003 at
the latest it's hard to understand why this could be seen as a good
idea.


Walter talked about this recently as one of the "misses" in D (one of
the things he predicted wrongly when he designed it).


Why should the real type be a wrong decision? Maybe the code generation 
should be optimized to avoid x87 if all terms are double, but overall, 
more precision is better for some use-cases.


I'm very happy it exists, and x87 too, because our app really needs 
this extended precision. I'm not sure what we would do if we only had 
doubles.


I'm not aware of any 128 bit real implementations done using SIMD 
instructions which get good speed. Anyone?


--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster



Re: Accuracy of floating point calculations

2019-10-29 Thread kinke via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 16:20:21 UTC, Daniel Kozak wrote:
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak 
 wrote:


On Tue, Oct 29, 2019 at 4:45 PM Twilight via 
Digitalmars-d-learn  wrote:

>
> D calculation:
>
>writefln("%12.2F",log(1-0.9999)/log(1-(1-0.6)^^20));
>
> 837675572.38
>
> C++ calculation:
>
>cout<<setprecision(12)<<(log(1-0.9999)/log(1-pow(1-0.6,20)))<<'\n';
>
> 837675573.587
>
> As a second data point, changing 0.9999 to 0.75 yields
> 126082736.96 (Dlang) vs 126082737.142 (C++).
>
> The discrepancy stood out as I was ultimately taking the 
> ceil of the results and noticed an off-by-one anomaly. 
> Testing with octave, www.desmos.com/scientific, and 
> libreoffice(calc) gave results consistent with the C++ 
> result. Is the dlang calculation within the error bound of 
> what double precision should yield?


If you use gdc or ldc you will get the same results as C++, or you 
can use the C log function directly:


import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
 writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
}


My fault - for ldc and gdc you will get the same result as C++ only 
when you use pow, not the ^^ operator, and use doubles:


import std.stdio;
import std.math;

void main()
{
 writefln("%12.3F",log(1-0.9999)/log(1-pow(1-0.6,20)));
}


The real issue here IMO is that there's still only a `real` 
version of std.math.log. If there were proper double and float 
overloads, like for other std.math functions, the OP would get 
the expected result with his double inputs, and we wouldn't be 
having this discussion.


For LDC, it would only mean uncommenting 2 one-liners forwarding 
to the LLVM intrinsic; they're commented because otherwise you'd 
get different results with LDC compared to DMD, and other forum 
threads/bugzillas/GitHub issues would pop up.


Note that there's at least one bugzilla for these float/double 
math overloads already. For a start, one could simply wrap the 
corresponding C functions.


Re: Accuracy of floating point calculations

2019-10-29 Thread H. S. Teoh via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 07:10:08PM +, Twilight via Digitalmars-d-learn 
wrote:
> On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
> > On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
> > > If you use gdc or ldc you will get same results as c++, or you can
> > > use C log directly:
> > > 
> > > import std.stdio;
> > > import std.math : pow;
> > > import core.stdc.math;
> > > 
> > > void main()
> > > {
> > >  writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
> > > }
> > 
> > AFAIK dmd use real  for floating point operations instead of double
> 
> Thanks for the clarification. It appears then that because of dmd's
> real calculations, it produces more accurate results, but maybe
> slower.

Yes, it will be somewhat more accurate, depending on the exact
calculation you're performing. But it depends on the x87 coprocessor,
which hasn't been improved for many years now, and not much attention
has been paid to it, so it would appear that 64-bit double arithmetic
using SIMD or MMX instructions would probably run faster.  (I'd profile
it just to be sure, though. Sometimes performance predictions can be
very wrong.)

So roughly speaking: if you want accuracy, use real; if you want speed,
use float or double.


> (Calculating the result with the high precision calculator at
> https://keisan.casio.com/calculator agrees with dmd.)

To verify accuracy, it's usually safer to use an arbitrary-precision
calculator instead of assuming that the most common answer is the right
one (it may be far off, depending on what exactly is being computed and
how, e.g., due to catastrophic cancellation and things like that). Like
`bc -l` if you're running *nix, e.g. the input:

scale=32
l(1 - 0.9999) / l(1 - (1 - 0.6)^20)

gives:

837675572.37859373067028812966306043501772

So it appears that the C++ answer is less accurate. Note that not all of
the digits above are trustworthy; running the same calculation with
scale=100 gives:

837675572.3785937306702880546932327627909527172023597021486261165664\
994508853029795054669322261827298817174322

which shows a divergence in digits after the 15th decimal, meaning that
the subsequent digits are probably garbage values. This is probably
because the logarithm is poorly-conditioned near 1 (where its value is
close to 0), so you could potentially be getting complete garbage from
your floating-point operations if you're not careful.

Floating-point is a bear. Every programmer should learn to tame it lest
they get mauled:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

:-D


T

-- 
May you live all the days of your life. -- Jonathan Swift


Re: Accuracy of floating point calculations

2019-10-29 Thread Twilight via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak 
 wrote:



If you use gdc or ldc you will get the same results as C++, or you 
can use the C log function directly:


import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
 writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
}


AFAIK dmd uses real for floating point operations instead of 
double


Thanks for the clarification. It appears then that because of 
dmd's real calculations, it produces more accurate results, but 
maybe slower. (Calculating the result with the high precision 
calculator at https://keisan.casio.com/calculator agrees with 
dmd.)


Re: Accuracy of floating point calculations

2019-10-29 Thread H. S. Teoh via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 04:54:23PM +, ixid via Digitalmars-d-learn wrote:
> On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
> > On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
> > > If you use gdc or ldc you will get same results as c++, or you can
> > > use C log directly:
> > > 
> > > import std.stdio;
> > > import std.math : pow;
> > > import core.stdc.math;
> > > 
> > > void main()
> > > {
> > >  writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
> > > }
> > 
> > AFAIK dmd uses real for floating point operations instead of double
> 
> Given x87 is deprecated and has been recommended against since 2003 at
> the latest it's hard to understand why this could be seen as a good
> idea.

Walter talked about this recently as one of the "misses" in D (one of
the things he predicted wrongly when he designed it).


T

-- 
Philosophy: how to make a career out of daydreaming.


Re: Accuracy of floating point calculations

2019-10-29 Thread ixid via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak 
 wrote:



If you use gdc or ldc you will get the same results as C++, or you 
can use the C log function directly:


import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
 writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
}


AFAIK dmd uses real for floating point operations instead of 
double


Given x87 is deprecated and has been recommended against since 
2003 at the latest it's hard to understand why this could be seen 
as a good idea.


Re: Accuracy of floating point calculations

2019-10-29 Thread Daniel Kozak via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
>
> On Tue, Oct 29, 2019 at 4:45 PM Twilight via Digitalmars-d-learn
>  wrote:
> >
> > D calculation:
> >
> >writefln("%12.2F",log(1-0.9999)/log(1-(1-0.6)^^20));
> >
> > 837675572.38
> >
> > C++ calculation:
> >
> >cout<<setprecision(12)<<(log(1-0.9999)/log(1-pow(1-0.6,20)))<<'\n';
> >
> > 837675573.587
> >
> > As a second data point, changing 0.9999 to 0.75 yields
> > 126082736.96 (Dlang) vs 126082737.142 (C++).
> >
> > The discrepancy stood out as I was ultimately taking the ceil of
> > the results and noticed an off-by-one anomaly. Testing with
> > octave, www.desmos.com/scientific, and libreoffice(calc) gave
> > results consistent with the C++ result. Is the dlang calculation
> > within the error bound of what double precision should yield?
>
> If you use gdc or ldc you will get the same results as C++, or you can use
> the C log function directly:
>
> import std.stdio;
> import std.math : pow;
> import core.stdc.math;
>
> void main()
> {
>  writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
> }

My fault - for ldc and gdc you will get the same result as C++ only when
you use pow, not the ^^ operator, and use doubles:

import std.stdio;
import std.math;

void main()
{
 writefln("%12.3F",log(1-0.9999)/log(1-pow(1-0.6,20)));
}


Re: Accuracy of floating point calculations

2019-10-29 Thread Daniel Kozak via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
>
>
> If you use gdc or ldc you will get the same results as C++, or you can use
> the C log function directly:
>
> import std.stdio;
> import std.math : pow;
> import core.stdc.math;
>
> void main()
> {
>  writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
> }

AFAIK dmd uses real for floating point operations instead of double


Re: Accuracy of floating point calculations

2019-10-29 Thread Daniel Kozak via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 4:45 PM Twilight via Digitalmars-d-learn
 wrote:
>
> D calculation:
>
>writefln("%12.2F",log(1-0.9999)/log(1-(1-0.6)^^20));
>
> 837675572.38
>
> C++ calculation:
>
>cout<<setprecision(12)<<(log(1-0.9999)/log(1-pow(1-0.6,20)))<<'\n';
>
> 837675573.587
>
> As a second data point, changing 0.9999 to 0.75 yields
> 126082736.96 (Dlang) vs 126082737.142 (C++).
>
> The discrepancy stood out as I was ultimately taking the ceil of
> the results and noticed an off-by-one anomaly. Testing with
> octave, www.desmos.com/scientific, and libreoffice(calc) gave
> results consistent with the C++ result. Is the dlang calculation
> within the error bound of what double precision should yield?

If you use gdc or ldc you will get the same results as C++, or you 
can use the C log function directly:

import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
 writefln("%12.3F",log(1-0.9999)/log(1-(1-0.6)^^20));
}


Re: accuracy of floating point calculations: d vs cpp

2019-07-23 Thread Ali Çehreli via Digitalmars-d-learn

On 07/22/2019 08:48 PM, Timon Gehr wrote:

> This is probably not your problem, but it may be good to know anyway: D
> allows compilers to perform arbitrary "enhancement" of floating-point
> precision for parts of the computation, including those performed at
> compile time. I think this is stupid, but I haven't been able to
> convince Walter.

For completeness, at least C++ gives the same freedom to the compiler, 
right?


And if I'm not mistaken, an additional potential problem with floating 
point is the state of floating point flags: they are used for all 
floating point computations. If a function sets them for its use and 
then fails to reset them to the previous state, all further computations 
will be affected.


Ali



Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Timon Gehr via Digitalmars-d-learn

On 22.07.19 14:49, drug wrote:
I have almost identical (I believe so, at least) implementations (D and 
C++) of the same algorithm, which uses Kalman filtering. These 
implementations, though, show different results (in the least significant 
digits). Before I start investigating, I would like to ask whether this 
issue (different results of floating point calculations in D and C++) is 
well known? Maybe I can read something about it on the web? Is D's 
implementation of floating point types different from C++'s?


Most of all, I'm interested in equal results, to ease comparing the 
outputs of both implementations. The accuracy itself is enough in my 
case, but the difference is annoying in some cases.


This is probably not your problem, but it may be good to know anyway: D 
allows compilers to perform arbitrary "enhancement" of floating-point 
precision for parts of the computation, including those performed at 
compile time. I think this is stupid, but I haven't been able to 
convince Walter.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread drug via Digitalmars-d-learn

On 22.07.2019 17:19, drug wrote:

On 22.07.2019 16:26, Guillaume Piolat wrote:


Typical floating point operations in single-precision like a simple 
(a * b) + c will provide a -140dB difference if order is changed. 
It's likely the order of operations is not the same in your program, 
so the least significant digit should be different.


What I would recommend is compute the mean relative error, in double, 
and if it's below -200 dB, not bother. This is an incredibly low 
relative error of 0.0001%.
You will have no difficulty making your D program deterministic, but 
knowing exactly where the C++ and D differ will be long and serve no 
purpose.
Unfortunately, the error has turned out to be much bigger than I guessed. 
So obviously there is a problem either on the D side or on the C++ side. 
The error is too large to ignore.


There was a typo in the C++ implementation. I did a quick-and-dirty 
Python version, and after the typo was fixed, all three implementations 
show the same result when a single filter update occurs. But if several 
updates happen, a subtle difference still exists; the error accumulates 
somewhere else - time to use numerical methods.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread drug via Digitalmars-d-learn

On 22.07.2019 16:26, Guillaume Piolat wrote:


Typical floating point operations in single-precision like a simple (a 
* b) + c will provide a -140dB difference if order is changed. It's 
likely the order of operations is not the same in your program, so the 
least significant digit should be different.


What I would recommend is compute the mean relative error, in double, 
and if it's below -200 dB, not bother. This is an incredibly low 
relative error of 0.0001%.
You will have no difficulty making your D program deterministic, but 
knowing exactly where the C++ and D differ will be long and serve no 
purpose.
Unfortunately, the error has turned out to be much bigger than I guessed. 
So obviously there is a problem either on the D side or on the C++ side. 
The error is too large to ignore.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Dennis via Digitalmars-d-learn

On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote:
Before I start investigating I would like to ask if this issue 
(different results of floating points calculation for D and 
C++) is well known?


This likely has little to do with the language, and more with the 
implementation. Basic floating point operations at the same 
precision should give the same results. There can be differences 
in float printing (see [1]) and math functions (sqrt, cos, pow 
etc.) however.


Tips for getting consistent results between C/C++ and D:
- Use the same backend, so compare DMD with DMC, LDC with CLANG 
and GDC with GCC.
- Use the same C runtime library. On Unix glibc will likely be 
the default, on Windows you likely use snn.lib, libcmt.lib or 
msvcrt.dll.

- On the D side, use core.stdc.math instead of std.math
- Use the same optimizations. (Don't use -ffast-math for C)

[1] 
https://forum.dlang.org/post/fndyoiawueefqoeob...@forum.dlang.org


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Guillaume Piolat via Digitalmars-d-learn

On Monday, 22 July 2019 at 13:23:26 UTC, Guillaume Piolat wrote:

On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote:
I have almost identical (I believe so, at least) implementations 
(D and C++) of the same algorithm, which uses Kalman filtering. 
These implementations, though, show different results (in the 
least significant digits). Before I start investigating, I would 
like to ask whether this issue (different results of floating 
point calculations in D and C++) is well known? Maybe I can read 
something about it on the web? Is D's implementation of floating 
point types different from C++'s?


Most of all, I'm interested in equal results, to ease comparing 
the outputs of both implementations. The accuracy itself is 
enough in my case, but the difference is annoying in some cases.


Typical floating point operations in single-precision like a 
simple (a * b) + c will provide a -140dB difference if order is 
changed. It's likely the order of operations is not the same in 
your program, so the least significant digit should be 
different.


What I would recommend is compute the mean relative error, in 
double, and if it's below -200 dB, not bother. This is an 
incredibly low relative error of 0.0001%.
You will have no difficulty making your D program deterministic, 
but knowing exactly where the C++ and D differ will be long and 
serve no purpose.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread Guillaume Piolat via Digitalmars-d-learn

On Monday, 22 July 2019 at 12:49:24 UTC, drug wrote:
I have almost identical (I believe so, at least) implementations 
(D and C++) of the same algorithm, which uses Kalman filtering. 
These implementations, though, show different results (in the 
least significant digits). Before I start investigating, I would 
like to ask whether this issue (different results of floating 
point calculations in D and C++) is well known? Maybe I can read 
something about it on the web? Is D's implementation of floating 
point types different from C++'s?


Most of all, I'm interested in equal results, to ease comparing 
the outputs of both implementations. The accuracy itself is 
enough in my case, but the difference is annoying in some cases.


Typical floating point operations in single-precision like a 
simple (a * b) + c will provide a -140dB difference if order is 
changed. It's likely the order of operations is not the same in 
your program, so the least significant digit should be different.





Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread rikki cattermole via Digitalmars-d-learn

On 23/07/2019 12:58 AM, drug wrote:

On 22.07.2019 15:53, rikki cattermole wrote:


https://godbolt.org/z/EtZLG0


hmm, in short - this is my local problem?


That is not how I would describe it.

I would describe it as IEEE-754 doing what IEEE-754 is good at.
But my point is, you can get the results to match up, if you care about it.


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread drug via Digitalmars-d-learn

On 22.07.2019 15:53, rikki cattermole wrote:


https://godbolt.org/z/EtZLG0


hmm, in short - this is my local problem?


Re: accuracy of floating point calculations: d vs cpp

2019-07-22 Thread rikki cattermole via Digitalmars-d-learn

On 23/07/2019 12:49 AM, drug wrote:
I have almost identical (I believe so, at least) implementations (D and 
C++) of the same algorithm, which uses Kalman filtering. These 
implementations, though, show different results (in the least significant 
digits). Before I start investigating, I would like to ask whether this 
issue (different results of floating point calculations in D and C++) is 
well known? Maybe I can read something about it on the web? Is D's 
implementation of floating point types different from C++'s?


Most of all, I'm interested in equal results, to ease comparing the 
outputs of both implementations. The accuracy itself is enough in my 
case, but the difference is annoying in some cases.


https://godbolt.org/z/EtZLG0