Re: Program run fails on Windows

2019-10-29 Thread Joel via Digitalmars-d-learn

On Wednesday, 30 October 2019 at 03:56:40 UTC, Joel wrote:
I have a DLangUI program that works on macOS, but on Windows it 
only compiles: it returns -1 when run. Is there a gotcha? It 
doesn't ask for any DLLs.


Windows 10 Pro


I got programs to compile and run with bindbc-sdl, for example.


Program run fails on Windows

2019-10-29 Thread Joel via Digitalmars-d-learn
I have a DLangUI program that works on macOS, but on Windows it 
only compiles: it returns -1 when run. Is there a gotcha? It 
doesn't ask for any DLLs.


Windows 10 Pro



Re: Read Once then reset/init value?

2019-10-29 Thread Simen Kjærås via Digitalmars-d-learn
On Tuesday, 29 October 2019 at 22:24:20 UTC, Robert M. Münch 
wrote:
I quite often have the pattern where a value should be read 
just once and then reset itself. The idea is to avoid others 
accidentally reading the value and getting a stale state; 
instead they get an "invalid/reset" value.


Is there a library function that can mimic such a behaviour?


Something like this?

T readOnce(T)(ref T value) {
    auto tmp = value;
    value = T.init;
    return tmp;
}

unittest {
    int i = 3;
    assert(i.readOnce == 3);
    assert(i == 0);
}

If so, no, there is no library function for it, but feel free to 
use the above. You may very well have to change T.init to 
something more fitting for your use case, of course.


If this is not what you need, feel free to explain further, as 
I'm not sure I understood you correctly. :)


--
  Simen


Re: Eliding of slice range checking

2019-10-29 Thread Per Nordlöw via Digitalmars-d-learn

On Friday, 25 October 2019 at 21:33:26 UTC, Per Nordlöw wrote:
But it requires the function to be qualified as @trusted, which 
might hide a @system == operator. How common is it for a == 
operator to be unsafe?


Ping.


Re: Eliding of slice range checking

2019-10-29 Thread Per Nordlöw via Digitalmars-d-learn

On Wednesday, 23 October 2019 at 13:51:19 UTC, kinke wrote:

You call this messy?!

        cmpq    %rdi, %rdx
        jae     .LBB0_2
        xorl    %eax, %eax
        retq
.LBB0_2:
        movq    %rdi, %rax
        testq   %rdi, %rdi
        je      .LBB0_3
        pushq   %rax
        .cfi_def_cfa_offset 16
        movq    %rcx, %rdi
        movq    %rax, %rdx
        callq   memcmp@PLT
        testl   %eax, %eax
        sete    %al
        addq    $8, %rsp
        .cfi_def_cfa_offset 8
        retq
.LBB0_3:
        movb    $1, %al
        retq


No, this is fine. Thanks.


Read Once then reset/init value?

2019-10-29 Thread Robert M. Münch via Digitalmars-d-learn
I quite often have the pattern where a value should be read just once 
and then reset itself. The idea is to avoid others accidentally 
reading the value and getting a stale state; instead they get an 
"invalid/reset" value.


Is there a library function that can mimic such a behaviour?

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster



Re: Accuracy of floating point calculations

2019-10-29 Thread kinke via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 16:20:21 UTC, Daniel Kozak wrote:
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak wrote:


On Tue, Oct 29, 2019 at 4:45 PM Twilight via Digitalmars-d-learn wrote:

> D calculation:
>
>    writefln("%12.2F",log(1-0.)/log(1-(1-0.6)^^20));
>
> 837675572.38
>
> C++ calculation:
>
>    cout << (log(1-0.)/log(1-pow(1-0.6,20))) << '\n';
>
> 837675573.587
>
> As a second data point, changing 0. to 0.75 yields
> 126082736.96 (Dlang) vs 126082737.142 (C++).
>
> The discrepancy stood out as I was ultimately taking the 
> ceil of the results and noticed an off by one anomaly. 
> Testing with octave, www.desmos.com/scientific, and 
> libreoffice(calc) gave results consistent with the C++ 
> result. Is the dlang calculation within the error bound of 
> what double precision should yield?


If you use gdc or ldc you will get the same results as C++, or you 
can use the C log directly:

import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
    writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
}


My fault, for ldc and gdc you will get the same result as C++ only 
when you use pow, not the ^^ operator, and use doubles:

import std.stdio;
import std.math;

void main()
{
    writefln("%12.3F",log(1-0.)/log(1-pow(1-0.6,20)));
}


The real issue here IMO is that there's still only a `real` 
version of std.math.log. If there were proper double and float 
overloads, like for other std.math functions, the OP would get 
the expected result with his double inputs, and we wouldn't be 
having this discussion.


For LDC, it would only mean uncommenting 2 one-liners forwarding 
to the LLVM intrinsic; they're commented because otherwise you'd 
get different results with LDC compared to DMD, and other forum 
threads/bugzillas/GitHub issues would pop up.


Note that there's at least one bugzilla for these float/double 
math overloads already. For a start, one could simply wrap the 
corresponding C functions.


Re: Accuracy of floating point calculations

2019-10-29 Thread H. S. Teoh via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 07:10:08PM +, Twilight via Digitalmars-d-learn 
wrote:
> On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
> > On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
> > > If you use gdc or ldc you will get same results as c++, or you can
> > > use C log directly:
> > > 
> > > import std.stdio;
> > > import std.math : pow;
> > > import core.stdc.math;
> > > 
> > > void main()
> > > {
> > >  writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
> > > }
> > 
> > AFAIK dmd uses real for floating point operations instead of double
> 
> Thanks for the clarification. It appears then that because of dmd's
> real calculations, it produces more accurate results, but maybe
> slower.

Yes, it will be somewhat more accurate, depending on the exact
calculation you're performing. But it depends on the x87 coprocessor,
which hasn't been improved for many years now, and not much attention
has been paid to it, so it would appear that 64-bit double arithmetic
using SIMD or MMX instructions would probably run faster.  (I'd profile
it just to be sure, though. Sometimes performance predictions can be
very wrong.)

So roughly speaking: if you want accuracy, use real; if you want speed,
use float or double.


> (Calculating the result with the high precision calculator at
> https://keisan.casio.com/calculator agrees with dmd.)

To verify accuracy, it's usually safer to use an arbitrary-precision
calculator instead of assuming that the most common answer is the right
one (it may be far off, depending on what exactly is being computed and
how, e.g., due to catastrophic cancellation and things like that). Like
`bc -l` if you're running *nix, e.g. the input:

scale=32
l(1 - 0.) / l(1 - (1 - 0.6)^20)

gives:

837675572.37859373067028812966306043501772

So it appears that the C++ answer is less accurate. Note that not all of
the digits above are trustworthy; running the same calculation with
scale=100 gives:

837675572.3785937306702880546932327627909527172023597021486261165664\
994508853029795054669322261827298817174322

which shows a divergence in digits after the 15th decimal, meaning that
the subsequent digits are probably garbage values. This is probably
because the logarithm function near 0 is poorly-conditioned, so you
could potentially be getting complete garbage from your floating-point
operations if you're not careful.

Floating-point is a bear. Every programmer should learn to tame it lest
they get mauled:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

:-D


T

-- 
May you live all the days of your life. -- Jonathan Swift


Re: Accuracy of floating point calculations

2019-10-29 Thread Twilight via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak wrote:



If you use gdc or ldc you will get the same results as C++, or you 
can use the C log directly:


import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
 writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
}


AFAIK dmd uses real for floating point operations instead of 
double


Thanks for the clarification. It appears then that because of 
dmd's real calculations, it produces more accurate results, but 
maybe slower. (Calculating the result with the high precision 
calculator at https://keisan.casio.com/calculator agrees with 
dmd.)


Re: Accuracy of floating point calculations

2019-10-29 Thread H. S. Teoh via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 04:54:23PM +, ixid via Digitalmars-d-learn wrote:
> On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
> > On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
> > > If you use gdc or ldc you will get the same results as C++, or you can
> > > use the C log directly:
> > > 
> > > import std.stdio;
> > > import std.math : pow;
> > > import core.stdc.math;
> > > 
> > > void main()
> > > {
> > >  writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
> > > }
> > 
> > AFAIK dmd uses real for floating point operations instead of double
> 
> Given x87 is deprecated and has been recommended against since 2003 at
> the latest, it's hard to understand why this could be seen as a good
> idea.

Walter talked about this recently as one of the "misses" in D (one of
the things he predicted wrongly when he designed it).


T

-- 
Philosophy: how to make a career out of daydreaming.


Re: Accuracy of floating point calculations

2019-10-29 Thread ixid via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak wrote:



If you use gdc or ldc you will get the same results as C++, or you 
can use the C log directly:


import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
 writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
}


AFAIK dmd uses real for floating point operations instead of 
double


Given x87 is deprecated and has been recommended against since 
2003 at the latest, it's hard to understand why this could be seen 
as a good idea.


Re: Accuracy of floating point calculations

2019-10-29 Thread Daniel Kozak via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak wrote:
>
> On Tue, Oct 29, 2019 at 4:45 PM Twilight via Digitalmars-d-learn wrote:
> >
> > D calculation:
> >
> >    writefln("%12.2F",log(1-0.)/log(1-(1-0.6)^^20));
> >
> > 837675572.38
> >
> > C++ calculation:
> >
> >    cout << (log(1-0.)/log(1-pow(1-0.6,20))) << '\n';
> >
> > 837675573.587
> >
> > As a second data point, changing 0. to 0.75 yields
> > 126082736.96 (Dlang) vs 126082737.142 (C++).
> >
> > The discrepancy stood out as I was ultimately taking the ceil of
> > the results and noticed an off by one anomaly. Testing with
> > octave, www.desmos.com/scientific, and libreoffice(calc) gave
> > results consistent with the C++ result. Is the dlang calculation
> > within the error bound of what double precision should yield?
>
> If you use gdc or ldc you will get the same results as C++, or you can
> use the C log directly:
>
> import std.stdio;
> import std.math : pow;
> import core.stdc.math;
>
> void main()
> {
>     writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
> }

My fault, for ldc and gdc you will get the same result as C++ only when
you use pow, not the ^^ operator, and use doubles:

import std.stdio;
import std.math;

void main()
{
    writefln("%12.3F",log(1-0.)/log(1-pow(1-0.6,20)));
}


Re: Accuracy of floating point calculations

2019-10-29 Thread Daniel Kozak via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak  wrote:
>
>
> If you use gdc or ldc you will get the same results as C++, or you can use
> the C log directly:
>
> import std.stdio;
> import std.math : pow;
> import core.stdc.math;
>
> void main()
> {
>  writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
> }

AFAIK dmd uses real for floating point operations instead of double


Re: Accuracy of floating point calculations

2019-10-29 Thread Daniel Kozak via Digitalmars-d-learn
On Tue, Oct 29, 2019 at 4:45 PM Twilight via Digitalmars-d-learn wrote:
>
> D calculation:
>
>writefln("%12.2F",log(1-0.)/log(1-(1-0.6)^^20));
>
> 837675572.38
>
> C++ calculation:
>
>    cout << (log(1-0.)/log(1-pow(1-0.6,20))) << '\n';
>
> 837675573.587
>
> As a second data point, changing 0. to 0.75 yields
> 126082736.96 (Dlang) vs 126082737.142 (C++).
>
> The discrepancy stood out as I was ultimately taking the ceil of
> the results and noticed an off by one anomaly. Testing with
> octave, www.desmos.com/scientific, and libreoffice(calc) gave
> results consistent with the C++ result. Is the dlang calculation
> within the error bound of what double precision should yield?

If you use gdc or ldc you will get the same results as C++, or you can use
the C log directly:

import std.stdio;
import std.math : pow;
import core.stdc.math;

void main()
{
 writefln("%12.3F",log(1-0.)/log(1-(1-0.6)^^20));
}


Accuracy of floating point calculations

2019-10-29 Thread Twilight via Digitalmars-d-learn

D calculation:

  writefln("%12.2F",log(1-0.)/log(1-(1-0.6)^^20));

837675572.38

C++ calculation:

  cout << (log(1-0.)/log(1-pow(1-0.6,20))) << '\n';


837675573.587

As a second data point, changing 0. to 0.75 yields 
126082736.96 (Dlang) vs 126082737.142 (C++).


The discrepancy stood out as I was ultimately taking the ceil of 
the results and noticed an off by one anomaly. Testing with 
octave, www.desmos.com/scientific, and libreoffice(calc) gave 
results consistent with the C++ result. Is the dlang calculation 
within the error bound of what double precision should yield?


Re: Eliding of slice range checking

2019-10-29 Thread Kagamin via Digitalmars-d-learn

On Wednesday, 23 October 2019 at 11:20:59 UTC, Per Nordlöw wrote:
Does DMD/LDC avoid range-checking in slice-expressions such as 
the one in my array-overload of `startsWith` defined as


bool startsWith(T)(scope const(T)[] haystack,
                   scope const(T)[] needle)
{
    if (haystack.length >= needle.length)
    {
        // is slice range checking avoided here?
        return haystack[0 .. needle.length] == needle;
    }
    return false;
}


LDC is good at optimizing simple patterns; the only pitfall I 
know of is 
https://forum.dlang.org/post/eoftnwkannqmubhjo...@forum.dlang.org


Re: Running unittests of a module with -betterC

2019-10-29 Thread mipri via Digitalmars-d-learn

On Monday, 28 October 2019 at 08:51:02 UTC, Daniel Kozak wrote:
On Mon, Oct 28, 2019 at 9:40 AM Per Nordlöw via Digitalmars-d-learn wrote:


Is it possible to run the unittests of a module with -betterC 
like


 dmd -D -g -main -unittest -betterC f.d

?

This currently errors as

/usr/include/dmd/druntime/import/core/internal/entrypoint.d:34: error: 
undefined reference to '_d_run_main'

for an empty file f.d


AFAIK no,

https://dlang.org/spec/betterc.html#unittests


-unittest sets the 'unittest' version identifier. So this works:

  unittest {
      assert(0);
  }

  version(unittest) {
      extern(C) void main() {
          static foreach(u; __traits(getUnitTests, __traits(parent, main)))
              u();
      }
  }

dmd -betterC -unittest -run module.d


Blog Post #83: Notebook, Part VII - All Signals

2019-10-29 Thread Ron Tarrant via Digitalmars-d-learn
This post looks at a handful of Notebook signals, how they're 
triggered, and what they can be used for. We also go over the 
keyboard shortcuts used by the GTK Notebook. You'll find it all 
here: 
https://gtkdcoding.com/2019/10/29/0083-notebook-vii-notebook-all-signals.html