Re: initializing a static array

2017-10-10 Thread Simon Bürger via Digitalmars-d-learn

On Tuesday, 10 October 2017 at 13:54:16 UTC, Daniel Kozak wrote:

struct Double
{
double v = 0;
alias v this;
}

struct Foo(size_t n)
{
Double[n] bar;
}


Interesting approach. But this might introduce problems later. 
For example `Double` is implicitly convertible to `double`, but 
`Double[]` is not implicitly convertible to `double[]`. Therefore 
I will stick with jmh530's solution for now, but thank you anyway.
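A hypothetical snippet (not from the thread) illustrating the caveat: the wrapper converts element-wise via `alias this`, but the array types do not follow:

```d
struct Double
{
    double v = 0;
    alias v this;
}

void main()
{
    Double d;
    double x = d;           // ok: alias this conversion applies
    Double[4] arr;
    double y = arr[0];      // ok: still works element-wise
    // double[] s = arr[];  // error: Double[] is not implicitly
    //                      // convertible to double[]
}
```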




Re: initializing a static array

2017-10-10 Thread Simon Bürger via Digitalmars-d-learn

On Tuesday, 10 October 2017 at 13:48:16 UTC, Andrea Fontana wrote:

On Tuesday, 10 October 2017 at 13:36:56 UTC, Simon Bürger wrote:
Is there a good way to set them all to zero? The only way I 
can think of is using string-mixins to generate a string such 
as "[0,0,0,0]" with exactly n zeroes. But that seems quite an 
overkill for such a basic task. I suspect I might be missing 
something obvious here...


Maybe:

double[n] bar = 0.repeat(n).array;


This works fine, thanks a lot. I would have expected `.array` to 
return a dynamic array. But apparently the compiler is smart 
enough to know the length. Even the multi-dimensional case works 
fine:


double[n][n] bar = 0.repeat(n).array.repeat(n).array;
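For reference, a self-contained version of the trick above. Note this sketch repeats `0.0` (a `double`) rather than the integer `0`, to make the element type of the resulting slice unambiguous:

```d
import std.array : array;
import std.range : repeat;
import std.stdio : writeln;

void main()
{
    enum n = 4;
    // .array yields a double[] of length n, which can initialize
    // the static array (lengths must match)
    double[n] bar = 0.0.repeat(n).array;
    // the same trick nested for the two-dimensional case
    double[n][n] mat = 0.0.repeat(n).array.repeat(n).array;
    writeln(bar); // all zeroes instead of the default nan
}
```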




initializing a static array

2017-10-10 Thread Simon Bürger via Digitalmars-d-learn
I have a static array inside a struct which I would like to be 
initialized to all-zero like so


  struct Foo(size_t n)
  {
double[n] bar = ... all zeroes ...
  }

(note that the default-initializer of double is nan, and not zero)

I tried

  double[n] bar = 0;  // does not compile
  double[n] bar = {0}; // neither does this
  double[n] bar = [0]; // compiles, but only sets the first 
element, ignoring the rest


Is there a good way to set them all to zero? The only way I can 
think of is using string-mixins to generate a string such as 
"[0,0,0,0]" with exactly n zeroes. But that seems quite an 
overkill for such a basic task. I suspect I might be missing 
something obvious here...


Re: lambda function with "capture by value"

2017-08-06 Thread Simon Bürger via Digitalmars-d-learn

On Sunday, 6 August 2017 at 12:50:22 UTC, Adam D. Ruppe wrote:

On Saturday, 5 August 2017 at 19:58:08 UTC, Temtaime wrote:

(k){ dgs[k] = {writefln("%s", k); }; }(i);


Yeah, that's how I'd do it - make a function taking arguments 
by value that returns the delegate you actually want to store. 
(I also use this pattern in JavaScript, btw, for its `var`, 
though JS now has `let`, which works without this trick... and D 
is supposed to work like JS `let`; it is just buggy.)


You could also define a struct with members for the values you 
want, populate it, and pass one of its methods as your 
delegate. It is syntactically the heaviest but does give the 
most precise control (and you can pass the struct itself by 
value to avoid the memory allocation entirely if you want).


But for the loop, the pattern Temtaime wrote is how I'd prolly 
do it.


I like the (kinda cryptic IMO) look of this '(k){...}(i)' 
construction. But for my actual code I went with struct+opCall 
without any delegate at all.
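A minimal sketch of that struct+opCall construction (the names here are invented; the thread only describes the approach):

```d
import std.stdio : writefln;

struct Printer
{
    int k; // the value is copied in at construction time

    void opCall()
    {
        writefln("%s", k);
    }
}

void main()
{
    Printer[3] ps;
    foreach(i; 0 .. 3)
        ps[i] = Printer(i); // each struct holds its own copy of i
    foreach(ref p; ps)
        p(); // prints 0, 1, 2
}
```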


Anyway, thanks for all your suggestions.


Re: lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn

On Saturday, 5 August 2017 at 18:54:22 UTC, ikod wrote:

Maybe std.functional.partial can help you.


Nope.

int i = 1;
alias dg = partial!(writeln, i);
i = 2;
dg();

still prints '2' as it should because 'partial' takes 'i' as a 
symbol, which is - for this purpose - kinda like "by reference".


Anyway, I solved my problem already a while ago by replacing 
delegates with custom structs that implement the call operator. 
I started this thread just out of curiosity, because as I see it, 
the purpose of lambdas is pretty much to remove the need for such 
custom constructions.


Re: lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn

On Saturday, 5 August 2017 at 18:54:22 UTC, ikod wrote:

On Saturday, 5 August 2017 at 18:45:34 UTC, Simon Bürger wrote:

On Saturday, 5 August 2017 at 18:22:38 UTC, Stefan Koch wrote:

[...]


No, sometimes I want i to be the value it has at the time the 
delegate was defined. My actual usecase was more like this:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
dgs[i] = (){writefln("%s", i); };


And I want three different delegates, not three times the 
same. I tried the following:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
{
int j = i;
dgs[i] = (){writefln("%s", j); };
}

I thought that 'j' should be considered a new variable each 
time around, but sadly it doesn't work.


Maybe std.functional.partial can help you.


Thanks. But std.functional.partial takes the fixed arguments as 
template parameters, so they must be known at compile-time. 
Anyway, I solved my problem already a while ago by replacing 
delegates with custom structures which overload the 
call-operator. I opened this thread just out of curiosity. Takes 
a couple lines more but works fine.


Re: lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn

On Saturday, 5 August 2017 at 18:22:38 UTC, Stefan Koch wrote:

On Saturday, 5 August 2017 at 18:19:05 UTC, Stefan Koch wrote:

On Saturday, 5 August 2017 at 18:17:49 UTC, Simon Bürger wrote:
If a lambda function uses a local variable, that variable is 
captured using a hidden this-pointer. But this capturing is 
always by reference. Example:


int i = 1;
auto dg = (){ writefln("%s", i); };
i = 2;
dg(); // prints '2'

Is there a way to make the delegate "capture by value" so 
that the call prints '1'?


Note that in C++, both variants are available using
  [&]() { printf("%d", i); }
and
   [=]() { printf("%d", i); }
respectively.


No, currently there is not.


and it'd be rather useless I guess.
You want i to be whatever the context i is at the point where 
you call the delegate.

Not at the point where you define the delegate.


No, sometimes I want i to be the value it has at the time the 
delegate was defined. My actual usecase was more like this:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
dgs[i] = (){writefln("%s", i); };


And I want three different delegates, not three times the same. I 
tried the following:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
{
int j = i;
dgs[i] = (){writefln("%s", j); };
}

I thought that 'j' should be considered a new variable each time 
around, but sadly it doesn't work.
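For completeness, Temtaime's immediately-invoked-literal pattern from this thread as a full program: each call of the outer function literal gives `k` its own frame, so every delegate closes over its own copy:

```d
import std.stdio : writefln;

void main()
{
    void delegate()[3] dgs;
    for(int i = 0; i < 3; ++i)
        // the outer literal takes k by value; each invocation
        // allocates a fresh closure for the inner delegate
        dgs[i] = (int k){ return (){ writefln("%s", k); }; }(i);
    foreach(dg; dgs)
        dg(); // prints 0, 1, 2 instead of 3, 3, 3
}
```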


lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn
If a lambda function uses a local variable, that variable is 
captured using a hidden this-pointer. But this capturing is 
always by reference. Example:


int i = 1;
auto dg = (){ writefln("%s", i); };
i = 2;
dg(); // prints '2'

Is there a way to make the delegate "capture by value" so that 
the call prints '1'?


Note that in C++, both variants are available using
   [&]() { printf("%d", i); }
and
   [=]() { printf("%d", i); }
respectively.


Re: Force usage of double (instead of higher precision)

2017-06-29 Thread Simon Bürger via Digitalmars-d-learn

On Thursday, 29 June 2017 at 00:07:35 UTC, kinke wrote:

On Wednesday, 28 June 2017 at 22:16:48 UTC, Simon Bürger wrote:

I am currently using LDC on 64-bit-Linux if that is relevant.


It is, as LDC on Windows/MSVC would use 64-bit compile-time 
reals. ;)


Changing it to double on other platforms is trivial if you 
compile LDC yourself. You'll want to use this alias: 
https://github.com/ldc-developers/ldc/blob/master/ddmd/root/ctfloat.d#L19, https://github.com/ldc-developers/ldc/blob/master/ddmd/root/ctfloat.h#L19


Huh, I will definitely look into this. This might be the most 
elegant solution. Thanks for the suggestion.


Re: Force usage of double (instead of higher precision)

2017-06-29 Thread Simon Bürger via Digitalmars-d-learn

Thanks a lot for your comments.

On Wednesday, 28 June 2017 at 23:56:42 UTC, Stefan Koch wrote:

[...]

Nice work! Can you re- or dual-license it under the Boost 
license? I'd like to incorporate the qd type into newCTFE.


The original work is not mine but traces back to 
http://crd-legacy.lbl.gov/~dhbailey/mpdist/ which is under a 
(modified) BSD license. I just posted the link for context, sorry 
for the confusion. Doing a port to D does not allow me to change 
the license, even though not a single line from the original 
would remain (I think?).


I might do a completely new D implementation (still based on the 
original authors' research paper, not on the details of their 
code). But
1. I would probably only do the subset of functions I need for my 
work (i.e. double-double only, no quad-double, and only a limited 
set of transcendental functions).
2. Given that I have seen the original code, this might still be 
considered a "derivative work". I'm not sure, copyright law is 
kinda confusing to me in these cases.


Indeed you'll have no way to get rid of the excess precision 
except for creating a function per sub-expression.


No, doesn't seem to work. Here is a minimal breaking example:

import std.math : isNaN;

double sum(double x, double y) { return x + y; }
bool equals(double x, double y) { return x == y; }

enum pi = ddouble(3.141592653589793116e+00,
                  1.224646799147353207e-16);

struct ddouble
{
    double hi, lo;

    invariant
    {
        if(!isNaN(hi) && !isNaN(lo))
            assert(equals(sum(hi, lo), hi));
    }

    this(double hi, double lo)
    {
        this.hi = hi;
        this.lo = lo;
    }
}

But there are workarounds that seem to work:
1. Remove the constructor (I think this means the invariant is 
not checked anymore?).
2. Disable the invariant during CTFE (using "if(__ctfe) return;").
3. Don't use CTFE at all (replace the enum with immutable 
globals, initialized in "static this").
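A sketch combining workaround 2 with the struct from this thread (only `std.math.isNaN` from the standard library is assumed):

```d
import std.math : isNaN;

struct ddouble
{
    double hi, lo;

    invariant
    {
        // workaround 2: skip the check during CTFE, where the
        // compiler may evaluate in 80-bit extended precision
        if(__ctfe) return;
        if(!isNaN(hi) && !isNaN(lo))
            assert(hi + lo == hi);
    }

    this(double hi, double lo)
    {
        this.hi = hi;
        this.lo = lo;
    }
}

// the compile-time constant no longer trips the invariant
enum pi = ddouble(3.141592653589793116e+00, 1.224646799147353207e-16);
```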



I was using the newCTFE fork which fixes this.


Does this mean that with your new CTFE code (which is quite 
impressive work as far as I can tell), floating point no longer 
gets promoted to higher precision? That would be really good news 
for hackish floating-point code.


Honestly, this whole "compiler gets to decide which type to 
actually use" thing really bugs me. Kinda reminiscent of C/C++ 
integer types which could in principle be anything at all. I 
thought D had fixed this by specifying "int = 32-bit, long = 
64-bit, float = IEEE-single-precision, double = 
IEEE-double-precision". Apparently not.


If I write "double", I would like to get IEEE-conform 
double-precision operations. If I wanted something depending on 
target-platform and compiler-optimization-level I would have used 
"real". Also, this 80-bit extended type is just a bad idea in 
general and should never be used (IMHO). Even on x86 processors 
it only exists for backward compatibility; no current instruction 
set (like SSE/AVX) supports it. Sorry for the long rant. But I am 
puzzled that the spec (https://dlang.org/spec/float.html) 
actually encourages double<->real conversions while at the same 
time it (rightfully) disallows "unsafe math optimizations" such 
as "x-x=0".


Force usage of double (instead of higher precision)

2017-06-28 Thread Simon Bürger via Digitalmars-d-learn
According to the standard (http://dlang.org/spec/float.html), the 
compiler is allowed to compute any floating-point statement in a 
higher precision than specified. Is there a way to deactivate 
this behaviour?


Context (reason why I need this): I am building a "double double" 
type, which essentially takes two 64-bit double-precision numbers 
to emulate a (nearly) quadruple-precision number. A simplified 
version looks something like this:


struct ddouble
{
    double high;
    double low;

    invariant
    {
        assert(high + low == high);
    }

    // ...implementations of arithmetic operations...
}
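As an aside (not from the thread): the invariant `high + low == high` is the usual normalization property of double-double numbers, maintained by primitives like the quick-two-sum below, sketched here for illustration:

```d
// quick-two-sum: assuming |a| >= |b|, computes s = fl(a + b) and the
// exact rounding error err, so that s + err == a + b holds exactly
// in IEEE double arithmetic (which is why excess precision breaks it)
void quickTwoSum(double a, double b, out double s, out double err)
{
    s = a + b;
    err = b - (s - a);
}
```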

Everything works fine at run-time, but if I declare a 
compile-time constant like


enum pi = ddouble(3.141592653589793116e+00, 
1.224646799147353207e-16);


the invariant fails because it is evaluated using 80-bit 
"extended precision" during CTFE. All arithmetic operations rely 
on IEEE-conform double-precision, so everything breaks down if 
the compiler decides to replace them with higher precision. I am 
currently using LDC on 64-bit-Linux if that is relevant.


(If you are interested in the "double double" type, take a look 
here:

https://github.com/BrianSwift/MetalQD
which includes a double-double and even quad-double 
implementation in C/C++/Fortran)


Re: nested inout return type

2016-06-14 Thread Simon Bürger via Digitalmars-d-learn
On Tuesday, 14 June 2016 at 14:47:11 UTC, Steven Schveighoffer 
wrote:

* only do one mutable version of opSlice
* add implicit cast (using "alias this") for const(Slice!T) ->
Slice!(const(T)).


Interesting, but unfortunately, the compiler isn't eager about 
this conversion. auto x = s[5 .. 7] isn't going to give you a 
Slice!(const(T)), like an array would. But I like the idea.


Hm, you are right, in fact it doesn't work. Somehow it seemed to 
in my usecase. Well, triplicate it is then. Which isn't that bad 
using something like


auto opSlice(size_t a, size_t b)
{
   // actual non-trivial code
}

auto opSlice(size_t a, size_t b) const
{
   return Slice!(const(T))(ptr, length).opSlice(a,b);
}

auto opSlice(size_t a, size_t b) immutable
{
   return Slice!(immutable(T))(ptr, length).opSlice(a,b);
}
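A self-contained version of the three overloads, assuming `Slice` is the minimal ptr/length struct from the original post and the mutable overload holds the actual logic:

```d
struct Slice(T)
{
    T* ptr;
    size_t length;

    auto opSlice(size_t a, size_t b)
    {
        // actual non-trivial code lives only here
        return Slice!T(ptr + a, b - a);
    }

    auto opSlice(size_t a, size_t b) const
    {
        // const(T*) converts implicitly to const(T)*, so we can
        // forward to the mutable overload of the const-element type
        return Slice!(const(T))(ptr, length).opSlice(a, b);
    }

    auto opSlice(size_t a, size_t b) immutable
    {
        return Slice!(immutable(T))(ptr, length).opSlice(a, b);
    }
}
```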


Re: nested inout return type

2016-06-14 Thread Simon Bürger via Digitalmars-d-learn

On Tuesday, 14 June 2016 at 01:50:17 UTC, Era Scarecrow wrote:

On Monday, 13 June 2016 at 23:51:40 UTC, Era Scarecrow wrote:

 inout(Slice) opSlice(size_t a, size_t b) inout
 {
 return cast(inout(Slice)) Slice(ptr+a, b-a);
 }


 Seems the pointer has to be force-cast back to a normal 
pointer so the constructor can work. (Because the function is 
inout, ptr becomes inout(T*) )


 return cast(inout(Slice)) Slice(cast(T*)ptr+a, b-a);

 Beyond that it works as expected :) Writeln gives the 
following output on Slice with opSlices:


Slice!int(18FD60, 10)
const(Slice!int)(18FD60, 10)
immutable(Slice!int)(18FD90, 10)


Then instead of Slice!(const(T)) one would use const(Slice!T), 
and there is no analogue of the following:


const(T)[] s = ...
s = s[5..7]

which is quite common when parsing strings for example. Still, 
might be the cleanest approach. However I found another solution 
for now without using any "inout":


* only do one mutable version of opSlice
* add implicit cast (using "alias this") for const(Slice!T) -> 
Slice!(const(T)).


So when trying to opSlice on a const it will first cast to a 
mutable-slice-of-const-elements and then do the slice. This is 
closer to the behavior of "const(T[])", though it might have 
issues when using immutable and not only const. Not sure.


Anyway, thanks for the help, and if someone cares, the full 
resulting code is on github.com/Krox/jive/blob/master/jive/array.d





nested inout return type

2016-06-13 Thread Simon Bürger via Digitalmars-d-learn
I'm writing a custom (originally multi-dimensional) Slice-type, 
analogous to the builtin T[], and stumbled upon the problem that 
the following code won't compile. The workaround is simple: just 
write the function three times for mutable/const/immutable. But 
as "inout" was invented to make that unnecessary, I was wondering 
if there is a clever way to make this work.



struct Slice(T)
{
    T* ptr;
    size_t length;

    Slice!(inout(T)) opSlice(size_t a, size_t b) inout
    {
        return Slice!(inout(T))(ptr + a, b - a);
    }
}
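The alias-this alternative discussed earlier in these posts (one mutable opSlice plus an implicit conversion const(Slice!T) -> Slice!(const(T))) could look roughly like this. A sketch, not the thread's exact code; the `static if` guard is an assumption to avoid a self-conversion when T is already const:

```d
struct Slice(T)
{
    T* ptr;
    size_t length;

    Slice!T opSlice(size_t a, size_t b)
    {
        return Slice!T(ptr + a, b - a);
    }

    static if (!is(T == const))
    {
        // const(Slice!T) converts to a mutable slice of const
        // elements, which then uses the single opSlice above
        Slice!(const(T)) toConst() const
        {
            return Slice!(const(T))(ptr, length);
        }
        alias toConst this;
    }
}
```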