Re: Need help with calling a list of functions

2018-11-03 Thread Ali Çehreli via Digitalmars-d-learn

On 11/03/2018 06:17 PM, Luigi wrote:
I need a function that creates a function from an array of 
functions and calls them in reverse order.  I am learning D; any help 
would be appreciated.



import std.stdio;
import std.algorithm;
import std.array : array;
import std.range;

auto comp(T)(T function(T) [] list) pure {
     auto backwards = retro(funs);
     return >;

}

void main()
{
     auto fun = comp([(real x)=>a/3.0,(real x)=>x*x,(real x)=>x+1.0]);
     writeln(fun(2.0));    // should print 3
}



Here is one that uses a loop:

import std.stdio;
import std.range : front;
import std.traits : ReturnType;

auto comp(Funcs...)(Funcs funcs) pure {
    alias R = ReturnType!(typeof([funcs].front));

    auto impl(R x) {
        foreach_reverse (func; funcs) {
            x = func(x);
        }
        return x;
    }

    return (double x) => impl(x);
}

void main()
{
    auto fun = comp((real x)=>x/3.0,(real x)=>x*x,(real x)=>x+1.0);
    assert(fun(2.0) == 3);
}

I used a variadic template parameter instead of an array parameter.
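
For comparison, here is a hedged sketch (not part of the original reply) that keeps the array-of-function-pointers signature from the question but reuses the same foreach_reverse idea:

```d
import std.stdio;

// Assumed variant: same reverse loop, array parameter instead of variadic.
T delegate(T) compArray(T)(T function(T)[] list) pure {
    return (T x) {
        foreach_reverse (f; list)
            x = f(x);
        return x;
    };
}

void main()
{
    // Explicitly typed array, so the attributed function literals convert
    // to plain real function(real) pointers.
    real function(real)[] funcs = [(real x) => x/3.0, (real x) => x*x, (real x) => x+1.0];
    auto fun = compArray(funcs);
    writeln(fun(2.0)); // prints 3
}
```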

Ali


Re: Need help with calling a list of functions

2018-11-03 Thread Paul Backus via Digitalmars-d-learn

On Sunday, 4 November 2018 at 01:17:01 UTC, Luigi wrote:
I need a function that creates a function from an 
array of functions and calls them in reverse order.  I am 
learning D; any help would be appreciated.



import std.stdio;
import std.algorithm;
import std.array : array;
import std.range;

auto comp(T)(T function(T) [] list) pure {
auto backwards = retro(funs);
return >;

}

void main()
{
auto fun = comp([(real x)=>a/3.0,(real x)=>x*x,(real x)=>x+1.0]);

writeln(fun(2.0));// should print 3
}


Use recursion:

T delegate(T) comp(T)(T function(T) [] list) pure {
    if (list.length == 1)
        return (T arg) => list[0](arg);
    else
        return (T arg) => list[0](comp(list[1 .. $])(arg));
}
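
For reference, a hedged usage sketch (not part of the original reply) using the question's lambdas, with the undefined `a` replaced by `x`:

```d
import std.stdio;

// comp repeated from above so the snippet compiles on its own
T delegate(T) comp(T)(T function(T) [] list) pure {
    if (list.length == 1)
        return (T arg) => list[0](arg);
    else
        return (T arg) => list[0](comp(list[1 .. $])(arg));
}

void main()
{
    // Explicitly typed array, so the attributed function literals convert
    // to plain real function(real) pointers.
    real function(real)[] funcs = [(real x) => x/3.0, (real x) => x*x, (real x) => x+1.0];
    auto fun = comp(funcs);
    writeln(fun(2.0)); // prints 3: applies x+1, then x*x, then x/3
}
```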


Need help with calling a list of functions

2018-11-03 Thread Luigi via Digitalmars-d-learn
I need a function that creates a function from an 
array of functions and calls them in reverse order.  I am 
learning D; any help would be appreciated.



import std.stdio;
import std.algorithm;
import std.array : array;
import std.range;

auto comp(T)(T function(T) [] list) pure {
auto backwards = retro(funs);
return >;

}

void main()
{
auto fun = comp([(real x)=>a/3.0,(real x)=>x*x,(real x)=>x+1.0]);

writeln(fun(2.0));// should print 3
}



Re: d word counting approach performs well but has higher mem usage

2018-11-03 Thread Stanislav Blinov via Digitalmars-d-learn

On Saturday, 3 November 2018 at 14:26:02 UTC, dwdv wrote:


Assoc array allocations?


Yup. AAs do keep their memory around (supposedly for reuse). You 
can insert calls to GC.stats (import core.memory) at various 
points to see actual GC heap usage. If you don't touch that AA at 
all you'll only use up some Kb of the GC heap when reading the 
file.

Why it consumes so much is a question for the implementation.


What did I do wrong?


Well, you didn't actually put the keys into the AA ;) I'm 
guessing you didn't look closely at the output, otherwise you 
would've noticed that something was wrong.


AAs want immutable keys. .byLine returns (in this case) a char[]. 
It's a slice of its internal buffer that is reused on reading 
each line; it gets overwritten on every iteration. This way the 
reading loop only consumes as much as the longest line requires. 
But a `char[]` is not a `string` and you wouldn't be able to 
index the AA with it:


```
Error: associative arrays can only be assigned values with 
immutable keys, not char[]

```

But by putting `const` in `foreach` you tricked the compiler into 
letting you index the AA with a (supposed) const key. Which, of 
course, went fine as far as insertion/updates went, since hashes 
still matched. But when you iterate later, pretty much every key 
is in fact a reference to some older memory, which is still 
somewhere on the GC heap; you don't get a segfault, but neither 
do you get correct "words".
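
To make that concrete, here is a tiny hedged sketch (not from the original post; it declares the key type as const(char)[] instead of relying on the const foreach variable, but the aliasing is the same):

```d
void main()
{
    import std.stdio : writeln;

    char[] buf = "alpha".dup;       // stands in for byLine's reused buffer
    int[const(char)[]] counts;      // const key type gets past the immutability check
    ++counts[buf];                  // the AA stores the slice, not a copy of the text

    buf[] = "gamma";                // the "next line" overwrites the buffer...
    writeln(counts.byKey);          // ...and the stored key now reads "gamma"
}
```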


You *need* to have an actual `string` when you first insert into 
the AA.


```d ===

void main()
{
    import std.stdio, std.algorithm, std.range;
    import core.memory;

    int[string] count;

    void updateCount(char[] word) {
        auto ptr = word in count;
        if (!ptr)
            // note the .idup!
            count[word.idup] = 1;
        else
            (*ptr)++;
    }

    // no const!
    foreach(word; stdin.byLine.map!splitter.joiner) {
        updateCount(word);
    }

    //or even:
    //foreach(line; stdin.byLine) {
    //    // no const!
    //    foreach(word; line.splitter) {
    //        updateCount(word);
    //    }
    //}

    writeln(GC.stats);
    GC.collect;
    writeln(GC.stats);

    count.byKeyValue
        .array
        .sort!((a, b) => a.value > b.value)
        .each!(a => writefln("%d %s", a.value, a.key));

    writeln(GC.stats);
    GC.collect;
    writeln(GC.stats);
}
```


Note that if you .clear() and even .destroy() the AA, it'll still 
keep a bunch of memory allocated. I guess built-in AAs just love 
to hoard.
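
A hedged way to watch this happen (a sketch, not from the original post; the exact numbers will vary by platform and runtime version):

```d
void main()
{
    import std.conv : to;
    import std.stdio : writeln;
    import core.memory : GC;

    int[string] aa;
    foreach (i; 0 .. 100_000)
        aa[i.to!string] = i;
    writeln(GC.stats.usedSize);     // several MB used by the AA and its keys

    aa.clear();                     // or: destroy(aa);
    GC.collect();
    writeln(GC.stats.usedSize);     // typically still far above the starting point
}
```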


Re: Why use while if only iterating once ?

2018-11-03 Thread Venkat via Digitalmars-d-learn

Thank you.

As the great Gump's mother said, stupid is as stupid does.


d word counting approach performs well but has higher mem usage

2018-11-03 Thread dwdv via Digitalmars-d-learn

Hi there,

the task is simple: count word occurrences from stdin (around 150mb in 
this case) and print sorted results to stdout in a somewhat idiomatic 
fashion.


Now, d is quite elegant while maintaining high performance compared to 
both c and c++, but I, as a complete beginner, can't identify where the 
10x memory usage (~300mb, see results below) is coming from.


Unicode overhead? Internal buffer? Is something slurping the whole file? 
Assoc array allocations? Couldn't find huge allocs with dmd -vgc and 
-profile=gc either. What did I do wrong?


```d ===
void main()
{
    import std.stdio, std.algorithm, std.range;

    int[string] count;
    foreach(const word; stdin.byLine.map!splitter.joiner) {
        ++count[word];
    }

    //or even:
    //foreach(line; stdin.byLine) {
    //    foreach(const word; line.splitter) {
    //        ++count[word];
    //    }
    //}

    count.byKeyValue
        .array
        .sort!((a, b) => a.value > b.value)
        .each!(a => writefln("%d %s", a.value, a.key));
}
```

```c++ (for reference) =
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

using namespace std;

int main() {
    string s;
    unordered_map<string, int> count;
    std::ios::sync_with_stdio(false);
    while (cin >> s) {
        count[s]++;
    }

    vector<pair<string, int>> temp {begin(count), end(count)};
    sort(begin(temp), end(temp),
         [](const auto& a, const auto& b) {return b.second < a.second;});
    for (const auto& elem : temp) {
        cout << elem.second << " " << elem.first << '\n';
    }
}
```

Results on an old celeron dual core (wall clock and res mem):
0:08.78, 313732 kb <= d dmd
0:08.25, 318084 kb <= d ldc
0:08.38, 38512 kb  <= c++ idiomatic (above)
0:07.76, 30276 kb  <= c++ boost
0:08.42, 26756 kb  <= c verbose, hand-rolled hashtable

Mem and time measured like so:
/usr/bin/time -v $cmd < input >/dev/null

Input words file creation (around 300k * 50 words):
tr '\n' ' ' < /usr/share/dict/$lang > joined
for i in {1..50}; do cat joined >> input; done

word count sample output:
[... snip ...]
50 ironsmith
50 gloried
50 quindecagon
50 directory's
50 hydrobiological

Compilation flags:
dmd -O -release -mcpu=native -ofwc-d-dmd wc.d
ldc2 -O3 -release -flto=full -mcpu=native -ofwc-d-ldc wc.d
clang -std=c11 -O3 -march=native -flto -o wp-c-clang wp.c
clang++ -std=c++17 -O3 -march=native -flto -o wp-cpp-clang wp-boost.cpp

Versions:
dmd: v2.082.1
ldc: 1.12.0 (based on DMD v2.082.1 and LLVM 6.0.1)
llvm/clang: 6.0.1


Re: Why use while if only iterating once ?

2018-11-03 Thread lithium iodate via Digitalmars-d-learn

On Saturday, 3 November 2018 at 21:03:16 UTC, Venkat wrote:
The last break statement prevents the loop from returning for a 
second iteration. Then why use a while?


The continue statement may abort the current iteration and start 
the next, causing the final break to not necessarily be executed 
every iteration.


Re: Why use while if only iterating once ?

2018-11-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Saturday, November 3, 2018 3:03:16 PM MDT Venkat via Digitalmars-d-learn 
wrote:
>  while (1)
>  {
>  FLAGS f;
>  switch (*p)
>  {
>  case 'U':
>  case 'u':
>  f = FLAGS.unsigned;
>  goto L1;
>  case 'l':
>  f = FLAGS.long_;
>  error("lower case integer suffix 'l' is not
> allowed. Please use 'L' instead");
>  goto L1;
>  case 'L':
>  f = FLAGS.long_;
>  L1:
>  p++;
>  if ((flags & f) && !err)
>  {
>  error("unrecognized token");
>  err = true;
>  }
>  flags = cast(FLAGS)(flags | f);
>  continue;
>  default:
>  break;
>  }
>  break;
> }
>
>
> The last break statement prevents the loop from returning for a
> second iteration. Then why use a while?

There's a continue right above the default case. So, if the code hits that
point, it will loop back to the top.
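
A stripped-down, hypothetical sketch of the idiom (not taken from the compiler source):

```d
import std.stdio : writeln;

// Sketch: each recognized suffix character hits `continue`, which restarts
// the while for the next character; the final break exits the while.
void skipSuffix(ref const(char)* p)
{
    while (1)
    {
        switch (*p)
        {
            case 'U', 'u', 'L', 'l':
                p++;
                continue;   // jumps back to the top of the while loop
            default:
                break;      // leaves only the switch
        }
        break;              // reached only through default: leaves the while
    }
}

void main()
{
    const(char)* s = "UL;".ptr;
    skipSuffix(s);
    writeln(*s); // ';' -- both suffix characters were consumed
}
```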

- Jonathan M Davis





Why use while if only iterating once ?

2018-11-03 Thread Venkat via Digitalmars-d-learn

while (1)
{
    FLAGS f;
    switch (*p)
    {
        case 'U':
        case 'u':
            f = FLAGS.unsigned;
            goto L1;
        case 'l':
            f = FLAGS.long_;
            error("lower case integer suffix 'l' is not allowed. Please use 'L' instead");
            goto L1;
        case 'L':
            f = FLAGS.long_;
        L1:
            p++;
            if ((flags & f) && !err)
            {
                error("unrecognized token");
                err = true;
            }
            flags = cast(FLAGS)(flags | f);
            continue;
        default:
            break;
    }
    break;
}


The last break statement prevents the loop from returning for a 
second iteration. Then why use a while?


Re: Full precision double to string conversion

2018-11-03 Thread Ecstatic Coder via Digitalmars-d-learn
On Saturday, 3 November 2018 at 18:04:07 UTC, Stanislav Blinov 
wrote:
On Saturday, 3 November 2018 at 17:26:19 UTC, Ecstatic Coder 
wrote:



void main() {
    double value = -12.000123456;
    int precision = 50;

    import std.stdio;
    writefln("%.*g", precision, value);

    import std.format;
    string str = format("%.*g", precision, value);
    writeln(str);
}

Prints:

-12.00012345600743415512260980904102325439453125
-12.00012345600743415512260980904102325439453125

That's not quite the -12.000123456 that you'd get from C#'s 
ToString().


Unfortunately not, but it's still better, thanks :)


I don't think you understood what I meant. Neither C# nor D 
attempt to exhaust the precision when converting, given default 
arguments. It's merely a matter of those defaults. The snippet 
above obviously provides *more* digits than the default 
.ToString() in C# would.


But indeed what I really need is a D function which gives a 
better decimal approximation to the provided double constant, 
exactly in the same way those in Dart and C# do.


Is there really no such function in D ?


When you call .ToString() in C# with no arguments, it assumes 
the "G" format specifier.


https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-numeric-format-strings?view=netframework-4.7.2#the-general-g-format-specifier

So for a double, it will use 15-digit precision. D's to!string 
simply uses a lower default. If you want the exact same behavior 
as in C#, you can do this:


string toStringLikeInCSharp(double value) {
    import std.format : format;
    return format("%.15G", value);
}

void main() {
    double value = -12.000123456;
    import std.stdio;
    writeln(value.toStringLikeInCSharp); // prints: -12.000123456
}


This version perfectly gets the job done!

Thanks a lot for your help :)



Re: Full precision double to string conversion

2018-11-03 Thread Stanislav Blinov via Digitalmars-d-learn
On Saturday, 3 November 2018 at 17:26:19 UTC, Ecstatic Coder 
wrote:



void main() {
    double value = -12.000123456;
    int precision = 50;

    import std.stdio;
    writefln("%.*g", precision, value);

    import std.format;
    string str = format("%.*g", precision, value);
    writeln(str);
}

Prints:

-12.00012345600743415512260980904102325439453125
-12.00012345600743415512260980904102325439453125

That's not quite the -12.000123456 that you'd get from C#'s 
ToString().


Unfortunately not, but it's still better, thanks :)


I don't think you understood what I meant. Neither C# nor D 
attempt to exhaust the precision when converting, given default 
arguments. It's merely a matter of those defaults. The snippet 
above obviously provides *more* digits than the default 
.ToString() in C# would.


But indeed what I really need is a D function which gives a 
better decimal approximation to the provided double constant, 
exactly in the same way those in Dart and C# do.


Is there really no such function in D ?


When you call .ToString() in C# with no arguments, it assumes the 
"G" format specifier.


https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-numeric-format-strings?view=netframework-4.7.2#the-general-g-format-specifier

So for a double, it will use 15-digit precision. D's to!string 
simply uses a lower default. If you want the exact same behavior as 
in C#, you can do this:


string toStringLikeInCSharp(double value) {
    import std.format : format;
    return format("%.15G", value);
}

void main() {
    double value = -12.000123456;
    import std.stdio;
    writeln(value.toStringLikeInCSharp); // prints: -12.000123456
}
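
Side note, as a hedged sketch rather than anything settled in this thread: if the goal is a string that parses back to the exact same double (rather than one matching C#'s default output), printing 17 significant digits is in principle enough for any double:

```d
string toRoundTripString(double value)
{
    import std.format : format;
    // 17 significant digits uniquely identify a double, at the cost of
    // some "noise" digits at the end for values like -12.000123456.
    return format("%.17g", value);
}

void main()
{
    import std.stdio : writeln;
    double value = -12.000123456;
    writeln(value.toRoundTripString); // longer than the %.15G form above
}
```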


Re: Full precision double to string conversion

2018-11-03 Thread Ecstatic Coder via Digitalmars-d-learn
Actually, what I need is the D equivalent of the default 
ToString() function we have in Dart and C#.


I don't think it means what you think it means:

void main() {
    double value = -12.000123456;
    int precision = 50;

    import std.stdio;
    writefln("%.*g", precision, value);

    import std.format;
    string str = format("%.*g", precision, value);
    writeln(str);
}

Prints:

-12.00012345600743415512260980904102325439453125
-12.00012345600743415512260980904102325439453125

That's not quite the -12.000123456 that you'd get from C#'s 
ToString().


Unfortunately not, but it's still better, thanks :)

All of them? Most implementations of conversion algorithms 
actually stop when it's "good enough". AFAIR, D doesn't even 
have its own implementation and forwards to C, unless that 
changed in recent years.


What I meant was that getting too many significant digits would 
still be a better solution than not having them.


But indeed what I really need is a D function which gives a 
better decimal approximation to the provided double constant, 
exactly in the same way those in Dart and C# do.


Is there really no such function in D ?





Re: Full precision double to string conversion

2018-11-03 Thread Stanislav Blinov via Digitalmars-d-learn
On Saturday, 3 November 2018 at 13:20:22 UTC, Ecstatic Coder 
wrote:
On Saturday, 3 November 2018 at 12:45:03 UTC, Danny Arends 
wrote:


How can I convert a double value -12.000123456 to its string 
value "-12.000123456", i.e. without loosing double-precision 
digits ?


Specify how many digits you want with writefln:

writefln("%.8f", value);


Actually, what I need is the D equivalent of the default 
ToString() function we have in Dart and C#.


I don't think it means what you think it means:

void main() {
    double value = -12.000123456;
    int precision = 50;

    import std.stdio;
    writefln("%.*g", precision, value);

    import std.format;
    string str = format("%.*g", precision, value);
    writeln(str);
}

Prints:

-12.00012345600743415512260980904102325439453125
-12.00012345600743415512260980904102325439453125

That's not quite the -12.000123456 that you'd get from C#'s 
ToString().


I mean a dumb double-to-string standard library conversion 
function which returns a string including all the double 
precision digits stored in the 52 significant bits of the 
value, preferably with the trailing zeroes removed.


All of them? Most implementations of conversion algorithms 
actually stop when it's "good enough". AFAIR, D doesn't even have 
its own implementation and forwards to C, unless that changed in 
recent years.


Re: Full precision double to string conversion

2018-11-03 Thread Ecstatic Coder via Digitalmars-d-learn

On Saturday, 3 November 2018 at 12:45:03 UTC, Danny Arends wrote:
On Saturday, 3 November 2018 at 12:27:19 UTC, Ecstatic Coder 
wrote:

import std.conv;
import std.stdio;
void main()
{
    double value = -12.000123456;
    writeln( value.sizeof );
    writeln( value );
    writeln( value.to!string() );
    writeln( value.to!dstring() );
}

/*
8
-12.0001
-12.0001
-12.0001
*/

In Dart, value.toString() returns "-12.000123456".

In C#, value.ToString() returns "-12.000123456".

In D, value.to!string() returns "-12.0001" :(

How can I convert a double value -12.000123456 to its string 
value "-12.000123456", i.e. without loosing double-precision 
digits ?


Specify how many digits you want with writefln:

writefln("%.8f", value);


Actually, what I need is the D equivalent of the default 
ToString() function we have in Dart and C#.


I mean a dumb double-to-string standard library conversion 
function which returns a string including all the double 
precision digits stored in the 52 significant bits of the value, 
preferably with the trailing zeroes removed.


For an unknown reason, D's default double-to-string conversion 
function only exposes the single-precision significant digits :(




Re: how do I activate contracts for phobos functions in dmd

2018-11-03 Thread Kagamin via Digitalmars-d-learn

Just compile the needed module directly:
dmd myapp.d src/std/bitmanip.d
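
Presumably this works because the prebuilt Phobos library is compiled with -release, so its contracts and asserts are stripped out, while a module passed on the command line is compiled with your own flags and keeps them (as long as you don't pass -release yourself). The path below is only an assumption about where your Phobos sources live:

dmd myapp.d /path/to/phobos/std/bitmanip.d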


Re: Full precision double to string conversion

2018-11-03 Thread Danny Arends via Digitalmars-d-learn
On Saturday, 3 November 2018 at 12:27:19 UTC, Ecstatic Coder 
wrote:

import std.conv;
import std.stdio;
void main()
{
    double value = -12.000123456;
    writeln( value.sizeof );
    writeln( value );
    writeln( value.to!string() );
    writeln( value.to!dstring() );
}

/*
8
-12.0001
-12.0001
-12.0001
*/

In Dart, value.toString() returns "-12.000123456".

In C#, value.ToString() returns "-12.000123456".

In D, value.to!string() returns "-12.0001" :(

How can I convert a double value -12.000123456 to its string 
value "-12.000123456", i.e. without loosing double-precision 
digits ?


Specify how many digits you want with writefln:

writefln("%.8f", value);



Full precision double to string conversion

2018-11-03 Thread Ecstatic Coder via Digitalmars-d-learn

import std.conv;
import std.stdio;
void main()
{
    double value = -12.000123456;
    writeln( value.sizeof );
    writeln( value );
    writeln( value.to!string() );
    writeln( value.to!dstring() );
}

/*
8
-12.0001
-12.0001
-12.0001
*/

In Dart, value.toString() returns "-12.000123456".

In C#, value.ToString() returns "-12.000123456".

In D, value.to!string() returns "-12.0001" :(

How can I convert a double value -12.000123456 to its string 
value "-12.000123456", i.e. without loosing double-precision 
digits ?