Re: How to Declare a new pragma ?

2014-12-22 Thread FrankLike via Digitalmars-d-learn

On Monday, 22 December 2014 at 00:55:08 UTC, Mike Parker wrote:

On 12/22/2014 9:21 AM, FrankLike wrote:



Now, on x64 the main form always has a console window, and the
entry point is main.
Can you help with this?
Thank you.


Since 64-bit DMD uses the Microsoft toolchain, you need to pass 
a parameter on the command line to the MS linker. Linker 
parameters are passed with the -L switch.


See [1] for information about the /SUBSYSTEM option, which is 
what you want in this case. Probably something like this:


-L/SUBSYSTEM:WINDOWS,5.02

[1] http://msdn.microsoft.com/en-us/library/fcc1zstk.aspx

Thank you.
-L/ENTRY:mainCRTStartup
It works now.
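
For reference, the full build command now looks roughly like this 
(winapp.d is just a placeholder for the actual source file; the two 
-L options are the ones discussed above):

dmd -m64 winapp.d -L/SUBSYSTEM:WINDOWS,5.02 -L/ENTRY:mainCRTStartup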


math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

Hi everybody,

I am a Java developer and have used C/C++ only for some home 
projects, so I never mastered native programming.


I am currently learning D and I find it fascinating. I was 
reading the documentation about std.parallelism and I wanted to 
experiment a bit with the example "Find the logarithm of every 
number from 1 to 10_000_000 in parallel".


So, first, I changed the limit to 1 billion and ran it. I was 
blown away by the performance: the program ran in 4 secs, 670 ms, 
using a workUnitSize of 200. I have a 4th-generation i7 
processor with 8 cores.
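
The loop is essentially the std.parallelism documentation example 
scaled up to 1 billion elements; a minimal sketch of that shape 
(not the exact benchmark code, which is in the repository linked 
later in this thread) looks like this:

import std.parallelism;
import std.math : log;

void main() {
    // one slot per number; 1 billion doubles is roughly 8 GB
    auto logs = new double[1_000_000_000];

    // compute log(i + 1) for every index in parallel,
    // 200 elements per work unit
    foreach (i, ref elem; taskPool.parallel(logs, 200)) {
        elem = log(i + 1.0);
    }
}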


Then I was curious to try the same test in Java, just to see how 
much slower it would be (at least that was what I expected). I 
used Java's ExecutorService with a pool of 8 threads and created 
5_000_000 tasks; each task calculated log() for 200 numbers. 
The whole program ran in 3 secs, 315 ms.


Now, can anyone explain why this program ran faster in Java? I 
ran both programs multiple times and the results were always 
close to these execution times.


Can the implementation of the log() function be the reason for 
the slower execution time in D?


I then decided to run the same program in a single thread, as a 
simple foreach/for loop. I also tried it in C and Go. These are 
the results:

- D: 24 secs, 32 ms.
- Java: 20 secs, 881 ms.
- C: 21 secs
- Go: 37 secs

I run Arch Linux on my PC. I compiled the D programs using dmd-2.066 
and used no compile arguments (dmd prog.d).
I used Oracle's Java 8 (I also tried 7 and 6; with Java 6 the 
performance seems a bit better than with 7 and 8).

To compile the C program I used gcc 4.9.2.
For the Go program I used go 1.4.

I really, really like D's built-in support for parallel 
processing and how easy it is to schedule tasks by taking 
advantage of workUnitSize.


Thanks,
Iov


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread bachmeier via Digitalmars-d-learn

On Monday, 22 December 2014 at 10:12:52 UTC, Iov Gherman wrote:
Now, can anyone explain why this program ran faster in Java? I 
ran both programs multiple times and the results were always 
close to these execution times.


Can the implementation of the log() function be the reason for 
the slower execution time in D?


I then decided to run the same program in a single thread, as a 
simple foreach/for loop. I also tried it in C and Go. These are 
the results:

- D: 24 secs, 32 ms.
- Java: 20 secs, 881 ms.
- C: 21 secs
- Go: 37 secs

I run Arch Linux on my PC. I compiled the D programs using 
dmd-2.066 and used no compile arguments (dmd prog.d).
I used Oracle's Java 8 (I also tried 7 and 6; with Java 6 
the performance seems a bit better than with 7 and 8).

To compile the C program I used gcc 4.9.2.
For the Go program I used go 1.4.

I really, really like D's built-in support for parallel 
processing and how easy it is to schedule tasks by taking 
advantage of workUnitSize.


Thanks,
Iov


DMD is generally going to produce the slowest code. LDC and GDC 
will normally do better.


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Daniel Kozak via Digitalmars-d-learn

 I run Arch Linux on my PC. I compiled D programs using dmd-2.066 
 and used no compile arguments (dmd prog.d)

You should try using some arguments: -O -release -inline -noboundscheck.
Using gdc or ldc should also help with performance.

Can you post your code in all languages somewhere? I'd like to try it on
my machine :)
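
For reference, the dmd invocation with those flags would look like
this (prog.d being the file name from the original post):

dmd -O -release -inline -noboundscheck prog.d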



Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Daniel Kozak via Digitalmars-d-learn
On Monday, 22 December 2014 at 10:35:52 UTC, Daniel Kozak via 
Digitalmars-d-learn wrote:


I run Arch Linux on my PC. I compiled D programs using 
dmd-2.066 and used no compile arguments (dmd prog.d)


You should try using some arguments: -O -release -inline 
-noboundscheck.

Using gdc or ldc should also help with performance.

Can you post your code in all languages somewhere? I'd like to 
try it on my machine :)


Btw., try using the C log function; maybe it would be faster:

import core.stdc.math;


Re: ini library in OSX

2014-12-22 Thread Robert burner Schadek via Digitalmars-d-learn

On Saturday, 20 December 2014 at 08:09:06 UTC, Joel wrote:
On Monday, 13 October 2014 at 16:06:42 UTC, Robert burner 
Schadek wrote:

On Saturday, 11 October 2014 at 22:38:20 UTC, Joel wrote:
On Thursday, 11 September 2014 at 10:49:48 UTC, Robert burner 
Schadek wrote:

some self promo:

http://code.dlang.org/packages/inifiled


I would like an example?


go to the link and scroll down a page


How do you use it with current ini files ([label] key=name)?


I think I don't follow?

readINIFile(CONFIG_STRUCT, "filename.ini"); ?


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Russel Winder via Digitalmars-d-learn

On Mon, 2014-12-22 at 10:12 +, Iov Gherman via Digitalmars-d-learn wrote:
 […]
 - D: 24 secs, 32 ms.
 - Java: 20 secs, 881 ms.
 - C: 21 secs
 - Go: 37 secs
 
Without the source code and the commands used to build and run it, 
it is impossible to offer constructive criticism of the results. However, a 
priori the above does not surprise me. I'll wager ldc2 or gdc will 
beat dmd for CPU-bound code, so, as others have said, for benchmarking 
use ldc2 or gdc with all optimizations on (-O3). If you used gc for Go 
then switch to gccgo (again with -O3) and you should see a huge performance 
improvement on CPU-bound code.
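
A sketch of the kind of invocations meant here (prog.d and prog.go 
are placeholder file names):

ldc2 -O3 -release prog.d
gdc -O3 -frelease -o prog prog.d
gccgo -O3 -o prog prog.go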

Java beating C and C++ is fairly normal these days due to the tricks 
you can play with JIT over AOT optimization. Once Java has proper 
support for GPGPU, it will be hard for native code languages to get 
any new converts from JVM.

Put the source up and I and others will try things out.
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 11:11:07 UTC, aldanor wrote:


Just tried it out myself (E5 Xeon / Linux):

D version: 19.64 sec (avg 3 runs)

import core.stdc.math;

void main() {
    double s = 0;
    foreach (i; 1 .. 1_000_000_000)
        s += log(i);
}

// build flags: -O -release

C version: 19.80 sec (avg 3 runs)

#include <math.h>

int main() {
    double s = 0;
    long i;
    for (i = 1; i < 1000000000; i++)
        s += log(i);
    return 0;
}

// build flags: -O3 -lm


Replacing "import core.stdc.math" with "import std.math" in the D 
example increases the avg runtime from 19.64 to 23.87 seconds 
(~20% slower), which is consistent with the OP's statement.


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 10:40:45 UTC, Daniel Kozak wrote:
On Monday, 22 December 2014 at 10:35:52 UTC, Daniel Kozak via 
Digitalmars-d-learn wrote:


I run Arch Linux on my PC. I compiled D programs using 
dmd-2.066 and used no compile arguments (dmd prog.d)


You should try using some arguments: -O -release -inline 
-noboundscheck.

Using gdc or ldc should also help with performance.

Can you post your code in all languages somewhere? I'd like to 
try it on my machine :)


Btw., try using the C log function; maybe it would be faster:

import core.stdc.math;


Just tried it out myself (E5 Xeon / Linux):

D version: 19.64 sec (avg 3 runs)

import core.stdc.math;

void main() {
    double s = 0;
    foreach (i; 1 .. 1_000_000_000)
        s += log(i);
}

// build flags: -O -release

C version: 19.80 sec (avg 3 runs)

#include <math.h>

int main() {
    double s = 0;
    long i;
    for (i = 1; i < 1000000000; i++)
        s += log(i);
    return 0;
}

// build flags: -O3 -lm


Re: DUB build questions

2014-12-22 Thread uri via Digitalmars-d-learn
On Saturday, 20 December 2014 at 08:36:15 UTC, Russel Winder via 
Digitalmars-d-learn wrote:


On Sat, 2014-12-20 at 05:46 +, Dicebot via 
Digitalmars-d-learn wrote:
On Saturday, 20 December 2014 at 04:15:00 UTC, Rikki 
Cattermole wrote:
b) Can I do parallel builds with dub? CMake gives me 
Makefiles so I can make -j; does dub have a similar option?
 
 No


Worth noting that it is not really a dub problem as such; it 
is simply not worth adding parallel builds, because separate
compilation is much, much slower with the existing D front-end 
implementation, and even doing it in parallel is sub-optimal
compared to dump-it-all-at-once.



From previous rounds of this sort of question (for the SCons D
tooling), the consensus of the community appeared to be that 
the only time separate module compilation was really useful was 
for mixed D, C, C++, Fortran systems. For pure D systems, a 
single call of the compiler is deemed far better than the 
traditional C, C++, Fortran compilation strategy. This means the 
whole make -j thing is not an issue; it just means that Dub is 
only really dealing with the all-D situation.


The corollary to this is that DMD, LDC and GDC really need to 
make use
of all parallelism they can, which I suspect is more or less 
none.


Chapel has also gone the "compile all modules with a single 
compiler call" strategy, as this enables global optimization 
from source to executable.



Thanks for the info everyone.


I've used dub for just on two days now and I'm hooked!

At first I was very unsure about giving up my Makefiles, being 
the build system control freak that I am, but it really shines at 
rapid development.


As for out-of-source builds, it is a non-issue really. I like 
running the build outside the project tree, but I can use 
gitignore and targetPath. For larger projects where we need to 
manage dependencies, generate code, run SWIG, etc., I'd still use 
SCons or CMake.



Regarding parallel builds, make -j on CMake Makefiles and dub 
build feel about the same, and that's all I care about.


I'm still not sure how dub would scale for large projects with 
100s-1000s of source modules. DMD ran out of memory in the VM 
(1Gb) at around 70 modules but CMake works due to separate 
compilation of each module ... I think. However, I didn't 
investigate due to lack of time so I wouldn't score this against 
dub. I am sure it can do it if I take the time to figure it out 
properly.


Cheers,
uri


optimization / benchmark tips a good topic for wiki ?

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-learn
Replacing import core.stdc.math with import std.math in the 
D example increases the avg runtime from 19.64 to 23.87 seconds 
(~20% slower) which is consistent with OP's statement.


+ GDC/LDC vs DMD
+ nobounds, release

Do you think we should start a topic on the D wiki front page for 
benchmarking/performance tips, to organize people's experience of 
what works?


I took a quick look and couldn't see anything there already. And it 
seems to be a topic that comes up quite frequently (less on the 
forum than people doing their own benchmarks and having them 
picked up on reddit etc.).


I am not so experienced in this area, otherwise I would write a 
first draft myself.


Laeeth


Re: DUB build questions

2014-12-22 Thread Rikki Cattermole via Digitalmars-d-learn

On 23/12/2014 1:39 a.m., uri wrote:

On Saturday, 20 December 2014 at 08:36:15 UTC, Russel Winder via
Digitalmars-d-learn wrote:


On Sat, 2014-12-20 at 05:46 +, Dicebot via Digitalmars-d-learn wrote:

On Saturday, 20 December 2014 at 04:15:00 UTC, Rikki Cattermole wrote:
b) Can I do parallel builds with dub? CMake gives me 
Makefiles so I can make -j; does dub have a similar option?
No

Worth noting that it is not really a dub problem as such; it is
simply not worth adding parallel builds, because separate
compilation is much, much slower with the existing D front-end
implementation, and even doing it in parallel is sub-optimal
compared to dump-it-all-at-once.



From previous rounds of this sort of question (for the SCons D
tooling), the consensus of the community appeared to be that the only
time separate module compilation was really useful was for mixed D, C,
C++, Fortran systems. For pure D systems, a single call of the compiler
is deemed far better than the traditional C, C++, Fortran compilation
strategy. This means the whole make -j thing is not an issue; it
just means that Dub is only really dealing with the all-D situation.

The corollary to this is that DMD, LDC and GDC really need to make use
of all parallelism they can, which I suspect is more or less none.

Chapel has also gone the "compile all modules with a single compiler
call" strategy, as this enables global optimization from source to
executable.



Thanks for the info everyone.


I've used dub for just on two days now and I'm hooked!

At first I was very unsure about giving up my Makefiles, being the build
system control freak that I am, but it really shines at rapid development.

As for out-of-source builds, it is a non-issue really. I like running
the build outside the project tree, but I can use gitignore and
targetPath. For larger projects where we need to manage dependencies,
generate code, run SWIG, etc., I'd still use SCons or CMake.


Regarding parallel builds, make -j on CMake Makefiles and dub build
feel about the same, and that's all I care about.

I'm still not sure how dub would scale for large projects with
100s-1000s of source modules. DMD ran out of memory in the VM (1Gb) at
around 70 modules but CMake works due to separate compilation of each
module ... I think. However, I didn't investigate due to lack of time so
I wouldn't score this against dub. I am sure it can do it if I take the
time to figure it out properly.

Cheers,
uri


To build anything serious with dmd you need about 2 GB of RAM 
available. Yes, it's a lot, but it's fast.

Also use subpackages. They are your friend.
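
A minimal sketch of what this can look like in dub.json (the package 
and path names here are hypothetical; subPackages splits a large 
project into separately buildable pieces, and targetPath is the 
setting uri mentioned above for keeping build output out of the 
source tree):

{
    "name": "myproject",
    "targetPath": "build",
    "subPackages": [
        "./component-a",
        "./component-b"
    ]
}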


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

Hi Guys,

First of all, thank you all for responding so quickly; it is so 
nice to see D having such an active community.


As I said in my first post, I used no other parameters to dmd 
when compiling because I don't know much about dmd compilation 
flags. I can't wait to try the flags Daniel suggested for dmd 
(-O -release -inline -noboundscheck) and the other two compilers 
(ldc2 and gdc). Thank you guys for your suggestions.


Meanwhile, I created a git repository on GitHub and put all my 
code there. If you find any errors please let me know. Because I 
am keeping the results in a big array, the programs take 
approximately 8 GB of RAM. If you don't have enough RAM, feel free 
to decrease the size of the array. For the Java code you will also 
need to change 'compile-run.bsh' and use the right memory 
parameters.



Thank you all for helping,
Iov


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread bachmeier via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:05:19 UTC, Iov Gherman wrote:

Hi Guys,

First of all, thank you all for responding so quickly; it is so 
nice to see D having such an active community.


As I said in my first post, I used no other parameters to dmd 
when compiling because I don't know much about dmd compilation 
flags. I can't wait to try the flags Daniel suggested for dmd 
(-O -release -inline -noboundscheck) and the other two compilers 
(ldc2 and gdc). Thank you guys for your suggestions.


Meanwhile, I created a git repository on GitHub and put all my 
code there. If you find any errors please let me know. Because 
I am keeping the results in a big array, the programs take 
approximately 8 GB of RAM. If you don't have enough RAM, feel 
free to decrease the size of the array. For the Java code you 
will also need to change 'compile-run.bsh' and use the right 
memory parameters.



Thank you all for helping,
Iov


Link to your repo?


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:16:05 UTC, bachmeier wrote:

On Monday, 22 December 2014 at 17:05:19 UTC, Iov Gherman wrote:

Hi Guys,

First of all, thank you all for responding so quickly; it is so 
nice to see D having such an active community.


As I said in my first post, I used no other parameters to dmd 
when compiling because I don't know much about dmd compilation 
flags. I can't wait to try the flags Daniel suggested for dmd 
(-O -release -inline -noboundscheck) and the other two compilers 
(ldc2 and gdc). Thank you guys for your suggestions.


Meanwhile, I created a git repository on GitHub and put all my 
code there. If you find any errors please let me know. Because 
I am keeping the results in a big array, the programs take 
approximately 8 GB of RAM. If you don't have enough RAM, feel 
free to decrease the size of the array. For the Java code you 
will also need to change 'compile-run.bsh' and use the right 
memory parameters.



Thank you all for helping,
Iov


Link to your repo?


Sorry, forgot about it:
https://github.com/ghermaniov/benchmarks



Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

So, I did some more testing with the parallel version:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread John Colvin via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the parallel version:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Flag suggestions:

ldc2 -O3 -release -mcpu=native -singleobj

gdc -O3 -frelease -march=native


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the parallel version:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


import std.math, std.stdio, std.datetime;

-- try replacing std.math with core.stdc.math.


Re: Inheritance and in-contracts

2014-12-22 Thread aldanor via Digitalmars-d-learn

https://github.com/D-Programming-Language/dmd/pull/4200


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

On Monday, 22 December 2014 at 18:00:18 UTC, aldanor wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the parallel version:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


import std.math, std.stdio, std.datetime;

-- try replacing std.math with core.stdc.math.


Tried it, it is worse:
6 secs, 78 ms, while the initial one was 4 secs, 977 ms and 
sometimes even better.




Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:50:20 UTC, John Colvin wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the parallel version:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Flag suggestions:

ldc2 -O3 -release -mcpu=native -singleobj

gdc -O3 -frelease -march=native


Tried it, here are the results:

--- ldc:
6 secs, 271 ms

--- ldc -O3 -release -mcpu=native -singleobj:
5 secs, 686 ms

--- gdc:
10 secs, 439 ms

--- gdc -O3 -frelease -march=native:
9 secs, 180 ms



Re: Inheritance and in-contracts

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 19:11:13 UTC, Ali Çehreli wrote:

On 12/22/2014 10:06 AM, aldanor wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Thank you! This fixes a big problem with the contracts in D.

Ali


It's not my PR but I just thought this thread would be happy to 
know :)


Re: Inheritance and in-contracts

2014-12-22 Thread Ali Çehreli via Digitalmars-d-learn

On 12/22/2014 10:06 AM, aldanor wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Thank you! This fixes a big problem with the contracts in D.

Ali



Re: Inheritance and in-contracts

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 22/12/14 19:06, aldanor via Digitalmars-d-learn wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Yes, I saw that PR with some joy -- thanks for the link! :-)


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread via Digitalmars-d-learn

On Monday, 22 December 2014 at 18:23:29 UTC, Iov Gherman wrote:

On Monday, 22 December 2014 at 18:00:18 UTC, aldanor wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:
So, I did some more testing with the parallel version:


--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


import std.math, std.stdio, std.datetime;

-- try replacing std.math with core.stdc.math.


Tried it, it is worse:
6 secs, 78 ms, while the initial one was 4 secs, 977 ms and 
sometimes even better.


Strange... for me, core.stdc.math.log is about twice as fast as 
std.math.log.


Re: Inheritance and in-contracts

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 22/12/14 20:12, aldanor via Digitalmars-d-learn wrote:

On Monday, 22 December 2014 at 19:11:13 UTC, Ali Çehreli wrote:

On 12/22/2014 10:06 AM, aldanor wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Thank you! This fixes a big problem with the contracts in D.

Ali


It's not my PR but I just thought this thread would be happy to know :)


Actually, the author is a friend of mine, and an all-round wonderful guy. :-)




Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread John Colvin via Digitalmars-d-learn

On Monday, 22 December 2014 at 18:27:48 UTC, Iov Gherman wrote:

On Monday, 22 December 2014 at 17:50:20 UTC, John Colvin wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:
So, I did some more testing with the parallel version:


--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Flag suggestions:

ldc2 -O3 -release -mcpu=native -singleobj

gdc -O3 -frelease -march=native


Tried it, here are the results:

--- ldc:
6 secs, 271 ms

--- ldc -O3 -release -mcpu=native -singleobj:
5 secs, 686 ms

--- gdc:
10 secs, 439 ms

--- gdc -O3 -frelease -march=native:
9 secs, 180 ms


That's very different from my results.

I see no important difference between ldc and dmd when using 
std.math, but when using core.stdc.math ldc halves its time, 
whereas dmd only manages to get down to ~80%.


How to get the processid by exe's name in D?

2014-12-22 Thread FrankLike via Digitalmars-d-learn
Now, if you want to know whether an exe is among the running 
processes, you must use the Win API. Do you have any other idea?


Re: How to get the processid by exe's name in D?

2014-12-22 Thread Adam D. Ruppe via Digitalmars-d-learn
The Windows API is how I'd do it: look up how to do it in C, 
then do the same thing in D.
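
A sketch of the usual toolhelp-snapshot approach, translated to D 
(this assumes the Win32 declarations are available, e.g. via 
core.sys.windows.tlhelp32 in newer druntimes; on older ones the few 
declarations used here would have to be written by hand):

import core.sys.windows.windows;
import core.sys.windows.tlhelp32;
import std.string : fromStringz;

// Return the process id of the first process whose executable name
// matches exeName exactly (case-sensitive), or 0 if none is found.
DWORD pidByExeName(string exeName)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 0;
    scope(exit) CloseHandle(snap);

    PROCESSENTRY32 pe;
    pe.dwSize = cast(DWORD) PROCESSENTRY32.sizeof;
    if (Process32First(snap, &pe))
    {
        do
        {
            if (fromStringz(pe.szExeFile.ptr) == exeName)
                return pe.th32ProcessID;
        } while (Process32Next(snap, &pe));
    }
    return 0;
}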


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Daniel Kozak via Digitalmars-d-learn


That's very different from my results.

I see no important difference between ldc and dmd when using 
std.math, but when using core.stdc.math ldc halves its time, 
whereas dmd only manages to get down to ~80%.


What CPU do you have? On my Intel Core i3 I have a similar 
experience to Iov Gherman, but on my AMD FX4200 I get the same 
results as you. It seems std.math.log is not good for my AMD CPU :)




Re: ini library in OSX

2014-12-22 Thread Joel via Digitalmars-d-learn
On Monday, 22 December 2014 at 11:04:10 UTC, Robert burner 
Schadek wrote:

On Saturday, 20 December 2014 at 08:09:06 UTC, Joel wrote:
On Monday, 13 October 2014 at 16:06:42 UTC, Robert burner 
Schadek wrote:

On Saturday, 11 October 2014 at 22:38:20 UTC, Joel wrote:
On Thursday, 11 September 2014 at 10:49:48 UTC, Robert 
burner Schadek wrote:

some self promo:

http://code.dlang.org/packages/inifiled


I would like an example?


go to the link and scroll down a page


How do you use it with current ini files ([label] key=name)?


I think I don't follow?

readINIFile(CONFIG_STRUCT, "filename.ini"); ?


I have an ini file that has price, goods, date, etc. for each 
item. I couldn't see how you could have that (see below). It's 
got more than one section, and I don't know how to do that with 
your library.


In this form:

[section0]
day=1
month=8
year=2013
item=Fish'n'Chips
cost=3
shop=Take aways
comment=Don't know the date
[section1]
day=1
month=8
year=2013
item=almond individually wrapped chocolates (for Cecily), through-ties
cost=7
shop=Putaruru - Dairy (near Hotel 79)
comment=Don't know the date or time.

Also, what is CONFIG_STRUCT? Is that for using 'struct' instead 
of 'class'?