Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Mon, 2015-09-28 at 12:46 +, John Colvin via Digitalmars-d-learn
wrote:
> […]
> 
> Pretty much as expected. Locks are slow, shared accumulators 
> suck, much better to write to thread local and then merge.

Quite. Dataflow is where the parallel action is. (Except for those
writing concurrency and parallelism libraries.) Anyone doing concurrency
and parallelism with shared-memory multi-threading, locks, synchronized,
mutexes, etc. is doing it wrong. This has been known since the 1970s,
but the programming community got sidetracked by a lack of abstraction
(*) for a couple of decades.


(*) I blame C, C++ and Java. And programmers who programmed before (or
worse, without) thinking.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder





Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 15:56 +, Jay Norwood via Digitalmars-d-learn
wrote:
> The std.parallelism.reduce documentation provides an example of a 
> parallel sum.
> 
> This works:
> auto sum3 = taskPool.reduce!"a + b"(iota(1.0,101.0));
> 
> This results in a compile error:
> auto sum3 = taskPool.reduce!"a + b"(iota(1UL,101UL));
> 
> I believe there was discussion of this problem recently ...

Which may or may not already have been fixed, or…

On the other hand:

taskPool.reduce!"a + b"(1UL, iota(101));

seems to work fine.

-- 
Russel.





Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Mon, 2015-09-28 at 11:38 +, John Colvin via Digitalmars-d-learn
wrote:
> […]
> 
> It would be really great if someone knowledgeable did a full 
> review of std.parallelism to find out the answer, hint, hint...  
> :)

Indeed, I would love to be able to do this. However, I don't have time
in the next few months to do this on a volunteer basis, and no one is
paying money that would let this review happen as a side effect. Sad,
but…
-- 
Russel.





Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 14:33 +0200, anonymous via Digitalmars-d-learn
wrote:
> […]
> I'm pretty sure atomicOp is faster, though.

Rough and ready anecdotal evidence would indicate that this is a
reasonable statement, by quite a long way. However, a proper benchmark
is needed for statistical significance.

On the other hand, std.parallelism.taskPool.reduce surely has to be the
correct way of expressing the algorithm?
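
Concretely, something along these lines (a sketch; the 0UL seed to
force a ulong accumulator is the trick suggested elsewhere in the
thread):

import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // the 0UL seed sets the accumulator type to ulong, so the
    // range itself can stay int-based
    auto total = taskPool.reduce!"a + b"(0UL, iota(1, 1000000 + 1));
    writeln(total);  // 500000500000
}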

-- 
Russel.





Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
As a single data point:

==  anonymous_fix.d ==
500000500000

real    0m0.168s
user    0m0.200s
sys     0m0.380s
==  colvin_fix.d ==
500000500000

real    0m0.036s
user    0m0.124s
sys     0m0.000s
==  norwood_reduce.d ==
500000500000

real    0m0.009s
user    0m0.020s
sys     0m0.000s
==  original.d ==
218329750363

real    0m0.024s
user    0m0.076s
sys     0m0.000s


Original is the original: not entirely slow, but broken :-).
anonymous_fix is anonymous's synchronized-keyword version, slow.
colvin_fix is John Colvin's use of atomicOp, correct but only
OK-ish on speed. Jay Norwood first proposed the reduce answer on
the list, I amended it a tiddly bit, but clearly it is a
resounding speed winner.

I guess we need a benchmark framework that can run these 100 times,
taking processor times, and then do the statistics on them. Most people
would assume a normal distribution of results and report
mean/standard deviation and median.
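
Something like this could be a start (a sketch using today's
std.datetime.stopwatch API; runOnce is a hypothetical stand-in for
whichever program is under test):

import std.algorithm : map, sort, sum;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.math : sqrt;
import std.stdio : writefln;

void runOnce()
{
    // hypothetical stand-in for the code being measured
    import std.parallelism : taskPool;
    import std.range : iota;
    auto total = taskPool.reduce!"a + b"(0UL, iota(1, 1000000 + 1));
}

void main()
{
    enum runs = 100;
    double[runs] samples;
    foreach (i; 0 .. runs)
    {
        auto sw = StopWatch(AutoStart.yes);
        runOnce();
        sw.stop();
        samples[i] = sw.peek.total!"usecs" / 1e6;  // seconds
    }
    // mean, standard deviation and median, as suggested above
    immutable mean = samples[].sum / runs;
    immutable sd = sqrt(samples[].map!(x => (x - mean) ^^ 2).sum / (runs - 1));
    auto sorted = samples[].dup;
    sorted.sort();
    writefln("mean %.6fs  sd %.6fs  median %.6fs", mean, sd, sorted[runs / 2]);
}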

-- 
Russel.





Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 12:32 +, Zoidberg via Digitalmars-d-learn
wrote:
> > Here's a correct version:
> > 
> > import std.parallelism, std.range, std.stdio, core.atomic;
> > void main()
> > {
> > shared ulong i = 0;
> > foreach (f; parallel(iota(1, 1000000+1)))
> > {
> > i.atomicOp!"+="(f);
> > }
> > i.writeln;
> > }
> 
> Thanks! Works fine. So "shared" and "atomic" is a must?

Yes and no. But mostly no. If you have to do this as an explicit
iteration (very 1970s), then yes: to avoid doing things wrong you have
to ensure that the update to the shared mutable state is atomic.

A more modern (1930s/1950s) way of doing things is to use implicit
iteration – something Java, C++, etc. are all getting into more and
more. Here that means a reduce call. People have previously mentioned:

taskPool.reduce!"a + b"(iota(1UL,101))

which I would suggest has to be seen as the best way of writing this
algorithm.

 
-- 
Russel.





Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 17:20 +, Jay Norwood via Digitalmars-d-learn
wrote:
> This is a work-around to get a ulong result without having the 
> ulong as the range variable.
> 
> ulong getTerm(int i)
> {
> return i;
> }
> auto sum4 = taskPool.reduce!"a + b"(std.algorithm.map!getTerm(iota(11)));

Not needed, as reduce can take an initial value that sets the type for
the template. See previous email.
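
That is, in the style of John Colvin's suggestion elsewhere in the
thread (a sketch; the 0UL seed, not a ulong range, fixes the
accumulator type):

auto sum4 = taskPool.reduce!"a + b"(0UL, iota(1_000_000_001));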

-- 
Russel.





Re: Parallel processing and further use of output

2015-09-28 Thread John Colvin via Digitalmars-d-learn

On Monday, 28 September 2015 at 11:31:33 UTC, Russel Winder wrote:
> On Sat, 2015-09-26 at 14:33 +0200, anonymous via Digitalmars-d-learn
> wrote:
> > […]
> > I'm pretty sure atomicOp is faster, though.
> 
> Rough and ready anecdotal evidence would indicate that this is a
> reasonable statement, by quite a long way. However, a proper
> benchmark is needed for statistical significance.
> 
> On the other hand, std.parallelism.taskPool.reduce surely has to
> be the correct way of expressing the algorithm?

It would be really great if someone knowledgeable did a full
review of std.parallelism to find out the answer, hint, hint...
:)


Re: Parallel processing and further use of output

2015-09-28 Thread Jay Norwood via Digitalmars-d-learn

On Saturday, 26 September 2015 at 15:56:54 UTC, Jay Norwood wrote:
> This results in a compile error:
> auto sum3 = taskPool.reduce!"a + b"(iota(1UL,101UL));
> 
> I believe there was discussion of this problem recently ...

https://issues.dlang.org/show_bug.cgi?id=14832

https://issues.dlang.org/show_bug.cgi?id=6446

Looks like the problem has been reported a couple of times. I
probably saw the discussion of the 8/22 bug.





Re: Parallel processing and further use of output

2015-09-28 Thread John Colvin via Digitalmars-d-learn

On Monday, 28 September 2015 at 12:18:28 UTC, Russel Winder wrote:
> As a single data point:
> 
> ==  anonymous_fix.d ==
> 500000500000
> 
> real    0m0.168s
> user    0m0.200s
> sys     0m0.380s
> ==  colvin_fix.d ==
> 500000500000
> 
> real    0m0.036s
> user    0m0.124s
> sys     0m0.000s
> ==  norwood_reduce.d ==
> 500000500000
> 
> real    0m0.009s
> user    0m0.020s
> sys     0m0.000s
> ==  original.d ==
> 218329750363
> 
> real    0m0.024s
> user    0m0.076s
> sys     0m0.000s
> 
> Original is the original: not entirely slow, but broken :-).
> anonymous_fix is anonymous's synchronized-keyword version, slow.
> colvin_fix is John Colvin's use of atomicOp, correct but only
> OK-ish on speed. Jay Norwood first proposed the reduce answer
> on the list, I amended it a tiddly bit, but clearly it is a
> resounding speed winner.

Pretty much as expected. Locks are slow, shared accumulators
suck, much better to write to thread local and then merge.
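
For reference, the thread-local-then-merge approach is exactly what
std.parallelism's workerLocalStorage is for; a minimal sketch:

import std.parallelism : parallel, taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // one accumulator per worker thread: no lock, no contention
    auto partial = taskPool.workerLocalStorage(0UL);
    foreach (f; parallel(iota(1, 1000000 + 1)))
        partial.get += f;
    // merge the per-worker partial sums once the loop is done
    ulong total = 0;
    foreach (p; partial.toRange)
        total += p;
    writeln(total);  // 500000500000
}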


Re: Parallel processing and further use of output

2015-09-26 Thread Zoidberg via Digitalmars-d-learn

> Here's a correct version:
> 
> import std.parallelism, std.range, std.stdio, core.atomic;
> void main()
> {
>     shared ulong i = 0;
>     foreach (f; parallel(iota(1, 1000000+1)))
>     {
>         i.atomicOp!"+="(f);
>     }
>     i.writeln;
> }

Thanks! Works fine. So "shared" and "atomic" is a must?


Re: Parallel processing and further use of output

2015-09-26 Thread John Colvin via Digitalmars-d-learn

On Saturday, 26 September 2015 at 12:18:16 UTC, Zoidberg wrote:
> I've run into an issue, which I guess could be resolved easily,
> if I knew how...
> 
> [CODE]
> ulong i = 0;
> foreach (f; parallel(iota(1, 1000000+1)))
> {
>     i += f;
> }
> thread_joinAll();
> i.writeln;
> [/CODE]
> 
> It's basically an example which adds all the numbers from 1 to
> 1000000 and should therefore give 500000500000. Running the
> above code gives 205579930677; leaving out "thread_joinAll()",
> the output is 210161213519.
> 
> I suspect there's some sort of data race. Any hint how to get
> this straight?

Here's a correct version:

import std.parallelism, std.range, std.stdio, core.atomic;
void main()
{
    shared ulong i = 0;
    foreach (f; parallel(iota(1, 1000000+1)))
    {
        i.atomicOp!"+="(f);
    }
    i.writeln;
}


Re: Parallel processing and further use of output

2015-09-26 Thread Meta via Digitalmars-d-learn

On Saturday, 26 September 2015 at 12:33:45 UTC, anonymous wrote:

> foreach (f; parallel(iota(1, 1000000+1)))
> {
>     synchronized i += f;
> }

Is this valid syntax? I've never seen synchronized used like this
before.





Re: Parallel processing and further use of output

2015-09-26 Thread anonymous via Digitalmars-d-learn
On Saturday 26 September 2015 14:18, Zoidberg wrote:

> I've run into an issue, which I guess could be resolved easily, 
> if I knew how...
> 
> [CODE]
>  ulong i = 0;
>  foreach (f; parallel(iota(1, 1000000+1)))
>  {
>  i += f;
>  }
>  thread_joinAll();
>  i.writeln;
> [/CODE]
> 
> It's basically an example which adds all the numbers from 1 to 
> 1000000 and should therefore give 500000500000. Running the above 
> code gives 205579930677, leaving out "thread_joinAll()" the 
> output is 210161213519.
> 
> I suspect there's some sort of data race. Any hint how to get 
> this straight?

Definitely a race, yeah. You need to prevent two += operations happening 
concurrently.

You can use core.atomic.atomicOp!"+=" instead of plain +=:

shared ulong i = 0;
foreach (f; parallel(iota(1, 1000000+1)))
{
import core.atomic: atomicOp;
i.atomicOp!"+="(f);
}

i is shared because atomicOp requires a shared variable. I'm not sure what 
the implications of that are, if any.
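
For what it's worth, the shared requirement is enforced at compile
time; a tiny illustration (a sketch against the core.atomic of the
time):

import core.atomic : atomicOp;

shared ulong s;
ulong t;

void demo()
{
    s.atomicOp!"+="(1);    // compiles: s is shared
    // t.atomicOp!"+="(1); // would not compile: atomicOp wants a shared lvalue
}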

Alternatively, you could use `synchronized`:

ulong i = 0;
foreach (f; parallel(iota(1, 1000000+1)))
{
synchronized i += f;
}

I'm pretty sure atomicOp is faster, though.


Parallel processing and further use of output

2015-09-26 Thread Zoidberg via Digitalmars-d-learn
I've run into an issue, which I guess could be resolved easily, 
if I knew how...


[CODE]
ulong i = 0;
foreach (f; parallel(iota(1, 1000000+1)))
{
i += f;
}
thread_joinAll();
i.writeln;
[/CODE]

It's basically an example which adds all the numbers from 1 to 
1000000 and should therefore give 500000500000. Running the above 
code gives 205579930677, leaving out "thread_joinAll()" the 
output is 210161213519.


I suspect there's some sort of data race. Any hint how to get 
this straight?


Re: Parallel processing and further use of output

2015-09-26 Thread anonymous via Digitalmars-d-learn

On Saturday, 26 September 2015 at 13:09:54 UTC, Meta wrote:

> On Saturday, 26 September 2015 at 12:33:45 UTC, anonymous wrote:
> > foreach (f; parallel(iota(1, 1000000+1)))
> > {
> >     synchronized i += f;
> > }
> 
> Is this valid syntax? I've never seen synchronized used like
> this before.

I'm sure it's valid.

A mutex is created for that instance of synchronized. I.e., only
one thread can execute that piece of code at a time.

If you're missing the braces, they're optional for single
statements, as usual.


http://dlang.org/statement.html#SynchronizedStatement
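
Conceptually, `synchronized i += f;` behaves roughly like the
following hand-written version (a sketch, not the actual compiler
lowering; mtx stands in for the hidden per-statement mutex):

import core.sync.mutex : Mutex;

__gshared Mutex mtx;                    // stands in for the hidden mutex
shared static this() { mtx = new Mutex; }

void add(ref ulong i, int f)
{
    mtx.lock();
    scope (exit) mtx.unlock();
    i += f;                             // at most one thread in here at a time
}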


Re: Parallel processing and further use of output

2015-09-26 Thread Jay Norwood via Digitalmars-d-learn

btw, on my Core i5, in a debug build:

reduce (using double): 11 msec
non_parallel: 37 msec
parallel with atomicOp: 123 msec

So that is the reason for using parallel reduce, assuming the
ulong range thing will get fixed.


Re: Parallel processing and further use of output

2015-09-26 Thread Jay Norwood via Digitalmars-d-learn
This is a work-around to get a ulong result without having the 
ulong as the range variable.


ulong getTerm(int i)
{
    return i;
}
auto sum4 = taskPool.reduce!"a + b"(std.algorithm.map!getTerm(iota(11)));




Re: Parallel processing and further use of output

2015-09-26 Thread John Colvin via Digitalmars-d-learn

On Saturday, 26 September 2015 at 17:20:34 UTC, Jay Norwood wrote:
> This is a work-around to get a ulong result without having the
> ulong as the range variable.
> 
> ulong getTerm(int i)
> {
>     return i;
> }
> auto sum4 = taskPool.reduce!"a + b"(std.algorithm.map!getTerm(iota(11)));

or

auto sum4 = taskPool.reduce!"a + b"(0UL, iota(1_000_000_001));

works for me.


Re: Parallel processing and further use of output

2015-09-26 Thread Jay Norwood via Digitalmars-d-learn
The std.parallelism.reduce documentation provides an example of a 
parallel sum.


This works:
auto sum3 = taskPool.reduce!"a + b"(iota(1.0,101.0));

This results in a compile error:
auto sum3 = taskPool.reduce!"a + b"(iota(1UL,101UL));

I believe there was discussion of this problem recently ...



Re: Parallel processing and further use of output

2015-09-26 Thread Zoidberg via Digitalmars-d-learn

On Saturday, 26 September 2015 at 13:09:54 UTC, Meta wrote:

> On Saturday, 26 September 2015 at 12:33:45 UTC, anonymous wrote:
> > foreach (f; parallel(iota(1, 1000000+1)))
> > {
> >     synchronized i += f;
> > }
> 
> Is this valid syntax? I've never seen synchronized used like
> this before.

Atomic worked perfectly and reasonably fast. "Synchronized" may
work as well, but I had to abort the execution prior to finishing
because it seemed horribly slow.