Hartmut-

the HPXDataflowSimulator doesn't use async() anywhere. It's completely
dataflow-based.
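
For illustration, such a pure dataflow-driven update loop boils down
to something like this (a minimal sketch, not the actual
HPXDataflowSimulator code; update() and the int state are made up):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/lcos.hpp>
    #include <utility>

    // one hypothetical time step: consume the previous state,
    // produce the next one
    int update(hpx::future<int> previous)
    {
        return previous.get() + 1;
    }

    int main()
    {
        hpx::future<int> state = hpx::make_ready_future(0);

        // chain the time steps purely via dataflow(); no async()
        // anywhere, each continuation is scheduled once its input
        // becomes ready
        for (int t = 0; t < 100; ++t)
            state = hpx::dataflow(update, std::move(state));

        return state.get() == 100 ? 0 : 1;
    }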

Cheers
-Andi


On 06:44 Wed 24 Aug 2016, Hartmut Kaiser wrote:
> Andy,
> 
> I probably forgot to mention that not only the dataflow() calls (we
> could even get away without forking those), but also all async()
> calls should be forked.
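> 
> Roughly, that means (a sketch; f and args are placeholders):
> 
>     hpx::dataflow(hpx::launch::fork, f, args...);  // instead of hpx::dataflow(f, args...)
>     hpx::async(hpx::launch::fork, f, args...);     // instead of hpx::async(f, args...)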
> 
> Regards Hartmut
> ---------------
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
> 
> 
> > -----Original Message-----
> > From: Andreas Schäfer [mailto:[email protected]]
> > Sent: Wednesday, August 24, 2016 5:37 AM
> > To: [email protected]; [email protected]
> > Cc: 'Zach Byerly' <[email protected]>
> > Subject: Re: [hpx-users] Iterative construction of dataflow graph
> > 
> > Heya,
> > 
> > quick update: I was able to reproduce the effects described by Zach
> > with a simple test model[1], which breaks down (i.e. it sends my
> > test machine into swapping with just 60 simulation entities) at
> > 100k time steps.
> > 
> > On 07:41 Fri 19 Aug 2016, Hartmut Kaiser wrote:
> > > To make a long story short, I'd suggest replacing all
> > > dataflow(...) function calls in DGSWEM/HPX with
> > > dataflow(hpx::launch::fork, ...) and seeing whether this helps the
> > > issue. It actually should - it would be nice to see whether theory
> > > and practice are actually the same for a change ;)
> > 
> > I did measure hpx::launch::async and hpx::launch::fork for fewer
> > than 100k time steps (results attached). async broke down at ~75k
> > time steps; fork mitigated this effect up to ~95k time steps, but
> > then broke down as well.
> > 
> > I then implemented a simple chunked approach[2] with a barrier
> > every "chunkSize" time steps. This is terrible, as it forces all
> > cores to drain before more work is generated, but my attempts to
> > overlap dataflow generation with actual work were unsuccessful (I
> > spent 2h on this and only got deadlocks; I'd be happy to merge a PR
> > that fixes this issue). chunkSize is configurable, though, so the
> > actual impact shouldn't be too bad. With this change I can easily go
> > up to 300k time steps without seeing any degradation of the time
> > taken per time step.
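> > 
> > Schematically, the chunked loop looks like this (a simplified
> > sketch, not the actual commit; update() is a placeholder that
> > generates the dataflow nodes for one time step and returns a
> > future):
> > 
> >     for (int t = 0; t < numTimeSteps; t += chunkSize) {
> >         std::vector<hpx::future<void>> steps;
> >         for (int i = 0; i < chunkSize; ++i)
> >             steps.push_back(update(t + i));  // queue one chunk of work
> >         hpx::wait_all(steps);  // barrier: drain before generating more
> >     }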
> > 
> > Cheers
> > -Andi
> > 
> > [1]
> > https://github.com/gentryx/libgeodecomp/commit/c7fa53f4462b3562a189a2f521221e6ddb22cafc
> > [2]
> > https://github.com/gentryx/libgeodecomp/commit/6a4a6a747484476b291d00a23fe515ffcd49b8d2
> > 

-- 
==========================================================
Andreas Schäfer
HPC and Supercomputing
Institute for Multiscale Simulation
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
+49 9131 85-20866
PGP/GPG key via keyserver
http://www.libgeodecomp.org
==========================================================

(\___/)
(+'.'+)
(")_(")
This is Bunny. Copy and paste Bunny into your
signature to help him gain world domination!
