Hi Sepideh --

I believe that what you want, in order to keep a scalar running tally, delta, 
that is max-reduced across the parallel tasks of a forall loop like yours, is a 
Chapel feature that is still being worked out, "reduction intents".  You can 
read about this feature here:

http://chapel.cray.com/download.html#releaseNotes

and look at the first ("Language and Compiler Improvements") deck and its first 
part about task intents -- specifically the slide labeled "Introducing reduce 
intents" (#9).

However, unfortunately, I don't think this feature is mature enough yet to 
support the behavior you want.  First, min/max reduction intents on loops are 
not yet supported, and a max reduction is what you want for delta; second, one 
of the current limitations (bullet 3 on slide 13) prevents these intents from 
working on loops over standard domains and arrays.

We plan for the limitations above to be addressed in the next release this 
fall.
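
For what it's worth, once these are addressed, I'd expect your one-array loop 
to be expressible roughly as sketched below.  This is only a sketch of where 
the feature is headed, not something that compiles today, and the exact 
accumulation syntax may differ -- I'm assuming the 'with (max reduce ...)' 
clause shown in the slides and the names (x, ID, epsilon) from your code:

  var delta: real;
  do {
    delta = 0.0;   // reset; the reduction combines into delta's current value
    forall ij in ID with (max reduce delta) {
      const a = (x[ij+(0,1)] + x[ij+(0,-1)]
                 + x[ij+(1,0)] + x[ij+(-1,0)]) / 4.0;
      delta = max(delta, abs(a - x[ij]));  // each task accumulates its own max;
                                           // the per-task results are
                                           // max-combined into delta when the
                                           // loop ends
      x[ij] = a;
    }
  } while (delta > epsilon);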

These limitations are why, at present, most Chapel-based stencil codes you'll 
see use two arrays to store results from two adjacent steps and compute the 
reduction outside the loop.  When loop unrolling is used to avoid the array 
copy-back, the performance of such an approach tends to be reasonable (though a 
scalar-based implementation like yours would have a smaller memory footprint).  
If you haven't found it already, see the "Run, Stencil, Run!" paper at 
http://chapel.cray.com/papers.html for a study of stencil loop idioms in Chapel.
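
To make that concrete, here is a rough sketch of the two-array pattern I mean, 
using a hypothetical second array 'xnew' and the names from your code.  Note 
that this computes a Jacobi-style update (each step reads only old values) 
rather than a true in-place Gauss-Seidel sweep, and that the copy-back at the 
end is what the unrolling/array-swapping tricks avoid:

  var xnew: [D] real = x;     // second array for the "new" step's values
  var delta: real;
  var cnt = 0;

  do {
    forall ij in ID do
      xnew[ij] = (x[ij+(0,1)] + x[ij+(0,-1)]
                  + x[ij+(1,0)] + x[ij+(-1,0)]) / 4.0;

    // the reduction is computed outside the parallel loop, over whole arrays
    delta = max reduce abs(xnew[ID] - x[ID]);

    x[ID] = xnew[ID];         // copy-back of the interior points
    cnt += 1;
  } while (delta > epsilon);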

Though I'm not familiar with G-S, I am surprised by one thing in your code: by 
using a single array, you will be overwriting values in the array before 
adjacent cells have read the previous ones, leading to races over which values 
are read and written when.  In most stencil codes I'm familiar with, this isn't 
the intention, which is another reason to use two arrays.  Is that a bug, or 
does G-S permit data to be aggressively updated like that?

Assuming it's not a bug, one other way to get the one-array version working 
would be to make your delta an atomic (or synchronized) variable and have all 
parallel tasks write to it on every iteration, say with a compare-and-swap 
based approach.  The downside is that this will add a lot more overhead to 
those scalar operations -- I'm not certain which would be faster: an atomic 
delta or two arrays.
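
In case it's useful, here is a rough sketch of the compare-and-swap idea, again 
assuming the names (ID, x, epsilon, cnt) from your procedure.  Treat the exact 
atomic method names as approximate and check the atomics documentation for 
your Chapel version:

  // update 'd' to hold max(d, val) via a compare-and-swap retry loop
  proc atomicMax(ref d: atomic real, val: real) {
    var cur = d.read();
    while (val > cur) {
      if d.compareAndSwap(cur, val) then return;  // our value won; done
      cur = d.read();                             // lost a race; re-read, retry
    }
  }

  var delta: atomic real;
  do {
    delta.write(0.0);           // reset at the top of each outer iteration
    forall ij in ID {
      const a = (x[ij+(0,1)] + x[ij+(0,-1)]
                 + x[ij+(1,0)] + x[ij+(-1,0)]) / 4.0;
      atomicMax(delta, abs(a - x[ij]));
      x[ij] = a;
    }
    cnt += 1;
  } while (delta.read() > epsilon);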

Best wishes,
-Brad


________________________________
From: Sepideh Khajehei [[email protected]]
Sent: Friday, May 29, 2015 9:20 AM
To: [email protected]
Subject: Gauss-Seidel in Chapel

Hello Guys,

I am completely new to Chapel. I want to write a program for doing the 
Gauss-Seidel iteration. I have two issues.
1) I want to have a temporary variable, not an array, to hold the new value, 
calculate delta based on that, and then replace the array element with it.
2) Delta is again not an array, so I think I should define it as atomic so 
everyone has access to it; then I have to synchronise it for finding the 
maximum value in each iteration. I could not find a way to do so.
Below you can find the code that I wrote so far.

proc gauss_seidal(D: domain(2), x: [D] real, epsilon: real) {
  const ID = D.expand(-1,-1);  // domain for interior points
  var a: real = 0;             // temporary variable for elements
  var delta: real;             // measure of convergence
  var cnt = 0;

  do {
    forall ij in ID {
      a = (x(ij+(0,1)) + x(ij+(0,-1))
           + x(ij+(1,0)) + x(ij+(-1,0))) / 4.0;
      delta = abs(a - x[ij]);  // I should find the maximum value in each
                               // iteration for the main array
      x[ij] = a;
    }
    cnt += 1;

    if (verbose) {
      writeln("Iter: ", cnt, " (delta=", delta, ")\n");
      writeln(x);
    }
  } while (delta > epsilon);
  return cnt;
}

I would be grateful if you can help me.
Sepideh