On Mon, 2015-09-07 at 02:56 +0000, Charles via Digitalmars-d-learn wrote:
> […]
> 
> x = float[imax][jmax]; //x is about 8 GB of floats
> for(j = 0; j < jmax; j++){
>     //create some local variables.
>     for(i = 0; i < imax; i++){
>         x[j][i] = complicatedFunction(i, x[j-1], other, local, variables);
>     }
> }
So as to run things through a compiler, I expanded the code fragment to:

    float complicatedFunction(int i, float[] x) pure { return 0.0; }

    void main() {
        immutable imax = 10;
        immutable jmax = 10;
        float[imax][jmax] x;
        for(int j = 1; j < jmax; j++){
            for(int i = 0; i < imax; i++){
                x[j][i] = complicatedFunction(i, x[j-1]);
            }
        }
    }

Hopefully this is an accurate representation of the original code. Note
the change in the j iteration, since j-1 is being used as an index.

Of course I immediately wanted to change this to:

    float complicatedFunction(int i, float[] x) pure { return 0.0; }

    void main() {
        immutable imax = 10;
        immutable jmax = 10;
        float[imax][jmax] x;
        foreach(int j; 1..jmax){
            foreach(int i; 0..imax){
                x[j][i] = complicatedFunction(i, x[j-1]);
            }
        }
    }

Hopefully this is not now wrong as a representation of the original
problem.

> In C, I'd just stick a #pragma omp parallel for around the inner
> loop (since the outer loop obviously can't be parallelized).

I would hope you meant C++, not C, there. ;-)

I am not sure OpenMP would work to parallelize your C++ (or C) code.
Given that complicatedFunction has access to the whole of x[j-1], there
could be coupling between x[j-1][m] and x[j-1][n] inside the function,
which could lead to different results being computed in the sequential
and parallel cases. This is not a C/C++/D thing, it is a data coupling
thing.

So although Meta suggested a parallel foreach (the D equivalent of the
OpenMP parallel for pragma), something along the lines of:

    import std.parallelism: parallel;

    float complicatedFunction(int i, float[] x) pure { return 0.0; }

    void main() {
        immutable imax = 10;
        immutable jmax = 10;
        float[imax][jmax] x;
        foreach(int j; 1..jmax){
            foreach(i, ref item; parallel(x[j][])){
                item = complicatedFunction(cast(int)i, x[j-1][]);
            }
        }
    }

(the parallel foreach runs over the row being written, with the whole of
x[j-1] still passed to complicatedFunction exactly as in the sequential
version; note that a static array row has to be sliced with [] before
parallel will treat it as a range, and that the loop index arrives as a
size_t), this brings into stark relief the fact that there is a
potential coupling between x[j-1][m] and x[j-1][n], which means
enforcing parallelism here could well result in the wrong values being
calculated if such a coupling exists.

This is a standard pipeline describable with a map, something along the
lines of:

    import std.algorithm: copy, map;
    import std.range: iota;

    float complicatedFunction(int i, float[] x) pure { return 0.0; }

    void main() {
        immutable imax = 10;
        immutable jmax = 10;
        float[imax][jmax] x;
        foreach(int j; 1..jmax){
            iota(imax).map!(i => complicatedFunction(i, x[j-1][])).copy(x[j][]);
        }
    }

(the map has to run over the indices rather than over the elements,
since each output element needs the whole of the previous row, and copy
writes the results straight into row j without an intermediate
allocation). This is the step prior to using a parallel map, but cast
in this way it highlights that, in order to be parallelized at all,
complicatedFunction must have no couplings between x[j-1][m] and
x[j-1][n]. (I am guessing this is some form of cellular automaton or
some Markov process problem?)

> How should I go about this in D? I want to avoid copying data
> around if it's possible since these arrays are huge.

Indeed. With C, C++, D (Go, Rust, …) you have to use references (aka
pointers) and hope you do not get any ownership problems. It might be
interesting to see how a language such as Haskell, which has copy
semantics but optimizes as much of the copying away as it can, would
fare with this.
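For completeness, here is a minimal sketch of how the real-sized problem
might be structured, assuming complicatedFunction genuinely only reads
row j-1 and has no couplings that break when the inner loop is split
across threads. The sizes, the work unit size of 100, and the
placeholder body of complicatedFunction are illustrative only; the point
is that an 8 GB x has to live on the heap rather than in a fixed-size
array, the previous row is only ever read through a slice, and each row
is written in place, so nothing is copied:

    import std.parallelism: parallel;
    import std.range: iota;

    // Stand-in for the real function: it reads the whole of the previous
    // row and writes nothing, so the inner-loop iterations are independent.
    float complicatedFunction(int i, const(float)[] previousRow) pure {
        return previousRow[i] + i;   // placeholder computation
    }

    void main() {
        // The real sizes are far larger; an ~8 GB float[imax][jmax] cannot
        // go on the stack, so allocate the rows on the heap instead.
        immutable imax = 1000;
        immutable jmax = 1000;
        auto x = new float[][](jmax, imax);
        x[0][] = 0.0f;                          // some initial condition

        foreach(j; 1..jmax){
            const(float)[] previous = x[j-1];   // read-only view, no copy
            float[] current = x[j];             // written in place

            // Parallel equivalent of "#pragma omp parallel for" on the
            // inner loop; 100 is just a work unit size to tune.
            foreach(i; parallel(iota(imax), 100)){
                current[i] = complicatedFunction(i, previous);
            }
        }
    }

The outer j loop stays sequential, exactly as in the original, since row
j cannot be started before row j-1 is complete.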
-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder