On Sunday 24 January 2010 02:30:23 am Cunningham, David wrote:
> Currently the DistributedRail is a separate rail at each participating
> activity (same length everywhere), with collective operations to
> synchronise the data in those rails (which may involve communication
> between places as the activities need not be at the same place). The code
> is essentially a reference implementation, and is very raw. It has only
> been tested with KMeansCUDA.x10 and in fact I think that file has its own
> DistributedRail class which is more up-to-date than the one in the
> standard library. However KMeansCUDA does not work at the moment (I am
> currently fixing it).
>
> I would be very wary of DistributedRail if I were you, but by all means
> take a look, and please comment on its interface or implementation as this
> may help us with the reworking of the Array library.
>
> thanks
>
David,
I got my program working using the DistributedRail class. Unfortunately, the
result of v.collectiveReduce(Double.+) is unexpected: all values come out as
0.00. Could you please take a look at what is wrong here? Thanks.
The program is attached to this post. It is an X10 project using the C++
backend. The output is printed below.
<map>
<host name="sirius" slots="1" max_slots="0">
<process rank="0"/>
<process rank="1"/>
</host>
</map>
<stdout rank="0">Matrix A at place 0</stdout>
<stdout rank="0">0.00 0.01 0.02 </stdout>
<stdout rank="0">1.00 1.01 1.02 </stdout>
<stdout rank="0">2.00 2.01 2.02 </stdout>
<stdout rank="0">3.00 3.01 3.02 </stdout>
<stdout rank="0">4.00 4.01 4.02 </stdout>
<stdout rank="0">5.00 5.01 5.02 </stdout>
<stdout rank="1">Matrix A at place 1</stdout>
<stdout rank="1">0.03 0.04 0.05 </stdout>
<stdout rank="1">1.03 1.04 1.05 </stdout>
<stdout rank="1">2.03 2.04 2.05 </stdout>
<stdout rank="1">3.03 3.04 3.05 </stdout>
<stdout rank="1">4.03 4.04 4.05 </stdout>
<stdout rank="1">5.03 5.04 5.05 </stdout>
<stdout rank="0">Rail initial v at place 0</stdout>
<stdout rank="0">1.00 1.00 1.00 1.00 1.00 1.00 </stdout>
<stdout rank="0">Rail v at place 0</stdout>
<stdout rank="0">0.03 3.03 6.03 9.03 12.03 15.03 </stdout>
<stdout rank="1">Rail v at place 1</stdout>
<stdout rank="1">0.12 3.12 6.12 9.12 12.12 15.12 </stdout>
<stdout rank="0">Rail final v at place 0</stdout>
<stdout rank="0">0.00 0.00 0.00 0.00 0.00 0.00 </stdout>
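For what it's worth, I checked the arithmetic by hand: with A(i,j) = i*1.0 + j*0.01
and v all ones, the per-place partial sums printed above are correct, so the
reduce should produce 6*i + 0.15 per row, not zeros. A quick sanity check of
the numbers (plain Python, just for the arithmetic; the column split per place
is read off the output above):

```python
# A(i,j) = i*1.0 + j*0.01, v = all ones, n = 6.
# Per the output above, place 0 holds columns 0-2 and place 1 holds
# columns 3-5 (block distribution along axis 1 over 2 places).
n = 6
A = [[i * 1.0 + j * 0.01 for j in range(n)] for i in range(n)]
v = [1.0] * n

# Per-place partial products over the locally held columns.
partial0 = [sum(A[i][j] * v[j] for j in range(0, 3)) for i in range(n)]
partial1 = [sum(A[i][j] * v[j] for j in range(3, 6)) for i in range(n)]

# What collectiveReduce(Double.+) should produce: the element-wise sum.
expected = [p0 + p1 for p0, p1 in zip(partial0, partial1)]

print([round(x, 2) for x in partial0])  # matches place 0: 0.03 3.03 ...
print([round(x, 2) for x in partial1])  # matches place 1: 0.12 3.12 ...
print([round(x, 2) for x in expected])  # 0.15 6.15 12.15 ... -- not zeros
```

So the partial vectors printed before the reduce agree with this check; only
the reduced result differs.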
--
Mit freundlichen Grüßen / Kind regards
Dr. Christoph Pospiech
High Performance & Parallel Computing
Phone: +49-351 86269826
Mobile: +49-171-765 5871
E-Mail: christoph.pospi...@de.ibm.com
import x10.util.DistributedRail;

/**
 * Class matmul
 */
public class matmul {
    global var A: Array[double]{rank==2};
    // While the matrix A is distributed,
    // we keep the vector as a DistributedRail
    // at every place.
    global var v: DistributedRail[double];
    global var v_tmp: DistributedRail[double];
    global val vsize: Int;

    /**
     * Special constructor
     */
    public def this(n:Int, axis:Int) {
        // populate vsize
        vsize = n;
        // set up distributions
        val D = Dist.makeBlock([0..n-1, 0..n-1], axis);
        // The following distributions are no longer used
        // and are kept here only for reference.
        val D_row:Dist = ( axis == 0 ?
            Dist.makeConstant([0..n-1]) :
            Dist.makeBlock([0..n-1], 0) );
        val D_col:Dist = ( axis == 0 ?
            Dist.makeBlock([0..n-1], 0) :
            Dist.makeConstant([0..n-1]) );
        // This declares a unit matrix:
        // A = Array.make[double](D,
        //     (p(i,j):Point) => { i==j ? 1.0 : 0.0 });
        // This declares a matrix with debug entries:
        A = Array.make[double](D,
            (p(i,j):Point) => { i*1.0 + j*0.01 });
        v = new DistributedRail[double](n,
            // This will be the final initialization
            // (i:Int) => {(i*1.0) as double});
            // but for now we just take a constant vector.
            (i:Int) => {(1.0) as double});
        v_tmp = new DistributedRail[double](n,
            (i:Int) => {(0.0) as double});
    }

    /**
     * Default constructor
     */
    public def this() { this(4, 0); }

    /**
     * Methods used for printing
     */
    static def format(x:double, numDecimals:int) {
        return String.format("%1."+numDecimals+"f",
            [x as Box[Double]]);
    }

    def prettyPrintMatrix(A:Array[double]{rank==2}) {
        finish for (p in A.dist().places()) {
            at (p) {
                Console.OUT.println("Matrix A at place "+p.id());
                for ((i) in (A|p).region.projection(0)) {
                    for ((j) in (A|p).region.projection(1)) {
                        val str = format(A.apply(i,j), 2) + " ";
                        Console.OUT.print(str);
                    }
                    Console.OUT.println();
                }
                Console.OUT.flush();
            }
        }
    }

    def prettyPrintRail(Name:String,
                        w:DistributedRail[double],
                        p:Place) {
        at (p) {
            Console.OUT.println("Rail "+Name+" at place "+p.id());
            for (var i:Int = 0; i < vsize; i++) {
                val str = format(w.get()(i), 2) + " ";
                Console.OUT.print(str);
            }
            Console.OUT.println();
            Console.OUT.flush();
        }
    }

    /**
     * Method to multiply matrix A with vector v
     */
    def ClassicMatrixMultiply() {
        /*
         * First do the local part of the matrix
         * multiply.
         */
        finish for (p in A.dist().places()) {
            async at (p) {
                for (var k:Int = 0; k < vsize; k++) {
                    v_tmp(k) = 0.0;
                }
                for (q(i,j) in (A|p)) {
                    v_tmp.get()(i) += A.apply(q)*v.get()(j);
                }
                // Now we can store the value back
                // - locally.
                v = v_tmp;
            }
        }
        /*
         * Temporarily, we print the vector.
         */
        finish for (p in A.dist().places()) {
            prettyPrintRail("v", v, p);
        }
        /*
         * Last, we have to aggregate the values
         * of v across the places.
         */
        v.collectiveReduce(Double.+);
    }

    /**
     * Main method
     */
    public static def main(args:Rail[String]): Void {
        //val size = ( args.length > 0 ?
        //             Int.parseInt(args(0)) : 4 );
        val size = 6;
        val max_print_size = 6;
        val s = new matmul(size, 1);
        if (size <= max_print_size) {
            s.prettyPrintMatrix(s.A);
            s.prettyPrintRail("initial v", s.v, Place.FIRST_PLACE);
        }
        s.ClassicMatrixMultiply();
        if (size <= max_print_size) {
            s.prettyPrintRail("final v", s.v, Place.FIRST_PLACE);
        }
    }
}
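For reference, the algorithm ClassicMatrixMultiply is meant to implement --
each place forms the partial product over its locally held columns, and the
reduce then sums those partials element-wise -- can be sketched in plain
Python. Places are modeled simply as column ranges here; this illustrates the
intended math only, not the actual DistributedRail semantics:

```python
def block_matvec(A, v, col_blocks):
    """Partial matvec per 'place' over its own column block, followed
    by an element-wise sum across the partial vectors, which is what
    collectiveReduce(Double.+) is expected to perform."""
    n = len(A)
    # Each 'place' multiplies only its locally held columns of A.
    partials = [
        [sum(A[i][j] * v[j] for j in block) for i in range(n)]
        for block in col_blocks
    ]
    # Element-wise sum of the per-place partial vectors.
    return [sum(vals) for vals in zip(*partials)]

n = 6
A = [[i * 1.0 + j * 0.01 for j in range(n)] for i in range(n)]
result = block_matvec(A, [1.0] * n, [range(0, 3), range(3, 6)])
print([round(x, 2) for x in result])  # [0.15, 6.15, 12.15, 18.15, 24.15, 30.15]
```

Splitting the columns differently (or not at all) leaves the result unchanged,
which is the invariant the reduce step is supposed to preserve.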
_______________________________________________
X10-users mailing list
X10-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/x10-users