I am doing a presentation on Thursday at ACCU 2010 on why the JVM is the
future -- because it supports parallelism so well -- cf.

http://accu.org/index.php/conferences/accu_conference_2010/accu2010_sessions#Parallelism:%20The%20JVM%20Rocks.
 

My emphasis is really on Actor Model, dataflow architectures and
(especially) CSP, since that is what we are currently working on in
GPars (http://gpars.codehaus.org).  However, I feel there ought to be a
part of the session on PGAS, which means X10.  The problem is that X10
appears increasingly to be ignoring the JVM and following Titanium's
route of creating native code (via C in the case of Titanium and C++ in
the case of X10).

Is this move away from the JVM towards native code a strategic one for
X10?

I spent the last couple of days (*) trying to find enough information to
create an X10 version of my "pi by quadrature" example, for which I have
examples in Java (using java.util.concurrent, and ParallelJava), Scala,
Groovy and Clojure.  The problem is embarrassingly parallel (deliberately
so -- the whole point is to look at scaling) and admits futures and
parallel arrays as implementation techniques.
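
For reference, the sequential core of the problem in plain Java -- an
illustrative sketch rather than the exact code I use:

public class PiSequential {
    public static void main(final String[] args) {
        final int n = 100000000;        //  Number of midpoint rectangles.
        final double delta = 1.0 / n;
        double sum = 0.0;
        //  Integrate 4/(1 + x*x) over [0, 1] by the midpoint rule.
        for (int i = 1; i <= n; ++i) {
            final double x = (i - 0.5) * delta;
            sum += 1.0 / (1.0 + x * x);
        }
        System.out.println("pi = " + 4.0 * delta * sum);
    }
}

The parallel versions simply split that loop across tasks and sum the
partial results.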

I tried looking at the MontyPi.x10 example but:

--  The closure used to initialize the array is a nullary one in that
example, whereas I need to pass in some parameters (the slice bounds and
delta, as in the Java sketch after this list).

--  It is not obvious how and why this is a parallel solution.

--  It is not obvious how to match the number of cores available to the
JVM to the problem so as to show scaling.  The relationship between
places, regions, points and threads is not obvious to me for my context
of a twin-Xeon Linux workstation (i.e. 8 cores presented as 8 processors
by Linux) using OpenJDK.  (In the Java versions I just size a thread pool
from the processor count the JVM reports -- see the sketch after this
list.)

--  I couldn't get an Array-based version of "pi by quadrature" to
compute the right result -- nor a Rail-based one, though Rail looks to be
on the way to deprecation and doesn't have a reduce function :-((
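
To make concrete what I mean by matching tasks to cores and by passing
parameters into the per-task closures, here is a cut-down sketch of the
futures-based Java version (the names and the even split of slices are
illustrative, and it assumes the task count divides the number of
rectangles):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PiFutures {
    public static void main(final String[] args) throws Exception {
        final int n = 100000000;      //  Number of rectangles.
        final double delta = 1.0 / n;
        //  Size the pool from what the JVM reports -- 8 on the twin-Xeon box.
        final int numberOfTasks = Runtime.getRuntime().availableProcessors();
        final int sliceSize = n / numberOfTasks;  //  Assumes an exact division.
        final ExecutorService executor = Executors.newFixedThreadPool(numberOfTasks);
        final List<Future<Double>> partialSums = new ArrayList<Future<Double>>();
        for (int taskId = 0; taskId < numberOfTasks; ++taskId) {
            final int id = taskId;
            //  Each task captures the parameters it needs (id, sliceSize, delta)
            //  -- this is what I need the X10 initialization closure to do.
            partialSums.add(executor.submit(new Callable<Double>() {
                public Double call() {
                    double sum = 0.0;
                    for (int i = 1 + id * sliceSize; i <= (id + 1) * sliceSize; ++i) {
                        final double x = (i - 0.5) * delta;
                        sum += 1.0 / (1.0 + x * x);
                    }
                    return sum;
                }
            }));
        }
        double sum = 0.0;
        for (final Future<Double> f : partialSums) { sum += f.get(); }
        executor.shutdown();
        System.out.println("pi = " + 4.0 * delta * sum);
    }
}

What I am after is the idiomatic X10 equivalent of this, and how the
number of activities and places relates to the 8 cores the JVM sees.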

Unless I have just missed finding things, what seems to be missing is
example code showing the paradigms and idioms of use in small programs,
in particular how to control parallelism.  The examples that are there
all focus on recursive decomposition or the heat equation.  If there are
other examples hidden away, I'd appreciate pointers so that I can
represent PGAS on the JVM reasonably.

(My focus in this session is multicore, so clustering, Terracotta and
Hadoop are not issues for me.  Maybe next year for that.)

Thanks.

(*) I know, you can't learn a programming language in less than 6 months
but . . .   

PS  It appears that the X10 users list rejects all emails carrying
digital signatures, with the message:

The message's content type was not explicitly allowed

Is this intended?

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


