Dear all,

I am trying out Chapel on a simple example to see how it fares against
MPI.

I solve a linear advection equation in 2-D with a finite volume method,
using Chapel and PETSc. The PETSc code uses PETSc vectors to handle the
MPI communication; I do not actually solve any matrix problem.
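
To give an idea of the Chapel side, the core of the time stepping is a
parallel stencil update, roughly like the simplified first-order upwind
sketch below (this is illustrative only, not the actual scheme or
variable names from the repo):

    config const n = 100, a = 1.0, b = 1.0, cfl = 0.4;
    const dx = 1.0/n, dy = 1.0/n;
    const dt = cfl / (a/dx + b/dy);   // upwind CFL time step
    const D = {1..n, 1..n};
    var u, unew: [D] real;

    // one forward-Euler step with first-order upwind fluxes
    // (assumes a, b > 0); boundary cells are left untouched here
    forall (i, j) in D.expand(-1) {   // interior cells only
      unew[i, j] = u[i, j] - dt * (a * (u[i, j] - u[i-1, j]) / dx
                                 + b * (u[i, j] - u[i, j-1]) / dy);
    }

The forall runs across the cores of a single locale, so no explicit
communication is needed in this single-locale version.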

I have these two codes here

https://github.com/cpraveen/cfdlab/tree/master/chapel/convect2d
https://github.com/cpraveen/cfdlab/tree/master/petsc/convect2d

Both codes solve the same problem with the same algorithm. I run them
on my 4-core MacBook Pro laptop as follows.

*Chapel 1.14*

time ./convect2d --n=100 --Tf=10.0 --cfl=0.4 --si=100000


3532  9.99
3533  9.99283
3534  9.99566
3535  9.99849
3536  10.0

real 0m4.451s
user 0m15.767s
sys 0m0.599s


*PETSc 3.7.3 (MPI)*

time mpirun -np 4 ./convect -Tf 10.0 -da_grid_x 100 -da_grid_y 100 -cfl 0.4 -si 100000


it, t = 3532, 9.990005
it, t = 3533, 9.992833
it, t = 3534, 9.995661
it, t = 3535, 9.998490
it, t = 3536, 10.000000

real 0m1.677s
user 0m6.370s
sys 0m0.242s


The PETSc (MPI) code is roughly 2.7 times faster than the Chapel code
(1.68 s vs 4.45 s real time).

I would like to know whether I am making the comparison in a fair
manner. Am I using the best optimization flags? These are set in the
makefiles.

With MPI I used 4 processes, but I did not specify anything for Chapel.
Is this a fair way to compare them? If not, what should be done?
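
For what it is worth, my understanding from the Chapel documentation is
that the number of worker threads per locale can be pinned with an
environment variable, e.g.

    CHPL_RT_NUM_THREADS_PER_LOCALE=4 ./convect2d --n=100 --Tf=10.0 --cfl=0.4 --si=100000

but I have not experimented with this; by default Chapel seems to use
all the cores it finds.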

Thanks
praveen