[OMPI users] The Architecture of Open Source Applications (vol 2)

2012-05-08 Thread Jeff Squyres (jsquyres)
I wrote a chapter about Open MPI in "The Architecture of Open Source Applications, volume 2", which was just made available in dead tree form today: http://blogs.cisco.com/performance/the-architecture-of-open-source-applications-volume-ii/ All royalties from this book go to Amnesty

Re: [OMPI users] GPU and CPU timing - OpenMPI and Thrust

2012-05-08 Thread Rohan Deshpande
Yep, you are correct. I did the same and it worked. When I have more than 3 MPI tasks there is a lot of overhead on the GPU, but for the CPU there is no overhead. All three machines have 4 quad-core processors with 3.8 GB RAM. Just wondering why there is no degradation of performance on the CPU? On Tue, May

Re: [hwloc-users] hwloc_get_last_cpu_location on AIX

2012-05-08 Thread Brice Goglin
On 08/05/2012 14:33, Hendryk Bockelmann wrote: > Hello, > > I just ran into trouble using hwloc_get_last_cpu_location on our > POWER6 cluster with AIX 6.1 > My plan is to find out if the binding of the job scheduler was correct > for MPI tasks and OpenMP threads. This is what I want to use: > >

[hwloc-users] hwloc_get_last_cpu_location on AIX

2012-05-08 Thread Hendryk Bockelmann
Hello, I just ran into trouble using hwloc_get_last_cpu_location on our POWER6 cluster with AIX 6.1. My plan is to find out whether the binding of the job scheduler was correct for MPI tasks and OpenMP threads. This is what I want to use: support = hwloc_topology_get_support(topology); ret =
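
A minimal sketch of this kind of check with hwloc's C API (not the poster's actual code, which is cut off above; the HWLOC_CPUBIND_THREAD flag and the error handling here are assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* Does this OS/platform report where a thread last ran? */
    const struct hwloc_topology_support *support =
        hwloc_topology_get_support(topology);
    if (!support->cpubind->get_thisthread_last_cpu_location) {
        fprintf(stderr, "get_last_cpu_location not supported on this OS\n");
        hwloc_topology_destroy(topology);
        return 1;
    }

    /* Ask for the PU(s) this thread last ran on. */
    hwloc_bitmap_t set = hwloc_bitmap_alloc();
    if (hwloc_get_last_cpu_location(topology, set, HWLOC_CPUBIND_THREAD) == 0) {
        char *str;
        hwloc_bitmap_asprintf(&str, set);
        printf("last ran on PU(s): %s\n", str);
        free(str);
    }

    hwloc_bitmap_free(set);
    hwloc_topology_destroy(topology);
    return 0;
}

Whether the cpubind support flags are actually set on AIX 6.1 is exactly what the support query above is meant to reveal.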

Re: [OMPI users] GPU and CPU timing - OpenMPI and Thrust

2012-05-08 Thread Rolf vandeVaart
You should be running with one GPU per MPI process. If I understand correctly, you have a 3-node cluster and each node has a GPU, so you should run with np=3. Maybe you can try that and see if your numbers come out better. From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
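
A hedged sketch of the one-GPU-per-MPI-process setup being suggested (not code from the thread; the rank-modulo-device-count mapping is an assumption that also covers several ranks sharing a node):

#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* With a 3-node cluster, one GPU per node, and np=3, each rank gets
       device 0 on its own node; rank % ndev also handles the case where
       multiple ranks end up on the same node. */
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev > 0)
        cudaSetDevice(rank % ndev);

    printf("rank %d selected GPU %d of %d\n",
           rank, ndev > 0 ? rank % ndev : -1, ndev);

    MPI_Finalize();
    return 0;
}

Launched with something like mpirun -np 3 across the three hosts, each process then drives its own node's GPU (e.g. via Thrust) without contending with other ranks for the same device.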

Re: [OMPI users] problem in installation

2012-05-08 Thread Jeff Squyres
On May 8, 2012, at 7:39 AM, ahmed lasheen wrote: > I am using pgi10.0.0 and it works with nonMPI application well. I think if you try complex non-MPI applications, you may run into issues. I say this based on seeing lines like this in your logs: - PGC-W-0221-Redefinition of symbol CPU_SET

Re: [OMPI users] problem in installation

2012-05-08 Thread ahmed lasheen
I am using PGI 10.0.0 and it works well with non-MPI applications. I will upgrade PGI and I will tell you the result. Thanks. On Tue, May 8, 2012 at 1:15 PM, Jeff Squyres wrote: > Looks like your PGI compiler installation may be busted. > > Can you compile non-MPI applications

Re: [OMPI users] problem in installation

2012-05-08 Thread Jeff Squyres
Looks like your PGI compiler installation may be busted. Can you compile non-MPI applications with it? It looks like you have PGI 10.x. Try upgrading to the latest version of the PGI 10.x series (i.e., get all the relevant PGI compiler bug fixes). On May 8, 2012, at 6:43 AM, ahmed lasheen

Re: [OMPI users] problem in installation

2012-05-08 Thread ahmed lasheen
Thanks. I set CC and CXX, but I still get the following errors: PGC/x86-64 Linux 10.0-0: compilation completed with severe errors make[3]: *** [keyval_lex.lo] Error 1 I have attached the configure and make output. Thanks in advance. On Tue, May 8, 2012 at 2:54 AM, Jeff Squyres (jsquyres)

Re: [OMPI users] Regarding the execution time calculation

2012-05-08 Thread TERRY DONTJE
On 5/7/2012 8:40 PM, Jeff Squyres (jsquyres) wrote: On May 7, 2012, at 8:31 PM, Jingcha Joba wrote: So in the above-stated example, end - start will be: + 20ms? (time slice of P2 + P3 = 20ms) More or less (there's a nonzero amount of time required for the kernel scheduler, and the time
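
A small sketch of the timing pattern under discussion (assuming the measurement is done with MPI_Wtime, which reports wall-clock seconds):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* MPI_Wtime() is wall-clock time, so end - start includes any time
       slices the OS hands to other processes (the P2 + P3 = 20ms in the
       example above) plus kernel scheduler overhead, not just this
       process's own CPU time. */
    double start = MPI_Wtime();
    /* ... code region being measured ... */
    double end = MPI_Wtime();

    printf("elapsed: %f s\n", end - start);

    MPI_Finalize();
    return 0;
}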