This has something to do with OpenMPI. I cannot reproduce this issue with 
MPICH. Can you try switching to MPICH (--download-mpich)?
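
In case it helps, reconfiguring would look roughly like this (a sketch only; keep whatever other configure options you are already using, and the make step may differ for your setup):

    cd $PETSC_DIR
    ./configure --download-mpich <your existing configure options>
    make all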

Shri
On Feb 14, 2013, at 6:58 AM, Gonçalo Albuquerque wrote:

> Dear All,
> 
> I'm experimenting with PETSc's hybrid MPI/OpenMP capabilities and have run a 
> rather simple test case (2D magnetostatic) using PETSc compiled with both 
> OpenMPI and thread support (both PThreads and OpenMP) on an Ubuntu 12.04 
> system. I cannot make sense of the results obtained when comparing runs made 
> with the same number of MPI processes (2) and specifying either no threads 
> (-threadcomm_type nothread) or 1 OpenMP thread (-threadcomm_type openmp 
> -threadcomm_nthreads 1). I have attached the logs of both runs. It seems that 
> the communication time has literally exploded. A grep over the logs gives:
> 
> No threading:
> Average time for MPI_Barrier(): 1.38283e-06
> Average time for zero size MPI_Send(): 7.03335e-06
> 
> 1 OpenMP thread:
> Average time for MPI_Barrier(): 0.00870218
> Average time for zero size MPI_Send(): 0.00614798
> 
> The same thing is occurring when running KSP ex5 (see attached logs).
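> 
> For reference, the two invocations being compared look roughly like this (the 
> executable name and the logging option are only illustrative):
> 
>   mpiexec -n 2 ./ex5 -threadcomm_type nothread -log_summary
>   mpiexec -n 2 ./ex5 -threadcomm_type openmp -threadcomm_nthreads 1 -log_summary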
> 
> Any ideas as to what I'm missing?
> 
> Many thanks in advance,
> 
> Gonçalo
> <nothread.log><openmp_nthreads_1.log><ex5_nothread.log><ex5_openmp_nthreads_1.log>
