Attached are some results from scaling runs on Vulcan that I posted on another thread.  The first set of plots shows results before and after fixing a problem with VecNorm.

The second set of plots compares the pure MPI case with MPI + pthreads.

In both cases, the runs used two threads or ranks per core.
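
For anyone who wants to try the same comparison, a minimal sketch along these
lines should work.  This is my reconstruction, not the actual benchmark: the
vector size is a placeholder, and the option names assume the threadcomm
package in petsc-dev of this vintage (-threadcomm_type, -threadcomm_nthreads).

  #include <petscvec.h>
  /* Pure MPI:       launch with 2 ranks per core, no threadcomm options.
     MPI + pthreads: launch with 1 rank per core and add
                     -threadcomm_type pthread -threadcomm_nthreads 2 */
  int main(int argc,char **argv)
  {
    Vec            x;
    PetscReal      nrm;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    ierr = VecCreate(PETSC_COMM_WORLD,&x);CHKERRQ(ierr);
    ierr = VecSetSizes(x,PETSC_DECIDE,1000000);CHKERRQ(ierr); /* placeholder size */
    ierr = VecSetFromOptions(x);CHKERRQ(ierr);
    ierr = VecSet(x,1.0);CHKERRQ(ierr);
    /* VecNorm is the routine whose fix the first set of plots compares;
       it ends in a global reduction, so it is sensitive to how ranks and
       threads are laid out across the cores. */
    ierr = VecNorm(x,NORM_2,&nrm);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD,"norm = %g\n",(double)nrm);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }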

Dave

________________________________________
From: Jed Brown [[email protected]]
Sent: Monday, November 11, 2013 12:09 PM
To: Mark Adams
Cc: Nystrom, William D; For users of the development version of PETSc
Subject: Re: [petsc-dev] Mira

Mark Adams <[email protected]> writes:
> I'm not sure if they really do have threads but they just wanted to see if
> PETSc could use threads.  It looks like only 40% of the run time is in
> PETSc so we are shelving this but I wanted to try threads anyway.

Okay, some programming is needed to make it function correctly on BG/Q.

If you just want to get a rough sense, you could try the OpenMP branch from the folks at Imperial.

<<attachment: KSPSolve_Time_vs_Node_Count_CPU_Pthread_6400_vulcan.png>>

<<attachment: KSPSolve_Time_vs_Node_Count_CPU_Pthread_MPI_6400_vulcan.png>>
