of load imbalance. At coarser grids it gets worse. But I need to confirm
that this caused the poor scaling and the large VecScatter delays in the
experiment.
Thanks.
--Junchao Zhang
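Junchao's point about coarser grids can be illustrated with a quick sketch: with a fixed number of ranks, each multigrid coarsening step shrinks the per-rank workload until some ranks have almost no points. The grid and rank counts below match the thread's 125-rank case, but the coarsening schedule itself is hypothetical, purely for illustration.

```python
# Sketch: per-rank work under repeated 2x coarsening of a 150^3 global grid
# split across 5x5x5 = 125 ranks (the thread's 125-rank case).  The
# coarsening schedule here is hypothetical.
ranks = 125
n = 150   # points per dimension on the finest grid
level = 0
while n >= 2:
    per_rank = n**3 / ranks   # average points per rank at this level
    print(f"level {level}: {n}^3 global, ~{per_rank:.1f} points/rank")
    n //= 2
    level += 1
# By level 5 (4^3 = 64 points) there are fewer points than ranks, so most
# ranks sit idle: load imbalance is guaranteed on the coarse grids.
```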
On Tue, Jun 12, 2018 at 12:42 AM, Michael Becker
<michael.bec...@physik.uni-giessen.de> wrote:
Hello,
any new insights yet?
Michael
On 04.06.2018 at 21:56, Junchao Zhang wrote:
Michael, I can compile and run your test. I am now profiling it. Thanks.
--Junchao Zhang
This should give us a better idea of whether your large VecScatter costs
come from slow communication or from some sort of load imbalance.
--Junchao Zhang
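One rough way to tell the two apart (a generic sketch, not necessarily the profiling approach used in this thread): compare per-rank times for the event; PETSc's -log_view reports a similar max/min ratio per event. The timings below are hypothetical.

```python
# Hypothetical per-rank times (seconds) for a single event such as
# VecScatterEnd; NOT taken from the thread's actual log files.
times = [1.02, 0.98, 1.05, 3.90, 1.01]   # one straggler rank

ratio = max(times) / min(times)
print(f"max/min time ratio = {ratio:.2f}")
# ratio near 1 -> event uniformly slow on all ranks (communication-bound)
# ratio >> 1   -> load imbalance: fast ranks wait on the straggler
```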
On Wed, May 30, 2018 at 3:27 AM, Michael Becker
<michael.bec...@physik.uni-giessen.de> wrote:
Barry,
you distributed the 125 MPI ranks evenly.
--Junchao Zhang
On Tue, May 29, 2018 at 6:18 AM, Michael Becker
<michael.bec...@physik.uni-giessen.de> wrote:
Hello again,
here are the updated log_view files for 125 and 1000 processors. I ran
both problems twice, the first time with all processors per node
allocated ("-1.txt"), the second with only half on twice the number of
nodes ("-2.txt").
On May 24, 2018, at 12:2
a bar chart of each event for the two cases to see which
ones are taking more time and which are taking less (we cannot tell from
the two logs you sent us, since they are for different solvers).
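The suggested comparison can be sketched as follows; the event names are real PETSc -log_view events, but the timings are hypothetical placeholders, not the numbers from this thread, and both runs must use the same solver for the comparison to be meaningful.

```python
# Sketch: per-event comparison between two -log_view runs of the SAME
# solver.  Times are hypothetical placeholders.
run_a = {"MatMult": 12.0, "VecScatterEnd": 3.1, "PCApply": 20.5}
run_b = {"MatMult": 12.4, "VecScatterEnd": 9.8, "PCApply": 21.0}

deltas = {event: run_b[event] - run_a[event] for event in run_a}
for event in sorted(deltas, key=deltas.get, reverse=True):
    print(f"{event:15s} {run_a[event]:6.1f}s -> {run_b[event]:6.1f}s "
          f"({deltas[event]:+.1f}s)")
```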
On May 24, 2018, at 12:24 AM, Michael Becker
<michael.bec...@physik.uni-giessen.de> wrote:
On Thu, May 24, 2018 at 5:10 AM, Michael Becker
<michael.bec...@physik.uni-giessen.de> wrote:
CG/GCR: I accidentally kept gcr in the batch file. That's still
from when I was experimenting with the different methods.
For the 125 case the arrays l_Nx, l_Ny, l_Nz have dimension 5 and every
element has value 30. VecGetLocalSize() returns 27000 for every rank. Is
there something I didn't consider?
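A quick consistency check of these numbers (a sketch; l_Nx, l_Ny, l_Nz are reproduced here as plain Python lists, assuming the 5 x 5 x 5 rank layout described above):

```python
# Consistency check for the 125-rank case: 5 x 5 x 5 ranks, each owning
# 30 planes per dimension (l_Nx/l_Ny/l_Nz reproduced as plain lists).
l_Nx = l_Ny = l_Nz = [30] * 5

ranks = len(l_Nx) * len(l_Ny) * len(l_Nz)         # 5*5*5 = 125
local_size = l_Nx[0] * l_Ny[0] * l_Nz[0]          # 30^3, per-rank size
global_size = sum(l_Nx) * sum(l_Ny) * sum(l_Nz)   # 150^3 global grid

print(ranks, local_size, global_size)   # 125 27000 3375000
assert local_size * ranks == global_size
```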
Michael
On 24.05.2018 at 09:39, Lawrence Mitchell wrote:
On 24 May 2018, at 06:24, Michael Becker wrote:
Hello,
I added a PETSc solver class to our particle-in-cell simulation code and
all calculations seem to be correct. However, some weak scaling tests I
did are rather disappointing because the solver's runtime keeps
increasing with system size although the number of cores is scaled up.
Michael
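Weak-scaling efficiency for runs like these is usually reported relative to the smallest run; a minimal sketch with hypothetical runtimes (not the measurements from this thread):

```python
# Weak scaling: per-rank problem size is fixed, so ideal runtime is flat.
# Runtimes below are hypothetical placeholders.
runs = {125: 10.0, 1000: 17.5}   # ranks -> solver runtime (s)

base = runs[125]
for ranks, t in sorted(runs.items()):
    print(f"{ranks:5d} ranks: {t:5.1f} s, efficiency {base / t:.0%}")
```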
On 03.06.2016 at 14:32, Matthew Knepley wrote:
On Fri, Jun 3, 2016 at 5:56 AM, Dave May <dave.mayhe...@gmail.com> wrote:
On 3 June 2016 at 11:37, Michael Becker
<michael.bec...@physik.uni-giessen.de> wrote:
Dear all,
I have a few questions regarding possible performance enhancements for
the PETSc solver I included in my project.
It's a particle-in-cell plasma simulation written in C++, where
Poisson's equation needs to be solved repeatedly on every timestep.
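The performance-relevant pattern here is that the same operator is solved against a new right-hand side every timestep, so matrix-only setup work can be done once and reused. A minimal 1D sketch of that idea using the Thomas algorithm (the real code uses PETSc's KSP in 3D; this only illustrates the reuse pattern):

```python
# Sketch: "set up once, solve every timestep" on a 1D model Poisson system
# (tridiagonal, diag = 2, off-diagonals = -1, Dirichlet BCs) via the Thomas
# algorithm.  Matrix-only work happens once in factor(); each timestep pays
# only for the right-hand-side sweep in solve().

def factor(n):
    """One-time forward elimination on the matrix: modified superdiagonal."""
    cp = [0.0] * n
    cp[0] = -1.0 / 2.0
    for i in range(1, n - 1):
        cp[i] = -1.0 / (2.0 + cp[i - 1])
    return cp

def solve(cp, rhs):
    """Per-timestep solve against a new right-hand side."""
    n = len(rhs)
    dp = [0.0] * n
    dp[0] = rhs[0] / 2.0
    for i in range(1, n):
        dp[i] = (rhs[i] + dp[i - 1]) / (2.0 + cp[i - 1])
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In PETSc the analogous reuse comes from setting up the KSP and preconditioner once and calling KSPSolve each timestep while the operator is unchanged.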
The simulation domain is discretized