Ok - I have some preliminary results suggesting that this change
_completely_ does away with the memory oscillation. Memory now varies
by only a couple of megabytes over the entire solve process (after the
initial ramp-up, of course).
After I do a bit more testing I'll work up a patch and get it
committed.
On Jun 24, 2008, at 4:14 PM, Roy Stogner wrote:
>
> The DiffSystem::assembly() call in PetscDiffSolver uses
> current_local_solution to plug into the weighted residual equations,
> thus ensuring that DoFs which are owned by another processor have
> correct values. How does that work here? When y
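(For reference: System::update() is essentially a ghosted localization of the
parallel solution into current_local_solution, conceptually like the sketch
below. This is a paraphrase, not the verbatim libMesh source, and the include
paths follow a newer libMesh layout.)

    // Paraphrase of what System::update() does (not the verbatim source):
    // pull the locally owned dofs plus the DofMap's send_list (the ghost
    // dofs this processor needs) out of the parallel solution vector and
    // into the ghosted current_local_solution.
    #include "libmesh/system.h"
    #include "libmesh/dof_map.h"
    #include "libmesh/numeric_vector.h"

    using namespace libMesh;

    void update_like (System & sys)
    {
      sys.solution->localize (*sys.current_local_solution,
                              sys.get_dof_map().get_send_list());
    }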
On Tue, 24 Jun 2008, Derek Gaston wrote:
> On Jun 24, 2008, at 1:19 PM, Roy Stogner wrote:
>
>> I think I noticed this problem and tried to avoid it in
>> petsc_diff_solver.C; you might cut and paste some of that code into
>> Ben's solver to see if using my "swap vectors, System::update"
>> localization works any better.
On Jun 24, 2008, at 1:19 PM, Roy Stogner wrote:
> I think I noticed this problem and tried to avoid it in
> petsc_diff_solver.C; you might cut and paste some of that code into
> Ben's solver to see if using my "swap vectors, System::update"
> localization works any better.
Ok - I've tried to implement
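(For reference, the pattern Roy is describing looks roughly like the sketch
below. The member names follow a newer libMesh API than the 2008 tree, and
compute_residual() is a hypothetical assembly hook, so treat this as an
outline rather than the actual petsc_diff_solver.C code.)

    // Outline of the "swap vectors, System::update" localization inside a
    // PETSc SNES residual callback; no serialized copy of the solution is
    // ever created.
    #include <petscsnes.h>
    #include "libmesh/petsc_vector.h"
    #include "libmesh/nonlinear_implicit_system.h"

    using namespace libMesh;

    // Hypothetical hook that assembles the residual from
    // sys.current_local_solution.
    extern void compute_residual (NonlinearImplicitSystem & sys,
                                  NumericVector<Number> & residual);

    PetscErrorCode my_snes_residual (SNES, Vec x, Vec r, void * ctx)
    {
      NonlinearImplicitSystem & sys =
        *static_cast<NonlinearImplicitSystem *>(ctx);

      // Wrap the PETSc vectors SNES hands us -- no copies made here.
      PetscVector<Number> X_input (x, sys.comm());
      PetscVector<Number> R       (r, sys.comm());

      // Swap the incoming iterate into sys.solution...
      PetscVector<Number> & X_system =
        *static_cast<PetscVector<Number> *>(sys.solution.get());
      X_input.swap(X_system);

      // ...and let System::update() localize just the owned + ghosted
      // entries into current_local_solution (instead of a serial X_local).
      sys.update();

      compute_residual(sys, R);

      // Swap back so PETSc still owns its own vector.
      X_input.swap(X_system);
      return 0;
    }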
> I agree. I've always thought it would be cool to have a home-grown
> DistributedMatrix to go with it as well (LibMesh's own SparseMatrix
> implementation) so I'd like to see it stay as a leaf class if
> possible...
That would be my intent. But by pushing the majority of the implementation
into
On Tue, Jun 24, 2008 at 2:51 PM, Roy Stogner <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 24 Jun 2008, Benjamin Kirk wrote:
>
>>> Assuming you mean NumericVector not DistributedVector, that sounds
>>> like an excellent idea
>>
>> Actually, I meant DistributedVector<>, and the inheritance would change.
On Tue, 24 Jun 2008, Benjamin Kirk wrote:
>> Assuming you mean NumericVector not DistributedVector, that sounds
>> like an excellent idea
>
> Actually, I meant DistributedVector<>, and the inheritance would change.
> But your point is well taken. The implementation could just as easily be
> done in NumericVector<>, and then the DistributedVector<>
Hi all,
I've just checked in some changes to the quadrature classes. The most
general change was the addition of a public
bool allow_rules_with_negative_weights;
to the QBase class. By default this is true, and you will obtain the
same behavior which has been present in the past. If you set this to
false you will instead get rules whose weights are all positive (where
such alternatives are implemented), possibly at the cost of more
quadrature points.
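(A minimal usage sketch, assuming the standard QBase/QGauss interface; the
include paths follow a newer libMesh layout and may differ in older trees.)

    // Request quadrature rules with strictly positive weights.
    #include "libmesh/quadrature_gauss.h"
    #include "libmesh/enum_order.h"
    #include "libmesh/enum_elem_type.h"

    using namespace libMesh;

    void build_positive_weight_rule ()
    {
      // Fifth-order Gauss rule in 3D.
      QGauss qrule (3, FIFTH);

      // New public flag on QBase; defaults to true (the old behavior).
      // Setting it to false asks for rules whose weights are all positive,
      // where such alternatives are implemented.
      qrule.allow_rules_with_negative_weights = false;

      // The flag takes effect when the rule is initialized for an element
      // type -- tets are where negative-weight rules typically show up.
      qrule.init (TET4);
    }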
> Assuming you mean NumericVector not DistributedVector, that sounds
> like an excellent idea
Actually, I meant DistributedVector<>, and the inheritance would change.
But your point is well taken. The implementation could just as easily be
done in NumericVector<>, and then the DistributedVector<>
On Tue, 24 Jun 2008, Benjamin Kirk wrote:
> Big picture, the concept of serialized vectors needs to be wholesale
> replaced with vectors+ghost padding. PETSc offers a way to do this, and
> Trilinos has something similar.
Keep in mind, for anywhere we still need to use serialized vectors
(Derek'
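(The PETSc facility Ben is referring to is, I believe, ghosted vectors; a
rough sketch using the VecCreateGhost() family follows. The indices are made
up, error checking is omitted, and the call signatures are those of recent
PETSc releases.)

    // A distributed vector with ghost padding in PETSc: each processor
    // stores its owned entries plus local copies of a few off-processor
    // entries it needs, instead of a fully serialized vector.
    #include <petscvec.h>

    void ghosted_vector_demo (MPI_Comm comm)
    {
      const PetscInt n_local  = 100;            // locally owned entries
      const PetscInt ghosts[] = {5, 42, 1234};  // made-up off-proc dof ids
      const PetscInt n_ghosts = 3;
      Vec x;

      // Owned storage plus padding for the listed ghost entries.
      VecCreateGhost(comm, n_local, PETSC_DECIDE, n_ghosts, ghosts, &x);

      // ...set owned values, then pull current ghost values from their owners:
      VecGhostUpdateBegin(x, INSERT_VALUES, SCATTER_FORWARD);
      VecGhostUpdateEnd  (x, INSERT_VALUES, SCATTER_FORWARD);

      // Owned + ghost entries are both visible through the local form.
      Vec x_local;
      VecGhostGetLocalForm(x, &x_local);
      // ...read values...
      VecGhostRestoreLocalForm(x, &x_local);

      VecDestroy(&x);
    }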
On Jun 24, 2008, at 1:29 PM, Benjamin Kirk wrote:
>>
> FWIW, I have no swap on my compute nodes. (No disk for that matter
> too.)
> I'd rather have a memory allocation request that does not fit in RAM kill
> the process than swap!!
I completely agree with this. I've even gone to having no
> OTOH, Derek, this may be why your "back of the envelope" problem size
> calculation caused the machines to swap ;-)
FWIW, I have no swap on my compute nodes. (No disk for that matter too.)
I'd rather have a memory allocation request that does not fit in RAM kill
the process than swap!!
-ben
>> I've got a good guess as to where the memory spikes are occurring:
>> check out __libmesh_petsc_snes_residual() in petsc_nonlinear_solver.C.
>> Is that X_local a serial vector?
>
> Good catch Roy... that's along the lines of what I was thinking.
>
>> I think I noticed this problem and tried to avoid it in
>> petsc_diff_solver.C; you might cut and paste some of that code into
>> Ben's solver to see if using my "swap vectors, System::update"
>> localization works any better.
On Tue, Jun 24, 2008 at 2:20 PM, Derek Gaston <[EMAIL PROTECTED]> wrote:
> On Jun 24, 2008, at 1:05 PM, Roy Stogner wrote:
>
>> Another question: what NonlinearSystem class? If you're using
>> NonlinearImplicitSystem, doesn't that allocate a matrix whether you
> and PETSc eventually use it or not?
On Jun 24, 2008, at 1:19 PM, Roy Stogner wrote:
> I've got a good guess as to where the memory spikes are occurring:
> check out __libmesh_petsc_snes_residual() in petsc_nonlinear_solver.C.
> Is that X_local a serial vector?
Good catch Roy... that's along the lines of what I was thinking.
> I think I noticed this problem and tried to avoid it in
> petsc_diff_solver.C; you might cut and paste some of that code into
> Ben's solver to see if using my "swap vectors, System::update"
> localization works any better.
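(To put rough numbers on why a serial X_local hurts: a serialized copy costs
every processor the whole vector, while a ghosted localization costs only the
owned slice plus the ghost layer. A back-of-the-envelope sketch with made-up
sizes:)

    // Back-of-the-envelope memory cost of serialized vs. ghosted vectors.
    // All sizes below are made up for illustration.
    #include <cstdio>

    int main ()
    {
      const double n_dofs   = 10.0e6;  // global dofs
      const double n_procs  = 256.0;
      const double n_ghosts = 50.0e3;  // ghost dofs per processor
      const double bytes    = 8.0;     // sizeof(double)

      // Serialized copy: every processor holds all n_dofs entries.
      std::printf("serialized: %.1f MB/proc\n", n_dofs * bytes / 1.0e6);

      // Ghosted copy: owned slice plus ghost padding only.
      std::printf("ghosted:    %.1f MB/proc\n",
                  (n_dofs / n_procs + n_ghosts) * bytes / 1.0e6);

      // Prints roughly 80.0 MB/proc vs. 0.7 MB/proc for these numbers --
      // and a serialized copy allocated and freed on every residual
      // evaluation is the kind of thing that shows up as memory oscillation.
      return 0;
    }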
On Jun 24, 2008, at 1:05 PM, Roy Stogner wrote:
> Another question: what NonlinearSystem class? If you're using
> NonlinearImplicitSystem, doesn't that allocate a matrix whether you
> and PETSc eventually use it or not?
Yes, NonlinearImplicitSystem.
Crap... I just went and dug down into ImplicitSystem
On Tue, 24 Jun 2008, Derek Gaston wrote:
> I really haven't looked into it. It's more of a "feeling": you can watch
> the CPUs peg out for a few seconds... then they all go crazy and the
> network traffic spikes. It could be something specific to the way the
> NonlinearSystem solve progresses
On Tue, 24 Jun 2008, Derek Gaston wrote:
> I thought I would share some numbers. All I'm doing is solving pure
> diffusion with a Dirichlet BC and a forcing function in 3D on hexes,
> but I'm doing it completely matrix-free using the NonlinearSystem class.
Another question: what NonlinearSystem class? If you're using
NonlinearImplicitSystem, doesn't that allocate a matrix whether you
and PETSc eventually use it or not?
On Jun 24, 2008, at 12:50 PM, Roy Stogner wrote:
> First order elements?
Yep
> Define "communication steps" - I assume you're not talking about
> synching up ghost DoFs during a solve?
I really haven't looked into it. It's more of a "feeling": you can
watch the CPUs peg out for a few seconds.
On Jun 24, 2008, at 12:46 PM, John Peterson wrote:
> I believe mpip (http://mpip.sourceforge.net/) can report more detailed
> memory usage, but implementing the profiling does require additional
> work.
Thanks for the tip... I'll check into it when I get a chance (which
might be next year!).
On Tue, 24 Jun 2008, Derek Gaston wrote:
> Using Roy's workaround so that partitioning doesn't happen with
> ParallelMesh I've been able to run some pretty big problems today, and
> I thought I would share some numbers. All I'm doing is solving pure
> diffusion with a Dirichlet BC and a forcing function in 3D on hexes,
> but I'm doing it completely matrix-free using the NonlinearSystem class.
Hi Derek,
On Tue, Jun 24, 2008 at 1:30 PM, Derek Gaston <[EMAIL PROTECTED]> wrote:
>
> Here is how much each proc is using:
>
> CPUs : MB/proc
>  256 : 200-700
>  128 : 350-700
>   64 : 450-800
Thanks! These numbers are pretty interesting. I know there are some
places in the code where we go ahead
Hey guys,
Using Roy's workaround so that partitioning doesn't happen with
ParallelMesh I've been able to run some pretty big problems today, and
I thought I would share some numbers. All I'm doing is solving pure
diffusion with a Dirichlet BC and a forcing function in 3D on hexes,
but I'm doing it completely matrix-free using the NonlinearSystem class.
> But as a temporary workaround, would you go to partitioner.C and
> uncomment the "don't repartition in parallel" code on lines 47-48?
> I'm not yet sure whether there's a bug in Ben's redistribution code or
> whether that's just triggering a bug in my core or refinement code,
> but I at least can
On Jun 23, 2008, at 8:55 PM, Roy Stogner wrote:
> But as a temporary workaround, would you go to partitioner.C and
> uncomment the "don't repartition in parallel" code on lines 47-48?
That does seem to work for now. Is there a way to just turn off
repartitioning altogether? I don't need it a
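(One possible answer to the "turn it off altogether" question, assuming a
libMesh version that provides MeshBase::skip_partitioning(); older trees may
only have the partitioner.C workaround above.)

    // Skip (re)partitioning without editing partitioner.C -- assumes the
    // MeshBase::skip_partitioning() API exists in the libMesh being used.
    #include "libmesh/mesh_base.h"

    using namespace libMesh;

    void disable_repartitioning (MeshBase & mesh)
    {
      // With this set, prepare_for_use() leaves the current element
      // ownership alone instead of calling the partitioner.
      mesh.skip_partitioning(true);

      // Alternative in newer trees: drop the partitioner object entirely.
      // mesh.partitioner().reset(nullptr);
    }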