According to -log_summary, it is:
######
Memory usage is given in bytes:
Object Type       Creations  Destructions    Memory  Descendants' Mem.
--- Event Stage 0: Main Stage
Index Set               117           117   1433016          0
Vec                      42            42   1029000          0
Vec Scatter             117           117         0          0
MatMFFD                   1             1         0          0
Matrix                    2             2    882388          0
SNES                      1             1       124          0
Krylov Solver             1             1     16880          0
Preconditioner            1             1       144          0
######
Note that this was for a small run (3000 DOFs)... that's why the numbers are
low.
Also... I can report HUGE SUCCESS with using system.update() instead of
localize()! Now when I'm running, the CPUs are absolutely pegged: they
never move from 100%, with just little network blips here and there, and
memory stays constant. This is a HUGE improvement!
Derek
On Tue, Jun 24, 2008 at 5:18 PM, Kirk, Benjamin (JSC-EG) <
[EMAIL PROTECTED]> wrote:
> Can you run with -log_summary and see if memory is actually allocated for
> that matrix?
>
> (Sorry, blackberry brief message. Holding my 11-month-old son and emailing
> at the same time...)
>
> -Ben
>
>
> ----- Original Message -----
> From: [EMAIL PROTECTED]
> <[EMAIL PROTECTED]>
> To: Roy Stogner <[EMAIL PROTECTED]>
> Cc: [email protected]
> <[email protected]>
> Sent: Tue Jun 24 18:00:58 2008
> Subject: Re: [Libmesh-devel] Matrix Free Memory Scaling with ParallelMesh
>
> Ok - I have some preliminary results suggesting that this change
> _completely_ does away with the memory oscillation. Memory now stays
> within a couple of megabytes during the entire solve process (after it
> gets ramped up of course).
>
> After I do a bit more testing I'll work up a patch and get it
> committed tomorrow (Ben, do you want to see the patch first?).
>
> Thanks to everyone for their input today!
>
> Next thing to tackle is _not_ creating a Matrix for _Matrix Free_
> computations....
>
> Derek
>
> On Tue, Jun 24, 2008 at 4:24 PM, Derek Gaston <[EMAIL PROTECTED]> wrote:
>> On Jun 24, 2008, at 4:14 PM, Roy Stogner wrote:
>>
>>>
>>> The DiffSystem::assembly() call in PetscDiffSolver uses
>>> current_local_solution to plug into the weighted residual equations,
>>> thus ensuring that DoFs which are owned by another processor have
>>> correct values. How does that work here? When you swap the solution
>>> out again, the current_local_solution doesn't come with it - you're
>>> still passing to residual() a vector that only has locally owned DoFs,
>>> right?
>>>
>>
>> Thanks Roy... that gave me the little kick I needed. All I needed to do
>> was pass current_local_solution instead of X_global... like this:
>>
>> solver->residual (*sys.current_local_solution.get(), R);
>>
>> That runs anyway... I've yet to be able to run a test to see if that
>> takes care of the memory oscillation. I'll get back to you guys on that
>> (with a patch if it works).
>>
>> Derek
>>
>
-------------------------------------------------------------------------
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://sourceforge.net/services/buy/index.php
_______________________________________________
Libmesh-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-devel