On Oct 30, 2013, at 10:03 AM, Cody Permann wrote:

Very interesting; we are all about making sensible default choices for our
users. We might make MOOSE default to sfc_hilbert for meshes built with the
internal generator, at least until we figure out how to make Metis/Parmetis
work better. I can't even come close to getting this problem to run […]
On Tue, Oct 29, 2013 at 3:17 PM, Kirk, Benjamin (JSC-EG311)
<benjamin.k...@nasa.gov> wrote:
> If forced to choose one of those options for a cube, though, I'd suggest
> the SFC option.
>
> -Ben

Thanks, Ben! I wasn't even aware of that Partitioner. I just tried it on my
very large 3D cube domain simulation and it's giving me a 5% boost in
performance over linear with no other changes. I'm running on 120 processors
across 60 nodes + threading (using tons of memory). I guess […]
Just to add something to this: we've been seeing a memory "leak" associated
with Metis during adaptive simulations in parallel... every time it
repartitions, it doesn't seem like we get all the memory back.

I don't remember if we ran that through valgrind yet or not. It may not
actually "leak" […]
On Tue, Oct 29, 2013 at 4:08 PM, John Peterson wrote:
> Ben, it looks like we currently base our partitioning algorithm choice
> solely on the number of partitions... Do you recall if PartGraphKway is
> any more memory efficient than the PartGraphRecursive algorithm? If so,
> perhaps we […]

Thanks, John. I've had limited access today but hopefully can get caught up
and contribute something tonight/tomorrow.

-Ben
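(Not from the thread, but for reference:) a minimal sketch of the two METIS
5.x entry points being compared above, run on a tiny hand-built graph. The
METIS manual suggests recursive bisection for small part counts (roughly
eight or fewer) and multilevel k-way above that, which is the
partition-count-based switch John describes; the 4-vertex path graph and the
part count of 2 here are made up purely for illustration.

#include <cstdio>
#include <metis.h>

int main ()
{
  // CSR adjacency of the undirected path graph 0-1-2-3 (each edge stored
  // twice, once per endpoint).
  idx_t nvtxs = 4, ncon = 1, nparts = 2, objval;
  idx_t xadj[]   = {0, 1, 3, 5, 6};
  idx_t adjncy[] = {1, 0, 2, 1, 3, 2};
  idx_t part[4];

  // Recursive bisection: METIS's suggestion for small numbers of parts.
  // Null pointers request default weights and options.
  METIS_PartGraphRecursive (&nvtxs, &ncon, xadj, adjncy,
                            nullptr, nullptr, nullptr, &nparts,
                            nullptr, nullptr, nullptr, &objval, part);
  std::printf ("recursive edgecut = %d\n", (int) objval);

  // Multilevel k-way: the usual choice for larger part counts.
  METIS_PartGraphKway (&nvtxs, &ncon, xadj, adjncy,
                       nullptr, nullptr, nullptr, &nparts,
                       nullptr, nullptr, nullptr, &objval, part);
  std::printf ("k-way edgecut = %d\n", (int) objval);

  return 0;
}

Both calls fill part[] with one partition index per vertex and return the
edge cut in objval; what differs between them at scale is runtime and memory
behavior, which is what the question above is getting at.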
On Tue, Oct 29, 2013 at 2:48 PM, John Peterson wrote:
> I just checked out the hash immediately prior to the latest Metis/Parmetis
> refresh (git co 5771c42933), ran the same tests again, and got basically
> the same results on the 200^3 case.
>
> So I don't think the metis/parmetis refresh […]
If forced to choose one of those options for a cube, though, I'd suggest the
SFC option.

-Ben

On Oct 29, 2013, at 2:58 PM, "John Peterson" <jwpeter...@gmail.com> wrote:
> On Tue, Oct 29, 2013 at 1:38 PM, ernestol <ernes...@lncc.br> wrote:
> > Thanks for the answers.
> > So which of the three partitioners do you recommend, and how can I
> > change it?
On Tue, Oct 29, 2013 at 1:38 PM, ernestol wrote:
> Thanks for the answers.
>
> So which of the three partitioners do you recommend, and how can I change
> it?

I wouldn't say any of them are actually "recommended" for production code,
but you can certainly try them by first including the relevant header […]
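As a concrete illustration of that suggestion (not code from this thread),
here is a minimal sketch that hard-codes libMesh's space-filling-curve
partitioner on an internally generated cube. It assumes a reasonably recent
libMesh API (ReplicatedMesh and a std::unique_ptr-based
MeshBase::partitioner(); 2013-era libMesh spelled these SerialMesh and
AutoPtr), and the 20^3 HEX8 cube is an arbitrary example:

#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/sfc_partitioner.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  ReplicatedMesh mesh (init.comm());

  // Install the SFC partitioner *before* the mesh is built, so it (rather
  // than the default Metis-based partitioner) assigns elements to
  // processors when prepare_for_use() runs at the end of build_cube().
  SFCPartitioner * sfc = new SFCPartitioner;
  sfc->set_sfc_type ("Hilbert");   // "Morton" is the other supported curve
  mesh.partitioner().reset (sfc);

  // Build a 20^3 hex mesh on the unit cube; partitioning happens here.
  MeshTools::Generation::build_cube (mesh, 20, 20, 20,
                                     0., 1., 0., 1., 0., 1., HEX8);

  return 0;
}

The reason to swap the partitioner before building the mesh is that libMesh
partitions inside prepare_for_use(); installing it afterwards only takes
effect on the next repartition.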
Could you manually use the space-filling-curve partitioner (hard-code it?) to
see if that is better behaved?

On Oct 29, 2013, at 12:19 PM, "John Peterson" wrote:
> On Tue, Oct 29, 2013 at 9:32 AM, Cody Permann wrote:
> > On Tue, Oct 29, 2013 at 5:54 AM, ernestol wrote:
> > > I am using a […]
On Tue, Oct 29, 2013 at 9:32 AM, Cody Permann wrote:
>
> On Tue, Oct 29, 2013 at 5:54 AM, ernestol wrote:
>
> > I am using a cluster with 23 nodes for a total of 184 cores, and each
> > node additionally has 16 GB of RAM. I was thinking that the problem may
> > be in the code, because if I run […]