Thanks Ben... I do think this will help... I think I was doing this
manipulation myself external to the DofMap in my current code. I'll switch
to using this on Monday when I get back to work.
I'm off for an extended weekend in Boise... talk to you guys on Monday.
Derek
On Thu, Nov 6, 2008 at
I have just checked in a small change to the DofMap that will report the
[first,last) local degree of freedom indices for a specified variable.
It will only work when the hackish --node_major_dofs is not specified, which
should not be a problem for anyone except me.
It could easily be extended to
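A half-open [first,last) range like this is convenient because the variable's local dof indices can be walked with a plain < loop. Below is a minimal usage sketch; the range is just passed in as a std::pair, since the actual name and signature of the new DofMap method aren't shown in this message and any name used here would only be a placeholder.

// Usage sketch only: the real DofMap accessor added in this change is not
// shown in the thread, so the [first,last) range is taken as a plain pair.
#include <utility>
#include <vector>

typedef std::pair<unsigned int, unsigned int> DofRange;   // [first, last)

// Collect the local dof indices of one variable, given such a range.
void collect_local_dofs (const DofRange &range,
                         std::vector<unsigned int> &dofs)
{
  dofs.clear();
  dofs.reserve(range.second - range.first);

  // Half-open on the right, so a plain < loop covers exactly the local dofs.
  for (unsigned int d = range.first; d < range.second; ++d)
    dofs.push_back(d);
}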
On Thu, 6 Nov 2008, Benjamin Kirk wrote:
>>> What about instead using MPI_Probe on the receiving side to get the message
>>> size and resizing the buffer when need be?
>>
>> Okay, one more question: what's our definition of "when need be"? If
>> we're willing to shrink as well as enlarge the buffer
> ...
>
> Okay, done.
>
>> And at any rate, if we *don't* resize() then we have to return the
>> size in the status object, which seems unnecessarily awkward...
>
> That's already implicitly happening. Should we now change the return
> type from Status to void? We now already know the size, we
>> What about instead using MPI_Probe on the receiving side to get the message
>> size and resizing the buffer when need be?
>
> Okay, one more question: what's our definition of "when need be"? If
> we're willing to shrink as well as enlarge the buffer, then it shouldn't
> hurt to call vector::r
Benjamin Kirk wrote:
>> Unless MPI_Probe involves some performance hit (and I don't see why it
>> should) this is the way to go.
>
> Along these lines, since send_recv is so integral in a lot of algorithms,
That may not be the case forever; I've tried to factor those algorithms
out into the thre
Benjamin Kirk wrote:
> What about instead using MPI_Probe on the receiving side to get the message
> size and resizing the buffer when need be?
Okay, one more question: what's our definition of "when need be"? If
we're willing to shrink as well as enlarge the buffer, then it shouldn't
hurt to
> Unless MPI_Probe involves some performance hit (and I don't see why it
> should) this is the way to go.
Along these lines, since send_recv is so integral in a lot of algorithms,
should we implement it internally with
(1) nonblocking send
(2) blocking probe to get recv size
(3) resize recv buff
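For reference, steps (1)-(3) map almost directly onto raw MPI calls. The sketch below is not the libMesh Parallel::send_receive implementation; the function name, the MPI_DOUBLE payload type, and the single user-chosen tag are all assumptions made for illustration.

#include <mpi.h>
#include <vector>

// Exchange vectors of doubles with another processor: nonblocking send,
// blocking probe for the incoming size, resize, then a blocking receive.
void send_receive_sketch (const std::vector<double> &sendbuf,
                          std::vector<double>       &recvbuf,
                          int dest, int source, int tag,
                          MPI_Comm comm)
{
  // (1) nonblocking send of our data
  MPI_Request send_req;
  MPI_Isend(const_cast<double *>(sendbuf.data()),  // older MPI wants non-const
            static_cast<int>(sendbuf.size()), MPI_DOUBLE,
            dest, tag, comm, &send_req);

  // (2) blocking probe to learn the size of the incoming message
  MPI_Status status;
  MPI_Probe(source, tag, comm, &status);

  int count = 0;
  MPI_Get_count(&status, MPI_DOUBLE, &count);

  // (3) resize the receive buffer, then complete the receive
  recvbuf.resize(count);
  MPI_Recv(recvbuf.data(), count, MPI_DOUBLE,
           source, tag, comm, MPI_STATUS_IGNORE);

  // Wait for our own send to finish before touching sendbuf again.
  MPI_Wait(&send_req, MPI_STATUS_IGNORE);
}

Because the send is nonblocking, two processors can call this on each other at the same time without deadlocking, which is presumably the point of step (1).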
Benjamin Kirk wrote:
>> Or don't bother writing that case at all? In the places where nonblocking
>> receives make sense, often any blocking at all would lead to a deadlock -
>> you'd really want to handle vector sizing manually.
>
> Agreed. If someone is calling the nonblocking methods they pr
>>> Does anyone have any idea as to what to name the "non-nice" methods?
>>> receive_array? fill? I'm trying to think of something that's
>>> descriptive enough but that won't require users to call
>>> Parallel::nonblocking_receive_into_pre_sized_vector().
>>
>> Is it possible to resize for the
John Peterson wrote:
> On Thu, Nov 6, 2008 at 4:17 PM, Roy Stogner <[EMAIL PROTECTED]> wrote:
>> Does anyone know which STL vector operations exist that reduce capacity()? As far
>> as I know swap() and reserve() are the only ways to do so, but I'd hate to
>> find out that some resize() implementation a
> Does anyone have any idea as to what to name the "non-nice" methods?
> receive_array? fill? I'm trying to think of something that's
> descriptive enough but that won't require users to call
> Parallel::nonblocking_receive_into_pre_sized_vector().
Is it possible to resize for the user in the ca
Benjamin Kirk wrote:
>> Does anyone have any idea as to what to name the "non-nice" methods?
>> receive_array? fill? I'm trying to think of something that's
>> descriptive enough but that won't require users to call
>> Parallel::nonblocking_receive_into_pre_sized_vector().
>
> Is it possible to
On Thu, Nov 6, 2008 at 4:17 PM, Roy Stogner <[EMAIL PROTECTED]> wrote:
>
> Does anyone know which STL vector operations exist that reduce capacity()? As far
> as I know swap() and reserve() are the only ways to do so, but I'd hate to
> find out that some resize() implementation also "helpfully" reduced
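For what it's worth, resize() and clear() are allowed to leave capacity() untouched; the classic portable way to actually give memory back is the swap trick (much later, C++11 added shrink_to_fit() for the same purpose). A small self-contained illustration:

#include <iostream>
#include <vector>

int main()
{
  std::vector<double> v(1000000, 1.0);
  v.resize(10);   // size shrinks, but capacity() typically stays near 1000000

  std::cout << "after resize: size=" << v.size()
            << " capacity=" << v.capacity() << '\n';

  // The "swap trick": copy into a right-sized temporary and swap buffers,
  // so the original million-element allocation is released.
  std::vector<double>(v).swap(v);

  std::cout << "after swap:   size=" << v.size()
            << " capacity=" << v.capacity() << '\n';
  return 0;
}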
John Peterson wrote:
> On Thu, Nov 6, 2008 at 3:14 PM, Roy Stogner <[EMAIL PROTECTED]> wrote:
>> John Peterson wrote:
>>
>>> Non-nice methods could be called receive_n or send_receive_n.
>> Better than "into_pre_sized_vector", but sounds too cryptic.
>
> OK, I am all for self-documenting code, but
On Thu, Nov 6, 2008 at 2:49 PM, Roy Stogner <[EMAIL PROTECTED]> wrote:
>
> My questions are:
>
> Does anyone have any thoughts/disagreement?
>
> Does anyone have any idea as to what to name the "non-nice" methods?
> receive_array? fill? I'm trying to think of something that's
> descriptive enough
We've currently got a discrepancy between how Parallel::send_receive
behaves on vectors (transmitting size information first, then resizing
the vector at the receiving end) and how Parallel::receive (or
nonblocking_receive) behaves (assuming that the user has already resized
the receiving vector).
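To make the discrepancy concrete, here is a rough sketch of the two receive conventions in raw MPI terms. This is illustrative only, not the libMesh code: the function names and the unsigned/double message types are assumptions.

#include <mpi.h>
#include <vector>

// Convention 1 (send_receive style): a size header arrives first, so the
// receiving side can resize before taking the payload.
void receive_with_size_header (std::vector<double> &buf,
                               int source, int tag, MPI_Comm comm)
{
  unsigned int n = 0;
  MPI_Recv(&n, 1, MPI_UNSIGNED, source, tag, comm, MPI_STATUS_IGNORE);

  buf.resize(n);
  MPI_Recv(buf.data(), static_cast<int>(n), MPI_DOUBLE,
           source, tag, comm, MPI_STATUS_IGNORE);
}

// Convention 2 (receive / nonblocking_receive style): the caller must have
// already sized the buffer correctly before the data shows up.
void receive_pre_sized (std::vector<double> &buf,
                        int source, int tag, MPI_Comm comm)
{
  MPI_Recv(buf.data(), static_cast<int>(buf.size()), MPI_DOUBLE,
           source, tag, comm, MPI_STATUS_IGNORE);
}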
On Tue, 4 Nov 2008, Norbert Stoop wrote:
> It seems that equation_system.C does not handle the situation where
> a NonlinearImplicitSystem is read back in because it does not recognize
> the equation system type. Attached is a tiny patch for equation_system.C
> which resolves this. Seems it was j
John Peterson wrote:
> On Thu, Nov 6, 2008 at 11:03 AM, Roy Stogner <[EMAIL PROTECTED]> wrote:
>> Replace those two central pyramids with four tets? Should be just as easy
>> and arguably just as good, especially if we do the same sort of
>> geometrically adaptive axis selection that we do in tet
John Peterson wrote:
> 1.) A possible refinement pattern for the pyramid involves 10
> children: 4 sub-pyramids on the base, one at the apex, and one pyramid
> upside-down below the apex pyramid. Then, there are 4 tets filling in
> the gaps. (This is a little hard to visualize, but I can send yo
Derek Gaston wrote:
> Wouldn't the above loop over a _lot_ (like millions in some cases) of
> unnecessary nodes when using Serial mesh?
At the moment, I think so, but that just makes it a wash efficiency-wise
right now, and it wouldn't be hard to adjust our iterators to be more
efficient on a
Benjamin Kirk wrote:
> But perhaps some of
> Roy's more exotic FE spaces?
For now, you can forget about seeing very exotic FE spaces on our
pyramids and tets: topological DoF storage only works if you've got a
DofObject for every necessary topological connection, and we'd need
elements with a m
On Thu, Nov 6, 2008 at 11:03 AM, Roy Stogner <[EMAIL PROTECTED]> wrote:
> John Peterson wrote:
>
>> 1.) A possible refinement pattern for the pyramid involves 10
>> children: 4 sub-pyramids on the base, one in the apex, and one pyramid
>> upside-down below the apex pyramid. Then, there are 4 tets
> 2.) *Any* refinement pattern for the pyramid will likely involve a
> splitting where some children have a different type than the parent.
> Since this would be the only element with such an inhomogeneous
> splitting, I'd like to discuss the best way to do this in the present
> context of the embe
> Wouldn't the above loop over a _lot_ (like millions in some cases) of
> unnecessary nodes when using Serial mesh?
Well, the exact same thing happens when you loop over active_local_elements.
For the case of a serial mesh (or any mesh for that matter) you are *really*
iterating over all the eleme
On Nov 6, 2008, at 8:57 AM, Kirk, Benjamin (JSC-EG) wrote:
Right...
The fastest thing to do may be to loop over all local elements, and
only look at their internal degrees of freedom, which are
guaranteed to be owned by the processor. Then you can loop over all
local nodes. Should be faster
Hi all,
I just fixed up the quadrature rules for pyramids, which have been
wrong since I first coded them up years ago!! If you've ever done
anything with pyramids in the library, it was probably at the very
least inaccurate and (most likely) totally wrong :)
Anyway, I actually wanted to start a
Dear John and all,
On Thu, 30 Oct 2008, John Peterson wrote:
>> It seems as if the thing is now running quite well for me. (I fixed some
>> bugs in the meantime, all of which were all on the application side.) I've
>> still not been able to test it in parallel though, since the cluster is
>> st
Thanks to John, who pointed out Sourceforge re-enabled limited shell access.
After opening up the permissions on a few directories, the wiki seems to be
fully functional again. I have logged in from a few machines and it remembers
my identity as I traverse pages. I can also edit, so things loo
Right...
The fastest thing to do may be to loop over all local elements, and only look
at their internal degrees of freedom, which are guaranteed to be owned by the
processor. Then you can loop over all local nodes. Should be faster than the
typical loop over elements / loop over element's
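A rough sketch of that loop in libMesh-style code. The iterator and accessor names below follow libMesh conventions but aren't guaranteed to match the exact API being discussed, so treat it as pseudocode with realistic names.

#include <vector>
// libMesh headers (include paths vary by version):
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"
#include "libmesh/node.h"

// Collect the dof indices of variable 'var' in system 'sys_num' that are
// owned by this processor: element-interior dofs of local elements, plus
// every dof attached to a local node.
void collect_owned_var_dofs (const libMesh::MeshBase &mesh,
                             const unsigned int sys_num,
                             const unsigned int var,
                             std::vector<unsigned int> &dofs)
{
  dofs.clear();

  // Interior dofs of local elements are guaranteed to be locally owned.
  libMesh::MeshBase::const_element_iterator el =
    mesh.active_local_elements_begin();
  const libMesh::MeshBase::const_element_iterator end_el =
    mesh.active_local_elements_end();
  for (; el != end_el; ++el)
    for (unsigned int c = 0; c != (*el)->n_comp(sys_num, var); ++c)
      dofs.push_back((*el)->dof_number(sys_num, var, c));

  // Local nodes are owned by this processor as well.
  libMesh::MeshBase::const_node_iterator nd = mesh.local_nodes_begin();
  const libMesh::MeshBase::const_node_iterator end_nd = mesh.local_nodes_end();
  for (; nd != end_nd; ++nd)
    for (unsigned int c = 0; c != (*nd)->n_comp(sys_num, var); ++c)
      dofs.push_back((*nd)->dof_number(sys_num, var, c));
}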
aha - thanks Roy... that's what I needed.
Derek
On Nov 6, 2008, at 8:49 AM, Roy Stogner wrote:
>
>
> On Thu, 6 Nov 2008, Derek Gaston wrote:
>
>> So... I'm putting together a capability for finding the local dof
>> indices for a variable and I've got a question about who owns
>> what.
>>
>
On Thu, 6 Nov 2008, Derek Gaston wrote:
> So... I'm putting together a capability for finding the local dof
> indices for a variable and I've got a question about who owns what.
>
> If a processor owns an element does it own all of the degrees of
> freedom on that element?
It owns any D
So... I'm putting together a capability for finding the local dof
indices for a variable and I've got a question about who owns what.
If a processor owns an element does it own all of the degrees of
freedom on that element?
I guess I'm having trouble figuring out how to get a unique s
On Nov 5, 2008, at 5:52 PM, Kirk, Benjamin (JSC-EG) wrote:
I see your nutty and raise you bat-shit crazy:
PETSc allows you to provide your own buffer for storing the local
elements when you create a vector... Does Trilinos have something
similar? If so, I propose we take the implementatio
Currently, I think we're solving for 7 variables... but we've seen
where we could have 9. It's currently something pretty close to what
John was solving a while ago: double diffusive convection in porous
media... except that we solve for a little more than just convection
for one of the te