Per yesterday's concall, I did some experiments with the padding changes
and looked at MPI_Comm structures in dbx. I believe George Bosilca's
concern was that with the padding changes you wouldn't be able to print
out the structures' values.

What I found with dbx and Sun Studio is that prior to calling MPI_Init
the ompi_communicator_t forward reference was unresolved, so any attempt
to print a communicator structure failed because the structure was
undefined. However, once MPI_Init was called, the communicator structure
printed out fine and exactly the same as with the non-padded
implementation.

I believe the non-padded implementation worked because there was an
extern struct ompi_communicator_t that resolved to the library, which I
imagine pulled in the real structure definition. One could probably
force the same for the padded implementation by defining dummy
structures that can be externed in mpi.h. To me this seems gross;
however, does it actually make sense to print out an MPI communicator
before MPI_Init is called? The values of the fields should be either 0
or garbage, so I am really curious whether the above is a problem at all.
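
For reference, the shape of the padded approach I was poking at in dbx
looks roughly like the sketch below; the names, placeholder fields, and
pad size are illustrative assumptions, not the exact definitions in the
workspace.

    /* Rough sketch only -- names, fields, and pad size are assumptions,
     * not the exact Open MPI definitions. */

    /* What the application sees in mpi.h: an incomplete, padded type,
     * so dbx has no field layout until the library's type information
     * is available. */
    struct ompi_predefined_communicator_t;
    extern struct ompi_predefined_communicator_t ompi_mpi_comm_world;
    typedef struct ompi_communicator_t *MPI_Comm;
    #define MPI_COMM_WORLD ((MPI_Comm) &ompi_mpi_comm_world)

    /* What the library defines internally: the real structure followed
     * by padding, so the exported symbol keeps a fixed size even if
     * fields are added later. */
    struct ompi_communicator_t {
        int c_index;                 /* placeholder fields */
        int c_flags;
    };
    #define PREDEFINED_PAD 512       /* assumed pad size */
    struct ompi_predefined_communicator_t {
        struct ompi_communicator_t comm;
        char padding[PREDEFINED_PAD - sizeof(struct ompi_communicator_t)];
    };
    struct ompi_predefined_communicator_t ompi_mpi_comm_world;
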
--td
Terry Dontje wrote:
Another update for this RFC. It turns out that using pointers instead
of structures as initializers would prevent someone from initializing a
global variable to one of the predefined handles. So instead, we decided
to go the route of padding the structures so that we do not overrun the
bss section.
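
To illustrate the problem: a hypothetical sketch, assuming
MPI_COMM_WORLD expanded to an extern pointer variable rather than the
address of an extern object; the declarations are illustrative only, not
the actual mpi.h contents.

    /* Hypothetical sketch of the problem with pointer-style handles. */
    typedef struct ompi_communicator_t *MPI_Comm;

    /* If MPI_COMM_WORLD expanded to an extern pointer variable ... */
    extern MPI_Comm ompi_mpi_comm_world;
    #define MPI_COMM_WORLD ompi_mpi_comm_world

    /* ... then this common pattern would no longer compile, because a
     * variable's value is not a constant expression at file scope: */
    /* MPI_Comm my_comm = MPI_COMM_WORLD; */

    /* With the padded-structure approach, MPI_COMM_WORLD stays the
     * address of an extern object (a constant), so the initializer
     * above keeps working. */
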
I would like to discuss any objections to this solution on tomorrow's
OMPI concall.
thanks,
--td
Terry Dontje wrote:
Just wanted to give an update. On a workspace with just the predefined
communicators converted to opaque pointers, I've run netpipe and hpcc
performance tests and compared the results before and after the changes.
With 10 sample runs, the difference in performance was undetectable.

Using comm_world, I've also verified that I can compile and link an
a.out against a non-debug version of the library and then run that a.out
successfully against a debug version of the library. At a simple level
this proves that the change actually does what we believe it should.
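
For context, the kind of simple test in question is just an ordinary MPI
program that touches a predefined handle; the example below is generic,
not the exact test used.

    /* Generic test of the kind described above: build against one
     * library build, then run against another without recompiling. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* uses a predefined handle */
        printf("hello from rank %d\n", rank);
        MPI_Finalize();
        return 0;
    }
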
I will be completing the rest of the handles in the next couple of days.
Upon completion I will rerun the same tests above, and also test running
hpcc against a debug and a non-debug version of the library without
recompiling.

I believe I am on track to put this back to the trunk by the end of next
week, so if anyone has any issues with this please speak up.
thanks,
--td
Graham, Richard L. wrote:
No specific test, just an idea of how this might impact an app. I am
guessing it won't even be noticeable.
Rich
----- Original Message -----
From: devel-boun...@open-mpi.org <devel-boun...@open-mpi.org>
To: Open MPI Developers <de...@open-mpi.org>
Sent: Thu Dec 18 07:13:08 2008
Subject: Re: [OMPI devel] RFC: make predefined handles extern to
pointers
Richard Graham wrote:
Terry,
Is there any way you can quantify the cost? This seems reasonable, but
it would be nice to get an idea of what the performance cost is (and not
within a tight loop where everything stays in cache).
Rich
Ok, I guess that would eliminate any of the simple perf tests like
IMB, netperf, and such. So do you have something else in mind,
maybe HPCC?
--td
On 12/16/08 10:41 AM, "Terry D. Dontje" <terry.don...@sun.com> wrote:
WHAT: To make the predefined handles extern pointers instead of the
address of an extern structure.
WHY: To make OMPI more backwards compatible with regard to changes to
the structures that define the predefined handles.
WHERE: In the trunk: ompi/include/mpi.h.in and the places in ompi that
directly use the predefined handles.
WHEN: 01/24/2009
TIMEOUT: 01/10/2009
____________________
The point of this change is to improve the odds that an MPI application
does not have to be recompiled when changes are made to the OMPI
library, specifically changes to the structures behind the predefined
handles for communicators, groups, ops, datatypes, error handlers, win,
file, and info.

An example of the changes for the communicator predefined handles can be
found in the hg tmp workspace at
ssh://www.open-mpi.org/~tdd/hg/predefcompat.

Note: the one downside that Jeff and I could think of is that this
potentially adds one level of indirection. I believe that will be a
small overhead, and if you use one of the predefined handles
repetitively (like in a loop), the address will probably be loaded into
a register once, so no additional overhead should be seen due to this
change.
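
To make the two forms concrete, here is a simplified sketch; the
declarations and the _CUR/_NEW names are illustrative assumptions, not
the actual mpi.h definitions.

    /* Simplified sketch, not the actual mpi.h definitions. */
    typedef struct ompi_communicator_t *MPI_Comm;

    /* Current form: the handle is the address of an extern structure,
     * a constant address the compiler/linker can use directly. */
    struct ompi_communicator_t;
    extern struct ompi_communicator_t ompi_mpi_comm_world;
    #define MPI_COMM_WORLD_CUR ((MPI_Comm) &ompi_mpi_comm_world)

    /* Proposed form: the handle is an extern pointer; every use first
     * reads the pointer variable (the extra level of indirection), but
     * the application no longer depends on the structure itself. */
    extern MPI_Comm ompi_mpi_comm_world_ptr;   /* assumed name */
    #define MPI_COMM_WORLD_NEW ompi_mpi_comm_world_ptr
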
_______________________________________________
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel