Expect this rev to die all over the place. I had a bug in my r13481
checkin that prevented OB1 from getting selected; I corrected this in
r13482.
Sorry about that.
- Galen
On Feb 2, 2007, at 7:44 PM, MPI Team wrote:
Creating the nightly snapshot SVN tarball was a success.
Snapshot:
More like trying to work around the race condition that exists: the
server side sends an RDMA message first, thus violating the iWARP
protocol. For those who want the gory details: when the server sends
first -and- that RDMA message arrives at the client _before_ the
client transitions into
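To illustrate the workaround being described, here is a minimal sketch
in C under assumed names (none of these are the actual openib/iWARP BTL
symbols): the server queues its sends until the client has transmitted
first, since iWARP requires the connection initiator to send the first
message.

    #include <stdbool.h>

    /* Illustrative sketch only -- not real OMPI symbols. */
    struct iwarp_conn {
        bool client_sent_first; /* set once the client's first message arrives */
    };

    static int post_rdma_send(struct iwarp_conn *c)     { (void)c; return 0; } /* stub */
    static int queue_pending_send(struct iwarp_conn *c) { (void)c; return 0; } /* stub */

    /* Server side: hold back all sends until the client has transmitted.
     * Sending from the server first is exactly the race described above. */
    static int server_send(struct iwarp_conn *c)
    {
        return c->client_sent_first ? post_rdma_send(c)
                                    : queue_pending_send(c); /* flushed on first recv */
    }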
The problem is that MCA_BTL_DES_FLAGS_PRIORITY was meant to indicate
that the fragment was higher priority, but the fragment isn't higher
priority. It simply needs to be ordered w.r.t. a previous fragment,
an RDMA in this case.
But after the change the priority flag is totally ignored.
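To make the distinction concrete, a hedged sketch (illustrative flag
names, not the real MCA_BTL_DES_* values): a priority hint and an
ordering constraint are independent properties, so they deserve
separate bits.

    #include <stdint.h>

    #define DES_FLAGS_PRIORITY 0x01u  /* hint: schedule this fragment early      */
    #define DES_FLAGS_ORDERED  0x02u  /* constraint: must not pass earlier RDMAs */

    /* A fragment that merely needs ordering w.r.t. a prior RDMA is not
     * "high priority"; overloading one flag for both loses one meaning. */
    static int may_bypass_queue(uint32_t flags)
    {
        return (flags & DES_FLAGS_ORDERED) == 0;
    }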
Doh, correction.
On May 27, 2007, at 10:23 AM, Galen Shipman wrote:
The problem is that MCA_BTL_DES_FLAGS_PRIORITY was meant to indicate
that the fragment was higher priority, but the fragment isn't higher
priority. It simply needs to be ordered w.r.t. a previous fragment,
an RDMA in this case.
Can we get rid of mca_pml_ob1_send_fin_btl and just have
mca_pml_ob1_send_fin? It seems we should always send the FIN over the
same BTL, and this would clean up the code a bit.
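For illustration, the consolidated entry point might look like this
hedged sketch (hypothetical types and names, not the actual OB1 code),
with the FIN always routed over the BTL recorded in the fragment:

    struct btl;                       /* stand-in for mca_btl_base_module_t  */
    struct frag { struct btl *btl; }; /* records the BTL the frag arrived on */

    static int btl_send_ctrl(struct btl *b) { (void)b; return 0; } /* stub */

    /* One entry point: the FIN always travels over the fragment's own BTL,
     * so a separate per-BTL variant becomes unnecessary. */
    static int send_fin(struct frag *f)
    {
        return btl_send_ctrl(f->btl);
    }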
Thanks,
Galen
On May 27, 2007, at 2:29 AM, g...@osl.iu.edu wrote:
Author: gleb
Date: 2007-05-27 04:29:38
Actually, we still need MCA_BTL_FLAGS_FAKE_RDMA; it can be used as
a hint for components such as one-sided.
Galen
On May 27, 2007, at 5:25 AM, g...@osl.iu.edu wrote:
Author: gleb
Date: 2007-05-27 07:25:39 EDT (Sun, 27 May 2007)
New Revision: 14782
URL:
On Sun, May 27, 2007 at 10:19:09AM -0600, Galen Shipman wrote:
With the current code this is not the case. The order tag is set
during fragment allocation. It seems wrong according to your
description. The attached patch fixes this. If no specific ordering
tag is provided to the allocation function [...] semantics of the
BTL, whatever they may be.
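A hedged sketch of the behavior the patch aims for, with illustrative
names (the real interface has an MCA_BTL_NO_ORDER sentinel, but the
details here are assumed): callers that need no ordering pass the
sentinel, so the fragment falls back to the BTL's default semantics.

    #include <stdint.h>
    #include <stdlib.h>

    #define NO_ORDER 0xffu  /* illustrative sentinel, in the spirit of MCA_BTL_NO_ORDER */

    struct descriptor {
        uint8_t order;  /* ordering tag this fragment must respect */
    };

    /* Stamp the caller's order tag at allocation time; NO_ORDER leaves
     * the fragment to the BTL's default ordering semantics. */
    static struct descriptor *btl_alloc(uint8_t order, size_t size)
    {
        struct descriptor *des = malloc(sizeof(*des) + size);
        if (des != NULL)
            des->order = order;
        return des;
    }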
I have created a wiki page here to help describe the issue as I
currently see it; please feel free to add suggestions, etc.:
https://svn.open-mpi.org/trac/ompi/wiki/BTLSemantics
- Galen
On Jun 7, 2007, at 9:55 AM, Galen Shipman wrote:
Are people available today to discuss this over the phone?
- Galen
On Jun 7, 2007, at 11:28 AM, Gleb Natapov wrote:
On Thu, Jun 07, 2007 at 11:11:12AM -0400, George Bosilca wrote:
I expect you to revise the patch in order to propose a generic
solution, or I'll trigger a vote against the [...] on Monday/Tuesday.
- Galen
-DON
George Bosilca wrote:
I'm available this afternoon.
george.
On Jun 7, 2007, at 2:35 PM, Galen Shipman wrote:
Are people available today to discuss this over the phone?
- Galen
On Jun 7, 2007, at 11:28 AM, Gleb Natapov wrote:
On Thu, Jun 07, 2007 at 11:11:12AM -0400, George Bosilca wrote:
On Jun 11, 2007, at 8:25 AM, Jeff Squyres wrote:
I leave it to the thread subgroup to decide... Should we discuss on
the call tomorrow?
I don't have a strong opinion; I was just testing both because it was
easy to do so. If we want to concentrate on the trunk, I can adjust
my MTT setup.
Hi Gleb,
As we have discussed before, I am working on adding support for
multiple QPs with either per-peer resources or shared resources.
As a result, I am trying to clean up a lot of the OpenIB code;
it has grown organically over the years and needs some attention.
Perhaps we can
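As a rough sketch of that design space in C (illustrative types; not
the actual openib BTL structures): each configured QP draws its
receive buffers either from per-peer allocations or from a shared
receive queue.

    #include <stdint.h>

    enum qp_resource { QP_PER_PEER, QP_SHARED };  /* illustrative only */

    struct qp_spec {
        enum qp_resource type;  /* per-peer buffers vs. shared receive queue */
        uint32_t size;          /* largest message serviced by this QP       */
        uint32_t rd_num;        /* receive descriptors to post               */
    };

    /* e.g. a small per-peer QP for latency plus a large shared QP so that
     * memory use stays bounded as the number of peers grows */
    static const struct qp_spec qps[] = {
        { QP_PER_PEER,  4096,   8 },
        { QP_SHARED,   65536, 256 },
    };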
On Jun 13, 2007, at 9:49 AM, Torsten Hoefler wrote:
Hi Galen, Gleb,
There is also something weird going on if I call the basic alltoall
during the module_init() of a collective module (I need to wire up my
own QPs in my coll component). It takes 7 seconds for 4 nodes and more
than 30 minutes
On Jun 13, 2007, at 11:33 AM, Jeff Squyres wrote:
On Jun 13, 2007, at 1:15 PM, Nysal Jan wrote:
There is a ticket (closed) here:
https://svn.open-mpi.org/trac/ompi/ticket/548
It was fixed by Galen for 1.2.
Ah -- I forgot to look at closed tickets. I think we broke it again;
it certainly
On Jun 13, 2007, at 12:07 PM, Gleb Natapov wrote:
On Wed, Jun 13, 2007 at 02:05:00PM -0400, Jeff Squyres wrote:
On Jun 13, 2007, at 1:54 PM, Jeff Squyres wrote:
With today's trunk, I still see the problem:
Same thing happens on v1.2 branch. I'll re-open #548.
I am sure it was never
On Jun 13, 2007, at 12:23 PM, Jeff Squyres wrote:
On Jun 13, 2007, at 1:40 PM, Gleb Natapov wrote:
[snip]
coordination kind of teleconference. If people think this is a good
idea, I can set up the call.
Sounds good to me.
Sounds good to me too. Pasha also works on the async event thread.
On Jun 13, 2007, at 12:52 PM, Gleb Natapov wrote:
On Wed, Jun 13, 2007 at 02:48:02PM -0400, Jeff Squyres wrote:
On Jun 13, 2007, at 2:41 PM, Gleb Natapov wrote:
Pasha tells me that the best times for Ishai and him are:
- 2000-2030 Israel time
- 1300-1330 US Eastern
- 1100-1130 US Mountain
The patch applies to ib_multifrag as-is without a conflict. But the
branch doesn't compile with or without the patch, so I was not able
to test it.
Do you have some uncommitted changes that may generate a conflict? Can
you commit them so they can be resolved? If there is no conflict
...@open-mpi.org] On Behalf Of Galen Shipman
Sent: Monday, July 9, 2007 15:44
To: Open MPI Developers
Subject: Re: [devel-core] Collective Communications Optimization -
Meeting Scheduled in Albuquerque!
All,
I have confirmed the meeting to be held at the HPC facility at UNM on
Aug 6, 7, and 8.
Here i
In working on my changes in the ib_multifrag branch I modified the
ompi_free_list. The change enables a free list to have a bit more
personality than what is dictated by the type of the item on the
free list. The overall problem was that we often use different free
list item types to
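A hedged sketch of the idea (hypothetical API; the actual
ompi_free_list change surely differs in detail): the list owner
supplies a per-list initializer, so two lists can hand out the same
item type with different personalities.

    #include <stdlib.h>

    struct item { int kind; };  /* one item type shared by several lists */

    /* Per-list initializer: gives items list-specific personality beyond
     * what the item's C type alone dictates. (Hypothetical API.) */
    typedef void (*item_init_fn)(struct item *it, void *ctx);

    struct free_list {
        item_init_fn init;
        void        *ctx;
    };

    static struct item *free_list_get(struct free_list *fl)
    {
        struct item *it = malloc(sizeof(*it));  /* sketch: no recycling */
        if (it != NULL && fl->init != NULL)
            fl->init(it, fl->ctx);
        return it;
    }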
Ok, here are the numbers on my machines (latency in µs):

0 bytes:
mvapich with header caching: 1.56
mvapich without header caching: 1.79
ompi 1.2: 1.59

So on zero bytes OMPI is not so bad. We can also see that header
caching decreases the MVAPICH latency by 0.23.

1 byte:
mvapich with header caching: 1.58
mvapich