Can you attach the patch to the bug (vs. pasting it in)? That'll make it easier to apply.
Thanks!

On 9/20/06 4:57 PM, "Open MPI" <b...@open-mpi.org> wrote:

> #183: DR deadlock
> ---------------------+------------------------------------------------------
>  Reporter:  afriedle |       Owner:  afriedle
>      Type:  defect   |      Status:  new
>  Priority:  critical |   Milestone:  Open MPI 1.2
>   Version:  trunk    |  Resolution:
>  Keywords:           |
> ---------------------+------------------------------------------------------
> Comment (by afriedle):
>
>  Let's clear some of these DR tickets up... here's a diff that solves the
>  problem:
>
>  {{{
>  Index: pml_dr_sendreq.h
>  ===================================================================
>  --- pml_dr_sendreq.h    (revision 11719)
>  +++ pml_dr_sendreq.h    (working copy)
>  @@ -248,7 +248,6 @@
>           mca_pml_base_bsend_request_fini((ompi_request_t*)sendreq);          \
>       }                                                                       \
>                                                                               \
>  -    OPAL_THREAD_LOCK(&ompi_request_lock);                                   \
>       if( false == sendreq->req_send.req_base.req_ompi.req_complete ) {       \
>           /* Should only be called for long messages (maybe synchronous) */   \
>           MCA_PML_DR_SEND_REQUEST_MPI_COMPLETE(sendreq);                      \
>  @@ -265,7 +264,6 @@
>               ompi_convertor_set_position(&sendreq->req_send.req_convertor,   \
>                                           &offset);                           \
>           }                                                                   \
>       }                                                                       \
>  -    OPAL_THREAD_UNLOCK(&ompi_request_lock);                                 \
>   } while (0)
>
>   /*
>  Index: pml_dr_recvreq.h
>  ===================================================================
>  --- pml_dr_recvreq.h    (revision 11719)
>  +++ pml_dr_recvreq.h    (working copy)
>  @@ -123,7 +123,6 @@
>       }                                                                       \
>       OPAL_THREAD_UNLOCK((recvreq)->req_mutex);                               \
>                                                                               \
>  -    OPAL_THREAD_LOCK(&ompi_request_lock);                                   \
>       opal_list_remove_item(&(recvreq)->req_proc->matched_receives,           \
>                             (opal_list_item_t*)(recvreq));                    \
>                                                                               \
>  @@ -143,7 +142,6 @@
>       if( true == recvreq->req_recv.req_base.req_free_called ) {              \
>           MCA_PML_DR_RECV_REQUEST_RETURN( recvreq );                          \
>       }                                                                       \
>  -    OPAL_THREAD_UNLOCK(&ompi_request_lock);                                 \
>   } while(0)
>
>
>
>  }}}
>
>  This is against trunk r11719. I've had this fix on my UD branch for
>  several months now and it has not caused problems.

--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
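
The patch above removes an OPAL_THREAD_LOCK/OPAL_THREAD_UNLOCK pair on
ompi_request_lock from the DR completion macros, which is the usual fix when a
completion path tries to re-acquire a lock its caller already holds. The ticket
doesn't spell out the exact call chain, so the following is an illustrative
sketch only, not Open MPI code: request_lock, complete_request_locking() and
complete_request_unlocked() are hypothetical names, and an error-checking
pthread mutex is used so the self-deadlock surfaces as EDEADLK instead of a
hang.

{{{
/*
 * Illustrative sketch only -- not Open MPI code.  It shows the generic
 * self-deadlock pattern the removed lock/unlock pair points at: a
 * completion helper that re-acquires a mutex its caller already holds.
 * request_lock, complete_request_locking() and complete_request_unlocked()
 * are hypothetical names.
 */
#define _XOPEN_SOURCE 700
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t request_lock;

/* Pre-patch shape: the helper takes the lock itself. */
static void complete_request_locking(void)
{
    int rc = pthread_mutex_lock(&request_lock);
    if (rc == EDEADLK) {                 /* caller already holds request_lock */
        printf("locking helper: EDEADLK (would deadlock with a normal mutex)\n");
        return;
    }
    /* ... mark the request complete ... */
    pthread_mutex_unlock(&request_lock);
}

/* Post-patch shape: the helper assumes the caller holds request_lock. */
static void complete_request_unlocked(void)
{
    /* ... mark the request complete; no locking here ... */
    printf("lock-free helper: completed under the caller's lock\n");
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&request_lock, &attr);
    pthread_mutexattr_destroy(&attr);

    /* A progress/completion path typically holds the lock while it runs. */
    pthread_mutex_lock(&request_lock);
    complete_request_locking();          /* re-lock attempt: deadlock pattern */
    complete_request_unlocked();         /* patched pattern: fine             */
    pthread_mutex_unlock(&request_lock);

    pthread_mutex_destroy(&request_lock);
    return 0;
}
}}}

Build with something like "cc -pthread sketch.c". With a normal
(non-error-checking) mutex the first helper would simply hang, which is what a
deadlock in the DR completion path would look like in practice.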