Hi,

I'm debugging a performance problem running IMB-MPI1 gather on an NP64 cluster (8 nodes, 8 cores each). I'm using openmpi-1.4.1 from the OFED-1.5.1 distribution, and the BTL is openib/iWARP via Chelsio's T3 RNIC. In short, runs at NP60 and smaller complete in a timely manner as expected, but runs at NP61 and larger slow to a crawl at the 8KB IO size and take ~5-10 minutes to complete. They do complete, though. The behavior is the same even if I run on more than 8 nodes so that spare cores are available; e.g., an NP64 run on a 16-node cluster behaves the same way even though there are only 4 ranks per node. So it is apparently not thread starvation due to a lack of cores.

In the stalled state I see on the order of 100 established iWARP connections on each node, and the connection count increases very slowly and sporadically (at its peak there are around 800 connections for an NP64 gather operation). In comparison, for the <= NP60 runs the connections quickly ramp up to the expected count.

I added hooks in the openib BTL to track the time it takes to set up each connection. In all runs, both <= NP60 and > NP60, the average connection setup time is around 200ms, and the maximum setup time seen is never much above that value. That tells me that individual connection setup is not the issue. I then added printfs/fflushes in librdmacm so I could see when each connection is attempted and when it is accepted (a simplified sketch of that instrumentation is below). With these printfs, in the <= NP60 case I see the connections get set up quickly and evenly: when the job starts there is a small flurry of connection setups, then the run begins, and at around the 1KB IO size there is a second large flurry of connection setups, after which the test continues and completes. In the > NP60 case, this second round of connection setups is very sporadic and slow. Very slow! I see little bursts of ~10-20 connections being set up, then long random pauses. The net is that full connection setup for the job takes 5-10 minutes, during which the ranks are basically spinning idle waiting for the connections to get set up. So I'm concluding that something above the BTL layer isn't issuing the endpoint connect requests in a timely manner.
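For reference, the instrumentation is roughly of this form (a simplified, self-contained sketch; the helper names and the void* endpoint argument are illustrative, not the actual openib BTL or librdmacm symbols):

/* Log when a connection attempt starts and when it completes, plus the
 * elapsed setup time.  Compiles standalone; in the real runs the two
 * helpers are called from the BTL connect path and from librdmacm's
 * connect/accept paths respectively. */
#include <stdio.h>
#include <sys/time.h>

static double now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

/* Called just before the endpoint connect request is issued. */
static void conn_start(void *endpoint, double *t_start)
{
    *t_start = now_ms();
    printf("connect attempt: endpoint %p\n", endpoint);
    fflush(stdout);
}

/* Called when the connection is established/accepted. */
static void conn_done(void *endpoint, double t_start)
{
    printf("connect done:    endpoint %p, %.1f ms\n",
           endpoint, now_ms() - t_start);
    fflush(stdout);
}

int main(void)
{
    double t0;
    int dummy_endpoint;                /* stands in for the real endpoint */
    conn_start(&dummy_endpoint, &t0);
    /* ... the actual connect/accept would happen here ... */
    conn_done(&dummy_endpoint, t0);
    return 0;
}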

Attached are three padb dumps taken during the stall. Does anybody see anything interesting in them?

Any ideas on how I can debug this further? Once I get above the openib BTL layer, my eyes glaze over and I get lost quickly. :) I would greatly appreciate any ideas from the Open MPI experts!


Thanks in advance,

Steve.

-----------------
[0-63] (64 processes)
-----------------
main() at ?:?
  IMB_init_buffers_iter() at ?:?
    IMB_gather() at ?:?
      PMPI_Gather() at pgather.c:175
        mca_coll_sync_gather() at coll_sync_gather.c:46
          ompi_coll_tuned_gather_intra_dec_fixed() at coll_tuned_decision_fixed.c:714
            -----------------
            [0-1,4-63] (62 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:248
              mca_pml_ob1_recv() at pml_ob1_irecv.c:104
                ompi_request_wait_completion() at ../../../../ompi/request/request.h:375
                  opal_condition_wait() at ../../../../opal/threads/condition.h:99
            -----------------
            [2-3] (2 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:315
              ompi_request_default_wait() at request/req_wait.c:37
                ompi_request_wait_completion() at ../ompi/request/request.h:375
                  opal_condition_wait() at ../opal/threads/condition.h:99
-----------------
[0-23,25-63] (63 processes)
-----------------
main() at ?:?
  IMB_init_buffers_iter() at ?:?
    IMB_gather() at ?:?
      PMPI_Gather() at pgather.c:175
        mca_coll_sync_gather() at coll_sync_gather.c:46
          ompi_coll_tuned_gather_intra_dec_fixed() at coll_tuned_decision_fixed.c:714
            -----------------
            [0-3,6-23,25-63] (61 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:248
              mca_pml_ob1_recv() at pml_ob1_irecv.c:104
                ompi_request_wait_completion() at ../../../../ompi/request/request.h:375
                  opal_condition_wait() at ../../../../opal/threads/condition.h:99
            -----------------
            4 (1 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:302
              mca_pml_ob1_send() at pml_ob1_isend.c:125
                ompi_request_wait_completion() at ../../../../ompi/request/request.h:375
                  opal_condition_wait() at ../../../../opal/threads/condition.h:99
            -----------------
            5 (1 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:315
              ompi_request_default_wait() at request/req_wait.c:37
                ompi_request_wait_completion() at ../ompi/request/request.h:375
                  opal_condition_wait() at ../opal/threads/condition.h:99
-----------------
24 (1 processes)
-----------------
ThreadId: 1
  ??() at ?:?
    ??() at ?:?
      ??() at ?:?
        poll_device() at btl_openib_component.c:3020
          ??() at ?:?
            ??() at ?:?
              ??() at ?:?
                ??() at ?:?
                  ??() at ?:?
                    t3b_poll_cq() at src/cq.c:445
                      ThreadId: 2
                        clone() at ?:?
                          start_thread() at ?:?
                            btl_openib_async_thread() at btl_openib_async.c:344
                              poll() at ?:?
                                ThreadId: 3
                                  clone() at ?:?
                                    start_thread() at ?:?
                                      service_thread_start() at btl_openib_fd.c:432
                                        select() at ?:?
-----------------
[0-63] (64 processes)
-----------------
main() at ?:?
  IMB_init_buffers_iter() at ?:?
    IMB_gather() at ?:?
      PMPI_Gather() at pgather.c:175
        mca_coll_sync_gather() at coll_sync_gather.c:46
          ompi_coll_tuned_gather_intra_dec_fixed() at coll_tuned_decision_fixed.c:714
            -----------------
            [0-6,9-63] (62 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:248
              mca_pml_ob1_recv() at pml_ob1_irecv.c:104
                ompi_request_wait_completion() at ../../../../ompi/request/request.h:375
                  opal_condition_wait() at ../../../../opal/threads/condition.h:99
            -----------------
            7 (1 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:302
              mca_pml_ob1_send() at pml_ob1_isend.c:125
                ompi_request_wait_completion() at ../../../../ompi/request/request.h:375
                  opal_condition_wait() at ../../../../opal/threads/condition.h:99
            -----------------
            8 (1 processes)
            -----------------
            ompi_coll_tuned_gather_intra_linear_sync() at coll_tuned_gather.c:315
              ompi_request_default_wait() at request/req_wait.c:37
                ompi_request_wait_completion() at ../ompi/request/request.h:375
                  opal_condition_wait() at ../opal/threads/condition.h:99
