Hi Dmitry,
I have only tried with the latest release, 1.6.2. I'll check out the master
branch (1.7) from GitHub and let you know.
Regards,
Mohammed
On Thursday, 22 November 2018, 15:24:53 CET, Gladkov, Dmitry
<[email protected]> wrote:
Hi Mohammed,
I don't recommend using the verbs provider with the FI_EP_RDM EP type; we are
deprecating this provider as of libfabric 1.7.
Our default provider for running libfabric on IB/iWARP/RoCE verbs devices is
now RxM over verbs (verbs with the FI_EP_MSG EP type).
If you use libfabric 1.6.x, RxM/verbs should be the default provider.
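To check which stack an application picks up, you can filter providers via the FI_PROVIDER environment variable; a minimal sketch, assuming the conventional layered-provider name "verbs;ofi_rxm" (confirm the names your build reports with fi_info -l):

```shell
# Force the RxM-over-verbs stack instead of the deprecated verbs/RDM path.
# "verbs;ofi_rxm" is the conventional layered-provider spelling; adjust it
# if `fi_info -l` on your build lists different provider names.
export FI_PROVIDER="verbs;ofi_rxm"
echo "provider filter: $FI_PROVIDER"
```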
Do you still see the hangs when using libfabric's master branch?
--
Dmitry
From: Mohammed Shaheen [mailto:[email protected]]
Sent: Thursday, November 22, 2018 4:54 PM
To: Gladkov, Dmitry <[email protected]>; Hefty, Sean
<[email protected]>; [email protected];
[email protected]; Ilango, Arun <[email protected]>
Subject: Re: [libfabric-users] intel mpi with libfabric
Thanks Arun and Dmitry for your support.
Well, I am building my own libfabric; I export the right variables and source
the Intel MPI environment with -ofi_internal=0. I have figured out where the
problem is:
1. If libfabric is built with all providers, i.e. ./configure is run without
explicitly including or excluding any provider, it builds ibverbs among others;
however, the MPI test program hangs during execution.
2. If libfabric is configured with only ibverbs enabled and all other providers
disabled, i.e. ./configure --enable-verbs=yes --enable-rxm=no --enable-rxd=no
--enable-sockets=no --enable-tcp=no --enable-udp=no, the MPI test program runs
through.
Another observation: when I enable debugging (--enable-debug), I get the
aforementioned message (here it is again):
prov/verbs/src/ep_rdm/verbs_rdm_cm.c:337: fi_ibv_rdm_process_addr_resolved:
Assertion `id->verbs == ep->domain->verbs' failed.
and the MPI test program still runs through in case 2 above. I am not sure
whether I should take this message seriously.
I did not see any difference in the test MPI program's behaviour whether I
built ibverbs as a DSO (--enable-verbs=dl) or built into libfabric itself
(--enable-verbs=yes), except that in the DSO case FI_PROVIDER_PATH must be
exported. However, one thing is worth mentioning as a probable bug: when
ibverbs (or, I assume, any other provider) is built as a DSO, the libfabric
directory holding the provider DSOs gets the wrong permissions. This means that
if you build libfabric as root and use the default installation directory
(/usr/local/lib), your MPI program will not run when launched as another user.
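As a workaround for the permissions issue described above, loosening the provider directory's mode after a root install should let other users traverse it. A minimal sketch, using a scratch directory as a stand-in for the real /usr/local/lib/libfabric path:

```shell
# Simulate the provider DSO directory with root-only permissions,
# then open it up so non-root users can read and traverse it.
dir=$(mktemp -d)/libfabric   # stand-in for /usr/local/lib/libfabric
mkdir -p "$dir"
chmod 700 "$dir"             # the problematic root-only mode
chmod a+rx "$dir"            # the fix: world read + execute
stat -c '%a' "$dir"          # prints 755
```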
Regards,
Mohammed
On Wednesday, 21 November 2018, 19:42:24 CET, Ilango, Arun
<[email protected]> wrote:
Mohammed,
Just to add to what Dmitry said: if you're using your own libfabric, please
make sure it's the latest release (i.e. v1.6.2). You can check the version by
running fi_info --version.
Other things to check:
1. Make sure you have librdmacm package installed.
2. Check if the IPoIB interface of the node has been configured with an IP
address and is pingable from other nodes in the cluster.
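For point 2, a quick sanity check might look like the following (ib0 is only an assumed IPoIB interface name, and the peer address is a placeholder you would substitute):

```shell
# Show the IPoIB interface configuration, if the interface exists.
ip addr show ib0 2>/dev/null || echo "no ib0 interface found"
# Then verify reachability from another node, e.g.:
#   ping -c 3 <peer-ipoib-address>
```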
Thanks,
Arun.
-----Original Message-----
From: Gladkov, Dmitry
Sent: Wednesday, November 21, 2018 10:31 AM
To: Hefty, Sean <[email protected]>; Mohammed Shaheen
<[email protected]>;[email protected];[email protected]
Cc: Ilango, Arun <[email protected]>
Subject: RE: [libfabric-users] intel mpi with libfabric
Hi Mohammed,
Do you use your own version of libfabric?
IMPI 2019 U1 uses its internal libfabric by default.
If you use your own libfabric, please set LD_LIBRARY_PATH to your library and,
if you use a DL provider, set FI_PROVIDER_PATH to the OFI DL provider
directory (<ofi_install_dir>/lib/libfabric); otherwise, unset this variable
(mpivars.sh sets it).
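Put concretely, for an external libfabric build with DL providers the environment would look along these lines (/opt/libfabric is only an example prefix; substitute your actual install location):

```shell
# External libfabric with DL providers: point the dynamic loader at the
# library and libfabric at its provider DSO directory.
export LD_LIBRARY_PATH=/opt/libfabric/lib:${LD_LIBRARY_PATH:-}
export FI_PROVIDER_PATH=/opt/libfabric/lib/libfabric

# For a build with providers compiled in (non-DL), unset it instead,
# since mpivars.sh sets it for IMPI's internal libfabric:
#   unset FI_PROVIDER_PATH
echo "$FI_PROVIDER_PATH"
```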
--
Dmitry
-----Original Message-----
From: Hefty, Sean
Sent: Wednesday, November 21, 2018 8:52 PM
To: Mohammed Shaheen
<[email protected]>;[email protected];[email protected]
Cc: Ilango, Arun <[email protected]>; Gladkov, Dmitry
<[email protected]>
Subject: RE: [libfabric-users] intel mpi with libfabric
Copying ofiwg and key developers for this issue.
- Sean
> I get the following error running a small mpi test program using intel
> mpi 2019 from intel parallel studio cluster edition update 1 (the
> newest) on Mellanox FDR Cluster:
>
>
>
> test.e: prov/verbs/src/ep_rdm/verbs_rdm_cm.c:337:
> fi_ibv_rdm_process_addr_resolved: Assertion `id->verbs == ep->domain-
> >verbs' failed.
>
>
>
> The program hangs on this error message. I installed the newest
> release of libfabric and configured it with only ibverbs support. I
> used the inbox (sles 11 sp4 and sles 12 sp3) ibverbs and rdma
> libraries. I also tried with mellanox ofed to no avail.
>
>
>
>
> Any ideas how to go about it?
>
>
>
>
>
> Regards,
>
> Mohammed
--------------------------------------------------------------------
Joint Stock Company Intel A/O
Registered legal address: Krylatsky Hills Business Park,
17 Krylatskaya Str., Bldg 4, Moscow 121614,
Russian Federation
This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
_______________________________________________
ofiwg mailing list
[email protected]
https://lists.openfabrics.org/mailman/listinfo/ofiwg