The oVirt node-ng installer ISO creates imgbased systems with the baseos
and appstream repos disabled, which is not what you would have on a
regular base-OS install with the oVirt repos added.

So with node, 'dnf install foo' usually fails unless you add an extra
--enablerepo flag, and the repo names seem to change with the OS version.
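
If you're not sure what the repo ids are called on a given node image,
something like this should list them (just a sketch; the exact ids vary
by release):

# list every repo id defined on the host, enabled or not
dnf repolist --all
# or inspect the repo files directly
grep -H '^\[' /etc/yum.repos.d/*.repo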

Here are some old notes I had.

# download the latest mlnx ofed archive, then:
tar xzvf *.tgz
cd *64

# or, if you downloaded the iso image instead:
# mount -o loop MLNX*iso /mnt
# cd /mnt

# ./mlnxofedinstall requires more RPMs to be installed first.
# note: some versions of CentOS use different-case repo names; check the
# contents of the files in /etc/yum.repos.d.
yum --enablerepo=baseos install perl-Term-ANSIColor
yum --enablerepo=baseos --enablerepo=appstream install perl-Getopt-Long tcl gcc-gfortran tcsh tk make
./mlnxofedinstall
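
After the installer finishes, a quick sanity check looks something like
this (a sketch; it assumes the openibd init script and the ofed_info
utility that the MLNX OFED package set normally provides):

# confirm the HCA driver loaded and the IPoIB devices came up
/etc/init.d/openibd restart
/etc/init.d/openibd status
# print the installed OFED version string
ofed_info -s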


On Thu, Aug 5, 2021 at 3:32 PM Vinícius Ferrão <[email protected]>
wrote:

> Hi Edward, it seems that running mlnxofedinstall would do the job,
> although I have some questions.
>
> You mentioned the --enablerepo option, but I didn't find it. There's a
> disable one, so I'm assuming it's enabled by default. Anyway, no repos
> are added after the script runs.
>
> I ran the script with the arguments ./mlnxofedinstall --with-nfsrdma
> -vvv, and everything went fine:
>
> [root@rhvepyc2 mnt]# /etc/init.d/openibd status
>
>   HCA driver loaded
>
> Configured IPoIB devices:
> ib0
>
> Currently active IPoIB devices:
> ib0
> Configured Mellanox EN devices:
>
> Currently active Mellanox devices:
> ib0
>
> The following OFED modules are loaded:
>
>   rdma_ucm
>   rdma_cm
>   ib_ipoib
>   mlx5_core
>   mlx5_ib
>   ib_uverbs
>   ib_umad
>   ib_cm
>   ib_core
>   mlxfw
>
> [root@rhvepyc2 mnt]# rpm -qa | grep -i mlnx
> libibverbs-54mlnx1-1.54103.x86_64
> infiniband-diags-54mlnx1-1.54103.x86_64
> mlnx-ethtool-5.10-1.54103.x86_64
> rdma-core-54mlnx1-1.54103.x86_64
> dapl-utils-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> kmod-mlnx-nfsrdma-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> dapl-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> mlnx-tools-5.2.0-0.54103.x86_64
> libibumad-54mlnx1-1.54103.x86_64
> opensm-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> kmod-kernel-mft-mlnx-4.17.0-1.rhel8u4.x86_64
> ibacm-54mlnx1-1.54103.x86_64
> dapl-devel-static-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> ar_mgr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
> mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> rdma-core-devel-54mlnx1-1.54103.x86_64
> opensm-static-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> srp_daemon-54mlnx1-1.54103.x86_64
> sharp-2.5.0.MLNX20210613.83fe753-1.54103.x86_64
> mlnx-iproute2-5.11.0-1.54103.x86_64
> kmod-knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
> librdmacm-54mlnx1-1.54103.x86_64
> opensm-libs-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> mlnx-ofa_kernel-devel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> dapl-devel-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> dump_pr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
> mlnxofed-docs-5.4-1.0.3.0.noarch
> opensm-devel-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
> librdmacm-utils-54mlnx1-1.54103.x86_64
> mlnx-fw-updater-5.4-1.0.3.0.x86_64
> kmod-mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> libibverbs-utils-54mlnx1-1.54103.x86_64
> ibutils2-2.1.1-0.136.MLNX20210617.g4883fca.54103.x86_64
>
> As a final question, did you select the --add-kernel-support option on
> the script? I couldn't tell the difference between enabling it or not.
>
> Thank you.
>
> On 5 Aug 2021, at 15:20, Vinícius Ferrão <[email protected]>
> wrote:
>
> Hmmm. Running the mlnx_ofed_install.sh script is a pain, but I got your
> idea. I'll do this test right now and report back. Ideally, using the
> repo would guarantee an easy upgrade path between releases, but Mellanox
> is lacking in that regard.
>
> And yes Edward, I want to use the virtual Infiniband interfaces too.
>
> Thank you.
>
> On 5 Aug 2021, at 10:52, Edward Berger <[email protected]> wrote:
>
> I don't know if you can just remove the glusterfs-rdma rpm.
>
> I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it from
> the mellanox tar/iso and running the mellanox install script after
> adding the required dependencies with --enablerepo, which isn't the
> same as adding a repository and running 'dnf install'. So I would try
> that on a test host.
>
> I use it for the 'virtual infiniband' interfaces that get attached to VMs
> as 'host device passthru'.
>
> I'll note the node versions of gluster are 7.8 (node 4.4.4.0 / CentOS
> 8.3) and 7.9 (node 4.4.4.1 / CentOS 8.3), unlike your glusterfs version
> 6.0.x.
>
> I'll be trying to install mellanox ofed on node 4.4.7.1 (CentOS 8 stream)
> soon to see how that works out.
>
>
>
> On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users <[email protected]>
> wrote:
>
>> Hello,
>>
>> Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each
>> other?
>>
>> The real issue is with GlusterFS. It seems to be a Mellanox issue, but
>> I would like to know if there's something we can do to make both play
>> nice on the same machine:
>>
>> [root@rhvepyc2 ~]# dnf update --nobest
>> Updating Subscription Management repositories.
>> Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
>> Dependencies resolved.
>>
>>  Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
>>   - cannot install the best update candidate for package glusterfs-rdma-6.0-49.1.el8.x86_64
>>   - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none of the providers can be installed
>>   - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma provided by glusterfs-rdma-6.0-49.1.el8.x86_64
>>   - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed
>>   - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 6.0-15.el8, but none of the providers can be installed
>>   - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 6.0-20.el8, but none of the providers can be installed
>>   - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.el8, but none of the providers can be installed
>>   - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.2.el8, but none of the providers can be installed
>>   - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
>>   - cannot install both glusterfs-6.0-15.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
>>   - cannot install both glusterfs-6.0-20.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
>>   - cannot install both glusterfs-6.0-37.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
>>   - cannot install both glusterfs-6.0-37.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
>>   - cannot install the best update candidate for package ovirt-host-4.4.7-1.el8ev.x86_64
>>   - cannot install the best update candidate for package glusterfs-6.0-49.1.el8.x86_64
>>
>> ==========================================================================================
>>  Package             Architecture  Version           Repository                               Size
>> ==========================================================================================
>> Installing dependencies:
>>  openvswitch         x86_64        2.14.1-1.54103    mlnx_ofed_5.4-1.0.3.0_base                17 M
>>  ovirt-openvswitch   noarch        2.11-1.el8ev      rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms  8.7 k
>>      replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
>>  unbound             x86_64        1.7.3-15.el8      rhel-8-for-x86_64-appstream-rpms         895 k
>> Skipping packages with conflicts:
>> (add '--best --allowerasing' to command line to force their upgrade):
>>  glusterfs           x86_64        3.12.2-40.2.el8   rhel-8-for-x86_64-baseos-rpms            558 k
>>  glusterfs           x86_64        6.0-15.el8        rhel-8-for-x86_64-baseos-rpms            658 k
>>  glusterfs           x86_64        6.0-20.el8        rhel-8-for-x86_64-baseos-rpms            659 k
>>  glusterfs           x86_64        6.0-37.el8        rhel-8-for-x86_64-baseos-rpms            663 k
>>  glusterfs           x86_64        6.0-37.2.el8      rhel-8-for-x86_64-baseos-rpms            662 k
>> Skipping packages with broken dependencies:
>>  glusterfs-rdma      x86_64        3.12.2-40.2.el8   rhel-8-for-x86_64-baseos-rpms             49 k
>>  glusterfs-rdma      x86_64        6.0-15.el8        rhel-8-for-x86_64-baseos-rpms             46 k
>>  glusterfs-rdma      x86_64        6.0-20.el8        rhel-8-for-x86_64-baseos-rpms             46 k
>>  glusterfs-rdma      x86_64        6.0-37.2.el8      rhel-8-for-x86_64-baseos-rpms             48 k
>>  glusterfs-rdma      x86_64        6.0-37.el8        rhel-8-for-x86_64-baseos-rpms             48 k
>>
>> Transaction Summary
>> ==========================================================================================
>> Install   3 Packages
>> Skip     10 Packages
>>
>> Total size: 18 M
>> Is this ok [y/N]:
>>
>> I really don't care about GlusterFS on this cluster, but Mellanox OFED
>> is much more relevant to me.
>>
>> Thank you all,
>> Vinícius.
>>
>
>
>
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/KX5CE6UGMG5IUYBSISHPV3OATK3D4QJI/
