[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
Oh I got it. --enablerepo is on yum/dnf, and not mlnxofedinstall.

Alright, I'll run mlnxofedinstall without arguments; that should do the job. Thank
you, Edward!

On 5 Aug 2021, at 17:26, Edward Berger <edwber...@gmail.com> wrote:

The oVirt node-ng installer ISO creates imgbased systems with the baseos and
appstream repos disabled, which is not something you would have on a regular
base-OS install with the oVirt repos added.

So on node, 'dnf install foo' usually fails without an extra --enablerepo flag,
and the required repo name seems to change with the OS version.
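Since the exact repo ids (and their capitalization) differ between releases, it can help to read them straight out of the .repo files instead of guessing. A minimal sketch; the sample file below stands in for /etc/yum.repos.d/*.repo:

```shell
# Repo ids are the [section] headers of the .repo files; their exact
# spelling/case is what --enablerepo must match. Sample data stands in
# for a real /etc/yum.repos.d/*.repo file.
cat > /tmp/sample.repo <<'EOF'
[baseos]
name=CentOS Stream 8 - BaseOS
[appstream]
name=CentOS Stream 8 - AppStream
EOF
# print every repo id (the text inside the square brackets)
sed -n 's/^\[\(.*\)\]$/\1/p' /tmp/sample.repo
```

With the ids in hand, `dnf --enablerepo=<id> install ...` works however the particular release spells them.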

here are some old notes I had:

# download the latest MLNX OFED archive, then:
tar xfvz *.tgz
cd *64

mount -o loop MLNX*iso /mnt
cd /mnt

# ./mlnxinstall requires more RPMs to be installed first
# note: some versions of CentOS use different-case repo names; check the
# contents of the /etc/yum.repos.d files
yum --enablerepo baseos install perl-Term-ANSIColor
yum --enablerepo baseos --enablerepo appstream install perl-Getopt-Long tcl gcc-gfortran tcsh tk make
./mlnxinstall

On Thu, Aug 5, 2021 at 3:32 PM Vinícius Ferrão <fer...@versatushpc.com.br> wrote:
Hi Edward, it seems that running mlnxofedinstall would do the job, although I
have some questions.

You mentioned the --enable-repo option, but I didn't find it. There's a disable
one, so I'm assuming it's enabled by default. Anyway, no repos are added after
the script runs.

I've run the script with the arguments ./mlnxofedinstall --with-nfsrdma -vvv,
and everything went fine:

[root@rhvepyc2 mnt]# /etc/init.d/openibd status

  HCA driver loaded

Configured IPoIB devices:
ib0

Currently active IPoIB devices:
ib0
Configured Mellanox EN devices:

Currently active Mellanox devices:
ib0

The following OFED modules are loaded:

  rdma_ucm
  rdma_cm
  ib_ipoib
  mlx5_core
  mlx5_ib
  ib_uverbs
  ib_umad
  ib_cm
  ib_core
  mlxfw

[root@rhvepyc2 mnt]# rpm -qa | grep -i mlnx
libibverbs-54mlnx1-1.54103.x86_64
infiniband-diags-54mlnx1-1.54103.x86_64
mlnx-ethtool-5.10-1.54103.x86_64
rdma-core-54mlnx1-1.54103.x86_64
dapl-utils-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
kmod-mlnx-nfsrdma-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
dapl-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
mlnx-tools-5.2.0-0.54103.x86_64
libibumad-54mlnx1-1.54103.x86_64
opensm-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
kmod-kernel-mft-mlnx-4.17.0-1.rhel8u4.x86_64
ibacm-54mlnx1-1.54103.x86_64
dapl-devel-static-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
ar_mgr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
rdma-core-devel-54mlnx1-1.54103.x86_64
opensm-static-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
srp_daemon-54mlnx1-1.54103.x86_64
sharp-2.5.0.MLNX20210613.83fe753-1.54103.x86_64
mlnx-iproute2-5.11.0-1.54103.x86_64
kmod-knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
librdmacm-54mlnx1-1.54103.x86_64
opensm-libs-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
mlnx-ofa_kernel-devel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
dapl-devel-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
dump_pr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
mlnxofed-docs-5.4-1.0.3.0.noarch
opensm-devel-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
librdmacm-utils-54mlnx1-1.54103.x86_64
mlnx-fw-updater-5.4-1.0.3.0.x86_64
kmod-mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
libibverbs-utils-54mlnx1-1.54103.x86_64
ibutils2-2.1.1-0.136.MLNX20210617.g4883fca.54103.x86_64

As a final question, did you select the --add-kernel-support option on the
script? I couldn't tell the difference between enabling it or not.
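On the --add-kernel-support question: as I understand it (an assumption, not stated in this thread), that flag rebuilds the kernel modules against the running kernel, which matters when the prebuilt kmods target a different kernel line than the one booted. A quick string-level sanity check, with sample values standing in for `uname -r` and the installed kmod name:

```shell
# Hedged sketch: compare the kernel line a kmod was built for with the
# running kernel. The values below are samples (the kmod name is taken
# from the rpm -qa output above); on a real host you would use:
#   running="$(uname -r)"
#   kmod_pkg="$(rpm -q kmod-mlnx-ofa_kernel)"
running="4.18.0-305.el8.x86_64"
kmod_pkg="kmod-mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64"
case "$kmod_pkg" in
  *rhel8u4*) echo "kmod targets RHEL 8.4; should match kernel $running" ;;
  *)         echo "kmod target unclear; --add-kernel-support would rebuild it" ;;
esac
```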

Thank you.

On 5 Aug 2021, at 15:20, Vinícius Ferrão <fer...@versatushpc.com.br> wrote:

Hmmm. Running the mlnx_ofed_install.sh script is a pain, but I got your idea;
I'll do this test right now and report back. Ideally, using the repo would
guarantee an easy upgrade path between releases, but Mellanox is lacking in
this regard.

And yes Edward, I want to use the virtual Infiniband interfaces too.

Thank you.

On 5 Aug 2021, at 10:52, Edward Berger <edwber...@gmail.com> wrote:

I don't know if you can just remove the gluster-rdma rpm.

I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it via the
Mellanox tar/iso and running the Mellanox install script after adding the
required dependencies with --enablerepo, which isn't the same as adding a
repository and 'dnf install'. So I would try that on a test host.

I use it for the 'virtual infiniband' interfaces that get attached to VMs as 
'host device passthru'.

I'll note the node versions of gluster are 7.8 (node 4.4.4.0/CentOS 8.3) and
7.9 (node 4.4.4.1/CentOS 8.3), unlike your glusterfs version 6.0.x.

I'll be trying to install mellanox ofed on node 4.4.7.1 (CentOS 8 stream) soon 
to see how that works out.



On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
Yes, it is deprecated on RHGS 3.5, but I really don't care for Gluster and I
don't use it. What I would like to use is things like NFS over RDMA, which only
Mellanox OFED provides; and the host has other users, so we need MLNX OFED to
get support from Mellanox.

That's why I'm trying to install the MLNX OFED distribution. This is a
development machine, not for production, so we don't care if things break. But
even when I try to force the install of the MLNX OFED packages, things do not
work as expected.

Thank you.
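(For reference, since NFS over RDMA is the goal here: with the nfsrdma module loaded on both ends, a mount typically uses the rdma transport on port 20049. A sketch of a generic fstab entry; the server name and export path are placeholders, not from this thread:)

```
# /etc/fstab -- hypothetical server and export path
server:/export  /mnt/nfsrdma  nfs  proto=rdma,port=20049  0 0
```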


[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Strahil Nikolov via Users
As far as I know rdma is deprecated on glusterfs, but it most probably works.

Best Regards,
Strahil Nikolov
On Thu, Aug 5, 2021 at 5:05, Vinícius Ferrão via Users <users@ovirt.org> wrote:

Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I
would like to know if there's something we can do to make both play nice on
the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

 Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 6.0-20.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.2.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install the best update candidate for package ovirt-host-4.4.7-1.el8ev.x86_64
  - cannot install the best update candidate for package glusterfs-6.0-49.1.el8.x86_64
====================================================================================================
 Package            Architecture  Version           Repository                                 Size
====================================================================================================
Installing dependencies:
 openvswitch        x86_64        2.14.1-1.54103    mlnx_ofed_5.4-1.0.3.0_base                 17 M
 ovirt-openvswitch  noarch        2.11-1.el8ev      rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms    8.7 k
     replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
 unbound            x86_64        1.7.3-15.el8      rhel-8-for-x86_64-appstream-rpms           895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 glusterfs          x86_64        3.12.2-40.2.el8   rhel-8-for-x86_64-baseos-rpms              558 k
 glusterfs          x86_64        6.0-15.el8        rhel-8-for-x86_64-baseos-rpms              658 k
 glusterfs          x86_64        6.0-20.el8        rhel-8-for-x86_64-baseos-rpms              659 k
 glusterfs          x86_64        6.0-37.el8        rhel-8-for-x86_64-baseos-rpms              663 k
 glusterfs          x86_64        6.0-37.2.el8      rhel-8-for-x86_64-baseos-rpms              662 k
Skipping packages with broken dependencies:
 glusterfs-rdma     x86_64        3.12.2-40.2.el8   rhel-8-for-x86_64-baseos-rpms              49 k
 glusterfs-rdma     x86_64        6.0-15.el8        rhel-8-for-x86_64-baseos-rpms              46 k
 glusterfs-rdma     x86_64        6.0-20.el8        rhel-8-for-x86_64-baseos-rpms              46 k
 glusterfs-rdma     x86_64        6.0-37.2.el8      rhel-8-for-x86_64-baseos-rpms              48 k
 glusterfs-rdma
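(One possible mitigation, offered as an untested sketch rather than something from this thread: since the conflict comes from the mlnx-ofed-all meta-packages obsoleting glusterfs-rdma, a per-repo exclude in the MLNX repo definition keeps dnf from ever considering them. The repo id below is taken from the transaction output above; the file layout is an assumption:)

```
# /etc/yum.repos.d/<mlnx repo file> -- assumed layout; the exclude line is the point
[mlnx_ofed_5.4-1.0.3.0_base]
exclude=mlnx-ofed-all mlnx-ofed-all-user-only
```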