[ovirt-users] Re: OVIRT INSTALLATION IN SAS RAID

2022-03-17 Thread muhammad . riyaz
Hi,
The result of running "lspci -n | grep '03:00.0'" is "03:00.0 1000:0060" (RAID
bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)),
which says the driver is kmod-megaraid_sas.
I tried https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.717.02.00... 
but it says "modprobe: ERROR: Could not insert 'megaraid-sas': Invalid
argument".

I also tried the DUDs below:

dd-megaraid_mbox-2.20.5.1-2.el8_1.elrepo.iso
dd-megaraid_mbox-2.20.5.1-3.el8_2.elrepo.iso
dd-megaraid_mbox-2.20.5.1-4.el8_3.elrepo.iso
dd-megaraid_mbox-2.20.5.1-5.el8_4.elrepo.iso
dd-megaraid_mbox-2.20.5.1-6.el8_5.elrepo.iso
dd-megaraid_sas-07.707.50.00-1.el8_0.elrepo.iso
dd-megaraid_sas-07.707.51.00-1.el8_1.elrepo.iso
dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso
dd-megaraid_sas-07.714.04.00-1.el8_3.elrepo.iso
dd-megaraid_sas-07.714.04.00-3.el8_4.elrepo.iso
dd-megaraid_sas-07.717.02.00-1.el8_5.elrepo.iso

but all give the same error.
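As a side note, the ELRepo DeviceIDs page keys off the vendor:device pair from `lspci -n`; a small sketch of extracting that pair and checking it against a module's PCI aliases (the sample line stands in for real `lspci -n` output, and the `modinfo` line is only illustrative):

```shell
# Extract the vendor:device PCI ID pair from an `lspci -n` style line.
# The sample string is an assumption standing in for real lspci output.
sample="03:00.0 0104: 1000:0060 (rev 04)"
pair=$(echo "$sample" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}')
echo "$pair"    # 1000:0060
# On a real system, you could then check whether a module claims this ID:
#   modinfo -F alias megaraid_sas | grep -i 'v00001000d00000060'
```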




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U2CDTRTW4NCMG33QB5KNUITAM36ABVVS/


[ovirt-users] Re: OVIRT INSTALLATION IN SAS RAID

2022-03-17 Thread Gilboa Davara
Hello,

On Wed, Mar 16, 2022 at 11:18 PM Strahil Nikolov via Users 
wrote:

> Check the perl script from https://forums.centos.org/viewtopic.php?t=73634
>
> According to http://elrepo.org/tiki/DeviceIDs you should run "lspci -n |
> grep '03:00.0' " and then search for the vendor:device ID pair .
>
>
> http://elrepoproject.blogspot.com/2019/08/rhel-80-and-support-for-removed-adapters.html?m=1
> there are instructions (and link to video) about dud and how to use it.
> A link to the dud images: https://elrepo.org/linux/dud/el8/x86_64/
>
> As previously mentioned you might need
> https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.717.02.00-1.el8_5.elrepo.iso
>
>
>
> Best Regards,
> Strahil Nikolov
>

In my case the DUD files didn't work (due to a kernel version mismatch) on a
couple of R710s (megaraid) and a generic Intel Xeon box (isci SATA).
As such, I used the manual path:
- On a machine (or VM) running the latest CentOS 8 kernel:
* Download the relevant DUD source RPMs (
https://elrepo.org/linux/elrepo/el8/SRPMS/) and unpack them. (E.g.
https://elrepo.org/linux/elrepo/el8/SRPMS/kmod-megaraid_sas-07.717.02.00-1.el8_5.elrepo.src.rpm
)
* From the module directory, build the missing kernel drivers by hand (if
you're not sure what to do, follow the commands in the %build section of the
DUD spec file) and save the resulting .ko kernel modules
(e.g. megaraid_sas.ko, isci.ko, etc.).
- Start the machine(s) using the CentOS Stream 8 installation USB. Wait for
the first screen but *don't* do anything.
- Switch to a console (Ctrl-Alt-F1).
- Start the network and scp the newly built kernel modules into
/lib/modules/$(uname -r)/extra
- Load the newly copied kernel module using insmod. (E.g. insmod
/lib/modules/$(uname -r)/extra/megaraid_sas.ko)
- Check the kernel log (via dmesg) and make sure the storage controller has
initialized correctly.
- Switch back to the anaconda installer (Ctrl-Alt-F6 or F7).
- Continue installation as usual. *
(* I think I had to manually copy the kernel module to the newly installed
machine and run dracut -f to get it included in the initramfs image, but I'm
not certain.)
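The steps above can be condensed into a rough transcript (the version numbers, paths and host names are examples taken from the SRPM link above, not a tested recipe; adjust for your kernel):

```shell
# Rough sketch of the manual path: build on a matching kernel, load on target.
kver="4.18.0-348.el8.x86_64"       # example; on a real system use: uname -r
echo "build dir: /usr/src/kernels/$kver"
echo "module dest: /lib/modules/$kver/extra/"
# On the build machine (running kernel $kver):
#   rpm -ivh kmod-megaraid_sas-07.717.02.00-1.el8_5.elrepo.src.rpm
#   cd ~/rpmbuild/SOURCES && tar xf megaraid_sas-*.tar* && cd megaraid_sas-*/
#   make -C /usr/src/kernels/$kver M=$PWD modules   # produces megaraid_sas.ko
# On the target, from the installer console (Ctrl-Alt-F1), network up:
#   scp builder:/path/to/megaraid_sas.ko /lib/modules/$kver/extra/
#   insmod /lib/modules/$kver/extra/megaraid_sas.ko
#   dmesg | tail                                    # check controller init
```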

- Gilboa
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G42PQAK7V34TWOTGCSMXTZYRFMM26AVU/


[ovirt-users] Re: can not add Storage Domain using offload iscsi

2022-03-17 Thread Sandro Bonazzola
oVirt 4.3 is EOL, please upgrade to oVirt 4.4 as soon as practical.

On Thu, 17 Mar 2022 at 11:13,  wrote:

> Hi,
>
> some facts:
> ovirt-node 4.3.10 (based on CentOS 7)
> hosts are HP BL460c blades with a network card supporting iSCSI HBA offload
> (there are no iSCSI NICs visible at the OS level, but I had to configure IP
> addresses manually since they were not inherited from the BIOS)
>
> # iscsiadm -m iface
> default tcp
> iser iser
> be2iscsi.c4:34:6b:b3:85:75.ipv4.0
> be2iscsi,c4:34:6b:b3:85:75,172.40.2.21,,iqn.1990-07.com.emulex:ovirt1worker1
> be2iscsi.c4:34:6b:b3:85:75.ipv6.0
> be2iscsi,c4:34:6b:b3:85:75,,,iqn.1990-07.com.emulex:ovirt1worker1
> be2iscsi.c4:34:6b:b3:85:71.ipv6.0
> be2iscsi,c4:34:6b:b3:85:71,,,iqn.1990-07.com.emulex:ovirt1worker1
> be2iscsi.c4:34:6b:b3:85:71.ipv4.0
> be2iscsi,c4:34:6b:b3:85:71,172.40.1.21,,iqn.1990-07.com.emulex:ovirt1worker1
>
> # iscsiadm -m session
> be2iscsi: [1] 172.40.2.1:3260,12 iqn.1992-04.com.emc:cx.ckm00143501947.a0
> (non-flash)
> be2iscsi: [2] 172.40.2.2:3260,6 iqn.1992-04.com.emc:cx.ckm00143501947.b1
> (non-flash)
> be2iscsi: [5] 172.40.1.1:3260,5 iqn.1992-04.com.emc:cx.ckm00143501947.a1
> (non-flash)
> be2iscsi: [6] 172.40.1.2:3260,4 iqn.1992-04.com.emc:cx.ckm00143501947.b0
> (non-flash)
>
> [root@worker1 ~]# multipath -l
> 3600601604a003a00ee4b8ec05aa5ec11 dm-47 DGC ,VRAID
> size=100G features='2 queue_if_no_path retain_attached_hw_handler'
> hwhandler='1 alua' wp=rw
> |-+- policy='service-time 0' prio=0 status=active
> | |- 3:0:2:1 sds 65:32 active undef running
> | `- 4:0:0:1 sdo 8:224 active undef running
> `-+- policy='service-time 0' prio=0 status=enabled
>   |- 3:0:3:1 sdu 65:64 active undef running
>   `- 4:0:1:1 sdq 65:0  active undef running
> ...
>
> target SAN storage is detected, and the exposed LUNs are visible. I can even
> partition them, create filesystems and mount them in the OS, all when doing
> it manually step by step.
>
> when trying to add a Storage Domain over iSCSI, the LUNs/targets are nicely
> visible in the GUI, but after choosing a LUN the domain becomes locked and
> finally enters detached mode. I cannot attach it to the Datacenter.
>
>
> in engine.log similar entry for each host:
>
> 2022-03-16 22:27:40,736+01 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-)
> [e8778752-1bc1-4ba9-b9c0-ce651d35d824] START,
> ConnectStorageServerVDSCommand(HostName = worker1,
> StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1',
> storagePoolId='----', storageType='ISCSI',
> connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31',
> connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0',
> vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'},
> StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2',
> connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0',
> vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnecti
>  ons:{id='87194270-bb0e-49d8-9700-17436f2a3e28', connection='172.40.1.1',
> iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1', vfsType='null',
> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
> iface='null', netIfaceName='null'},
> StorageServerConnections:{id='ef8e2fbd-cbf6-45e9-8e83-f85a50001c2d',
> connection='172.40.2.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b1',
> vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]',
> sendNetworkEventOnFailure='true'}), log id: 317c3ffd
>
> ...
>
> 2022-03-16 22:30:40,836+01 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-)
> [e8778752-1bc1-4ba9-b9c0-ce651d35d824] Command
> 'ConnectStorageServerVDSCommand(HostName = worker1,
> StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1',
> storagePoolId='----', storageType='ISCSI',
> connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31',
> connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0',
> vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'},
> StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2',
> connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0',
> vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnec
>  tions:{id='87194270-bb0e-49d8-9700-17436f2a3e28',
> connection='172.40.1.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1',
> vfsType='null', 

[ovirt-users] Re: So - no updates nor mentions about CVE-2022-0847 (Dirty Pipe)?

2022-03-17 Thread Sandro Bonazzola
Here's the oVirt tracker:
Bug 2061694 - CVE-2022-0847 - kernel: improper initialization of the "flags"
member of the new pipe_buffer [ovirt-4.4]

CentOS Stream 8 hasn't provided a kernel with the fix yet:
https://koji.mbox.centos.org/koji/packageinfo?packageID=866
So on oVirt Node there's nothing we can do other than wait. You're welcome
to ask CentOS Stream to provide a fix.
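For anyone wanting to gate on this locally: CVE-2022-0847 is fixed in stable kernels 5.16.11 / 5.15.25 / 5.10.102, and for CentOS Stream 8 the check reduces to comparing the running kernel NVR against whichever build ships the backport. A sketch (the "fixed" NVR below is a placeholder assumption; take the real one from the tracker or erratum):

```shell
# Compare the running kernel version against a (placeholder) fixed build.
running="4.18.0-365.el8"          # e.g. from: uname -r
fixed="4.18.0-373.el8"            # placeholder; use the real NVR from the erratum
lowest=$(printf '%s\n%s\n' "$running" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$running" ] && [ "$running" != "$fixed" ]; then
    echo "possibly vulnerable"
else
    echo "fix present"
fi
```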


On Thu, 17 Mar 2022 at 16:10,  wrote:

> Sorry if I missed this, but oVirt Engine is impacted. I was hoping to see
> some sort of release or announcement about this. If there was one,
> apologies. RHV has mitigated this:
>
> https://access.redhat.com/errata/RHSA-2022:0841
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/J6OEABN74DGGLVYSMI3WDXNVKYYMV4EH/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N3VH2DI677YIBKGW3ZU244TZLXIXMWNE/


[ovirt-users] Re: Welcome to oVirt 4.5.0 Alpha test day!

2022-03-17 Thread Sandro Bonazzola
Many thanks to:
- Jöran Malek
- Diego Ercolani
- Tomáš Golembiovský

For having joined today's test day and reporting their findings.

And to:
- Benny Zlotnik
- Alfredo Moralejo Alonso
- Yedidyah Bar David

For having assisted in looking into the reported issues.

Today's test day resulted in opening / hitting the following bugs:
*RHEL:*
Bug 2065287 - text installation doesn't set the keymap correctly
Not reported: Node installation with Radeon HD 4350


oVirt:
Bug 2065068 - Too large subtable console errors in engine admin portal
Bug 2065195 - ETL service sampling has encountered an error
Bug 2065294 - ceph support via cinderlib is failing due to too old
python-psycopg2
Bug 2065152 - Implicit CPU pinning for NUMA VMs destroyed because of invalid
CPU policy
Bug 2063112 - imgbased fails to handle grub config on CentOS Stream 9

oVirt Website / Docs:
Add oVirt Node 4.5-pre section #2787

drop Language Support configuration option from node instructions #2790

wrong link to host requirements #2786

oVirt lifecycle points to RHV lifecycle #2785

missing variable substitution on install guide #2784

Table of content has wrong tree rooted on "oVirt Key Components" #2783


The dedicated test day is reaching its end, but that doesn't mean you can't
continue testing.
As a result of today, I think we can say this alpha release has a very high
quality level (except for the ceph storage issue, which should already be
solved by now).

-- 

Sandro Bonazzola
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HIC5BBR3RYCC56XA2J2ASX4NODIMZOBB/


[ovirt-users] So - no updates nor mentions about CVE-2022-0847 (Dirty Pipe)?

2022-03-17 Thread jasonmicron
Sorry if I missed this, but oVirt Engine is impacted. I was hoping to see some 
sort of release or announcement about this. If there was one, apologies. RHV 
has mitigated this:

https://access.redhat.com/errata/RHSA-2022:0841
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J6OEABN74DGGLVYSMI3WDXNVKYYMV4EH/


[ovirt-users] Re: Moving from oVirt 4.4.4 to 4.4.10

2022-03-17 Thread jorgevisentini
That seems coherent to me.
So you would have a new and clean environment.

I would do it this way.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N5QZHQOYEUXFMUAYN43WSQBVUMHPO6VA/


[ovirt-users] Re: oVirt Nodes 'Setting Host state to Non-Operational' - looking for the cause.

2022-03-17 Thread Strahil Nikolov via Users
If there is a bug in gluster, gfid split brain is still possible. Check the
file attributes of the affected file. A stale file handle can be identified in
the FUSE mount point (/rhev/long/path/to/storage/mounted) by the missing user,
group and size (all visible as ???).
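The "all ???" symptom shows up in `ls -l` as unresolvable metadata; a tiny sketch of spotting it (the sample line imitates what a stale handle looks like; the commented `getfattr` line is the usual way to then compare GFIDs on the bricks, with a hypothetical path):

```shell
# Sample `ls -l` output line for a stale file handle (assumed, for illustration):
sample='-?????????? ? ? ? ?            ? broken.img'
if echo "$sample" | grep -q '???'; then
    echo "stale handle suspected: ${sample##* }"
fi
# On each replica brick, the GFID xattr can then be compared directly:
#   getfattr -d -m . -e hex /gluster/brick/path/to/broken.img
```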
Best Regards,
Strahil Nikolov
 
 
Thanks Strahil,

The Environment is as follows:

oVirt Open Virtualization Manager:
Software Version:4.4.9.5-1.el8

oVirt Node:
OS Version: RHEL - 8.4.2105.0 - 3.el8
OS Description: oVirt Node 4.4.6
GlusterFS Version: glusterfs-8.5-1.el8

The Volumes are Arbiter (2+1) volumes so split brain should not be an issue.

Regards

Simon...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AKCCVWH7GRLAVISA2KQAXSMTKTVNVX4/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4HJHZZUVTBRRVGRZPPXACIEAHEX2XJ7/


[ovirt-users] Re: ovirt-node-ng state "Bond status: NONE"

2022-03-17 Thread Ales Musil
On Thu, Mar 17, 2022 at 11:43 AM Renaud RAKOTOMALALA <
renaud.rakotomal...@alterway.fr> wrote:

> Hi Ales,
>
> On Wed, 16 Mar 2022 at 07:11, Ales Musil  wrote:
>
>>
>> [../..]
>>
>
>> I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed
>> by an ovirt-engine version 4.4.10.
>>
>> My cluster is composed of other ovirt-node-ng which have been
>> successively updated from version 4.4.4 to version 4.4.10 without any
>> problem.
>>
>> This new node is integrated normally in the cluster, however when I look
>> at the status of the network part in the tab "Network interface" I see that
>> all interfaces are "down".
>>
>
> Did you try to call "Refresh Capabilities"? It might be the case that the
> engine presents a different state than is on the host after the upgrade.
>
> I tried, and I can see the pull in vdsm.log on my faulty node, but the
> bond/interface states are still "down". I tried a fresh install of the node
> several times with "ovirt-node-ng-installer-4.4.10-2022030308.el8.iso", but
> the issue is still there.
>
>
>
>>
>>> I have a paperclip at the "bond0" interface that says: "Bond status: NONE"
>>>
>>> I compared the content of "/etc/sysconfig/network-script" between an
>>> hypervisor which works and the one which has the problem and I notice that
>>> a whole bunch of files are missing and in particular the "ifup/ifdown"
>>> files. The folder contains only the cluster specific files + the
>>> "ovirtmgmt" interface.
>>>
>>
>> Since 4.4 we don't use initscripts anymore in general, so those files are
>> really not a good indicator of anything. We are using nmstate +
>> NetworkManager; if the connections are correctly presented there,
>> everything should be fine.
>>
>>
>
> NetworkManager shows the interfaces and bond up and running from the node's
> perspective:
>
> nmcli con show --active
> NAME   UUID  TYPE  DEVICE
> ovirtmgmt  6b08c819-6091-44de-9546-X  bridgeovirtmgmt
> virbr0 91cb9d5c-b64d-4655-ac2a-X  bridgevirbr0
> bond0  ad33d8b0-1f7b-cab9-9447-X  bond  bond0
> eno1   abf4c85b-57cc-4484-4fa9-X  ethernet  eno1
> eno2   b186f945-cc80-911d-668c-X  ethernet  eno2
>
>
>
> nmstatectl show returns correct states:
>
> - name: bond0
>   type: bond
>   state: up
>   accept-all-mac-addresses: false
>   ethtool:
> feature:
> [../..]
>   ipv4:
> enabled: false
> address: []
> dhcp: false
>   ipv6:
> enabled: false
> address: []
> autoconf: false
> dhcp: false
>   link-aggregation:
> mode: active-backup
> options:
>   all_slaves_active: dropped
>   arp_all_targets: any
>   arp_interval: 0
>   arp_validate: none
>   downdelay: 0
>   fail_over_mac: none
>   miimon: 100
>   num_grat_arp: 1
>   num_unsol_na: 1
>   primary: eno1
>   primary_reselect: always
>   resend_igmp: 1
>   updelay: 0
>   use_carrier: true
> port:
> - eno1
> - eno2
>   lldp:
> enabled: false
>   mac-address: X
>   mtu: 1500
>
>
> The state for eno1 and eno2 is "up".
>
>
>>> The hypervisor which has the problem seems to be perfectly functional,
>>> ovirt-engine does not raise any problem.
>>>
>>
>> This really sounds like something that a simple call to "Refresh
>> Capabilities" could fix.
>>
>
> I did it several times. Everything is fetched (I checked in the logs), but
> the states are still down for all interfaces... If I do a fresh install of
> 4.4.4, the states shown by rhevm are OK; if I reinstall with 4.4.10, the
> WebUI Hosts / Network Interfaces view is KO.
>
>

That's really strange. I would suggest removing the host completely from
the engine if possible and then adding it again. That should also remove
the host from the DB and clear the references.

Is it only one host that's affected or multiple?

Best regards,
Ales



-- 

Ales Musil

Senior Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.comIM: amusil

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LYYVNPGG5NDJGXHGT6COBXLTQJQJ6PUK/


[ovirt-users] Re: ovirt-node-ng state "Bond status: NONE"

2022-03-17 Thread Renaud RAKOTOMALALA
Hi Ales,

On Wed, 16 Mar 2022 at 07:11, Ales Musil  wrote:

>
> [../..]
>

> I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed
> by an ovirt-engine version 4.4.10.
>
> My cluster is composed of other ovirt-node-ng which have been successively
> updated from version 4.4.4 to version 4.4.10 without any problem.
>
> This new node is integrated normally in the cluster, however when I look
> at the status of the network part in the tab "Network interface" I see that
> all interfaces are "down".
>

Did you try to call "Refresh Capabilities"? It might be the case that the
engine presents a different state than is on the host after the upgrade.

I tried, and I can see the pull in vdsm.log on my faulty node, but the
bond/interface states are still "down". I tried a fresh install of the node
several times with "ovirt-node-ng-installer-4.4.10-2022030308.el8.iso", but
the issue is still there.



>
>> I have a paperclip at the "bond0" interface that says: "Bond status: NONE"
>>
>> I compared the content of "/etc/sysconfig/network-script" between an
>> hypervisor which works and the one which has the problem and I notice that
>> a whole bunch of files are missing and in particular the "ifup/ifdown"
>> files. The folder contains only the cluster specific files + the
>> "ovirtmgmt" interface.
>>
>
> Since 4.4 we don't use initscripts anymore in general, so those files are
> really not a good indicator of anything. We are using nmstate +
> NetworkManager; if the connections are correctly presented there,
> everything should be fine.
>
>

NetworkManager shows the interfaces and bond up and running from the node's
perspective:

nmcli con show --active
NAME   UUID  TYPE  DEVICE
ovirtmgmt  6b08c819-6091-44de-9546-X  bridgeovirtmgmt
virbr0 91cb9d5c-b64d-4655-ac2a-X  bridgevirbr0
bond0  ad33d8b0-1f7b-cab9-9447-X  bond  bond0
eno1   abf4c85b-57cc-4484-4fa9-X  ethernet  eno1
eno2   b186f945-cc80-911d-668c-X  ethernet  eno2



nmstatectl show returns correct states:

- name: bond0
  type: bond
  state: up
  accept-all-mac-addresses: false
  ethtool:
feature:
[../..]
  ipv4:
enabled: false
address: []
dhcp: false
  ipv6:
enabled: false
address: []
autoconf: false
dhcp: false
  link-aggregation:
mode: active-backup
options:
  all_slaves_active: dropped
  arp_all_targets: any
  arp_interval: 0
  arp_validate: none
  downdelay: 0
  fail_over_mac: none
  miimon: 100
  num_grat_arp: 1
  num_unsol_na: 1
  primary: eno1
  primary_reselect: always
  resend_igmp: 1
  updelay: 0
  use_carrier: true
port:
- eno1
- eno2
  lldp:
enabled: false
  mac-address: X
  mtu: 1500


The state for eno1 and eno2 is "up".
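Given that the host-side state looks fine, a useful cross-check is to diff what vdsm itself reports (which is what "Refresh Capabilities" feeds to the engine) against the UI. Parsing the `nmcli` output above is trivial (the sample line is copied from it; the `vdsm-client` call is shown only as a comment, since it needs a live host):

```shell
# Pull device/type out of an `nmcli con show --active` style line.
sample='bond0  ad33d8b0-1f7b-cab9-9447-X  bond  bond0'
set -- $sample                      # word-split into NAME UUID TYPE DEVICE
echo "device=$4 type=$3"            # device=bond0 type=bond
# On the host, vdsm's own view of the bond can be inspected with:
#   vdsm-client Host getCapabilities | grep -A3 '"bond0"'
```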


>> The hypervisor which has the problem seems to be perfectly functional,
>> ovirt-engine does not raise any problem.
>>
>
> This really sounds like something that a simple call to "Refresh
> Capabilities" could fix.
>

I did it several times. Everything is fetched (I checked in the logs), but
the states are still down for all interfaces... If I do a fresh install of
4.4.4, the states shown by rhevm are OK; if I reinstall with 4.4.10, the
WebUI Hosts / Network Interfaces view is KO.


>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WYTRZR5Z57RJAEUXCU256WCADEB6KPOS/


[ovirt-users] can not add Storage Domain using offload iscsi

2022-03-17 Thread lkopyt
Hi,

some facts:
ovirt-node 4.3.10 (based on CentOS 7)
hosts are HP BL460c blades with a network card supporting iSCSI HBA offload
(there are no iSCSI NICs visible at the OS level, but I had to configure IP
addresses manually since they were not inherited from the BIOS)

# iscsiadm -m iface
default tcp
iser iser
be2iscsi.c4:34:6b:b3:85:75.ipv4.0 
be2iscsi,c4:34:6b:b3:85:75,172.40.2.21,,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:75.ipv6.0 
be2iscsi,c4:34:6b:b3:85:75,,,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:71.ipv6.0 
be2iscsi,c4:34:6b:b3:85:71,,,iqn.1990-07.com.emulex:ovirt1worker1
be2iscsi.c4:34:6b:b3:85:71.ipv4.0 
be2iscsi,c4:34:6b:b3:85:71,172.40.1.21,,iqn.1990-07.com.emulex:ovirt1worker1

# iscsiadm -m session
be2iscsi: [1] 172.40.2.1:3260,12 iqn.1992-04.com.emc:cx.ckm00143501947.a0 
(non-flash)
be2iscsi: [2] 172.40.2.2:3260,6 iqn.1992-04.com.emc:cx.ckm00143501947.b1 
(non-flash)
be2iscsi: [5] 172.40.1.1:3260,5 iqn.1992-04.com.emc:cx.ckm00143501947.a1 
(non-flash)
be2iscsi: [6] 172.40.1.2:3260,4 iqn.1992-04.com.emc:cx.ckm00143501947.b0 
(non-flash)

[root@worker1 ~]# multipath -l
3600601604a003a00ee4b8ec05aa5ec11 dm-47 DGC ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 
alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 3:0:2:1 sds 65:32 active undef running
| `- 4:0:0:1 sdo 8:224 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 3:0:3:1 sdu 65:64 active undef running
  `- 4:0:1:1 sdq 65:0  active undef running
...

target SAN storage is detected, and the exposed LUNs are visible. I can even
partition them, create filesystems and mount them in the OS, all when doing it
manually step by step.

when trying to add a Storage Domain over iSCSI, the LUNs/targets are nicely
visible in the GUI, but after choosing a LUN the domain becomes locked and
finally enters detached mode. I cannot attach it to the Datacenter.
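Since the sessions above were established manually, one thing worth checking is whether vdsm is logging in through the offload ifaces at all: the iface name is the first field of each `iscsiadm -m iface` record. A sketch (the sample is the first record from the output above, joined onto one line; the login command is shown only as a comment):

```shell
# First field of an `iscsiadm -m iface` record is the iface name.
sample='be2iscsi.c4:34:6b:b3:85:75.ipv4.0 be2iscsi,c4:34:6b:b3:85:75,172.40.2.21,,iqn.1990-07.com.emulex:ovirt1worker1'
iface=${sample%% *}
echo "$iface"     # be2iscsi.c4:34:6b:b3:85:75.ipv4.0
# A manual login bound to that offload iface would look like:
#   iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00143501947.a0 \
#            -p 172.40.2.1:3260 -I "$iface" --login
```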


in engine.log similar entry for each host:

2022-03-16 22:27:40,736+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-) 
[e8778752-1bc1-4ba9-b9c0-ce651d35d824] START, 
ConnectStorageServerVDSCommand(HostName = worker1, 
StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1',
 storagePoolId='----', storageType='ISCSI', 
connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31',
 connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0', 
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}, 
StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2', 
connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0', 
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnecti
 ons:{id='87194270-bb0e-49d8-9700-17436f2a3e28', connection='172.40.1.1', 
iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1', vfsType='null', 
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', 
iface='null', netIfaceName='null'}, 
StorageServerConnections:{id='ef8e2fbd-cbf6-45e9-8e83-f85a50001c2d', 
connection='172.40.2.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b1', 
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}]', 
sendNetworkEventOnFailure='true'}), log id: 317c3ffd

...

2022-03-16 22:30:40,836+01 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-) 
[e8778752-1bc1-4ba9-b9c0-ce651d35d824] Command 
'ConnectStorageServerVDSCommand(HostName = worker1, 
StorageServerConnectionManagementVDSParameters:{hostId='d009f919-b817-4220-874e-edb0e072faa1',
 storagePoolId='----', storageType='ISCSI', 
connectionList='[StorageServerConnections:{id='4dd97e5d-c162-4997-8eda-3d8881c44e31',
 connection='172.40.1.2', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.b0', 
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}, 
StorageServerConnections:{id='6e52a5bb-0157-4cbe-baa3-cfc8001d35b2', 
connection='172.40.2.1', iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a0', 
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnec
 tions:{id='87194270-bb0e-49d8-9700-17436f2a3e28', connection='172.40.1.1', 
iqn='iqn.1992-04.com.emc:cx.ckm00143501947.a1', vfsType='null', 
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', 
iface='null', netIfaceName='null'}, 
StorageServerConnections:{id='ef8e2fbd-cbf6-45e9-8e83-f85a50001c2d', 
connection='172.40.2.2', 

[ovirt-users] 'check_rhv' plugin for Icinga/Nagios is not maintained anymore ?

2022-03-17 Thread Abhishekh Patil
Hi,

We have a customer who says 'check_rhv' is no longer maintained [A].

Do we have an alternative plugin for Icinga/Nagios?

Thanks,
[A] ==> https://github.com/rk-it-at/check_rhv
-- 

Abhishekh patil

senior technical support engineer, RHC{SA,E},CKA

Red Hat India 

Level-5, Tower X, Cybercity

Magarpatta City, Pune 411013

abpa...@redhat.com M: +91-8446891297 IM: IRC: abpatil
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6DGQKLB7IPNVB27P4SJ4PVOPJA7CCT4/


[ovirt-users] Re: setup Ovirt-node without internet connection

2022-03-17 Thread Yedidyah Bar David
On Thu, Mar 17, 2022 at 10:48 AM david  wrote:
>
> Hi
>
> I am unable to add an oVirt node to the cluster without an internet
> connection. The node was installed from the pre-built ISO image
> ovirt-node-ng-installer-4.4.10-2022030308.el8.iso
> and has no internet connection.
>
> The error message displayed:
> "Host test-node2 installation failed. Task Ensure Python3 is installed for 
> CentOS/RHEL8 hosts failed to execute. Please check logs for more details: 
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20220315080157-test-node2.imp.int-b8648e98-c2eb-481c-8155-38319cb041f7.log"

Dana, can you perhaps have a look? Thanks.
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FH2IJAKVDJB3WM34FM5AT44HLYZHC3HD/


[ovirt-users] setup Ovirt-node without internet connection

2022-03-17 Thread david
Hi

I am unable to add an oVirt node to the cluster without an internet connection.
The node was installed from the pre-built ISO image
ovirt-node-ng-installer-4.4.10-2022030308.el8.iso
and has no internet connection.

The error message displayed:
"Host test-node2 installation failed. Task Ensure Python3 is installed for
CentOS/RHEL8 hosts failed to execute. Please check logs for more details:
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20220315080157-test-node2.imp.int-b8648e98-c2eb-481c-8155-38319cb041f7.log"
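The failing Ansible task tries to ensure Python 3 is installed, which needs a reachable package repository on the node. A common workaround (an assumption on my part, not an official procedure) is to point the node at a local mirror before adding it to the engine; a minimal sketch of such a repo file, with placeholder hostname and paths:

```ini
# /etc/yum.repos.d/local-mirror.repo  (hypothetical local mirror)
[local-baseos]
name=Local CentOS Stream 8 BaseOS mirror
baseurl=http://repo.example.lan/centos/8-stream/BaseOS/x86_64/os/
enabled=1
gpgcheck=0

[local-appstream]
name=Local CentOS Stream 8 AppStream mirror
baseurl=http://repo.example.lan/centos/8-stream/AppStream/x86_64/os/
enabled=1
gpgcheck=0
```

After dropping the file in place, `dnf install python3` on the node should succeed against the mirror, and host deploy can be retried from the engine.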
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/75SXVDVYERYAMDGWPKIEYS7YN5OGVHVT/


[ovirt-users] Re: how to move from hosted-engine to standalone engine

2022-03-17 Thread Yedidyah Bar David
On Wed, Mar 16, 2022 at 5:41 PM Pascal D  wrote:
>
> One issue I have with hosted-engine is that when something goes wrong it has
> a domino effect, because hosted-engine cannot communicate with its database.
> I have been thinking of hosting the oVirt engine separately on a different
> hypervisor and having all my hosts undeployed. However, for efficiency my
> networks are separated by their functions. So my questions are as follows:
>
> 1) is it a good idea to host the engine on a separate kvm

It is, for certain cases.

>
> 2) what networks does this engine need to access? Obviously ovirtmgmt, and
> the display network to access it, but what about storage? Does it need
> access to it, or can it access it through ovirtmgmt and the SPM?

The engine does not need access to the storage network. It does all
storage operations through hosts.

>
> 3) Is there a recipe or howto available to follow

This was discussed several times on this list, also recently - please
check archives.

Some people claimed they had bad results using the search function on
the website. You can also use your favourite search engine - most
allow e.g. 'site:lists.ovirt.org something'.

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RH5DHKDI6HO65CMHEWGDKGXRX45D76RK/


[ovirt-users] Welcome to oVirt 4.5.0 Alpha test day!

2022-03-17 Thread Sandro Bonazzola
Hi, 4.5.0 Alpha was released yesterday!
As for oVirt 4.4 test day we have a trello board at
https://trello.com/b/3FZ7gdhM/ovirt-450-test-day .
If you have troubles accessing the trello board please let me know.

A release management draft page has been created at:
https://www.ovirt.org/release/4.5.0/

If you're willing to help testing the release during the test days please
join the oVirt development mailing list at
https://lists.ovirt.org/archives/list/de...@ovirt.org/ and report your
feedback there.
Please join the trello board for sharing what you're going to test so
others can focus on different areas not covered by your test.
If you don't want to register to trello, please share on the oVirt
development mailing list and we'll add it to the board.
The board is publicly visible also to non-registered users.
Instructions for installing oVirt 4.5.0 Alpha and oVirt 4.5.0 Beta for
testing have been added to the release page
https://www.ovirt.org/release/4.5.0/ and to relevant documentation sections
on the oVirt website.

Professional Services, Integrators and Backup vendors: please run a test
session against your additional services, integrated solutions,
downstream rebuilds and backup solutions accordingly.
If you're not listed here:
https://ovirt.org/community/user-stories/users-and-providers.html
consider adding your company there.

If you're willing to help updating the localization for oVirt 4.5.0 please
follow https://ovirt.org/develop/localization.html

If you're willing to help promoting the oVirt 4.5.0 release you can submit
your banner proposals for the oVirt home page and for the
social media advertising at https://github.com/oVirt/ovirt-site/issues
As an alternative please consider submitting a case study as in
https://ovirt.org/community/user-stories/user-stories.html

Feature owners: please start recording a presentation of your feature for
oVirt Youtube channel: https://www.youtube.com/c/ovirtproject
If you have some new feature requiring community feedback / testing please
add your case under the "Test looking for volunteer" section.

Do you want to contribute to getting ready for this release?
Read more about oVirt community at https://ovirt.org/community/ and join
the oVirt developers https://ovirt.org/develop/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QU262G6YRUOOCSJJE5O7BKUYVETBYDOJ/