[ovirt-users] Re: oVirt Nodes 'Setting Host state to Non-Operational' - looking for the cause.

2022-03-16 Thread simon
Thanks Strahil,

The Environment is as follows:

oVirt Open Virtualization Manager:
Software Version: 4.4.9.5-1.el8

oVirt Node:
OS Version: RHEL - 8.4.2105.0 - 3.el8
OS Description: oVirt Node 4.4.6
GlusterFS Version: glusterfs-8.5-1.el8

The Volumes are Arbiter (2+1) volumes so split brain should not be an issue.
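
For reference, the arbiter layout can be confirmed from any node with something like the following (data03 is one of the volumes named in the logs below; adjust to your volume names):

gluster volume info data03 | grep -E 'Type|Number of Bricks'
# expected output for an arbiter volume, roughly:
#   Type: Replicate
#   Number of Bricks: 1 x (2 + 1) = 3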

Regards

Simon...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2AKCCVWH7GRLAVISA2KQAXSMTKTVNVX4/


[ovirt-users] Re: oVirt Nodes 'Setting Host state to Non-Operational' - looking for the cause.

2022-03-16 Thread Strahil Nikolov via Users
Stale file handle is an indication of a split-brain situation. On a 3-way 
replica, this could only mean a gfid mismatch (the gfid is a unique ID for each 
file in Gluster).
I think those .prob files can be deleted safely, but I am not fully convinced.
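
If you want to check for that, a gfid mismatch can usually be spotted along these lines (volume name and brick path below are placeholders):

gluster volume heal data03 info split-brain
# and, on each node, compare the gfid xattr of the suspect file directly on the brick:
getfattr -n trusted.gfid -e hex /gluster_bricks/data03/data03/<path-to-file>
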
What version of oVirt are you using? What about the Gluster version?
Best Regards,
Strahil Nikolov
 
 
Two days ago I found that two of the three oVirt nodes had been set to 
'Non-Operational'. GlusterFS seemed to be OK from the command line, but the 
oVirt engine web UI was reporting two out of three bricks per volume as down, 
and the event logs were filling up with the following types of messages.


Failed to connect Host ddmovirtprod03 to the Storage Domains data03.
The error message for connection ddmovirtprod03-strg:/data03 returned by VDSM 
was: Problem while trying to mount target
Failed to connect Host ddmovirtprod03 to Storage Server
Host ddmovirtprod03 cannot access the Storage Domain(s) data03 attached to the 
Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod03 to Storage Pool 


Host ddmovirtprod01 reports about one of the Active Storage Domains as 
Problematic.
Host ddmovirtprod01 cannot access the Storage Domain(s) data03 attached to the 
Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod01 to Storage Pool DDM_Production_DC


The following is from the vdsm.log on host01:

[root@ddmovirtprod01 vdsm]# tail -f /var/log/vdsm/vdsm.log | grep "WARN"
2022-03-15 11:37:14,299+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-6c101766-4e5d-40c6-8fa8-0f7e3b3e931e', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:24,313+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-c3fa017b-94dc-47d1-89a4-8ee046509a32', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:34,325+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-e173ecac-4d4d-4b59-a437-61eb5d0beb83', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:44,337+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-baf13698-0f43-4672-90a4-86cecdf9f8d0', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:54,350+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-1e92fdfd-d8e9-48b4-84a9-a2b84fc0d14c', error: 'Stale file handle' (__init__:461)


After trying different methods to resolve the problem without success, I did the following.

1. Moved any VM disks using Storage Domain data03 onto other Storage Domains.
2. Placed the data03 Storage Domain into Maintenance mode.
3. Placed host03 into Maintenance mode, stopping the Gluster services and rebooting.
4. Ensured all bricks were up, the peers were connected and healing had started (example commands below).
5. Once the Gluster volumes were healed I activated host03, at which point host01 also activated.
6. Host01 was showing as disconnected on most bricks, so I rebooted it, which resolved this.
7. I activated Storage Domain data03 without issue.
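
For completeness, the checks in steps 4 and 5 were along these lines (data03 as the example volume):

gluster peer status
gluster volume status data03
gluster volume heal data03 info          # entries still pending heal
gluster volume heal data03 info summary  # per-brick heal counters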

The system has been left for 24 hours with no further issues.

The issue is now resolved, but it would be helpful to know what caused the 
problems with the Storage Domain data03, and where I should look to confirm this.

Regards

Simon...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/55XNGNKOGS3ONWTWDGGJSBORZ2D2MZUT/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TVSGTE5GEIINNZ7QOF6V3PYFRHZTU66S/


[ovirt-users] Re: Replace storage behind ovirt engine

2022-03-16 Thread Strahil Nikolov via Users
I would go this way:
1. Backup.
2. Reinstall the host (as you migrate from 4.3 to 4.4 we need EL8).
3. Use the command provided in the previous e-mail (hosted-engine ...) to deploy to the iSCSI. Ensure the old HE VM is offline and all hosts see the iSCSI before the restore (a quick check is sketched below).
4. If the deployment is OK and the new engine sees the hosts -> storage migrate the VMs from the Gluster to the iSCSI.
Another approach is to move the VMs first and, as the last step, restore the HE on the iSCSI storage.
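
Whether every host actually sees the iSCSI target can be verified up front with something like this (the portal address is a placeholder):

iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260   # target should be listed
iscsiadm -m session                                       # active sessions, once logged in
lsblk                                                     # the LUN should appear as a block device
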
Best Regards,
Strahil Nikolov
 
 
  On Wed, Mar 16, 2022 at 17:24, Yedidyah Bar David wrote:   
On Wed, Mar 16, 2022 at 4:45 PM Demeter Tibor  wrote:
>
> Dear Didi,
> Thank you for your reply.
>
> My glusterfs uses the hosts' internal disks. I have 4 hosts, but the
> glusterfs uses only 3. It is a CentOS 7 based system.
>
> As I think, I have to eliminate the glusterfs first, because I can't upgrade
> the hosts while the engine is running on them. That's why I think the engine
> will need access to them after the reinstall.

Sorry, I do not think I understand your reasoning/plan. Please clarify.

> Is it fine?

If the question is:

"I have a HC hosted-engine+gluster on 3 hosts + another host. I want
to move the hosted-engine (on one of the hosts? not clear) to external
iscsi storage. Will the new engine be able to access the gluster
storage on the existing hosts?"

Then, sadly, I don't know. I *think* it will work - that gluster does
not need the engine to function - but never tried this.

> What pitfalls can I expect in this case?

Many, if you ask me - mainly, because it's a flow probably hardly
anyone ever tried, and it definitely goes out of the set of
assumptions built into the basic design. I strongly suggest to test
first on a test env (can be on nested-kvm VMs, no need for physical
hosts) and to have good backups. But in theory, I can't think of any
concrete point.

Best regards,

>
> Thanks in advance,
> Tibor
>
>
> 
> From: Yedidyah 
> To: Demeter 
> Cc: users 
> Date: Wednesday, 16 March 2022, 15:08 CET
> Subject: Re: [ovirt-users] Replace storage behind ovirt engine
>
> On Wed, Mar 16, 2022 at 1:39 PM Demeter Tibor  wrote:
> >
> > Dear Users,
> >
> > I have to upgrade our hyperconverged oVirt system from 4.3 to 4.4, but
> > meanwhile I would like to change the storage backend under the engine. At
> > the moment it is a Gluster based clustered FS, but I don't really like it,
> > so I would like to change to a hardware based iSCSI storage.
> > I am just wondering whether, when I do the engine reinstall, I can install
> > it to another storage.
> > Is it possible somehow?
>
> The "canonical" way is:
> 1. On current engine machine: engine-backup --file=f1
> 2. (Re)Install a new host (with 4.4?), copy there f1, and:
> hosted-engine --deploy --restore-from-file=f1
>
> This only handles your engine VM, not others. The new engine (with its
> hosted_storage on the new iscsi storage) will need access to the
> existing gluster storage holding the other VMs, if you plan to somehow
> import/migrate/whatever them. If unsure how to continue, perhaps
> clarify your plan with more details.
>
> Good luck,
> --
> Didi
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ALOVHR4JL7L4Y7NQZUECAFNWNVNT/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q4P2F4H4ZI2SV3H6IJOZJLZ3G6VXUZ6Q/


[ovirt-users] Re: OVIRT INSTALLATION IN SAS RAID

2022-03-16 Thread Strahil Nikolov via Users
Check the perl script from https://forums.centos.org/viewtopic.php?t=73634
According to http://elrepo.org/tiki/DeviceIDs you should run "lspci -n | grep 
'03:00.0'" and then search for the vendor:device ID pair.
At 
http://elrepoproject.blogspot.com/2019/08/rhel-80-and-support-for-removed-adapters.html?m=1
there are instructions (and a link to a video) about the dud and how to use it. A link 
to the dud images: https://elrepo.org/linux/dud/el8/x86_64/
As previously mentioned, you might need 
https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.717.02.00-1.el8_5.elrepo.iso
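
To make the lookup concrete (the IDs below are only an example; use whatever your box actually prints):

lspci -n -s 03:00.0
# prints something like:  03:00.0 0104: 1000:0060 (rev 04)
# the vendor:device pair (here 1000:0060) is what you match against the
# DeviceIDs page and the dud image's kmod package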
 
 

Best Regards,
Strahil Nikolov
 
I am checking the mega SAS drivers from elrepo one by one, but I keep getting 
"modprobe: ERROR: Could not insert 'megaraid-sas': Invalid argument". I am 
confused about which driver will support my controller.

03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)
Subsystem: Dell PERC 6/i Integrated RAID Controller
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PTHG6M6Y7OODJJBLFYULVBGWISQFLGDR/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2FJURQ6I2WTO6LNO7UVCZJKAVFCV3ABF/


[ovirt-users] oVirt Nodes 'Setting Host state to Non-Operational' - looking for the cause.

2022-03-16 Thread simon
Two days ago I found that two of the three oVirt nodes had been set to 
'Non-Operational'. GlusterFS seemed to be OK from the command line, but the 
oVirt engine web UI was reporting two out of three bricks per volume as down, 
and the event logs were filling up with the following types of messages.


Failed to connect Host ddmovirtprod03 to the Storage Domains data03.
The error message for connection ddmovirtprod03-strg:/data03 returned by VDSM 
was: Problem while trying to mount target
Failed to connect Host ddmovirtprod03 to Storage Server
Host ddmovirtprod03 cannot access the Storage Domain(s) data03 attached to the 
Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod03 to Storage Pool 


Host ddmovirtprod01 reports about one of the Active Storage Domains as 
Problematic.
Host ddmovirtprod01 cannot access the Storage Domain(s) data03 attached to the 
Data Center DDM_Production_DC. Setting Host state to Non-Operational.
Failed to connect Host ddmovirtprod01 to Storage Pool DDM_Production_DC


The following is from the vdsm.log on host01:

[root@ddmovirtprod01 vdsm]# tail -f /var/log/vdsm/vdsm.log | grep "WARN"
2022-03-15 11:37:14,299+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-6c101766-4e5d-40c6-8fa8-0f7e3b3e931e', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:24,313+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-c3fa017b-94dc-47d1-89a4-8ee046509a32', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:34,325+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-e173ecac-4d4d-4b59-a437-61eb5d0beb83', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:44,337+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-baf13698-0f43-4672-90a4-86cecdf9f8d0', error: 'Stale file handle' (__init__:461)
2022-03-15 11:37:54,350+ WARN (ioprocess/232748) [IOProcess] (6bf1ef03-77e1-423b-850e-9bb6030b590d) Failed to create a probe file: '/rhev/data-center/mnt/glusterSD/ddmovirtprod03-strg:_data03/.prob-1e92fdfd-d8e9-48b4-84a9-a2b84fc0d14c', error: 'Stale file handle' (__init__:461)


After trying different methods to resolve the problem without success, I did the following.

1. Moved any VM disks using Storage Domain data03 onto other Storage Domains.
2. Placed the data03 Storage Domain into Maintenance mode.
3. Placed host03 into Maintenance mode, stopping the Gluster services and rebooting.
4. Ensured all bricks were up, the peers were connected and healing had started.
5. Once the Gluster volumes were healed I activated host03, at which point host01 also activated.
6. Host01 was showing as disconnected on most bricks, so I rebooted it, which resolved this.
7. I activated Storage Domain data03 without issue.

The system has been left for 24 hours with no further issues.

The issue is now resolved, but it would be helpful to know what caused the 
problems with the Storage Domain data03, and where I should look to confirm this.

Regards

Simon...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/55XNGNKOGS3ONWTWDGGJSBORZ2D2MZUT/


[ovirt-users] how to move from hosted-engine to standalone engine

2022-03-16 Thread Pascal D
One issue I have with hosted-engine is that when something goes wrong it has a 
domino effect, because hosted-engine cannot communicate with its database. I 
have been thinking of hosting the oVirt engine separately on a different 
hypervisor and having all my hosts undeployed. However, for efficiency my 
networks are separated by their functions. So my questions are as follows:

1) Is it a good idea to host the engine on a separate KVM?

2) Which networks does this engine need to access? Obviously ovirtmgmt, and the 
display network to access it, but what about storage? Does it need access to it, 
or can it access it through ovirtmgmt and the SPM?

3) Is there a recipe or howto available to follow?

Thanks

Pascal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7N4V2SCLBFTRGOIXFVKLS64NPCLHDHH6/


[ovirt-users] Re: Replace storage behind ovirt engine

2022-03-16 Thread Yedidyah Bar David
On Wed, Mar 16, 2022 at 4:45 PM Demeter Tibor  wrote:
>
> Dear Didi,
> Thank you for your reply.
>
> My glusterfs uses the hosts' internal disks. I have 4 hosts, but the
> glusterfs uses only 3. It is a CentOS 7 based system.
>
> As I think, I have to eliminate the glusterfs first, because I can't upgrade
> the hosts while the engine is running on them. That's why I think the engine
> will need access to them after the reinstall.

Sorry, I do not think I understand your reasoning/plan. Please clarify.

> Is it fine?

If the question is:

"I have a HC hosted-engine+gluster on 3 hosts + another host. I want
to move the hosted-engine (on one of the hosts? not clear) to external
iscsi storage. Will the new engine be able to access the gluster
storage on the existing hosts?"

Then, sadly, I don't know. I *think* it will work - that gluster does
not need the engine to function - but never tried this.

> What pitfalls can I expect in this case?

Many, if you ask me - mainly, because it's a flow probably hardly
anyone ever tried, and it definitely goes out of the set of
assumptions built into the basic design. I strongly suggest to test
first on a test env (can be on nested-kvm VMs, no need for physical
hosts) and to have good backups. But in theory, I can't think of any
concrete point.

Best regards,

>
> Thanks in advance,
> Tibor
>
>
> 
> From: Yedidyah 
> To: Demeter 
> Cc: users 
> Date: Wednesday, 16 March 2022, 15:08 CET
> Subject: Re: [ovirt-users] Replace storage behind ovirt engine
>
> On Wed, Mar 16, 2022 at 1:39 PM Demeter Tibor  wrote:
> >
> > Dear Users,
> >
> > I have to upgrade our hyperconverged oVirt system from 4.3 to 4.4, but
> > meanwhile I would like to change the storage backend under the engine. At
> > the moment it is a Gluster based clustered FS, but I don't really like it,
> > so I would like to change to a hardware based iSCSI storage.
> > I am just wondering whether, when I do the engine reinstall, I can install
> > it to another storage.
> > Is it possible somehow?
>
> The "canonical" way is:
> 1. On current engine machine: engine-backup --file=f1
> 2. (Re)Install a new host (with 4.4?), copy there f1, and:
> hosted-engine --deploy --restore-from-file=f1
>
> This only handles your engine VM, not others. The new engine (with its
> hosted_storage on the new iscsi storage) will need access to the
> existing gluster storage holding the other VMs, if you plan to somehow
> import/migrate/whatever them. If unsure how to continue, perhaps
> clarify your plan with more details.
>
> Good luck,
> --
> Didi
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ALOVHR4JL7L4Y7NQZUECAFNWNVNT/


[ovirt-users] Re: Replace storage behind ovirt engine

2022-03-16 Thread Demeter Tibor
 
Dear Didi, 
Thank you for your reply. 
  
My glusterfs uses the hosts' internal disks. I have 4 hosts, but the glusterfs 
uses only 3. It is a CentOS 7 based system. 

As I think, I have to eliminate the glusterfs first, because I can't upgrade the 
hosts while the engine is running on them. That's why I think the engine will 
need access to them after the reinstall. 
Is it fine? 
What pitfalls can I expect in this case? 
   
 
Thanks in advance, 
Tibor  

 
   

-Original message-

From: Yedidyah 
To: Demeter 
Cc: users 
Date: Wednesday, 16 March 2022, 15:08 CET
Subject: Re: [ovirt-users] Replace storage behind ovirt engine

On Wed, Mar 16, 2022 at 1:39 PM Demeter Tibor  wrote: 
> 
> Dear Users, 
> 
> I have to upgrade our hyperconverged oVirt system from 4.3 to 4.4, but 
> meanwhile I would like to change the storage backend under the engine. At 
> the moment it is a Gluster based clustered FS, but I don't really like it, 
> so I would like to change to a hardware based iSCSI storage. 
> I am just wondering whether, when I do the engine reinstall, I can install 
> it to another storage. 
> Is it possible somehow? 

The "canonical" way is: 
1. On current engine machine: engine-backup --file=f1 
2. (Re)Install a new host (with 4.4?), copy there f1, and: 
hosted-engine --deploy --restore-from-file=f1 

This only handles your engine VM, not others. The new engine (with its 
hosted_storage on the new iscsi storage) will need access to the 
existing gluster storage holding the other VMs, if you plan to somehow 
import/migrate/whatever them. If unsure how to continue, perhaps 
clarify your plan with more details. 

Good luck, 
-- 
Didi 

   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NKBOPMSVIUOKWAQNS6X3LZE4AHA6YZRQ/


[ovirt-users] Re: Replace storage behind ovirt engine

2022-03-16 Thread Yedidyah Bar David
On Wed, Mar 16, 2022 at 1:39 PM Demeter Tibor  wrote:
>
> Dear Users,
>
> I have to upgrade our hyperconverged oVirt system from 4.3 to 4.4, but
> meanwhile I would like to change the storage backend under the engine. At
> the moment it is a Gluster based clustered FS, but I don't really like it,
> so I would like to change to a hardware based iSCSI storage.
> I am just wondering whether, when I do the engine reinstall, I can install
> it to another storage.
> Is it possible somehow?

The "canonical" way is:
1. On current engine machine: engine-backup --file=f1
2. (Re)Install a new host (with 4.4?), copy there f1, and:
hosted-engine --deploy --restore-from-file=f1
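
Spelled out a bit more, with placeholder file names (adjust paths as needed):

# on the current engine VM
engine-backup --mode=backup --file=f1 --log=backup.log
# copy f1 to the freshly (re)installed EL8 host, then run there:
hosted-engine --deploy --restore-from-file=/root/f1
# the deploy questions let you point hosted_storage at the new iSCSI target
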

This only handles your engine VM, not others. The new engine (with its
hosted_storage on the new iscsi storage) will need access to the
existing gluster storage holding the other VMs, if you plan to somehow
import/migrate/whatever them. If unsure how to continue, perhaps
clarify your plan with more details.

Good luck,
--
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WRMH4KBVJ4T4UZJSFVT56NVHVUAHUDOK/


[ovirt-users] Replace storage behind ovirt engine

2022-03-16 Thread Demeter Tibor

Dear Users, 
  
I have to upgrade our hyperconverged oVirt system from 4.3 to 4.4, but 
meanwhile I would like to change the storage backend under the engine. At the 
moment it is a Gluster based clustered FS, but I don't really like it, so I 
would like to change to a hardware based iSCSI storage. 
I am just wondering whether, when I do the engine reinstall, I can install it 
to another storage. 
Is it possible somehow? 
  
Thanks in advance. 
  
Tibor 
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LWFPIARG7JKIIZ6F2CRFEBHJ2P5LICP6/


[ovirt-users] Re: OVIRT INSTALLATION IN SAS RAID

2022-03-16 Thread muhammad . riyaz
I am checking the mega SAS drivers from elrepo one by one, but I keep getting 
"modprobe: ERROR: Could not insert 'megaraid-sas': Invalid argument". I am 
confused about which driver will support my controller.

03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)
Subsystem: Dell PERC 6/i Integrated RAID Controller
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PTHG6M6Y7OODJJBLFYULVBGWISQFLGDR/


[ovirt-users] Re: OVIRT INSTALLATION IN SAS RAID

2022-03-16 Thread muhammad . riyaz
Here is the output after running the lspci command on CentOS 7:


[root@localhost riyaz]# lspci | grep RAID
03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)
[root@localhost riyaz]# lspci -k -s 03:00.0
03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)
Subsystem: Dell PERC 6/i Integrated RAID Controller
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas

What might be the correct elrepo driver for this?
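
For reference, from the pages linked elsewhere in this thread, the dud ISO is normally handed to the installer at boot roughly like this (the USB label below is a placeholder):

# copy the dud ISO to a USB stick, then append to the installer kernel line:
inst.dd=hd:LABEL=MYUSB:/dd-megaraid_sas-07.717.02.00-1.el8_5.elrepo.iso
# alternatively, label the stick OEMDRV and anaconda should pick the
# driver disk up automatically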
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ONJ4WWIYES2ACY3SOIA4UTADZQUJ5YZD/


[ovirt-users] Moving from oVirt 4.4.4 to 4.4.10

2022-03-16 Thread ayoub souihel
Dears,

I hope you are doing well.

I have a cluster of 2 nodes and one standalone Manager, all of them running on 
top of CentOS 8.3, which is already EOL. I would like to upgrade to the latest 
version, which is based on CentOS Stream, but I am really confused about the 
best way to do it. Basically I drafted this action plan, please advise me:
- Redeploy a new engine with CentOS Stream.
- Restore the oVirt backup on this new engine (roughly as sketched after this list).
- Redeploy one host with 4.4.10.
- Add the node to the cluster.
- Move the VMs to the new hosts.
- Upgrade the second node.
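
For the backup/restore part I had roughly the following in mind (file names are placeholders; add --provision-dwh-db if DWH is in use):

# on the old engine
engine-backup --mode=backup --file=engine.bck --log=backup.log
# on the new CentOS Stream engine, before running engine-setup
engine-backup --mode=restore --file=engine.bck --log=restore.log \
              --provision-db --restore-permissions
engine-setup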

Thank you in advance.

Regards,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/627KPNP7OMAPTKDB3JVYRY2EPLUDHUNE/


[ovirt-users] Re: ovirt-node-ng state "Bond status: NONE"

2022-03-16 Thread Ales Musil
On Tue, Mar 15, 2022 at 5:19 PM Renaud RAKOTOMALALA <
renaud.rakotomal...@smile.fr> wrote:

> Hello,
>

Hi,


>
> I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed
> by an ovirt-engine version 4.4.10.
>
> My cluster is composed of other ovirt-node-ng which have been successively
> updated from version 4.4.4 to version 4.4.10 without any problem.
>
> This new node is integrated normally in the cluster, however when I look
> at the status of the network part in the tab "Network interface" I see that
> all interfaces are "down".
>

Did you try to call "Refresh Capabilities"? It might be the case that the
engine presents a different state than what is actually on the host after the upgrade.


> I have a paperclip at the "bond0" interface that says: "Bond state: NONE"
>
> I compared the content of "/etc/sysconfig/network-scripts" between an
> hypervisor which works and the one which has the problem and I notice that
> a whole bunch of files are missing and in particular the "ifup/ifdown"
> files. The folder contains only the cluster specific files + the
> "ovirtmgmt" interface.
>

Since 4.4 we don't use initscripts anymore in general, so those files are
really not a good indicator of anything. We are using nmstate +
NetworkManager; if the connections are correctly presented there,
everything should be fine.
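
If you want to double-check on the host itself, something along these lines should show the bond as NetworkManager/nmstate sees it (bond0 taken from your report):

nmcli device status          # overall device states
nmcli connection show bond0  # the bond profile, if one exists with that name
nmstatectl show bond0        # nmstate's view of the same interface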


>
> The hypervisor which has the problem seems to be perfectly functional,
> ovirt-engine does not raise any problem.
>

This really sounds like something that a simple call to "Refresh
Capabilities" could fix.


>
> Have you already encountered this type of problem?
>
> Cheers,
> Renaud
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGOKO22HWF6OMLDCJW6XAWLE2DNPTQCB/
>


Best regards,
Ales.

-- 

Ales Musil

Senior Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCSTH2IH6E6I4GQ2QXAR2AWUZO5AL6BK/