[ovirt-users] noVNC error.

2021-01-18 Thread tommy
Hi:

 

Using the noVNC console, I can connect to the engine VM.

But when I use the noVNC console to connect to a VM in another datacenter,
it fails.

 

The ovirt-websocket-proxy log is:

 

Jan 19 15:43:32 ooeng.tltd.com ovirt-websocket-proxy.py[1312]:
192.168.10.104 - - [19/Jan/2021 15:43:32] connecting to:
ohost2.tltd.com:5900 (using SSL)

Jan 19 15:43:32 ooeng.tltd.com ovirt-websocket-proxy.py[1312]:
ovirt-websocket-proxy[24096] INFO msg:824 handler exception: [SSL:
UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:618)

 

What could be the reason?
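One way to narrow this down (a hedged diagnostic sketch; host and port are taken from the log above): check whether the VNC server on the destination host actually speaks TLS, since "SSL: UNKNOWN_PROTOCOL" typically means the proxy attempted an SSL handshake against a socket that is not serving TLS.

# from the proxy machine: TLS probe; if this fails while the plain probe connects,
# the host's console is not encrypted (in oVirt this is governed by the cluster's
# "Enable VNC Encryption" setting, if I recall correctly)
openssl s_client -connect ohost2.tltd.com:5900 </dev/null

# plaintext probe for comparison; an unencrypted VNC server greets with "RFB ..."
nc -v ohost2.tltd.com 5900 </dev/null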

 

 



[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Benny Zlotnik
Ceph support is available via Managed Block Storage (tech preview); it
cannot be used instead of Gluster for hyperconverged setups.

Moreover, it is not possible to use a pure Managed Block Storage setup
at all: there has to be at least one regular storage domain in a
datacenter.
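For context, a Managed Block Storage domain is created in the Admin Portal by supplying cinderlib driver options. An illustrative set for the Ceph RBD driver follows; this is a sketch only: the pool, user, and keyring names are invented, and the exact keys should be verified against the Cinder RBD driver documentation for your version.

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_pool=ovirt-volumes
rbd_user=ovirt
rbd_keyring_conf=/etc/ceph/ceph.client.ovirt.keyring
use_multipath_for_image_xfer=true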

On Mon, Jan 18, 2021 at 11:58 AM Shantur Rathore  wrote:
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
> wrote:
>>
>> At 15:51 +0000 on 17.01.2021 (Sun), Shantur Rathore wrote:
>>
>> Hi Strahil,
>>
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>
>> The reason Ceph appeals to me over Gluster is the following.
>>
>> 1. I have more experience with Ceph than Gluster.
>>
>> That is a good reason to pick CEPH.
>>
>> 2. I heard in the Managed Block Storage presentation that it leverages storage 
>> software to offload storage-related tasks.
>> 3. Adding Gluster storage is limited to 3 hosts at a time.
>>
>> Only if you wish the nodes to be both storage and compute. Yet, you can add 
>> as many compute nodes as you wish (they won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>
>> 4. I read that there is a limit of a maximum of 12 hosts in a Gluster setup. No 
>> such limitation if I go via Ceph.
>>
>> Actually, that limit is about Red Hat support for RHHI, not about Gluster + oVirt. 
>> As both oVirt and Gluster are upstream projects, support is 
>> on a best-effort basis from the community.
>>
>> In my initial testing I was able to enable CentOS repositories in Node NG, 
>> but if I remember correctly, there were some librbd versions present in Node 
>> NG which clashed with the version I was trying to install.
>> Does a Ceph hyperconverged setup still make sense?
>>
>> Yes, it does. You have the knowledge to run the Ceph part, yet consider talking 
>> with some of the devs on the list, as there were some changes recently in 
>> oVirt's support for Ceph.
>>
>> Regards
>> Shantur
>>
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
>> wrote:
>>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental, so it would be wise 
>> to consider Gluster also. It has great integration and it's quite easy to 
>> work with.
>>
>>
>> There are users reporting using Ceph with their oVirt, but I can't tell how 
>> good it is.
>> I doubt that oVirt nodes come with Ceph components, so you most probably 
>> will need to use a full-blown distro. In general, using extra software on 
>> oVirt nodes is quite hard.
>>
>> With such a setup, you will need many more nodes than with a Gluster setup, 
>> due to Ceph's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore 
>>  wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only have 
>> one disk, which I plan to partition and use for a hyperconverged setup. As 
>> this is my first oVirt cluster, I need help understanding a few bits.
>>
>> 1. Is a hyperconverged setup possible with Ceph using cinderlib?
>> 2. Can this hyperconverged setup be on oVirt Node Next hosts, or only CentOS?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pitfalls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>

[ovirt-users] Re: ovirt-ha-agent

2021-01-18 Thread Yedidyah Bar David
On Mon, Jan 18, 2021 at 10:29 PM penguin pages  wrote:
>
> I was avoiding reloading the OS. This to me was like "reboot as a fix"... 
> to wipe the environment out and restart, vs. repairing, where I learn how to debug.

There is no simple and reliable way to undo 'hosted-engine --deploy';
it's not designed for that. 'ovirt-hosted-engine-cleanup' is only a
partial solution, and if it does not work for you, an OS reinstall
should usually be quick and easy (especially if you have a PXE setup,
etc.).

This is unlike 'engine-setup'. engine-setup is also used for upgrades,
so it's very important that it cleanly rolls back to a previous clean
state if it runs into a problem. In that regard, 'engine-cleanup' is
not as important, although we do fix bugs in it.

'hosted-engine --deploy' should always be run on a clean, new
system anyway, so if it runs into a problem, it should not be a loss
to reinstall the OS.
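For reference, the recovery path being described reduces to something like this (a sketch; the cleanup step is best-effort, as noted above, and both commands appear verbatim in the threads below):

# best-effort cleanup of a failed hosted-engine deployment
/usr/sbin/ovirt-hosted-engine-cleanup

# if the host is still in a bad state afterwards: reinstall the OS, then redeploy
hosted-engine --deploy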

>
> But after weeks... I am running out of time.

Exactly. It's a tradeoff. It's simply not worth it.

Sorry for the time you wasted. Please reinstall :-).

That said, if you run into concrete bugs that can reliably be
reproduced, including in ovirt-hosted-engine-cleanup, please report
them! Thanks.

Good luck and best regards,
-- 
Didi


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Sandro Bonazzola
On Mon, Jan 18, 2021 at 20:04, Strahil Nikolov <
hunter86...@yahoo.com> wrote:

> Most probably it will be easier if you stick with full-blown distro.
>
> @Sandro Bonazzola can help with CEPH status.
>

Letting the storage team have a voice here :-)
+Tal Nisan, +Eyal Shenitzky, +Nir Soffer


>
> Best Regards,
> Strahil Nikolov

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
It's faster than fuse-rbd, but not faster than qemu.
The main issues are the kernel page cache and client upgrades: for example, on a cluster with 
700 OSDs and 1000 clients, we need to update the client version to get new features. With 
the current oVirt implementation we need to update the kernel and then reboot the host. With librbd 
we just need to update the package and reactivate the host.
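To see which client versions and feature bits are actually connected before forcing an upgrade, something like the following should work (a hedged sketch; 'ceph features' has been available since Luminous):

# on a Ceph monitor: summarize connected clients by release and feature bits
ceph features

# on an oVirt host: list RBD images currently mapped via the kernel client
rbd showmapped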


k

Sent from my iPhone

> On 18 Jan 2021, at 19:13, Shantur Rathore  wrote:
> 
> Thanks for pointing that out to me Konstantin.
> 
> I understand that it would use a kernel client instead of userland rbd lib.
> Isn't it better, as I have seen kernel clients be 20x faster than userland?
> 
> I am probably missing something important here; would you mind detailing that?


[ovirt-users] Re: Guest don't start : Cannot access backing file

2021-01-18 Thread Lionel Caignec
Hi,
it's a hardware SAN (Dell Compellent). The corruption seems to have appeared 12 days ago. 
Before shutting down the system, it was running like a charm.

--
Lionel Caignec


[ovirt-users] Re: Guest don't start : Cannot access backing file

2021-01-18 Thread Strahil Nikolov via Users
Hm... this sounds bad. If it had been deleted by oVirt, it would have asked you whether to 
remove the disk or not, and it would have wiped the VM configuration.

Most probably you have data corruption there. Are you using TrueNAS?


Best Regards,
Strahil Nikolov






On Tuesday, 19 January 2021 at 00:06:15 GMT+2, Lionel Caignec 
 wrote: 





Hi,

I have a big problem: I just shut down (powered off completely) a guest to do a 
cold restart, and at startup the guest says: "Cannot access backing file 
'/rhev/data-center/mnt/blockSD/69348aea-7f55-41be-ae4e-febd86c33855/images/8224b2b0-39ba-44ef-ae41-18fe726f26ca/ca141675-c6f5-4b03-98b0-0312254f91e8'"
When I look from a shell on the hypervisor, the device file is blinking red...

I tried changing the SPM, looking for the device on all hosts, copying the disk, 
etc. ... no way to get my disk back online. It seems oVirt completely lost 
(deleted?) the block device.

Is there a way to manually dump (dd) the device on the command line in order to 
import it back into oVirt?

My environment:
Storage: SAN (managed by oVirt)
ovirt-engine 4.4.3.12-1.el8
Host: CentOS 8.2
VDSM 4.40.26

Thanks for the help; I'm stuck and it's really urgent.

--
Lionel Caignec 


[ovirt-users] Guest don't start : Cannot access backing file

2021-01-18 Thread Lionel Caignec
Hi,

I have a big problem: I just shut down (powered off completely) a guest to do a 
cold restart, and at startup the guest says: "Cannot access backing file 
'/rhev/data-center/mnt/blockSD/69348aea-7f55-41be-ae4e-febd86c33855/images/8224b2b0-39ba-44ef-ae41-18fe726f26ca/ca141675-c6f5-4b03-98b0-0312254f91e8'"
When I look from a shell on the hypervisor, the device file is blinking red...

I tried changing the SPM, looking for the device on all hosts, copying the disk, 
etc. ... no way to get my disk back online. It seems oVirt completely lost 
(deleted?) the block device.

Is there a way to manually dump (dd) the device on the command line in order to 
import it back into oVirt?

My environment:
Storage: SAN (managed by oVirt)
ovirt-engine 4.4.3.12-1.el8
Host: CentOS 8.2
VDSM 4.40.26

Thanks for the help; I'm stuck and it's really urgent.

--
Lionel Caignec 
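On oVirt block storage domains, each disk image lives in an LVM logical volume inside a volume group named after the storage domain UUID, so a manual dump along the lines Lionel asks about could look like this (a hedged sketch using the UUIDs from the error message above; it only helps if the LV metadata still exists, and /backup/disk.raw is just a placeholder destination):

# check whether the VG/LV are still known to LVM
vgs | grep 69348aea-7f55-41be-ae4e-febd86c33855
lvs 69348aea-7f55-41be-ae4e-febd86c33855

# activate the volume and dump it to a file
lvchange -ay 69348aea-7f55-41be-ae4e-febd86c33855/ca141675-c6f5-4b03-98b0-0312254f91e8
dd if=/dev/69348aea-7f55-41be-ae4e-febd86c33855/ca141675-c6f5-4b03-98b0-0312254f91e8 \
   of=/backup/disk.raw bs=1M conv=sparse status=progress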




[ovirt-users] Re: ovirt-ha-agent

2021-01-18 Thread penguin pages
I was avoiding reloading the OS. This to me was like "reboot as a fix"... to 
wipe the environment out and restart, vs. repairing, where I learn how to debug.

But after weeks... I am running out of time.


[ovirt-users] Re: Ovirt reporting dashboard not working

2021-01-18 Thread Strahil Nikolov via Users
Most probably the dwh timestamp is far in the future.

The following is not the correct procedure, but it works:

ssh root@engine
su - postgres
source /opt/rh/rh-postgresql10/enable
psql engine

engine=# select * from dwh_history_timekeeping ;
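If the lastSync row there is indeed in the future, resetting it should let collection resume. A hedged sketch continuing the same psql session; the variable and column names below ('lastSync', var_name, var_datetime) are from memory, so verify them against the select output first:

engine=# update dwh_history_timekeeping
engine-#    set var_datetime = now()
engine-#  where var_name = 'lastSync';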

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021 at 19:22:51 GMT+2, José Ferradeira via Users 
 wrote: 





Hello,

I had a problem with the engine server: the clock changed to 2026, and now I don't 
have any reports on the dashboard.
The version is 4.2.3.8-1.el7

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread Strahil Nikolov via Users
I think it's complaining about the firewall. Try again with firewalld 
running.

Best Regards,
Strahil Nikolov







[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Strahil Nikolov via Users
Most probably it will be easier if you stick with a full-blown distro.

@Sandro Bonazzola can help with the Ceph status.

Best Regards,
Strahil Nikolov







[ovirt-users] Re: VG issue / Non Operational

2021-01-18 Thread Strahil Nikolov via Users
Are you sure that ovirt doesn't still use it (storage domains)?

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021 at 09:11:18 GMT+2, Christian Reiss 
 wrote: 





Update:

I found out that 4a62cdb4-b314-4c7f-804e-8e7275518a7f is an iSCSI target 
outside of Gluster. It is a test that we do not need anymore but can't 
remove. According to:

[root@node03 ~]# iscsiadm -m session
tcp: [1] 10.100.200.20:3260,1 iqn.2005-10.org.freenas.ctl:ovirt-data 
(non-flash)

it's attached, but something is still missing...
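If that target really is unused, a hedged cleanup sketch (target and portal are taken from the session output above; detach/remove the corresponding storage domain in the engine first, since VDSM keeps monitoring any domain the data center still references):

# log out of the stale target and drop its node record
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ovirt-data -p 10.100.200.20:3260 -u
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ovirt-data -p 10.100.200.20:3260 -o delete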


On 17/01/2021 11:45, Strahil Nikolov via Users wrote:
> What is the output of 'lsblk -t' on all nodes?
> 
> Best Regards,
> Strahil Nikolov
> 
> В 11:19 +0100 на 17.01.2021 (нд), Christian Reiss написа:
>> Hey folks,
>>
>> quick (I hope) question: On my 3-node cluster I am swapping out all
>> the
>> SSDs with fewer but higher capacity ones. So I took one node down
>> (maintenance, stop), then removed all SSDs, set up a new RAID, set
>> up
>> lvm and gluster, let it resync. Gluster health status shows no
>> unsynced
>> entries.
>>
>> Upon going from maintenance to online in the oVirt management UI, it goes
>> into non-operational status; the vdsm log on the node shows:
>>
>> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] START
>> getAllVmStats() from=::1,48580 (api:48)
>> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] FINISH
>> getAllVmStats return={'status': {'message': 'Done', 'code': 0},
>> 'statsList': (suppressed)} from=::1,48580 (api:54)
>> 2021-01-17 11:13:29,052+0100 INFO  (jsonrpc/6)
>> [jsonrpc.JsonRpcServer]
>> RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
>> 2021-01-17 11:13:30,420+0100 WARN  (monitor/4a62cdb) [storage.LVM]
>> Reloading VGs failed (vgs=[u'4a62cdb4-b314-4c7f-804e-8e7275518a7f']
>> rc=5
>> out=[] err=['  Volume group "4a62cdb4-b314-4c7f-804e-8e7275518a7f"
>> not
>> found', '  Cannot process volume group
>> 4a62cdb4-b314-4c7f-804e-8e7275518a7f']) (lvm:470)
>> 2021-01-17 11:13:30,424+0100 ERROR (monitor/4a62cdb)
>> [storage.Monitor]
>> Setting up monitor for 4a62cdb4-b314-4c7f-804e-8e7275518a7f failed
>> (monitor:330)
>> Traceback (most recent call last):
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line
>> 327, in _setupLoop
>>      self._setupMonitor()
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line
>> 349, in _setupMonitor
>>      self._produceDomain()
>>    File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159,
>> in
>> wrapper
>>      value = meth(self, *a, **kw)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line
>> 367, in _produceDomain
>>      self.domain = sdCache.produce(self.sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 110, in produce
>>      domain.getRealDomain()
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 51,
>> in getRealDomain
>>      return self._cache._realProduce(self._sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 134, in _realProduce
>>      domain = self._findDomain(sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 151, in _findDomain
>>      return findMethod(sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 176, in _findUnfetchedDomain
>>      raise se.StorageDomainDoesNotExist(sdUUID)
>>
>>
>> I assume it fails due to the changed LVM UUID, right? Can someone
>> fix/change the UUID and get the node back up again? It does not seem
>> to be a major issue, to be honest.
>>
>> I can already see the gluster mount (what oVirt mounts when it brings a
>> node online), and Gluster is happy too.
>>
>> Any help is appreciated!
>>
>> -Chris.
>>

-- 
with kind regards,
mit freundlichen Gruessen,


Christian Reiss


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-18 Thread Matt Snow
Complete logs can be found here:
vdsm.log - https://paste.c-net.org/DownloadPressure
supervdsm.log - https://paste.c-net.org/LaterScandals


[ovirt-users] Ovirt reporting dashboard not working

2021-01-18 Thread José Ferradeira via Users
Hello, 

I had a problem with the engine server: the clock changed to 2026, and now I don't 
have any reports on the dashboard. 
The version is 4.2.3.8-1.el7 

Any idea? 

Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-18 Thread Matt Snow
In case it's useful, here is the mount occurring:

[root@brick setup_debugging_logs]# while :; do mount | grep stumpy ; sleep 1; 
done
   
stumpy:/tanker/ovirt/host_storage on 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)
stumpy:/tanker/ovirt/host_storage on 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)
stumpy:/tanker/ovirt/host_storage on 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)


^C
[root@brick setup_debugging_logs]# 
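For what it's worth, a common cause of NFS storage-domain activation failures is export ownership: oVirt expects the export to be owned by vdsm:kvm (UID/GID 36). A hedged check using the paths from the mount output above:

# on the NFS server (stumpy)
chown 36:36 /tanker/ovirt/host_storage
chmod 0755 /tanker/ovirt/host_storage

# on the host (brick): verify the vdsm user can write through the mount
sudo -u vdsm touch /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/probe
sudo -u vdsm rm /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/probe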


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread penguin pages


After looking into the logs, I think the issue is about the storage where it should 
deploy. The wizard did not seem to focus on that. I assumed it was aware of 
the volume from the previously detected deployment... but...



2021-01-18 10:34:07,917-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Clean 
local storage pools]
2021-01-18 10:34:08,418-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2021-01-18 10:34:08,919-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Destroy local storage-pool {{ he_local_vm_dir | basename }}]
2021-01-18 10:34:09,320-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'Unexpected templating type error occurred on (virsh -c 
qemu:///system?authfile={{ he_libvirt_authfile }} pool-destroy {{ 
he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not 
NoneType', '_ansible_no_log': False}
2021-01-18 10:34:09,421-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error 
occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} 
pool-destroy {{ he_local_vm_dir | basename }}): expected str, bytes or 
os.PathLike object, not NoneType"}
2021-01-18 10:34:09,821-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Undefine local storage-pool {{ he_local_vm_dir | basename }}]
2021-01-18 10:34:10,223-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'Unexpected templating type error occurred on (virsh -c 
qemu:///system?authfile={{ he_libvirt_authfile }} pool-undefine {{ 
he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not 
NoneType', '_ansible_no_log': False}
2021-01-18 10:34:10,323-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error 
occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} 
pool-undefine {{ he_local_vm_dir | basename }}): expected str, bytes or 
os.PathLike object, not NoneType"}
2021-01-18 10:34:10,724-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
2021-01-18 10:34:11,125-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'The task includes an option with an undefined variable. The error was: 
\'local_vm_disk_path\' is undefined\n\nThe error appears to be in 
\'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml\':
 line 16, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nchanged_when: 
true\n  - name: Destroy local storage-pool {{ 
local_vm_disk_path.split(\'/\')[5] }}\n^ here\nWe could be wrong, but this 
one looks like it might be an issue with\nmissing quotes. Always quote template 
expression brackets when they\nstart a value. For instance:\n\n
with_items:\n  - {{ foo }}\n\nShould be written as:\n\nwith_items:\n
  - "{{ foo }}"\n', '_ansible_no_log': False}
2021-01-18 10:34:11,226-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "The task includes an option with an 
undefined variable. The error was: 'local_vm_disk_path' is undefined\n\nThe 
error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml':
 line 16, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nchanged_when: 
true\n  - name: Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] 
}}\n^ here\nWe could be wrong, but this one looks like it might be an issue 
with\nmissing quotes. Always quote template expression brackets when 
they\nstart a value. For instance:\n\nwith_items:\n  - {{ foo 
}}\n\nShould be written as:\n\nwith_items:\n  - \"{{ foo }}\"\n"}
2021-01-18 10:34:11,626-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
2021-01-18 10:34:12,028-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'The task includes an option with an undefined variable. The error was: 
\'local_v

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks for pointing that out to me, Konstantin.

I understand that it would use a kernel client instead of the userland rbd lib.
Isn't that better, as I have seen kernel clients be 20x faster than userland?

I am probably missing something important here; would you mind detailing
that?

Regards,
Shantur


On Mon, Jan 18, 2021 at 3:27 PM Konstantin Shalygin  wrote:

> Beware about Ceph and oVirt Managed Block Storage, current integration is
> only possible with kernel, not with qemu-rbd.
>
>
> k
>
> Sent from my iPhone

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-18 Thread Matt Snow
Hi Didi,
I did log cleanup and am re-running ovirt-hosted-engine-cleanup && 
ovirt-hosted-engine-setup to get you cleaner log files. 

searching for host_storage in vdsm.log...
**snip**
2021-01-18 08:43:18,842-0700 INFO  (jsonrpc/3) [api.host] FINISH getStats 
return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
from=:::192.168.222.53,39612 (api:54)
2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:48)
2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:54)
2021-01-18 08:43:19,964-0700 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:726)
2021-01-18 08:43:20,441-0700 INFO  (jsonrpc/4) [vdsm.api] START 
connectStorageServer(domType=1, spUUID='----', 
conList=[{'password': '', 'protocol_version': 'auto', 'port': '', 
'iqn': '', 'connection': 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 
'false', 'id': '----', 'user': '', 'tpgt': 
'1'}], options=None) from=:::192.168.222.53,39612, 
flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=032afa50-381a-44af-a067-d25bcc224355 (api:48)
2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) 
[storage.StorageServer.MountConnection] Creating directory 
'/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage' (storageServer:167)
2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
directory: /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage mode: None 
(fileUtils:201)
2021-01-18 08:43:20,447-0700 INFO  (jsonrpc/4) [storage.Mount] mounting 
stumpy:/tanker/ovirt/host_storage at 
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage (mount:207)
2021-01-18 08:43:21,271-0700 INFO  (jsonrpc/4) [IOProcessClient] (Global) 
Starting client (__init__:340)
2021-01-18 08:43:21,313-0700 INFO  (ioprocess/51124) [IOProcess] (Global) 
Starting ioprocess (__init__:465)
2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [storage.StorageDomainCache] 
Invalidating storage domain cache (sdc:74)
2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'id': 
'----', 'status': 0}]} 
from=:::192.168.222.53,39612, flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=032afa50-381a-44af-a067-d25bcc224355 (api:54)
2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [vdsm.api] START 
getStorageDomainsList(spUUID='----', 
domainClass=1, storageType='', remotePath='stumpy:/tanker/ovirt/host_storage', 
options=None) from=:::192.168.222.53,39612, 
flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:48)
2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
Refreshing storage domain cache (resize=True) (sdc:80)
2021-01-18 08:43:21,498-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
devices (iscsi:442)
2021-01-18 08:43:21,628-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
devices: 0.13 seconds (utils:390)
2021-01-18 08:43:21,629-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
devices (hba:60)
2021-01-18 08:43:21,908-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
devices: 0.28 seconds (utils:390)
2021-01-18 08:43:21,969-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
multipath devices (multipath:104)
2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
multipath devices: 0.01 seconds (utils:390)
2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
Refreshing storage domain cache: 0.48 seconds (utils:390)
2021-01-18 08:43:22,167-0700 INFO  (tmap-0/0) [IOProcessClient] 
(stumpy:_tanker_ovirt_host__storage) Starting client (__init__:340)
2021-01-18 08:43:22,204-0700 INFO  (ioprocess/51144) [IOProcess] 
(stumpy:_tanker_ovirt_host__storage) Starting ioprocess (__init__:465)
2021-01-18 08:43:22,208-0700 INFO  (jsonrpc/5) [vdsm.api] FINISH 
getStorageDomainsList return={'domlist': []} from=:::192.168.222.53,39612, 
flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:54)
2021-01-18 08:43:22,999-0700 INFO  (jsonrpc/7) [vdsm.api] START 
connectStorageServer(domType=1, spUUID='----', 
conList=[{'password': '', 'protocol_version': 'auto', 'port': '', 
'iqn': '', 'connection': 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 
'false', 'id': 'bc87e1a4-004e-41b4-b569-9e9413e9c027', 'user': '', 'tpgt': 
'1'}], options=None) from=:::192.168.222.53,39612, flow_id=5618fb28, 
task_id=51daa36a-e1cf-479d-a93c-1c87f21ce934 (api:48)
2021-01-18 08:43:23,007-0700 INFO  (jsonrpc/7) [storage.StorageDomainCache] 
Invalidating storage domain cache (sdc:74)
2021-01-

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread penguin pages


Following document to redploy engine...

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/cleaning_up_a_failed_self-hosted_engine_deployment

### From the host which had the engine listed in its inventory ### 
[root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup
 This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Destroy hosted-engine VM ===-
error: failed to get domain 'HostedEngine'

  -=== Stop HA services ===-
  -=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
  -=== Disconnecting the hosted-engine storage domain ===-
  -=== De-configure VDSM networks ===-
ovirtmgmt
 A previously configured management bridge has been found on the system, this 
will try to de-configure it. Under certain circumstances you can loose network 
connection.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
  libvirtd.socket
  libvirtd-ro.socket
  libvirtd-admin.socket
  -=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
  -=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
- removing /etc/ovirt-hosted-engine/hosted-engine.conf
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
  -=== Removing IP Rules ===-
[root@medusa ~]# 
[root@medusa ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  During customization use CTRL-D to abort.
  Continuing will configure this host for serving as hypervisor and 
will create a local VM with a running engine.
  The locally running engine will be used to configure a new storage 
domain and create a VM there.


1) Error about firewall
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 
'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'' failed. The error was: error while evaluating conditional 
(firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be 
in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml':
 line 8, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nregister: 
firewalld_s\n  - name: Enforce firewalld status\n^ here\n"}

### Hmm.. that is dumb.. it's disabled to avoid issues
[root@medusa ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor 
preset: enabled)
   Active: inactive (dead)
 Docs: man:firewalld(1)
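The validate_firewalld pre-check quoted above expects firewalld to be active and not masked, so the deploy aborts when it is stopped. A likely fix before re-running the deployment:

systemctl unmask firewalld
systemctl enable --now firewalld
systemctl is-active firewalld   # should print "active"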


2) Error about ssh to host ovirte01.penguinpages.local 
[ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed 
to connect to the host via ssh: ssh: connect to host 
ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host 
localhost is unreachable", "unreachable": true}

###.. Hmm.. well.. no kidding.. it is supposed to deploy the engine, so the IP should 
be offline until it does. And as the VMs that run DNS are down, I am using a hosts 
file to bootstrap the environment. Not sure what it expects. 
[root@medusa ~]# cat /etc/hosts |grep ovir
172.16.100.31 ovirte01.penguinpages.local ovirte01



Did not go well. 

Attached is deployment details as well as logs. 

Maybe someone can point out what I am doing wrong.  Last time I did this I used 
the HCI wizard.. but the hosted engine dashboard for "Virtualization" in 
cockpit (https://172.16.100.101:9090/ovirt-dashboard#/he) no longer offers a 
deployment UI option.



## Deployment att

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Beware about Ceph and oVirt Managed Block Storage, current integration is only 
possible with kernel, not with qemu-rbd.
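
(In practical terms that means the volume is attached to the host with the kernel
RBD client and handed to the VM as a plain block device, instead of QEMU speaking
librbd directly. A rough sketch of the kernel path, with made-up pool/volume names:

rbd map ovirt-pool/volume-0001   # kernel client maps the image, e.g. as /dev/rbd0
lsblk /dev/rbd0                  # this block device is what the VM consumes
rbd unmap /dev/rbd0              # released again when the disk is detached

So anything that depends on QEMU's librbd path presumably does not apply to the
current integration.)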


k

Sent from my iPhone

> On 18 Jan 2021, at 13:00, Shantur Rathore  wrote:
> 
> 
> Thanks Strahil for your reply.
> 
> Sorry just to confirm,
> 
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
> 
> Thanks,
> Shantur
> 
>> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
>> wrote:
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>> Hi Strahil,
>>> 
>>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>> 
>>> The reasons why Ceph appeals to me over Gluster are the following:
>>> 
>>> 1. I have more experience with Ceph than Gluster.
>> That is a good reason to pick CEPH.
>>> 2. I heard in the Managed Block Storage presentation that it leverages the storage 
>>> software to offload storage-related tasks. 
>>> 3. Adding Gluster storage is limited to 3 hosts at a time.
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
>> as many as you wish as a compute node (won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>>> such limitation if I go via Ceph.
>> Actually, it's about Red Hat support for RHHI, not about Gluster + oVirt. 
>> As both oVirt and Gluster, as used here, are upstream projects, support is 
>> on a best-effort basis from the community.
>>> In my initial testing I was able to enable Centos repositories in Node Ng 
>>> but if I remember correctly, there were some librbd versions present in 
>>> Node Ng which clashed with the version I was trying to install.
>>> Does Ceph hyperconverge still make sense?
>> Yes, it does. You have the knowledge to run the CEPH part; still, consider talking 
>> with some of the devs on the list - as there were some changes recently in 
>> oVirt's support for CEPH.
>> 
>>> Regards
>>> Shantur
>>> 
 On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
 wrote:
 Hi Shantur,
 
 the main question is how many nodes you have.
 Ceph integration is still in development/experimental, so it would be 
 wise to consider Gluster also. It has great integration and it's quite 
 easy to work with.
 
 
 There are users reporting using CEPH with their oVirt, but I can't tell 
 how good it is.
 I doubt that oVirt nodes come with CEPH components, so you will most probably 
 need to use a full-blown distro. In general, putting extra software on 
 oVirt nodes is quite hard.
 
 With such a setup, you will need many more nodes than with a Gluster setup, due to 
 CEPH's requirements.
 
 Best Regards,
 Strahil Nikolov
 
 
 
 
 
 
 On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore 
  wrote: 
 
 
 
 
 
 Hi all,
 
 I am planning my new oVirt cluster on Apple hosts. These hosts can only 
 have one disk, which I plan to partition and use for a hyper-converged setup. 
 As this is my first oVirt cluster, I need help understanding a few bits.
 
 1. Is a hyper-converged setup possible with Ceph using cinderlib?
 2. Can this hyper-converged setup be on oVirt Node Next hosts, or only 
 CentOS?
 3. Can I install cinderlib on oVirt Node Next hosts?
 4. Are there any pitfalls in such a setup?
 
 
 Thanks for your help
 
 Regards,
 Shantur
 
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/
> ___
> Users mailing list -

[ovirt-users] Re: ovirt-ha-agent

2021-01-18 Thread Aries Ahito
It's fine now. I just shut down and started the engine using hosted-engine
--vm-shutdown and hosted-engine --vm-start.

Thank you very much.

On Mon, 18 Jan 2021, 21:19 Yedidyah Bar David,  wrote:

> After you removed the hosts, and before you added them again: Did you
> reinstall the OS? If not, there might be some mess left behind.
> I suggest trying again: Remove a host, reinstall the OS on it, then
> add it back and make sure you choose "Deploy". Then give it some time
> to update (should not be more than a few minutes, let's say 1 hour).
> If it still does not look good, please check/share all relevant logs.
> Thanks.
>
> Best regards,
>
> On Mon, Jan 18, 2021 at 3:14 PM Aries Ahito 
> wrote:
> >
> > I tried removing all the hosts and ran a clean self-hosted-engine
> deployment. Then I tried to add the hosts again, and in the hosted-engine tab I
> ticked the drop-down and chose "deploy".
> >
> > After adding them, all hosts have an active ovirt-ha-agent and a score of
> 3400, but when I try to migrate the hosted engine it keeps failing,
> except for host 8. I can now migrate between host 1 and host 8, but the
> rest of the hosts keep failing..
> >
> > On Sun, Jan 17, 2021 at 4:15 PM Aries Ahito 
> wrote:
> >>
> >> 1. Yes.
> >>
> >> 2. Yes, it could migrate to any of the hosts.
> >>
> >> 3. Yes, we first tried the new oVirt 4.4, then we decided to use fiber
> ports and have them bonded, so we reinstalled everything from scratch.
> >>
> >> 4. Host 1 and the rest of the hosts were completely reinstalled from
> scratch.  Even on our separate GlusterFS storage we deleted the volumes and bricks
> and redid everything. OK, I'll check it later; thanks for your prompt response.
> >>
> >>
> >>
> >>
> >> On Sun, 17 Jan 2021, 16:01 Yedidyah Bar David,  wrote:
> >>>
> >>> On Sun, Jan 17, 2021 at 9:41 AM Aries Ahito 
> wrote:
> >>> >
> >>> > Hi Yedidyah. Yep, we reinstalled everything from scratch, even the
> operating system, so technically it's a fresh install.
> >>> >
> >>> > After successfully setting up the hosted engine we added the rest
> of the nodes. When I check the node status, it says ovirt HA agent N/A.
> >>>
> >>> Let me understand:
> >>>
> >>> 1. You had a hosted-engine setup. You started this by deploying on one
> >>> host, let's call it host1, and then added another one, let's call it
> >>> host2. Right? Perhaps you had more than 2, that's not the point, for
> >>> now.
> >>>
> >>> 2. At this point, you could migrate engine VM from host1 to host2 and
> >>> back and it all worked.
> >>>
> >>> 3. Then you decided to reinstall. It sounds like you reinstalled the
> >>> OS on host1, and then did another hosted-engine deployment there. Did
> >>> you also use backup? --restore-from-file? Did you use new storage?
> >>> Existing? If existing, did you clear it before reinstalling?
> >>>
> >>> 4. Assuming that you somehow got to a good state re host1, host2 now
> >>> still points at the old hosted-engine storage domain, so does not
> >>> "see" the new deployment. You should reinstall the OS there, and then
> >>> add it as a host to the new engine. Make sure you mark the checkbox
> >>> "hosted-engine" in "Add host" dialog.
> >>>
> >>> Best regards,
> >>>
> >>> >
> >>> > On Sun, 17 Jan 2021, 15:32 Yedidyah Bar David, 
> wrote:
> >>> >>
> >>> >> On Sat, Jan 16, 2021 at 4:31 AM Ariez Ahito <
> aristotleah...@gmail.com> wrote:
> >>> >> >
> >>> >> > Last Dec I installed the hosted engine and it seemed to be working; we could
> migrate the engine to a different host. But we needed to reinstall everything
> because of additional Gluster configuration.
> >>> >> > So we installed the hosted engine again, but on checking, we cannot
> migrate the engine to other hosts, and the ovirt-ha-agent and
> ovirt-ha-broker status is inactive (dead). What are we missing?
> >>> >>
> >>> >> Please clarify what exact steps you took.
> >>> >>
> >>> >> Did you reinstall everything? How?
> >>> >>
> >>> >> Did you reinstall just one host? How?
> >>> >>
> >>> >> Best regards,
> >>> >> --
> >>> >> Didi
> >>> >>
> >>>
> >>>
> >>> --
> >>> Didi
> >>>
> >
> >
> > --
> > Aristotle D. Ahito
> > --
> >
>
>
> --
> Didi
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z3JMZLZMHKC3JFDHKL2DAHNKVQ4TRV2B/


[ovirt-users] Re: ovirt-ha-agent

2021-01-18 Thread Yedidyah Bar David
After you removed the hosts, and before you added them again: Did you
reinstall the OS? If not, there might be some mess left behind.
I suggest trying again: Remove a host, reinstall the OS on it, then
add it back and make sure you choose "Deploy". Then give it some time
to update (should not be more than a few minutes, let's say 1 hour).
If it still does not look good, please check/share all relevant logs.
Thanks.

Best regards,

On Mon, Jan 18, 2021 at 3:14 PM Aries Ahito  wrote:
>
> I tried removing all the hosts and ran a clean self-hosted-engine deployment. 
> Then I tried to add the hosts again, and in the hosted-engine tab I ticked the drop-down 
> and chose "deploy".
>
> After adding them, all hosts have an active ovirt-ha-agent and a score of 3400, 
> but when I try to migrate the hosted engine it keeps failing, except for 
> host 8. I can now migrate between host 1 and host 8, but the rest of the 
> hosts keep failing..
>
> On Sun, Jan 17, 2021 at 4:15 PM Aries Ahito  wrote:
>>
>> 1. Yes.
>>
>> 2. Yes, it could migrate to any of the hosts.
>>
>> 3. Yes, we first tried the new oVirt 4.4, then we decided to use fiber ports 
>> and have them bonded, so we reinstalled everything from scratch.
>>
>> 4. Host 1 and the rest of the hosts were completely reinstalled from scratch.  
>> Even on our separate GlusterFS storage we deleted the volumes and bricks and redid 
>> everything. OK, I'll check it later; thanks for your prompt response.
>>
>>
>>
>>
>> On Sun, 17 Jan 2021, 16:01 Yedidyah Bar David,  wrote:
>>>
>>> On Sun, Jan 17, 2021 at 9:41 AM Aries Ahito  
>>> wrote:
>>> >
>>> > Hi Yedidyah. Yep, we reinstalled everything from scratch, even the operating 
>>> > system, so technically it's a fresh install.
>>> >
>>> > After successfully setting up the hosted engine we added the rest of 
>>> > the nodes. When I check the node status, it says ovirt HA agent N/A.
>>>
>>> Let me understand:
>>>
>>> 1. You had a hosted-engine setup. You started this by deploying on one
>>> host, let's call it host1, and then added another one, let's call it
>>> host2. Right? Perhaps you had more than 2, that's not the point, for
>>> now.
>>>
>>> 2. At this point, you could migrate engine VM from host1 to host2 and
>>> back and it all worked.
>>>
>>> 3. Then you decided to reinstall. It sounds like you reinstalled the
>>> OS on host1, and then did another hosted-engine deployment there. Did
>>> you also use backup? --restore-from-file? Did you use new storage?
>>> Existing? If existing, did you clear it before reinstalling?
>>>
>>> 4. Assuming that you somehow got to a good state re host1, host2 now
>>> still points at the old hosted-engine storage domain, so does not
>>> "see" the new deployment. You should reinstall the OS there, and then
>>> add it as a host to the new engine. Make sure you mark the checkbox
>>> "hosted-engine" in "Add host" dialog.
>>>
>>> Best regards,
>>>
>>> >
>>> > On Sun, 17 Jan 2021, 15:32 Yedidyah Bar David,  wrote:
>>> >>
>>> >> On Sat, Jan 16, 2021 at 4:31 AM Ariez Ahito  
>>> >> wrote:
>>> >> >
>>> >> > Last Dec I installed the hosted engine and it seemed to be working; we could migrate 
>>> >> > the engine to a different host. But we needed to reinstall everything 
>>> >> > because of additional Gluster configuration.
>>> >> > So we installed the hosted engine again, but on checking, we cannot 
>>> >> > migrate the engine to other hosts, and the ovirt-ha-agent and 
>>> >> > ovirt-ha-broker status is inactive (dead). What are we missing?
>>> >>
>>> >> Please clarify what exact steps you took.
>>> >>
>>> >> Did you reinstall everything? How?
>>> >>
>>> >> Did you reinstall just one host? How?
>>> >>
>>> >> Best regards,
>>> >> --
>>> >> Didi
>>> >>
>>>
>>>
>>> --
>>> Didi
>>>
>
>
> --
> Aristotle D. Ahito
> --
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J254RBL2GZAY47HFH62KRUE6QNWJY5YT/


[ovirt-users] Re: encrypted GENEVE traffic

2021-01-18 Thread Pavel Nakonechnyi
Dear oVirt team,

On Thursday, 12 December 2019 16:53:41 CET Pavel Nakonechnyi wrote:
> 
> > > however, I was not able to find any clues where in particular it is
> > > implemented...
> > > 
> > > Once this is understood, it will be possible to consider altering the
> > > corresponding code to include ipsec-related options.
> > 
> > This would be amazing!
> 
> I have added some notes on my attempts to the aforementioned bug (https://
> bugzilla.redhat.com/show_bug.cgi?id=1782056). So, it appears to be almost 
> working.

Could someone please clarify the status of the discussed issue? The Bugzilla entry 
https://bugzilla.redhat.com/show_bug.cgi?id=1782056
recently received a strange update with a link to the Red Hat knowledge base. Does 
this mean that the discussed functionality actually works, or that it can be made 
to work with some effort?
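
(For anyone else following along: upstream OVN can encrypt its GENEVE tunnels with
IPsec. Assuming the OVN shipped with oVirt is recent enough and the
ovn-ipsec/libreswan pieces are installed, the knob looks roughly like this -
untested on oVirt itself:

ovn-nbctl set nb_global . ipsec=true   # ask OVN to run its tunnels over IPsec
ovn-nbctl get nb_global . ipsec        # verify the flag took

Whether ovirt-provider-ovn tolerates this being set behind its back is exactly the
open question in the bug.)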


--
 WBR, Pavel



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DQ7Z4IV5JJQ2VOF5XYWE2FSF47OHP34T/


[ovirt-users] VM Disks order

2021-01-18 Thread Erez Zarum
When attaching a disk it is not possible to set the disk order, nor to modify the 
order later.
Example:
A new VM is provisioned with 5 disks; Disk0 is the OS, and further disks are 
attached in order up to Disk4.
Removing Disk3 and later re-attaching it does not guarantee it will come back 
as Disk3.
On most other platforms it is possible to set the order.

Am I missing something? If not, is there a plan to add this feature?
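
(As far as I can tell, the REST API only exposes per-attachment properties such as
bootable, interface and active, not an explicit index, so the closest thing today is
inspecting what the engine reports; the engine FQDN, credentials and VM id below are
placeholders:

curl -sk -u admin@internal:PASSWORD \
  "https://engine.example.com/ovirt-engine/api/vms/VM_ID/diskattachments"

Since the guest may also enumerate devices in its own order, pinning disks by
serial/WWN inside the OS tends to be more reliable than relying on attachment
order.)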
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7TPSXUQ4WKVAHUP4QV5GITAXFF2BJBYY/


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks Strahil for your reply.

Sorry just to confirm,

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph
changes?

Thanks,
Shantur

On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
wrote:

> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>
> Hi Strahil,
>
> Thanks for your reply, I have 16 nodes for now but more on the way.
>
> The reasons why Ceph appeals to me over Gluster are the following:
>
> 1. I have more experience with Ceph than Gluster.
>
> That is a good reason to pick CEPH.
>
> 2. I heard in the Managed Block Storage presentation that it leverages the storage
> software to offload storage-related tasks.
> 3. Adding Gluster storage is limited to 3 hosts at a time.
>
> Only if you wish the nodes to be both Storage and Compute. Yet, you can
> add as many as you wish as a compute node (won't be part of Gluster) and
> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
>
> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No
> such limitation if I go via Ceph.
>
> Actually, it's about Red Hat support for RHHI, not about Gluster +
> oVirt. As both oVirt and Gluster, as used here, are upstream projects,
> support is on a best-effort basis from the community.
>
> In my initial testing I was able to enable Centos repositories in Node Ng
> but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
> Does Ceph hyperconverge still make sense?
>
> Yes, it does. You have the knowledge to run the CEPH part; still, consider
> talking with some of the devs on the list - as there were some changes
> recently in oVirt's support for CEPH.
>
> Regards
> Shantur
>
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
> wrote:
>
> Hi Shantur,
>
> the main question is how many nodes you have.
> Ceph integration is still in development/experimental, so it would be
> wise to consider Gluster also. It has great integration and it's quite
> easy to work with.
>
>
> There are users reporting using CEPH with their oVirt, but I can't tell
> how good it is.
> I doubt that oVirt nodes come with CEPH components, so you will most probably
> need to use a full-blown distro. In general, putting extra software on
> oVirt nodes is quite hard.
>
> With such a setup, you will need many more nodes than with a Gluster setup, due
> to CEPH's requirements.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <
> shantur.rath...@gmail.com> wrote:
>
>
>
>
>
> Hi all,
>
> I am planning my new oVirt cluster on Apple hosts. These hosts can only
> have one disk, which I plan to partition and use for a hyper-converged setup.
> As this is my first oVirt cluster, I need help understanding a few bits.
>
> 1. Is a hyper-converged setup possible with Ceph using cinderlib?
> 2. Can this hyper-converged setup be on oVirt Node Next hosts, or only
> CentOS?
> 3. Can I install cinderlib on oVirt Node Next hosts?
> 4. Are there any pitfalls in such a setup?
>
>
> Thanks for your help
>
> Regards,
> Shantur
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6WBVRC4GJTAIL3XYPJEEYGOBCCNZY4ZV/


[ovirt-users] Re: VG issue / Non Operational

2021-01-18 Thread Christian Reiss

Update & Fix:

There were leftover filter entries in both

  /etc/lvm/lvm.conf and
  /etc/multipath.conf

that worked well with the Gluster LVM but broke the iSCSI LVM mounts.

Fixed the entries, rebooted the server. Now it works and it is back up.
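
For anyone hitting the same thing, a sketch of how to spot the leftovers and
regenerate the filter; recent oVirt ships a vdsm-tool helper for this, assuming it
is present on your version:

grep -nE '^[^#]*(global_)?filter' /etc/lvm/lvm.conf   # spot leftover filter rules
vdsm-tool config-lvm-filter                           # proposes the vdsm-recommended filter

Any blacklist stanzas in /etc/multipath.conf still need to be reviewed by hand.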

Cheerio!

-Chris.

On 18/01/2021 08:10, Christian Reiss wrote:

Update:

I found out that 4a62cdb4-b314-4c7f-804e-8e7275518a7f is an iSCSI target 
outside of Gluster. It is a test we do not need anymore but can't 
remove. According to:


[root@node03 ~]# iscsiadm -m session
tcp: [1] 10.100.200.20:3260,1 iqn.2005-10.org.freenas.ctl:ovirt-data 
(non-flash)


it's attached, but something is still missing...
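
If the goal is to get rid of the stale test target rather than revive it, logging
out of the session and deleting the node record usually does it (portal and IQN
taken from the session listing above; the matching storage domain also has to be
removed/destroyed on the engine side):

iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ovirt-data -p 10.100.200.20:3260 --logout
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ovirt-data -p 10.100.200.20:3260 -o delete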


On 17/01/2021 11:45, Strahil Nikolov via Users wrote:

What is the output of 'lsblk -t' on all nodes?

Best Regards,
Strahil Nikolov

At 11:19 +0100 on 17.01.2021 (Sun), Christian Reiss wrote:

Hey folks,

quick (I hope) question: On my 3-node cluster I am swapping out all the
SSDs for fewer but higher-capacity ones. So I took one node down
(maintenance, stop), then removed all SSDs, set up a new RAID, set up
LVM and Gluster, and let it resync. Gluster health status shows no
unsynced entries.

Upon bringing it from maintenance to online in oVirt management it goes
into non-operational status; the vdsm log on the node shows:

2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] START getAllVmStats() from=::1,48580 (api:48)
2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,48580 (api:54)
2021-01-17 11:13:29,052+0100 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2021-01-17 11:13:30,420+0100 WARN  (monitor/4a62cdb) [storage.LVM] Reloading VGs failed (vgs=[u'4a62cdb4-b314-4c7f-804e-8e7275518a7f'] rc=5 out=[] err=['  Volume group "4a62cdb4-b314-4c7f-804e-8e7275518a7f" not found', '  Cannot process volume group 4a62cdb4-b314-4c7f-804e-8e7275518a7f']) (lvm:470)
2021-01-17 11:13:30,424+0100 ERROR (monitor/4a62cdb) [storage.Monitor] Setting up monitor for 4a62cdb4-b314-4c7f-804e-8e7275518a7f failed (monitor:330)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 327, in _setupLoop
    self._setupMonitor()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 349, in _setupMonitor
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 367, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in produce
    domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in _findDomain
    return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)


I assume it fails due to the changed LVM UUID, right? Can someone tell me how to
fix/change the UUID and get the node back up again? It does not seem to
be a major issue, to be honest.
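
A quick way to check whether the host can see that VG at all (a missing VG here
usually means the LUN backing it is not visible to the host, rather than a UUID
mismatch):

pvscan --cache                             # refresh LVM's view of the devices
vgs 4a62cdb4-b314-4c7f-804e-8e7275518a7f   # does the VG exist on this host?
lsblk -t                                   # the output Strahil asked for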

I can see the gluster mount (what ovirt mounts when it onlines a
node)
already, and gluster is happy too.

Any help is appreciated!

-Chris.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LS4OEXZOE6UA4CDQYXFKC3TZCCO42SU4/ 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3IYV67D7JBE3CK6SQ7SEUUNRVWU5QNYK/ 






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TZLQ5HPNZBTHLGPODFT3K6KZE6EKJP3Q/



--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss



OpenPGP_0x44E29126ABCD43C5.asc
D

[ovirt-users] Re: How to configure Power Management Fence Protocol for Libvirtd VM ?

2021-01-18 Thread tommy
But on every KVM host, the fence_xvm command succeeds.

 

[root@ohost1 ~]# fence_xvm -o list

1.ovs1   7fd9b01e-236c-4d08-9c07-ad0b710139e2 off

1.ovs2   6b0e73b1-649a-470d-9c1b-ca6919e0514d off

2.host1  476e5157-1211-4701-bf0d-425ebb251817 off

2.host2  f7523d48-37dd-45ac-8c5d-dc20300f42db off

3.ohost1 d2b5c362-9c2e-49fd-adef-aebe80716c22 on

3.ohost2 b192cb11-1be8-41fa-b156-5363513f6478 on

3.ohost3 b83f1fe4-44ee-45fc-87e5-c79d2ac55fb1 on

4.foreman    6c5ee3a0-bdc6-4bd6-bd01-78c497c387ce off

4.foreman2   0715da46-abf5-49df-9057-d7baaaf2f500 off

[root@ohost1 ~]#

[root@ohost1 ~]#

[root@ohost1 ~]#

[root@ohost1 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H
3.ohost1 -o status

Status: ON

[root@ohost1 ~]#
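
Since the manual test works here, one thing worth checking (just a guess) is whether
the same multicast request also succeeds from the hosts the engine picks as fence
proxies, because the agent runs on the proxy, not on the target:

[root@ohost2 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -o list
[root@ohost3 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H 3.ohost1 -o status

Note the engine-side agent below was configured with ip=192.168.10.12, while the
working manual test uses the 225.0.0.12 multicast address; that mismatch alone
could explain the JSON-RPC error.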

From: users-boun...@ovirt.org  On Behalf Of tommy
Sent: Monday, January 18, 2021 2:08 PM
To: 'users' 
Subject: [ovirt-users] Re: How to configure Power Management Fence Protocol
for Libvirtd VM ?

It raises a JSON-RPC error.

 

The engine log is:

 

2021-01-18 14:05:33,181+08 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-5) [a19cd4a1-58d4-407b-ae6d-9ee11d6d5eb2] EVENT_ID:
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
ohost1.tltd.com.Internal JSON-RPC error

2021-01-18 14:05:33,181+08 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-5) [a19cd4a1-58d4-407b-ae6d-9ee11d6d5eb2] FINISH, FenceVdsVDSCommand,
return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN',
message='Internal JSON-RPC error'}, log id: 5b821a80

2021-01-18 14:05:33,191+08 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-5) [a19cd4a1-58d4-407b-ae6d-9ee11d6d5eb2] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power
management status on Host ohost1.tltd.com using Proxy Host ohost3.tltd.com
and Fence Agent xvm:192.168.10.12 failed.

2021-01-18 14:05:33,191+08 WARN
[org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-5)
[a19cd4a1-58d4-407b-ae6d-9ee11d6d5eb2] Fence action failed using proxy host
'ohost3.tltd.com', trying another proxy

2021-01-18 14:05:33,238+08 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-5) [a19cd4a1-58d4-407b-ae6d-9ee11d6d5eb2] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED(9,020), Executing power
management status on Host ohost1.tltd.com using Proxy Host ohost2.tltd.com
and Fence Agent xvm:192.168.10.12.

2021-01-18 14:05:33,240+08 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-5) [a19cd4a1-58d4-407b-ae6d-9ee11d6d5eb2] START,
FenceVdsVDSCommand(HostName = ohost2.tltd.com,
FenceVdsVDSCommandParameters:{hostId='e2333e4e-cc5b-47c0-a007-541ae5aac2df',
targetVdsId='0dc3ca3e-1643-4c92-aa1f-4860f3a3c3fe', action='STATUS',
agent='FenceAgent:{id='null', hostId='null', order='1', type='xvm',
ip='192.168.10.12', port='1229', user='root', password='***',
encryptOptions='false', options='ipport=1229'}', policy='null'}), log id:
69fdd7b2

From: users-boun...@ovirt.org 
mailto:users-boun...@ovirt.org> > On Behalf Of
tommy
Sent: Monday, January 18, 2021 12:24 PM
To: 'users' mailto:users@ovirt.org> >
Subject: [ovirt-users] How to configure Power Management Fence Protocol for
Libvirtd VM ?

 

Hi, everyone:

 

My test environment runs on Ubuntu libvirtd servers; I have configured QEMU VMs to
act as the physical hosts. The host list is:

 

root@ubts1:~# virsh

Welcome to virsh, the virtualization interactive terminal.

 

Type:  'help' for help with commands

   'quit' to quit

virsh # list

Id   NameState

---

24   3.ooengh1   running

25   3.ooengh2   running

 

 

root@ubts2:~# virsh

Welcome to virsh, the virtualization interactive terminal.

 

Type:  'help' for help with commands

   'quit' to quit

virsh # list

Id   Name   State

--

26   3.ohost1   running

27   3.ohost2   running

28   3.ohost3   running

 

The ooengh1 and ooengh2 VMs are configured for the hosted engine, and ohost1, ohost2
and ohost3 are configured as KVM servers.

 

Now I want to test the Power Management service in my test environment. How should
I choose the fence protocol?
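
For libvirt guests the usual pairing is the fence_xvm agent on the guest side
talking to fence_virtd listening on the hypervisors. A minimal sketch of the
hypervisor side, using the EL package names (the Ubuntu names may differ):

apt install fence-virtd fence-virtd-multicast fence-virtd-libvirt   # hypothetical names
fence_virtd -c                       # interactive config: multicast listener + libvirt backend
systemctl enable --now fence_virtd

The same shared key then has to be distributed to /etc/cluster/fence_xvm.key on the
guests that will run fence_xvm.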

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NKGEEQYZQU4IUP3SB6BDKEDOVHEFJ7FJ/


[ovirt-users] Re: Q: [SOLVED] Node SW Install from oVirt Engine UI Failed

2021-01-18 Thread Andrei Verovski
Hi,

I solved this issue.

dnf config-manager --set-disabled spp

For whatever reason the HP hardware drivers repo (spp) conflicted with the oVirt node install.
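
A quick way to confirm the repo really is out of the transaction set before
re-running the host install (just a verification step):

dnf repolist --disabled | grep -i spp   # should list the spp repo
dnf repolist --enabled  | grep -i spp   # should print nothing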


> On 15 Jan 2021, at 17:30, Andrei Verovski  wrote:
> 
> Hi,
> 
> After installing a fresh CentOS Stream (on an HP ProLiant node) with the recommended 
> partitioning scheme, I did Hosts -> New in the oVirt Engine Web UI (adding the node 
> ended with success), then Hosts -> Installation -> Install, and this failed. 
> 
> 
>> On 15 Jan 2021, at 17:11, penguin pages  wrote:
>> 
>> What is in /var/log/messages?
>> 
>> 
>> What steps did you take to deploy after the CentOS 8 Stream installation?  Did you 
>> launch from the Cockpit UI?
>> 
>> The GUI deployment process does output reasonable logs.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RPYUG7T5724BOHTUEKVAEBKP2WYJNA5R/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W57F2T2UYOFCET2YOM2UH4LS2WDBGFC4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3YR24F6ZVL5T4UQ34VAA2NSXW2LPRAIX/