[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Konstantin Shalygin
All connection data should come from cinderlib, as with the current cinder
integration. Gorka says the same.


Thanks,
k

Sent from my iPhone

> On 21 Jan 2021, at 16:54, Nir Soffer  wrote:
> 
> To make this work, engine needs to configure the ceph authentication
> secrets on all hosts in the DC. We have code to do this for the old cinder storage
> domain, but it is not used for the new cinderlib setup. I'm not sure how easy it is
> to use the same mechanism for cinderlib.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FZNC5EDJAQUQDDKBAEEW22QZCGKF6UQF/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-21 Thread Matt Snow
Hi Nir,
I will check selinux on both NFS servers.
I've created a centos8.3 cloud image KVM on the Ubuntu 20.04 laptop. I am
stepping through the "Installing oVirt as a standalone Manager with local
databases" instructions.
Right now I am waiting for the host add to finish in the ovirt manager and
will try to add the NFS shares once that is done.

It would be slick if the self-hosted engine worked out of the box with
oVirt Node. Maybe in the future. :)  I will let you know what I find out
tomorrow.

Thank you for your help so far!

..Matt

On Thu, Jan 21, 2021 at 6:27 AM Nir Soffer  wrote:

> On Wed, Jan 20, 2021 at 7:36 PM Matt Snow  wrote:
> >
> >
> >
> > On Wed, Jan 20, 2021 at 2:46 AM Nir Soffer  wrote:
> >>
> >> On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
> >> >
> >> > I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the
> same problem with both versions. The issue occurs in both cockpit UI and
> tmux'd CLI of ovirt-hosted-engine-setup. I get past the point where the
> VM is created and running.
> >> > I tried to do some debugging on my own before reaching out to this
> list. Any help is much appreciated!
> >> >
> >> > ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4
> cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I
> believe it meets the minimum requirements.
> >> >
> >> > NFS server:
> >> > * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
> >>
> >> We don't test ZFS, but since you also tried a non-ZFS server with the same
> >> issue, the issue is probably not ZFS.
> >>
> >> > * NFS share settings are just 'rw=@172.16.1.0/24' but have also
> tried 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
> >>
> >> You are missing anonuid=36,anongid=36. This should not affect sanlock,
> but
> >> you will have issues with libvirt without this.
> >
> >
> > I found this setting in another email thread and have applied it
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep
> nfs
> >
> > tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
> >
> > root@stumpy:/tanker/ovirt/host_storage#
> >>
> >>
> >> Here is working export from my test system:
> >>
> >> /export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
> >>
> > I have updated both ZFS server and the Ubuntu 20.04 NFS server to use
> the above settings.
> >  # ZFS Server
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs set
> sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt
> >
> > root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep
> nfs
> >
> > tanker/ovirt  sharenfs
> rw,sync,anonuid=36,anongid=36,no_subtree_check  local
> >
> > root@stumpy:/tanker/ovirt/host_storage#
> >
> > # Ubuntu laptop
> >
> > ls -ld /exports/host_storage
> >
> > drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage
> >
> > root@msnowubntlt:/exports/host_storage# showmount -e dongle
> >
> > Export list for dongle:
> >
> > /exports/host_storage *
> >
> > root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports
> >
> > /exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
> >
> > root@msnowubntlt:/exports/host_storage#
> >
> >
> >
> > Interesting point - upon being re-prompted to configure storage after
> initial failure I provided *different* NFS server information (the ubuntu
> laptop)
> > **snip**
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage
> domain]
> >
> > [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault
> detail is "[]". HTTP response code is 400.
> >
> > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
> response code is 400."}
> >
> >   Please specify the storage you would like to use (glusterfs,
> iscsi, fc, nfs)[nfs]: nfs
> >
> >   Please specify the nfs version you would like to use (auto,
> v3, v4, v4_0, v4_1, v4_2)[auto]:
> >
> >   Please specify the full shared storage connection path to use
> (example: host:/path): dongle:/exports/host_storage
> >
> >   If needed, specify additional mount options for the connection
> to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
> >
> > [ INFO  ] Creating Storage Domain
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a
> specific set of steps]
> >
> > **snip**
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage
> domain]
> >
> > [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault
> detail is "[]". HTTP response code is 400.
> >
> > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
> response code is 400."}
> >
> >   Please specify the 

[ovirt-users] [ANN] oVirt 4.4.5 Second Release Candidate is now available for testing

2021-01-21 Thread Sandro Bonazzola
oVirt 4.4.5 Second Release Candidate is now available for testing

The oVirt Project is pleased to announce the availability of oVirt 4.4.5
Second Release Candidate for testing, as of January 21st, 2021.

This update is the fifth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1

Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.

Due to Bug 1837864 -
Host enters emergency mode after upgrading to latest build

If you have your root file system on a multipath device on your hosts you
should be aware that after upgrading from 4.4.1 to 4.4.5 you may get your
host entering emergency mode.

In order to prevent this, be sure to upgrade oVirt Engine first, then on
your hosts (a condensed command sketch follows the list):

   1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
      (if rebooted).
   2. Reboot.
   3. Upgrade to 4.4.5 (redeploy in case of already being on 4.4.5).
   4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
      place.
   5. Only if not using oVirt Node:
      run "dracut --force --add multipath" to rebuild initramfs with the
      correct filter configuration.
   6. Reboot.
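
On a host this boils down to roughly the following (the dracut step applies
only if you are not on oVirt Node, and the lvm.conf edit is only a sketch:
remove whatever filter line was previously generated on your host):

# while still on 4.4.1 (or from emergency mode), remove the current lvm filter,
# e.g. by deleting the generated "filter = [...]" line:
vi /etc/lvm/lvm.conf
reboot

# upgrade the host to 4.4.5, then confirm a new filter is in place:
vdsm-tool config-lvm-filter

# only if not using oVirt Node, rebuild the initramfs with the correct filter:
dracut --force --add multipath
reboot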

Documentation

   - If you want to try oVirt as quickly as possible, follow the instructions
     on the Download page.
   - For complete installation, administration, and usage instructions, see
     the oVirt Documentation.
   - For upgrading from a previous version, see the oVirt Upgrade Guide.
   - For a general overview of oVirt, see About oVirt.

Important notes before you try it

Please note this is a pre-release build.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.
Installation instructions

For installation instructions and additional information please refer to:

https://ovirt.org/documentation/

This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 8.3 or newer

* CentOS Linux (or similar) 8.3 or newer

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 8.3 or newer

* CentOS Linux (or similar) 8.3 or newer

* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

Notes:

- oVirt Appliance is already available for CentOS Linux 8

- oVirt Node NG is already available for CentOS Linux 8

- We found a few issues while testing on CentOS Stream so we are still
basing oVirt 4.4.5 Node and Appliance on CentOS Linux.

Additional Resources:

* Read more about the oVirt 4.4.5 release highlights:
http://www.ovirt.org/release/4.4.5/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.4.5/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EFRJHLMV6GTSM5PYNAKETOXO7FMQ2PO7/


[ovirt-users] OVirt rest api 4.3. How do you get the job id started by the async parameter

2021-01-21 Thread pascal
I am using the rest api to create a VM. Because the VM is cloned from the
template and it takes a long time, I am also passing the async parameter,
hoping to receive back a job id which I could then query:

https://x/ovirt-engine/api/vms?async=true=true

However, I get the new VM record back, which is fine, but then I have no way of
knowing the job id I should query to know when it is finished. And looking at all
jobs there is no reference back to the VM except for the description:

<job>
  <description>Creating VM DEMO-PCC-4 from Template MASTER-W10-20H2-CDrive in
  Cluster d1-c2</description>
  <auto_cleared>true</auto_cleared>
  <external>false</external>
  <last_updated>2021-01-21T12:49:06.700-08:00</last_updated>
  <start_time>2021-01-21T12:48:59.453-08:00</start_time>
  <status>started</status>
</job>
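
For context, the create call and the follow-up job query look roughly like this
(engine host and credentials are placeholders, and I am assuming the clone flag
is passed as clone=true):

# create the VM from the template asynchronously
curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -X POST 'https://engine.example.com/ovirt-engine/api/vms?clone=true&async=true' \
  -d '<vm><name>DEMO-PCC-4</name><cluster><name>d1-c2</name></cluster><template><name>MASTER-W10-20H2-CDrive</name></template></vm>'

# the response only contains the new VM, so all I can do today is list the jobs
# and match on the description text
curl -k -u 'admin@internal:PASSWORD' 'https://engine.example.com/ovirt-engine/api/jobs'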
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGZQLI55EFZOSEBNEU5CCBDZ2EDXMINQ/


[ovirt-users] Re: Hosted Engine stuck in bios

2021-01-21 Thread Joseph Gelinas
I found `engine-setup 
--otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True` from [1] and 
now have the ovirt-engine web interface reachable again. But I do have one more 
question: when I try to change the Custom Chipset/Firmware Type to Q35 Chipset 
with BIOS, I get the error "HostedEngine: There was an attempt to change the 
Hosted Engine VM values that are locked."

How do I make the removal of the loader/nvram lines permanent?

[1] 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/2AC57LTHFKJBU6OYZPYSCMTBF6NE3QO2/

> On Jan 21, 2021, at 10:15, Joseph Gelinas  wrote:
> 
> Removing those two lines got the hosted engine vm booting again, so that is a 
> great help. Thank you.
> 
> Now I just need the web interface of ovirt-engine to work again. I feel like 
> I might have run things out of order and forgot to do `engine-setup` as part 
> of the update of hosted engine. Though when I try to do that now it bails out 
> claiming the cluster isn't in global maintenance, yet it is.
> 
> [ INFO  ] Stage: Setup validation
> [ ERROR ] It seems that you are running your engine inside of the 
> hosted-engine VM and are not in "Global Maintenance" mode.
> In that case you should put the system into the "Global Maintenance" 
> mode before running engine-setup, or the hosted-engine HA agent might kill 
> the machine, which might corrupt your data.
> 
> [ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup 
> detected, but Global Maintenance is not set.
> 
> 
> I see engine.log says it can't contact the database but I certainly see 
> Postgres processes running.
> 
> /var/log/ovirt-engine/engine.log
> 
> 2021-01-21 14:47:31,502Z ERROR [org.ovirt.engine.core.services.HealthStatus] 
> (default task-15) [] Failed to run Health Status.
> 2021-01-21 14:47:31,502Z ERROR [org.ovirt.engine.core.services.HealthStatus] 
> (default task-14) [] Unable to contact Database!: 
> java.lang.InterruptedException
> 
> 
> 
> 
>> On Jan 21, 2021, at 03:19, Arik Hadas  wrote:
>> 
>> 
>> 
>> On Thu, Jan 21, 2021 at 8:57 AM Joseph Gelinas  wrote:
>> Hi,
>> 
>> I recently did some updates of ovirt from 4.4.1 or 4.4.3 to 4.4.4, also 
>> setting the default datacenter from 4.4 to 4.5 and making the default bios 
>> q35+uefi. Unfortunately I changed quite a few things at once. Now however hosted engine 
>> doesn't boot up anymore and `hosted-engine --console`  just shows the below 
>> bios/firmware output:
>> 
>> RHEL 
>>   
>> RHEL-8.1.0 PC (Q35 + ICH9, 2009)2.00 GHz 
>>   
>> 0.0.0   16384 MB RAM 
>>   
>> 
>> 
>> 
>>   Select Language This is the option   
>>   
>> one adjusts to 
>> change  
>>> Device Managerthe language for the  
>>>  
>>> Boot Manager  current system
>>>  
>>> Boot Maintenance Manager
>>>  
>> 
>>   Continue   
>>   
>>   Reset  
>>   
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>  ^v=Move Highlight   =Select Entry
>>   
>> 
>> 
>> When in this state `hosted-engine --vm-status` says it is up but failed 
>> liveliness check
>> 
>> hosted-engine --vm-status | grep -i engine\ status
>> Engine status  : {"vm": "down", "health": "bad", 
>> "detail": "unknown", "reason": "vm not running on this host"}
>> Engine status  : {"vm": "up", "health": "bad", "detail": 
>> "Up", "reason": "failed liveliness check"}
>> Engine status  : {"vm": "down", "health": "bad", 
>> "detail": "Down", "reason": "bad vm status"}
>> 
>> I assume I am running into https://access.redhat.com/solutions/5341561 (RHV: 
>> Hosted-Engine VM fails to start after changing the cluster to Q35/UEFI) 
>> however how to fix that isn't really described. I have tried starting hosted 
>> engine paused (`hosted-engine --vm-start-paused`) and editing the config 
>> (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
>> edit HostedEngine`) to have pc-i440fx instead and removing a bunch of pcie 
>> lines etc until it will accept the config and then resuming hosted engine 
>> (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
>> resume HostedEngine`) but haven't come up with something that is able to 
>> start.
>> 
>> Anyone know how to resolve this? Am I even chasing the right path?
>> 
>> Let's start with the negative - this should have been prevented by [1].
>> Can it be that the custom bios type that the hosted engine VM was set with 
>> was manually dropped in this environment?
>> 
>> The positive is that 

[ovirt-users] Re: Hosted Engine stuck in bios

2021-01-21 Thread Joseph Gelinas
Removing those two lines got the hosted engine vm booting again, so that is a 
great help. Thank you.

Now I just need the web interface of ovirt-engine to work again. I feel like I 
might have run things out of order and forgot to do `engine-setup` as part of 
the update of hosted engine. Though when I try to do that now, it bails out 
claiming the cluster isn't in global maintenance, yet it is.

[ INFO  ] Stage: Setup validation
[ ERROR ] It seems that you are running your engine inside of the hosted-engine 
VM and are not in "Global Maintenance" mode.
 In that case you should put the system into the "Global Maintenance" 
mode before running engine-setup, or the hosted-engine HA agent might kill the 
machine, which might corrupt your data.
 
[ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup 
detected, but Global Maintenance is not set.


I see engine.log says it can't contact the database but I certainly see 
Postgres processes running.

/var/log/ovirt-engine/engine.log

2021-01-21 14:47:31,502Z ERROR [org.ovirt.engine.core.services.HealthStatus] 
(default task-15) [] Failed to run Health Status.
2021-01-21 14:47:31,502Z ERROR [org.ovirt.engine.core.services.HealthStatus] 
(default task-14) [] Unable to contact Database!: java.lang.InterruptedException




> On Jan 21, 2021, at 03:19, Arik Hadas  wrote:
> 
> 
> 
> On Thu, Jan 21, 2021 at 8:57 AM Joseph Gelinas  wrote:
> Hi,
> 
> I recently did some updates of ovirt from 4.4.1 or 4.4.3 to 4.4.4, also 
> setting the default datacenter from 4.4 to 4.5 and making the default bios 
> q35+uefi. Unfortunately I changed quite a few things at once. Now however hosted engine doesn't 
> boot up anymore and `hosted-engine --console`  just shows the below 
> bios/firmware output:
> 
>  RHEL 
>   
>  RHEL-8.1.0 PC (Q35 + ICH9, 2009)2.00 GHz 
>   
>  0.0.0   16384 MB RAM 
>   
> 
> 
> 
>Select Language This is the option   
>   
>  one adjusts to 
> change  
>  > Device Managerthe language for the 
>   
>  > Boot Manager  current system   
>   
>  > Boot Maintenance Manager   
>   
> 
>Continue   
>   
>Reset  
>   
> 
> 
> 
> 
> 
> 
> 
>   ^v=Move Highlight   =Select Entry
>   
> 
> 
> When in this state `hosted-engine --vm-status` says it is up but failed 
> liveliness check
> 
> hosted-engine --vm-status | grep -i engine\ status
> Engine status  : {"vm": "down", "health": "bad", 
> "detail": "unknown", "reason": "vm not running on this host"}
> Engine status  : {"vm": "up", "health": "bad", "detail": 
> "Up", "reason": "failed liveliness check"}
> Engine status  : {"vm": "down", "health": "bad", 
> "detail": "Down", "reason": "bad vm status"}
> 
> I assume I am running into https://access.redhat.com/solutions/5341561 (RHV: 
> Hosted-Engine VM fails to start after changing the cluster to Q35/UEFI) 
> however how to fix that isn't really described. I have tried starting hosted 
> engine paused (`hosted-engine --vm-start-paused`) and editing the config 
> (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
> edit HostedEngine`) to have pc-i440fx instead and removing a bunch of pcie 
> lines etc until it will accept the config and then resuming hosted engine 
> (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
> resume HostedEngine`) but haven't come up with something that is able to 
> start.
> 
> Anyone know how to resolve this? Am I even chasing the right path?
> 
> Let's start with the negative - this should have been prevented by [1].
> Can it be that the custom bios type that the hosted engine VM was set with 
> was manually dropped in this environment?
> 
> The positive is that the VM starts. This means that from the chipset 
> perspective, the configuration is valid.
> So I wouldn't try to change it to i440fx, but only to switch the firmware to 
> BIOS.
> I think that removing the following lines from the domain xml should do it:
> <loader type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
> <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd</nvram>
> Can you give this a try?
> 
> [1] https://gerrit.ovirt.org/#/c/ovirt-engine/+/59/
>  
> 
> 
> /var/log/libvirt/qemu/HostedEngine.log 
> 
> 2021-01-20 15:31:56.500+: starting up libvirt version: 6.6.0, package: 
> 7.1.el8 (CBS , 2020-12-10-14:05:40, ), qemu version: 
> 

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Shantur Rathore
I would love
https://github.com/openstack/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd1399e4
to come back.



On Thu, Jan 21, 2021 at 2:27 PM Gorka Eguileor  wrote:

> On 21/01, Nir Soffer wrote:
> > On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin 
> wrote:
> > >
> > > I understood, more than the code that works with qemu already exists
> for openstack integration
> >
> > We have code on vdsm and engine to support librbd, but using it for
> > cinderlib-based volumes is not a trivial change.
> >
> > On the engine side, this means changing the flow, so instead of attaching
> > a device to a host, the engine will configure the xml with a network disk,
> > using the rbd url, the same way the old cinder support did.
> >
> > To make this work, the engine needs to configure the ceph authentication
> > secrets on all hosts in the DC. We have code to do this for the old cinder
> > storage domain, but it is not used for the new cinderlib setup. I'm not sure
> > how easy it is to use the same mechanism for cinderlib.
>
> Hi,
>
> All the data is in the connection info (including the keyring), so it
> should be possible to implement.
>
> >
> > Generally, we don't want to spend time on special code for ceph, and
> prefer
> > to outsource this to os brick and the kernel, so we have a uniform way to
> > use volumes. But if the special code gives important benefits, we can
> > consider it.
> >
>
> I think that's reasonable. Having less code to worry about and
> making the project's code base more readable and maintainable is a
> considerable benefit that should not be underestimated.
>
>
> > I think openshift virtualization is using the same solution (kernel
> based rbd)
> > for ceph. An important requirement for us is having an easy way to
> migrate
> > vms from ovirt to openshift virtuations. Using the same ceph
> configuration
> > can make this migration easier.
> >
>
> The Ceph CSI plugin seems to have the possibility of using krbd and
> rbd-nbd [1], but that's something we can also achieve in oVirt by adding
> back the rbd-nbd support in cinderlib without changes to oVirt.
>
> Cheers,
> Gorka.
>
> [1]:
> https://github.com/ceph/ceph-csi/blob/04644c1d5896b493d6aaf9ab66f2302cf67a2ee3/internal/rbd/rbd_attach.go#L35-L41
>
> > I'm also not sure about the future of librbd support in qemu. I know that
> > qemu folks also want to get rid of such code. For example libgfapi
> > (Gluster native driver) is not maintained and likely to be removed soon.
> >
> > If this feature is important to you, please open RFE for this, and
> explain why
> > it is needed.
> >
> > We can consider it for future 4.4.z release.
> >
> > Adding some storage and qemu folks to get more info on this.
> >
> > Nir
> >
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SUXZT47HWHALTYOUF67ALJTMK653SNBO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3WKB4JRCCBG7RMS7YVKPR6MKASZYM5GG/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Gorka Eguileor
On 21/01, Nir Soffer wrote:
> On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin  wrote:
> >
> > I understood, more than the code that works with qemu already exists for 
> > openstack integration
>
> We have code on vdsm and engine to support librbd, but using it for
> cinderlib-based volumes is not a trivial change.
>
> On the engine side, this means changing the flow, so instead of attaching
> a device to a host, the engine will configure the xml with a network disk, using
> the rbd url, the same way the old cinder support did.
>
> To make this work, the engine needs to configure the ceph authentication
> secrets on all hosts in the DC. We have code to do this for the old cinder storage
> domain, but it is not used for the new cinderlib setup. I'm not sure how easy it is
> to use the same mechanism for cinderlib.

Hi,

All the data is in the connection info (including the keyring), so it
should be possible to implement.

>
> Generally, we don't want to spend time on special code for ceph, and prefer
> to outsource this to os brick and the kernel, so we have a uniform way to
> use volumes. But if the special code gives important benefits, we can
> consider it.
>

I think that's reasonable. Having less code to worry about and
making the project's code base more readable and maintainable is a
considerable benefit that should not be underestimated.


> I think openshift virtualization is using the same solution (kernel based rbd)
> for ceph. An important requirement for us is having an easy way to migrate
> vms from ovirt to openshift virtualization. Using the same ceph configuration
> can make this migration easier.
>

The Ceph CSI plugin seems to have the possibility of using krbd and
rbd-nbd [1], but that's something we can also achieve in oVirt by adding
back the rbd-nbd support in cinderlib without changes to oVirt.

Cheers,
Gorka.

[1]: 
https://github.com/ceph/ceph-csi/blob/04644c1d5896b493d6aaf9ab66f2302cf67a2ee3/internal/rbd/rbd_attach.go#L35-L41

> I'm also not sure about the future of librbd support in qemu. I know that
> qemu folks also want to get rid of such code. For example libgfapi
> (Gluster native driver) is not maintained and likely to be removed soon.
>
> If this feature is important to you, please open RFE for this, and explain why
> it is needed.
>
> We can consider it for future 4.4.z release.
>
> Adding some storage and qemu folks to get more info on this.
>
> Nir
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SUXZT47HWHALTYOUF67ALJTMK653SNBO/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Nir Soffer
On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin  wrote:
>
> I understood, more than the code that works with qemu already exists for 
> openstack integration

We have code on vdsm and engine to support librbd, but using it for
cinderlib-based volumes is not a trivial change.

On the engine side, this means changing the flow, so instead of attaching
a device to a host, the engine will configure the xml with a network disk, using
the rbd url, the same way the old cinder support did.

To make this work, the engine needs to configure the ceph authentication
secrets on all hosts in the DC. We have code to do this for the old cinder storage
domain, but it is not used for the new cinderlib setup. I'm not sure how easy it is
to use the same mechanism for cinderlib.

Generally, we don't want to spend time on special code for ceph, and prefer
to outsource this to os brick and the kernel, so we have a uniform way to
use volumes. But if the special code gives important benefits, we can
consider it.

I think openshift virtualization is using the same solution (kernel based rbd)
for ceph. An important requirement for us is having an easy way to migrate
vms from ovirt to openshift virtualization. Using the same ceph configuration
can make this migration easier.

I'm also not sure about the future of librbd support in qemu. I know that
qemu folks also want to get rid of such code. For example libgfapi
(Gluster native driver) is not maintained and likely to be removed soon.

If this feature is important to you, please open RFE for this, and explain why
it is needed.

We can consider it for future 4.4.z release.

Adding some storage and qemu folks to get more info on this.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KBJWIHO4TYLHUPPFVKS5LTZRMU454W53/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-21 Thread Nir Soffer
On Wed, Jan 20, 2021 at 7:36 PM Matt Snow  wrote:
>
>
>
> On Wed, Jan 20, 2021 at 2:46 AM Nir Soffer  wrote:
>>
>> On Mon, Jan 18, 2021 at 8:58 AM Matt Snow  wrote:
>> >
>> > I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the same 
>> > problem with both versions. The issue occurs in both cockpit UI and tmux'd 
>> > CLI of ovirt-hosted-engine-setup. I get passed the point where the VM is 
>> > created and running.
>> > I tried to do some debugging on my own before reaching out to this list. 
>> > Any help is much appreciated!
>> >
>> > ovirt node hardware: NUC format Jetway w/ Intel N3160 (Braswell 4 
>> > cores/4threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but 
>> > I believe it meets the minimum requirements.
>> >
>> > NFS server:
>> > * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
>>
> >> We don't test ZFS, but since you also tried a non-ZFS server with the same issue, 
> >> the issue is probably not ZFS.
>>
>> > * NFS share settings are just 'rw=@172.16.1.0/24' but have also tried 
>> > 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
>>
>> You are missing anonuid=36,anongid=36. This should not affect sanlock, but
>> you will have issues with libvirt without this.
>
>
> I found this setting in another email thread and have applied it
>
> root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs
>
> tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
>
> root@stumpy:/tanker/ovirt/host_storage#
>>
>>
>> Here is working export from my test system:
>>
>> /export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
>>
> I have updated both ZFS server and the Ubuntu 20.04 NFS server to use the 
> above settings.
>  # ZFS Server
>
> root@stumpy:/tanker/ovirt/host_storage# zfs set 
> sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt
>
> root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt| grep nfs
>
> tanker/ovirt  sharenfs  
> rw,sync,anonuid=36,anongid=36,no_subtree_check  local
>
> root@stumpy:/tanker/ovirt/host_storage#
>
> # Ubuntu laptop
>
> ls -ld /exports/host_storage
>
> drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage
>
> root@msnowubntlt:/exports/host_storage# showmount -e dongle
>
> Export list for dongle:
>
> /exports/host_storage *
>
> root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports
>
> /exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
>
> root@msnowubntlt:/exports/host_storage#
>
>
>
> Interesting point - upon being re-prompted to configure storage after initial 
> failure I provided *different* NFS server information (the ubuntu laptop)
> **snip**
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
>
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail 
> is "[]". HTTP response code is 400.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 
> 400."}
>
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]: nfs
>
>   Please specify the nfs version you would like to use (auto, v3, v4, 
> v4_0, v4_1, v4_2)[auto]:
>
>   Please specify the full shared storage connection path to use 
> (example: host:/path): dongle:/exports/host_storage
>
>   If needed, specify additional mount options for the connection to 
> the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
>
> [ INFO  ] Creating Storage Domain
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
> of steps]
>
> **snip**
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
>
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail 
> is "[]". HTTP response code is 400.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 
> 400."}
>
>   Please specify the storage you would like to use (glusterfs, iscsi, 
> fc, nfs)[nfs]:
>
> **snip**
> Upon checking mounts I see that the original server is still being used.
>
> [root@brick ~]# mount | grep nfs
>
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>
> stumpy:/tanker/ovirt/host_storage on 
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 
> (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)

Using node and hosted engine makes everything more complicated
and harder to debug.

I would create an engine vm for testing on some machine, and add
this host as a regular host. It will be easier to debug this way, since
you don't have to reinstall everything from scratch on every failure.

Once we sort out the issue with the NFS/ZFS storage, using node and

[ovirt-users] Re: For data integrity make sure that the server is configured with Quorum (both client and server Quorum)

2021-01-21 Thread Ahmad Khiet
Adding Ritesh Chikatwar.
What do you think?

On Thu, Jan 21, 2021 at 12:32 PM tommy  wrote:

> *I get the current setting:*
>
>
>
> [root@gluster1 ~]# gluster volume info all
>
>
>
> Volume Name: volume1
>
> Type: Disperse
>
> Volume ID: 237f7379-cd03-44ef-af39-f6065d01d520
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x (2 + 1) = 3
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster1:/data/glusterfs/myvolume/mybrick1/brick
>
> Brick2: gluster2:/data/glusterfs/myvolume/mybrick1/brick
>
> Brick3: gluster3:/data/glusterfs/myvolume/mybrick1/brick
>
> Options Reconfigured:
>
> transport.address-family: inet
>
> nfs.disable: on
>
>
>
>
>
> *Then I enable the quorum on the server side:*
>
>
>
> [root@gluster1 ~]# gluster volume set all cluster.server-quorum-ratio 51%
>
> volume set: success
>
> [root@gluster1 ~]#
>
> [root@gluster1 ~]# gluster volume set volume1 cluster.server-quorum-type
> server
>
> volume set: success
>
> [root@gluster1 ~]#
>
>
>
>
>
> *Now, the configure is :*
>
>
>
> [root@gluster1 ~]# gluster volume info volume1
>
>
>
> Volume Name: volume1
>
> Type: Disperse
>
> Volume ID: 237f7379-cd03-44ef-af39-f6065d01d520
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x (2 + 1) = 3
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster1:/data/glusterfs/myvolume/mybrick1/brick
>
> Brick2: gluster2:/data/glusterfs/myvolume/mybrick1/brick
>
> Brick3: gluster3:/data/glusterfs/myvolume/mybrick1/brick
>
> Options Reconfigured:
>
> *cluster.server-quorum-type: server*
>
> transport.address-family: inet
>
> nfs.disable: on
>
> *cluster.server-quorum-ratio: 51%*
>
> [root@gluster1 ~]#
>
>
>
>
>
> I want to know: the server quorum is about setting the percentage of
> active, connected nodes, no matter what type the volume is. Is this
> right?
>
>
>
> But if I use the client quorum, it acts on the volumes being mounted, and
> only replicated volumes can use the client quorum. Is this
> right?
>
> *From:* users-boun...@ovirt.org  *On Behalf Of *Ahmad
> Khiet
> *Sent:* Thursday, January 21, 2021 2:48 AM
> *To:* tommy 
> *Cc:* users 
> *Subject:* [ovirt-users] Re: For data integrity make sure that the server
> is configured with Quorum (both client and server Quorum)
>
>
>
> Hi,
>
> as shown in the alert message, Quorum must be configured with
> GlusterFS on both the server side and the client side.
>
> please see this resource to configure it for client side and server side
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/sect-managing_split-brain
>
>
>
> Have a nice day
>
>
>
> On Wed, Jan 20, 2021 at 10:37 AM tommy  wrote:
>
> When I want to create a glusterFS domain, the above words appear.
>
>
>
> What’s meaning of these words?
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UMTB5A6AODYZJU26ILP4ZL3CNOX5D6KC/
>
>
>
>
> --
>
> *Ahmad Khiet*
>
> Red Hat 
>
> akh...@redhat.com
> M: +972-54-6225629
>
> 
>
>
>


-- 

Ahmad Khiet

Red Hat 

akh...@redhat.com
M: +972-54-6225629

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HKRHP6DGTEQYFP2WUPWDJQEYKV322E6M/


[ovirt-users] Re: For data integrity make sure that the server is configured with Quorum (both client and server Quorum)

2021-01-21 Thread tommy
I get the current setting:

 

[root@gluster1 ~]# gluster volume info all

 

Volume Name: volume1

Type: Disperse

Volume ID: 237f7379-cd03-44ef-af39-f6065d01d520

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x (2 + 1) = 3

Transport-type: tcp

Bricks:

Brick1: gluster1:/data/glusterfs/myvolume/mybrick1/brick

Brick2: gluster2:/data/glusterfs/myvolume/mybrick1/brick

Brick3: gluster3:/data/glusterfs/myvolume/mybrick1/brick

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

 

 

Then I enable the quorum on the server side:

 

[root@gluster1 ~]# gluster volume set all cluster.server-quorum-ratio 51%

volume set: success

[root@gluster1 ~]#

[root@gluster1 ~]# gluster volume set volume1 cluster.server-quorum-type server

volume set: success

[root@gluster1 ~]#

 

 

Now, the configure is :

 

[root@gluster1 ~]# gluster volume info volume1

 

Volume Name: volume1

Type: Disperse

Volume ID: 237f7379-cd03-44ef-af39-f6065d01d520

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x (2 + 1) = 3

Transport-type: tcp

Bricks:

Brick1: gluster1:/data/glusterfs/myvolume/mybrick1/brick

Brick2: gluster2:/data/glusterfs/myvolume/mybrick1/brick

Brick3: gluster3:/data/glusterfs/myvolume/mybrick1/brick

Options Reconfigured:

cluster.server-quorum-type: server

transport.address-family: inet

nfs.disable: on

cluster.server-quorum-ratio: 51%

[root@gluster1 ~]#

 

 

I want to know: the server quorum is about setting the percentage of active, 
connected nodes, no matter what type the volume is. Is this right?

 

But if I use the client quorum, it acts on the volumes being mounted, and only 
replicated volumes can use the client quorum. Is this right?
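
For reference, the client-side options I am asking about would be set with
something like the following (my understanding is that they only apply to
replicated volumes, so this is just an illustration):

[root@gluster1 ~]# gluster volume set volume1 cluster.quorum-type auto
(or cluster.quorum-type fixed together with cluster.quorum-count)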


From: users-boun...@ovirt.org  On Behalf Of Ahmad Khiet
Sent: Thursday, January 21, 2021 2:48 AM
To: tommy 
Cc: users 
Subject: [ovirt-users] Re: For data integrity make sure that the server is 
configured with Quorum (both client and server Quorum)

 

Hi,

as shown in the alert message, Quorum must be configured with GlusterFS on both 
the server side and the client side.

please see this resource to configure it for client side and server side 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/sect-managing_split-brain

 

Have a nice day

 

On Wed, Jan 20, 2021 at 10:37 AM tommy mailto:sz_cui...@163.com> > wrote:

When I want to create a glusterFS domain, the above words appear.

 

What’s meaning of these words?

 



___
Users mailing list -- users@ovirt.org  
To unsubscribe send an email to users-le...@ovirt.org 
 
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UMTB5A6AODYZJU26ILP4ZL3CNOX5D6KC/




 

-- 

Ahmad Khiet

  Red Hat

  akh...@redhat.com
M:   +972-54-6225629


  

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZZ2VGRPDB7MQKBGKQRDXNRTTGBEOGJIR/


[ovirt-users] Re: Cinderlib died after upgrade Cluster Compatibility version 4.4 ->4.5

2021-01-21 Thread Mike Andreev
I restored the "volume_types" table from a backup of the DB and cinderlib works 
now, but I did not like how it happened just after changing the cluster 
compatibility level.
Restore command:

pg_restore -d ovirt_cinderlib_20200824130037 -a -t volume_types 
/root/db/cinderlib_backup.db
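
To double-check that the table is populated again, I ran something like this
(database name as in my setup):

psql -d ovirt_cinderlib_20200824130037 -c "select count(*) from volume_types"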
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PZVGJ44G4QSUOHBR4DQ3SFESDK5DZCIL/


[ovirt-users] Re: Gluster Hyperconverged fails with single disk partitioned

2021-01-21 Thread Shantur Rathore
I have found a workaround to this.
The gluster.infra ansible role can exclude and reset lvm filters when the 
"gluster_infra_lvm" variable is defined.
https://github.com/gluster/gluster-ansible-infra/blob/2522d3bd722be86139c57253a86336b2fec33964/roles/backend_setup/tasks/main.yml#L18

1. Go through the gluster deployment wizard until the configuration display step 
just before deploy.
2. Click Edit and scroll down to the "vars:" section.
3. Just under the "vars:" section, add "gluster_infra_lvm: SOMETHING" (see the 
sketch below) and adjust spaces to match the other variables.
4. Don't forget to click Save at the top before clicking Deploy.

This will reset the filter and set it back again with correct devices.
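
For example, the edited section ends up looking roughly like this (as far as I
can tell from the linked main.yml the tasks only check that the variable is
defined, so the value itself shouldn't matter; treat this as a sketch):

vars:
  gluster_infra_lvm: true
  # ...rest of the generated variables unchanged...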
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5IDJVSMEPFH2DQ7BORCPALZDLL3LGQP/


[ovirt-users] Re: Gluster Hyperconverged fails with single disk partitioned

2021-01-21 Thread Shantur Rathore
Thanks Derek,

I don't think that is the case, as per the documentation:

https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/



On Thu, Jan 21, 2021 at 12:17 AM Derek Atkins  wrote:

> Ovirt is expecting an LVM volume, not a raw partition.
>
> -derek
> Sent using my mobile device. Please excuse any typos.
>
> On January 20, 2021 7:13:45 PM Shantur Rathore 
> wrote:
>
>> Hi,
>>
>> I am trying to set up a single-host self-hosted hyperconverged setup with
>> GlusterFS.
>> I have a custom partitioning where I provide 100G for oVirt and its
>> partitions and the remaining ~800G to a physical partition (/dev/sda4).
>>
>> When I try to create gluster deployment with the wizard, it fails
>>
>> TASK [gluster.infra/roles/backend_setup : Create volume groups]
>> 
>> failed: [ovirt-macpro-16.lab.ced.bskyb.com] (item={'key':
>> 'gluster_vg_sda4', 'value': [{'vgname': 'gluster_vg_sda4', 'pvname':
>> '/dev/sda4'}]}) => {"ansible_loop_var": "item", "changed": false, "err": "
>>  Device /dev/sda4 excluded by a filter.\n", "item": {"key":
>> "gluster_vg_sda4", "value": [{"pvname": "/dev/sda4", "vgname":
>> "gluster_vg_sda4"}]}, "msg": "Creating physical volume '/dev/sda4' failed",
>> "rc": 5}
>>
>> I checked and the /etc/lvm/lvm.conf filter doesn't allow /dev/sda4. It only
>> allows the PV for the onn VG.
>> Once I manually add /dev/sda4 to the lvm filter, it works fine and the gluster
>> deployment completes.
>>
>> Fdisk :
>>
>> # fdisk -l /dev/sda
>> Disk /dev/sda: 931.9 GiB, 1000555581440 bytes, 1954210120 sectors
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 4096 bytes
>> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>> Disklabel type: gpt
>> Disk identifier: FE209000-85B5-489A-8A86-4CF0C91B2E7D
>>
>> Device StartEndSectors   Size Type
>> /dev/sda1   204812308471228800   600M EFI System
>> /dev/sda2123084833279992097152 1G Linux filesystem
>> /dev/sda33328000  213043199  209715200   100G Linux LVM
>> /dev/sda4  213043200 1954209791 1741166592 830.3G Linux filesystem
>>
>> LVS
>>
>> # lvs
>>   LV VG  Attr   LSize  Pool  Origin
>> Data%  Meta%  Move Log Cpy%Sync Convert
>>   home   onn Vwi-aotz-- 10.00g pool0
>>  0.11
>>   ovirt-node-ng-4.4.4-0.20201221.0   onn Vwi---tz-k 10.00g pool0 root
>>   ovirt-node-ng-4.4.4-0.20201221.0+1 onn Vwi-aotz-- 10.00g pool0
>> ovirt-node-ng-4.4.4-0.20201221.0 25.26
>>   pool0  onn twi-aotz-- 95.89g
>>  2.95   14.39
>>   root   onn Vri---tz-k 10.00g pool0
>>   swap   onn -wi-ao  4.00g
>>   tmponn Vwi-aotz-- 10.00g pool0
>>  0.12
>>   varonn Vwi-aotz-- 20.00g pool0
>>  0.92
>>   var_crash  onn Vwi-aotz-- 10.00g pool0
>>  0.11
>>   var_logonn Vwi-aotz-- 10.00g pool0
>>  0.13
>>   var_log_audit  onn Vwi-aotz--  4.00g pool0
>>  0.27
>>
>>
>>
>> # grep filter /etc/lvm/lvm.conf
>> filter =
>> ["a|^/dev/disk/by-id/lvm-pv-uuid-QrvErF-eaS9-PxbI-wCBV-3OxJ-V600-NG7raZ$|",
>> "r|.*|"]
>>
>> Am I doing something which oVirt isn't expecting?
>> Is there any way to tell the gluster deployment to add it to the lvm
>> config?
>>
>> Thanks,
>> Shantur
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BP7BQWG3O7IFRLU4W6ZNV4J6PHR4DUZF/
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GDLGYDWVWQAMLU7R2SIKJ44S3N3CLI4J/


[ovirt-users] Re: Cinderlib died after upgrade Cluster Compatibility version 4.4 ->4.5

2021-01-21 Thread Mike Andreev
Looks like our main cinderlib DB is empty:

psql -d ovirt_cinderlib_20200824130037 -c "\x on" -c "select * from volume_types"
Expanded display is on.
(0 rows)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KR5FITZWL4FJBE7GWQDXVCBQYK772RUP/


[ovirt-users] Re: Hosted Engine stuck in bios

2021-01-21 Thread Arik Hadas
On Thu, Jan 21, 2021 at 8:57 AM Joseph Gelinas  wrote:

> Hi,
>
> I recently did some updates of ovirt from 4.4.1 or 4.4.3 to 4.4.4, also
> setting the default datacenter from 4.4 to 4.5 and making the default bios
> q35+uefi. Unfortunately I changed quite a few things at once. Now however hosted engine
> doesn't boot up anymore and `hosted-engine --console`  just shows the below
> bios/firmware output:
>
>  RHEL
>
>  RHEL-8.1.0 PC (Q35 + ICH9, 2009)2.00 GHz
>
>  0.0.0   16384 MB RAM
>
>
>
>
>Select Language This is the
> option
>  one adjusts to
> change
>  > Device Managerthe language for
> the
>  > Boot Manager  current system
>
>  > Boot Maintenance Manager
>
>
>Continue
>
>Reset
>
>
>
>
>
>
>
>
>   ^v=Move Highlight   =Select Entry
>
>
>
> When in this state `hosted-engine --vm-status` says it is up but failed
> liveliness check
>
> hosted-engine --vm-status | grep -i engine\ status
> Engine status  : {"vm": "down", "health": "bad",
> "detail": "unknown", "reason": "vm not running on this host"}
> Engine status  : {"vm": "up", "health": "bad",
> "detail": "Up", "reason": "failed liveliness check"}
> Engine status  : {"vm": "down", "health": "bad",
> "detail": "Down", "reason": "bad vm status"}
>
> I assume I am running into https://access.redhat.com/solutions/5341561
> (RHV: Hosted-Engine VM fails to start after changing the cluster to
> Q35/UEFI) however how to fix that isn't really described. I have tried
> starting hosted engine paused (`hosted-engine --vm-start-paused`) and
> editing the config (`virsh -c
> qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf edit
> HostedEngine`) to have pc-i440fx instead and removing a bunch of pcie lines
> etc until it will accept the config and then resuming hosted engine (`virsh
> -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume
> HostedEngine`) but haven't come up with something that is able to start.
>
> Anyone know how to resolve this? Am I even chasing the right path?
>

Let's start with the negative - this should have been prevented by [1].
Can it be that the custom bios type that the hosted engine VM was set with
was manually dropped in this environment?

The positive is that the VM starts. This means that from the chipset
perspective, the configuration is valid.
So I wouldn't try to change it to i440fx, but only to switch the firmware
to BIOS.
I think that removing the following lines from the domain xml should do it:
<loader type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
<nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd</nvram>
Can you give this a try?
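
Roughly, using the commands you already mentioned (untested sketch, paths taken
from your environment):

hosted-engine --vm-start-paused
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf edit HostedEngine
# delete the <loader> and <nvram> lines shown above, save, then:
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine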

[1] https://gerrit.ovirt.org/#/c/ovirt-engine/+/59/


>
>
> /var/log/libvirt/qemu/HostedEngine.log
>
> 2021-01-20 15:31:56.500+: starting up libvirt version: 6.6.0, package:
> 7.1.el8 (CBS , 2020-12-10-14:05:40, ), qemu version:
> 5.1.0qemu-kvm-5.1.0-14.el8.1, kernel: 4.18.0-240.1.1.el8_3.x86_64,
> hostname: ovirt-3
> LC_ALL=C \
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
> HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine \
> XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.local/share \
> XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.cache \
> XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.config \
> QEMU_AUDIO_DRV=spice \
> /usr/libexec/qemu-kvm \
> -name guest=HostedEngine,debug-threads=on \
> -S \
> -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-25-HostedEngine/master-key.aes
> \
> -blockdev
> '{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}'
> \
> -blockdev
> '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}'
> \
> -blockdev
> '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}'
> \
> -blockdev
> '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}'
> \
> -machine
> pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format
> \
> -cpu Cascadelake-Server-noTSX,mpx=off \
> -m size=16777216k,slots=16,maxmem=67108864k \
> -overcommit mem-lock=off \
> -smp 4,maxcpus=64,sockets=16,dies=1,cores=4,threads=1 \
> -object iothread,id=iothread1 \
> -numa node,nodeid=0,cpus=0-63,mem=16384 \
> -uuid 81816cd3-5816-4185-b553-b5a636156fbd \
> -smbios
> type=1,manufacturer=oVirt,product=RHEL,version=8-1.2011.el8,serial=4c4c4544-0051-3710-8032-c8c04f483633,uuid=81816cd3-5816-4185-b553-b5a636156fbd,family=oVirt
> \
> -no-user-config \
>