[ovirt-users] Re: The latest guest agent needs to be installed

2022-11-07 Thread Milan Zamazal
Christopher Law  writes:

> I've got a couple of VMs giving me this error message, "the latest
> guest agent needs to be installed and running". These VMs have the
> latest guest agent installed and it is running (virtio-win-1.9.24.iso)

You're probably hitting https://bugzilla.redhat.com/2120381.  It should
get fixed in the next release.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SDJ7BZ4PJKI6SQ723R75OZVDGZVORP5F/


[ovirt-users] Re: RPM issues updating Ovirt 4.5.2 to 4.5.3

2022-10-19 Thread Milan Zamazal
David Johnson  writes:

> Good afternoon all,
>
> We are trying to update our cluster from 4.5.2 to see if we can
> resolve some issues with the VM web console not functioning on VMs that
> run on one of the hosts. So there are two problems here; if we can resolve
> one (dependency resolution), we hope we can resolve the other
> with a reinstall of the software.
>
> *Symptoms:*
> 1. On VMs running on one host, the VM web console does not work.  The
> console.vv downloads to the desktop and we can attempt to launch it.  On
> launch, it immediately exits. The web console works on the VMs on the
> other host.
>
> 2. Attempting to update or reinstall the software to any host via the ovirt
> Compute -> Hosts -> Installation -> Reinstall, or Upgrade menu, we get a
> dependency resolution error:
> package ovirt-openvswitch-2.15-4.el8.noarch requires openvswitch2.15, but
> none of the providers can be installed\\n  - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-117.el8s.x86_64

I don't know what the official solution is, but a workaround I use is
adding the following packages to the `exclude' section of the
[ovirt-*-centos-stream-openstack-yoga-testing] repo at the end of
/etc/yum.repos.d/ovirt-*-dependencies.repo on the host:

 rdo-openvswitch
 rdo-ovn
 rdo-ovn-host
 python3-rdo-openvswitch
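
For reference, the resulting repository section could look roughly like
the sketch below.  The repo id, name and other fields are illustrative
placeholders (keep whatever your file already contains); only the
`exclude' line is the actual change:

```ini
[ovirt-45-centos-stream-openstack-yoga-testing]
name=oVirt 4.5 OpenStack Yoga testing packages
baseurl=<unchanged>
enabled=1
exclude=rdo-openvswitch rdo-ovn rdo-ovn-host python3-rdo-openvswitch
```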

Regards,
Milan

> It appears that the 4.5.2 build is running on an older release of
> openvswitch?
>
> Please advise.
>
>
> *Environment:*
> Production environment: Ovirt 4.5.2.4-1.el8
> 1 Standalone engine (upgraded to
> 3 hosts
>
> *Log excerpts*
> *VM Web Console Not Starting:*
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for fe80::f0ca:56ff:fe8c:7bb8 on veth8a5345f.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for veth8a5345f.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for fe80::d879:d5ff:fee4:1855 on vethc9bd12f.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for vethc9bd12f.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for fe80::42:15ff:fe5a:679 on br-1feb13c47a4f.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for 172.22.0.1 on br-1feb13c47a4f.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for br-1feb13c47a4f.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for 172.19.0.1 on docker_gwbridge.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for docker_gwbridge.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for 172.17.0.1 on docker0.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for docker0.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for 172.18.0.1 on br-4209e789b982.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for br-4209e789b982.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for virbr0-nic.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for 192.168.122.1 on virbr0.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for virbr0.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing address
> record for 192.168.2.163 on eth0.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for eth0.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Withdrawing
> workstation service for lo.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Host name conflict,
> retrying with cen-76-alc-qa-4236
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for fe80::f0ca:56ff:fe8c:7bb8 on veth8a5345f.*.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for fe80::d879:d5ff:fee4:1855 on vethc9bd12f.*.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for fe80::42:15ff:fe5a:679 on br-1feb13c47a4f.*.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for 172.22.0.1 on br-1feb13c47a4f.IPv4.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for 172.19.0.1 on docker_gwbridge.IPv4.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for 172.17.0.1 on docker0.IPv4.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for 172.18.0.1 on br-4209e789b982.IPv4.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: Registering new
> address record for 192.168.122.1 on virbr0.IPv4.
> Oct 18 13:22:02 cen-76-alc-qa-163 avahi-daemon[780]: 

[ovirt-users] Re: Issues with snapshot and template

2022-10-06 Thread Milan Zamazal
David Johnson  writes:

> My current environment is ovirt 4.5.2.4-1.el8, running on centos 7.9.
>
> I'm looking for advice on resolving two issues that have just come to light:
>
> *Issue #1*: When I attempt to create a template from the oVirt GUI, I click
> the "Make Template" button, enter the name for the new template, then click
> the "OK" button.  The OK button highlights, but nothing happens.

I think this is fixed in 4.5.3.

Regards,
Milan

> *There are no entries showing activity in the engine.log.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DBY5R47XRLVN3WZQCP746RIRWNVRFLS2/


[ovirt-users] Re: Export to Data Domain (with TPM)

2022-10-05 Thread Milan Zamazal
Christopher Law  writes:

> Can anyone explain why I can't export a virtual machine with a TPM? Is
> this something to do with the TPM data? I'm exporting the VM to some
> cold storage. How can I export it and keep the TPM data, or do I have
> to disable TPM on it first?
>
> oVirt Error message when trying to export a VM to a Data Domain that has a 
> TPM.
> Export VM Failed
> Cannot add VM. TPM device is required by the guest OS

Hi,

it is a Windows VM, right?  It is a bug, I filed
https://github.com/oVirt/ovirt-engine/issues/702 .  The export should
work.

A workaround could be to temporarily switch the VM operating system in
the VM properties to Red Hat Enterprise Linux 9 before exporting, but
I'm not sure it won't change some other VM properties, so be careful if
you attempt it.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T4W2ZE2PEOR6NYLGDAV2R6L2XRVI3BPT/


[ovirt-users] Re: vm migration failed with certifacate issue

2022-09-08 Thread Milan Zamazal
parallax  writes:

> ovirt 4.4.4.7
>
> not able to migrate VMs between hosts with following vdsm error:
>
> operation failed: Failed to connect to remote libvirt URI
> qemu+tls://kvm4.imp.loc/system: authentication failed: Failed to verify
> peer's certificate

You should be able to see a more exact reason for the certificate
verification failure in the libvirtd logs on the source host (perhaps
after adjusting the logging settings in /etc/libvirt/libvirtd.conf and
restarting libvirtd).

Anyway, you should check the certificates in /etc/pki/vdsm/certs on both
the source and destination hosts:

- cacert.pem should be the Engine CA certificate.

- vdsmcert.pem should be a certificate signed by the CA certificate,
  with the right host name and not expired.

If you are using encrypted migrations then you should additionally check
the certificates in /etc/pki/vdsm/libvirt-migrate.  cacert.pem should be
the CA certificate, server-cert.pem a valid certificate signed by the CA
certificate and there should be links client-cert.pem and client-key.pem
to server-cert.pem and server-key.pem respectively.
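
These checks can be scripted with openssl.  To keep the sketch runnable
anywhere, it first builds a throwaway CA and host certificate in a
scratch directory; on a real host you would run only the last two
commands, against the files in /etc/pki/vdsm/certs (the host name below
is taken from your error message, everything else is demo data):

```shell
# Demo in a scratch directory; on a real host, run the last two commands
# against /etc/pki/vdsm/certs/cacert.pem and /etc/pki/vdsm/certs/vdsmcert.pem.
cd "$(mktemp -d)"
# Throwaway "Engine CA" and a host certificate signed by it (demo data only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out cacert.pem \
    -subj "/CN=demo-engine-ca" -days 365 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
    -subj "/CN=kvm4.imp.loc" 2>/dev/null
openssl x509 -req -in host.csr -CA cacert.pem -CAkey ca.key -CAcreateserial \
    -out vdsmcert.pem -days 365 2>/dev/null
# The actual checks: host name and expiry date, then the CA signature
openssl x509 -in vdsmcert.pem -noout -subject -enddate
openssl verify -CAfile cacert.pem vdsmcert.pem
```

If the last command prints anything other than "vdsmcert.pem: OK", or
the subject/expiry are wrong, that certificate is the likely culprit.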

> host certificates were renewed recently but the hosts haven't been
> restarted. How can this issue be fixed?

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7M73Y6X27IW6DSBQLJKH4HWUA3KX7EDO/


[ovirt-users] Re: Notified that Engine's certification is about to expire but no documentation to renew it

2022-08-11 Thread Milan Zamazal
Andrea Chierici  writes:

> The strange thing is that now every time I log on the engine I get
> warnings about host certificates about to expire, even if the expiry
> date is in 2023. This looks weird to me and I wonder if there is
> somewhere a setting to specify how in advance the warning should be
> given.

Both the expiration and warning periods were extended in 4.5.1.  Next
time you renew your host certificates, they will be valid for 5 years.

There are config values to change the warning periods, but I wouldn't
recommend using them.  It's better to get the warning sufficiently in
advance rather than being forced to migrate your VMs away at the last
moment.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AUUVAMADVVZH5I2Y7H26VZKWJUNM4WRQ/


[ovirt-users] Re: Task Run PKI enroll request for vdsm and QEMU failed to execute. Ovirt 4.5.1

2022-07-22 Thread Milan Zamazal
Patrick Hibbs  writes:

> That error is saying the enrollment script cannot access the serial.txt
> file to generate the new certificate's serial number. That file should
> be located at /etc/pki/ovirt-engine/serial.txt, owned by the ovirt user
> / group. (Oddly enough, on my system that file is world readable /
> writable, which seems wrong...)

It is wrong and it is being handled in
https://github.com/oVirt/ovirt-engine/pull/477.

> There may also be backup files of it in that same directory.
>
> If the file doesn't exist at all and there are no backups: You could
> try to create a new one by figuring out what the highest serial number
> issued by the internal CA is, incrementing it by one, and echoing that
> into a new serial.txt file. (Setting permissions as appropriate.)
> Although in this case, I'd ask why the file was deleted in the first
> place.

It might be related to https://bugzilla.redhat.com/2088446 but I don't
know any details.
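
If you do recreate the file, note that OpenSSL generally expects the
serial file to hold a single hex number with an even number of digits
(an empty or malformed file produces exactly the "unable to load number
from serial.txt" error shown above).  A sketch, with a made-up starting
serial and run in a scratch directory rather than /etc/pki/ovirt-engine:

```shell
# Illustrative only: write a fresh serial.txt from the highest serial already
# issued by the CA.  The 0x1000A value is made up; determine the real one from
# the certificates the internal CA has actually issued.
cd "$(mktemp -d)"                    # demo directory, not /etc/pki/ovirt-engine
highest=0x1000A
next=$(printf '%X' $(( highest + 1 )))
if [ $(( ${#next} % 2 )) -eq 1 ]; then
    next="0$next"                    # pad to an even number of hex digits
fi
printf '%s\n' "$next" > serial.txt
cat serial.txt                       # prints 01000B
```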

Regards,
Milan

> -Patrick Hibbs
>
> On Wed, 2022-07-20 at 19:44 +, xavi...@rogers.com wrote:
>> Log:
>> 
>> 2022-07-20 17:50:43 UTC - TASK [ovirt-host-deploy-vdsm-certificates :
>> Run PKI enroll request for vdsm and QEMU] ***
>> 2022-07-20 17:50:43 UTC - 
>> 2022-07-20 17:50:43 UTC - {
>>   "status" : "OK",
>>   "msg" : "",
>>   "data" : {
>>     "uuid" : "67f44c2c-edf2-454b-ab5f-a3a6e3076ddc",
>>     "counter" : 179,
>>     "stdout" : "",
>>     "start_line" : 171,
>>     "end_line" : 171,
>>     "runner_ident" : "6b4c5f52-0854-11ed-b044-00163e598f5b",
>>     "event" : "runner_on_failed",
>>     "pid" : 32040,
>>     "created" : "2022-07-20T17:50:43.065710",
>>     "parent_uuid" : "00163e59-8f5b-ba87-8722-02a4",
>>     "event_data" : {
>>   "playbook" : "ovirt-host-deploy.yml",
>>   "playbook_uuid" : "4f7a6915-ae99-445b-ac02-ba66bbd1aa57",
>>   "play" : "all",
>>   "play_uuid" : "00163e59-8f5b-ba87-8722-0008",
>>   "play_pattern" : "all",
>>   "task" : "Run PKI enroll request for vdsm and QEMU",
>>   "task_uuid" : "00163e59-8f5b-ba87-8722-02a4",
>>   "task_action" : "command",
>>   "task_args" : "",
>>   "task_path" : "/usr/share/ovirt-engine/ansible-runner-service-
>> project/project/roles/ovirt-host-deploy-vdsm-
>> certificates/tasks/main.yml:38",
>>   "role" : "ovirt-host-deploy-vdsm-certificates",
>>   "host" : "xnet-node-02.xnet.local",
>>   "remote_addr" : "xnet-node-02.xnet.local",
>>   "res" : {
>>     "results" : [ {
>>   "msg" : "non-zero return code",
>>   "cmd" : [ "/usr/share/ovirt-engine/bin/pki-enroll-
>> request.sh", "--name=xnet-node-02.xnet.local", "--
>> subject=/O=xnet.local/CN=xnet-node-02.xnet.local", "--san=DNS:xnet-
>> node-02.xnet.local", "--days=398", "--timeout=30", "--ca-file=ca", "-
>> -cert-dir=certs", "--req-dir=requests" ],
>>   "stdout" : "",
>>   "stderr" : "Using configuration from openssl.conf\nunable
>> to load number from serial.txt\nerror while loading serial
>> number\n140364123252544:error:0D066096:asn1 encoding
>> routines:a2i_ASN1_INTEGER:short line:crypto/asn1/f_int.c:140:\nCannot
>> sign certificate",
>>   "rc" : 1,
>>   "start" : "2022-07-20 17:50:42.811555",
>>   "end" : "2022-07-20 17:50:42.840405",
>>   "delta" : "0:00:00.028850",
>>   "changed" : true,
>>   "failed" : true,
>>   "invocation" : {
>>     "module_args" : {
>>   "_raw_params" : "\"/usr/share/ovirt-engine/bin/pki-
>> enroll-request.sh\"\n\"--name=xnet-node-02.xnet.local\"\n\"--
>> subject=/O=xnet.local/CN=xnet-node-02.xnet.local\"\n\"--san=DNS:xnet-
>> node-02.xnet.local\"\n\"--days=398\"\n\"--timeout=30\"\n\"--ca-
>> file=ca\"\n\"--cert-dir=certs\"\n\"--req-dir=requests\"\n",
>>   "warn" : true,
>>   "_uses_shell" : false,
>>   "stdin_add_newline" : true,
>>   "strip_empty_ends" : true,
>>   "argv" : null,
>>   "chdir" : null,
>>   "executable" : null,
>>   "creates" : null,
>>   "removes" : null,
>>   "stdin" : null
>>     }
>>   },
>>   "stdout_lines" : [ ],
>>   "stderr_lines" : [ "Using configuration from openssl.conf",
>> "unable to load number from serial.txt", "error while loading serial
>> number", "140364123252544:error:0D066096:asn1 encoding
>> routines:a2i_ASN1_INTEGER:short line:crypto/asn1/f_int.c:140:",
>> "Cannot sign certificate" ],
>>   "_ansible_no_log" : false,
>>   "item" : {
>>     "ou" : "",
>>     "ca_file" : "ca",
>>     "cert_dir" : "certs",
>>     "req_dir" : "requests"
>>   },
>>   "ansible_loop_var" : "item",
>>   "_ansible_item_label" : {
>>     "ou" : "",
>>     "ca_file" : "ca",
>>     "cert_dir" : "certs",
>>     "req_dir" : "requests"
>>   }
>>     

[ovirt-users] Re: Incorrect Max CPU's per VM in libvirt if cluster compatibility 4.6 or 4.7

2022-07-20 Thread Milan Zamazal
"David Sekne"  writes:

> Hello,
>
> Not sure about the feature part. I additionally tested this on oVirt
> 4.4.10.7-1.el8 and it works fine even if cluster compatibility there
> is set to 4.6.

Before oVirt 4.5, MaxNumOfCpusCoefficient was applied only if the
maximum number of the VM vCPU sockets was higher than 16.  Now, it is
applied more strictly (although there are still some exceptions, which
is probably why you see 16 max vCPUs rather than 8).  This means the
MaxNumOfCpusCoefficient value cannot be ignored anymore in some
scenarios.  If you need to hot plug more vCPUs than the number of
initial vCPUs, you should change MaxNumOfCpusCoefficient.

Regards,
Milan

> Engine:
> MaxNumOfVmSockets: 32 version: 4.2
> MaxNumOfVmSockets: 32 version: 4.3
> MaxNumOfVmSockets: 32 version: 4.4
> MaxNumOfVmSockets: 32 version: 4.5
> MaxNumOfVmSockets: 32 version: 4.6
>
> VM:
> 32
>
> Regards,
> David
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MBRDAXKCA5GFRRP2N3LX6A2ZGQM67FE5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HKWJFNDBKFRX562SE5EVI6RAL6Y7DNB4/


[ovirt-users] Re: Incorrect Max CPU's per VM in libvirt if cluster compatibility 4.6 or 4.7

2022-07-19 Thread Milan Zamazal
"David Sekne"  writes:

> Hello,
>
> I recently upgraded our cluster from 4.3 to 4.5 (4.5.1.3-1.el8), I
> raised the cluster compatibility afterwards to 4.7 as well. I noticed
> that the CPU hot plugging does not work above 16 CPU's if cluster
> compatibility is set to 4.6 or 4.7.
>
> Error I get is: 
> Failed to hot set number of CPUS to VM testVM-3. Underlying error
> message: invalid argument: requested vcpus is greater than max
> allowable vcpus for the live domain: 32 > 16
>
> The issue here, I believe, is that the MaxNumOfVmSockets values set on the
> engine per cluster version are not correctly applied in libvirt when a VM is
> started.
>
> My values on the engine:
> MaxNumOfVmSockets: 32 version: 4.2
> MaxNumOfVmSockets: 32 version: 4.3
> MaxNumOfVmSockets: 32 version: 4.4
> MaxNumOfVmSockets: 32 version: 4.5
> MaxNumOfVmSockets: 64 version: 4.6
> MaxNumOfVmSockets: 1 version: 4.7
>
> Values on libvirt for VM's for all cluster compatibility versions:
>
> 4.3
> 32
>
> 4.4
> 32
>
> 4.5
> 32
>
> 4.6
> 16
>
> 4.7
> 16
>
> Is anyone else experiencing this issue (possible BUG)?

It's most likely a feature.  Starting with 4.6, the maximum number of
vCPUs is limited because it consumes resources even if the CPUs are not
plugged in.  Adjusting the sockets/cores/threads configuration of the VM
to put more than one vCPU on a socket, or changing the
MaxNumOfCpusCoefficient config value, should help in your case.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L6MLFRWAWTKDKIPCHHYBSXEJ7K73G7EL/


[ovirt-users] Re: VM Migration Failed

2022-07-15 Thread Milan Zamazal
"KSNull Zero"  writes:

> Is it safe to restart libvirtd on hosts with workloads without entering 
> Maintenance mode ?

Generally no, often yes.  Restarting libvirtd shouldn't cause harm to
the VMs themselves, but it can disrupt running jobs managed by libvirt or
confuse oVirt if some actions are being performed at that moment.  It's
best to do it when there are no migrations (host migrations don't work
for you currently anyway), other jobs (e.g. snapshots), or actions
(e.g. VM startup or shutdown) running on the host.  Even if there are,
it doesn't necessarily mean something will break, but it's a
best-effort/no-guarantees workflow instead of the normal one.

I think just adding the certificate links doesn't require a libvirtd
restart.  And a reload may be enough after changing libvirt
configuration files.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/55HGK5PK2P6WKM4RGPR73HQQUBTJO3AX/


[ovirt-users] Re: VM Migration Failed

2022-07-14 Thread Milan Zamazal
"KSNull Zero"  writes:

> Running oVirt 4.4.5
> VM cannot migrate between hosts.
>
> vdsm.log contains the following error:
> libvirt.libvirtError: operation failed: Failed to connect to remote
> libvirt URI qemu+tls://ovhost01.local/system: authentication failed:
> Failed to verify peer's certificate
>
> Certificates on hosts was renewed some time ago. How this issue can be fixed ?

I think it's https://bugzilla.redhat.com/show_bug.cgi?id=1948376, which
was fixed in 4.4.6.5.

IIRC you need to create links in /etc/pki/vdsm/libvirt-migrate on the
source host from server-*.pem to client-*.pem and make sure

  migrate_tls_x509_verify = 1

is set (it is by default) in /etc/libvirt/qemu.conf.
Restarting libvirtd may be needed afterwards.
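
The link layout can be sketched as follows.  To keep the demo runnable
anywhere, it uses a scratch directory and empty stand-in files; on the
actual source host you would work in /etc/pki/vdsm/libvirt-migrate with
the real certificates:

```shell
# Scratch-directory demo of the client-* -> server-* links described above.
# On a real host: cd /etc/pki/vdsm/libvirt-migrate and use the real files.
cd "$(mktemp -d)"
touch server-cert.pem server-key.pem     # stand-ins for the real certificates
ln -s server-cert.pem client-cert.pem
ln -s server-key.pem client-key.pem
# Afterwards, also check migrate_tls_x509_verify = 1 in /etc/libvirt/qemu.conf.
ls -l client-cert.pem client-key.pem     # both should point at server-* files
```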

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7MBTVNUABJSGXFFAVR6WS72COJ4ZOR4/


[ovirt-users] Re: oVirt 4.5 linux guest vm with host device added to it fails to start

2022-06-15 Thread Milan Zamazal
Don Dupuis  writes:

> Hello
> Anyone have any ideas?
>
> Don
>
> On Fri, Jun 10, 2022 at 11:45 AM Don Dupuis  wrote:
>
>> THis is for version oVirt 4.5.0.8-1. Sorry left out the exact release.
>>
>> Don
>>
>> On Fri, Jun 10, 2022 at 11:41 AM Don Dupuis  wrote:
>>
>>> Hello
>>> I have a RHEL 8.6 based hypervisor with a Mellanox ConnectX-5 IB card
>>> installed with SRIOV enabled. The host device I am assigning is
>>> pci__af_00_2. The card is working as I can talk to other infiniband
>>> interfaces on other servers. Below is the output of lspci.
>>> 3b:00.0 Ethernet controller: Mellanox Technologies MT27800 Family
>>> [ConnectX-5]
>>> 3b:00.1 Ethernet controller: Mellanox Technologies MT27800 Family
>>> [ConnectX-5]
>>> af:00.0 Infiniband controller: Mellanox Technologies MT27800 Family
>>> [ConnectX-5]
>>> af:00.1 Infiniband controller: Mellanox Technologies MT27800 Family
>>> [ConnectX-5 Virtual Function]
>>> af:00.2 Infiniband controller: Mellanox Technologies MT27800 Family
>>> [ConnectX-5 Virtual Function]
>>> af:00.3 Infiniband controller: Mellanox Technologies MT27800 Family
>>> [ConnectX-5 Virtual Function]
>>> af:00.4 Infiniband controller: Mellanox Technologies MT27800 Family
>>> [ConnectX-5 Virtual Function]
>>>
>>> The linux vm is configured as Q35 Chipset with UEFI, 16 cpus, numa
>>> enabled, and cpu pinning enabled. OS is RHEL 7.9. As soon as I start the
>>> vm, I get an immediate error message stating "Cannot run VM. There is no
>>> host that satisfies current scheduling constraints. See below for details:,
>>> The host rvsh002 did not satisfy internal filter HostDevice because some of
>>> the required host devices are unavailable." If I remove the host device
>>> from the vm config, then it starts and runs fine. This setup was working
>>> just fine on RHEL8.4 and oVirt 4.4.7 using the proper driver for RHEL 8.4.

Engine apparently cannot find a host with enough CPUs and free memory,
matching the NUMA and CPU pinning configurations, and having the given
host device available.  According to the log, rvsh002 doesn't have the
host device, other hosts apparently don't satisfy some of the other
conditions.  Also, isn't the VM pinned to some hosts?

Maybe someone could provide better advice, but if you think there is a
host satisfying all the conditions, you can try to start the VM there
with "Run Once" and see if Engine provides a reason why it cannot be
started there.

>>> Here is the engine.log after I press the run button.
>>> 2022-06-10 11:22:10,506-05 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>>> (default task-1) [81144b66-e5f9-474e-a922-e2ce49cdc8ca] Lock Acquired to
>>> object
>>> 'EngineLock:{exclusiveLocks='[de54b903-7204-4966-95a3-05f64ed17f68=VM]',
>>> sharedLocks=''}'
>>> 2022-06-10 11:22:10,520-05 INFO
>>>  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default
>>> task-1) [81144b66-e5f9-474e-a922-e2ce49cdc8ca] START,
>>> IsVmDuringInitiatingVDSCommand(
>>> IsVmDuringInitiatingVDSCommandParameters:{vmId='de54b903-7204-4966-95a3-05f64ed17f68'}),
>>> log id: 6faf22a5
>>> 2022-06-10 11:22:10,520-05 INFO
>>>  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default
>>> task-1) [81144b66-e5f9-474e-a922-e2ce49cdc8ca] FINISH,
>>> IsVmDuringInitiatingVDSCommand, return: false, log id: 6faf22a5
>>> 2022-06-10 11:22:10,560-05 INFO
>>>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
>>> [] Candidate host 'rvsh002' ('f68352c2-6ddc-44ae-a19b-9262e92327f8') was
>>> filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HostDevice'
>>> (correlation id: null)
>>> 2022-06-10 11:22:10,569-05 ERROR
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (default task-1) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM
>>> ws006 due to a failed validation: [Cannot run VM. There is no host that
>>> satisfies current scheduling constraints. See below for details:, The host
>>> rvsh002 did not satisfy internal filter HostDevice because some of the
>>> required host devices are unavailable.] (User: admin@internal-authz).
>>> 2022-06-10 11:22:10,569-05 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
>>> (default task-1) [] Validation of action 'RunVm' failed for user
>>> admin@internal-authz. Reasons:
>>> VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName
>>> rvsh002,$filterName
>>> HostDevice,VAR__DETAIL__HOST_DEVICE_UNAVAILABLE,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL
>>> 2022-06-10 11:22:10,570-0
>>>
>>> There was nothing in the vdsm.log on the hypervisor related to this issue
>>> that I could see after hitting the run button.

Engine couldn't find a matching host, so there was no attempt to start
the VM anywhere.

>>> Thanks
>>> Don
>>>
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code 


[ovirt-users] Re: VmMediatedDevices help for vGPU in overt 4.5

2022-06-09 Thread Milan Zamazal
Don Dupuis  writes:

> I am looking for an example on how to use the new VmMediatedDevices service
> to add an Nvidia vGPU in ovirt to guest vms. I had it working just fine in
> oVirt 4.4 when using the custom_properties method. Just need to understand
> the new method/way of doing it correctly using the python3-ovirt-engine-sdk.

You can retrieve the vGPU properties like:

  service = connection.system_service().vms_service().vm_service('123').mediated_devices_service()
  vgpu_devices = service.list()
  first_vgpu = vgpu_devices[0]
  # id of the device
  first_vgpu.id
  # properties of the device
  [(p.name, p.value) for p in first_vgpu.spec_params]

Example of adding and removing a vGPU device:

  service = connection.system_service().vms_service().vm_service('123').mediated_devices_service()
  spec_params = [ovirtsdk4.types.Property(name=name, value=value)
                 for name, value in (('mdevType', 'nvidia-22'),
                                     ('nodisplay', 'true'),
                                     ('driverParams', 'enable_uvm=1'))]
  service.add(ovirtsdk4.types.VmMediatedDevice(spec_params=spec_params))
  service.device_service('456').remove()
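The spec_params construction is purely mechanical; a stdlib-only sketch of the same mapping, with a namedtuple standing in for ovirtsdk4.types.Property (which likewise just carries a name and a value):

```python
from collections import namedtuple

# Stand-in for ovirtsdk4.types.Property, which carries a name/value pair.
Property = namedtuple('Property', ['name', 'value'])

def to_spec_params(settings):
    """Turn a plain dict like {'mdevType': 'nvidia-22'} into a spec_params list."""
    return [Property(name=name, value=value) for name, value in settings.items()]
```

With the real SDK, the resulting list would be passed to VmMediatedDevice(spec_params=...) exactly as in the example above.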

HTH,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SDP5P5XF2W22ZE6DOGP7EXTLDLDSEX7B/


[ovirt-users] Re: Engine Certificate renewal caused "Error while executing action InstallVds: Internal Engine Error"

2022-06-06 Thread Milan Zamazal
Patrick Hibbs  writes:

> Second problem: After having renewed the engine certificate, the engine
> can no longer update a host certificate nor (re-)install a host. Giving
> me the following error in the Admin WebUI: "Error while executing
> action InstallVds: Internal Engine Error"
>
> I've attached the logs from the engine.

The attached log refers to another log
/var/log/ovirt-engine/host-deploy/ovirt-enroll-certs-ansible-20220603115134-virt02.codenet-1a0acee9-f33e-44e3-a863-cd2ee0a4289e.log
where the actual error should be visible.

Also, what's your Engine version?

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RCJPFWILCSD4BTABKNGKWV7TZAPD4NFR/


[ovirt-users] Re: certification expires: PKIX path validation failed

2022-04-13 Thread Milan Zamazal
Nathanaël Blanchet  writes:

> Hi,
>
> Some of my hosts came into a non responsive state since there
> certicate had expired:
>
> VDSM palomo command Get Host Capabilities failed: PKIX path validation
> failed: java.security.cert.CertPathValidatorException: validity check
> failed
>
> |openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem
>  palomo notAfter=Apr 6 11:09:05 2022 GMT |
>
> The recommanded path to update certificates is to put hosts into
> maintenance and enroll certificates.
> But I can't anymore live migrate vms since the certificate is expired:
>
> 2022-04-13 10:34:12,022+0200 ERROR (migsrc/bf0f7628) [virt.vm]
> (vmId='bf0f7628-d70b-47a4-8569-5430e178f429') [SSL:
> CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)
> (migration:331)
>
>
> So is there a way to disable tls to migrate these vms so as to put the
> host into maintenance?

Do you use encrypted migrations?  I think the client certificate is
verified only with encrypted migrations.  You can disable encrypted
migrations in the web UI among other migration settings in cluster or VM
settings.

If it fails also with non-encrypted migrations, *maybe* removing the
client certificate could help.

If disabling encrypted migrations is not possible, you can try to set
migrate_tls_x509_verify option in /etc/libvirt/qemu.conf on the
destination host to 0 (libvirt restart may be needed to apply the
changed setting).
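For reference, that would mean a line like the following in
/etc/libvirt/qemu.conf on the destination host (just a sketch of the
setting mentioned above; restart libvirt afterwards):

```
# /etc/libvirt/qemu.conf on the destination host:
# skip verification of the client certificate for TLS migrations
migrate_tls_x509_verify = 0
```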

I guess there could be also a way to run the Ansible role for updating
the certificates manually (not recommended etc. etc. but perhaps still
useful in this case) without putting the host into the maintenance.
Just a speculation, I don’t know whether it’s actually possible and how
to do it if it is.
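As an aside, the `openssl x509 -noout -enddate` check from the original
post is easy to script across hosts; a small stdlib-only sketch, assuming
the date format openssl prints (e.g. "notAfter=Apr  6 11:09:05 2022 GMT"):

```python
from datetime import datetime, timezone

def parse_not_after(line):
    """Parse a 'notAfter=...' line as printed by `openssl x509 -noout -enddate`."""
    value = " ".join(line.split("=", 1)[1].split())  # normalize whitespace
    return datetime.strptime(value, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)

def days_until_expiry(line, now=None):
    """Days remaining before the certificate expires (negative if already expired)."""
    now = now or datetime.now(timezone.utc)
    return (parse_not_after(line) - now).days
```

Running it over the output collected from each host's
/etc/pki/vdsm/certs/vdsmcert.pem would flag certificates needing renewal
before they expire and take the host non-responsive.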

Regards,
Milan

> No possibility of migration would imply to stop production vms, this
> is what we absolutely don't want!
>
> Any help much appreciated.
>
> ||
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/65YXCBQAD47KARXCVGUYVBMMBQMYLVFV/


[ovirt-users] Re: Console - VNC password is 12 characters long, only 8 permitted

2022-02-14 Thread Milan Zamazal
Francesco Lorenzini  writes:

> Hi Milan,
>
> thank you for your answer.
>
> So there is no other way/workaround? We must wait the fix in the
> engine and then upgrade? Maybe a downgrade of libvirt(?).

I can't think of any other workaround, short of modifying the sources,
than downgrading libvirt until the fixed Engine is installed.

> I was looking up some config file in the host under /etc/libvirt and
> found the parameters vnc_password in qemu.conf file. I'm not sure that 
> setting a password per host in this config file works, casue it is
> still passed via xml...

In theory, you could set the default password there and remove passwords
from the domain XMLs using Vdsm hooks.  You would have to do it on all
the hosts or handle migrations accordingly.  Downgrading libvirt looks
much easier.
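To illustrate the XML part of such a hook: a minimal stdlib-only sketch
that strips the per-domain VNC password attributes (passwd/passwdValidTo)
from a domain XML. In a real Vdsm hook this function would be wired
through hooking.read_domxml()/write_domxml(), which is not shown here:

```python
import xml.etree.ElementTree as ET

def drop_vnc_password(domxml):
    """Strip per-domain VNC passwords from a libvirt domain XML so that
    libvirt falls back to the vnc_password default from qemu.conf."""
    root = ET.fromstring(domxml)
    for graphics in root.iter('graphics'):
        if graphics.get('type') == 'vnc':
            graphics.attrib.pop('passwd', None)
            graphics.attrib.pop('passwdValidTo', None)
    return ET.tostring(root, encoding='unicode')
```

As noted above, this would have to be deployed on all hosts (and taken
into account for migrations) to be of any use.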

>> The default VNC password. Only 8 bytes are significant for
>> # VNC passwords. This parameter is only used if the per-domain
>> # XML config does not already provide a password.
>
> Francesco
>
> Il 14/02/2022 12:41, Milan Zamazal ha scritto:
>> francesco--- via Users  writes:
>>
>>> Hi all,
>>>
>>> I'm using websockify + noVNC for expose the vm console via browser getting  
>>> the graphicsconsoles ticket via API. Everything works fine for every other 
>>> host that I have (more than 200), the console works either via oVirt engine 
>>> and via browser) but just for a single host (CentOS Stream release 8, oVirt 
>>> 4.4.9) the console works only via engine but when I try the connection via 
>>> browser I get the following error (vdsm log of the host):
>>>
>>>   ERROR FINISH updateDevice error=unsupported configuration: VNC password 
>>> is 12 characters long, only 8 permitted
>>>   Traceback (most recent call last):
>>> File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, 
>>> in method
>>>   ret = func(*args, **kwargs)
>>> File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 372, in 
>>> updateDevice
>>>   return self.vm.updateDevice(params)
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3389, in 
>>> updateDevice
>>>   return self._updateGraphicsDevice(params)
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3365, in 
>>> _updateGraphicsDevice
>>>   params['params']
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5169, in 
>>> _setTicketForGraphicDev
>>>   self._dom.updateDeviceFlags(xmlutils.tostring(graphics), 0)
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 
>>> 101, in f
>>>   ret = attr(*args, **kwargs)
>>> File 
>>> "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 
>>> 131, in wrapper
>>>   ret = f(*args, **kwargs)
>>> File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 
>>> 94, in wrapper
>>>   return func(inst, *args, **kwargs)
>>> File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3244, in 
>>> updateDeviceFlags
>>>   raise libvirtError('virDomainUpdateDeviceFlags() failed')
>>>   libvirt.libvirtError: unsupported configuration: VNC password is 12 
>>> characters long, only 8 permitted
>>>
>>>
>>> The error is pretty much self explanatory but, I can't manage to
>>> figure out why only on this server
>> Hi,
>>
>> this happens with libvirt 8.0.
>>
>>> and I wonder if I can set the length of the generated vnc password
>>> somewhere.
>> I don't think so, it must be fixed in Engine.  See
>> https://github.com/oVirt/ovirt-engine/commit/a1e7e39348550b575f1f01b701105f9e1066b09f
>> for more details.
>>
>> Regards,
>> Milan
>> ___
>> Users mailing list --users@ovirt.org
>> To unsubscribe send an email tousers-le...@ovirt.org
>> Privacy Statement:https://www.ovirt.org/privacy-policy.html
>> oVirt Code of 
>> Conduct:https://www.ovirt.org/community/about/community-guidelines/
>> List
>> Archives:https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJMULDXFHBYK3GNICAJRASQCSLBIFJV7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CE3A2WTEHVE2NPMPYTQS5WAVHEESND4X/


[ovirt-users] Re: Console - VNC password is 12 characters long, only 8 permitted

2022-02-14 Thread Milan Zamazal
francesco--- via Users  writes:

> Hi all,
>
> I'm using websockify + noVNC for expose the vm console via browser getting  
> the graphicsconsoles ticket via API. Everything works fine for every other 
> host that I have (more than 200), the console works either via oVirt engine 
> and via browser) but just for a single host (CentOS Stream release 8, oVirt 
> 4.4.9) the console works only via engine but when I try the connection via 
> browser I get the following error (vdsm log of the host):
>
>  ERROR FINISH updateDevice error=unsupported configuration: VNC password is 
> 12 characters long, only 8 permitted 
>  Traceback (most recent call last):   
>
>File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in 
> method   
>  ret = func(*args, **kwargs)  
>
>File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 372, in 
> updateDevice
>  return self.vm.updateDevice(params)  
>
>File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3389, in 
> updateDevice   
>  return self._updateGraphicsDevice(params)
>
>File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3365, in 
> _updateGraphicsDevice  
>  params['params'] 
>
>File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5169, in 
> _setTicketForGraphicDev
>  self._dom.updateDeviceFlags(xmlutils.tostring(graphics), 0)  
>
>File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, 
> in f
>  ret = attr(*args, **kwargs)  
>
>File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", 
> line 131, in wrapper
>  ret = f(*args, **kwargs) 
>
>File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, 
> in wrapper  
>  return func(inst, *args, **kwargs)   
>
>File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3244, in 
> updateDeviceFlags 
>  raise libvirtError('virDomainUpdateDeviceFlags() failed')
>
>  libvirt.libvirtError: unsupported configuration: VNC password is 12 
> characters long, only 8 permitted
>
>
> The error is pretty much self explanatory but, I can't manage to
> figure out why only on this server

Hi,

this happens with libvirt 8.0.

> and I wonder if I can set the length of the generated vnc password
> somewhere.

I don't think so, it must be fixed in Engine.  See
https://github.com/oVirt/ovirt-engine/commit/a1e7e39348550b575f1f01b701105f9e1066b09f
for more details.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJMULDXFHBYK3GNICAJRASQCSLBIFJV7/


[ovirt-users] Re: Q35 with BIOS broken

2021-12-08 Thread Milan Zamazal
"mediocre.slacker--- via Users"  writes:

> To add, it was pretty clear the VM never came up just by looking at
> top/htop. qemu-kvm was using negligible (0.7% of a core) CPU. I'm 99%
> certain this is the cause of my woes. Using Q35 with UEFI should be
> fine, as most systems are fine booting from UEFI. However, I'd like to
> have the option to use the BIOS version in case it is needed for some
> reason.

Do you have QEMU 6.1?  If yes then you've probably hit
https://gitlab.com/qemu-project/qemu/-/issues/641.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/63YQEJGRYMGUXDNDXFA6RVIMV7L7WD6A/


[ovirt-users] Re: Set fixed VNC/Spice Password for VMs.

2021-07-30 Thread Milan Zamazal
Merlin Timm  writes:

> actually I rather wanted to know how to generate a config with
> Ovirt::Display. I didn't really understand what I have to do to
> generate a config.

I've never tried it, but I think you should fetch the Perl library and
then run a Perl script according to the example in the Synopsis section of
https://metacpan.org/pod/Ovirt::Display

> Am 30.07.2021 14:04 schrieb Milan Zamazal:
>> Merlin Timm  writes:
>> 
>>> Hey,
>>> Thanks for the answers!
>>> I want to try the perl solution. One, maybe stupid, question: how
>>> do i run this perl module?
>>> Do i run it on the Host or from my local machne? I am a litte bit 
>>> confused.
>> As I understand it, you can run it from anywhere where Engine REST
>> API
>> is reachable from.
>> Regards,
>> Milan
>> 
>>> Could someone explain it to me?
>>> Best regarda
>>> Am 8. Juli 2021 16:05:42 MESZ schrieb Milan Zamazal 
>>> :
>>>> Sandro Bonazzola  writes:
>>>> 
>>>>> Il giorno gio 8 lug 2021 alle ore 13:38 Sandro Bonazzola <
>>>>> sbona...@redhat.com> ha scritto:
>>>>> 
>>>>>> +Milan Zamazal  , +Arik Hadas
>>>>>>  , +Michal
>>>>>> Skrivanek  any hint?
>>>>>> 
>>>>> I found https://metacpan.org/pod/Ovirt::Display but I think there 
>>>>> should be
>>>>> an easier way within the engine to configure this.
>>>>> 
>>>>> 
>>>>>> Il giorno mar 6 lug 2021 alle ore 14:01 Merlin Timm 
>>>>>> 
>>>>>> ha scritto:
>>>>>> 
>>>>>>> Good day to all,
>>>>>>> I have a question about the console configuration of the VMs:
>>>>>>> By default, for each console connection to a VM, a password is
>>>>>>> set for
>>>>>>> 120 seconds, after that you can't use it again. We currently
>>>>>>> have the
>>>>>>> following concern:
>>>>>>> We want to access and control the VMs via the VNC/Spice of the 
>>>>>>> Ovirt
>>>>>>> host. We have already tried to use the password from the
>>>>>>> console.vv for
>>>>>>> the connection and that works so far. Unfortunately we have to
>>>>>>> do this
>>>>>>> every 2 minutes when we want to connect again. We are currently
>>>>>>> building
>>>>>>> an automatic test pipeline and for this we need to access the VMs
>>>>>>> remotely before OS start and we want to be independent of a VNC
>>>>>>> server
>>>>>>> on the guest. This is only possible if we could connect to the
>>>>>>> VNC/Spice
>>>>>>> server from the Ovirt host.
>>>>>>> My question: would it be possible to fix the password or read
>>>>>>> it out via
>>>>>>> api every time you want to connect?
>>>> A one time password is set every time the console is opened, for 
>>>> those
>>>> 120 seconds.  Unfortunately, the 120 seconds limit seems to be
>>>> hardwired
>>>> in Engine sources.  So apparently the only chance would be to set the
>>>> password directly on the host using VM.updateDevice VDSM API call.
>>>> It
>>>> looks like this normally:
>>>>  VM.updateDevice(params={'deviceType': 'graphics', 'password':
>>>> '', 'disconnectAction': 'NONE', 'params': {'vncUsername':
>>>> 'vnc-630b9cae-a983-4ab0-a9ac-6b8728f8014d', 'fips': 'false',
>>>> 'userName': 'admin', 'userId':
>>>> 'fd2c5e14-a8c3-11eb-951c-2a9574de53b6'}, 'ttl': 120, 'graphicsType':
>>>> 'spice'})
>>>> This way it's possible to set a password and its lifetime (`ttl'
>>>> parameter).  Of course, it's needed to find out the host the VM
>>>> runs on,
>>>> a way to call the API (running vdsm-client directly on the host
>>>> may be
>>>> the easiest way), how to make/use the *.vv ticket (you can use the
>>>> same
>>>> password all the time) and to accept collisions with different
>>>> settings
>>>> if someone opens the console from the web UI.
>>>> In the end result, using the Perl library mentioned by Sandro
>>>> above may
>>>> be an easier solution.
>>>> Or another option

[ovirt-users] Re: Set fixed VNC/Spice Password for VMs.

2021-07-30 Thread Milan Zamazal
Merlin Timm  writes:

> Hey, 
>
> Thanks for the answers! 
>
> I want to try the perl solution. One, maybe stupid, question: how do i run 
> this perl module?
>
> Do I run it on the host or from my local machine? I am a little bit confused.

As I understand it, you can run it from anywhere the Engine REST API
is reachable.

Regards,
Milan

> Could someone explain it to me?
>
> Best regards
>
> Am 8. Juli 2021 16:05:42 MESZ schrieb Milan Zamazal :
>>Sandro Bonazzola  writes:
>>
>>> Il giorno gio 8 lug 2021 alle ore 13:38 Sandro Bonazzola <
>>> sbona...@redhat.com> ha scritto:
>>>
>>>> +Milan Zamazal  , +Arik Hadas  , 
>>>> +Michal
>>>> Skrivanek  any hint?
>>>>
>>>
>>> I found https://metacpan.org/pod/Ovirt::Display but I think there should be
>>> an easier way within the engine to configure this.
>>>
>>>
>>>
>>>>
>>>> Il giorno mar 6 lug 2021 alle ore 14:01 Merlin Timm 
>>>> ha scritto:
>>>>
>>>>> Good day to all,
>>>>>
>>>>> I have a question about the console configuration of the VMs:
>>>>>
>>>>> By default, for each console connection to a VM, a password is set for
>>>>> 120 seconds, after that you can't use it again. We currently have the
>>>>> following concern:
>>>>>
>>>>> We want to access and control the VMs via the VNC/Spice of the Ovirt
>>>>> host. We have already tried to use the password from the console.vv for
>>>>> the connection and that works so far. Unfortunately we have to do this
>>>>> every 2 minutes when we want to connect again. We are currently building
>>>>> an automatic test pipeline and for this we need to access the VMs
>>>>> remotely before OS start and we want to be independent of a VNC server
>>>>> on the guest. This is only possible if we could connect to the VNC/Spice
>>>>> server from the Ovirt host.
>>>>>
>>>>> My question: would it be possible to fix the password or read it out via
>>>>> api every time you want to connect?
>>
>>A one time password is set every time the console is opened, for those
>>120 seconds.  Unfortunately, the 120 seconds limit seems to be hardwired
>>in Engine sources.  So apparently the only chance would be to set the
>>password directly on the host using VM.updateDevice VDSM API call.  It
>>looks like this normally:
>>
>>  VM.updateDevice(params={'deviceType': 'graphics', 'password':
>> '', 'disconnectAction': 'NONE', 'params': {'vncUsername':
>> 'vnc-630b9cae-a983-4ab0-a9ac-6b8728f8014d', 'fips': 'false',
>> 'userName': 'admin', 'userId':
>> 'fd2c5e14-a8c3-11eb-951c-2a9574de53b6'}, 'ttl': 120, 'graphicsType':
>> 'spice'})
>>
>>This way it's possible to set a password and its lifetime (`ttl'
>>parameter).  Of course, it's needed to find out the host the VM runs on,
>>a way to call the API (running vdsm-client directly on the host may be
>>the easiest way), how to make/use the *.vv ticket (you can use the same
>>password all the time) and to accept collisions with different settings
>>if someone opens the console from the web UI.
>>
>>In the end result, using the Perl library mentioned by Sandro above may
>>be an easier solution.
>>
>>Or another option is to submit a patch to Engine to make the timeout
>>configurable (look for TICKET_VALIDITY_SECONDS in the sources).
>>
>>Regards,
>>Milan
>>
>>>>> I would appreciate a reply very much!
>>>>>
>>>>> Best regards
>>>>> Merlin Timm
>>>>> ___
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDPGLBQ4DWE64NATDDFDUB2TZLAHS6SV/
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Sandro Bonazzola
>>>>
>>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>>
>>>> Red Hat EMEA <https://www.redhat.com/>
>>>>
>>>> sbona...@redhat.com
>>>> <https://www.redhat.com/>
>>>>
>>>> *Red Hat respects your work life balance. Therefore there is no need to
>>>> answer this email out of your office hours.
>>>> <https://mojo.redhat.com/docs/DOC-1199578>*
>>>>
>>>>
>>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJYHDJBZCPHMHVM3ZDYINMYE5HKQ4WES/


[ovirt-users] Re: VM CPU Topology

2021-07-30 Thread Milan Zamazal
nelson.lamei...@lyra-network.com writes:

> Hello,
>
> We are currently running oVirt 4.3.10
> Our oVirt hypervisors (HV) have 2 cpu sockets * 6 cores * HT = 24 vcpu
> Our VMs (centos7) range globally from 2vcpu to 8vcpu
> oVirt allows to configure - per VM - the following 3 advanced
> parameters : virtual_sockets : cores_per_virtual_socket :
> threads_per_core
>
> We make sure that threads per core is always 1 (so no question there)
> But, for the other 2 parameters, we are unsure of the correct
> configuration, and if there is a performance penalty on bad
> configuration.
>
> Let's consider a 4vcpu VM
>
> 1- Is there a performance difference betwenn 1:4:1 and 4:1:1 configuration ?
> 2- When should we opt for one or another configuration ?
> 2- Our VMs total CPU provisionning sum is twice the hypervisors
> capacity, but they are mostly idle so it is not an issue, but can this
> influence configuration choice above ?

I'm not sure anybody knows an ultimate answer and/or a simple rule of
thumb.  My guess would be that generally there should be no significant
difference.  But there may be application specific considerations,
especially when NUMA or something similar is involved.

I think the best what can be done is to test both the configurations
with your particular applications and setup and see if any of them
provides a systematically better performance than the other one.
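To make the 1:4:1 vs. 4:1:1 comparison concrete: they are just two
factorizations of the same vCPU count. A small illustrative sketch (not
part of oVirt) enumerating the possibilities for a given VM:

```python
def topologies(vcpus, threads_per_core=1):
    """All (sockets, cores_per_socket, threads_per_core) combinations
    whose product equals the requested vCPU count."""
    cores_total = vcpus // threads_per_core
    return [(sockets, cores_total // sockets, threads_per_core)
            for sockets in range(1, cores_total + 1)
            if cores_total % sockets == 0]

# For a 4-vCPU VM: [(1, 4, 1), (2, 2, 1), (4, 1, 1)]
```

Any of these presents the guest with 4 vCPUs; which one performs better
depends on the workload and host topology, hence the advice to benchmark.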

Regards,
Milan

> Thank you for any information that can enlighthen us, since we are
> worried that we are suffering from bad performance due to naive cpu
> configuration choices.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4WUNPQ5WNUVJQAHAJTH4GAJYOTTYTIMB/


[ovirt-users] Re: oVirt and ARM

2021-07-14 Thread Milan Zamazal
Arik Hadas  writes:

> On Wed, Jul 14, 2021 at 10:36 AM Milan Zamazal  wrote:
>
>> Marko Vrgotic  writes:
>>
>> > Dear Arik and Milan,
>> >
>> > In the meantime, I was asked to check if in current 4.4 version or
>> > coming 4.5, are/will there any capabilities or options of emulating
>> > aarch64 on x86_64 platform and if so, what would be the steps to
>> > test/enable it.
>> >
>> > Can you provide some information?
>>
>> Hi Marko,
>>
>> I don't think there is a way to emulate a non-native architecture.
>> Engine doesn't have ARM support and it cannot handle ARM (native or
>> emulated) hosts.  You could try to run emulated ARM VMs presented as x86
>> to Engine using Vdsm hooks but I doubt it would work.
>>
>
> Oh I just sent a draft I had in my mailbox without noticing this comment
> and I see we both mentioned Vdsm hook
> What is the source of the doubts about Vdsm hooks to work for this?

It's possible to override the domain XML obtained from Engine to change
it from x86 to ARM but Engine will get back the non-x86 domain XML.
Engine may not care about the emulator but perhaps it can be confused by
the reported CPU etc.  There can be problems with devices and
architecture specific settings in both the directions.  Engine will base
assumptions about the VM capabilities based on x86, which won't exactly
match ARM.

Maybe it'd be possible to simply run and stop a VM with some effort and
it would be enough for certain purposes.  But for more than that it's a
question whether the effort would be better spent on implementing a
proper architecture support.

>> I'm afraid the only way is to add ARM support to oVirt.  My former
>> colleague has played with running oVirt on Raspberry Pi hosts some years
>> ago (there are traces of that effort in Vdsm) and I think adding ARM
>> support should be, at least in theory, possible.  Particular features
>> available would be mostly dependent on ARM support in QEMU and libvirt.
>
>
>> Regards,
>> Milan
>>
>> > -
>> > kind regards/met vriendelijke groeten
>> >
>> > Marko Vrgotic
>> > Sr. System Engineer @ System Administration
>> >
>> > ActiveVideo
>> > o: +31 (35) 6774131
>> > m: +31 (65) 5734174
>> > e: m.vrgo...@activevideo.com<mailto:m.vrgo...@activevideo.com>
>> > w: www.activevideo.com<http://www.activevideo.com>
>> >
>> > ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein
>> > 1.1217 WJ Hilversum, The Netherlands. The information contained in
>> > this message may be legally privileged and confidential. It is
>> > intended to be read only by the individual or entity to whom it is
>> > addressed or by their designee. If the reader of this message is not
>> > the intended recipient, you are on notice that any distribution of
>> > this message, in any form, is strictly prohibited.  If you have
>> > received this message in error, please immediately notify the sender
>> > and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and
>> > delete or destroy any copy of this message.
>> >
>> >
>> >
>> > From: Sandro Bonazzola 
>> > Date: Friday, 9 July 2021 at 15:37
>> > To: Marko Vrgotic , Arik Hadas
>> > , Milan Zamazal 
>> > Cc: Evgheni Dereveanchin , Zhenyu Zheng
>> > , Joey Ma ,
>> > users@ovirt.org 
>> > Subject: Re: [ovirt-users] oVirt and ARM
>> >
>> > ***CAUTION: This email originated from outside of the organization. Do
>> > not click links or open attachments unless you recognize the
>> > sender!!!***
>> >
>> >
>> > Il giorno ven 9 lug 2021 alle ore 11:00 Marko Vrgotic
>> > mailto:m.vrgo...@activevideo.com>> ha
>> > scritto:
>> > Hi Sandro and the rest of oVirt gurus,
>> >
>> > My managers are positive regarding helping provide some ARM hardware,
>> > but it would not happened earlier than three months from now, as we
>> > are in process of establishing certain relationship with ARM HW
>> > vendor.
>> >
>> > T news!
>> >
>> >
>> > In the meantime, I was asked to check if in current 4.4 version or
>> > coming 4.5, are/will there any capabilities or options of emulating
>> > aarch64 on x86_64 platform and if so, what would be the steps to
>> > test/enable it.
>> >
>> > +Arik Hadas<mailto:aha...@redhat.com> , +Milan Zamazal> mzama...@redhat.com> ?
>> >
>> >
>> > Kindly aw

[ovirt-users] Re: oVirt and ARM

2021-07-14 Thread Milan Zamazal
Marko Vrgotic  writes:

> Dear Arik and Milan,
>
> In the meantime, I was asked to check if in current 4.4 version or
> coming 4.5, are/will there any capabilities or options of emulating
> aarch64 on x86_64 platform and if so, what would be the steps to
> test/enable it.
>
> Can you provide some information?

Hi Marko,

I don't think there is a way to emulate a non-native architecture.
Engine doesn't have ARM support and it cannot handle ARM (native or
emulated) hosts.  You could try to run emulated ARM VMs presented as x86
to Engine using Vdsm hooks but I doubt it would work.

I'm afraid the only way is to add ARM support to oVirt.  My former
colleague has played with running oVirt on Raspberry Pi hosts some years
ago (there are traces of that effort in Vdsm) and I think adding ARM
support should be, at least in theory, possible.  Particular features
available would be mostly dependent on ARM support in QEMU and libvirt.

Regards,
Milan

> -
> kind regards/met vriendelijke groeten
>
> Marko Vrgotic
> Sr. System Engineer @ System Administration
>
> ActiveVideo
> o: +31 (35) 6774131
> m: +31 (65) 5734174
> e: m.vrgo...@activevideo.com<mailto:m.vrgo...@activevideo.com>
> w: www.activevideo.com<http://www.activevideo.com>
>
>
>
>
> From: Sandro Bonazzola 
> Date: Friday, 9 July 2021 at 15:37
> To: Marko Vrgotic , Arik Hadas
> , Milan Zamazal 
> Cc: Evgheni Dereveanchin , Zhenyu Zheng
> , Joey Ma ,
> users@ovirt.org 
> Subject: Re: [ovirt-users] oVirt and ARM
>
>
>
> Il giorno ven 9 lug 2021 alle ore 11:00 Marko Vrgotic
> mailto:m.vrgo...@activevideo.com>> ha
> scritto:
> Hi Sandro and the rest of oVirt gurus,
>
> My managers are positive regarding helping provide some ARM hardware,
> but it would not happened earlier than three months from now, as we
> are in process of establishing certain relationship with ARM HW
> vendor.
>
> T news!
>
>
> In the meantime, I was asked to check if in current 4.4 version or
> coming 4.5, are/will there any capabilities or options of emulating
> aarch64 on x86_64 platform and if so, what would be the steps to
> test/enable it.
>
> +Arik Hadas<mailto:aha...@redhat.com> , +Milan 
> Zamazal<mailto:mzama...@redhat.com> ?
>
>
> Kindly awaiting your reply.
>
> Marko Vrgotic
>
> From: Marko Vrgotic 
> mailto:m.vrgo...@activevideo.com>>
> Date: Monday, 28 June 2021 at 15:38
> To: Sandro Bonazzola
> mailto:sbona...@redhat.com>>, Evgheni
> Dereveanchin mailto:edere...@redhat.com>>
> Cc: Zhenyu Zheng
> mailto:zhengzhenyul...@gmail.com>>, Joey Ma
> mailto:majunj...@gmail.com>>,
> users@ovirt.org<mailto:users@ovirt.org>
> mailto:users@ovirt.org>>
> Subject: Re: [ovirt-users] oVirt and ARM
> Hi Sandro,
>
> I will check with my managers if we have and could spare some hardware
> to contribute developing for oVirt.
>
>
> -
> kind regards/met vriendelijke groeten
>
> Marko Vrgotic
> Sr. System Engineer @ System Administration
>
> ActiveVideo
> o: +31 (35) 6774131
> m: +31 (65) 5734174
> e: m.vrgo...@activevideo.com<mailto:m.vrgo...@activevideo.com>
> w: www.activevideo.com<http://www.activevideo.com>
>
>
>

[ovirt-users] Re: Unable to migrate VMs to or from oVirt node 4.4.7

2021-07-12 Thread Milan Zamazal
Nathaniel Roach via Users  writes:

> On 10/7/21 9:07 pm, Nathaniel Roach wrote:
>> On 10/7/21 3:31 am, Nir Soffer wrote:
>>> On Fri, Jul 9, 2021 at 5:57 PM nroach44--- via Users  
>>> wrote:
>
 Hi All,

 After upgrading some of my hosts to 4.4.7, and after fixing the
 policy issue, I'm no longer able to migrate VMs to or from 4.4.7
 hosts. Starting them works fine regardless of the host version.

 HE 4.4.7.6-1.el8, Linux and Windows VMs.

 The log on the receiving end (4.4.7 in this case):
 VDSM:
 2021-07-09 22:02:17,491+0800 INFO (libvirt/events) [vds] Channel
 state for vm_id=5d11885a-37d3-4f68-a953-72d808f43cdd changed
 from=UNKNOWN(-1) to=disconnected(2) (qemuguestagent:289)
 2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm]
 (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') underlying process
 disconnected (vm:1134)
 2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm]
 (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Release VM resources
 (vm:5313)
 2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm]
 (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection
 (guestagent:438)
 2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [virt.vm]
 (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection
 (guestagent:438)
 2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [vdsm.api]
 START
 inappropriateDevices(thiefId='5d11885a-37d3-4f68-a953-72d808f43cdd')
 from=internal, task_id=7abe370b-13bc-4c49-bf02-2e40db142250
 (api:48)
 2021-07-09 22:02:55,544+0800 WARN (vm/5d11885a) [virt.vm]
 (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Couldn't destroy
 incoming VM: Domain not found: no domain with matching uuid
 '5d11885a-37d3-4f68-a953-72d808f43cdd' (vm:4046)
 2021-07-09 22:02:55,544+0800 INFO (vm/5d11885a) [virt.vm]
 (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Changed state to
 Down: VM destroyed during the startup (code=10) (vm:1895)

 syslog shows:
 Jul 09 22:35:01 HOSTNAME abrt-hook-ccpp[177862]: Process 177022
 (qemu-kvm) of user 107 killed by SIGABRT - dumping core

 qemu:
 qemu-kvm: ../util/yank.c:107: yank_unregister_instance: Assertion
 `QLIST_EMPTY(&entry->yankfns)' failed.
 2021-07-09 14:02:54.521+: shutting down, reason=failed
>>> Looks like another qemu 6.0.0 regression. Please file ovirt bug for this.
>>>
>>> Note that on RHEL we are still using qemu 5.2.0. qemu 6.0.0 is expected
>>> in RHEL 8.5.
>> Filed: https://bugzilla.redhat.com/show_bug.cgi?id=1981005
 When migrating from 4.4.7 to 4.4.6, syslog shows:
 Jul 09 22:36:36 HOSTNAME libvirtd[2775]: unsupported configuration: 
 unknown audio type 'spice'
>>> Sharing vm xml can help to understand this issue.
>>
>> Looks like it will be this section:
>>
>>     
>>   
>>   
>>     
>>     
>>     
>>     > passwdValidTo='1970-01-01T00:00:01'>
>>   
>>   
>>   
>>   
>>   
>>   
>>   
>>   
>>   
>>     
>> --->    
>>     
>>   > heads='1' primary='yes'/>
>>   
>>   > function='0x0'/>
>>     
>>
> I just thought to check, the VM above does not have sound
> enabled, and is missing this line when running on 4.4.6. When it's 
> running on 4.4.7 it *does* have this line in the config.

This is the following bug: https://bugzilla.redhat.com/1977891

>>> Milan, did we test migration from 4.4.7 to 4.4.6?
>>>
>>> Nir
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OXBKQ5TZ4G64TU4XFZVYENA4BB4OLT6K/
>> -- Thanks!
>>
>> *Nathaniel Roach*
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CY4DBRAGCBZ5TXN72EDJ7QS6VE4GLUJQ/


[ovirt-users] Re: Unable to migrate VMs to or from oVirt node 4.4.7

2021-07-12 Thread Milan Zamazal
Nir Soffer  writes:

> On Fri, Jul 9, 2021 at 5:57 PM nroach44--- via Users  wrote:
>>
>> Hi All,
>
>>
>> After upgrading some of my hosts to 4.4.7, and after fixing the
>> policy issue, I'm no longer able to migrate VMs to or from 4.4.7
>> hosts. Starting them works fine regardless of the host version.
>>
>> HE 4.4.7.6-1.el8, Linux and Windows VMs.
>>
>> The log on the receiving end (4.4.7 in this case):
>> VDSM:
>> 2021-07-09 22:02:17,491+0800 INFO (libvirt/events) [vds] Channel
>> state for vm_id=5d11885a-37d3-4f68-a953-72d808f43cdd changed
>> from=UNKNOWN(-1) to=disconnected(2) (qemuguestagent:289)
>> 2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm]
>> (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') underlying process
>> disconnected (vm:1134)
>> 2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm]
>> (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Release VM resources
>> (vm:5313)
>> 2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm]
>> (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection
>> (guestagent:438)
>> 2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [virt.vm]
>> (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection
>> (guestagent:438)
>> 2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [vdsm.api] START
>> inappropriateDevices(thiefId='5d11885a-37d3-4f68-a953-72d808f43cdd')
>> from=internal, task_id=7abe370b-13bc-4c49-bf02-2e40db142250 (api:48)
>> 2021-07-09 22:02:55,544+0800 WARN (vm/5d11885a) [virt.vm]
>> (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Couldn't destroy
>> incoming VM: Domain not found: no domain with matching uuid
>> '5d11885a-37d3-4f68-a953-72d808f43cdd' (vm:4046)
>> 2021-07-09 22:02:55,544+0800 INFO (vm/5d11885a) [virt.vm]
>> (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Changed state to Down:
>> VM destroyed during the startup (code=10) (vm:1895)
>>
>> syslog shows:
>> Jul 09 22:35:01 HOSTNAME abrt-hook-ccpp[177862]: Process 177022
>> (qemu-kvm) of user 107 killed by SIGABRT - dumping core
>>
>> qemu:
>> qemu-kvm: ../util/yank.c:107: yank_unregister_instance: Assertion
>> `QLIST_EMPTY(&entry->yankfns)' failed.
>> 2021-07-09 14:02:54.521+: shutting down, reason=failed
>
> Looks like another qemu 6.0.0 regression. Please file ovirt bug for this.

As pointed out by Jean-Louis Dupond, we already have one:
https://bugzilla.redhat.com/1964326

> Note that on RHEL we are still using qemu 5.2.0. qemu 6.0.0 is expected
> in RHEL 8.5.
>
>> When migrating from 4.4.7 to 4.4.6, syslog shows:
>> Jul 09 22:36:36 HOSTNAME libvirtd[2775]: unsupported configuration: unknown 
>> audio type 'spice'
>
> Sharing vm xml can help to understand this issue.
>
> Milan, did we test migration from 4.4.7 to 4.4.6?

I don't know but they work for me with QEMU 5.2.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YO2XQEKYCH5LKPOKTM3B4SYILZHTQJKR/


[ovirt-users] Re: Set fixed VNC/Spice Password for VMs.

2021-07-08 Thread Milan Zamazal
Sandro Bonazzola  writes:

> Il giorno gio 8 lug 2021 alle ore 13:38 Sandro Bonazzola <
> sbona...@redhat.com> ha scritto:
>
>> +Milan Zamazal  , +Arik Hadas  , 
>> +Michal
>> Skrivanek  any hint?
>>
>
> I found https://metacpan.org/pod/Ovirt::Display but I think there should be
> an easier way within the engine to configure this.
>
>
>
>>
>> Il giorno mar 6 lug 2021 alle ore 14:01 Merlin Timm 
>> ha scritto:
>>
>>> Good day to all,
>>>
>>> I have a question about the console configuration of the VMs:
>>>
>>> By default, for each console connection to a VM, a password is set for
>>> 120 seconds, after that you can't use it again. We currently have the
>>> following concern:
>>>
>>> We want to access and control the VMs via the VNC/Spice of the Ovirt
>>> host. We have already tried to use the password from the console.vv for
>>> the connection and that works so far. Unfortunately we have to do this
>>> every 2 minutes when we want to connect again. We are currently building
>>> an automatic test pipeline and for this we need to access the VMs
>>> remotely before OS start and we want to be independent of a VNC server
>>> on the guest. This is only possible if we could connect to the VNC/Spice
>>> server from the Ovirt host.
>>>
>>> My question: would it be possible to fix the password or read it out via
>>> api every time you want to connect?

A one time password is set every time the console is opened, for those
120 seconds.  Unfortunately, the 120 seconds limit seems to be hardwired
in Engine sources.  So apparently the only chance would be to set the
password directly on the host using VM.updateDevice VDSM API call.  It
looks like this normally:

  VM.updateDevice(params={'deviceType': 'graphics', 'password': '', 
'disconnectAction': 'NONE', 'params': {'vncUsername': 
'vnc-630b9cae-a983-4ab0-a9ac-6b8728f8014d', 'fips': 'false', 'userName': 
'admin', 'userId': 'fd2c5e14-a8c3-11eb-951c-2a9574de53b6'}, 'ttl': 120, 
'graphicsType': 'spice'})

This way it's possible to set a password and its lifetime (`ttl'
parameter).  Of course, it's needed to find out the host the VM runs on,
a way to call the API (running vdsm-client directly on the host may be
the easiest way), how to make/use the *.vv ticket (you can use the same
password all the time) and to accept collisions with different settings
if someone opens the console from the web UI.
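
A minimal sketch of driving that call from a script, assuming `vdsm-client` is installed on the host the VM runs on; the VM id, password, one-hour ttl and the `-f -` stdin form are illustrative assumptions, and the payload just mirrors the updateDevice parameters shown in the log line above:

```python
import json
import subprocess

def update_device_args(vm_id, password, ttl=3600, graphics_type="vnc"):
    """Build VM.updateDevice arguments for (re)setting a console password,
    mirroring the parameter names from the call shown above."""
    return {
        "vmID": vm_id,
        "params": {
            "deviceType": "graphics",
            "graphicsType": graphics_type,
            "password": password,
            "ttl": ttl,  # password lifetime in seconds
            "disconnectAction": "NONE",
            "params": {},
        },
    }

def set_console_password(vm_id, password, ttl=3600):
    # Assumption: vdsm-client accepts a JSON argument file via -f, with
    # "-" meaning stdin; writing the JSON to a file works the same way.
    subprocess.run(
        ["vdsm-client", "-f", "-", "VM", "updateDevice"],
        input=json.dumps(update_device_args(vm_id, password, ttl)),
        text=True,
        check=True,
    )
```

A password set this way still collides with whatever the web UI sets when someone opens the console there, as noted above.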

In the end result, using the Perl library mentioned by Sandro above may
be an easier solution.

Or another option is to submit a patch to Engine to make the timeout
configurable (look for TICKET_VALIDITY_SECONDS in the sources).

Regards,
Milan

>>> I would appreciate a reply very much!
>>>
>>> Best regards
>>> Merlin Timm
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDPGLBQ4DWE64NATDDFDUB2TZLAHS6SV/
>>>
>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA <https://www.redhat.com/>
>>
>> sbona...@redhat.com
>> <https://www.redhat.com/>
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>> <https://mojo.redhat.com/docs/DOC-1199578>*
>>
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PYSQ6UWTTUZBZBE3TNOGVU674M3PC55S/


[ovirt-users] Re: High performance VM cannot migrate due to TSC frequency

2020-12-18 Thread Milan Zamazal
Gianluca Cecchi  writes:

> On Thu, Dec 17, 2020 at 5:30 PM Milan Zamazal  wrote:
>
>> Gianluca Cecchi  writes:
>>
>> > On Wed, Dec 16, 2020 at 8:59 PM Milan Zamazal 
>> wrote:
>> >
>> >>
>> >> If the checkbox is unchecked, the migration shouldn't be prevented.
>> >> I think the TSC frequency shouldn't be written to the VM domain XML in
>> >> such a case and then there should be no restrictions (and no guarantees)
>> >> on the frequency.
>> >>
>> >> Do you mean you can't migrate even with the checkbox unchecked?  If so,
>> >> what error message do you get in such a case?
>> >>
>> >
>> > Yes, exactly.
>> > I powered off the VM and then disabled the check and then powered on the
>> VM
>> > again, that is running on host ov301. ANd I have other two hosts: ov300
>> and
>> > ov200.
>> > From web admin gui if I select the VM and "migrate" button I cannot
>> select
>> > the destination host and inside the box there are the words "No available
>> > host to migrate VMs to" and going to engine.log, as soon as I click the
>> > "migrate" button I see these new lines:
>>
>> I see, I can reproduce it.  It looks like a bug in Engine.  While the VM
>> is correctly started without TSC frequency set, the migration filter in
>> Engine apparently still applies.
>>
>> I'll add a note about it to the TSC migration bug.
>>
>> Regards,
>> Milan
>>
>>
> Ok, thanks.
> In the meantime do I have any sort of workaround to be able to migrate the
> VM? Eg I could set the VM as non High Performance, or any better other
> option?

Non high performance VMs should migrate fine, but changing the VM kind
requires restart.  Once a high performance VM is running, I don't know
about any good way to avoid the TSC constraint.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7UBMWPGKTPBOXB26YZMFZLPOXUZ3GEYE/


[ovirt-users] Re: High performance VM cannot migrate due to TSC frequency

2020-12-17 Thread Milan Zamazal
Gianluca Cecchi  writes:

> On Wed, Dec 16, 2020 at 8:59 PM Milan Zamazal  wrote:
>
>>
>> If the checkbox is unchecked, the migration shouldn't be prevented.
>> I think the TSC frequency shouldn't be written to the VM domain XML in
>> such a case and then there should be no restrictions (and no guarantees)
>> on the frequency.
>>
>> Do you mean you can't migrate even with the checkbox unchecked?  If so,
>> what error message do you get in such a case?
>>
>
> Yes, exactly.
> I powered off the VM and then disabled the check and then powered on the VM
> again, that is running on host ov301. ANd I have other two hosts: ov300 and
> ov200.
> From web admin gui if I select the VM and "migrate" button I cannot select
> the destination host and inside the box there are the words "No available
> host to migrate VMs to" and going to engine.log, as soon as I click the
> "migrate" button I see these new lines:

I see, I can reproduce it.  It looks like a bug in Engine.  While the VM
is correctly started without TSC frequency set, the migration filter in
Engine apparently still applies.

I'll add a note about it to the TSC migration bug.

Regards,
Milan

> 2020-12-16 23:13:27,949+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-41)
> [308a29e2-2c4f-45fe-bdce-b032b36d4656] Candidate host 'ov300'
> ('07b979fb-4779-4477-89f2-6a96093c06f7') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
> 2020-12-16 23:13:27,949+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-41)
> [308a29e2-2c4f-45fe-bdce-b032b36d4656] Candidate host 'ov200'
> ('949d0087-2c24-4759-8427-f9eade1dd2cc') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
> 2020-12-16 23:13:28,032+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-38)
> [5837b695-c70d-4f45-a452-2c7c1b4ea69b] Candidate host 'ov300'
> ('07b979fb-4779-4477-89f2-6a96093c06f7') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
> 2020-12-16 23:13:28,032+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-38)
> [5837b695-c70d-4f45-a452-2c7c1b4ea69b] Candidate host 'ov200'
> ('949d0087-2c24-4759-8427-f9eade1dd2cc') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
>
> On all three nodes I have this kind of running kernel and package versions:
>
> [root@ov300 vdsm]# rpm -q qemu-kvm libvirt-daemon systemd
> qemu-kvm-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
> libvirt-daemon-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
> systemd-239-41.el8_3.x86_64
>
> and
> [root@ov300 vdsm]# uname -r
> 4.18.0-240.1.1.el8_3.x86_64
> [root@ov300 vdsm]#
>
> Gianluca
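
As an aside, the "filtered out" lines quoted above follow a fixed pattern, so a quick triage script can summarize which scheduling filter rejected which host; the regex below is tailored to these sample lines, not a general engine.log parser:

```python
import re

# Matches engine.log lines of the form:
#   Candidate host 'ovXXX' ('<uuid>') was filtered out by
#   'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency'
FILTERED_RE = re.compile(
    r"Candidate host '(?P<host>[^']+)' \('[^']+'\) was filtered out by "
    r"'[^']+' filter '(?P<filter>[^']+)'"
)

def rejected_hosts(log_text):
    """Return a mapping of host name -> filter name that rejected it."""
    return {m.group("host"): m.group("filter")
            for m in FILTERED_RE.finditer(log_text)}
```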
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WX4PA75LAOXIN6PKYDTZ5UZ4OMZICXEY/


[ovirt-users] Re: High performance VM cannot migrate due to TSC frequency

2020-12-16 Thread Milan Zamazal
Gianluca Cecchi  writes:

> On Fri, Dec 11, 2020 at 5:39 PM Milan Zamazal  wrote:
>
>>
>>
>> TSC frequency is the frequency with which Time Stamp Counter register is
>> updated, typically a nominal CPU frequency (see
>> https://en.wikipedia.org/wiki/Time_Stamp_Counter for more details).
>>
>> You can check the value oVirt gets from libvirt by running
>>
>>   # virsh -r capabilities
>>
>> and looking at the line like
>>
>>   <counter name='tsc' frequency='...'/>
>>
>> in the output.  Unless frequency scaling is available, the host
>> frequencies must be almost the same in order to be able to migrate high
>> performance VMs among them.
>>
>> Note there is a bug that may cause a migration failure for the VMs even
>> between hosts with the same frequencies
>> (https://bugzilla.redhat.com/1821199).  But this is apparently not your
>> case, since the migration is prevented already by Engine.
>>
>> Regards,
>> Milan
>>
>>
>>
> See here:
>
> [root@ov200 ~]# virsh -r capabilities | grep "name='tsc'"
>   
> [root@ov200 ~]#
>
> [root@ov300 ~]# virsh -r capabilities | grep "name='tsc'"
>   
> [root@ov300 ~]#
>
> [root@ov301 ~]# virsh -r capabilities | grep "name='tsc'"
>   
> [root@ov301 ~]#
>
> The three hosts have the same model cpu
> Model name:  Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
> and slightly different actual frequencies at a certain moment...

OK, so this is actually https://bugzilla.redhat.com/1821199.

> But what does it mean so the checkbox
>
> Migrate only to hosts with the same TSC frequency
>
> if even if I don't check it the migration is prevented?

If the checkbox is unchecked, the migration shouldn't be prevented.
I think the TSC frequency shouldn't be written to the VM domain XML in
such a case and then there should be no restrictions (and no guarantees)
on the frequency.

Do you mean you can't migrate even with the checkbox unchecked?  If so,
what error message do you get in such a case?

> BTW the command lscpu produces exactly the same output on the three hosts,
> apart "CPU MHz" and corresponding "BogoMIPS" that slightly change each time
> I run the command.

Yes, the TSC frequency is measured on each boot and may differ across
reboots on the same host.

> And the flags for all are:
>
> Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
> xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl
> vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes
> lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid
> dtherm ida arat flush_l1d
>
> Gianluca

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ALBUWFG7ZHRT7SOOCTJERX7ASRRWGOL6/


[ovirt-users] Re: High performance VM cannot migrate due to TSC frequency

2020-12-11 Thread Milan Zamazal
Gianluca Cecchi  writes:

> Hello,
> I'm in 4.4.3 and CentOS 8.3 with 3 hosts.
>
> I have a high performance VM that is running on ov300 and is configured to
> be run on any host.
>
> It seems that both if I set or not the option
>
> Migrate only to hosts with the same TSC frequency
>
> I always am unable to migrate the VM and inside engine.log I see this:
>
> 2020-12-11 15:56:03,424+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-36)
> [e4801b28-c832-4474-aa53-4ebfd7c6e2d0] Candidate host 'ov301'
> ('382bfc8f-60d5-4e06-8571-7dae1700574d') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
>
> 2020-12-11 15:56:03,424+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-36)
> [e4801b28-c832-4474-aa53-4ebfd7c6e2d0] Candidate host 'ov200'
> ('949d0087-2c24-4759-8427-f9eade1dd2cc') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
>
> Can you verify if it is only my problem?
>
> Apart from the problem itself, what is "TSC frequency" and how can I check
> if my 3 hosts are different or not indeed?

TSC frequency is the frequency with which Time Stamp Counter register is
updated, typically a nominal CPU frequency (see
https://en.wikipedia.org/wiki/Time_Stamp_Counter for more details).

You can check the value oVirt gets from libvirt by running

  # virsh -r capabilities

and looking at the line like

  <counter name='tsc' frequency='...'/>

in the output.  Unless frequency scaling is available, the host
frequencies must be almost the same in order to be able to migrate high
performance VMs among them.
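
A small sketch of comparing that value across hosts, assuming the XML fed in is captured from `virsh -r capabilities` on each host; the 250 ppm tolerance is an illustrative assumption, not an oVirt constant:

```python
import xml.etree.ElementTree as ET

def tsc_frequency(capabilities_xml):
    """Return the TSC frequency (Hz) advertised in libvirt capabilities
    XML, or None when no <counter name='tsc'> element is present."""
    root = ET.fromstring(capabilities_xml)
    for counter in root.iter("counter"):
        if counter.get("name") == "tsc":
            return int(counter.get("frequency"))
    return None

def tsc_compatible(freq_a, freq_b, tolerance=250e-6):
    """Treat two hosts as TSC-compatible when their frequencies differ
    by at most `tolerance` (relative); 250 ppm is an assumed threshold."""
    return abs(freq_a - freq_b) <= tolerance * max(freq_a, freq_b)
```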

Note there is a bug that may cause a migration failure for the VMs even
between hosts with the same frequencies
(https://bugzilla.redhat.com/1821199).  But this is apparently not your
case, since the migration is prevented already by Engine.

Regards,
Milan

> Normal VMs are able to migrate without problems
>
> Thanks,
> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2HYSCVHSVZS6KX5FF5MOXI6YTLQOJIK7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SXQRJZAYTBFBHZJQVMRUL625NE3NPCZ6/


[ovirt-users] Re: vm console problem

2020-04-20 Thread Milan Zamazal
David David  writes:

>  solved using this link https://bugzilla.redhat.com/show_bug.cgi?id=1672587

Great, good to know.

> чт, 2 апр. 2020 г. в 16:11, Milan Zamazal :
>
>> David David  writes:
>>
>> > can connect to a vm which has spice console protocol by remote-viewer but
>> > that not working with vnc protocol
>> > the remote-viewer can't validate the server certs, is this a bug on the
>> > remote-viewerside or in the hypervisor?
>> > this problem is generally known? will it be fixed?
>>
>> It works for me, so it's either a problem with your remote-viewer or an
>> unknown problem on the oVirt side.  I'd suggest paying attention to the
>> authentication method negotiation as pointed out earlier.  I'm not
>> expert in that area, so I can't help you with that but maybe someone
>> else can.
>>
>> Regards,
>> Milan
>>
>> > вс, 29 мар. 2020 г. в 12:52, David David :
>> >
>> >> there is no such problem with the ovirt-engine 4.2.5.2-1.el7
>> >> it appeared when upgrading to 4.3.*
>> >>
>> >> вс, 29 мар. 2020 г. в 12:46, David David :
>> >>
>> >>> tested on four different workstations with: fedora20, fedora31 and
>> >>> windows10(remote-manager last vers)
>> >>>
>> >>> вс, 29 мар. 2020 г. в 12:39, Strahil Nikolov :
>> >>>
>> >>>> On March 29, 2020 9:47:02 AM GMT+03:00, David David <
>> dd432...@gmail.com>
>> >>>> wrote:
>> >>>> >I did as you said:
>> >>>> >copied from engine /etc/ovirt-engine/ca.pem onto my desktop into
>> >>>> >/etc/pki/ca-trust/source/anchors and then run update-ca-trust
>> >>>> >it didn’t help, still the same errors
>> >>>> >
>> >>>> >
>> >>>> >пт, 27 мар. 2020 г. в 21:56, Strahil Nikolov > >:
>> >>>> >
>> >>>> >> On March 27, 2020 12:23:10 PM GMT+02:00, David David
>> >>>> >
>> >>>> >> wrote:
>> >>>> >> >here is debug from opening console.vv by remote-viewer
>> >>>> >> >
>> >>>> >> >2020-03-27 14:09 GMT+04:00, Milan Zamazal :
>> >>>> >> >> David David  writes:
>> >>>> >> >>
>> >>>> >> >>> yes i have
>> >>>> >> >>> console.vv attached
>> >>>> >> >>
>> >>>> >> >> It looks the same as mine.
>> >>>> >> >>
>> >>>> >> >> There is a difference in our logs, you have
>> >>>> >> >>
>> >>>> >> >>   Possible auth 19
>> >>>> >> >>
>> >>>> >> >> while I have
>> >>>> >> >>
>> >>>> >> >>   Possible auth 2
>> >>>> >> >>
>> >>>> >> >> So I still suspect a wrong authentication method is used, but I
>> >>>> >don't
>> >>>> >> >> have any idea why.
>> >>>> >> >>
>> >>>> >> >> Regards,
>> >>>> >> >> Milan
>> >>>> >> >>
>> >>>> >> >>> 2020-03-26 21:38 GMT+04:00, Milan Zamazal > >:
>> >>>> >> >>>> David David  writes:
>> >>>> >> >>>>
>> >>>> >> >>>>> copied from qemu server all certs except "cacrl" to my
>> >>>> >> >desktop-station
>> >>>> >> >>>>> into /etc/pki/
>> >>>> >> >>>>
>> >>>> >> >>>> This is not needed, the CA certificate is included in
>> console.vv
>> >>>> >> >and no
>> >>>> >> >>>> other certificate should be needed.
>> >>>> >> >>>>
>> >>>> >> >>>>> but remote-viewer is still didn't work
>> >>>> >> >>>>
>> >>>> >> >>>> The log looks like remote-viewer is attempting certificate
>> >>>> >> >>>> authentication rather than password authentication.  Do you
>> have
>

[ovirt-users] Re: vm console problem

2020-04-02 Thread Milan Zamazal
David David  writes:

> can connect to a vm which has spice console protocol by remote-viewer but
> that not working with vnc protocol
> the remote-viewer can't validate the server certs, is this a bug on the
> remote-viewerside or in the hypervisor?
> this problem is generally known? will it be fixed?

It works for me, so it's either a problem with your remote-viewer or an
unknown problem on the oVirt side.  I'd suggest paying attention to the
authentication method negotiation as pointed out earlier.  I'm not
expert in that area, so I can't help you with that but maybe someone
else can.

Regards,
Milan

> вс, 29 мар. 2020 г. в 12:52, David David :
>
>> there is no such problem with the ovirt-engine 4.2.5.2-1.el7
>> it appeared when upgrading to 4.3.*
>>
>> вс, 29 мар. 2020 г. в 12:46, David David :
>>
>>> tested on four different workstations with: fedora20, fedora31 and
>>> windows10(remote-manager last vers)
>>>
>>> вс, 29 мар. 2020 г. в 12:39, Strahil Nikolov :
>>>
>>>> On March 29, 2020 9:47:02 AM GMT+03:00, David David 
>>>> wrote:
>>>> >I did as you said:
>>>> >copied from engine /etc/ovirt-engine/ca.pem onto my desktop into
>>>> >/etc/pki/ca-trust/source/anchors and then run update-ca-trust
>>>> >it didn’t help, still the same errors
>>>> >
>>>> >
>>>> >пт, 27 мар. 2020 г. в 21:56, Strahil Nikolov :
>>>> >
>>>> >> On March 27, 2020 12:23:10 PM GMT+02:00, David David
>>>> >
>>>> >> wrote:
>>>> >> >here is debug from opening console.vv by remote-viewer
>>>> >> >
>>>> >> >2020-03-27 14:09 GMT+04:00, Milan Zamazal :
>>>> >> >> David David  writes:
>>>> >> >>
>>>> >> >>> yes i have
>>>> >> >>> console.vv attached
>>>> >> >>
>>>> >> >> It looks the same as mine.
>>>> >> >>
>>>> >> >> There is a difference in our logs, you have
>>>> >> >>
>>>> >> >>   Possible auth 19
>>>> >> >>
>>>> >> >> while I have
>>>> >> >>
>>>> >> >>   Possible auth 2
>>>> >> >>
>>>> >> >> So I still suspect a wrong authentication method is used, but I
>>>> >don't
>>>> >> >> have any idea why.
>>>> >> >>
>>>> >> >> Regards,
>>>> >> >> Milan
>>>> >> >>
>>>> >> >>> 2020-03-26 21:38 GMT+04:00, Milan Zamazal :
>>>> >> >>>> David David  writes:
>>>> >> >>>>
>>>> >> >>>>> copied from qemu server all certs except "cacrl" to my
>>>> >> >desktop-station
>>>> >> >>>>> into /etc/pki/
>>>> >> >>>>
>>>> >> >>>> This is not needed, the CA certificate is included in console.vv
>>>> >> >and no
>>>> >> >>>> other certificate should be needed.
>>>> >> >>>>
>>>> >> >>>>> but remote-viewer is still didn't work
>>>> >> >>>>
>>>> >> >>>> The log looks like remote-viewer is attempting certificate
>>>> >> >>>> authentication rather than password authentication.  Do you have
>>>> >> >>>> password in console.vv?  It should look like:
>>>> >> >>>>
>>>> >> >>>>   [virt-viewer]
>>>> >> >>>>   type=vnc
>>>> >> >>>>   host=192.168.122.2
>>>> >> >>>>   port=5900
>>>> >> >>>>   password=fxLazJu6BUmL
>>>> >> >>>>   # Password is valid for 120 seconds.
>>>> >> >>>>   ...
>>>> >> >>>>
>>>> >> >>>> Regards,
>>>> >> >>>> Milan
>>>> >> >>>>
>>>> >> >>>>> 2020-03-26 2:22 GMT+04:00, Nir Soffer :
>>>> >> >>>>>> On Wed, Mar 25, 2020 at 12:45 PM David David
>>>> >
>>>> >> >>>>

[ovirt-users] Re: vm console problem

2020-03-27 Thread Milan Zamazal
David David  writes:

> yes i have
> console.vv attached

It looks the same as mine.

There is a difference in our logs, you have

  Possible auth 19

while I have

  Possible auth 2

So I still suspect a wrong authentication method is used, but I don't
have any idea why.
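
For reference when reading these logs: the "Possible auth N" numbers gtk-vnc prints are RFB security type codes; a tiny lookup sketch, with the numbers taken from the RFB security type registry rather than from the log itself:

```python
# RFB security types as negotiated during the VNC handshake; type 2 is
# plain password authentication, type 19 is VeNCrypt (TLS/x509), which
# is where certificate validation comes into play.
RFB_SECURITY_TYPES = {
    1: "None",
    2: "VNC Authentication (password)",
    16: "Tight",
    18: "TLS",
    19: "VeNCrypt (TLS/x509 sub-types)",
}

def describe_auth(type_number):
    return RFB_SECURITY_TYPES.get(type_number, f"unknown ({type_number})")
```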

Regards,
Milan

> 2020-03-26 21:38 GMT+04:00, Milan Zamazal :
>> David David  writes:
>>
>>> copied from qemu server all certs except "cacrl" to my desktop-station
>>> into /etc/pki/
>>
>> This is not needed, the CA certificate is included in console.vv and no
>> other certificate should be needed.
>>
>>> but remote-viewer is still didn't work
>>
>> The log looks like remote-viewer is attempting certificate
>> authentication rather than password authentication.  Do you have
>> password in console.vv?  It should look like:
>>
>>   [virt-viewer]
>>   type=vnc
>>   host=192.168.122.2
>>   port=5900
>>   password=fxLazJu6BUmL
>>   # Password is valid for 120 seconds.
>>   ...
>>
>> Regards,
>> Milan
>>
>>> 2020-03-26 2:22 GMT+04:00, Nir Soffer :
>>>> On Wed, Mar 25, 2020 at 12:45 PM David David  wrote:
>>>>>
>>>>> ovirt 4.3.8.2-1.el7
>>>>> gtk-vnc2-1.0.0-1.fc31.x86_64
>>>>> remote-viewer version 8.0-3.fc31
>>>>>
>>>>> can't open vm console by remote-viewer
>>>>> vm has vnc console protocol
>>>>> when click on console button to connect to a vm, the remote-viewer
>>>>> console disappear immediately
>>>>>
>>>>> remote-viewer debug in attachment
>>>>
>>>> You an issue with the certificates:
>>>>
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
>>>> ../src/vncconnection.c Set credential 2 libvirt
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Searching for certs in /etc/pki
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Searching for certs in /root/.pki
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Failed to find certificate CA/cacert.pem
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c No CA certificate provided, using GNUTLS global
>>>> trust
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Failed to find certificate
>>>> libvirt/private/clientkey.pem
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Failed to find certificate
>>>> libvirt/clientcert.pem
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Waiting for missing credentials
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c Got all credentials
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>>>> ../src/vncconnection.c No CA certificate provided; trying the system
>>>> trust store instead
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>>>> ../src/vncconnection.c Using the system trust store and CRL
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>>>> ../src/vncconnection.c No client cert or key provided
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>>>> ../src/vncconnection.c No CA revocation list provided
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
>>>> ../src/vncconnection.c Handshake was blocking
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
>>>> ../src/vncconnection.c Handshake was blocking
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
>>>> ../src/vncconnection.c Handshake was blocking
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>>>> ../src/vncconnection.c Handshake done
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>>>> ../src/vncconnection.c Validating
>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
>>>> ../src/vncconnection.c Error: The certificate is not trusted
>>>>
>>>> Adding people that may know more about this.
>>>>
>>>> Nir
>>>>
>>>>
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IPX2PHLII54CFWKEH7RTN3GPP7VQ2QVZ/


[ovirt-users] Re: vm console problem

2020-03-26 Thread Milan Zamazal
David David  writes:

> copied from qemu server all certs except "cacrl" to my desktop-station
> into /etc/pki/

This is not needed; the CA certificate is included in console.vv and no
other certificate should be required.

> but remote-viewer still didn't work

The log looks like remote-viewer is attempting certificate
authentication rather than password authentication.  Do you have
password in console.vv?  It should look like:

  [virt-viewer]
  type=vnc
  host=192.168.122.2
  port=5900
  password=fxLazJu6BUmL
  # Password is valid for 120 seconds.
  ...

Regards,
Milan

> 2020-03-26 2:22 GMT+04:00, Nir Soffer :
>> On Wed, Mar 25, 2020 at 12:45 PM David David  wrote:
>>>
>>> ovirt 4.3.8.2-1.el7
>>> gtk-vnc2-1.0.0-1.fc31.x86_64
>>> remote-viewer version 8.0-3.fc31
>>>
>>> can't open the VM console with remote-viewer
>>> the VM uses the VNC console protocol
>>> when clicking the console button to connect to a VM, the remote-viewer
>>> console disappears immediately
>>>
>>> remote-viewer debug in attachment
>>
>> You have an issue with the certificates:
>>
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
>> ../src/vncconnection.c Set credential 2 libvirt
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Searching for certs in /etc/pki
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Searching for certs in /root/.pki
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate CA/cacert.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c No CA certificate provided, using GNUTLS global
>> trust
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate
>> libvirt/private/clientkey.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Failed to find certificate
>> libvirt/clientcert.pem
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Waiting for missing credentials
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c Got all credentials
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> ../src/vncconnection.c No CA certificate provided; trying the system
>> trust store instead
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> ../src/vncconnection.c Using the system trust store and CRL
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> ../src/vncconnection.c No client cert or key provided
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> ../src/vncconnection.c No CA revocation list provided
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
>> ../src/vncconnection.c Handshake was blocking
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
>> ../src/vncconnection.c Handshake was blocking
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
>> ../src/vncconnection.c Handshake was blocking
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> ../src/vncconnection.c Handshake done
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> ../src/vncconnection.c Validating
>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
>> ../src/vncconnection.c Error: The certificate is not trusted
>>
>> Adding people that may know more about this.
>>
>> Nir
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4DV6YDFGORYDO64KLD3T6NF4F52QAEN/


[ovirt-users] Re: VM migrations stalling over migration-only network

2020-01-20 Thread Milan Zamazal
Ben  writes:

> Hi Milan,
>
> Thanks for your reply. I checked the firewall, and saw that both the bond0
> interface and the VLAN interface bond0.20 had been added to the default
> zone, which I believe should provide the necessary firewall access (output
> below)
>
> I double-checked the destination host's VDSM logs and wasn't able to find
> any warning or error-level logs during the migration timeframe.
>
> I checked the migration_port_* and *_port settings in qemu.conf and
> libvirtd.conf and all lines are commented. I have not modified either file.

The commented-out settings show the default ports used for migrations,
so those defaults apply even when the lines are commented out.  I can
see you have libvirt-tls open below; I'm not sure about the QEMU ports.
If migration works when not using a separate migration network, then it
should work with the same rules for the migration network, so I think
your settings are OK.
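For reference, these are the defaults that apply while the lines stay commented out (values taken from stock libvirt configuration files; double-check against the files on your hosts):

```
# /etc/libvirt/qemu.conf -- migration port range
#migration_port_min = 49152
#migration_port_max = 49215

# /etc/libvirt/libvirtd.conf -- remote transport ports
#tls_port = "16514"
#tcp_port = "16509"
```

So for a separate migration network, TCP 49152-49215 (QEMU migration) and 16514 (libvirt TLS) need to be reachable on the destination host.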

The fact that you don't get any better explanation than "unexpectedly
failed" and that it fails before transferring any data indicates a
possible networking error; I can't help with that, but someone with
networking knowledge should be able to.

You can also try to enable libvirt debugging on both sides in
/etc/libvirt/libvirtd.conf and restart libvirt (beware, those logs are
huge).  The libvirt logs should then report some error.
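A filter set along these lines is commonly suggested in libvirt's own debug-logging documentation (adjust the output path to taste, and restart libvirtd on both hosts afterwards):

```
# /etc/libvirt/libvirtd.conf -- verbose debug logging
log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```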

> [root@vhost2 vdsm]# firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: bond0 bond0.20 em1 em2 migration ovirtmgmt p1p1
>   sources:
>   services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-vmconsole
> snmp ssh vdsm
>   ports: 1311/tcp 22/tcp 6081/udp 5666/tcp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
>
> On Mon, Jan 20, 2020 at 6:29 AM Milan Zamazal  wrote:
>
>> Ben  writes:
>>
>> > Hi, I'm pretty stuck at the moment so I hope someone can help me.
>> >
>> > I have an oVirt 4.3 data center with two hosts. Recently, I attempted to
>> > segregate migration traffic from the standard ovirtmgmt network,
>> where
>> > the VM traffic and all other traffic resides.
>> >
>> > I set up the VLAN on my router and switch, and created LACP bonds on both
>> > hosts, tagging them with the VLAN ID. I confirmed the routes work fine,
>> and
>> > traffic speeds are as expected. MTU is set to 9000.
>> >
>> > After configuring the migration network in the cluster and dragging and
>> > dropping it onto the bonds on each host, VMs fail to migrate.
>> >
>> > oVirt is not reporting any issues with the network interfaces or sync
>> with
>> > the hosts. However, when I attempt to live-migrate a VM, progress gets to
>> > 1% and stalls. The transfer rate is 0Mbps, and the operation eventually
>> > fails.
>> >
>> > I have not been able to identify anything useful in the VDSM logs on the
>> > source or destination hosts, or in the engine logs. It repeats the below
>> > WARNING and INFO logs for the duration of the process, then logs the last
>> > entries when it fails. I can provide more logs if it would help. I'm not
>> > even sure where to start -- since I am a novice at networking, at best,
>> my
>> > suspicion the entire time was that something is misconfigured in my
>> > network. However, the routes are good, speed tests are fine, and I can't
>> > find anything else wrong with the connections. It's not impacting any
>> other
>> > traffic over the bond interfaces.
>> >
>> > Are there other requirements that must be met for VMs to migrate over a
>> > separate interface/network?
>>
>> Hi, did you check your firewall settings?  Are the required ports open?
>> See migration_port_* options in /etc/libvirt/qemu.conf and *_port
>> options in /etc/libvirt/libvirtd.conf.
>>
>> Is there any error reported in the destination vdsm.log?
>>
>> Regards,
>> Milan
>>
>> > 2020-01-12 03:18:28,245-0500 WARN  (migmon/a24fd7e3) [virt.vm]
>> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling:
>> remaining
>> > (4191MiB) > lowmark (4191MiB). (migration:854)
>> > 2020-01-12 03:18:28,245-0500 INFO  (migmon/a24fd7e3) [virt.vm]
>> > (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration Progress: 930.341
>> > seconds elapsed, 1% of data processed, total data: 4192MB, processed
>> data:
>> > 0MB, remaining data: 4191MB, transfer speed 0MBps, zero pages: 149MB,
>> > compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:881)
>> > 2020-01-

[ovirt-users] Re: VM migrations stalling over migration-only network

2020-01-20 Thread Milan Zamazal
Ben  writes:

> Hi, I'm pretty stuck at the moment so I hope someone can help me.
>
> I have an oVirt 4.3 data center with two hosts. Recently, I attempted to
> segregate migration traffic from the standard ovirtmgmt network, where
> the VM traffic and all other traffic resides.
>
> I set up the VLAN on my router and switch, and created LACP bonds on both
> hosts, tagging them with the VLAN ID. I confirmed the routes work fine, and
> traffic speeds are as expected. MTU is set to 9000.
>
> After configuring the migration network in the cluster and dragging and
> dropping it onto the bonds on each host, VMs fail to migrate.
>
> oVirt is not reporting any issues with the network interfaces or sync with
> the hosts. However, when I attempt to live-migrate a VM, progress gets to
> 1% and stalls. The transfer rate is 0Mbps, and the operation eventually
> fails.
>
> I have not been able to identify anything useful in the VDSM logs on the
> source or destination hosts, or in the engine logs. It repeats the below
> WARNING and INFO logs for the duration of the process, then logs the last
> entries when it fails. I can provide more logs if it would help. I'm not
> even sure where to start -- since I am a novice at networking, at best, my
> suspicion the entire time was that something is misconfigured in my
> network. However, the routes are good, speed tests are fine, and I can't
> find anything else wrong with the connections. It's not impacting any other
> traffic over the bond interfaces.
>
> Are there other requirements that must be met for VMs to migrate over a
> separate interface/network?

Hi, did you check your firewall settings?  Are the required ports open?
See migration_port_* options in /etc/libvirt/qemu.conf and *_port
options in /etc/libvirt/libvirtd.conf.

Is there any error reported in the destination vdsm.log?

Regards,
Milan

> 2020-01-12 03:18:28,245-0500 WARN  (migmon/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling: remaining
> (4191MiB) > lowmark (4191MiB). (migration:854)
> 2020-01-12 03:18:28,245-0500 INFO  (migmon/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration Progress: 930.341
> seconds elapsed, 1% of data processed, total data: 4192MB, processed data:
> 0MB, remaining data: 4191MB, transfer speed 0MBps, zero pages: 149MB,
> compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:881)
> 2020-01-12 03:18:31,386-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') operation failed: migration
> out job: unexpectedly failed (migration:282)
> 2020-01-12 03:18:32,695-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
> (vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Failed to migrate
> (migration:450)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 431,
> in _regular_run
> time.time(), migrationParams, machineParams
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 505,
> in _startUnderlyingMigration
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 591,
> in _perform_with_conv_schedule
> self._perform_migration(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 525,
> in _perform_migration
> self._migration_flags)
> libvirtError: operation failed: migration out job: unexpectedly failed
> 2020-01-12 03:18:40,880-0500 INFO  (jsonrpc/6) [api.virt] FINISH
> getMigrationStatus return={'status': {'message': 'Done', 'code': 0},
> 'migrationStats': {'status': {'message': 'Fatal error during migration',
> 'code': 12}, 'progress': 1L}} from=:::10.0.0.20,41462,
> vmId=a24fd7e3-161c-451e-8880-b3e7e1f7d86f (api:54)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PB3TQTFXWKAMNQBNH2OMH5J7R44TMZQF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WLLBLT632VYHKONHKL2W7V6VIKAPTLQF/


[ovirt-users] Re: Still having NFS issues. (Permissions)

2019-12-12 Thread Milan Zamazal
Strahil  writes:

> Why do you use  'all_squash' ?
>
> all_squashMap all uids and gids to the anonymous user. Useful for
> NFS-exported public FTP directories, news spool directories, etc. The
> opposite option is no_all_squash, which is the default setting.

AFAIK all_squash,anonuid=36,anongid=36 is the recommended NFS setting
for oVirt and the only one guaranteed to work.

Regards,
Milan

> Best Regards,
> Strahil Nikolov
>
> On Dec 10, 2019 07:46, Tony Brian Albers  wrote:
>>
>> On Mon, 2019-12-09 at 18:43 +, Robert Webb wrote: 
>> > To add, the 757 permission does not need to be on the .lease or the 
>> > .meta files. 
>> > 
>> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/KZF6RCSRW2QV3PUEJCJW5DZ54DLAOGAA/
>> >  
>>
>> Good morning, 
>>
>> Check SELinux just in case. 
>>
>> Here's my config: 
>>
>> NFS server: 
>> /etc/exports: 
>> /data/ovirt 
>> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36) 
>>
>> Folder: 
>> [root@kst001 ~]# ls -ld /data/ovirt 
>> drwxr-xr-x 3 vdsm kvm 76 Jun  1  2017 /data/ovirt 
>>
>> Subfolders: 
>> [root@kst001 ~]# ls -l /data/ovirt/* 
>> -rwxr-xr-x 1 vdsm kvm  0 Dec 10 06:38 /data/ovirt/__DIRECT_IO_TEST__ 
>>
>> /data/ovirt/a597d0aa-bf22-47a3-a8a3-e5cecf3e20e0: 
>> total 4 
>> drwxr-xr-x  2 vdsm kvm  117 Jun  1  2017 dom_md 
>> drwxr-xr-x 56 vdsm kvm 4096 Dec  2 14:51 images 
>> drwxr-xr-x  4 vdsm kvm   42 Jun  1  2017 master 
>> [root@kst001 ~]# 
>>
>>
>> The user: 
>> [root@kst001 ~]# id vdsm 
>> uid=36(vdsm) gid=36(kvm) groups=36(kvm) 
>> [root@kst001 ~]# 
>>
>> And output from 'mount' on a host: 
>> kst001:/data/ovirt on /rhev/data-center/mnt/kst001:_data_ovirt type nfs 
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock, 
>> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=<nfs-server-ip>,
>> mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=<nfs-server-ip>)
>>
>>
>> HTH 
>>
>> /tony 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/T6S32XNRB6S67PH5TOZZ6ZAD6KMVA3G6/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5XPTK5B4KTITNDRFKR3C7TQYUXQTC4A/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSSPIUYPPGSAS5TUV3GUWMWNIGGIB2NF/


[ovirt-users] Re: Certificate of host is invalid

2019-11-27 Thread Milan Zamazal
Strahil  writes:

> Hi ,
>
> You can try with:
> 1. Set the host in maintenance
> 2. From Install dropdown , select 'reinstall' and then configure the
> necessary info + whether you would like to use the host as Host for
> the HostedEngine VM.

Rather than a full reinstall, the Enroll Certificate action (just next
to Reinstall in the menu) should be faster and sufficient.  You still
need to set the host to maintenance first.

Regards,
Milan

> Once the reinstall (of the oVirt software) is OK, the node will be activated
> automatically.
>
> Best Regards,
> Strahil Nikolov
>
> On Nov 27, 2019 18:01, Jon bae  wrote:
>>
>> Hello everybody,
>> since last update to 4.3.7 I get this error message:
>>
>> Certificate of host host.name is invalid. The certificate doesn't
>> contain valid subject alternative name, please enroll new
>> certificate for the host.
>>
>> Have you an idea of how I can fix that?
>>
>> Regards
>> Jonathan
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PAOHX6VSO6VWUXAQICH2Q5UUTZY33HPX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JHXPHMMTOSC35XQBNN4DVDSOFGRQSSM5/


[ovirt-users] Re: Fwd: Nested oVirt with Ryzen

2019-10-03 Thread Milan Zamazal
"JoseMa(G-Mail)"  writes:

> Hi folks,
> When trying to start a VM in a nested env with Ryzen it complains with:
>
> 2019-09-28 14:29:53,940-0400 ERROR (vm/0391a661) [virt.vm]
> (vmId='0391a661-20fd-490a-9653-dd217147224d') The vm start process failed
> (vm:933)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 867, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2880, in
> _run
> dom.createWithFlags(flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94,
> in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in
> createWithFlags
> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
> dom=self)
>
>
> libvirtError: the CPU is incompatible with host CPU: Host CPU does not
> provide required features: monitor
>
>
> The hooks for nestedv are installed.  Is there any way to modify the xml
> passed to the host and used by libvirt, and remove the monitor flag? Like this
>
> 

Hi, I think the 'cpuflags' hook can be used for this purpose; see the
documentation in its before_vm_start.py file for how to use it.  In
case it's not enough for you, you can write your own (much simpler)
before_vm_start hook to perform the transformation.
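The core of such a custom hook is a small XML transformation; here is a minimal standalone sketch (the function name and file layout are illustrative, not part of Vdsm -- in a real hook the document would come from hooking.read_domxml() and go back via hooking.write_domxml()):

```python
from xml.dom import minidom

def drop_cpu_feature(domxml_text, feature_name):
    """Remove <feature name='...'/> elements under <cpu> from a libvirt
    domain XML string, e.g. to drop the 'monitor' CPU feature on a
    nested Ryzen host.  Takes and returns plain text so it can be tried
    standalone."""
    dom = minidom.parseString(domxml_text)
    for cpu in dom.getElementsByTagName('cpu'):
        # Copy the NodeList into a list so removal is safe while iterating.
        for feature in list(cpu.getElementsByTagName('feature')):
            if feature.getAttribute('name') == feature_name:
                feature.parentNode.removeChild(feature)
    return dom.toxml()
```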

HTH,
Milan

> Lab is installed using Centos latest and oVirt latest as today!. By the way
> a nested intel cpu box works with no problem.
>
>
> THANKS!!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DPJ66NL5QYYFCIREI6JKIEWQMDXZG6L4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TDQEDULJ2HEGAFKSFKOMXQO2DH2EVKTU/


[ovirt-users] Re: VDSM Hooks during migration

2019-08-27 Thread Milan Zamazal
Milan Zamazal  writes:

> "Vrgotic, Marko"  writes:
>
>> What I am aiming for is the following:
>> We have a custom hook which deletes DNS records from the DNS server when a
>> VM is “destroyed”.
>> That is just as we wanted it, except in the case of migration, which is
>> also a “destructive” action from the perspective of a hypervisor.
>> I was testing an order of Hooks triggered when I issue VM Migrate, in
>> order to discover which Hook I can use to trigger update of the
>> records for a VM that is Migrated.
>>
>> Seems that “after_vm_destroy” is the last in order hook to be executed
>> when VM is migrated, and I wanted to verify that.
>
> Hi Marko, I see, now I understand what your problem is.  after_vm_destroy
> is called on the source while after_vm_migrate_destination is called on
> the destination, and I don't think there is any guarantee in which order
> they run relative to each other.
>
>> How come that there is no hook which enables VM start or continue on a
>> destination hypervisor, after VM is migrated? Or am I missing
>> something?
>
> after_vm_migrate_destination is called on the destination, but see
> above.  A possible solution could be to look in the domain XML passed to
> after_vm_destroy, there should be an exit reason in the metadata
> section.  If the reason is migration, then you can skip your delete
> action.

Hm, it seems there is no exit info after migration.  Another idea is to
put something into the after_vm_migrate_source hook that would prevent
the record deletion.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZN7VICHDVFMDPDOG4HKF3SVP2PAA4MDZ/


[ovirt-users] Re: VDSM Hooks during migration

2019-08-27 Thread Milan Zamazal
"Vrgotic, Marko"  writes:

> What I am aiming for is the following:
> We have a custom hook which deletes DNS records from the DNS server when a
> VM is “destroyed”.
> That is just as we wanted it, except in the case of migration, which is
> also a “destructive” action from the perspective of a hypervisor.
> I was testing an order of Hooks triggered when I issue VM Migrate, in
> order to discover which Hook I can use to trigger update of the
> records for a VM that is Migrated.
>
> Seems that “after_vm_destroy” is the last in order hook to be executed
> when VM is migrated, and I wanted to verify that.

Hi Marko, I see, now I understand what your problem is.  after_vm_destroy
is called on the source while after_vm_migrate_destination is called on
the destination, and I don't think there is any guarantee in which order
they run relative to each other.

> How come that there is no hook which enables VM start or continue on a
> destination hypervisor, after VM is migrated? Or am I missing
> something?

after_vm_migrate_destination is called on the destination, but see
above.  A possible solution could be to look in the domain XML passed to
after_vm_destroy, there should be an exit reason in the metadata
section.  If the reason is migration, then you can skip your delete
action.

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LXYRA7X36224LHUPIPSUMGCDN5OFNZFC/


[ovirt-users] Re: VDSM Hooks during migration

2019-08-26 Thread Milan Zamazal
"Vrgotic, Marko"  writes:

> Would you be so kind to help me/tell me or point me how to find which
> Hooks, and in which order, are triggered when VM is being migrated?

See "VDSM and Hooks" appendix of oVirt Admin Guide:

https://ovirt.org/documentation/admin-guide/appe-VDSM_and_Hooks.html

Regards,
Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SNUMEJZ5AFBFCRK5DA5GZ4CU4XMQECVG/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Milan Zamazal
Alex McWhirter  writes:

> In this case, i should be able to edit /etc/libvirtd/qemu.conf on all
> the nodes to disable dynamic ownership as a temporary measure until
> this is patched for libgfapi?

No, other devices might have permission problems in such a case.

> On 2019-06-13 10:37, Milan Zamazal wrote:
>> Shani Leviim  writes:
>>
>>> Hi,
>>> It seems that you hit this bug:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>>>
>>> Adding +Milan Zamazal , Can you please confirm?
>>
>> There may still be problems when using GlusterFS with libgfapi:
>> https://bugzilla.redhat.com/1719789.
>>
>> What's your Vdsm version and which kind of storage do you use?
>>
>>> *Regards,*
>>>
>>> *Shani Leviim*
>>>
>>>
>>> On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter 
>>> wrote:
>>>
>>>> after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
>>>> images become owned by root:root. Live migration succeeds and
>>>> the vm
>>>> stays up, but after shutting down the VM from this point, starting
>>>> it up
>>>> again will cause it to fail. At this point i have to go in and change
>>>> the permissions back to vdsm:kvm on the images, and the VM will boot
>>>> again.
>>>> ___
>>>> Users mailing list -- users@ovirt.org
>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
>>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/36Z6BB5NGYEEFMPRTDYKFJVVBPZFUCBL/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Milan Zamazal
Shani Leviim  writes:

> Hi,
> It seems that you hit this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>
> Adding +Milan Zamazal , Can you please confirm?

There may still be problems when using GlusterFS with libgfapi:
https://bugzilla.redhat.com/1719789.

What's your Vdsm version and which kind of storage do you use?

> *Regards,*
>
> *Shani Leviim*
>
>
> On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter  wrote:
>
>> after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
>> images become owned by root:root. Live migration succeeds and the VM
>> stays up, but after shutting down the VM from this point, starting it up
>> again will cause it to fail. At this point i have to go in and change
>> the permissions back to vdsm:kvm on the images, and the VM will boot
>> again.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZP5ACKQOU3J5CFCDFYJSSEAHHJ5Q23MB/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Milan Zamazal
Nardus Geldenhuys  writes:

> attached is the engine.log

> Can't find any logs containing the VM name on the host it was supposed
> to start. Seems that it does not even get to the host and that it
> fails in the ovirt engine

Thank you for the info.  The problem looks completely unrelated to the
cited bug.

The VM fails to start already in Engine due to a NullPointerException
when putting network interfaces into the VM domain XML.  So it's
probably unrelated to storage as well.  Something probably broke during
the upgrade regarding network interfaces attached to the VM.

Is there anything special about your network interfaces or is there
anything suspicious about them in Engine when the VM fails to start?

> On Wed, 10 Apr 2019 at 10:39, Milan Zamazal  wrote:
>
>> nard...@gmail.com writes:
>>
>> > Wonder if this issue is related to our problem and if there is a way
>> > around it. We upgraded from 4.2.8 to 4.3.2. Now some of
>> > the VMs fail to start. You need to detach the disks, create a new VM,
>> > reattach the disks to the new VM, and then the new VM starts.
>>
>> Hi, were those VMs previously migrated from a 4.2.8 to a 4.3.2 host or
>> to a 4.3.[01] host (which have the given bug)?
>>
>> Would it be possible to provide Vdsm logs from some of the failed and
>> successful (with the new VM) starts with the same storage and also from
>> the destination host of the preceding migration of the VM to a 4.3 host
>> (if the VM was migrated)?
>>
>> Thanks,
>> Milan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YS7IUTJKLYEG3EHFGT64MMHNL7R5AI6L/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Milan Zamazal
nard...@gmail.com writes:

> Wonder if this issue is related to our problem and if there is a way
> around it. We upgraded from 4.2.8 to 4.3.2. Now some of the VMs fail to
> start. You need to detach the disks, create a new VM, reattach the disks
> to the new VM, and then the new VM starts.

Hi, were those VMs previously migrated from a 4.2.8 to a 4.3.2 host or
to a 4.3.[01] host (which have the given bug)?

Would it be possible to provide Vdsm logs from some of the failed and
successful (with the new VM) starts with the same storage and also from
the destination host of the preceding migration of the VM to a 4.3 host
(if the VM was migrated)?

Thanks,
Milan


[ovirt-users] Re: How to set hot plugged memory online_movable

2019-02-18 Thread Milan Zamazal
zoda...@gmail.com writes:

> Hi, 
>
> I'd like to try the memory hot unplug, refer to:
> https://www.ovirt.org/documentation/vmm-guide/chap-Editing_Virtual_Machines.html:
>
> "All blocks of the hot-plugged memory must be set to
> **online_movable** in the virtual machine’s device management
> rules. In virtual machines running up-to-date versions of Enterprise
> Linux or CoreOS, this rule is set by default."
>
> I created a VM running CentOS7.6:
> # more /etc/redhat-release
> CentOS Linux release 7.6.1810 (Core)
> # more /usr/lib/udev/rules.d/40-redhat.rules
> ...
> # Memory hotadd request
> SUBSYSTEM!="memory", ACTION!="add", GOTO="memory_hotplug_end"
> PROGRAM="/bin/uname -p", RESULT=="s390*", GOTO="memory_hotplug_end"
>
> ENV{.state}="online"
> PROGRAM="/bin/systemd-detect-virt", RESULT=="none", ENV{.state}="online_movable"
> ATTR{state}=="offline", ATTR{state}="$env{.state}"
>
> LABEL="memory_hotplug_end"
>
> It looks like online_movable will be set only when systemd-detect-virt
> returns none, i.e. when hot-plugging memory on a bare-metal machine, so
> how can I make the hot-plugged memory "online_movable" in virtual
> machines? Thank you.

Hi, you can copy 40-redhat.rules to /etc/udev/rules.d/ and edit it to
always set the memory state to online_movable.  It will override the
default file content after a reboot (or a udev rules reload).
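For illustration, the edit could look like this.  This is only a sketch: the rule content below is taken from the quote above, and the stock file on your guest may differ, so verify the sed patterns against your actual file first.

```shell
# Recreate the stock rule as quoted above (on a real guest you would
# instead start from a copy of /usr/lib/udev/rules.d/40-redhat.rules):
cat > 40-redhat.rules <<'EOF'
# Memory hotadd request
SUBSYSTEM!="memory", ACTION!="add", GOTO="memory_hotplug_end"
PROGRAM="/bin/uname -p", RESULT=="s390*", GOTO="memory_hotplug_end"

ENV{.state}="online"
PROGRAM="/bin/systemd-detect-virt", RESULT=="none", ENV{.state}="online_movable"
ATTR{state}=="offline", ATTR{state}="$env{.state}"

LABEL="memory_hotplug_end"
EOF
# Make online_movable the unconditional default and drop the
# bare-metal-only override:
sed -i -e 's/^ENV{.state}="online"$/ENV{.state}="online_movable"/' \
       -e '/systemd-detect-virt/d' 40-redhat.rules
grep 'ENV{.state}=' 40-redhat.rules
# prints: ENV{.state}="online_movable"
```

On the guest, the edited copy would go to /etc/udev/rules.d/40-redhat.rules (which overrides the copy under /usr/lib), followed by `udevadm control --reload` or a reboot.  Whether hot-plugged blocks came up movable can then be checked under /sys/devices/system/memory/memory*/state.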

Regards,
Milan

> Regards,
> -Zhen


[ovirt-users] Re: changes in oVirt 4.3 and vGPU?

2019-02-18 Thread Milan Zamazal
Greg Sheremeta  writes:

>>  Exit message: internal error: qemu unexpectedly closed the monitor:
> 2019-02-08T14:01:11.287955Z qemu-kvm: warning: All CPU(s) up to maxcpus
> should be described in NUMA config, ability to start up with partial NUMA
> mappings is obsoleted and will be removed in future
>
> I got that error on a fresh 4.3 yesterday while creating a plain boring
> CentOS VM, and I don't have any Nvidia stuff. Might not be related / could
> be a bug somewhere else. Anyone else seeing this?

Yes, I also get that message.  AFAIK it is harmless (although still
confusing to users) and it probably means what it says: oVirt assigns to
NUMA nodes only the CPUs present at VM start.  See also
https://bugzilla.redhat.com/1437559.

> Greg
>
> On Fri, Feb 8, 2019 at 9:26 AM Hetz Ben Hamo  wrote:
>
>> Hi,
>>
>> I just installed a Tesla T4 card, installed the nvidia's RPM, I see the
>> mdev_type stuff etc.
>> Following their instructions, I'm trying to set a Windows 10 VM to use the
>> vGPU (the VM works without any vGPU), I get this error in the event...
>>
>> VM Win-10-test is down with error. Exit message: internal error: qemu
>> unexpectedly closed the monitor: 2019-02-08T14:01:11.287955Z qemu-kvm:
>> warning: All CPU(s) up to maxcpus should be described in NUMA config,
>> ability to start up with partial NUMA mappings is obsoleted and will be
>> removed in future
>> 2019-02-08T14:01:11.313878Z qemu-kvm: -device
>> vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/486b48a3-01c7-4a67-9727-279813bae0e8,display=off,bus=pci.0,addr=0x8:
>> vfio error: 486b48a3-01c7-4a67-9727-279813bae0e8: error getting device from
>> group 0: Input/output error
>> Verify all devices in group 0 are bound to vfio- or pci-stub and not
>> already in use.
>>
>> Could someone explain to me what am I missing and what to do? I don't see
>> any docs about it.
>>
>> Thanks


[ovirt-users] Re: vGPU with NVIDIA M60 mdev_type not showing

2019-01-16 Thread Milan Zamazal
Josep Manel Andrés Moscardó  writes:

> Hi all,
> I have a host with 2 M60 with the latest supported driver installed,
> and working as you can see:

Hi, all looks fine and the same as on my setup, which is working.

How about the kernel command line (cat /proc/cmdline)?  It's important to
have intel_iommu=on there (assuming an Intel machine).
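If the flag turns out to be missing, it can be added through GRUB.  A hedged sketch follows: the GRUB_CMDLINE_LINUX content is made up for the demonstration, and on a real host you would edit /etc/default/grub itself and then regenerate the GRUB config.

```shell
# Check the running kernel first:
grep -o 'intel_iommu=on' /proc/cmdline || echo "IOMMU flag not set"

# Demonstrate the edit on a sample copy of /etc/default/grub
# (the sample content is hypothetical):
cat > default-grub <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"
EOF
# Append intel_iommu=on to the kernel command line:
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 intel_iommu=on"/' \
    default-grub
grep GRUB_CMDLINE_LINUX default-grub
# → GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet intel_iommu=on"
```

On the real host the same sed would target /etc/default/grub, followed by `grub2-mkconfig -o /boot/grub2/grub.cfg` and a reboot.  (On an AMD machine the flag would be amd_iommu=on instead.)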

[...]

> Is it possible that the package vdsm-hook-vfio-mdev is needed? As far
> as I understand it is already deprecated, but I cannot find anything
> on the documentation.

The hook is no longer needed, nor should it be installed.

Regards,
Milan


[ovirt-users] Re: ovirt-guest-agent running on Debian vm, but data doesn't show in web-gui

2018-10-03 Thread Milan Zamazal
Arild Ringøy  writes:

> Then I'll just keep calm and wait.

I'm afraid that won't help.  I'll see if I can prepare a non-maintainer
upload of the package with fixes.

Regards,
Milan


[ovirt-users] Re: ovirt-guest-agent running on Debian vm, but data doesn't show in web-gui

2018-09-26 Thread Milan Zamazal
Sandro Bonazzola  writes:

> On Thursday, September 20, 2018 at 13:44, Arild Ringøy
> wrote:
>
>> Hi Jon!
>>
>> Thanks for your reply!
>> This did help me last time I had problems like this. The problem then was,
>> like yours, that the agent wasn't running. But now it is. At least as far
>> as I can see. When it wasn't running (or outdated) there was also a warning
>> about this in the web-gui.
>>
>>
> Thanks for reporting. Can you please open a bug on Debian for this so the
> maintainer can check the guest agent?

The bugs are already reported: https://bugs.debian.org/ovirt-guest-agent
Unfortunately they haven't been handled by the maintainer (CCed) so far,
so users hit those problems again and again. :-(

Regards,
Milan

>> Regards
>> Arild


Re: [ovirt-users] VMs stuck in migrating state

2018-03-02 Thread Milan Zamazal
nico...@devels.es writes:

> On 2018-03-02 14:10, Milan Zamazal wrote:
>> nico...@devels.es writes:
>>
>>> We're running 4.1.9 and during the weekend we had a storage issue that
>>> seemed to leave some hosts in a strange state. One of the hosts has almost
>>> all VMs migrating (although it does not seem to actually be migrating them)
>>> and the migration state cannot be cancelled.
>>>
>>> When clicking on one of those machines and selecting 'Cancel migration', in
>>> the
>>> ovirt-engine log I see:
>>>
>>> 2018-02-26 08:52:07,588Z INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f]
>>> HostName = host2.domain.com
>>> 2018-02-26 08:52:07,588Z ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f]
>>> Command 'CancelMigrateVDSCommand(HostName = host2.domain.com,
>>> CancelMigrationVDSParameters:{runAsync='true',
>>> hostId='e63b9146-10c4-47ad-bd6c-f053a8c5b4eb',
>>> vmId='26d37e43-32e2-4e55-9c1e-1438518d5021'})' execution failed:
>>> VDSGenericException: VDSErrorException: Failed to CancelMigrateVDS, error =
>>> Migration process cancelled, code = 82
>>>
>>> On the vdsm side I see:
>>>
>>> 2018-02-26 08:56:19,396+ INFO  (jsonrpc/0) [vdsm.api] START
>>> migrateCancel()
>>> from=:::10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858
>>> (api:46)
>>> 2018-02-26 08:56:19,398+ INFO  (jsonrpc/0) [vdsm.api] FINISH
>>> migrateCancel
>>> return={'status': {'message': 'Migration process cancelled', 'code': 82},
>>> 'progress': 0} from=:::10.X.X.X,54654,
>>> flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:52)
>>>
>>> So no error on the vdsm side log.
>>
>> Interesting.  The messages above indicate that the VM was attempted to
>> migrate, but the migration got temporarily rejected on the destination
>> due to the number of already running incoming migrations (the limit is 2
>> incoming migrations by default).  Later, Vdsm was asked to cancel the
>> outgoing migration and it successfully set a migration canceling flag.
>> However the action was reported as an error to Engine, due to hitting
>> the incoming migration limit on the destination.  Maybe it's a bug, I'm
>> not sure, resulting in minor confusion.  Normally it shouldn't matter,
>> the migration should be canceled shortly after anyway and Engine should
>> be informed about that.
>>
>> However the migration apparently wasn't canceled here.  I can't say what
>> happened without the complete Vdsm log.  One of the possible reasons is
>> that the migration has been waiting on completion of another migration
>> outgoing from the source (only one outgoing migration at a time is allowed by
>> default).  In any case it seems the migration either wasn't actually
>> started at all or it just started being set up and that has never been
>> completely finished.
>>
>
> I'm attaching the log. Basically the storage backend was restarted by fencing
> and then this issue happened. This was on 26/02 at about 08:52 (log time).

Thank you for the log, but the VMs are already “migrating” at its beginning;
there must have been some problem even earlier.

>>> I already tried restarting ovirt-engine but it didn't work.
>>
>> Here the problem is clearly on the Vdsm side.
>>
>>> Could someone shed some light on how to cancel the migration status for
>>> these
>>> machines? All of them seem to be running on the same host.
>>
>> Did the VMs get unblocked in the meantime?  I can't know what's the
>
> No, they didn't. They're still in a "Migrating" state.
>
>> actual state of the given VMs without seeing the complete Vdsm log, so
>> it's difficult to give good advice.  I think that a Vdsm restart on the
>> given host would help BUT it's generally not a very good idea to restart
>> Vdsm if any real migration, outgoing or incoming, is running on the
>> host.  VMs that aren't actually being migrated (despite being reported
>> as migrating) at all should simply return to Up state after the restart,
>> but VMs with any real migration action pending might return to Up
>> state without proper cleanup, resulting in a different kind of mess or
>> maybe something even worse (things should improve in oVirt 4.2, but it's
>> still good to avoid Vdsm restarts with migrations running).
>>
>
> I assume this is not a real migration as it has been in this state for several
> days. Would you advise restarting vdsm in this case then?

I'd say try it.  Since nothing has changed for several days, restarting
Vdsm looks like the appropriate action at this point.  Just don't make a
habit of it :-).

Regards,
Milan


Re: [ovirt-users] VMs stuck in migrating state

2018-03-02 Thread Milan Zamazal
nico...@devels.es writes:

> We're running 4.1.9 and during the weekend we had a storage issue that seemed
> to leave some hosts in a strange state. One of the hosts has almost all VMs
> migrating (although it does not seem to actually be migrating them) and the
> migration state cannot be cancelled.
>
> When clicking on one of those machines and selecting 'Cancel migration', in 
> the
> ovirt-engine log I see:
>
> 2018-02-26 08:52:07,588Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand]
> (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f]
> HostName = host2.domain.com
> 2018-02-26 08:52:07,588Z ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand]
> (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f]
> Command 'CancelMigrateVDSCommand(HostName = host2.domain.com,
> CancelMigrationVDSParameters:{runAsync='true',
> hostId='e63b9146-10c4-47ad-bd6c-f053a8c5b4eb',
> vmId='26d37e43-32e2-4e55-9c1e-1438518d5021'})' execution failed:
> VDSGenericException: VDSErrorException: Failed to CancelMigrateVDS, error =
> Migration process cancelled, code = 82
>
> On the vdsm side I see:
>
> 2018-02-26 08:56:19,396+ INFO  (jsonrpc/0) [vdsm.api] START 
> migrateCancel()
> from=:::10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858
> (api:46)
> 2018-02-26 08:56:19,398+ INFO  (jsonrpc/0) [vdsm.api] FINISH migrateCancel
> return={'status': {'message': 'Migration process cancelled', 'code': 82},
> 'progress': 0} from=:::10.X.X.X,54654,
> flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:52)
>
> So no error on the vdsm side log.

Interesting.  The messages above indicate that the VM was attempted to
migrate, but the migration got temporarily rejected on the destination
due to the number of already running incoming migrations (the limit is 2
incoming migrations by default).  Later, Vdsm was asked to cancel the
outgoing migration and it successfully set a migration canceling flag.
However the action was reported as an error to Engine, due to hitting
the incoming migration limit on the destination.  Maybe it's a bug, I'm
not sure, resulting in minor confusion.  Normally it shouldn't matter,
the migration should be canceled shortly after anyway and Engine should
be informed about that.

However the migration apparently wasn't canceled here.  I can't say what
happened without the complete Vdsm log.  One of the possible reasons is that the
migration has been waiting on completion of another migration outgoing
from the source (only one outgoing migration at a time is allowed by
default).  In any case it seems the migration either wasn't actually
started at all or it just started being set up and that has never been
completely finished.

> I already tried restarting ovirt-engine but it didn't work.

Here the problem is clearly on the Vdsm side.

> Could someone shed some light on how to cancel the migration status for these
> machines? All of them seem to be running on the same host.

Did the VMs get unblocked in the meantime?  I can't know what's the
actual state of the given VMs without seeing the complete Vdsm log, so
it's difficult to give good advice.  I think that a Vdsm restart on the
given host would help BUT it's generally not a very good idea to restart
Vdsm if any real migration, outgoing or incoming, is running on the
host.  VMs that aren't actually being migrated (despite being reported
as migrating) at all should simply return to Up state after the restart,
but VMs with any real migration action pending might return to Up
state without proper cleanup, resulting in a different kind of mess or
maybe something even worse (things should improve in oVirt 4.2, but it's
still good to avoid Vdsm restarts with migrations running).

Regards,
Milan


Re: [ovirt-users] VM Migrations

2018-03-02 Thread Milan Zamazal
"Bryan Sockel" <bryan.soc...@mdaemon.com> writes:

> Thanks for the info, I will be sure to update to 4.2.2 when it is ready.
> Without the ability to migrate VMs based on this image, it is less
> convenient to patch my servers on a consistent basis.  I was also
> experiencing this issue prior to upgrading my environment to 4.2.

If you experienced it also in 4.1 then it must be another problem, which
may or may not be fixed in 4.2, or it may be related to your template or
setup.  Let's see what happens once you upgrade to 4.2.2.

> From: "Milan Zamazal (mzama...@redhat.com)" <mzama...@redhat.com>
> To: "Bryan Sockel" <bryan.soc...@mdaemon.com>
> Cc: "users\@ovirt.org" <users@ovirt.org>
> Date: Thu, 01 Mar 2018 17:38:24 +0100
> Subject: Re: VM Migrations
>
> "Bryan Sockel" <bryan.soc...@mdaemon.com> writes:
>
>> I am having an issue migrating all vm's based on a specific template.  The
>> template was created in a previous ovirt environment (4.1), and all VM's
>> deployed from this template experience the same issue.
>>
>> I would like to find a resolution to both the template and vm's that are
>> already deployed from this template.  The VM in question is VDI-Bryan and
>> the migration starts around 12:25.  I have attached the engine.log and the
>> vdsm.log file from the destination server.
>
> The VM died on the destination before it could be migrated and I can't
> see the exact reason in the log.  However, I can see there that you have
> hit some 4.1->4.2 migration issues; that is likely the cause here, and it
> is a problem in itself in any case.
>
> That will be fixed in 4.2.2.  If you don't want to wait until 4.2.2 is
> released, you may want to try current 4.2.2 snapshot, which already
> contains the fixes.
>
> Regards,
> Milan


Re: [ovirt-users] VMs with multiple vdisks don't migrate

2018-03-01 Thread Milan Zamazal
"fsoyer"  writes:

> I tried to activate the debug mode, but the restart of libvirt crashed
> something on the host: it was no longer possible to start any VM on it, and
> migration to it just never started. So I decided to restart it and, to be
> sure, I restarted all the hosts.
> And... now the migration of all VMs, single- or multi-disk, works?!? So
> there was probably something hidden that was reset or repaired by the global
> restart! In French, we call that "tomber en marche" ;)

I'm always amazed at how many problems in computing are eventually resolved
(and how many new ones introduced) by a reboot :-).  I'm glad that it
works for you now.

Regards,
Milan


Re: [ovirt-users] VM Migrations

2018-03-01 Thread Milan Zamazal
"Bryan Sockel"  writes:

> I am having an issue migrating all VMs based on a specific template.  The
> template was created in a previous oVirt environment (4.1), and all VMs
> deployed from this template experience the same issue.
>
> I would like to find a resolution for both the template and the VMs that
> are already deployed from it.  The VM in question is VDI-Bryan and
> the migration starts around 12:25.  I have attached the engine.log and the
> vdsm.log file from the destination server.

The VM died on the destination before it could be migrated and I can't
see the exact reason in the log.  However, I can see there that you have
hit some 4.1->4.2 migration issues; that is likely the cause here, and it
is a problem in itself in any case.

That will be fixed in 4.2.2.  If you don't want to wait until 4.2.2 is
released, you may want to try current 4.2.2 snapshot, which already
contains the fixes.

Regards,
Milan


Re: [ovirt-users] VM paused rather than migrate to another hosts

2018-03-01 Thread Milan Zamazal
Terry hey <recreati...@gmail.com> writes:

> Dear Milan,
> Today, I just found that oVirt 4.2 supports iLO5, and power management was
> set on all hosts (hypervisors).
> I found that if I choose VM lease and shut down the iSCSI network, the VM
> was shut down.
> Then the VM migrated to another host once the iSCSI network was resumed.

If the VM had been shut down then it was probably restarted on another
host rather than migrated there.

> If I just enable HA in the VM settings, the VM was successfully migrated
> to another host.

There can be a special situation if the storage storing VM leases is
unavailable.

oVirt tries to do what it can in case of storage problems, but it all
depends on the overall state of the storage – for how long it remains
unavailable, if it is available at least on some hosts, and which parts
of the storage are available; there are more possible scenarios here.
Indeed, it's a good idea to experiment with failures and learn what
happens before real problems come!

> But I want to ask another question: what if the management network is down?
> What VM and hosts behavior would you expect?

The primary problem is that oVirt Engine can't communicate with the
hosts in such a case.  Unless there is another problem (especially
assuming storage is still reachable from the hosts) the hosts and VMs
will keep running, but the hosts will be displayed as unreachable and
VMs as unknown in Engine.  And you won't be able to manage your VMs from
Engine of course.  Once the management network is back, things should
return to normal state sooner or later.

Regards,
Milan

> Regards
> Terry Hung
>
> 2018-02-28 22:29 GMT+08:00 Milan Zamazal <mzama...@redhat.com>:
>
>> Terry hey <recreati...@gmail.com> writes:
>>
>> > I am testing iSCSI bonding failover test on oVirt, but i observed that VM
>> > were paused and did not migrate to another host. Please see the details
>> as
>> > follows.
>> >
>> > I have two hypervisors. Since they are running iLO 5 and oVirt 4.2 cannot
>> > support iLO 5, thus i cannot setup power management.
>> >
>> > For the cluster setting, I set "Migrate Virtual Machines" under the
>> > Migration Policy.
>> >
>> > For each hypervisor, I bonded two iSCSI interface as bond 1.
>> >
>> > I created one Virtual machine and enable high availability on it.
>> > Also, I created one Virtual machine and did not enable high availability
>> on
>> > it.
>> >
>> > When i shutdown one of the iSCSI interface, nothing happened.
>> > But when i shutdown both iSCSI interface, VM in that hosts were paused
>> and
>> > did not migrate to another hosts. Is this behavior normal or i miss
>> > something?
>>
>> A paused VM can't be migrated, since there are no guarantees about the
>> storage state.  As the VMs were paused under erroneous (rather than
>> controlled such as putting the host into maintenance) situation,
>> migration policy can't help here.
>>
>> But highly available VMs can be restarted on another host automatically.
>> Do you have VM lease enabled for the highly available VM in High
>> Availability settings?  With a lease, Engine should be able to restart
>> the VM elsewhere after a while, without it Engine can't do that since
>> there is danger of resuming the VM on the original host, resulting in
>> multiple instances of the same VM running at the same time.
>>
>> VMs without high availability must be restarted manually (unless storage
>> domain becomes available again).
>>
>> HTH,
>> Milan
>>




Re: [ovirt-users] VM paused rather than migrate to another hosts

2018-02-28 Thread Milan Zamazal
Terry hey  writes:

> I am testing iSCSI bonding failover on oVirt, but I observed that VMs
> were paused and did not migrate to another host. Please see the details as
> follows.
>
> I have two hypervisors. Since they are running iLO 5 and oVirt 4.2 does
> not support iLO 5, I cannot set up power management.
>
> For the cluster setting, I set "Migrate Virtual Machines" under the
> Migration Policy.
>
> For each hypervisor, I bonded the two iSCSI interfaces as bond 1.
>
> I created one virtual machine and enabled high availability on it.
> Also, I created one virtual machine and did not enable high availability
> on it.
>
> When I shut down one of the iSCSI interfaces, nothing happened.
> But when I shut down both iSCSI interfaces, the VMs on that host were
> paused and did not migrate to another host. Is this behavior normal or
> did I miss something?

A paused VM can't be migrated, since there are no guarantees about the
storage state.  As the VMs were paused in an erroneous situation (rather
than a controlled one, such as putting the host into maintenance), the
migration policy can't help here.

But highly available VMs can be restarted on another host automatically.
Do you have VM lease enabled for the highly available VM in High
Availability settings?  With a lease, Engine should be able to restart
the VM elsewhere after a while, without it Engine can't do that since
there is danger of resuming the VM on the original host, resulting in
multiple instances of the same VM running at the same time.

VMs without high availability must be restarted manually (unless storage
domain becomes available again).

HTH,
Milan


Re: [ovirt-users] VMs with multiple vdisks don't migrate

2018-02-26 Thread Milan Zamazal
"fsoyer" <fso...@systea.fr> writes:

> I don't believe that this is related to a host; tests have been done from
> victor (source) to ginger (dest) and from ginger to victor. I don't see
> problems on the storage (Gluster 3.12, native, managed by oVirt), since
> VMs with a single disk from 20 to 250G migrate without error in a few
> seconds and with no downtime.

The host itself may be fine, but libvirt/QEMU running there may expose
problems, perhaps just for some VMs.  According to your logs something
is not behaving as expected on the source host during the faulty
migration.

> How can I enable this libvirt debug mode?

Set the following options in /etc/libvirt/libvirtd.conf (look for
examples in comments there)

- log_level=1
- log_outputs="1:file:/var/log/libvirt/libvirtd.log"

and restart libvirt.  Then /var/log/libvirt/libvirtd.log should contain
the log.  It will be huge, so I suggest enabling it only for the time
of reproducing the problem.
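Put together, enabling the debug log could look like the following shell sketch. The two option names are the ones given above; the `libvirtd` systemd service name is an assumption about the host, so adjust as needed:

```shell
# Turn on full libvirt debug logging (remember to revert this
# afterwards -- the log grows very quickly)
cat >> /etc/libvirt/libvirtd.conf <<'EOF'
log_level=1
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
EOF

systemctl restart libvirtd

# Reproduce the failing migration, then collect the log:
ls -lh /var/log/libvirt/libvirtd.log
```

Afterwards, remove the two lines again and restart libvirtd once more so the daemon returns to its normal logging level.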

> --
>
> Cordialement,
>
> Frank Soyer 
>
>  
>
> Le Vendredi, Février 23, 2018 09:56 CET, Milan Zamazal <mzama...@redhat.com> 
> a écrit:
>  Maor Lipchuk <mlipc...@redhat.com> writes:
>
>> I encountered a bug (see [1]) which contains the same error mentioned in
>> your VDSM logs (see [2]), but I doubt it is related.
>
> Indeed, it's not related.
>
> The error in vdsm_victor.log just means that the info gathering call
> tries to access libvirt domain before the incoming migration is
> completed. It's ugly but harmless.
>
> Milan, maybe you have some advice to troubleshoot the issue? Will the
> libvirt/qemu logs help?
>
> It seems there is something wrong on (at least) the source host. There
> are no migration progress messages in the vdsm_ginger.log and there are
> warnings about stale stat samples. That looks like problems with
> calling libvirt – slow and/or stuck calls, maybe due to storage
> problems. The possibly faulty second disk could cause that.
>
> libvirt debug logs could tell us whether that is indeed the problem and
> whether it is caused by storage or something else.
>
>> I would suggest to open a bug on that issue so we can track it more
>> properly.
>>
>> Regards,
>> Maor
>>
>>
>> [1]
>> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to
>> VM running on 2 Hosts
>>
>> [2]
>> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> Internal server error (__init__:577)
>> Traceback (most recent call last):
>> File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572,
>> in _handle_request
>> res = method(**params)
>> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in
>> _dynamicMethod
>> result = fn(*methodArgs)
>> File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
>> io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
>> File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
>> 'current_values': v.getIoTune()}
>> File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
>> result = self.getIoTuneResponse()
>> File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
>> res = self._dom.blockIoTune(
>> File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,
>> in __getattr__
>> % self.vmid)
>> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
>> started yet or was shut down
>>
>> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fso...@systea.fr> wrote:
>>
>>> Hi,
>>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
>>> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5),
>>> while the engine.log in the first mail on 2018-02-12 was for VMs standing
>>> on victor, migrated (or failed to migrate...) to ginger. Symptoms were
>>> exactly the same, in both directions, and VMs works like a charm before,
>>> and even after (migration "killed" by a poweroff of VMs).
>>> Am I the only one experiencing this problem?
>>>
>>>
>>> Thanks
>>> --
>>>
>>> Cordialement,
>>>
>>> *Frank Soyer *
>>>
>>>
>>>
>>> Le Jeudi, Février 22, 2018 00:45 CET, Maor Lipchuk <mlipc...@redhat.com>
>>> a écrit:
>>>
>>>
>>> Hi Frank,
>>>
> Sorry about the delayed response.
>>> I've been going through the logs you attached, although I could not find
>>> any specific indication why the 

Re: [ovirt-users] VMs with multiple vdisks don't migrate

2018-02-23 Thread Milan Zamazal
Maor Lipchuk  writes:

> I encountered a bug (see [1]) which contains the same error mentioned in
> your VDSM logs (see [2]), but I doubt it is related.

Indeed, it's not related.

The error in vdsm_victor.log just means that the info gathering call
tries to access libvirt domain before the incoming migration is
completed.  It's ugly but harmless.

> Milan, maybe you have some advice to troubleshoot the issue? Will the
> libvirt/qemu logs help?

It seems there is something wrong on (at least) the source host.  There
are no migration progress messages in the vdsm_ginger.log and there are
warnings about stale stat samples.  That looks like problems with
calling libvirt – slow and/or stuck calls, maybe due to storage
problems.  The possibly faulty second disk could cause that.

libvirt debug logs could tell us whether that is indeed the problem and
whether it is caused by storage or something else.

> I would suggest to open a bug on that issue so we can track it more
> properly.
>
> Regards,
> Maor
>
>
> [1]
> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 -  Migration leads to
> VM running on 2 Hosts
>
> [2]
> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572,
> in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in
> _dynamicMethod
> result = fn(*methodArgs)
>   File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
> io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
>   File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
> 'current_values': v.getIoTune()}
>   File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
> result = self.getIoTuneResponse()
>   File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
> res = self._dom.blockIoTune(
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,
> in __getattr__
> % self.vmid)
> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
> started yet or was shut down
>
> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer  wrote:
>
>> Hi,
>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
>> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5),
>> while the engine.log in the first mail on 2018-02-12 was for VMs standing
>> on victor, migrated (or failed to migrate...) to ginger. Symptoms were
>> exactly the same, in both directions, and VMs works like a charm before,
>> and even after (migration "killed" by a poweroff of VMs).
>> Am I the only one experiencing this problem?
>>
>>
>> Thanks
>> --
>>
>> Cordialement,
>>
>> *Frank Soyer *
>>
>>
>>
>> Le Jeudi, Février 22, 2018 00:45 CET, Maor Lipchuk 
>> a écrit:
>>
>>
>> Hi Frank,
>>
>> Sorry about the delayed response.
>> I've been going through the logs you attached, although I could not find
>> any specific indication why the migration failed because of the disk you
>> were mentioning.
>> Does this VM run with both disks on the target host without migration?
>>
>> Regards,
>> Maor
>>
>>
>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer  wrote:
>>>
>>> Hi Maor,
>>> sorry for the double post, I've change the email adress of my account and
>>> supposed that I'd need to re-post it.
>>> And thank you for your time. Here are the logs. I added a vdisk to an
>>> existing VM : it no more migrates, needing to poweroff it after minutes.
>>> Then simply deleting the second disk makes migrate it in exactly 9s without
>>> problem !
>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>>
>>> --
>>>
>>> Cordialement,
>>>
>>> *Frank Soyer *
>>> Le Mercredi, Février 14, 2018 11:04 CET, Maor Lipchuk <
>>> mlipc...@redhat.com> a écrit:
>>>
>>>
>>> Hi Frank,
>>>
>>> I already replied on your last email.
>>> Can you provide the VDSM logs from the time of the migration failure for
>>> both hosts:
>>>   ginger.local.systea.fr and victor.local.systea.fr
>>>
>>> Thanks,
>>> Maor
>>>
>>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer  wrote:

 Hi all,
 I discovered yesterday a problem when migrating VM with more than one
 vdisk.
 On our test servers (oVirt4.1, shared storage with Gluster), I created 2
 VMs needed for a test, from a template with a 20G vdisk. On these VMs I
 added a 100G vdisk (for these tests I didn't want to waste time extending
 the existing vdisks... but I lost time in the end...). The VMs with the 2
 vdisks work well.
 Now I saw some updates waiting on the host. I tried to put it in
 maintenance... But it stopped on the two VMs. They were marked "migrating",
 but no longer accessible. Other 

Re: [ovirt-users] ovirt-guest-agent.service has broken in Debian 8 virtual machines after updates hosts to 4.2

2018-01-08 Thread Milan Zamazal
Алексей Максимов  writes:

> # wget 
> http://ftp.us.debian.org/debian/pool/main/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-2_all.deb
> # apt-get install gir1.2-glib-2.0 libdbus-glib-1-2 libgirepository-1.0-1 
> libpango1.0-0 libuser1 python-dbus python-dbus-dev python-ethtool python-gi 
> qemu-guest-agent usermode
> # dpkg -i ~/packages/ovirt-guest-agent_1.0.13.dfsg-2_all.deb

Yes, right, you need a newer ovirt-guest-agent version, so install it
from testing.

I filed a Debian bug asking for a backport of the package for stable:
https://bugs.debian.org/886661

> # udevadm trigger --subsystem-match="virtio-ports"
>
> # systemctl restart ovirt-guest-agent.service

Yes, alternatively you can reboot the VM, whatever is easier :-).

However, the package should do this itself; I think there is a bug in its
installation script, so I filed another bug against the package:
https://bugs.debian.org/886660

> Now the service is working.
> But I do not know if it's the right way :(

Yes, it is.

Regards,
Milan


Re: [ovirt-users] Hot Memory add and Physical Memory guaranteed

2017-06-26 Thread Milan Zamazal
"Luca 'remix_tj' Lorenzetto" <lorenzetto.l...@gmail.com> writes:

> On Fri, Jun 23, 2017 at 11:16 AM, Milan Zamazal <mzama...@redhat.com> wrote:
>> "Luca 'remix_tj' Lorenzetto" <lorenzetto.l...@gmail.com> writes:
>>
>>> I just tested memory hot add on a VM. This VM had 2048 MB. I set
>>> the new memory to 2662 MB.
>>> I logged into the VM and saw that there hasn't been any memory change,
>>> even though I told the manager to apply the memory expansion immediately.
>>>
>>> Memory shown by free -m is 1772 MB.
>>
>> [...]
>>
>>> Forgot to say that it is a RHEL 7 VM and has the memory balloon device enabled.
>>
>> This is normal with memory balloon enabled – memory balloon often
>> "consumes" the hot plugged memory, so you can't see it.
>
> Ok.
>
> What exactly is the role of "guaranteed memory"? Is it only about ensuring
> at startup time that there is at least X free memory on the hosts, or
> something more complex?

I think it defines the minimum memory that the balloon should always
leave available.

Martin, do you know answers to the other questions?

> What's the best configuration? Keeping the balloon or not? Setting memory
> and guaranteed memory to the same value?
> If I have 1TB of RAM across the whole cluster, does "guaranteed memory"
> prevent provisioning VMs with cumulative guaranteed memory
> usage greater than 1TB?
> Can KSM help allow overprovisioning in this situation?
>
> Does the memory balloon device have an impact on VM performance?
>
> I need to understand better in order to plan correctly all the
> required hardware resources I need for migrating to oVirt.
>
> Luca


Re: [ovirt-users] Hot Memory add and Physical Memory guaranteed

2017-06-23 Thread Milan Zamazal
"Luca 'remix_tj' Lorenzetto"  writes:

> I just tested memory hot add on a VM. This VM had 2048 MB. I set
> the new memory to 2662 MB.
> I logged into the VM and saw that there hasn't been any memory change,
> even though I told the manager to apply the memory expansion immediately.
>
> Memory shown by free -m is 1772 MB.

[...]

> Forgot to say that it is a RHEL 7 VM and has the memory balloon device enabled.

This is normal with memory balloon enabled – memory balloon often
"consumes" the hot plugged memory, so you can't see it.

Regards,
Milan


Re: [ovirt-users] oVirt and Ubuntu>=14.04?

2016-10-21 Thread Milan Zamazal
Sandro Bonazzola  writes:

> On Fri, Oct 21, 2016 at 1:22 AM, Jon Forrest 
> wrote:
>>
>> On 10/20/16 4:11 PM, Charles Kozler wrote:
>>
>>> oVirt is the upstream source project for RedHat Enterprise
>>> Virtualization (RHEV). As expected, it's only supported on CentOS 7 (and
>>> older versions on 6)
>>
>> This makes sense. But, do either of these components work on Ubuntu,
>> and, if so, how well?
>
> Milan Zamazal is working on porting to Debian. He may give you some more
> up-to-date information.

As for Ubuntu/Debian guests, there were some bugs in Debian packaging of
ovirt-guest-agent that have been recently fixed by the Debian
maintainer.  I think ovirt-guest-agent package is unchanged in Ubuntu,
so the updated version should work there.

As for Ubuntu/Debian hosts, it's more complicated.  The supporting
libraries for Vdsm are available as Debian packages although they
probably need to be updated to current upstream versions.  The major
problem is Vdsm itself.  I tried to make it working on Debian, but it's
not an easy task, some things work and some don't (e.g. the networking
setup is different on Debian based systems).  Currently Vdsm is not
usable on Debian, more fixes are needed.  If anybody is interested in
that work, I can provide more information.

As for Engine, I don't know about any plans to make it running on Debian
or its derivatives.  One can run Engine in a VM or a container so there
is no real motivation to port it to systems other than CentOS or RHEL.

> As always, help in the effort is welcome :-)

Definitely! :-)

Regards,
Milan


Re: [ovirt-users] live migration with openvswitch

2016-09-19 Thread Milan Zamazal
Michal Skrivanek  writes:

>> > I'm afraid that we are not yet ready to backport it to 4.0 - we found
>> > out that as it is, it breaks migration for vmfex and external network
>> > providers; it also breaks when a buggy Engine db does not send a
>> > displayNetwork. But we plan to fix these issues quite soon.
>
> which “buggy” engine? There were changes in parameters, most of these issues
> are not relevant anymore since we ditched <3.6 though.
> Again it’s ok as long as it is clearly mentioned like "3.6 engine sends it in
> such and such parameter, we can drop it once we support 4.0+"

I think Edward means the problem when there is no display (and
migration) network set for a cluster in Engine.  This may happen due to
a former bug in Engine db scripts.  Vdsm apparently falls back on
ovirtmgmt in most cases so the problem is typically unnoticed.  But when
you look for displayNetwork explicitly in Vdsm, it's not there.

The bug may affect 4.0 installations until a db upgrade fix is created
and backported.


[ovirt-users] Does anybody use Vdsm connectivity log?

2016-08-30 Thread Milan Zamazal
If anybody uses Vdsm connectivity log (/var/log/vdsm/connectivity.log),
please tell me.  We work on logging cleanup in Vdsm and we are thinking
whether that log is still useful.  We consider removing it in case
nobody needs it.

Thanks,
Milan


Re: [ovirt-users] Problem starting VMs

2016-08-23 Thread Milan Zamazal
Wolfgang Bucher  writes:

> After reboot I cannot start some VMs, and I get the following warnings in
> vdsm.log:
>
> periodic/6::WARNING::2016-08-18
> 19:26:10,244::periodic::261::virt.periodic.VmDispatcher::(__call__) could not
> run  on
> [u'5c868b6a-db8e-4c67-a2b7-8bcdefc3350a']

[...]

> vmId=`5c868b6a-db8e-4c67-a2b7-8bcdefc3350a`::could not run on
> 5c868b6a-db8e-4c67-a2b7-8bcdefc3350a: domain not connected
> periodic/3::WARNING::2016-08-18

Those messages may be present on VM start and may (or may not) be
harmless.

[...]

> sometimes the vm starts after 15 min and more.

Do you mean that some VMs start after a long time and some don't start at
all?  If a VM doesn't start at all then there should be some ERROR
message in the log.  If all the VMs eventually start sooner or later,
then it would be useful to see the whole piece of the log from the
initial VM.create call to the VM start.

And what do you mean exactly by the VM start?  Is it that a VM is not
booting in the meantime, is inaccessible, is indicated as starting or
not running in Engine, something else?


Re: [ovirt-users] Upgrade 3.6 to 4.0 and "ghost" incompatible cluster version

2016-07-22 Thread Milan Zamazal
"Federico Sayd"  writes:

> I'm trying to upgrade oVirt 3.6.3 to 4.0, but engine-setup complains about
> upgrading from incompatible version 3.3
>
> I see in the engine-setup log that the vds_groups table is checked to determine
> the compatibility version. The log shows that engine-setup detects 2 cluster
> versions: 3.3 and 3.6. Indeed, there are 2 clusters registered in the table:
> "cluster-3.6" and "Default"
>
> "Cluster-3.6" (version 3.6) is the only cluster in the DC in my oVirt setup.
> "Default" (version 3.3) must be a cluster that I surely deleted in a past
> upgrade.
>
> Why is a cluster named "Default" (with compatibility version 3.3) still present
> in the vds_groups table? Cluster "Default" isn't displayed anywhere in the web
> interface.

It looks like a bug to me.  The cluster should be either missing in the
database or present in the web interface.

Could you please provide us more details about the problem?  It might
help us to investigate the issue if you could do the following:

- Install ovirt-engine-webadmin-portal-debuginfo package.
- Restart Engine.
- Go to the main Clusters tab.
- Refresh the page in your browser.
- Send us the parts of engine.log and ui.log corresponding to the
  refresh action.

> Any clue to solve this issue?

As a workaround, if you are sure you don't have anything in Default
cluster, you may try to set compatibility_version to "3.6" for "Default"
cluster in the vds_groups database table.
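A sketch of how that workaround might be applied on the Engine machine. The table and column names (`vds_groups`, `compatibility_version`) and the cluster name `Default` come from this thread; the `engine` database name and running `psql` as the `postgres` system user are assumptions about a default installation. Take a backup first, and only do this if the Default cluster is really unused:

```shell
# Back up the Engine database before touching it
su - postgres -c "pg_dump engine > /tmp/engine-before-fix.sql"

# Check which clusters exist and their compatibility versions
su - postgres -c "psql engine -c \"SELECT name, compatibility_version FROM vds_groups;\""

# Bump the stale Default cluster so engine-setup stops complaining
su - postgres -c "psql engine -c \"UPDATE vds_groups SET compatibility_version = '3.6' WHERE name = 'Default';\""
```

Then re-run engine-setup and verify it no longer reports the incompatible 3.3 version.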


Re: [ovirt-users] Centos 7 no bootable device

2016-07-22 Thread Milan Zamazal
Johan Kooijman  writes:

> Situation as follows: mixed cluster with 3.5 and 3.6 nodes. Now in the
> process of reinstalling the 3.5 nodes with 3.6 on CentOS 7.2. I can't live
> migrate VM's while they're running on different versions.

Live migration should always work within a single cluster.  What do Vdsm
logs say on both the source and target hosts when the migration fails?

> The most interesting part happens when I power down a VM and then run
> it on a 3.6 node. Only on CentOS 7 VMs I'm getting a "no bootable device"
> error. I have a mixed setup of Ubuntu, CentOS 6 and CentOS 7. Ubuntu &
> CentOS 6 are fine.
>
> Tried shooting grub in MBR again, to no effect. I start the VM then on a
> 3.5 node and all is fine.

So the VM is indicated as starting in Engine and BIOS or GRUB can't find
the device to boot from?  Could you provide Vdsm logs from both
successful and unsuccessful boot of the same VM?


Re: [ovirt-users] Unable to import pre-configured nfs data domain

2016-07-22 Thread Milan Zamazal
Logan Kuhn  writes:

> Am I correct in the assumption that importing a previously master data domain
> into a fresh engine without a current master domain is supported?

It's supported only in case the master domain was previously correctly
detached from the data center.

In case of an unexpected complete disaster, when a fresh engine is
installed and used, it's still possible to recover the master domain in
theory.  You must find `metadata' file in the master domain and edit it
for the new engine.  It's completely unsupported and it may or may not
work.  We don't have guidelines how to do it, but you may try to create
a new master domain, then detach it and compare the two metadata files.


Re: [ovirt-users] hosted-engine engine crash

2016-07-15 Thread Milan Zamazal
Mark Gagnon  writes:

> Even if it's a storage problem, if it happens, how can I force it
> to restart the engine?

Hi Mark, it indeed looks like a storage problem.  Unfortunately, there's
very little what can be done when storage is broken.  I don't think
there is any better option than to restart the Engine manually as you
describe, once the storage is working again.


Re: [ovirt-users] Kernel related errors with Fedora 24 Guest

2016-07-14 Thread Milan Zamazal
Alexis HAUSER  writes:

> This doesn't look really good, right? Should I report it somewhere?
>
> I actually had this bug when using RHEL7 profile for a Fedora 24 (to provide
> enough vram, because the default with other profiles is much lower).

I don't have any idea what the problem might be.  You may want to report
it against the Fedora kernel; the bug may be elsewhere, but hopefully they
will redirect you to the right place if this is the case.


Re: [ovirt-users] Changing video memory size

2016-07-12 Thread Milan Zamazal
Alexis HAUSER  writes:

>> Look for vramMultiplier in osinfo-defaults.properties file.
>> The following formula applies: vram_size = vramMultiplier * vgamem
>> You must restart Engine to apply the new setting.
>
> The only thing I found about it in that file is:
> os.rhel_7x64.devices.display.vramMultiplier.value = 2

That's right.  You can add similar lines for other OSes as needed.

> I am not sure this file is what I want: it seems to only affect
> some parameters at the creation of the OS.
> I.e. if I take an Ubuntu VM but set it up as RHEL7, it won't have more vram.

Video RAM settings should be applied when a VM is started.  So if you
set your Ubuntu VM as RHEL7 and then start it, it should get 32 MB of
vram (it works for me).

> With CentOS 7 however (which I have set as RHEL7 at its creation), it has more
> vram, but not 2x, really more:
> CentOS has "vram_size=33554432" according to qemu,
> and all the other VMs have 8 instead of... 33

If vramMultiplier is not present then a fixed value of 8 MB is used.
If it is present then the formula above applies – note that vgamem is
16 MB, so vram = 2 * 16 MB = 32 MB is indeed the expected value.
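The rule can be checked with simple shell arithmetic. The 8 MB fixed default and the 16 MB vgamem value are taken from this thread and may differ between oVirt versions:

```shell
vgamem_mb=16          # default vgamem, per this thread
vram_multiplier=2     # os.rhel_7x64.devices.display.vramMultiplier.value = 2

# vram_size = vramMultiplier * vgamem
echo $(( vram_multiplier * vgamem_mb ))   # -> 32 (MB)

# which matches the value qemu reports in bytes:
echo $(( 32 * 1024 * 1024 ))              # -> 33554432

# without a vramMultiplier for the OS, a fixed 8 MB is used instead
```

This is why the CentOS 7 VM set up as RHEL7 shows 33554432 bytes while the other VMs stay at 8 MB.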


Re: [ovirt-users] Changing video memory size

2016-07-12 Thread Milan Zamazal
Alexis HAUSER  writes:

> I would like to change the video memory size (vram_size parameter), how can I 
> proceed?

Look for vramMultiplier in osinfo-defaults.properties file.
The following formula applies: vram_size = vramMultiplier * vgamem

You must restart Engine to apply the new setting.


Re: [ovirt-users] vms in paused state

2016-05-13 Thread Milan Zamazal
We've found out that if libvirtd got restarted then VMs with a disabled
memory balloon device are wrongly reported as being in the paused state.
It's a bug and we're working on a fix.


Re: [ovirt-users] vms in paused state

2016-05-04 Thread Milan Zamazal
Bill James  writes:

> .recovery setting before removing:
> p298
> sS'status'
> p299
> S'Paused'
> p300
>
> After removing .recovery file and shutdown and restart:
> V0
> sS'status'
> p51
> S'Up'
> p52

Thank you for the information.  I was able to reproduce the problem with
mistakenly reported paused state when Vdsm receives unexpected data from
libvirt.  I'll try to look at it.

Restarting Vdsm (4.17.18 and some newer versions) afterwards remedies
the problem for me, even without removing the recovery file.

Milan


Re: [ovirt-users] several questions about serial console

2016-04-20 Thread Milan Zamazal
Nathanaël Blanchet  writes:

> * How do I get out of a selected VM when at the login prompt
> (why not go back to the VM menu?) rather than killing the ssh
> process or closing the terminal? The usual "^]" doesn't work there.

You must use ssh escape sequences; ~. should work here.
See the ESCAPE CHARACTERS section in `man ssh' for more information.


Re: [ovirt-users] Debian porting

2016-02-01 Thread Milan Zamazal
Nir Soffer  writes:

> If you must avoid /rhev/data-center, move it to /run/vdsm/data-center,
> but note that future version of vdsm may move it to /run/vdsm/storage,
> or another location, so old Debian code will not be compatible with
> new Debian code :-)
>
> It would be easier to support vdsm if the runtime configuration is the
> same on all platforms.

Thank you for explaining the issue.  I think we can go with
/run/vdsm/data-center for now.  There's no hurry about the final
decision; it would be nice to have the upstream decision about the
future location before the next Debian freeze, which is going to happen
sometime in the fall.


Re: [ovirt-users] Debian porting

2016-01-29 Thread Milan Zamazal
lucas castro  writes:

> Who is working on ovirt debian porting,

I'm working on the inclusion of Vdsm into Debian.  ovirt-guest-agent is
already included in Debian (packaged by another Debian maintainer).  I'm
not aware of any plans to package Engine for Debian, nor do I plan to do
so.

There are also Vdsm Debian packages provided by oVirt at
http://resources.ovirt.org/pub/ovirt-3.6/debian/, but I don't recommend
using them if you are going to use packages from standard Debian
distribution once they are ready.  While the packages to be included in
Debian started from those provided by oVirt, there are many fixes in
them and upgrading from oVirt repository packages to packages from
Debian is not supported (may change if there is strong demand for that).
So mixing them is likely to cause trouble.

As for Vdsm in Debian, I've already uploaded most of the supporting
packages (Python libraries, MoM) to Debian unstable.  Vdsm itself is in
preparation.

One blocker is old version of sanlock package in Debian and missing
sanlock-python package.  I wrote to the Debian package maintainer a few
days ago, no response so far.  In the meantime, it's possible to use
sanlock packages by oVirt from the URL mentioned above.  (Please note I
can't simply upload Debian package of another maintainer without his
consent, so we must be patient.)

> And how can I help ? 

If you'd like to help with Vdsm packaging in Debian, you can do so in
any of the following ways:

- Providing input on your needs.

- Providing feedback on what to do with /rhev/data-center mounts
  directory in Vdsm.  It's FHS incompatible and must be changed for
  Debian (the current location in the package is
  /run/vdsm/rhev/data-center).  The unpleasant thing is that AFAIK
  migrations are not possible with current Vdsm across machines with
  mounts at different locations, so we should be careful.

- Testing vdsm* packages once they are ready.  They're not yet but once
  they are, testing them will be very welcome.

- Providing feedback on the packaging.  The git repository is on Alioth:
  https://anonscm.debian.org/cgit/collab-maint/vdsm.git/ .  BTW, if
  anybody needs commit access (and doesn't have it) to the repository,
  tell me.  Just please coordinate with me in any case so that we avoid
  duplicate work or conflicting plans.

- The `vdsm*' packages are currently lintian clean, but completely
  untested, even installation may not work.  If you'd like to check the
  installation and to fix contingent bugs preventing it, it's welcome.
  You'll also need safelease
  (https://anonscm.debian.org/cgit/collab-maint/safelease.git/), not yet
  in Debian but ready to upload, I'll do so soon.

- Testing whether all the Vdsm related packages from unstable
  (python-cpopen, python-threading, ioprocess, safelease, mom, vdsm*)
  work on Debian 8 (jessie) as well.  Ideally, they might work
  unchanged, but in case they don't we may be considering backporting
  them.

- You can also review patches in debian/patches.  Maybe some of the
  changes should be incorporated upstream, maybe some of them should be
  improved.


Re: [ovirt-users] Debian porting

2016-01-29 Thread Milan Zamazal
Rafael Martins <rmart...@redhat.com> writes:

> Milan Zamazal (CC'd) is working on packaging VDSM for Debian. You may
> want to talk to him about what is missing.

I just sent a complete report (and added devel@).  It's probably better
to keep the Debian packaging discussion on (one of) the lists.