I remember seeing the bug earlier, but because it was closed I thought it was
unrelated. This appears to be it:

https://bugzilla.redhat.com/show_bug.cgi?id=1670701

Perhaps I'm not understanding your question about the VM guest agent, but I
don't have any guest agent currently installed on the VM. Perhaps the
output of my qemu-kvm process answers this question?

/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
-S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
-m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
-numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
-no-user-config -nodefaults -chardev
socket,id=charmonitor,fd=31,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc
base=2019-07-09T10:26:53,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,fd=35,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,fd=36,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
-incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
-object rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on
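
Since the command line above does expose an org.qemu.guest_agent.0 channel,
one quick check from the host for whether anything is answering on it (a
sketch; this assumes virsh works against the domain name given in -name):

```
# Fails cleanly if no guest agent is installed/running inside the VM.
virsh qemu-agent-command Headoffice.cbl-ho.local '{"execute":"guest-ping"}'
```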

Please shout if you need further info.

Thanks.

On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86...@yahoo.com>
wrote:

> Shouldn't cause that problem.
>
> You have to find the bug in Bugzilla and report a regression (if it's not
> closed), or open a new one and report the regression.
> As far as I remember, only the dashboard was affected, due to new features
> around VDO disk savings.
>
> About the VM - this should be another issue. What agent are you using in
> the VMs (ovirt or qemu) ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, July 9, 2019, 10:09:05 GMT-4, Neil <
> nwilson...@gmail.com> wrote:
>
>
> Hi Strahil,
>
> Thanks for the quick reply.
> I put the cluster into global maintenance, installed the 4.3 repo, then ran
> "yum update ovirt\*setup\*", "engine-upgrade-check", "engine-setup", and
> finally "yum update". Once that completed, I rebooted the hosted-engine VM
> and took the cluster out of global maintenance.
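>
> In other words, roughly this sequence on the hosted-engine VM (typed from
> memory; the release RPM URL is the usual one and is an assumption here):
>
> ```
> # cluster already in global maintenance at this point
> yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
> yum update ovirt\*setup\*
> engine-upgrade-check
> engine-setup
> yum update
> ```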
>
> Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum
> update" after engine-setup - not sure if that could perhaps be the cause?
>
> Thank you.
> Regards.
> Neil Wilson.
>
> On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov <hunter86...@yahoo.com>
> wrote:
>
> Hi Neil,
>
> for "Could not fetch data needed for VM migrate operation" - there was a
> bug and it was fixed.
> Are you sure you have fully updated ?
> What procedure did you use ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, July 9, 2019, 7:26:21 GMT-4, Neil <nwilson...@gmail.com>
> wrote:
>
>
> Hi guys.
>
> I have two problems since upgrading from 4.2.x to 4.3.4
>
> The first issue is that I can no longer manually migrate VMs between
> hosts. I get an error in the oVirt GUI that says "Could not fetch data
> needed for VM migrate operation", and nothing gets logged in either my
> engine.log or my vdsm.log.
>
> The other issue is that my Dashboard says "Error! Could not fetch
> dashboard data. Please ensure that data warehouse is properly installed
> and configured."
>
> If I look at my ovirt-engine-dwhd.log, I see the following when I try to
> restart the dwh service:
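>
> For reference, the restarts below were done with the service's systemd
> unit (named after the log file, so an assumption on my side):
>
> ```
> systemctl restart ovirt-engine-dwhd    # then tail the log below
> ```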
>
> 2019-07-09 11:48:04|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**********************
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|300000
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.000000
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**********************
> 2019-07-09 11:48:10|ETL Service Stopped
> 2019-07-09 11:49:59|ETL Service Started
> [same configuration dump as above]
> 2019-07-09 11:52:56|ETL Service Stopped
> 2019-07-09 11:52:57|ETL Service Started
> [same configuration dump as above]
> 2019-07-09 12:16:01|ETL Service Stopped
> 2019-07-09 12:16:45|ETL Service Started
> [same configuration dump as above]
>
> I have a hosted engine, two hosts, and FC-based storage.
> The hosts are still running 4.2 because I'm unable to migrate VMs off them.
>
> I have plenty of resources available in terms of CPU and memory on the
> destination host, and my Cluster version is set to 4.2 because my hosts
> are still on 4.2.
>
> I recently upgraded from 4.1 to 4.2 and then upgraded my hosts to 4.2 as
> well, but I can't get them to 4.3 because of the migration issue above.
>
> Below are my installed oVirt packages:
>
> ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch
> ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
> ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
> ovirt-ansible-hosted-engine-setup-1.0.20-1.el7.noarch
> ovirt-ansible-image-template-1.1.11-1.el7.noarch
> ovirt-ansible-infra-1.1.12-1.el7.noarch
> ovirt-ansible-manageiq-1.1.14-1.el7.noarch
> ovirt-ansible-repositories-1.1.5-1.el7.noarch
> ovirt-ansible-roles-1.1.6-1.el7.noarch
> ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
> ovirt-ansible-vm-infra-1.1.18-1.el7.noarch
> ovirt-cockpit-sso-0.1.1-1.el7.noarch
> ovirt-engine-4.3.4.3-1.el7.noarch
> ovirt-engine-api-explorer-0.0.5-1.el7.noarch
> ovirt-engine-backend-4.3.4.3-1.el7.noarch
> ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
> ovirt-engine-dbscripts-4.3.4.3-1.el7.noarch
> ovirt-engine-dwh-4.3.0-1.el7.noarch
> ovirt-engine-dwh-setup-4.3.0-1.el7.noarch
> ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
> ovirt-engine-extensions-api-impl-4.3.4.3-1.el7.noarch
> ovirt-engine-metrics-1.3.3.1-1.el7.noarch
> ovirt-engine-restapi-4.3.4.3-1.el7.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
> ovirt-engine-setup-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-base-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-cinderlib-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch
> ovirt-engine-setup-plugin-websocket-proxy-4.3.4.3-1.el7.noarch
> ovirt-engine-tools-4.3.4.3-1.el7.noarch
> ovirt-engine-tools-backup-4.3.4.3-1.el7.noarch
> ovirt-engine-ui-extensions-1.0.5-1.el7.noarch
> ovirt-engine-vmconsole-proxy-helper-4.3.4.3-1.el7.noarch
> ovirt-engine-webadmin-portal-4.3.4.3-1.el7.noarch
> ovirt-engine-websocket-proxy-4.3.4.3-1.el7.noarch
> ovirt-engine-wildfly-15.0.1-1.el7.x86_64
> ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch
> ovirt-guest-agent-common-1.0.16-1.el7.noarch
> ovirt-guest-tools-iso-4.3-3.el7.noarch
> ovirt-host-deploy-common-1.8.0-1.el7.noarch
> ovirt-host-deploy-java-1.8.0-1.el7.noarch
> ovirt-imageio-common-1.5.1-0.el7.x86_64
> ovirt-imageio-proxy-1.5.1-0.el7.noarch
> ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch
> ovirt-iso-uploader-4.3.1-1.el7.noarch
> ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
> ovirt-provider-ovn-1.2.22-1.el7.noarch
> ovirt-release41-4.1.9-1.el7.centos.noarch
> ovirt-release42-4.2.8-1.el7.noarch
> ovirt-release43-4.3.4-1.el7.noarch
> ovirt-vmconsole-1.0.7-2.el7.noarch
> ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
> ovirt-web-ui-1.5.2-1.el7.noarch
> python2-ovirt-engine-lib-4.3.4.3-1.el7.noarch
> python2-ovirt-host-deploy-1.8.0-1.el7.noarch
> python2-ovirt-setup-lib-1.2.0-1.el7.noarch
> python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64
>
> [root@dell-ovirt ~]# rpm -qa | grep postgre
> rh-postgresql10-postgresql-contrib-10.6-1.el7.x86_64
> rh-postgresql10-postgresql-10.6-1.el7.x86_64
> postgresql-libs-9.2.24-1.el7_5.x86_64
> collectd-postgresql-5.8.1-4.el7.x86_64
> postgresql-server-9.2.24-1.el7_5.x86_64
> rh-postgresql10-postgresql-server-10.6-1.el7.x86_64
> rh-postgresql95-postgresql-9.5.14-1.el7.x86_64
> rh-postgresql95-postgresql-contrib-9.5.14-1.el7.x86_64
> postgresql-jdbc-9.2.1002-6.el7_5.noarch
> rh-postgresql10-runtime-3.1-1.el7.x86_64
> rh-postgresql95-postgresql-libs-9.5.14-1.el7.x86_64
> rh-postgresql10-postgresql-libs-10.6-1.el7.x86_64
> postgresql-9.2.24-1.el7_5.x86_64
> rh-postgresql95-runtime-2.2-2.el7.x86_64
> rh-postgresql95-postgresql-server-9.5.14-1.el7.x86_64
>
> I'm also seeing a strange error on my hosts when I log in:
>
> node status: DEGRADED
>   Please check the status manually using `nodectl check`
>
> [root@host-a ~]# nodectl check
> Status: FAILED
> Bootloader ... OK
>   Layer boot entries ... OK
>   Valid boot entries ... OK
> Mount points ... OK
>   Separate /var ... OK
>   Discard is used ... OK
> Basic storage ... OK
>   Initialized VG ... OK
>   Initialized Thin Pool ... OK
>   Initialized LVs ... OK
> Thin storage ... FAILED - It looks like the LVM layout is not correct. The
> reason could be an incorrect installation.
>   Checking available space in thinpool ... OK
>   Checking thinpool auto-extend ... FAILED - In order to enable thinpool
> auto-extend,activation/thin_pool_autoextend_threshold needs to be set below
> 100 in lvm.conf
> vdsmd ... OK
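>
> If I'm reading that auto-extend FAILED line right, the fix would just be
> lowering that threshold (a sketch, assuming the stock lvm.conf layout on
> the node; I haven't changed anything yet):
>
> ```
> # /etc/lvm/lvm.conf -- any value below 100 enables thinpool auto-extend:
> #   activation {
> #       thin_pool_autoextend_threshold = 80
> #       thin_pool_autoextend_percent = 20
> #   }
> lvmconfig activation/thin_pool_autoextend_threshold   # show current value
> ```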
>
> I'm running CentOS Linux release 7.6.1810 (Core)
>
> These are my package versions on my hosts...
>
> [root@host-a ~]# rpm -qa | grep -i ovirt
> ovirt-release41-4.1.9-1.el7.centos.noarch
> ovirt-hosted-engine-ha-2.2.19-1.el7.noarch
> ovirt-host-deploy-1.7.4-1.el7.noarch
> ovirt-node-ng-nodectl-4.2.0-0.20190121.0.el7.noarch
> ovirt-vmconsole-host-1.0.6-2.el7.noarch
> ovirt-provider-ovn-driver-1.2.18-1.el7.noarch
> ovirt-engine-appliance-4.2-20190121.1.el7.noarch
> ovirt-release42-4.2.8-1.el7.noarch
> ovirt-release43-4.3.4-1.el7.noarch
> python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
> cockpit-ovirt-dashboard-0.11.38-1.el7.noarch
> ovirt-imageio-daemon-1.4.6-1.el7.noarch
> ovirt-host-4.2.3-1.el7.x86_64
> ovirt-setup-lib-1.1.5-1.el7.noarch
> ovirt-node-ng-image-update-4.2.8-1.el7.noarch
> ovirt-imageio-common-1.4.6-1.el7.x86_64
> ovirt-vmconsole-1.0.6-2.el7.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
> ovirt-host-dependencies-4.2.3-1.el7.x86_64
> ovirt-release-host-node-4.2.8-1.el7.noarch
> cockpit-machines-ovirt-193-2.el7.noarch
> ovirt-hosted-engine-setup-2.2.33-1.el7.noarch
>
> [root@host-a ~]# rpm -qa | grep -i vdsm
> vdsm-http-4.20.46-1.el7.noarch
> vdsm-common-4.20.46-1.el7.noarch
> vdsm-network-4.20.46-1.el7.x86_64
> vdsm-jsonrpc-4.20.46-1.el7.noarch
> vdsm-4.20.46-1.el7.x86_64
> vdsm-hook-ethtool-options-4.20.46-1.el7.noarch
> vdsm-hook-vhostmd-4.20.46-1.el7.noarch
> vdsm-python-4.20.46-1.el7.noarch
> vdsm-api-4.20.46-1.el7.noarch
> vdsm-yajsonrpc-4.20.46-1.el7.noarch
> vdsm-hook-fcoe-4.20.46-1.el7.noarch
> vdsm-hook-openstacknet-4.20.46-1.el7.noarch
> vdsm-client-4.20.46-1.el7.noarch
> vdsm-gluster-4.20.46-1.el7.x86_64
> vdsm-hook-vmfex-dev-4.20.46-1.el7.noarch
>
> I am seeing the following warning every minute or so in my vdsm.log:
>
> 2019-07-09 12:50:31,543+0200 WARN  (qgapoller/2)
> [virt.periodic.VmDispatcher] could not run <function <lambda> at
> 0x7f52b01b85f0> on ['9a6561b8-5702-43dc-9e92-1dc5dfed4eef'] (periodic:323)
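>
> That UUID is the same VM as above, so I assume this is the guest-agent
> poller failing to reach an agent; the UUID-to-VM mapping can be checked
> with vdsm-client (installed per the package list above):
>
> ```
> vdsm-client Host getVMList    # lists running VM UUIDs on this host
> ```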
>
> Then also under /var/log/messages:
>
> Jul  9 12:57:48 host-a ovs-vsctl:
> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
> connection failed (No such file or directory)
>
> I'm not using OVN, so I'm guessing this can be ignored.
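>
> If it shouldn't be ignored, I can check whether openvswitch is even
> supposed to be running here, e.g.:
>
> ```
> systemctl status openvswitch     # db.sock error suggests ovsdb-server is down
> ls -l /var/run/openvswitch/db.sock
> ```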
>
> If I search for ERROR or WARN in my logs, nothing relevant is logged.
>
> Any suggestions on where to start looking, please?
>
> Please let me know if you need further info.
>
> Thank you.
>
> Regards.
>
> Neil Wilson
>
>
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AI3HSM3L7WNMT2AFJN6IOZEATH7OCHAI/
>
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LN3QX4AEVWLQYQLYLRJFGVW4EZLIW6VJ/
