[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-08-02 Thread Michal Skrivanek


> On 31 Jul 2019, at 16:16, Neil  wrote:
> 
> Hi Sharon,
> 
> This issue still persists. When I saw that 4.3.5 was released I tried to
> upgrade, but it says there are no packages available; however, I see I have
> 11 updates that are version locked.

you probably upgraded setup files already, but didn’t run engine-setup, did you?
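For reference, a minimal sketch of the usual engine minor-upgrade sequence on the engine host (per the standard oVirt flow; a plain "yum update" keeps excluding the versionlocked packages until engine-setup has run):

    # pull in only the new setup packages first
    yum update "ovirt-*-setup*"
    # engine-setup performs the actual upgrade and refreshes the
    # versionlock entries to the new versions
    engine-setup
    # then update the remaining packages
    yum update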

> Could this possibly explain why updating to 4.3.5, when it was still in
> "pre", didn't resolve the dashboard problem?
> 
> [root@ovirt]# yum update "ovirt-*-setup*"
> Loaded plugins: fastestmirror, versionlock
> Repository centos-sclo-rh-release is listed more than once in the configuration
> Repository ovirt-4.3-epel is listed more than once in the configuration
> Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
> Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
> Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
> Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
> Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
> Repository centos-sclo-rh-release is listed more than once in the configuration
> Repository sac-gluster-ansible is listed more than once in the configuration
> Repository ovirt-4.3 is listed more than once in the configuration
> Loading mirror speeds from cached hostfile
> ovirt-4.3-epel/x86_64/metalink                             |  46 kB  00:00:00
>  * base: mirror.pcsp.co.za 
>  * extras: mirror.pcsp.co.za 
>  * ovirt-4.1: mirror.slu.cz 
>  * ovirt-4.1-epel: ftp.uni-bayreuth.de 
>  * ovirt-4.2: mirror.slu.cz 
>  * ovirt-4.2-epel: ftp.uni-bayreuth.de 
>  * ovirt-4.3-epel: ftp.uni-bayreuth.de 
>  * updates: mirror.bitco.co.za 
> ovirt-4.3-centos-gluster6                                  | 2.9 kB  00:00:00
> ovirt-4.3-centos-opstools                                  | 2.9 kB  00:00:00
> ovirt-4.3-centos-ovirt43                                   | 2.9 kB  00:00:00
> ovirt-4.3-centos-qemu-ev                                   | 2.9 kB  00:00:00
> ovirt-4.3-virtio-win-latest                                | 3.0 kB  00:00:00
> sac-gluster-ansible                                        | 3.3 kB  00:00:00
> Excluding 11 updates due to versionlock (use "yum versionlock status" to show them)
> No packages marked for update
> 
> [root@ovirt yum.repos.d]# yum versionlock status
> Loaded plugins: fastestmirror, versionlock
> Repository centos-sclo-rh-release is listed more than once in the configuration
> Repository ovirt-4.3-epel is listed more than once in the configuration
> Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
> Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
> Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
> Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
> Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
> Repository centos-sclo-rh-release is listed more than once in the configuration
> Repository sac-gluster-ansible is listed more than once in the configuration
> Repository ovirt-4.3 is listed more than once in the configuration
> Loading mirror speeds from cached hostfile
>  * base: mirror.pcsp.co.za 
>  * extras: mirror.pcsp.co.za 
>  * ovirt-4.1: mirror.slu.cz 
>  * ovirt-4.1-epel: ftp.uni-bayreuth.de 
>  * ovirt-4.2: mirror.slu.cz 
>  * ovirt-4.2-epel: ftp.uni-bayreuth.de 
>  * ovirt-4.3-epel: ftp.uni-bayreuth.de 
>  * updates: mirror.bitco.co.za 
> 0:ovirt-engine-webadmin-portal-4.2.8.2-1.el7.*
> 0:ovirt-engine-dwh-4.2.4.3-1.el7.*
> 0:ovirt-engine-tools-backup-4.2.8.2-1.el7.*
> 0:ovirt-engine-restapi-4.2.8.2-1.el7.*
> 0:ovirt-engine-dbscripts-4.2.8.2-1.el7.*
> 0:ovirt-engine-4.2.8.2-1.el7.*
> 0:ovirt-engine-backend-4.2.8.2-1.el7.*
> 0:ovirt-engine-wildfly-14.0.1-3.el7.*
> 0:ovirt-engine-wildfly-overlay-14.0.1-3.el7.*
> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-31 Thread Neil
Hi Sharon,

This issue still persists. When I saw that 4.3.5 was released I tried to
upgrade, but it says there are no packages available; however, I see I have
11 updates that are version locked. Could this possibly explain why updating
to 4.3.5, when it was still in "pre", didn't resolve the dashboard problem?

[root@ovirt]# yum update "ovirt-*-setup*"
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
ovirt-4.3-epel/x86_64/metalink                             |  46 kB  00:00:00
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
ovirt-4.3-centos-gluster6                                  | 2.9 kB  00:00:00
ovirt-4.3-centos-opstools                                  | 2.9 kB  00:00:00
ovirt-4.3-centos-ovirt43                                   | 2.9 kB  00:00:00
ovirt-4.3-centos-qemu-ev                                   | 2.9 kB  00:00:00
ovirt-4.3-virtio-win-latest                                | 3.0 kB  00:00:00
sac-gluster-ansible                                        | 3.3 kB  00:00:00
Excluding 11 updates due to versionlock (use "yum versionlock status" to show them)
No packages marked for update

[root@ovirt yum.repos.d]# yum versionlock status
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
0:ovirt-engine-webadmin-portal-4.2.8.2-1.el7.*
0:ovirt-engine-dwh-4.2.4.3-1.el7.*
0:ovirt-engine-tools-backup-4.2.8.2-1.el7.*
0:ovirt-engine-restapi-4.2.8.2-1.el7.*
0:ovirt-engine-dbscripts-4.2.8.2-1.el7.*
0:ovirt-engine-4.2.8.2-1.el7.*
0:ovirt-engine-backend-4.2.8.2-1.el7.*
0:ovirt-engine-wildfly-14.0.1-3.el7.*
0:ovirt-engine-wildfly-overlay-14.0.1-3.el7.*
0:ovirt-engine-tools-4.2.8.2-1.el7.*
0:ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.*
versionlock status done

Any ideas?

Thank you.
Regards.
Neil Wilson.
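The 4.2-era locks in the "yum versionlock status" output above are exactly what is excluding the 4.3.5 packages. A hedged way to inspect them (file path per the stock yum-plugin-versionlock on EL7; engine-setup normally rewrites this list during an upgrade):

    yum versionlock list
    cat /etc/yum/pluginconf.d/versionlock.list
    # clearing the locks by hand ("yum versionlock clear") and running a
    # plain yum update would upgrade the RPMs but leave the engine database
    # schema behind; engine-setup is still the supported path.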



On Wed, Jul 24, 2019 at 3:46 PM Neil  wrote:

> Hi Sharon,
>
> Thank you for the info and apologies for the very late reply.
>
> I've done the service ovirt-engine-dwhd restart, and unfortunately
> there's no difference, below is the log
>
> 2019-07-24 03:00:00|3lI186|A138nf|XhBMpJ|OVIRT_ENGINE_DWH|DeleteTimeKeepingJob|Default|6|Java Exception|tJDBCInput_10|org.postgresql.util.PSQLException:This connection has been closed.|1
> Exception in component tJDBCInput_10
> org.postgresql.util.PSQLException: This connection has been closed.
> at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:822)
> at org.postgresql.jdbc3.AbstractJdbc3Connection.createStatement(AbstractJdbc3Connection.java:229)
> at org.postgresql.jdbc2.AbstractJdbc2Connection.createStatement(AbstractJdbc2Connection.java:294)
> at ovirt_engine_dwh.deletetimekeepingjob_4_3.DeleteTimeKeepingJob.tJDBCInput_10Process(DeleteTimeKeepingJob.java:1493)
> at
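A few hedged checks for the PSQLException above (service name and log path are the oVirt 4.x defaults; the database name assumes a local DWH database called ovirt_engine_history):

    systemctl status ovirt-engine-dwhd
    tail -n 200 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
    # "This connection has been closed" points at the dwh<->PostgreSQL link,
    # so confirming the database answers is a reasonable next step:
    su - postgres -c "psql -d ovirt_engine_history -c 'select 1'"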

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-16 Thread Sharon Gratch
Hi,

For the dashboard:
If ovirt-engine-dwh is still installed and running after upgrade
(service ovirt-engine-dwhd restart) then can you please re-check the
ovirt-engine-dwh.log file for errors?
@Shirly Radco  anything else to check?

For the Migrate option, please attach again your browser console log
snippet when you have the problem and also a screenshot of the error.

Please also attach the engine log (the warnings you mentioned are not
related to those issues).

Thanks,
Sharon

On Tue, Jul 16, 2019 at 4:14 PM Neil  wrote:

> Hi Sharon,
>
> Thank you for coming back to me.
>
> Unfortunately, I upgraded to 4.3.5 today and both issues still persist.
> I have also tried clearing all data out of my browser and logging back in.
>
> I see a new error in my engine.log, as below; however, I still don't
> see anything logged when I click the migrate button...
>
> 2019-07-16 15:01:19,600+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'balloonEnabled' can not be updated when status is 'Up'
> 2019-07-16 15:01:19,601+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'watchdog' can not be updated when status is 'Up'
> 2019-07-16 15:01:19,602+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'rngDevice' can not be updated when status is 'Up'
> 2019-07-16 15:01:19,602+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'soundDeviceEnabled' can not be updated when status is 'Up'
> 2019-07-16 15:01:19,603+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'consoleEnabled' can not be updated when status is 'Up'
>
> Then in my vdsm.log I'm seeing the following error
>
> 2019-07-16 15:05:59,038+0200 WARN  (qgapoller/3)
> [virt.periodic.VmDispatcher] could not run  at
> 0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde',
> '8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb',
> '8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b',
> '2489c75f-2758-4d82-8338-12f02ff78afa',
> '9a6561b8-5702-43dc-9e92-1dc5dfed4eef',
> '523ad9ee-5738-42f2-9ee1-50727207e93b',
> '84f4685b-39e1-4bc8-b8ab-755a2c325cb0',
> '43c06f86-2e37-410b-84be-47e83052344a',
> '6f44a02c-5de6-4002-992f-2c2c5feb2ee5',
> '19844323-b3cc-441a-8d70-e45326848b10',
> '77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)
>
> 2019-07-16 15:06:09,036+0200 WARN  (qgapoller/2)
> [virt.periodic.VmDispatcher] could not run  at
> 0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde',
> '8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb',
> '8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b',
> '2489c75f-2758-4d82-8338-12f02ff78afa',
> '9a6561b8-5702-43dc-9e92-1dc5dfed4eef',
> '523ad9ee-5738-42f2-9ee1-50727207e93b',
> '84f4685b-39e1-4bc8-b8ab-755a2c325cb0',
> '43c06f86-2e37-410b-84be-47e83052344a',
> '6f44a02c-5de6-4002-992f-2c2c5feb2ee5',
> '19844323-b3cc-441a-8d70-e45326848b10',
> '77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)
>
> I'm not sure if this is related to either of the above issues, but
> I can attach the full log if needed.
>
> Please shout if there is anything else you think I can try doing.
>
> Thank you.
>
> Regards.
>
> Neil Wilson
>
>
>
>
> On Mon, Jul 15, 2019 at 11:29 AM Sharon Gratch  wrote:
>
>> Hi Neil,
>>
>> Regarding issue 1 (Dashboard):
>> I recommend upgrading to the latest oVirt version, 4.3.5, for this fix as
>> well as other enhancements and bug fixes.
>> For oVirt 4.3.5 installation / upgrade instructions:
>> http://www.ovirt.org/release/4.3.5/
>>
>> Regarding issue 2 (Manual Migrate dialog):
>> If it still reproduces after upgrading, then please try clearing your
>> browser cache before opening the admin portal. It might help.
>>
>> Regards,
>> Sharon
>>
>> On Thu, Jul 11, 2019 at 1:24 PM Neil  wrote:
>>
>>>
>>> Hi Sharon,
>>>
>>> Thanks for the assistance.
>>> On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch 
>>> wrote:
>>>
 Hi,

 Regarding issue 1 (Dashboard):
 Did you upgrade the engine to 4.3.5? There was a bug fixed in version
 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may
 be the same issue.

>>>
>>>
>>> No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is
>>> there another repo available?
>>>
>>> Regarding issue 2 (Manual Migrate dialog):
 Can you please attach your browser console log and engine.log snippet
 when you have the problem?
 If you could take from the console log the actual REST API response,
 that would be great.
 The request will be something like
 /api/hosts?migration_target_of=...

>>>
>>> Please see the attached text log for the browser console; I don't see any
>>> REST API being logged, just a stack trace error.
>>> The engine.log literally doesn't 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-16 Thread Neil
Hi Sharon,

Thank you for coming back to me.

Unfortunately, I upgraded to 4.3.5 today and both issues still persist. I
have also tried clearing all data out of my browser and logging back in.

I see a new error in my engine.log, as below; however, I still don't see
anything logged when I click the migrate button...

2019-07-16 15:01:19,600+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'balloonEnabled' can not be updated when status is 'Up'
2019-07-16 15:01:19,601+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'watchdog' can not be updated when status is 'Up'
2019-07-16 15:01:19,602+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'rngDevice' can not be updated when status is 'Up'
2019-07-16 15:01:19,602+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'soundDeviceEnabled' can not be updated when status is 'Up'
2019-07-16 15:01:19,603+02 WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15) [685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'consoleEnabled' can not be updated when status is 'Up'

Then in my vdsm.log I'm seeing the following error

2019-07-16 15:05:59,038+0200 WARN  (qgapoller/3)
[virt.periodic.VmDispatcher] could not run  at
0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde',
'8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb',
'8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b',
'2489c75f-2758-4d82-8338-12f02ff78afa',
'9a6561b8-5702-43dc-9e92-1dc5dfed4eef',
'523ad9ee-5738-42f2-9ee1-50727207e93b',
'84f4685b-39e1-4bc8-b8ab-755a2c325cb0',
'43c06f86-2e37-410b-84be-47e83052344a',
'6f44a02c-5de6-4002-992f-2c2c5feb2ee5',
'19844323-b3cc-441a-8d70-e45326848b10',
'77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)

2019-07-16 15:06:09,036+0200 WARN  (qgapoller/2)
[virt.periodic.VmDispatcher] could not run  at
0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde',
'8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb',
'8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b',
'2489c75f-2758-4d82-8338-12f02ff78afa',
'9a6561b8-5702-43dc-9e92-1dc5dfed4eef',
'523ad9ee-5738-42f2-9ee1-50727207e93b',
'84f4685b-39e1-4bc8-b8ab-755a2c325cb0',
'43c06f86-2e37-410b-84be-47e83052344a',
'6f44a02c-5de6-4002-992f-2c2c5feb2ee5',
'19844323-b3cc-441a-8d70-e45326848b10',
'77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)
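The qgapoller warnings above list the VM ids whose qemu-guest-agent channel did not respond. A hedged check, from the host and from inside one affected guest (the VM name is an example taken from this thread):

    # on the host: read-only virsh works under vdsm; the guest agent channel
    # shows up as org.qemu.guest_agent.0 in the domain XML
    virsh -r dumpxml Headoffice.cbl-ho.local | grep -A2 guest_agent
    # inside an affected Linux guest:
    systemctl status qemu-guest-agent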

I'm not sure if this is related to either of the above issues, but I can
attach the full log if needed.

Please shout if there is anything else you think I can try doing.

Thank you.

Regards.

Neil Wilson




On Mon, Jul 15, 2019 at 11:29 AM Sharon Gratch  wrote:

> Hi Neil,
>
> Regarding issue 1 (Dashboard):
> I recommend upgrading to the latest oVirt version, 4.3.5, for this fix as
> well as other enhancements and bug fixes.
> For oVirt 4.3.5 installation / upgrade instructions:
> http://www.ovirt.org/release/4.3.5/
>
> Regarding issue 2 (Manual Migrate dialog):
> If it still reproduces after upgrading, then please try clearing your
> browser cache before opening the admin portal. It might help.
>
> Regards,
> Sharon
>
> On Thu, Jul 11, 2019 at 1:24 PM Neil  wrote:
>
>>
>> Hi Sharon,
>>
>> Thanks for the assistance.
>> On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch 
>> wrote:
>>
>>> Hi,
>>>
>>> Regarding issue 1 (Dashboard):
>>> Did you upgrade the engine to 4.3.5? There was a bug fixed in version
>>> 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may
>>> be the same issue.
>>>
>>
>>
>> No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is
>> there another repo available?
>>
>> Regarding issue 2 (Manual Migrate dialog):
>>> Can you please attach your browser console log and engine.log snippet
>>> when you have the problem?
>>> If you could take from the console log the actual REST API response,
>>> that would be great.
>>> The request will be something like
>>> /api/hosts?migration_target_of=...
>>>
>>
>> Please see the attached text log for the browser console; I don't see any
>> REST API being logged, just a stack trace error.
>> The engine.log literally doesn't get updated when I click the Migrate
>> button so there isn't anything to share unfortunately.
>>
>> Please shout if you need further info.
>>
>> Thank you!
>>
>>
>>
>>
>>>
>>>
>>> On Thu, Jul 11, 2019 at 10:04 AM Neil  wrote:
>>>
 Hi everyone,
 Just an update.

 I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to
 4.3 and I'm still faced with the same problems.

 1.) My Dashboard says the following "Error! Could not fetch dashboard
 data. Please ensure that data warehouse is properly installed and
 configured."

 2.) When I click the Migrate button I get the error "Could not fetch
 data needed for VM migrate operation"

 Upgrading my 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-15 Thread Sharon Gratch
Hi Neil,

Regarding issue 1 (Dashboard):
I recommend upgrading to the latest oVirt version, 4.3.5, for this fix as
well as other enhancements and bug fixes.
For oVirt 4.3.5 installation / upgrade instructions:
http://www.ovirt.org/release/4.3.5/

Regarding issue 2 (Manual Migrate dialog):
If it still reproduces after upgrading, then please try clearing your
browser cache before opening the admin portal. It might help.

Regards,
Sharon
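A quick, hedged way to see whether the engine itself was actually upgraded (as opposed to just the setup packages), which is what the versionlock output elsewhere in this thread revealed:

    rpm -q ovirt-engine ovirt-engine-setup
    # ovirt-engine still at 4.2.x while ovirt-engine-setup reports 4.3.x
    # means engine-setup has not been run since the setup packages updated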

On Thu, Jul 11, 2019 at 1:24 PM Neil  wrote:

>
> Hi Sharon,
>
> Thanks for the assistance.
> On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch  wrote:
>
>> Hi,
>>
>> Regarding issue 1 (Dashboard):
>> Did you upgrade the engine to 4.3.5? There was a bug fixed in version
>> 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may
>> be the same issue.
>>
>
>
> No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is
> there another repo available?
>
> Regarding issue 2 (Manual Migrate dialog):
>> Can you please attach your browser console log and engine.log snippet
>> when you have the problem?
>> If you could take from the console log the actual REST API response,
>> that would be great.
>> The request will be something like
>> /api/hosts?migration_target_of=...
>>
>
> Please see the attached text log for the browser console; I don't see any
> REST API being logged, just a stack trace error.
> The engine.log literally doesn't get updated when I click the Migrate
> button so there isn't anything to share unfortunately.
>
> Please shout if you need further info.
>
> Thank you!
>
>
>
>
>>
>>
>> On Thu, Jul 11, 2019 at 10:04 AM Neil  wrote:
>>
>>> Hi everyone,
>>> Just an update.
>>>
>>> I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to
>>> 4.3 and I'm still faced with the same problems.
>>>
>>> 1.) My Dashboard says the following "Error! Could not fetch dashboard
>>> data. Please ensure that data warehouse is properly installed and
>>> configured."
>>>
>>> 2.) When I click the Migrate button I get the error "Could not fetch
>>> data needed for VM migrate operation"
>>>
>>> Upgrading my hosts resolved the "node status: DEGRADED" issue so at
>>> least it's one issue down.
>>>
>>> I've done an engine-upgrade-check and a yum update on all my hosts and
>>> engine and there are no further updates or patches waiting.
>>> Nothing is logged in my engine.log when I click the Migrate button
>>> either.
>>>
>>> Any ideas what to do or try for  1 and 2 above?
>>>
>>> Thank you.
>>>
>>> Regards.
>>>
>>> Neil Wilson.
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:
>>>


 On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
 michal.skriva...@redhat.com> wrote:

>
>
> On 11 Jul 2019, at 06:34, Alex K  wrote:
>
>
>
> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> On 9 Jul 2019, at 17:16, Strahil  wrote:
>>
>> I'm not sure, but I always thought that you need  an agent for live
>> migrations.
>>
>>
>> You don’t. For snapshots, and other less important stuff like
>> reporting IPs you do. In 4.3 you should be fine with qemu-ga only
>>
> I've seen live migration issues resolved by installing newer versions
> of ovirt ga.
>
>
> Hm, it shouldn’t make any difference whatsoever. Do you have any
> concrete data? That would help.
>
 That was some time ago, when running 4.1. No data unfortunately. I also did
 not expect ovirt ga to affect migration, but experience showed me that it
 did. The only observation is that it affected only Windows VMs; Linux VMs
 never had an issue, regardless of ovirt ga.

> You can always try installing either qemu-guest-agent  or
>> ovirt-guest-agent and check if live  migration between hosts is possible.
>>
>> Have you set the new cluster/dc version ?
>>
>> Best Regards
>> Strahil Nikolov
>> On Jul 9, 2019 17:42, Neil  wrote:
>>
>> I remember seeing the bug earlier, but because it was closed I thought
>> it was unrelated; this appears to be it:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>>
>> Perhaps I'm not understanding your question about the VM guest agent,
>> but I don't have any guest agent currently installed on the VM, not sure 
>> if
>> the output of my qemu-kvm process maybe answers this question?
>>
>> /usr/libexec/qemu-kvm -name
>> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>> -m 8192 -realtime mlock=off -smp 
>> 8,maxcpus=64,sockets=16,cores=4,threads=1
>> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Neil
Hi Sharon,

Thanks for the assistance.
On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch  wrote:

> Hi,
>
> Regarding issue 1 (Dashboard):
> Did you upgrade the engine to 4.3.5? There was a bug fixed in version
> 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be
> the same issue.
>


No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is
there another repo available?

Regarding issue 2 (Manual Migrate dialog):
> Can you please attach your browser console log and engine.log snippet when
> you have the problem?
> If you could take from the console log the actual REST API response, that
> would be great.
> The request will be something like
> /api/hosts?migration_target_of=...
>

Please see the attached text log for the browser console; I don't see any
REST API being logged, just a stack trace error.
The engine.log literally doesn't get updated when I click the Migrate
button so there isn't anything to share unfortunately.

Please shout if you need further info.

Thank you!




>
>
> On Thu, Jul 11, 2019 at 10:04 AM Neil  wrote:
>
>> Hi everyone,
>> Just an update.
>>
>> I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to
>> 4.3 and I'm still faced with the same problems.
>>
>> 1.) My Dashboard says the following "Error! Could not fetch dashboard
>> data. Please ensure that data warehouse is properly installed and
>> configured."
>>
>> 2.) When I click the Migrate button I get the error "Could not fetch
>> data needed for VM migrate operation"
>>
>> Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
>> it's one issue down.
>>
>> I've done an engine-upgrade-check and a yum update on all my hosts and
>> engine and there are no further updates or patches waiting.
>> Nothing is logged in my engine.log when I click the Migrate button either.
>>
>> Any ideas what to do or try for  1 and 2 above?
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>>
>>
>> On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:
>>
>>>
>>>
>>> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
>>> michal.skriva...@redhat.com> wrote:
>>>


 On 11 Jul 2019, at 06:34, Alex K  wrote:



 On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <
 michal.skriva...@redhat.com> wrote:

>
>
> On 9 Jul 2019, at 17:16, Strahil  wrote:
>
> I'm not sure, but I always thought that you need  an agent for live
> migrations.
>
>
> You don’t. For snapshots, and other less important stuff like
> reporting IPs you do. In 4.3 you should be fine with qemu-ga only
>
 I've seen live migration issues resolved by installing newer versions
 of ovirt ga.


 Hm, it shouldn’t make any difference whatsoever. Do you have any
 concrete data? That would help.

>>> That was some time ago, when running 4.1. No data unfortunately. I also
>>> did not expect ovirt ga to affect migration, but experience showed me that
>>> it did. The only observation is that it affected only Windows VMs; Linux
>>> VMs never had an issue, regardless of ovirt ga.
>>>
 You can always try installing either qemu-guest-agent  or
> ovirt-guest-agent and check if live  migration between hosts is possible.
>
> Have you set the new cluster/dc version ?
>
> Best Regards
> Strahil Nikolov
> On Jul 9, 2019 17:42, Neil  wrote:
>
> I remember seeing the bug earlier, but because it was closed I thought
> it was unrelated; this appears to be it:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>
> Perhaps I'm not understanding your question about the VM guest agent,
> but I don't have any guest agent currently installed on the VM, not sure 
> if
> the output of my qemu-kvm process maybe answers this question?
>
> /usr/libexec/qemu-kvm -name
> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>
>
 It’s 7.3, likely oVirt 4.1. Please upgrade...

 C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc
> base=2019-07-09T10:26:53,driftfix=slew -global
> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Sharon Gratch
Hi,

Regarding issue 1 (Dashboard):
Did you upgrade the engine to 4.3.5? There was a bug fixed in version
4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be
the same issue.

Regarding issue 2 (Manual Migrate dialog):
Can you please attach your browser console log and engine.log snippet when
you have the problem?
If you could take from the console log the actual REST API response, that
would be great.
The request will be something like
/api/hosts?migration_target_of=...

Thanks,
Sharon
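The same query can be issued outside the browser to capture the raw response; a hedged sketch (engine URL, credentials, and VM id are placeholders):

    curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
      'https://engine.example.com/ovirt-engine/api/hosts?migration_target_of=VM_ID'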



On Thu, Jul 11, 2019 at 10:04 AM Neil  wrote:

> Hi everyone,
> Just an update.
>
> I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to
> 4.3 and I'm still faced with the same problems.
>
> 1.) My Dashboard says the following "Error! Could not fetch dashboard
> data. Please ensure that data warehouse is properly installed and
> configured."
>
> 2.) When I click the Migrate button I get the error "Could not fetch data
> needed for VM migrate operation"
>
> Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
> it's one issue down.
>
> I've done an engine-upgrade-check and a yum update on all my hosts and
> engine and there are no further updates or patches waiting.
> Nothing is logged in my engine.log when I click the Migrate button either.
>
> Any ideas what to do or try for  1 and 2 above?
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
>
>
>
>
> On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:
>
>>
>>
>> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>>
>>>
>>>
>>> On 11 Jul 2019, at 06:34, Alex K  wrote:
>>>
>>>
>>>
>>> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
>>> wrote:
>>>


 On 9 Jul 2019, at 17:16, Strahil  wrote:

 I'm not sure, but I always thought that you need  an agent for live
 migrations.


 You don’t. For snapshots, and other less important stuff like reporting
 IPs you do. In 4.3 you should be fine with qemu-ga only

>>> I've seen live migration issues resolved by installing newer versions
>>> of ovirt ga.
>>>
>>>
>>> Hm, it shouldn’t make any difference whatsoever. Do you have any
>>> concrete data? That would help.
>>>
>> That was some time ago, when running 4.1. No data unfortunately. I also did
>> not expect ovirt ga to affect migration, but experience showed me that it
>> did. The only observation is that it affected only Windows VMs; Linux VMs
>> never had an issue, regardless of ovirt ga.
>>
>>> You can always try installing either qemu-guest-agent  or
 ovirt-guest-agent and check if live  migration between hosts is possible.

 Have you set the new cluster/dc version ?

 Best Regards
 Strahil Nikolov
 On Jul 9, 2019 17:42, Neil  wrote:

 I remember seeing the bug earlier, but because it was closed I thought it
 was unrelated; this appears to be it:

 https://bugzilla.redhat.com/show_bug.cgi?id=1670701

 Perhaps I'm not understanding your question about the VM guest agent,
 but I don't have any guest agent currently installed on the VM, not sure if
 the output of my qemu-kvm process maybe answers this question?

 /usr/libexec/qemu-kvm -name
 guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
 secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
 -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
 Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
 -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
 -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
 type=1,manufacturer=oVirt,product=oVirt
 Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-


>>> It’s 7.3, likely oVirt 4.1. Please upgrade...
>>>
>>> C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
 -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
 chardev=charmonitor,id=monitor,mode=control -rtc
 base=2019-07-09T10:26:53,driftfix=slew -global
 kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
 virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
 virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
 if=none,id=drive-ide0-1-0,readonly=on -device
 ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
 file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
 -device
 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Neil
Hi everyone,
Just an update.

I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to 4.3
and I'm still faced with the same problems.

1.) My Dashboard says the following "Error! Could not fetch dashboard data.
Please ensure that data warehouse is properly installed and configured."

2.) When I click the Migrate button I get the error "Could not fetch data
needed for VM migrate operation"

Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
it's one issue down.

I've done an engine-upgrade-check and a yum update on all my hosts and
engine and there are no further updates or patches waiting.
Nothing is logged in my engine.log when I click the Migrate button either.

Any ideas what to do or try for  1 and 2 above?

Thank you.

Regards.

Neil Wilson.
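For the dashboard error, the engine keeps a DWH heartbeat row that is worth checking; a hedged sketch (table and variable names per oVirt's DWH integration, local engine database assumed):

    su - postgres -c "psql -d engine -c 'select var_name, var_value from dwh_history_timekeeping;'"
    # DwhCurrentlyRunning should be 1 while ovirt-engine-dwhd is up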





On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:

>
>
> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> On 11 Jul 2019, at 06:34, Alex K  wrote:
>>
>>
>>
>> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
>> wrote:
>>
>>>
>>>
>>> On 9 Jul 2019, at 17:16, Strahil  wrote:
>>>
>>> I'm not sure, but I always thought that you need  an agent for live
>>> migrations.
>>>
>>>
>>> You don’t. For snapshots, and other less important stuff like reporting
>>> IPs you do. In 4.3 you should be fine with qemu-ga only
>>>
>> I've seen live migration issues resolved by installing newer versions of
>> ovirt ga.
>>
>>
>> Hm, it shouldn’t make any difference whatsoever. Do you have any concrete
>> data? That would help.
>>
> That was some time ago, when running 4.1. No data unfortunately. I also did
> not expect ovirt ga to affect migration, but experience showed me that it
> did. The only observation is that it affected only Windows VMs; Linux VMs
> never had an issue, regardless of ovirt ga.
>
>> You can always try installing either qemu-guest-agent  or
>>> ovirt-guest-agent and check if live  migration between hosts is possible.
>>>
>>> Have you set the new cluster/dc version ?
>>>
>>> Best Regards
>>> Strahil Nikolov
>>> On Jul 9, 2019 17:42, Neil  wrote:
>>>
>>> I remember seeing the bug earlier, but because it was closed I thought it
>>> was unrelated; this appears to be it:
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>>>
>>> Perhaps I'm not understanding your question about the VM guest agent,
>>> but I don't have any guest agent currently installed on the VM, not sure if
>>> the output of my qemu-kvm process maybe answers this question?
>>>
>>> /usr/libexec/qemu-kvm -name
>>> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>>> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
>>> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
>>> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
>>> type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>>>
>>>
>> It’s 7.3, likely oVirt 4.1. Please upgrade...
>>
>> C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
>>> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
>>> chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2019-07-09T10:26:53,driftfix=slew -global
>>> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
>>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
>>> if=none,id=drive-ide0-1-0,readonly=on -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>>> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
>>> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
>>> -chardev socket,id=charchannel0,fd=35,server,nowait -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>> -chardev socket,id=charchannel1,fd=36,server,nowait -device
>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>> -spice 
>>> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Alex K
On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 11 Jul 2019, at 06:34, Alex K  wrote:
>
>
>
> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
> wrote:
>
>>
>>
>> On 9 Jul 2019, at 17:16, Strahil  wrote:
>>
>> I'm not sure, but I always thought that you need  an agent for live
>> migrations.
>>
>>
>> You don’t. For snapshots, and other less important stuff like reporting
>> IPs you do. In 4.3 you should be fine with qemu-ga only
>>
> I've seen live migration issues resolved by installing newer versions of
> ovirt ga.
>
>
> Hm, it shouldn’t make any difference whatsoever. Do you have any concrete
> data? That would help.
>
That was some time ago, when running 4.1. No data unfortunately. I also did
not expect ovirt ga to affect migration, but experience showed me that it did.
The only observation is that it affected only Windows VMs; Linux VMs never
had an issue, regardless of ovirt ga.

> You can always try installing either qemu-guest-agent  or
>> ovirt-guest-agent and check if live  migration between hosts is possible.
>>
>> Have you set the new cluster/dc version ?
>>
>> Best Regards
>> Strahil Nikolov
>> On Jul 9, 2019 17:42, Neil  wrote:
>>
>> I remember seeing the bug earlier, but because it was closed I thought it
>> was unrelated; this appears to be it:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>>
>> Perhaps I'm not understanding your question about the VM guest agent, but
>> I don't have any guest agent currently installed on the VM, not sure if the
>> output of my qemu-kvm process maybe answers this question?
>>
>> /usr/libexec/qemu-kvm -name
>> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
>> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
>> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
>> type=1,manufacturer=oVirt,product=oVirt
>> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>>
>>
> It’s 7.3, likely oVirt 4.1. Please upgrade...
>
> C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
>> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
>> chardev=charmonitor,id=monitor,mode=control -rtc
>> base=2019-07-09T10:26:53,driftfix=slew -global
>> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
>> if=none,id=drive-ide0-1-0,readonly=on -device
>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
>> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
>> -chardev socket,id=charchannel0,fd=35,server,nowait -device
>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>> -chardev socket,id=charchannel1,fd=36,server,nowait -device
>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>> -spice 
>> tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>> -device
>> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
>> -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>> -object rng-random,id=objrng0,filename=/dev/urandom -device
>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
>> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
>> -msg timestamp=on
>>
>> Please shout if you need further info.
>>
>> Thanks.
>>
>>
>>
>>
>>
>>
>> On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov 
>> wrote:
>>
>> Shouldn't cause that problem.
>>
>> You have to find the bug in bugzilla and report a regression (if it's not
>> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-10 Thread Michal Skrivanek
On 11 Jul 2019, at 06:34, Alex K  wrote:



On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
wrote:

>
>
> On 9 Jul 2019, at 17:16, Strahil  wrote:
>
> I'm not sure, but I always thought that you need  an agent for live
> migrations.
>
>
> You don’t. For snapshots, and other less important stuff like reporting
> IPs you do. In 4.3 you should be fine with qemu-ga only
>
I've seen live migration issues resolved by installing newer versions of
ovirt ga.


Hm, it shouldn’t make any difference whatsoever. Do you have any concrete
data? That would help.

You can always try installing either qemu-guest-agent  or ovirt-guest-agent
> and check if live  migration between hosts is possible.
>
> Have you set the new cluster/dc version ?
>
> Best Regards
> Strahil Nikolov
> On Jul 9, 2019 17:42, Neil  wrote:
>
> I remember seeing the bug earlier, but because it was closed I thought it
> was unrelated; this appears to be it:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>
> Perhaps I'm not understanding your question about the VM guest agent, but
> I don't have any guest agent currently installed on the VM, not sure if the
> output of my qemu-kvm process maybe answers this question?
>
> /usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
> -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>
>
It’s 7.3, likely oVirt 4.1. Please upgrade...
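A hedged way to confirm this from a host, using the VM named in the command line above (read-only virsh works under vdsm): the emulated machine type follows the cluster compatibility level, so it only moves forward after the cluster is upgraded and the VM is restarted.

    virsh -r dumpxml Headoffice.cbl-ho.local | grep machine=
    # expect pc-i440fx-rhel7.3.0 here until the cluster level is raised
    # and the VM is powered off and started again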

C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc
> base=2019-07-09T10:26:53,driftfix=slew -global
> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
> if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,fd=35,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,fd=36,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice 
> tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
> -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> -object rng-random,id=objrng0,filename=/dev/urandom -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
> -msg timestamp=on
>
> Please shout if you need further info.
>
> Thanks.
>
>
>
>
>
>
> On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov 
> wrote:
>
> Shouldn't cause that problem.
>
> You have to find the bug in bugzilla and report a regression (if it's not
> closed), or open a new one and report the regression.
> As far as I remember, only the dashboard was affected due to new features
> about vdo disk savings.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-10 Thread Alex K
On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
wrote:

>
>
> On 9 Jul 2019, at 17:16, Strahil  wrote:
>
> I'm not sure, but I always thought that you need  an agent for live
> migrations.
>
>
> You don’t. For snapshots, and other less important stuff like reporting
> IPs you do. In 4.3 you should be fine with qemu-ga only
>
I've seen live migration issues resolved by installing newer versions of
ovirt ga.

> You can always try installing either qemu-guest-agent  or
> ovirt-guest-agent and check if live  migration between hosts is possible.
>
> Have you set the new cluster/dc version ?
>
> Best Regards
> Strahil Nikolov
> On Jul 9, 2019 17:42, Neil  wrote:
>
> I remember seeing the bug earlier, but because it was closed I thought it
> was unrelated; this appears to be it:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>
> Perhaps I'm not understanding your question about the VM guest agent, but
> I don't have any guest agent currently installed on the VM, not sure if the
> output of my qemu-kvm process maybe answers this question?
>
> /usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
> -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=31,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc
> base=2019-07-09T10:26:53,driftfix=slew -global
> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
> if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,fd=35,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,fd=36,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice 
> tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
> -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> -object rng-random,id=objrng0,filename=/dev/urandom -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
> -msg timestamp=on
>
> Please shout if you need further info.
>
> Thanks.
>
>
>
>
>
>
> On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov 
> wrote:
>
> Shouldn't cause that problem.
>
> You have to find the bug in bugzilla and report a regression (if it's not
> closed), or open a new one and report the regression.
> As far as I remember, only the dashboard was affected due to new features
> about vdo disk savings.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQCHU3VAIQQCG7NSBYK5UMZYFRTJ7B2E/
>
> ___
> Users mailing list -- users@ovirt.org
> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-10 Thread Alex K
On Wed, Jul 10, 2019, 14:57 Neil  wrote:

> To provide a slight update on this.
>
> I put one of my hosts into maintenance and it then migrated the two VM's
> off of it; I then upgraded the host to 4.3.
>
> I have 12 VM's running on the remaining host; if I put it into maintenance,
> will it try to migrate all 12 VM's at once or will it stagger them until
> they are all migrated?
>
If you have a good migration network (at least 10Gbps) then it should be
fine. You could also just manually migrate one by one.
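vdsm also caps concurrent outgoing live migrations per host, so a maintenance drain is staggered rather than fired all at once. A hedged way to check the effective cap on a 4.3 host (option and file paths assumed from vdsm's config module on EL7):

    grep -r max_outgoing_migrations /etc/vdsm/vdsm.conf \
        /usr/lib/python2.7/site-packages/vdsm/config.py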

>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
>
>
>
>
>
> On Wed, Jul 10, 2019 at 9:44 AM Neil  wrote:
>
>> Hi Michal,
>>
>> Thanks for assisting.
>>
>> I've just done as requested; however, nothing is logged in the engine.log
>> at the time I click Migrate. Below is the log; I hit the Migrate button
>> about 4 times between 09:35 and 09:36 and nothing was logged about this...
>>
>> 2019-07-10 09:35:57,967+02 INFO
>>  [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-14) []
>> User trouble@internal successfully logged in with scopes:
>> ovirt-app-admin ovirt-app-api ovirt-app-portal
>> ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
>> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search
>> ovirt-ext=token-info:validate ovirt-ext=token:password-access
>> 2019-07-10 09:35:58,012+02 INFO
>>  [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-14)
>> [2997034] Running command: CreateUserSessionCommand internal: false.
>> 2019-07-10 09:35:58,021+02 INFO
>>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-14) [2997034] EVENT_ID: USER_VDC_LOGIN(30), User
>> trouble@internal-authz connecting from '160.128.20.85' using session
>> 'bv55G0wZznETUiQwjgjfUNje7wOsG4UDCuFunSslVeAFQkhdY2zzTY7du36ynTF5nW5U7JiPyr7gl9QDHfWuig=='
>> logged in.
>> 2019-07-10 09:36:58,304+02 INFO
>>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
>> 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
>> 2019-07-10 09:36:58,305+02 INFO
>>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
>> 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0
>> tasks in queue.
>> 2019-07-10 09:36:58,305+02 INFO
>>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
>> 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for
>> tasks.
>> 2019-07-10 09:36:58,305+02 INFO
>>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
>> 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for
>> tasks.
>> 2019-07-10 09:36:58,305+02 INFO
>>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
>> 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads waiting for
>> tasks.
>>
>> The same is observed in the vdsm.log too; below is the log during the
>> attempted migration
>>
>> 2019-07-10 09:39:57,034+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> RPC call Host.getStats succeeded in 0.01 seconds (__init__:573)
>> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [api.host] START
>> getStats() from=:::10.0.1.1,57934 (api:46)
>> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] START
>> repoStats(domains=()) from=:::10.0.1.1,57934,
>> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:46)
>> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH
>> repoStats return={u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0,
>> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000194846',
>> 'lastCheck': '2.4', 'valid': True},
>> u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0, 'actual': True,
>> 'version': 0, 'acquired': True, 'delay': '0.000277154', 'lastCheck': '6.0',
>> 'valid': True}, u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0,
>> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000421988',
>> 'lastCheck': '2.4', 'valid': True},
>> u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0, 'actual': True,
>> 'version': 4, 'acquired': True, 'delay': '0.000228443', 'lastCheck': '2.4',
>> 'valid': True}} from=:::10.0.1.1,57934,
>> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:52)
>> 2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] START
>> multipath_health() from=:::10.0.1.1,57934,
>> task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:46)
>> 2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH
>> multipath_health return={} from=:::10.0.1.1,57934,
>> task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:52)
>> 2019-07-10 09:39:58,002+0200 INFO  (jsonrpc/2) [api.host] FINISH 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-10 Thread Neil
To provide a slight update on this.

I put one of my hosts into maintenance and it then migrated the two VM's
off of it; I then upgraded the host to 4.3.

I have 12 VM's running on the remaining host; if I put it into maintenance,
will it try to migrate all 12 VM's at once, or will it stagger them until they
are all migrated?

Thank you.

Regards.

Neil Wilson.






On Wed, Jul 10, 2019 at 9:44 AM Neil  wrote:

> Hi Michal,
>
> Thanks for assisting.
>
> I've just done as requested; however, nothing is logged in the engine.log at
> the time I click Migrate. Below is the log; I hit the Migrate button
> about 4 times between 09:35 and 09:36 and nothing was logged about this...
>
> 2019-07-10 09:35:57,967+02 INFO
>  [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-14) []
> User trouble@internal successfully logged in with scopes: ovirt-app-admin
> ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~
> ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search
> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
> ovirt-ext=token:password-access
> 2019-07-10 09:35:58,012+02 INFO
>  [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-14)
> [2997034] Running command: CreateUserSessionCommand internal: false.
> 2019-07-10 09:35:58,021+02 INFO
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-14) [2997034] EVENT_ID: USER_VDC_LOGIN(30), User
> trouble@internal-authz connecting from '160.128.20.85' using session
> 'bv55G0wZznETUiQwjgjfUNje7wOsG4UDCuFunSslVeAFQkhdY2zzTY7du36ynTF5nW5U7JiPyr7gl9QDHfWuig=='
> logged in.
> 2019-07-10 09:36:58,304+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0
> tasks in queue.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for
> tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for
> tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads waiting for
> tasks.
>
> The same is observed in the vdsm.log too; below is the log during the
> attempted migration
>
> 2019-07-10 09:39:57,034+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call Host.getStats succeeded in 0.01 seconds (__init__:573)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [api.host] START getStats()
> from=:::10.0.1.1,57934 (api:46)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] START
> repoStats(domains=()) from=:::10.0.1.1,57934,
> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:46)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats
> return={u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual':
> True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck':
> '2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0,
> 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154',
> 'lastCheck': '6.0', 'valid': True},
> u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True,
> 'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4',
> 'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443',
> 'lastCheck': '2.4', 'valid': True}} from=:::10.0.1.1,57934,
> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:52)
> 2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] START
> multipath_health() from=:::10.0.1.1,57934,
> task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:46)
> 2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH
> multipath_health return={} from=:::10.0.1.1,57934,
> task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:52)
> 2019-07-10 09:39:58,002+0200 INFO  (jsonrpc/2) [api.host] FINISH getStats
> return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics':
> {'42': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle':
> '99.87'}, '43': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '24': {'cpuUser': '0.73', 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-10 Thread Neil
Hi Michal,

Thanks for assisting.

I've just done as requested; however, nothing is logged in the engine.log at
the time I click Migrate. Below is the log; I hit the Migrate button
about 4 times between 09:35 and 09:36 and nothing was logged about this...

2019-07-10 09:35:57,967+02 INFO
 [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-14) []
User trouble@internal successfully logged in with scopes: ovirt-app-admin
ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~
ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search
ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
ovirt-ext=token:password-access
2019-07-10 09:35:58,012+02 INFO
 [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-14)
[2997034] Running command: CreateUserSessionCommand internal: false.
2019-07-10 09:35:58,021+02 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-14) [2997034] EVENT_ID: USER_VDC_LOGIN(30), User
trouble@internal-authz connecting from '160.128.20.85' using session
'bv55G0wZznETUiQwjgjfUNje7wOsG4UDCuFunSslVeAFQkhdY2zzTY7du36ynTF5nW5U7JiPyr7gl9QDHfWuig=='
logged in.
2019-07-10 09:36:58,304+02 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2019-07-10 09:36:58,305+02 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0
tasks in queue.
2019-07-10 09:36:58,305+02 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
'engineScheduled' is using 0 threads out of 100, 100 threads waiting for
tasks.
2019-07-10 09:36:58,305+02 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for
tasks.
2019-07-10 09:36:58,305+02 INFO
 [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
'hostUpdatesChecker' is using 0 threads out of 5, 2 threads waiting for
tasks.

The same is observed in the vdsm.log too; below is the log during the
attempted migration

2019-07-10 09:39:57,034+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.01 seconds (__init__:573)
2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [api.host] START getStats()
from=:::10.0.1.1,57934 (api:46)
2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] START
repoStats(domains=()) from=:::10.0.1.1,57934,
task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:46)
2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats
return={u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual':
True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck':
'2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0,
'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154',
'lastCheck': '6.0', 'valid': True},
u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True,
'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4',
'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443',
'lastCheck': '2.4', 'valid': True}} from=:::10.0.1.1,57934,
task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:52)
2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] START
multipath_health() from=:::10.0.1.1,57934,
task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:46)
2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH
multipath_health return={} from=:::10.0.1.1,57934,
task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:52)
2019-07-10 09:39:58,002+0200 INFO  (jsonrpc/2) [api.host] FINISH getStats
return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics':
{'42': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle':
'99.87'}, '43': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '24': {'cpuUser': '0.73', 'nodeIndex': 0, 'cpuSys':
'0.07', 'cpuIdle': '99.20'}, '25': {'cpuUser': '0.07', 'nodeIndex': 1,
'cpuSys': '0.00', 'cpuIdle': '99.93'}, '26': {'cpuUser': '5.59',
'nodeIndex': 0, 'cpuSys': '1.20', 'cpuIdle': '93.21'}, '27': {'cpuUser':
'0.87', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': '98.53'}, '20':
{'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.34'},
'21': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle':
'99.93'}, '22': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.20',
'cpuIdle': '99.40'}, '23': {'cpuUser': '0.07', 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Michal Skrivanek
On 9 Jul 2019, at 17:16, Strahil  wrote:

I'm not sure, but I always thought that you need an agent for live
migrations.


You don’t. For snapshots and other less important stuff like reporting IPs
you do. In 4.3 you should be fine with qemu-ga only.
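A minimal sketch for getting qemu-ga into a CentOS/RHEL guest (package and
unit names assumed to match that distribution; adjust for other guest OSes):

yum install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent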

You can always try installing either qemu-guest-agent or ovirt-guest-agent
and check if live migration between hosts is possible.

Have you set the new cluster/dc version ?

Best Regards
Strahil Nikolov
On Jul 9, 2019 17:42, Neil  wrote:

I remember seeing the bug earlier, but because it was closed I thought it was
unrelated; this appears to be it

https://bugzilla.redhat.com/show_bug.cgi?id=1670701

Perhaps I'm not understanding your question about the VM guest agent, but I
don't have any guest agent currently installed on the VM; not sure if the
output of my qemu-kvm process maybe answers this question?

/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
-S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
-m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
-numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
-no-user-config -nodefaults -chardev
socket,id=charmonitor,fd=31,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc
base=2019-07-09T10:26:53,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,fd=35,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,fd=36,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice 
tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
-incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
-object rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on

Please shout if you need further info.

Thanks.






On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov 
wrote:

Shouldn't cause that problem.

You have to find the bug in bugzilla and report a regression (if it's not
closed) , or open a new one and report the regression.
As far as I remember , only the dashboard was affected due to new features
about vdo disk savings.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQCHU3VAIQQCG7NSBYK5UMZYFRTJ7B2E/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVCCY6JWXWH6UBJYLEHLMKFXURLWK7YR/


[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Michal Skrivanek
Can you share the engine.log please? And highlight the exact time when
you attempt that migrate action
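One way to capture that, assuming the default log locations:

tail -f /var/log/ovirt-engine/engine.log   # on the engine
tail -f /var/log/vdsm/vdsm.log             # on the source host

Click Migrate while these are running and note the timestamp of any new lines.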

Thanks,
michal

> On 9 Jul 2019, at 16:42, Neil  wrote:
>
> I remember seeing the bug earlier, but because it was closed I thought it was
> unrelated; this appears to be it
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>
> Perhaps I'm not understanding your question about the VM guest agent, but I
> don't have any guest agent currently installed on the VM; not sure if the
> output of my qemu-kvm process maybe answers this question?
>
> /usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
> -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=31,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc
> base=2019-07-09T10:26:53,driftfix=slew -global
> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
> if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,fd=35,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,fd=36,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
> -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> -object rng-random,id=objrng0,filename=/dev/urandom -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
> -msg timestamp=on
>
> Please shout if you need further info.
>
> Thanks.
>
>
>
>
>
>
> On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov 
> wrote:
>
>> Shouldn't cause that problem.
>>
>> You have to find the bug in bugzilla and report a regression (if it's not
>> closed) , or open a new one and report the regression.
>> As far as I remember, only the dashboard was affected due to new features
>> about vdo disk savings.
>>
>> About the VM - this should be another issue. What agent are you using in
>> the VMs (ovirt or qemu) ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tuesday, 9 July 2019 at 10:09:05 GMT-4, Neil <
>> nwilson...@gmail.com> wrote:
>>
>>
>> Hi Strahil,
>>
>> Thanks for the quick reply.
>> I put the cluster into global maintenance, then installed the 4.3 repo,
>> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Strahil
I'm not sure, but I always thought that you need an agent for live migrations.
You can always try installing either qemu-guest-agent or ovirt-guest-agent and
check if live migration between hosts is possible.

Have you set the new cluster/dc version ?
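If in doubt, the cluster compatibility level can be read over the REST API; a
sketch with a placeholder engine host and credentials:

curl -s -k -u 'admin@internal:PASSWORD' \
  https://engine.example.com/ovirt-engine/api/clusters | grep -A2 '<version>'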

Best Regards
Strahil Nikolov

On Jul 9, 2019 17:42, Neil wrote:
>
> I remember seeing the bug earlier, but because it was closed I thought it was
> unrelated; this appears to be it
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>
> Perhaps I'm not understanding your question about the VM guest agent, but I
> don't have any guest agent currently installed on the VM; not sure if the
> output of my qemu-kvm process maybe answers this question?
>
> /usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on -S 
> -object 
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>  -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu 
> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>  -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 
> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid 
> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios 
> type=1,manufacturer=oVirt,product=oVirt 
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
>  -no-user-config -nodefaults -chardev 
> socket,id=charmonitor,fd=31,server,nowait -mon 
> chardev=charmonitor,id=monitor,mode=control -rtc 
> base=2019-07-09T10:26:53,driftfix=slew -global kvm-pit.lost_tick_policy=delay 
> -no-hpet -no-shutdown -boot strict=on -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device 
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive 
> if=none,id=drive-ide0-1-0,readonly=on -device 
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
>  -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
>  -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device 
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
>  -chardev socket,id=charchannel0,fd=35,server,nowait -device 
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>  -chardev socket,id=charchannel1,fd=36,server,nowait -device 
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>  -chardev spicevmc,id=charchannel2,name=vdagent -device 
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>  -spice 
> tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>  -device 
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
>  -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 
> -object rng-random,id=objrng0,filename=/dev/urandom -device 
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox 
> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg 
> timestamp=on
>
> Please shout if you need further info.
>
> Thanks.
>
>
>
>
>
>
> On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov  wrote:
>>
>> Shouldn't cause that problem.
>>
>> You have to find the bug in bugzilla and report a regression (if it's not 
>> closed) , or open a new one and report the regression.
>> As far as I remember , only the dashboard was affected due to new features 
>> about vdo disk savings.
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQCHU3VAIQQCG7NSBYK5UMZYFRTJ7B2E/


[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Neil
I remember seeing the bug earlier, but because it was closed I thought it was
unrelated; this appears to be it

https://bugzilla.redhat.com/show_bug.cgi?id=1670701

Perhaps I'm not understanding your question about the VM guest agent, but I
don't have any guest agent currently installed on the VM; not sure if the
output of my qemu-kvm process maybe answers this question?

/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
-S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
-m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
-numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
-no-user-config -nodefaults -chardev
socket,id=charmonitor,fd=31,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc
base=2019-07-09T10:26:53,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,fd=35,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,fd=36,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
-incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
-object rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on
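As an aside, the virtserialport entry with name=org.qemu.guest_agent.0 in the
command line above means the guest-agent channel exists on the host side;
whether anything answers on it depends on the agent inside the guest. A quick
host-side check, as a sketch:

ps aux | grep -F 'org.qemu.guest_agent.0'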

Please shout if you need further info.

Thanks.






On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov 
wrote:

> Shouldn't cause that problem.
>
> You have to find the bug in bugzilla and report a regression (if it's not
> closed) , or open a new one and report the regression.
> As far as I remember , only the dashboard was affected due to new features
> about vdo disk savings.
>
> About the VM - this should be another issue. What agent are you using in
> the VMs (ovirt or qemu) ?
>
> Best Regards,
> Strahil Nikolov
>
> В вторник, 9 юли 2019 г., 10:09:05 ч. Гринуич-4, Neil <
> nwilson...@gmail.com> написа:
>
>
> Hi Strahil,
>
> Thanks for the quick reply.
> I put the cluster into global maintenance, then installed the 4.3 repo,
> then "yum update ovirt\*setup\*"  then "engine-upgrade-check",
> "engine-setup", then "yum update", once completed, I rebooted the
> hosted-engine VM, and took the cluster out of global maintenance.
>
> Thinking back to the upgrade from 4.1 to 4.2 I don't recall doing a "yum
> update" after doing the engine-setup, not sure if this would cause it
> perhaps?
>
> Thank you.
> Regards.
> Neil Wilson.
>
> On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov 
> wrote:
>
> Hi Neil,
>
> for "Could not fetch data needed for VM migrate operation" - there was a
> bug and it was fixed.
> Are you sure you have fully updated ?
> What procedure did you use ?
>
> Best Regards,
> Strahil Nikolov
>
> В вторник, 9 юли 2019 г., 7:26:21 ч. Гринуич-4, Neil 
> написа:
>
>
> Hi guys.
>
> I have two problems since upgrading from 4.2.x to 4.3.4
>
> First issue is I can no longer manually migrate VM's between hosts, I get
> an error in the ovirt 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Strahil Nikolov
Shouldn't cause that problem.

You have to find the bug in bugzilla and report a regression (if it's not
closed), or open a new one and report the regression. As far as I remember,
only the dashboard was affected due to new features about vdo disk savings.

About the VM - this should be another issue. What agent are you using in the
VMs (ovirt or qemu)?

Best Regards,
Strahil Nikolov

On Tuesday, 9 July 2019 at 10:09:05 GMT-4, Neil wrote:
 
Hi Strahil,

Thanks for the quick reply. I put the cluster into global maintenance, then
installed the 4.3 repo, then "yum update ovirt\*setup\*", then
"engine-upgrade-check", "engine-setup", then "yum update"; once completed, I
rebooted the hosted-engine VM and took the cluster out of global maintenance.

Thinking back to the upgrade from 4.1 to 4.2 I don't recall doing a "yum
update" after doing the engine-setup, not sure if this would cause it perhaps?

Thank you.
Regards.
Neil Wilson.

On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov  wrote:

Hi Neil,

for "Could not fetch data needed for VM migrate operation" - there was a bug
and it was fixed. Are you sure you have fully updated? What procedure did you
use?

Best Regards,
Strahil Nikolov

On Tuesday, 9 July 2019 at 7:26:21 GMT-4, Neil wrote:
 
Hi guys.

I have two problems since upgrading from 4.2.x to 4.3.4

First issue is I can no longer manually migrate VM's between hosts, I get an
error in the ovirt GUI that says "Could not fetch data needed for VM migrate
operation" and nothing gets logged either in my engine.log or my vdsm.log

Then the other issue is my Dashboard says the following "Error! Could not fetch
dashboard data. Please ensure that data warehouse is properly installed and
configured."

If I look at my ovirt-engine-dwhd.log I see the following if I try restart the
dwh service...
2019-07-09 11:48:04|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
2019-07-09 11:48:10|ETL Service Stopped
2019-07-09 11:49:59|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
2019-07-09 11:52:56|ETL Service Stopped
2019-07-09 11:52:57|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
2019-07-09 12:16:01|ETL Service Stopped
2019-07-09 12:16:45|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Neil
Apologies, this should read...
"I put the cluster into global maintenance, then installed the 4.3 repo,
then "engine-upgrade-check" then "yum update ovirt\*setup\*" and then
"engine-setup"..."

On Tue, Jul 9, 2019 at 4:08 PM Neil  wrote:

> Hi Strahil,
>
> Thanks for the quick reply.
> I put the cluster into global maintenance, then installed the 4.3 repo,
> then "yum update ovirt\*setup\*"  then "engine-upgrade-check",
> "engine-setup", then "yum update", once completed, I rebooted the
> hosted-engine VM, and took the cluster out of global maintenance.
>
> Thinking back to the upgrade from 4.1 to 4.2 I don't recall doing a "yum
> update" after doing the engine-setup, not sure if this would cause it
> perhaps?
>
> Thank you.
> Regards.
> Neil Wilson.
>
> On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov 
> wrote:
>
>> Hi Neil,
>>
>> for "Could not fetch data needed for VM migrate operation" - there was a
>> bug and it was fixed.
>> Are you sure you have fully updated ?
>> What procedure did you use ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tuesday, 9 July 2019 at 7:26:21 GMT-4, Neil <
>> nwilson...@gmail.com> wrote:
>>
>>
>> Hi guys.
>>
>> I have two problems since upgrading from 4.2.x to 4.3.4
>>
>> First issue is I can no longer manually migrate VM's between hosts, I get
>> an error in the ovirt GUI that says "Could not fetch data needed for VM
>> migrate operation" and nothing gets logged either in my engine.log or my
>> vdsm.log
>>
>> Then the other issue is my Dashboard says the following "Error! Could not
>> fetch dashboard data. Please ensure that data warehouse is properly
>> installed and configured."
>>
>> If I look at my ovirt-engine-dwhd.log I see the following if I try
>> restart the dwh service...
>>
>> 2019-07-09 11:48:04|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> hoursToKeepDaily|0
>> hoursToKeepHourly|720
>> ovirtEngineDbPassword|**
>> runDeleteTime|3
>>
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> runInterleave|60
>> limitRows|limit 1000
>> ovirtEngineHistoryDbUser|ovirt_engine_history
>> ovirtEngineDbUser|engine
>> deleteIncrement|10
>> timeBetweenErrorEvents|30
>> hoursToKeepSamples|24
>> deleteMultiplier|1000
>> lastErrorSent|2011-07-03 12:46:47.00
>> etlVersion|4.3.0
>> dwhAggregationDebug|false
>> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> ovirtEngineHistoryDbPassword|**
>> 2019-07-09 11:48:10|ETL Service Stopped
>> 2019-07-09 11:49:59|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> hoursToKeepDaily|0
>> hoursToKeepHourly|720
>> ovirtEngineDbPassword|**
>> runDeleteTime|3
>>
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> runInterleave|60
>> limitRows|limit 1000
>> ovirtEngineHistoryDbUser|ovirt_engine_history
>> ovirtEngineDbUser|engine
>> deleteIncrement|10
>> timeBetweenErrorEvents|30
>> hoursToKeepSamples|24
>> deleteMultiplier|1000
>> lastErrorSent|2011-07-03 12:46:47.00
>> etlVersion|4.3.0
>> dwhAggregationDebug|false
>> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> ovirtEngineHistoryDbPassword|**
>> 2019-07-09 11:52:56|ETL Service Stopped
>> 2019-07-09 11:52:57|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> hoursToKeepDaily|0
>> hoursToKeepHourly|720
>> ovirtEngineDbPassword|**
>> runDeleteTime|3
>>
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> runInterleave|60
>> limitRows|limit 1000
>> ovirtEngineHistoryDbUser|ovirt_engine_history
>> ovirtEngineDbUser|engine
>> deleteIncrement|10
>> timeBetweenErrorEvents|30
>> hoursToKeepSamples|24
>> deleteMultiplier|1000
>> lastErrorSent|2011-07-03 12:46:47.00
>> etlVersion|4.3.0
>> dwhAggregationDebug|false
>> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> ovirtEngineHistoryDbPassword|**
>> 2019-07-09 12:16:01|ETL Service Stopped
>> 2019-07-09 12:16:45|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Neil
Hi Strahil,

Thanks for the quick reply.
I put the cluster into global maintenance, then installed the 4.3 repo,
then "yum update ovirt\*setup\*"  then "engine-upgrade-check",
"engine-setup", then "yum update", once completed, I rebooted the
hosted-engine VM, and took the cluster out of global maintenance.

Thinking back to the upgrade from 4.1 to 4.2 I don't recall doing a "yum
update" after doing the engine-setup, not sure if this would cause it
perhaps?

Thank you.
Regards.
Neil Wilson.

On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov 
wrote:

> Hi Neil,
>
> for "Could not fetch data needed for VM migrate operation" - there was a
> bug and it was fixed.
> Are you sure you have fully updated ?
> What procedure did you use ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 9 July 2019 at 7:26:21 GMT-4, Neil
> wrote:
>
>
> Hi guys.
>
> I have two problems since upgrading from 4.2.x to 4.3.4
>
> First issue is I can no longer manually migrate VM's between hosts, I get
> an error in the ovirt GUI that says "Could not fetch data needed for VM
> migrate operation" and nothing gets logged either in my engine.log or my
> vdsm.log
>
> Then the other issue is my Dashboard says the following "Error! Could not
> fetch dashboard data. Please ensure that data warehouse is properly
> installed and configured."
>
> If I look at my ovirt-engine-dwhd.log I see the following if I try restart
> the dwh service...
>
> 2019-07-09 11:48:04|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|30
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.00
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**
> 2019-07-09 11:48:10|ETL Service Stopped
> 2019-07-09 11:49:59|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|30
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.00
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**
> 2019-07-09 11:52:56|ETL Service Stopped
> 2019-07-09 11:52:57|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|30
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.00
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**
> 2019-07-09 12:16:01|ETL Service Stopped
> 2019-07-09 12:16:45|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|30
> 

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Strahil Nikolov
Hi Neil,

for "Could not fetch data needed for VM migrate operation" - there was a bug
and it was fixed. Are you sure you have fully updated? What procedure did you
use?

Best Regards,
Strahil Nikolov
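As a side note, while the Migrate button misbehaves a migration can also be
triggered through the REST API, which helps separate a UI problem from an
engine problem. A sketch with placeholder engine host, credentials and VM id:

curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<action/>' \
  https://engine.example.com/ovirt-engine/api/vms/VM_UUID/migrate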

On Tuesday, 9 July 2019 at 7:26:21 GMT-4, Neil wrote:
 
Hi guys.

I have two problems since upgrading from 4.2.x to 4.3.4

First issue is I can no longer manually migrate VM's between hosts, I get an
error in the ovirt GUI that says "Could not fetch data needed for VM migrate
operation" and nothing gets logged either in my engine.log or my vdsm.log

Then the other issue is my Dashboard says the following "Error! Could not fetch
dashboard data. Please ensure that data warehouse is properly installed and
configured."

If I look at my ovirt-engine-dwhd.log I see the following if I try restart the
dwh service...
2019-07-09 11:48:04|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
2019-07-09 11:48:10|ETL Service Stopped
2019-07-09 11:49:59|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
2019-07-09 11:52:56|ETL Service Stopped
2019-07-09 11:52:57|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
2019-07-09 12:16:01|ETL Service Stopped
2019-07-09 12:16:45|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|30
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.3.0
dwhAggregationDebug|false
dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**
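The dumps above are just the ETL service echoing its configuration at
start-up; they do not show an error by themselves. A sketch for checking the
service state and its actual log, assuming default paths:

systemctl status ovirt-engine-dwhd
tail -n 50 /var/log/ovirt-engine-dwh/ovirt-engine-dwh.log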





I have a hosted engine, and I have two hosts and my storage is FC based. The
hosts are still running on 4.2 because I'm unable to migrate VM's off.

I have plenty of resources available in terms of CPU and Memory on the
destination host, and my Cluster version is set to 4.2 because my hosts are
still on 4.2.

I have recently upgraded from 4.1 to 4.2 and then I upgraded my hosts to 4.2 as
well, but I can't get my hosts to 4.3 because of the above migration issue.

Below are my ovirt packages: