[ovirt-users] Re: ovirt-engine-appliance ova

2019-07-11 Thread Strahil
Based on my experience, the OVA contains the XML config and the actual disk of 
the hosted engine.
The deployment then starts the VM locally and populates the necessary data in it.
Once that's done, the deployment shuts the local VM down, copies its disk, 
undefines it, and then oVirt's HA agents are configured - so they can 
mount the shared storage and power up the VM (a special tar /OVMF/ file on the 
shared storage holds the agent configuration file).

So, in the OVA there should be a template VM disk + the XML config (CPUs, RAM, 
devices, etc.).
I would be surprised if there is anything else in it.
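
You can check this without importing anything: an OVA is just a tar archive, 
so listing it shows exactly what is inside. A quick sketch, assuming the 
default path the appliance rpm installs to (the file names inside the archive 
may differ per build):

    # list the OVF/XML descriptor and the disk image packed into the OVA
    tar tvf /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova

    # extract only the descriptor to inspect CPUs, RAM, devices, etc. (GNU tar)
    tar xvf /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova \
        --wildcards '*.ovf'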

Best Regards,
Strahil Nikolov

On Jul 11, 2019 23:39, Jingjie Jiang  wrote:
>
> Hi Strahil,
>
> Yes, you are right.
>
> After installing the ovirt-engine-appliance rpm, the OVA file is saved at 
> /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova
>
> I was trying to understand what the OVA file includes.
>
> I thought it only contained CentOS 7.6.
>
> I observed that ovirt-engine was installed during "hosted-engine --deploy".
>
> Is ovirt-engine-appliance-4.3-20190610.1.el7.ova only used to deploy the 
> hosted engine?
>
> Is there a document about how to generate it?
>
>
> Thanks,
>
> Jingjie
>
>
> On 7/11/19 4:20 PM, Strahil Nikolov wrote:
>
> If I'm not wrong, this rpm is downloaded to one of the hosts during 
> the self-hosted engine's deployment.
> Why would you try to import a second self-hosted engine?
>
> Best Regards,
> Strahil Nikolov
>
>
> On Thursday, July 11, 2019, 22:37:56 GMT+3, 
>  wrote:
>
>
> Hi,
> Can someone tell me how to generate the ovirt-engine-appliance OVA file in 
> ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
> I tried to import the ovirt-engine-appliance 
> OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, but I 
> got the following error:
> Failed to load VM configuration from OVA file: 
> /var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova
>
> I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova contains more than 
> CentOS 7.6.
>
> Thanks,
> Jingjie
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EP2BMVXRXUM6F3WF77OTKZ75NQKMBYC6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBLCILUNJ2LGZMLVLH2RGMZ5JGNDL43V/


[ovirt-users] Re: ovirt-engine-appliance ova

2019-07-11 Thread Jingjie Jiang

Hi Strahil,

Yes, you are right.

After installing the ovirt-engine-appliance rpm, the OVA file is saved at 
/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova


I was trying to understand what the OVA file includes.

I thought it only contained CentOS 7.6.

I observed that ovirt-engine was installed during "hosted-engine --deploy".

Is ovirt-engine-appliance-4.3-20190610.1.el7.ova only used to deploy the 
hosted engine?


Is there a document about how to generate it?


Thanks,

Jingjie


On 7/11/19 4:20 PM, Strahil Nikolov wrote:
If I'm not wrong, this rpm is downloaded to one of the hosts 
during the self-hosted engine's deployment.

Why would you try to import a second self-hosted engine?

Best Regards,
Strahil Nikolov


On Thursday, July 11, 2019, 22:37:56 GMT+3, 
 wrote:



Hi,
Can someone tell me how to generate the ovirt-engine-appliance OVA file in 
ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
I tried to import the ovirt-engine-appliance 
OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, 
but I got the following error:
Failed to load VM configuration from OVA file: 
/var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova


I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova contains more than 
CentOS 7.6.


Thanks,
Jingjie
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 


Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EP2BMVXRXUM6F3WF77OTKZ75NQKMBYC6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QJYEFWGGQTJFPBCD2OMID7JMFFUCPON5/


[ovirt-users] Re: ovirt-engine-appliance ova

2019-07-11 Thread Strahil Nikolov
If I'm not wrong, this rpm is downloaded to one of the hosts during the 
self-hosted engine's deployment. Why would you try to import a second 
self-hosted engine?

Best Regards,
Strahil Nikolov

On Thursday, July 11, 2019, 22:37:56 GMT+3, 
 wrote:

Hi,
Can someone tell me how to generate the ovirt-engine-appliance OVA file in 
ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
I tried to import the ovirt-engine-appliance 
OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, but I got 
the following error:
Failed to load VM configuration from OVA file: 
/var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova

I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova contains more than CentOS 7.6.

Thanks,
Jingjie
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EP2BMVXRXUM6F3WF77OTKZ75NQKMBYC6/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QNV52UBXJKCAV442N6K42KPOOVUI4F25/


[ovirt-users] ovirt-engine-appliance ova

2019-07-11 Thread jingjie . jiang
Hi,
Can someone tell me how to generate the ovirt-engine-appliance OVA file in 
ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
I tried to import the ovirt-engine-appliance 
OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, but I got 
the following error:
Failed to load VM configuration from OVA file: 
/var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova

I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova contains more than CentOS 7.6.

Thanks,
Jingjie
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EP2BMVXRXUM6F3WF77OTKZ75NQKMBYC6/


[ovirt-users] Re: Update 4.2.8 --> 4.3.5

2019-07-11 Thread Strahil
I'm adding gluster-users, as I'm not sure if you can go gluster v3 -> v6 
directly.

Theoretically speaking, there should be no problem - but I don't know if you 
will observe any issues.

@Gluster-users,

Can someone share their thoughts about v3 to v6 migration?
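
For whoever attempts it, a pre-flight / post-upgrade check along these lines 
may help (a sketch only - verify the exact op-version number for your Gluster 
6 build before setting it):

    # current operating version of the cluster, and the maximum the
    # installed binaries support
    gluster volume get all cluster.op-version
    gluster volume get all cluster.max-op-version

    # only after ALL nodes run the new version, bump the op-version
    # (60000 assumed here for Gluster 6)
    gluster volume set all cluster.op-version 60000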

Best Regards,
Strahil Nikolov

On Jul 11, 2019 14:05, Christoph Köhler wrote:
>
> Hello!
>
> We have a 4.2.8 environment with some managed gluster-volumes as storage 
> domains and we want to go up to 4.3.5.
>
> How is that procedure, especially with the gluster nodes in ovirt that 
> are running 3.12.15? My fear is on the jump to gluster 6. Does the cluster 
> work if the first node (of three) is upgraded? And what about the 
> sequence - first the hypervisors or first the gluster nodes?
>
> Is there anyone who has done this?
>
> Greetings!
> Christoph Köhler
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VWFS6YUKA77VP5DWGV7SBYGZREZDJMO7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NFB3C67VFWN3CHIVVKPVGZSABHDK2PRX/


[ovirt-users] Re: An error has occurred during installation of Host

2019-07-11 Thread Gobinda Das
This could be an issue with your yum package.
Can you please try a yum update? If you get the same error, then you probably
need to update the yum package itself.

On Wed, Jul 10, 2019 at 2:03 PM  wrote:

> Update 01:
> I tried to add another new host,some errors occurred:
> 1st:
> An error has occurred during installation of Host node03: Yum 'ascii'
> codec can't encode characters in position 165-169: ordinal not in
> range(128).
> 2nd:
> An error has occurred during installation of Host node03: Failed to
> execute stage 'Environment customization': 'ascii' codec can't encode
> characters in position 165-169: ordinal not in range(128).
> 3rd:
> Host node03 installation failed. Command returned failure code 1 during
> SSH session 'root@node03'.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KBAXSCWTTKUBTH5GMPFZNQDKQ5T7Z4NK/
>
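
For context, that 'ascii' codec error is Python 2 refusing to implicitly 
encode a non-ASCII character; it usually means some value handed to 
host-deploy (hostname, host comment, locale) contains non-ASCII bytes. A 
minimal reproduction of the error class (hypothetical value, Python 2 
semantics):

    python -c "u'caf\xe9'.encode('ascii')"
    # UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9'
    # in position 3: ordinal not in range(128)

So besides updating yum, it may be worth checking the host's name and comment 
in the UI, and the locale (echo $LANG) on engine and host, for stray 
non-ASCII characters.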


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SIB36SRIOGB4IINQ7COIRVPFVSLMAEXO/


[ovirt-users] Update 4.2.8 --> 4.3.5

2019-07-11 Thread Christoph Köhler

Hello!

We have a 4.2.8 environment with some managed gluster-volumes as storage 
domains and we want to go up to 4.3.5.


How is that procedure, especially with the gluster nodes in ovirt that 
are running 3.12.15? My fear is on the jump to gluster 6. Does the cluster 
work if the first node (of three) is upgraded? And what about the 
sequence - first the hypervisors or first the gluster nodes?


Is there anyone who has done this?

Greetings!
Christoph Köhler
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VWFS6YUKA77VP5DWGV7SBYGZREZDJMO7/


[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Neil
Hi Sharon,

Thanks for the assistance.
On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch  wrote:

> Hi,
>
> Regarding issue 1 (Dashboard):
> Did you upgrade the engine to 4.3.5? There was a bug fixed in version
> 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be
> the same issue.
>


No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is
there another repo available?

> Regarding issue 2 (Manual Migrate dialog):
> Can you please attach your browser console log and engine.log snippet when
> you have the problem?
> If you could take from the console log the actual REST API response, that
> would be great.
> The request will be something like
> /api/hosts?migration_target_of=...
>

Please see the attached text log for the browser console; I don't see any REST
API call being logged, just a stack trace error.
The engine.log literally doesn't get updated when I click the Migrate
button, so there isn't anything to share, unfortunately.
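
In case it helps reproduce this: the engine log can be watched live while 
clicking Migrate (default path on the engine host), and the REST response can 
be grabbed from the browser devtools Network tab by filtering for 
"migration_target_of":

    tail -f /var/log/ovirt-engine/engine.log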

Please shout if you need further info.

Thank you!




>
>
> On Thu, Jul 11, 2019 at 10:04 AM Neil  wrote:
>
>> Hi everyone,
>> Just an update.
>>
>> I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to
>> 4.3 and I'm still faced with the same problems.
>>
>> 1.) My Dashboard says the following "Error! Could not fetch dashboard
>> data. Please ensure that data warehouse is properly installed and
>> configured."
>>
>> 2.) When I click the Migrate button I get the error "Could not fetch
>> data needed for VM migrate operation"
>>
>> Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
>> it's one issue down.
>>
>> I've done an engine-upgrade-check and a yum update on all my hosts and
>> engine and there are no further updates or patches waiting.
>> Nothing is logged in my engine.log when I click the Migrate button either.
>>
>> Any ideas what to do or try for  1 and 2 above?
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>>
>>
>> On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:
>>
>>>
>>>
>>> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
>>> michal.skriva...@redhat.com> wrote:
>>>


 On 11 Jul 2019, at 06:34, Alex K  wrote:



 On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <
 michal.skriva...@redhat.com> wrote:

>
>
> On 9 Jul 2019, at 17:16, Strahil  wrote:
>
> I'm not sure, but I always thought that you need  an agent for live
> migrations.
>
>
> You don’t. For snapshots, and other less important stuff like
> reporting IPs you do. In 4.3 you should be fine with qemu-ga only
>
 I've seen resolving live migration issues by installing newer versions
 of ovirt ga.


 Hm, it shouldn’t make any difference whatsoever. Do you have any
 concrete data? that would help.

>>> That was some time ago, when running 4.1. No data unfortunately. Also did
>>> not expect ovirt ga to affect migration, but experience showed me that it
>>> did.  The only observation is that it affected only Windows VMs. Linux VMs
>>> never had an issue, regardless of ovirt ga.
>>>
 You can always try installing either qemu-guest-agent  or
> ovirt-guest-agent and check if live  migration between hosts is possible.
>
> Have you set the new cluster/dc version ?
>
> Best Regards
> Strahil Nikolov
> On Jul 9, 2019 17:42, Neil  wrote:
>
> I remember seeing the bug earlier but because it was closed thought it
> was unrelated, this appears to be it
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>
> Perhaps I'm not understanding your question about the VM guest agent,
> but I don't have any guest agent currently installed on the VM, not sure 
> if
> the output of my qemu-kvm process maybe answers this question?
>
> /usr/libexec/qemu-kvm -name
> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>
>
 It’s 7.3, likely oVirt 4.1. Please upgrade...

 C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc
> base=2019-07-09T10:26:53,driftfix=slew -global
> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi
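
On the guest-agent point raised in this thread: a quick way to verify that 
qemu-guest-agent is installed and answering inside a guest. A sketch for an 
EL7 guest; <domain> stands for the VM's libvirt domain name on the host:

    # inside the guest
    yum install -y qemu-guest-agent
    systemctl start qemu-guest-agent
    systemctl enable qemu-guest-agent

    # from the host: ping the agent through libvirt
    virsh qemu-agent-command <domain> '{"execute":"guest-ping"}'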

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Sharon Gratch
Hi,

Regarding issue 1 (Dashboard):
Did you upgrade the engine to 4.3.5? There was a bug fixed in version
4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be
the same issue.

Regarding issue 2 (Manual Migrate dialog):
Can you please attach your browser console log and engine.log snippet when
you have the problem?
If you could take from the console log the actual REST API response, that
would be great.
The request will be something like
/api/hosts?migration_target_of=...
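
For reference, the same query can be issued directly with curl against the 
REST API (a sketch; ENGINE_FQDN, the credentials, and VM_ID are placeholders 
to substitute):

    curl -k -u 'admin@internal:PASSWORD' \
        'https://ENGINE_FQDN/ovirt-engine/api/hosts?migration_target_of=VM_ID'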

Thanks,
Sharon



On Thu, Jul 11, 2019 at 10:04 AM Neil  wrote:

> Hi everyone,
> Just an update.
>
> I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to
> 4.3 and I'm still faced with the same problems.
>
> 1.) My Dashboard says the following "Error! Could not fetch dashboard
> data. Please ensure that data warehouse is properly installed and
> configured."
>
> 2.) When I click the Migrate button I get the error "Could not fetch data
> needed for VM migrate operation"
>
> Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
> it's one issue down.
>
> I've done an engine-upgrade-check and a yum update on all my hosts and
> engine and there are no further updates or patches waiting.
> Nothing is logged in my engine.log when I click the Migrate button either.
>
> Any ideas what to do or try for  1 and 2 above?
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
>
>
>
>
> On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:
>
>>
>>
>> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>>
>>>
>>>
>>> On 11 Jul 2019, at 06:34, Alex K  wrote:
>>>
>>>
>>>
>>> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
>>> wrote:
>>>


 On 9 Jul 2019, at 17:16, Strahil  wrote:

 I'm not sure, but I always thought that you need  an agent for live
 migrations.


 You don’t. For snapshots, and other less important stuff like reporting
 IPs you do. In 4.3 you should be fine with qemu-ga only

>>> I've seen resolving live migration issues by installing newer versions
>>> of ovirt ga.
>>>
>>>
>>> Hm, it shouldn’t make any difference whatsoever. Do you have any
>>> concrete data? that would help.
>>>
>> That was some time ago, when running 4.1. No data unfortunately. Also did
>> not expect ovirt ga to affect migration, but experience showed me that it
>> did.  The only observation is that it affected only Windows VMs. Linux VMs
>> never had an issue, regardless of ovirt ga.
>>
>>> You can always try installing either qemu-guest-agent  or
 ovirt-guest-agent and check if live  migration between hosts is possible.

 Have you set the new cluster/dc version ?

 Best Regards
 Strahil Nikolov
 On Jul 9, 2019 17:42, Neil  wrote:

 I remember seeing the bug earlier but because it was closed thought it
 was unrelated, this appears to be it

 https://bugzilla.redhat.com/show_bug.cgi?id=1670701

 Perhaps I'm not understanding your question about the VM guest agent,
 but I don't have any guest agent currently installed on the VM, not sure if
 the output of my qemu-kvm process maybe answers this question?

 /usr/libexec/qemu-kvm -name
 guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
 secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
 -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
 Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
 -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
 -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
 type=1,manufacturer=oVirt,product=oVirt
 Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-


>>> It’s 7.3, likely oVirt 4.1. Please upgrade...
>>>
>>> C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
 -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
 chardev=charmonitor,id=monitor,mode=control -rtc
 base=2019-07-09T10:26:53,driftfix=slew -global
 kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
 virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
 virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
 if=none,id=drive-ide0-1-0,readonly=on -device
 ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
 file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
 -device
 virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bo

[ovirt-users] [ANN] oVirt 4.3.5 Fifth Release Candidate is now available for testing

2019-07-11 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.5 Fifth Release Candidate for testing, as of July 11th, 2019.

While testing this release candidate please consider deeper testing on
gluster upgrade since with this release we are switching from Gluster 5 to
Gluster 6.

This update is a release candidate of the fifth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]

Additional Resources:
* Read more about the oVirt 4.3.5 release highlights:
http://www.ovirt.org/release/4.3.5/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.5/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
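
For testing the RC, the pre-release repository is usually enabled via the 
ovirt-release pre-release package and a plain update (the package URL below is 
assumed from the usual resources.ovirt.org layout - please verify it before 
running):

    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
    yum update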

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QCJ2IZOKMIIN4DMN4T4K22P7L4MJK6TA/


[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Neil
Hi everyone,
Just an update.

I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to 4.3
and I'm still faced with the same problems.

1.) My Dashboard says the following "Error! Could not fetch dashboard data.
Please ensure that data warehouse is properly installed and configured."

2.) When I click the Migrate button I get the error "Could not fetch data
needed for VM migrate operation"

Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
it's one issue down.

I've done an engine-upgrade-check and a yum update on all my hosts and
engine and there are no further updates or patches waiting.
Nothing is logged in my engine.log when I click the Migrate button either.
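
For issue 1, the dashboard pulls its data from the DWH service on the engine 
host, so that is worth checking first (default service name and log path 
assumed):

    systemctl status ovirt-engine-dwhd
    less /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log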

Any ideas what to do or try for 1 and 2 above?

Thank you.

Regards.

Neil Wilson.





On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:

>
>
> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> On 11 Jul 2019, at 06:34, Alex K  wrote:
>>
>>
>>
>> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
>> wrote:
>>
>>>
>>>
>>> On 9 Jul 2019, at 17:16, Strahil  wrote:
>>>
>>> I'm not sure, but I always thought that you need  an agent for live
>>> migrations.
>>>
>>>
>>> You don’t. For snapshots, and other less important stuff like reporting
>>> IPs you do. In 4.3 you should be fine with qemu-ga only
>>>
>> I've seen resolving live migration issues by installing newer versions of
>> ovirt ga.
>>
>>
>> Hm, it shouldn’t make any difference whatsoever. Do you have any concrete
>> data? that would help.
>>
> That was some time ago, when running 4.1. No data unfortunately. Also did
> not expect ovirt ga to affect migration, but experience showed me that it
> did.  The only observation is that it affected only Windows VMs. Linux VMs
> never had an issue, regardless of ovirt ga.
>
>> You can always try installing either qemu-guest-agent  or
>>> ovirt-guest-agent and check if live  migration between hosts is possible.
>>>
>>> Have you set the new cluster/dc version ?
>>>
>>> Best Regards
>>> Strahil Nikolov
>>> On Jul 9, 2019 17:42, Neil  wrote:
>>>
>>> I remember seeing the bug earlier but because it was closed thought it
>>> was unrelated, this appears to be it
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>>>
>>> Perhaps I'm not understanding your question about the VM guest agent,
>>> but I don't have any guest agent currently installed on the VM, not sure if
>>> the output of my qemu-kvm process maybe answers this question?
>>>
>>> /usr/libexec/qemu-kvm -name
>>> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>>> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
>>> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
>>> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
>>> type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>>>
>>>
>> It’s 7.3, likely oVirt 4.1. Please upgrade...
>>
>> C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
>>> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
>>> chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2019-07-09T10:26:53,driftfix=slew -global
>>> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
>>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
>>> if=none,id=drive-ide0-1-0,readonly=on -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>>> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
>>> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
>>> -chardev socket,id=charchannel0,fd=35,server,nowait -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>> -chardev socket,id=charchannel1,fd=36,server,nowait -device
>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>> -spice 
>>> tls-port=59