Re: [ovirt-users] Some major problems after 4.2 upgrade, could really use some assistance

2018-01-12 Thread Simone Tiraboschi
On Thu, Jan 11, 2018 at 6:15 AM, Jayme  wrote:

> I performed an oVirt 4.2 upgrade on a 3-host cluster with NFS shared
> storage.  The shared storage is mounted from one of the hosts.
>
> I upgraded the hosted engine first: I downloaded the 4.2 rpm, did a yum
> update, then ran engine-setup, which seemed to complete successfully.  At the end
> it powered down the hosted engine VM, but the VM never came back up and I was
> unable to start it.
>
> I proceeded to upgrade the three hosts, ovirt 4.2 rpm and a full yum
> update.  I also rebooted each of the three hosts.
>
> After some time the hosts did come back and almost all of the VMs are
> running again and seem to be working ok with the exception of two:
>
> 1. The hosted VM still will not start; I've tried everything I can think
> of.
>
> 2. A VM that I know existed is not running and does not appear to exist; I
> have no idea where it is or how to start it.
>
> 1. Hosted engine
>
> From one of the hosts I get a weird error trying to start it:
>
> # hosted-engine --vm-start
> Command VM.getStats with args {'vmID': '4013c829-c9d7-4b72-90d5-6fe58137504c'}
> failed:
> (code=1, message=Virtual machine does not exist: {'vmId':
> u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
>
> From the other two hosts I do not get the same error as above; sometimes
> it appears to start, but --vm-status shows errors such as:  Engine status
>   : {"reason": "failed liveliness check", "health": "bad",
> "vm": "up", "detail": "Up"}
>
> Seeing these errors in syslog:
>
> Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error
> : qemuOpenFileAs:3183 : Failed to open file
> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
> No such file or directory
>
> Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error
> : qemuDomainStorageOpenStat:11492 : cannot stat file
> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
> Bad file descriptor
>
> 2. Missing VM.  virsh -r list on each host does not show the VM at all.  I
> know it existed and is important.  The log on one of the hosts even shows
> that it started recently and then stopped 10 or so minutes later:
>
> Jan 10 18:47:17 host3 systemd-machined: New machine qemu-9-Berna.
> Jan 10 18:47:17 host3 systemd: Started Virtual Machine qemu-9-Berna.
> Jan 10 18:47:17 host3 systemd: Starting Virtual Machine qemu-9-Berna.
> Jan 10 18:54:45 host3 systemd-machined: Machine qemu-9-Berna terminated.
>
> How can I find out the status of the "Berna" VM and get it running again?
>

Is it in the engine DB?
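
If the engine VM can be brought back up, a quick check could look like the
sketch below (assuming the default database name "engine" and the vm_static
table of the standard engine schema; depending on the install, psql may live
in a software collection):

  sudo -u postgres psql engine -c \
    "SELECT vm_guid, vm_name FROM vm_static WHERE vm_name ILIKE '%berna%';"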


>
> Thanks so much!
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Some major problems after 4.2 upgrade, could really use some assistance

2018-01-11 Thread Darrell Budic
Were you running gluster under your shared storage? If so, you probably need to
set up NFS-Ganesha yourself.

If not, check your ha-agent logs, make sure it's mounting the storage
properly, and look for errors. Good luck!
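
As a rough set of checks on each host (log paths and service names assume a
stock oVirt hosted-engine install):

  systemctl status ovirt-ha-agent ovirt-ha-broker        # are the HA services running?
  tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log   # recent agent errors
  tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log  # recent broker errors
  df -h | grep rhev                                      # is the hosted-engine storage domain mounted?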

> From: Jayme 
> Subject: Re: [ovirt-users] Some major problems after 4.2 upgrade, could 
> really use some assistance
> Date: January 11, 2018 at 12:28:32 PM CST
> To: Martin Sivak; users@ovirt.org
> 
> This is becoming critical for me; does anyone have any ideas or
> recommendations on what I can do to recover access to the hosted engine VM?  As of right
> now I have three hosts that are fully updated: they have the 4.2 repo, a
> full yum update was performed on them, and there are no new updates to apply.
> The hosted engine had its updates applied and a full engine-setup run, but it
> did not come back after being shut down.  There must be some way I can get
> the engine running again?  Please
> 
> On Thu, Jan 11, 2018 at 8:24 AM, Jayme  wrote:
> The hosts have already been fully updated with 4.2 packages though.
> 
> ex. 
> 
> ovirt-host.x86_64                   4.2.0-1.el7.centos   @ovirt-4.2
> ovirt-host-dependencies.x86_64      4.2.0-1.el7.centos   @ovirt-4.2
> ovirt-host-deploy.noarch            1.7.0-1.el7.centos   @ovirt-4.2
> ovirt-hosted-engine-ha.noarch       2.2.2-1.el7.centos   @ovirt-4.2
> ovirt-hosted-engine-setup.noarch    2.2.3-1.el7.centos   @ovirt-4.2
> 
> On Thu, Jan 11, 2018 at 8:16 AM, Martin Sivak  wrote:
> Hi,
> 
> yes, you need to upgrade the hosts. Just take the
> ovirt-hosted-engine-ha and ovirt-hosted-engine-setup packages from
> ovirt 4.2 repositories.
> 
> Martin
> 
> On Thu, Jan 11, 2018 at 11:40 AM, Jayme  wrote:
> > How do I upgrade the hosted engine packages when I can't reach it, or do you
> > mean upgrade the hosts? If so, how exactly do I do that? As for the missing VM,
> > it appears that the disk image is there but its XML file is missing; I have no
> > idea why, or how to recreate it.
> >
> > On Jan 11, 2018 4:43 AM, "Martin Sivak"  wrote:
> >>
> >> Hi,
> >>
> >> you hit one known issue we already have fixes for (4.1 hosts with 4.2
> >> engine):
> >> https://gerrit.ovirt.org/#/q/status:open+project:ovirt-hosted-engine-ha+branch:v2.1.z+topic:ovf_42_for_41
> >>
> >> You can try hotfixing it by upgrading hosted engine packages to 4.2 or
> >> applying the patches manually and installing python-lxml.
> >>
> >> I am not sure what happened to your other VM.
> >>
> >> Best regards
> >>
> >> Martin Sivak
> >>
> >> On Thu, Jan 11, 2018 at 6:15 AM, Jayme  wrote:
> >> > I performed Ovirt 4.2 upgrade on a 3 host cluster with NFS shared
> >> > storage.
> >> > The shared storage is mounted from one of the hosts.
> >> >
> >> > I upgraded the hosted engine first, downloading the 4.2 rpm, doing a yum
> >> > update then engine setup which seemed to complete successfully, at the
> >> > end
> >> > it powered down the hosted VM but it never came back up.  I was unable
> >> > to
> >> > start it.
> >> >
> >> > I proceeded to upgrade the three hosts, ovirt 4.2 rpm and a full yum
> >> > update.
> >> > I also rebooted each of the three hosts.
> >> >
> >> > After some time the hosts did come back and almost all of the VMs are
> >> > running again and seem to be working ok with the exception of two:
> >> >
> >> > 1. The hosted VM still will not start, I've tried everything I can think
> >> > of.
> >> >
> >> > 2. A VM that I know existed is not running and does not appear to exist,
> >> > I
> >> > have no idea where it is or how to start it.
> >> >
> >> > 1. Hosted engine
> >> >
> >> > From one of the hosts I get a weird error trying to start it:
> >> >
>

Re: [ovirt-users] Some major problems after 4.2 upgrade, could really use some assistance

2018-01-11 Thread Jayme
This is becoming critical for me; does anyone have any ideas or
recommendations on what I can do to recover access to the hosted engine VM?  As of
right now I have three hosts that are fully updated: they have the 4.2 repo,
a full yum update was performed on them, and there are no new updates to
apply.  The hosted engine had its updates applied and a full engine-setup run,
but it did not come back after being shut down.  There must be some
way I can get the engine running again?  Please

On Thu, Jan 11, 2018 at 8:24 AM, Jayme  wrote:

> The hosts have already been fully updated with 4.2 packages though.
>
> ex.
>
> ovirt-host.x86_64                   4.2.0-1.el7.centos   @ovirt-4.2
> ovirt-host-dependencies.x86_64      4.2.0-1.el7.centos   @ovirt-4.2
> ovirt-host-deploy.noarch            1.7.0-1.el7.centos   @ovirt-4.2
> ovirt-hosted-engine-ha.noarch       2.2.2-1.el7.centos   @ovirt-4.2
> ovirt-hosted-engine-setup.noarch    2.2.3-1.el7.centos   @ovirt-4.2
>
> On Thu, Jan 11, 2018 at 8:16 AM, Martin Sivak  wrote:
>
>> Hi,
>>
>> yes, you need to upgrade the hosts. Just take the
>> ovirt-hosted-engine-ha and ovirt-hosted-engine-setup packages from
>> ovirt 4.2 repositories.
>>
>> Martin
>>
>> On Thu, Jan 11, 2018 at 11:40 AM, Jayme  wrote:
>> > How do I upgrade the hosted engine packages when I can't reach it, or do
>> > you mean upgrade the hosts? If so, how exactly do I do that? As for the
>> > missing VM, it appears that the disk image is there but its XML file is
>> > missing; I have no idea why, or how to recreate it.
>> >
>> > On Jan 11, 2018 4:43 AM, "Martin Sivak"  wrote:
>> >>
>> >> Hi,
>> >>
>> >> you hit one known issue we already have fixes for (4.1 hosts with 4.2
>> >> engine):
>> >> https://gerrit.ovirt.org/#/q/status:open+project:ovirt-hosted-engine-ha+branch:v2.1.z+topic:ovf_42_for_41
>> >>
>> >> You can try hotfixing it by upgrading hosted engine packages to 4.2 or
>> >> applying the patches manually and installing python-lxml.
>> >>
>> >> I am not sure what happened to your other VM.
>> >>
>> >> Best regards
>> >>
>> >> Martin Sivak
>> >>
>> >> On Thu, Jan 11, 2018 at 6:15 AM, Jayme  wrote:
>> >> > I performed Ovirt 4.2 upgrade on a 3 host cluster with NFS shared
>> >> > storage.
>> >> > The shared storage is mounted from one of the hosts.
>> >> >
>> >> > I upgraded the hosted engine first, downloading the 4.2 rpm, doing a
>> yum
>> >> > update then engine setup which seemed to complete successfully, at
>> the
>> >> > end
>> >> > it powered down the hosted VM but it never came back up.  I was
>> unable
>> >> > to
>> >> > start it.
>> >> >
>> >> > I proceeded to upgrade the three hosts, ovirt 4.2 rpm and a full yum
>> >> > update.
>> >> > I also rebooted each of the three hosts.
>> >> >
>> >> > After some time the hosts did come back and almost all of the VMs are
>> >> > running again and seem to be working ok with the exception of two:
>> >> >
>> >> > 1. The hosted VM still will not start, I've tried everything I can
>> think
>> >> > of.
>> >> >
>> >> > 2. A VM that I know existed is not running and does not appear to
>> exist,
>> >> > I
>> >> > have no idea where it is or how to start it.
>> >> >
>> >> > 1. Hosted engine
>> >> >
>> >> > From one of the hosts I get a weird error trying to start it:
>> >> >
>> >> > # hosted-engine --vm-start
>> >> > Command VM.getStats with args {'vmID':
>> >> > '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
>> >> > (code=1, message=Virtual machine does not exist: {'vmId':
>> >> > u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
>> >> >
>> >> > From the two other hosts I do not get the same error as above,
>> sometimes
>> >> > it
>> >> > appears to start but --vm-status shows errors such as:  Engine status
>> >> > : {"reason": "failed liveliness check", "health": "bad", "vm": "up",
>> >> > "detail": "Up"}
>> >> >
>> >> > Seeing these errors in syslog:
>> >> >
>> >> > Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error :
>> >> > qemuOpenFileAs:3183 : Failed to open file
>> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
>> >> > No such file or directory
>> >> >
>> >> > Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error :
>> >> > qemuDomainStorageOpenStat:11492 : cannot stat file
>> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
>> >> > Bad file descriptor
>> >> >
>> >> > 2. Missing VM.  virsh -r list on each host does not show the VM at
>> all.
>> >> > I
>> >> > know it existed and is important.  The log on one of the hosts even
>> >> > shows
>> >> > that it started it recently then stopped in 10 or so minutes later:
>> >> >
>> >> > Jan 10 18:47:17 host3 systemd-machined: New machine qemu-

Re: [ovirt-users] Some major problems after 4.2 upgrade, could really use some assistance

2018-01-11 Thread Martin Sivak
Hi,

you hit one known issue we already have fixes for (4.1 hosts with 4.2
engine): 
https://gerrit.ovirt.org/#/q/status:open+project:ovirt-hosted-engine-ha+branch:v2.1.z+topic:ovf_42_for_41

You can try hotfixing it by upgrading hosted engine packages to 4.2 or
applying the patches manually and installing python-lxml.
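
If the hosts are still on the 4.1 repos, something along these lines should
pull in the fixed packages (the release rpm URL assumes the standard oVirt
repo layout; restart the HA services afterwards so they pick up the new code):

  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
  yum update ovirt-hosted-engine-ha ovirt-hosted-engine-setup
  systemctl restart ovirt-ha-broker ovirt-ha-agent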

I am not sure what happened to your other VM.

Best regards

Martin Sivak

On Thu, Jan 11, 2018 at 6:15 AM, Jayme  wrote:
> I performed an oVirt 4.2 upgrade on a 3-host cluster with NFS shared storage.
> The shared storage is mounted from one of the hosts.
>
> I upgraded the hosted engine first: I downloaded the 4.2 rpm, did a yum
> update, then ran engine-setup, which seemed to complete successfully.  At the end
> it powered down the hosted engine VM, but the VM never came back up and I was
> unable to start it.
>
> I proceeded to upgrade the three hosts, ovirt 4.2 rpm and a full yum update.
> I also rebooted each of the three hosts.
>
> After some time the hosts did come back and almost all of the VMs are
> running again and seem to be working ok with the exception of two:
>
> 1. The hosted VM still will not start; I've tried everything I can think of.
>
> 2. A VM that I know existed is not running and does not appear to exist; I
> have no idea where it is or how to start it.
>
> 1. Hosted engine
>
> From one of the hosts I get a weird error trying to start it:
>
> # hosted-engine --vm-start
> Command VM.getStats with args {'vmID':
> '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
> (code=1, message=Virtual machine does not exist: {'vmId':
> u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
>
> From the other two hosts I do not get the same error as above; sometimes it
> appears to start, but --vm-status shows errors such as:  Engine status
> : {"reason": "failed liveliness check", "health": "bad", "vm": "up",
> "detail": "Up"}
>
> Seeing these errors in syslog:
>
> Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error :
> qemuOpenFileAs:3183 : Failed to open file
> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
> No such file or directory
>
> Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error :
> qemuDomainStorageOpenStat:11492 : cannot stat file
> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
> Bad file descriptor
>
> 2. Missing VM.  virsh -r list on each host does not show the VM at all.  I
> know it existed and is important.  The log on one of the hosts even shows
> that it started recently and then stopped 10 or so minutes later:
>
> Jan 10 18:47:17 host3 systemd-machined: New machine qemu-9-Berna.
> Jan 10 18:47:17 host3 systemd: Started Virtual Machine qemu-9-Berna.
> Jan 10 18:47:17 host3 systemd: Starting Virtual Machine qemu-9-Berna.
> Jan 10 18:54:45 host3 systemd-machined: Machine qemu-9-Berna terminated.
>
> How can I find out the status of the "Berna" VM and get it running again?
>
> Thanks so much!
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Some major problems after 4.2 upgrade, could really use some assistance

2018-01-11 Thread Jayme
I performed an oVirt 4.2 upgrade on a 3-host cluster with NFS shared storage.
The shared storage is mounted from one of the hosts.

I upgraded the hosted engine first: I downloaded the 4.2 rpm, did a yum
update, then ran engine-setup, which seemed to complete successfully.  At the end
it powered down the hosted engine VM, but the VM never came back up and I was
unable to start it.

I proceeded to upgrade the three hosts, ovirt 4.2 rpm and a full yum
update.  I also rebooted each of the three hosts.

After some time the hosts did come back and almost all of the VMs are
running again and seem to be working ok with the exception of two:

1. The hosted VM still will not start; I've tried everything I can think of.

2. A VM that I know existed is not running and does not appear to exist; I
have no idea where it is or how to start it.

1. Hosted engine

From one of the hosts I get a weird error trying to start it:

# hosted-engine --vm-start
Command VM.getStats with args {'vmID':
'4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
(code=1, message=Virtual machine does not exist: {'vmId':
u'4013c829-c9d7-4b72-90d5-6fe58137504c'})

From the other two hosts I do not get the same error as above; sometimes it
appears to start, but --vm-status shows errors such as:  Engine status
: {"reason": "failed liveliness check", "health": "bad",
"vm": "up", "detail": "Up"}

Seeing these errors in syslog:

Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error :
qemuOpenFileAs:3183 : Failed to open file
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
No such file or directory

Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+: 1910: error :
qemuDomainStorageOpenStat:11492 : cannot stat file
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
Bad file descriptor
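
One way to dig into these errors, as a sketch: on a default file-based setup
the entries under /var/run/vdsm/storage are symlinks into the storage domain
mounted under /rhev/data-center/mnt, so a dangling link usually means the
domain is not mounted on that host.

  ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/  # dangling symlinks?
  ls /rhev/data-center/mnt/                                          # is the NFS domain mounted at all?
  systemctl restart ovirt-ha-broker ovirt-ha-agent                   # have the agent re-prepare the images
  hosted-engine --vm-status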

2. Missing VM.  virsh -r list on each host does not show the VM at all.  I
know it existed and is important.  The log on one of the hosts even shows
that it started recently and then stopped 10 or so minutes later:

Jan 10 18:47:17 host3 systemd-machined: New machine qemu-9-Berna.
Jan 10 18:47:17 host3 systemd: Started Virtual Machine qemu-9-Berna.
Jan 10 18:47:17 host3 systemd: Starting Virtual Machine qemu-9-Berna.
Jan 10 18:54:45 host3 systemd-machined: Machine qemu-9-Berna terminated.

How can I find out the status of the "Berna" VM and get it running again?
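
A few places to look for traces of it on the hosts, assuming stock vdsm and
libvirt log locations (the VM definition itself should still be in the engine
database once the engine is back):

  virsh -r list --all                                  # any defined but stopped domains?
  grep -i berna /var/log/vdsm/vdsm.log | tail -n 20    # what vdsm last did with the VM
  ls /var/log/libvirt/qemu/ | grep -i berna            # find the per-VM qemu log, which records why it exited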

Thanks so much!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users