Re: [ovirt-users] Can't start VM after shutdown

2016-06-20 Thread Michal Skrivanek

> On 13 Jun 2016, at 11:48, Colin Coe wrote:
> 
> Initially we only saw this on VMs with 2 or more disks. Subsequently we 
> confirmed that it does happen on single disk VMs also.
> 
Sounds like an issue with snapshots; "backing filename too long" should be
https://bugzilla.redhat.com/show_bug.cgi?id=1333627
Tal, can you confirm?

Thanks,
michal
> CC
> 
> [quoted text from gregor's messages of 12 and 13 Jun trimmed; they appear in full below]

Re: [ovirt-users] Can't start VM after shutdown

2016-06-13 Thread Colin Coe
Initially we only saw this on VMs with 2 or more disks. Subsequently we
confirmed that it does happen on single disk VMs also.

CC

[quoted text from gregor's messages of 12 and 13 Jun trimmed; they appear in full below]

Re: [ovirt-users] Can't start VM after shutdown

2016-06-13 Thread gregor
The VM has two disks, both VirtIO. During testing it became clear that
the problem occurs only with two disks. When I select only one disk for
the snapshot, it works.
Is this a problem in oVirt, or is it not possible to use two disks on a
VM in oVirt?

Do you also have two or more disks on your VM?

Here are the test results:
---
What does not work:
- Export the VM: Failed with error "ImageIsNotLegalChain and code 262"
- Clone the VM: Failed with error "IRSErrorException: Image is not a
legal chain", naming the ID of the second disk.

After removing the second disk:
- Create offline snapshot: Works
- Remove offline snapshot: After two hours I ran "engine-setup
--offline" to clean up the locked snapshot!
- Export the VM: Works
- Import the exported VM: Works
- Add Disk to the imported VM: Works
- Create offline snapshot of the imported VM: Failed
- Clone the VM: Works
- Add Disk to the cloned VM: Works
- Create offline snapshot of the cloned VM: Failed

What works:
- Make offline snapshot only with the system disk: Works
- Remove offline snapshot of the system disk: Works
- Make online snapshot only with the system disk: Works
- Remove online snapshot of the system disk: Works

cheers
gregor

On 12/06/16 19:42, gregor wrote:
> [quoted text trimmed; gregor's message of 12 Jun 19:42 appears in full below]

Re: [ovirt-users] Can't start VM after shutdown

2016-06-12 Thread Colin Coe
Yeah, that looks like the fix Red Hat GSS came up with. Note that it is only
with online snapshots that we've seen the problem, never offline, but YMMV.

What version of oVirt are you running? We're running RHEV 3.5.7 in prod
and test environments but 3.6.5 in dev, and we've not had a recurrence of
this problem in dev since moving to 3.6.x.

CC

On Mon, Jun 13, 2016 at 1:42 AM, gregor wrote:

> [gregor's solution quoted in full; trimmed here, it appears as the message of 12 Jun below]

Re: [ovirt-users] Can't start VM after shutdown

2016-06-12 Thread gregor
Hi,

I solved my problem. Here are the steps, but be careful: if you don't
know what these commands do and how to restore from a backup, don't follow this:

- ssh to the host
- systemctl stop ovirt-engine
- back up the database with "engine-backup"
- navigate to the image files
- back up the images: sudo -u vdsm rsync -av <source> <destination>
- check which one is the backing file: qemu-img info <image>
- check for damage: qemu-img check <image>
- qemu-img commit <image>
- rename the <image> plus its .lease and .meta files so they can't be
accessed (see the sketch below)
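As a concrete sketch of the chain repair (all paths and volume names below
are hypothetical placeholders, not from the original post; adjust them to
your storage domain layout):

  # back up the whole image group before touching anything
  sudo -u vdsm rsync -av /path/to/images/GROUP_UUID/ /root/image-backup/
  cd /path/to/images/GROUP_UUID
  qemu-img info TOP_VOLUME    # prints the overlay's backing file
  qemu-img check TOP_VOLUME   # verify the overlay before merging
  qemu-img commit TOP_VOLUME  # merge the overlay down into its backing file
  # park the merged overlay and its metadata so nothing can open it
  mv TOP_VOLUME TOP_VOLUME.bak
  mv TOP_VOLUME.lease TOP_VOLUME.lease.bak
  mv TOP_VOLUME.meta TOP_VOLUME.meta.bak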

- vmname=srv03
- db=engine
- sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
FROM images as i JOIN snapshots AS s ON (i.vm_snapshot_id =
s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
'$vmname' ORDER BY creation_date, description, disk_alias"
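The same query, laid out as plain SQL for readability (identical to the
one-liner above; note that the '$vmname' substitution only works because
the query is embedded in a double-quoted shell string):

  SELECT b.disk_alias, s.description, s.snapshot_id, i.creation_date,
         s.status, i.imagestatus, i.size, i.image_group_id,
         i.vm_snapshot_id, i.image_guid, i.parentid, i.active
  FROM images AS i
  JOIN snapshots AS s ON (i.vm_snapshot_id = s.snapshot_id)
  LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid)
  JOIN base_disks AS b ON (i.image_group_id = b.disk_id)
  WHERE v.vm_name = '$vmname'
  ORDER BY creation_date, description, disk_alias;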

- note the image_guid and parentid of the broken snapshot and of the
active snapshot; the active state is the image_guid whose parentid is
00000000-0000-0000-0000-000000000000
- igid_active=
- igid_broken=
- the parentid of the broken snapshot's image_guid must be the same as
the active snapshot's image_guid
- note the snapshot ids (see the example below)
- sid_active=
- sid_broken=
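For example (every UUID below is invented purely to show the shape of the
bookkeeping; take the real values from the SELECT output above):

  igid_active=11111111-aaaa-aaaa-aaaa-111111111111  # image_guid of the active row
  igid_broken=22222222-bbbb-bbbb-bbbb-222222222222  # image_guid of the broken row
  sid_active=33333333-cccc-cccc-cccc-333333333333   # snapshot_id of the active row
  sid_broken=44444444-dddd-dddd-dddd-444444444444   # snapshot_id of the broken row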

- delete the broken snapshot
- sudo -u postgres psql $db -c "DELETE FROM snapshots AS s WHERE
s.snapshot_id = '$sid_broken'"

- pid_new=00000000-0000-0000-0000-000000000000
- sudo -u postgres psql $db -c "SELECT * FROM images WHERE
vm_snapshot_id = '$sid_active' AND image_guid = '$igid_broken'"
- sudo -u postgres psql $db -c "DELETE FROM images WHERE vm_snapshot_id
= '$sid_broken' AND image_guid = '$igid_active'"
- sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map
WHERE image_id = '$igid_broken'"
- sudo -u postgres psql $db -c "DELETE FROM image_storage_domain_map
WHERE image_id = '$igid_broken'"
- sudo -u postgres psql $db -c "UPDATE images SET image_guid =
'$igid_active', parentid = '$pid_new' WHERE vm_snapshot_id =
'$sid_active' AND image_guid = '$igid_broken'"
- sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map"
- storid=
- diskprofileid=
- sudo -u postgres psql $db -c "INSERT INTO image_storage_domain_map
(image_id, storage_domain_id, disk_profile_id) VALUES ('$igid_broken',
'$stor_id', '$diskprofileid')"
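If you would rather be able to roll back a typo, the same DB surgery can be
run as one transaction. This is a sketch that simply mirrors the statements
above (it is not what the original post did), with the variables as set
earlier:

  sudo -u postgres psql $db -c "
  BEGIN;
  DELETE FROM snapshots WHERE snapshot_id = '$sid_broken';
  DELETE FROM images WHERE vm_snapshot_id = '$sid_broken'
    AND image_guid = '$igid_active';
  DELETE FROM image_storage_domain_map WHERE image_id = '$igid_broken';
  UPDATE images SET image_guid = '$igid_active', parentid = '$pid_new'
    WHERE vm_snapshot_id = '$sid_active' AND image_guid = '$igid_broken';
  INSERT INTO image_storage_domain_map
    (image_id, storage_domain_id, disk_profile_id)
    VALUES ('$igid_broken', '$storid', '$diskprofileid');
  COMMIT;"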

- check values
- sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
FROM images as i JOIN snapshots AS s ON (i.vm_snapshot_id =
s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
'$vmname' ORDER BY creation_date, description, disk_alias"

- check for errors
- engine-setup --offline
- systemctl start ovirt-engine

Now you should have a clean state and a working VM ;-)

What was tested:
- Power up and down the VM

What does not work:
- It's not possible to make offline snapshots; online was not tested
because I will not get into such trouble again. It took many hours
until the machine was up again.

PLEASE be aware and don't destroy your host and VM!

cheers
gregor

On 12/06/16 13:40, Colin Coe wrote:
> We've seen this with both Linux and Windows VMs.  I'm guessing that
> you've had failures on this VM in both snapshot create and delete
> operations.  oVirt/RHEV 3.5 seems particularly affected.  I'm told that
> oVirt 3.6.7 has the last of the fixes for these known snapshot problems.
> 
> My original email was worded wrong.  I meant that qemu-img gives
> "backing filename too long" errors.  You may have seen this in your logs.
> 
> Note also that you may be seeing an entirely unrelated problem.
> 
> You may wish to post your VDSM logs and the qemu log from
> /var/lib/libvirt/qemu/<vmname>.log
> 
> Hope this helps
> 
> CC
> 
> 
> On Sun, Jun 12, 2016 at 4:45 PM, gregor wrote:
> 
> [nested quoted messages trimmed; gregor's reply and Anantha Raghava's
> message appear in full below]

Re: [ovirt-users] Can't start VM after shutdown

2016-06-12 Thread gregor
Thanks for sharing the information; I will check the file structure and
database entries. I ran into problems with over-long names while coding an
oVirt backup tool [1]. Here is a limitations list [2]. The issue there was
that a Windows VM name can only be 15 characters long, while other VMs
can have names of up to 64 characters.

Which OS did you use on the problematic VM?

cheers
gregor

[1] https://github.com/wefixit-AT/oVirtBackup
[2]
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.2/html/Administration_Guide/Red_Hat_Enterprise_Virtualization_Manager_configuration_options_explanations_limitations_and_best_practices.html

On 12/06/16 02:16, Colin Coe wrote:
> We had the same problem. It turned out that qemu-img was complaining that
> the image file was too long. We use RHEV, and Red Hat support did the
> hard work to correct the image chain.  It's quite a process...
> 
> CC
> 
> On Sat, Jun 11, 2016 at 10:39 PM, gregor wrote:
> 
> Hi,
> 
> a VM has snapshots which can't be removed while the VM is up.
> Therefore I powered down the Windows Server 2012 VM. The snapshots
> still can't be removed and the VM can't boot anymore!
> 
> This is the message from engine.log
> 
> --
> Message: VM srv03 is down with error. Exit message: Bad volume
> specification
> --
> 
> Cloning is not possible; I get:
> --
> Message: VDSM command failed: Image is not a legal chain
> --
> 
> All other VMs can be powered down and started without any problem.
> What can I do?
> This is very important because right now no one can work :-(
> 
> cheers
> gregor


Re: [ovirt-users] Can't start VM after shutdown

2016-06-12 Thread gregor
Sounds bad. Recreating the VM is not an option because this is a production
VM; during testing I would need to recreate it more than once. oVirt works
perfectly with Linux VMs, but when it comes to Windows VMs we get lots
of problems.

Which OS did you use on the problematic VM?

cheers
gregor

On 11/06/16 19:22, Anantha Raghava wrote:
> Hi,
> 
> Even I observed this behaviour.
> 
> When we take the snapshot, the main VM from which the snapshot was
> taken is shut down and a new VM named external-<vmname> comes to life. We
> cannot get the original VM back to life, but a clone starts functioning.
> 
> We cannot remove the snapshot whether or not the VM is running. I had to
> remove the entire VM that came to life with the snapshot and recreate the
> entire VM from scratch. Luckily the VM was not yet in production, so we
> could afford it.
> 
> First, I could not understand why, when a snapshot is created, the VM
> with the snapshot comes to life and starts running instead of the
> original VM.
> 
> Is it necessary to shut down the VM before taking snapshots? A snapshot
> is supposed to be a backup of the original VM which, unless we restore
> it by cloning, should not come to life, as I understand it.
> 
> -- 
> 
> Thanks & Regards,
> 
> Anantha Raghava
> 
> 
> On Saturday 11 June 2016 08:09 PM, gregor wrote:
>> [original question of 11 Jun trimmed; it is quoted in full in the
>> message above]