Re: Help! Jobs stuck in pending state

2019-01-22 Thread Anurag Awasthi
Hi Alireza,

Could you elaborate on how you instantiated the jobs and on anything specific 
that went wrong in between? Deleting directly through SQL statements is usually 
very risky; the first attempt should go through whatever API support exists.
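
For example, a rough sketch with CloudMonkey (the job UUID is a placeholder;
listAsyncJobs / queryAsyncJobResult are the relevant public API calls here, as
far as I know):

```
# Inspect stuck jobs through the API before touching the database.
cloudmonkey list asyncjobs listall=true
# Check what a specific job reports (replace the UUID).
cloudmonkey query asyncjobresult jobid=<job-uuid>
```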

Also, you might want to use the GitHub issues page 
(https://github.com/apache/cloudstack/issues) to raise this, as I think most 
people active on the project have been following the issues list on that page.

Best Regards,
Anurag

On 1/23/19, 10:52 AM, "Alireza Eskandari"  wrote:

First I deleted the two jobs that existed in the vm_work_job table and their
related entries in the sync_queue table, but it didn't help.
Then I deleted all the entries in the sync_queue table, and again no success.
Any ideas?



On Wed, Jan 23, 2019 at 1:50 AM Wei ZHOU  wrote:

> If you know the instance ID and the MySQL password, it should work after
> removing some records in MySQL.
>
> ```
> set @id=X;
>
> delete from vm_work_job where vm_instance_id=@id;
> delete from sync_queue where sync_objid=@id;
> ```
>
> Alireza Eskandari wrote on Tue, Jan 22, 2019 at 10:59 PM:
>
> > Hi guys
> > I have opened a bug in jira about my problem in CS:
> > https://issues.apache.org/jira/browse/CLOUDSTACK-10401
> > CloudStack doesn't process jobs! My cloud is totally unusable.
> > Thanks in advance for your help.
> >
>




Re: Help! Jobs stuck in pending state

2019-01-22 Thread Alireza Eskandari
First I deleted the two jobs that existed in the vm_work_job table and their
related entries in the sync_queue table, but it didn't help.
Then I deleted all the entries in the sync_queue table, and again no success.
Any ideas?

On Wed, Jan 23, 2019 at 1:50 AM Wei ZHOU  wrote:

> If you know the instance ID and the MySQL password, it should work after
> removing some records in MySQL.
>
> ```
> set @id=X;
>
> delete from vm_work_job where vm_instance_id=@id;
> delete from sync_queue where sync_objid=@id;
> ```
>
> Alireza Eskandari wrote on Tue, Jan 22, 2019 at 10:59 PM:
>
> > Hi guys
> > I have opened a bug in jira about my problem in CS:
> > https://issues.apache.org/jira/browse/CLOUDSTACK-10401
> > CloudStack doesn't process jobs! My cloud is totally unusable.
> > Thanks in advance for your help.
> >
>


RE: Snapshots on KVM corrupting disk images

2019-01-22 Thread Sean Lair
Thanks Wei!  We really appreciate the response and the link.

Shouldn't we be doing something to block snapshot operations (scheduled and 
otherwise) in CloudStack?
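
If memory serves, there is a global setting gating volume snapshots of running
VMs on KVM; the name below is from memory, so please verify it against your
version before relying on it:

```
# Disable (or confirm disabled) volume snapshots on running KVM VMs,
# then restart the management server for the change to take effect.
cloudmonkey update configuration name=kvm.snapshot.enabled value=false
```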

-Original Message-
From: Wei ZHOU [mailto:ustcweiz...@gmail.com] 
Sent: Tuesday, January 22, 2019 4:06 PM
To: dev@cloudstack.apache.org
Subject: Re: Snapshots on KVM corrupting disk images

Hi Sean,

Volume snapshots (including recurring ones) of running VMs should be disabled in CloudStack.

According to some discussions (for example 
https://bugzilla.redhat.com/show_bug.cgi?id=920020), the image might be 
corrupted by concurrent read/write operations during a volume snapshot (taken 
with qemu-img snapshot).

```

qcow2 images must not be used in read-write mode from two processes at the same 
time. You can have them opened either by one read-write process or by 
many read-only processes. Having one (paused) read-write process (the running
VM) and additional read-only processes (copying out a snapshot with qemu-img) 
may happen to work in practice, but you're on your own and we won't give 
support for such attempts.

```
The safe way to take a volume snapshot of a running VM is:
(1) take a VM snapshot (the VM will be paused)
(2) then create a volume snapshot from the VM snapshot

-Wei



> Sean Lair wrote on Tue, Jan 22, 2019 at 5:30 PM:

> Hi all,
>
> We had some instances where VM disks became corrupted when using 
> KVM snapshots.  We are running CloudStack 4.9.3 with KVM on CentOS 7.
>
> The first time was when someone mass-enabled scheduled snapshots on a 
> large number of VMs and secondary storage filled up.  We had to 
> restore all those VM disks...  But we believed it was just our fault for 
> letting secondary storage fill up.
>
> Today we had an instance where a snapshot failed, and now the disk 
> image is corrupted and the VM can't boot.  Here is the output of some 
> commands:
>
> ---
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': 
> Could not read snapshots: File too large
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': 
> Could not read snapshots: File too large
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> -rw-r--r--. 1 root root 73G Jan 22 11:04
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> ---
>
> We tried restoring to before the snapshot failure, but still have 
> strange
> errors:
>
> --
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> -rw-r--r--. 1 root root 73G Jan 22 11:04
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> image: ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> file format: qcow2
> virtual size: 50G (53687091200 bytes)
> disk size: 73G
> cluster_size: 65536
> Snapshot list:
> IDTAG VM SIZEDATE   VM CLOCK
> 1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43
> 3099:35:55.242
> 2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16
> 3431:52:23.942
> Format specific information:
> compat: 1.1
> lazy refcounts: false
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> tcmalloc: large alloc 1539750010880 bytes == (nil) @  0x7fb9cbbf7bf3
> 0x7fb9cbc19488 0x7fb9cb71dc56 0x55d16ddf1c77 0x55d16ddf1edc 
> 0x55d16ddf2541 0x55d16ddf465e 0x55d16ddf8ad1 0x55d16de336db 
> 0x55d16de373e6 0x7fb9c63a3c05 0x55d16ddd9f7d
> No errors were found on the image.
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img 
> snapshot -l ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> Snapshot list:
> IDTAG VM SIZEDATE   VM CLOCK
> 1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43
> 3099:35:55.242
> 2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16
> 3431:52:23.942
> --
>
> Everyone is now extremely hesitant to use snapshots in KVM  We 
> tried deleting the snapshots in the restored disk image, but it errors out...
>
>
> Does anyone else have issues with KVM snapshots?  We are considering 
> just disabling this functionality now...
>
> Thanks
> Sean
>
>
>
>
>
>
>


Re: Snapshots on KVM corrupting disk images

2019-01-22 Thread Ivan Kudryavtsev
I've run into situations where the CloudStack+KVM+qcow2+snapshots combination
led to corrupted images, mostly on 4.3 with NFS, but I thought CS stopped the VM
just before taking the snapshot. At least the VM's behavior while a VM snapshot
is created (freezing) suggests that it does, which is why this looks strange.
But, in general, I agree that the above combination leads to data corruption,
especially when the storage is under I/O pressure. We recommend that our
customers avoid running snapshots with such a combination if possible.

On Wed, Jan 23, 2019 at 5:06 AM, Wei ZHOU  wrote:

> Hi Sean,
>
> Volume snapshots (including recurring ones) of running VMs should be
> disabled in CloudStack.
>
> According to some discussions (for example
> https://bugzilla.redhat.com/show_bug.cgi?id=920020), the image might be
> corrupted due to the concurrent read/write operations in volume snapshot
> (by qemu-img snapshot).
>
> ```
>
> qcow2 images must not be used in read-write mode from two processes at the
> same
> time. You can have them opened either by one read-write process or
> by
> many read-only processes. Having one (paused) read-write process (the
> running
> VM) and additional read-only processes (copying out a snapshot with
> qemu-img)
> may happen to work in practice, but you're on your own and we won't give
> support for such attempts.
>
> ```
> The safe way to take a volume snapshot of a running VM is:
> (1) take a VM snapshot (the VM will be paused)
> (2) then create a volume snapshot from the VM snapshot
>
> -Wei
>
>
>
> Sean Lair wrote on Tue, Jan 22, 2019 at 5:30 PM:
>
> > Hi all,
> >
> > We had some instances where VM disks became corrupted when using KVM
> > snapshots.  We are running CloudStack 4.9.3 with KVM on CentOS 7.
> >
> > The first time was when someone mass-enabled scheduled snapshots on a
> > large number of VMs and secondary storage filled up.  We had to restore
> > all those VM disks...  But we believed it was just our fault for letting
> > secondary storage fill up.
> >
> > Today we had an instance where a snapshot failed, and now the disk image
> > is corrupted and the VM can't boot.  Here is the output of some commands:
> >
> > ---
> > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could
> > not read snapshots: File too large
> >
> > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could
> > not read snapshots: File too large
> >
> > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > -rw-r--r--. 1 root root 73G Jan 22 11:04
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > ---
> >
> > We tried restoring to before the snapshot failure, but still have strange
> > errors:
> >
> > --
> > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > -rw-r--r--. 1 root root 73G Jan 22 11:04
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> >
> > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > image: ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > file format: qcow2
> > virtual size: 50G (53687091200 bytes)
> > disk size: 73G
> > cluster_size: 65536
> > Snapshot list:
> > IDTAG VM SIZEDATE   VM CLOCK
> > 1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43
> > 3099:35:55.242
> > 2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16
> > 3431:52:23.942
> > Format specific information:
> > compat: 1.1
> > lazy refcounts: false
> >
> > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > tcmalloc: large alloc 1539750010880 bytes == (nil) @  0x7fb9cbbf7bf3
> > 0x7fb9cbc19488 0x7fb9cb71dc56 0x55d16ddf1c77 0x55d16ddf1edc
> 0x55d16ddf2541
> > 0x55d16ddf465e 0x55d16ddf8ad1 0x55d16de336db 0x55d16de373e6
> 0x7fb9c63a3c05
> > 0x55d16ddd9f7d
> > No errors were found on the image.
> >
> > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img
> snapshot
> > -l ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > Snapshot list:
> > IDTAG VM SIZEDATE   VM CLOCK
> > 1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43
> > 3099:35:55.242
> > 2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16
> > 3431:52:23.942
> > --
> >
> > Everyone is now extremely hesitant to use snapshots in KVM  We tried
> > deleting the snapshots in the restored disk image, but it errors out...
> >
> >
> > Does anyone else have issues with KVM snapshots?  We are considering just
> > disabling this functionality now...
> >
> > Thanks
> 

Re: Reply: CloudStack 4.11.2 Snapshot Revert fail

2019-01-22 Thread Haijiao
This is a fresh installation (XenServer 7.1.2 + ACS 4.11.2).



We simply cannot revert the VM snapshot even though there have been no changes 
at all to the VM or template.


We can consistently reproduce this issue in our environment with the error log 
attached. And we believe the issue doesn't exist in another environment 
(XenServer 6.5 + ACS 4.7.1).





Re: Help! Jobs stuck in pending state

2019-01-22 Thread Alireza Eskandari
Here is my query on those tables:

MySQL [cloud]> select * from vm_work_job;
+-------+----------+----------+----------------+
| id    | step     | vm_type  | vm_instance_id |
+-------+----------+----------+----------------+
| 57262 | Prepare  | Instance |            691 |
| 57268 | Starting | Instance |            748 |
+-------+----------+----------+----------------+
2 rows in set (0.00 sec)


MySQL [cloud]> SELECT * FROM cloud.sync_queue;
+-------+----------------+------------+-------------------+---------------------+---------------------+------------+------------------+
| id    | sync_objtype   | sync_objid | queue_proc_number | created             | last_updated        | queue_size | queue_size_limit |
+-------+----------------+------------+-------------------+---------------------+---------------------+------------+------------------+
|     4 | VmWorkJobQueue |          1 |                 3 | 2017-08-28 12:24:09 | 2017-08-28 12:24:42 |          0 |                1 |
|     7 | VmWorkJobQueue |          2 |                 4 | 2017-08-28 12:24:10 | 2017-08-28 13:18:54 |          0 |                1 |
|    19 | VmWorkJobQueue |          3 |                 4 | 2017-08-28 12:44:09 | 2017-08-29 11:31:09 |          0 |                1 |
|    34 | VmWorkJobQueue |          4 |                 2 | 2017-08-29 11:03:28 | 2017-08-29 11:24:59 |          0 |                1 |
.
.
.
| 16360 | VmWorkJobQueue |        745 |                 2 | 2019-01-22 07:06:48 | 2019-01-22 08:06:56 |          0 |                1 |
| 16369 | VmWorkJobQueue |        746 |                 2 | 2019-01-22 11:01:45 | 2019-01-22 12:03:54 |          0 |                1 |
| 16378 | VmWorkJobQueue |        747 |                 2 | 2019-01-22 13:30:48 | 2019-01-22 14:32:54 |          0 |                1 |
| 16390 | VmWorkJobQueue |        748 |                 1 | 2019-01-22 15:48:53 | 2019-01-22 16:12:53 |          0 |                1 |
+-------+----------------+------------+-------------------+---------------------+---------------------+------------+------------------+
740 rows in set (0.01 sec)
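
In case it helps, a read-only check to correlate these with their async jobs
before deleting anything (a sketch assuming the standard cloud schema, where
vm_work_job.id is also the async_job id):

```
mysql -u cloud -p cloud <<'SQL'
-- List pending work jobs together with their async_job status.
SELECT w.id, w.step, w.vm_instance_id, j.job_status, j.created
FROM vm_work_job w
JOIN async_job j ON j.id = w.id;
SQL
```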




On Wed, Jan 23, 2019 at 1:50 AM Wei ZHOU  wrote:

> If you know the instance ID and the MySQL password, it should work after
> removing some records in MySQL.
>
> ```
> set @id=X;
>
> delete from vm_work_job where vm_instance_id=@id;
> delete from sync_queue where sync_objid=@id;
> ```
>
> Alireza Eskandari wrote on Tue, Jan 22, 2019 at 10:59 PM:
>
> > Hi guys
> > I have opened a bug in jira about my problem in CS:
> > https://issues.apache.org/jira/browse/CLOUDSTACK-10401
> > CloudStack doesn't process jobs! My cloud is totally unusable.
> > Thanks in advance for your help.
> >
>


Re: Help! Jobs stuck in pending state

2019-01-22 Thread Wei ZHOU
If you know the instance ID and the MySQL password, it should work after
removing some records in MySQL.

```
set @id=X;

delete from vm_work_job where vm_instance_id=@id;
delete from sync_queue where sync_objid=@id;
```
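
A slightly safer variant of the same idea, as a sketch (standard cloud schema
assumed; check what the SELECTs return, and take a DB backup, before deleting):

```
mysql -u cloud -p cloud <<'SQL'
SET @id = 748;  -- example: one of the stuck instance ids from this thread
-- Verify what would be removed...
SELECT * FROM vm_work_job WHERE vm_instance_id = @id;
SELECT * FROM sync_queue  WHERE sync_objid = @id;
-- ...then remove the stuck entries.
DELETE FROM vm_work_job WHERE vm_instance_id = @id;
DELETE FROM sync_queue  WHERE sync_objid = @id;
SQL
```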

Alireza Eskandari wrote on Tue, Jan 22, 2019 at 10:59 PM:

> Hi guys
> I have opened a bug in jira about my problem in CS:
> https://issues.apache.org/jira/browse/CLOUDSTACK-10401
> CloudStack doesn't process jobs! My cloud is totally unusable.
> Thanks in advance for your help.
>


Re: Snapshots on KVM corrupting disk images

2019-01-22 Thread Wei ZHOU
Hi Sean,

Volume snapshots (including recurring ones) of running VMs should be disabled
in CloudStack.

According to some discussions (for example
https://bugzilla.redhat.com/show_bug.cgi?id=920020), the image might be
corrupted by concurrent read/write operations during a volume snapshot (taken
with qemu-img snapshot).

```

qcow2 images must not be used in read-write mode from two processes at the same
time. You can have them opened either by one read-write process or by
many read-only processes. Having one (paused) read-write process (the running
VM) and additional read-only processes (copying out a snapshot with qemu-img)
may happen to work in practice, but you're on your own and we won't give
support for such attempts.

```
The safe way to take a volume snapshot of a running VM is:
(1) take a VM snapshot (the VM will be paused)
(2) then create a volume snapshot from the VM snapshot
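
At the libvirt level, the same principle can be sketched like this (domain and
snapshot names are made up, and this is not the exact code path ACS uses):

```
# Let the running QEMU process create the snapshot itself via libvirt,
# so no second process opens the qcow2 file read-write.
virsh snapshot-create-as i-2-10-VM snap-example --atomic

# Copying data out with qemu-img is only safe once the guest no longer
# holds the image open read-write.
virsh shutdown i-2-10-VM
qemu-img convert -O qcow2 /path/to/disk.qcow2 /backup/disk-snap.qcow2
```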

-Wei



Sean Lair wrote on Tue, Jan 22, 2019 at 5:30 PM:

> Hi all,
>
> We had some instances where VM disks became corrupted when using KVM
> snapshots.  We are running CloudStack 4.9.3 with KVM on CentOS 7.
>
> The first time was when someone mass-enabled scheduled snapshots on a
> large number of VMs and secondary storage filled up.  We had to restore all
> those VM disks...  But we believed it was just our fault for letting
> secondary storage fill up.
>
> Today we had an instance where a snapshot failed, and now the disk image is
> corrupted and the VM can't boot.  Here is the output of some commands:
>
> ---
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could
> not read snapshots: File too large
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could
> not read snapshots: File too large
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> -rw-r--r--. 1 root root 73G Jan 22 11:04
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> ---
>
> We tried restoring to before the snapshot failure, but still have strange
> errors:
>
> --
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> -rw-r--r--. 1 root root 73G Jan 22 11:04
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> image: ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> file format: qcow2
> virtual size: 50G (53687091200 bytes)
> disk size: 73G
> cluster_size: 65536
> Snapshot list:
> IDTAG VM SIZEDATE   VM CLOCK
> 1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43
> 3099:35:55.242
> 2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16
> 3431:52:23.942
> Format specific information:
> compat: 1.1
> lazy refcounts: false
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check
> ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> tcmalloc: large alloc 1539750010880 bytes == (nil) @  0x7fb9cbbf7bf3
> 0x7fb9cbc19488 0x7fb9cb71dc56 0x55d16ddf1c77 0x55d16ddf1edc 0x55d16ddf2541
> 0x55d16ddf465e 0x55d16ddf8ad1 0x55d16de336db 0x55d16de373e6 0x7fb9c63a3c05
> 0x55d16ddd9f7d
> No errors were found on the image.
>
> [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img snapshot
> -l ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> Snapshot list:
> IDTAG VM SIZEDATE   VM CLOCK
> 1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43
> 3099:35:55.242
> 2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16
> 3431:52:23.942
> --
>
> Everyone is now extremely hesitant to use snapshots in KVM  We tried
> deleting the snapshots in the restored disk image, but it errors out...
>
>
> Does anyone else have issues with KVM snapshots?  We are considering just
> disabling this functionality now...
>
> Thanks
> Sean
>
>
>
>
>
>
>


Help! Jobs stuck in pending state

2019-01-22 Thread Alireza Eskandari
Hi guys
I have opened a bug in jira about my problem in CS:
https://issues.apache.org/jira/browse/CLOUDSTACK-10401
CloudStack doesn't process jobs! My cloud is totally unusable.
Thanks in advance for your help.


RE: CloudStack 4.11.2 Snapshot Revert fail

2019-01-22 Thread Sean Lair
Luckily it was for a VM that is never touched in CloudStack.  The snapshots were 
scheduled ones.  No, there were no changes to the VM or template.

We are due to upgrade from 4.9.3, but we have not yet.

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com] 
Sent: Tuesday, January 22, 2019 11:05 AM
To: dev 
Cc: us...@cloudstack.apache.org
Subject: Re: CloudStack 4.11.2 Snapshot Revert fail

Hi there,

After the VM was deployed and the snapshots created, were there any changes to 
the VM or to the template from which the VM was created? Did the ACS version 
get upgraded?

Best

On Tue, 22 Jan 2019 at 17:52, li jerry  wrote:

> Hi all,
>
> I use CloudStack 4.11.2 to manage XenServer 7.1.2 (XenServer CU2).
>
> Reverting a VM snapshot fails (the snapshot does not contain a memory
> snapshot).
>
> 2019-01-23 00:06:54,210 DEBUG [c.c.a.m.ClusteredAgentAttache] 
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Forwarding Seq
> 5-6201456686889173919:  { Cmd , MgmtId: 240661250348494, via: 
> 5(wxac6001),
> Ver: v1, Flags: 100011,
> [{"com.cloud.agent.api.RevertToVMSnapshotCommand":{"reloadVm":false,"vmUuid":"b2a78e9c-06ab-4200-ad6d-fe095f622502","volumeTOs":[{"uuid":"7a58ffdc-b02c-41bf-963c-be56c2da0e9b","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN10","id":19,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN10","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN10/?ROLE=Primary=WXACP01CL01_LUN10","isManaged":false}},"name":"ROOT-33","size":21474836480,"path":"dd1cf43d-d5a4-4633-9c3e-8f73d1ccc484","volumeId":93,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":93,"deviceId":0,"hypervisorType":"XenServer"},{"uuid":"74268aa2-b4e5-4574-a981-027e55b5383f","volumeType":"DATADISK","dataStore":{"org.apache.
> cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN01",
> "id":1,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_L
> UN01","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN01/?ROLE=Pri
> mary=WXACP01CL01_LUN01","isManaged":false}},"name":"DATA-33"
> ,"size":1099511627776,"path":"e2ead686-d0bb-49f2-b656-77c2bf497990","v
> olumeId":95,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisi
> oningType":"THIN","id":95,"deviceId":1,"hypervisorType":"XenServer"}],
> "target":{"id":27,"snapshotName":"i-2-33-VM_VS_20190122155503","type":
> "Disk","createTime":1548172503000,"current":true,"description":"asdfas
> df","quiescevm":true},"vmName":"i-2-33-VM","guestOSType":"CentOS
> 7","wait":0}}] } to 55935224135780
>
> 2019-01-23 00:06:54,222 DEBUG [c.c.a.t.Request]
> (AgentManager-Handler-14:null) (logid:) Seq 5-6201456686889173919:
> Processing:  { Ans: , MgmtId: 240661250348494, via: 5, Ver: v1, Flags: 
> 10, 
> [{"com.cloud.agent.api.RevertToVMSnapshotAnswer":{"result":false,"details":"
> Hypervisor 
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource
> doesn't support guest OS type CentOS 7. you can choose 'Other install 
> media' to run it as HVM","wait":0}}] }
> 2019-01-23 00:06:54,223 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Received:  { Ans: , MgmtId:
> 240661250348494, via: 5(wxac6001), Ver: v1, Flags: 10, { 
> RevertToVMSnapshotAnswer } }
> 2019-01-23 00:06:54,223 ERROR [o.a.c.s.v.DefaultVMSnapshotStrategy]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Revert VM: i-2-33-VM to snapshot:
> i-2-33-VM_VS_20190122155503 failed due to  Hypervisor 
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't 
> support guest OS type CentOS 7. you can choose 'Other install media' 
> to run it as HVM
> 2019-01-23 00:06:54,226 DEBUG [c.c.v.s.VMSnapshotManagerImpl] 
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Failed to revert vmsnapshot: 27
> com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-33-VM 
> to
> snapshot: i-2-33-VM_VS_20190122155503 failed due to  Hypervisor 
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't 
> support guest OS type CentOS 7. you can choose 'Other install media' 
> to run it as HVM
>   at
> org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy.revertVMSnapshot(DefaultVMSnapshotStrategy.java:393)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:846)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:1211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at
> 

RE: CloudStack 4.11.2 Snapshot Revert fail

2019-01-22 Thread Sean Lair
Sorry, I replied to the wrong snapshot thread.


-Original Message-
From: Sean Lair 
Sent: Tuesday, January 22, 2019 11:48 AM
To: dev 
Cc: us...@cloudstack.apache.org
Subject: RE: CloudStack 4.11.2 Snapshot Revert fail

Luckily it was for a VM that is never touched in CloudStack.  The snapshots were 
scheduled ones.  No, there were no changes to the VM or template.

We are due to upgrade from 4.9.3, but we have not yet.

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Tuesday, January 22, 2019 11:05 AM
To: dev 
Cc: us...@cloudstack.apache.org
Subject: Re: CloudStack 4.11.2 Snapshot Revert fail

Hi there,

After the VM was deployed and the snapshots created, were there any changes to 
the VM or to the template from which the VM was created? Did the ACS version 
get upgraded?

Best

On Tue, 22 Jan 2019 at 17:52, li jerry  wrote:

> Hi all,
>
> I use CloudStack 4.11.2 to manage XenServer 7.1.2 (XenServer CU2).
>
> Reverting a VM snapshot fails (the snapshot does not contain a memory
> snapshot).
>
> 2019-01-23 00:06:54,210 DEBUG [c.c.a.m.ClusteredAgentAttache] 
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Forwarding Seq
> 5-6201456686889173919:  { Cmd , MgmtId: 240661250348494, via: 
> 5(wxac6001),
> Ver: v1, Flags: 100011,
> [{"com.cloud.agent.api.RevertToVMSnapshotCommand":{"reloadVm":false,"vmUuid":"b2a78e9c-06ab-4200-ad6d-fe095f622502","volumeTOs":[{"uuid":"7a58ffdc-b02c-41bf-963c-be56c2da0e9b","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN10","id":19,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN10","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN10/?ROLE=Primary=WXACP01CL01_LUN10","isManaged":false}},"name":"ROOT-33","size":21474836480,"path":"dd1cf43d-d5a4-4633-9c3e-8f73d1ccc484","volumeId":93,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":93,"deviceId":0,"hypervisorType":"XenServer"},{"uuid":"74268aa2-b4e5-4574-a981-027e55b5383f","volumeType":"DATADISK","dataStore":{"org.apache.
> cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN01",
> "id":1,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_L
> UN01","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN01/?ROLE=Pri
> mary=WXACP01CL01_LUN01","isManaged":false}},"name":"DATA-33"
> ,"size":1099511627776,"path":"e2ead686-d0bb-49f2-b656-77c2bf497990","v
> olumeId":95,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisi
> oningType":"THIN","id":95,"deviceId":1,"hypervisorType":"XenServer"}],
> "target":{"id":27,"snapshotName":"i-2-33-VM_VS_20190122155503","type":
> "Disk","createTime":1548172503000,"current":true,"description":"asdfas
> df","quiescevm":true},"vmName":"i-2-33-VM","guestOSType":"CentOS
> 7","wait":0}}] } to 55935224135780
>
> 2019-01-23 00:06:54,222 DEBUG [c.c.a.t.Request]
> (AgentManager-Handler-14:null) (logid:) Seq 5-6201456686889173919:
> Processing:  { Ans: , MgmtId: 240661250348494, via: 5, Ver: v1, Flags: 
> 10, 
> [{"com.cloud.agent.api.RevertToVMSnapshotAnswer":{"result":false,"details":"
> Hypervisor
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource
> doesn't support guest OS type CentOS 7. you can choose 'Other install 
> media' to run it as HVM","wait":0}}] }
> 2019-01-23 00:06:54,223 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Received:  { Ans: , MgmtId:
> 240661250348494, via: 5(wxac6001), Ver: v1, Flags: 10, { 
> RevertToVMSnapshotAnswer } }
> 2019-01-23 00:06:54,223 ERROR [o.a.c.s.v.DefaultVMSnapshotStrategy]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Revert VM: i-2-33-VM to snapshot:
> i-2-33-VM_VS_20190122155503 failed due to  Hypervisor 
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't 
> support guest OS type CentOS 7. you can choose 'Other install media'
> to run it as HVM
> 2019-01-23 00:06:54,226 DEBUG [c.c.v.s.VMSnapshotManagerImpl] 
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Failed to revert vmsnapshot: 27
> com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-33-VM 
> to
> snapshot: i-2-33-VM_VS_20190122155503 failed due to  Hypervisor 
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't 
> support guest OS type CentOS 7. you can choose 'Other install media'
> to run it as HVM
>   at
> org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy.revertVMSnapshot(DefaultVMSnapshotStrategy.java:393)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:846)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:1211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at
> 

Reply: CloudStack 4.11.2 Snapshot Revert fail

2019-01-22 Thread li jerry
The VM was deployed from a template.

After the snapshot was created, the compute offering was not changed and there 
was no upgrade.

We only shut down the VM and performed the snapshot revert.

From: Andrija Panic
Sent: Wednesday, January 23, 2019 1:05 AM
To: dev
Cc: us...@cloudstack.apache.org
Subject: Re: CloudStack 4.11.2 Snapshot Revert fail

Hi there,

After the VM was deployed and the snapshots created, were there any changes to
the VM or to the template from which the VM was created? Did the ACS version
get upgraded?

Best

On Tue, 22 Jan 2019 at 17:52, li jerry  wrote:

> Hi all,
>
> I use CloudStack 4.11.2 to manage XenServer 7.1.2 (XenServer CU2).
>
> Reverting a VM snapshot fails (the snapshot does not contain a memory
> snapshot).
>
> 2019-01-23 00:06:54,210 DEBUG [c.c.a.m.ClusteredAgentAttache]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Forwarding Seq
> 5-6201456686889173919:  { Cmd , MgmtId: 240661250348494, via: 5(wxac6001),
> Ver: v1, Flags: 100011,
> [{"com.cloud.agent.api.RevertToVMSnapshotCommand":{"reloadVm":false,"vmUuid":"b2a78e9c-06ab-4200-ad6d-fe095f622502","volumeTOs":[{"uuid":"7a58ffdc-b02c-41bf-963c-be56c2da0e9b","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN10","id":19,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN10","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN10/?ROLE=Primary=WXACP01CL01_LUN10","isManaged":false}},"name":"ROOT-33","size":21474836480,"path":"dd1cf43d-d5a4-4633-9c3e-8f73d1ccc484","volumeId":93,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":93,"deviceId":0,"hypervisorType":"XenServer"},{"uuid":"74268aa2-b4e5-4574-a981-027e55b5383f","volumeType":"DATADISK","dataStore":{"org.apache.
> cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN01","id":1,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN01","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN01/?ROLE=Primary=WXACP01CL01_LUN01","isManaged":false}},"name":"DATA-33","size":1099511627776,"path":"e2ead686-d0bb-49f2-b656-77c2bf497990","volumeId":95,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":95,"deviceId":1,"hypervisorType":"XenServer"}],"target":{"id":27,"snapshotName":"i-2-33-VM_VS_20190122155503","type":"Disk","createTime":1548172503000,"current":true,"description":"asdfasdf","quiescevm":true},"vmName":"i-2-33-VM","guestOSType":"CentOS
> 7","wait":0}}] } to 55935224135780
>
> 2019-01-23 00:06:54,222 DEBUG [c.c.a.t.Request]
> (AgentManager-Handler-14:null) (logid:) Seq 5-6201456686889173919:
> Processing:  { Ans: , MgmtId: 240661250348494, via: 5, Ver: v1, Flags: 10,
> [{"com.cloud.agent.api.RevertToVMSnapshotAnswer":{"result":false,"details":"
> Hypervisor com.cloud.hypervisor.xenserver.resource.XenServer650Resource
> doesn't support guest OS type CentOS 7. you can choose 'Other install
> media' to run it as HVM","wait":0}}] }
> 2019-01-23 00:06:54,223 DEBUG [c.c.a.t.Request]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Received:  { Ans: , MgmtId:
> 240661250348494, via: 5(wxac6001), Ver: v1, Flags: 10, {
> RevertToVMSnapshotAnswer } }
> 2019-01-23 00:06:54,223 ERROR [o.a.c.s.v.DefaultVMSnapshotStrategy]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Revert VM: i-2-33-VM to snapshot:
> i-2-33-VM_VS_20190122155503 failed due to  Hypervisor
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't
> support guest OS type CentOS 7. you can choose 'Other install media' to run
> it as HVM
> 2019-01-23 00:06:54,226 DEBUG [c.c.v.s.VMSnapshotManagerImpl]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Failed to revert vmsnapshot: 27
> com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-33-VM to
> snapshot: i-2-33-VM_VS_20190122155503 failed due to  Hypervisor
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't
> support guest OS type CentOS 7. you can choose 'Other install media' to run
> it as HVM
>   at
> org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy.revertVMSnapshot(DefaultVMSnapshotStrategy.java:393)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:846)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:1211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at
> 

Re: CloudStack 4.11.2 Snapshot Revert fail

2019-01-22 Thread Andrija Panic
Hi there,

After the VM was deployed and the snapshots created, were there any changes to
the VM or to the template from which the VM was created? Did the ACS version
get upgraded?

Best

On Tue, 22 Jan 2019 at 17:52, li jerry  wrote:

> Hi all,
>
> I use CloudStack 4.11.2 to manage XenServer 7.1.2 (XenServer CU2).
>
> Reverting a VM snapshot fails (the snapshot does not contain a memory
> snapshot).
>
> 2019-01-23 00:06:54,210 DEBUG [c.c.a.m.ClusteredAgentAttache]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Forwarding Seq
> 5-6201456686889173919:  { Cmd , MgmtId: 240661250348494, via: 5(wxac6001),
> Ver: v1, Flags: 100011,
> [{"com.cloud.agent.api.RevertToVMSnapshotCommand":{"reloadVm":false,"vmUuid":"b2a78e9c-06ab-4200-ad6d-fe095f622502","volumeTOs":[{"uuid":"7a58ffdc-b02c-41bf-963c-be56c2da0e9b","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN10","id":19,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN10","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN10/?ROLE=Primary=WXACP01CL01_LUN10","isManaged":false}},"name":"ROOT-33","size":21474836480,"path":"dd1cf43d-d5a4-4633-9c3e-8f73d1ccc484","volumeId":93,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":93,"deviceId":0,"hypervisorType":"XenServer"},{"uuid":"74268aa2-b4e5-4574-a981-027e55b5383f","volumeType":"DATADISK","dataStore":{"org.apache.
> cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN01","id":1,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN01","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN01/?ROLE=Primary=WXACP01CL01_LUN01","isManaged":false}},"name":"DATA-33","size":1099511627776,"path":"e2ead686-d0bb-49f2-b656-77c2bf497990","volumeId":95,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":95,"deviceId":1,"hypervisorType":"XenServer"}],"target":{"id":27,"snapshotName":"i-2-33-VM_VS_20190122155503","type":"Disk","createTime":1548172503000,"current":true,"description":"asdfasdf","quiescevm":true},"vmName":"i-2-33-VM","guestOSType":"CentOS
> 7","wait":0}}] } to 55935224135780
>
> 2019-01-23 00:06:54,222 DEBUG [c.c.a.t.Request]
> (AgentManager-Handler-14:null) (logid:) Seq 5-6201456686889173919:
> Processing:  { Ans: , MgmtId: 240661250348494, via: 5, Ver: v1, Flags: 10,
> [{"com.cloud.agent.api.RevertToVMSnapshotAnswer":{"result":false,"details":"
> Hypervisor com.cloud.hypervisor.xenserver.resource.XenServer650Resource
> doesn't support guest OS type CentOS 7. you can choose 'Other install
> media' to run it as HVM","wait":0}}] }
> 2019-01-23 00:06:54,223 DEBUG [c.c.a.t.Request]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Seq 5-6201456686889173919: Received:  { Ans: , MgmtId:
> 240661250348494, via: 5(wxac6001), Ver: v1, Flags: 10, {
> RevertToVMSnapshotAnswer } }
> 2019-01-23 00:06:54,223 ERROR [o.a.c.s.v.DefaultVMSnapshotStrategy]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Revert VM: i-2-33-VM to snapshot:
> i-2-33-VM_VS_20190122155503 failed due to  Hypervisor
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't
> support guest OS type CentOS 7. you can choose 'Other install media' to run
> it as HVM
> 2019-01-23 00:06:54,226 DEBUG [c.c.v.s.VMSnapshotManagerImpl]
> (Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9)
> (logid:a9ef7fe7) Failed to revert vmsnapshot: 27
> com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-33-VM to
> snapshot: i-2-33-VM_VS_20190122155503 failed due to  Hypervisor
> com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't
> support guest OS type CentOS 7. you can choose 'Other install media' to run
> it as HVM
>   at
> org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy.revertVMSnapshot(DefaultVMSnapshotStrategy.java:393)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:846)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:1211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>   at
> com.cloud.vm.snapshot.VMSnapshotManagerImpl.handleVmWorkJob(VMSnapshotManagerImpl.java:1224)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 

RE: Snapshots on KVM corrupting disk images

2019-01-22 Thread Sean Lair
Hi Simon,

It is an NFS mount.  The underlying storage is NetApp, which we run a lot of 
different environments on; it is rock-solid, and the only issues we've had are 
with KVM snapshots.

Thanks
Sean

-Original Message-
From: Simon Weller [mailto:swel...@ena.com.INVALID] 
Sent: Tuesday, January 22, 2019 10:42 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Snapshots on KVM corrupting disk images

Sean,


What underlying primary storage are you using and how is it being utilized by 
ACS (e.g. NFS, shared mount et al)?



- Si



From: Sean Lair 
Sent: Tuesday, January 22, 2019 10:30 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Snapshots on KVM corrupting disk images

Hi all,

We had some instances where VM disks became corrupted when using KVM 
snapshots.  We are running CloudStack 4.9.3 with KVM on CentOS 7.

The first time was when someone mass-enabled scheduled snapshots on a large 
number of VMs and secondary storage filled up.  We had to restore all those 
VM disks...  But we believed it was just our fault for letting secondary 
storage fill up.

Today we had an instance where a snapshot failed, and now the disk image is 
corrupted and the VM can't boot.  Here is the output of some commands:

---
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
---

We tried restoring to before the snapshot failure, but still have strange 
errors:

--
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
image: ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 73G
cluster_size: 65536
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
Format specific information:
compat: 1.1
lazy refcounts: false

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
tcmalloc: large alloc 1539750010880 bytes == (nil) @  0x7fb9cbbf7bf3 
0x7fb9cbc19488 0x7fb9cb71dc56 0x55d16ddf1c77 0x55d16ddf1edc 0x55d16ddf2541 
0x55d16ddf465e 0x55d16ddf8ad1 0x55d16de336db 0x55d16de373e6 0x7fb9c63a3c05 
0x55d16ddd9f7d
No errors were found on the image.

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img snapshot -l 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
--

Everyone is now extremely hesitant to use snapshots in KVM  We tried 
deleting the snapshots in the restored disk image, but it errors out...


Does anyone else have issues with KVM snapshots?  We are considering just 
disabling this functionality now...

Thanks
Sean








CloudStack 4.11.2 Snapshot Revert fail

2019-01-22 Thread li jerry
Hi all,

I use CloudStack 4.11.2 to manage XenServer 7.1.2 (XenServer CU2).

Reverting a VM snapshot fails (the snapshot does not contain a memory snapshot).

2019-01-23 00:06:54,210 DEBUG [c.c.a.m.ClusteredAgentAttache] 
(Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9) 
(logid:a9ef7fe7) Seq 5-6201456686889173919: Forwarding Seq 
5-6201456686889173919:  { Cmd , MgmtId: 240661250348494, via: 5(wxac6001), Ver: 
v1, Flags: 100011, 
[{"com.cloud.agent.api.RevertToVMSnapshotCommand":{"reloadVm":false,"vmUuid":"b2a78e9c-06ab-4200-ad6d-fe095f622502","volumeTOs":[{"uuid":"7a58ffdc-b02c-41bf-963c-be56c2da0e9b","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN10","id":19,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN10","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN10/?ROLE=Primary=WXACP01CL01_LUN10","isManaged":false}},"name":"ROOT-33","size":21474836480,"path":"dd1cf43d-d5a4-4633-9c3e-8f73d1ccc484","volumeId":93,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":93,"deviceId":0,"hypervisorType":"XenServer"},{"uuid":"74268aa2-b4e5-4574-a981-027e55b5383f","volumeType":"DATADISK","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"WXACP01CL01_LUN01","id":1,"poolType":"PreSetup","host":"localhost","path":"/WXACP01CL01_LUN01","port":0,"url":"PreSetup://localhost/WXACP01CL01_LUN01/?ROLE=Primary=WXACP01CL01_LUN01","isManaged":false}},"name":"DATA-33","size":1099511627776,"path":"e2ead686-d0bb-49f2-b656-77c2bf497990","volumeId":95,"vmName":"i-2-33-VM","accountId":2,"format":"VHD","provisioningType":"THIN","id":95,"deviceId":1,"hypervisorType":"XenServer"}],"target":{"id":27,"snapshotName":"i-2-33-VM_VS_20190122155503","type":"Disk","createTime":1548172503000,"current":true,"description":"asdfasdf","quiescevm":true},"vmName":"i-2-33-VM","guestOSType":"CentOS
 7","wait":0}}] } to 55935224135780

2019-01-23 00:06:54,222 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) 
(logid:) Seq 5-6201456686889173919: Processing:  { Ans: , MgmtId: 
240661250348494, via: 5, Ver: v1, Flags: 10, 
[{"com.cloud.agent.api.RevertToVMSnapshotAnswer":{"result":false,"details":" 
Hypervisor com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't 
support guest OS type CentOS 7. you can choose 'Other install media' to run it 
as HVM","wait":0}}] }
2019-01-23 00:06:54,223 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9) 
(logid:a9ef7fe7) Seq 5-6201456686889173919: Received:  { Ans: , MgmtId: 
240661250348494, via: 5(wxac6001), Ver: v1, Flags: 10, { 
RevertToVMSnapshotAnswer } }
2019-01-23 00:06:54,223 ERROR [o.a.c.s.v.DefaultVMSnapshotStrategy] 
(Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9) 
(logid:a9ef7fe7) Revert VM: i-2-33-VM to snapshot: i-2-33-VM_VS_20190122155503 
failed due to  Hypervisor 
com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't support 
guest OS type CentOS 7. you can choose 'Other install media' to run it as HVM
2019-01-23 00:06:54,226 DEBUG [c.c.v.s.VMSnapshotManagerImpl] 
(Work-Job-Executor-156:ctx-28f7465a job-2867/job-2869 ctx-a04e0ed9) 
(logid:a9ef7fe7) Failed to revert vmsnapshot: 27
com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-33-VM to 
snapshot: i-2-33-VM_VS_20190122155503 failed due to  Hypervisor 
com.cloud.hypervisor.xenserver.resource.XenServer650Resource doesn't support 
guest OS type CentOS 7. you can choose 'Other install media' to run it as HVM
  at 
org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy.revertVMSnapshot(DefaultVMSnapshotStrategy.java:393)
  at 
com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:846)
  at 
com.cloud.vm.snapshot.VMSnapshotManagerImpl.orchestrateRevertToVMSnapshot(VMSnapshotManagerImpl.java:1211)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
  at 
com.cloud.vm.snapshot.VMSnapshotManagerImpl.handleVmWorkJob(VMSnapshotManagerImpl.java:1224)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:338)
  at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
  at 
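
The error above says XenServer650Resource is handling the revert even though
the host runs XenServer 7.1.2, which suggests the guest-OS mapping for that
hypervisor version may be missing, so the code falls back to an older resource
class. A read-only diagnostic one could try (assuming the standard cloud
schema; table and column names from memory):

```
mysql -u cloud -p cloud <<'SQL'
-- Which XenServer versions have a CentOS 7 guest-OS mapping?
SELECT hypervisor_type, hypervisor_version, guest_os_name
FROM guest_os_hypervisor
WHERE hypervisor_type = 'Xenserver'
  AND guest_os_name LIKE 'CentOS 7%'
  AND removed IS NULL;
SQL
```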

Re: Snapshots on KVM corrupting disk images

2019-01-22 Thread Simon Weller
Sean,


What underlying primary storage are you using and how is it being utilized by 
ACS (e.g. NFS, shared mount et al)?



- Si



From: Sean Lair 
Sent: Tuesday, January 22, 2019 10:30 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Snapshots on KVM corrupting disk images

Hi all,

We had some instances where VM disks became corrupted when using KVM 
snapshots.  We are running CloudStack 4.9.3 with KVM on CentOS 7.

The first time was when someone mass-enabled scheduled snapshots on a large 
number of VMs and secondary storage filled up.  We had to restore all those 
VM disks...  But we believed it was just our fault for letting secondary 
storage fill up.

Today we had an instance where a snapshot failed, and now the disk image is 
corrupted and the VM can't boot.  Here is the output of some commands:

---
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
---

We tried restoring to before the snapshot failure, but still have strange 
errors:

--
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
image: ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 73G
cluster_size: 65536
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
Format specific information:
compat: 1.1
lazy refcounts: false

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
tcmalloc: large alloc 1539750010880 bytes == (nil) @  0x7fb9cbbf7bf3 
0x7fb9cbc19488 0x7fb9cb71dc56 0x55d16ddf1c77 0x55d16ddf1edc 0x55d16ddf2541 
0x55d16ddf465e 0x55d16ddf8ad1 0x55d16de336db 0x55d16de373e6 0x7fb9c63a3c05 
0x55d16ddd9f7d
No errors were found on the image.

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img snapshot -l 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
--

Everyone is now extremely hesitant to use snapshots in KVM  We tried 
deleting the snapshots in the restored disk image, but it errors out...


Does anyone else have issues with KVM snapshots?  We are considering just 
disabling this functionality now...

Thanks
Sean








Snapshots on KVM corrupting disk images

2019-01-22 Thread Sean Lair
Hi all,

We had some instances where VM disks became corrupted when using KVM 
snapshots.  We are running CloudStack 4.9.3 with KVM on CentOS 7.

The first time was when someone mass-enabled scheduled snapshots on a large 
number of VMs and secondary storage filled up.  We had to restore all those 
VM disks...  But we believed it was just our fault for letting secondary 
storage fill up.

Today we had an instance where a snapshot failed, and now the disk image is 
corrupted and the VM can't boot.  Here is the output of some commands:

---
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
---
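
For what it's worth, "Could not read snapshots: File too large" usually means
the snapshot-table fields in the qcow2 header are corrupt (the tcmalloc line
further down, a ~1.4 TB allocation attempt, points the same way). A read-only
way to eyeball those fields, with offsets per the qcow2 spec:

```
# nb_snapshots lives at byte offset 60 (4 bytes) and snapshots_offset at
# byte offset 64 (8 bytes) of the qcow2 header; garbage here makes
# qemu-img try to load an absurdly large snapshot table.
xxd -s 60 -l 12 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
```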

We tried restoring to before the snapshot failure, but still have strange 
errors:

--
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
image: ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 73G
cluster_size: 65536
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
Format specific information:
compat: 1.1
lazy refcounts: false

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
tcmalloc: large alloc 1539750010880 bytes == (nil) @  0x7fb9cbbf7bf3 
0x7fb9cbc19488 0x7fb9cb71dc56 0x55d16ddf1c77 0x55d16ddf1edc 0x55d16ddf2541 
0x55d16ddf465e 0x55d16ddf8ad1 0x55d16de336db 0x55d16de373e6 0x7fb9c63a3c05 
0x55d16ddd9f7d
No errors were found on the image.

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img snapshot -l 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
--

Everyone is now extremely hesitant to use snapshots in KVM  We tried 
deleting the snapshots in the restored disk image, but it errors out...
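
One recovery path that sometimes works when in-place snapshot deletion fails (a
sketch only; run it with the VM shut down, and keep the original file around):

```
# qemu-img convert copies just the active layer, so the rebuilt image is
# written without the internal snapshots (and without whatever is broken
# in the old snapshot table).
qemu-img convert -O qcow2 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80 ./rebuilt.qcow2
qemu-img check ./rebuilt.qcow2
```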


Does anyone else have issues with KVM snapshots?  We are considering just 
disabling this functionality now...

Thanks
Sean