OK, it looks like a rendering bug following migration... if I wait a few minutes and refresh, it reverts to the "onevm show" value:

# Host   Action        Reason  Chg time             Total time  Prolog time
0 node3  live-migrate  USER    13:23:38 21/11/2013  0d 00:03    0d 00:01
1 node1  live-migrate  USER    13:27:02 21/11/2013  0d 00:02    0d 00:00
2 node3  live-migrate  USER    13:29:32 21/11/2013  0d 01:04    0d 00:00
3 node2  live-migrate  USER    14:32:51 21/11/2013  0d 00:01    0d 00:00
4 node1  none          NONE    14:33:52 21/11/2013  30d 23:56   0d 00:00


$ onevm show 225 
VIRTUAL MACHINE 225 INFORMATION 
ID : 225 
NAME : Ten 
USER : oneadmin 
GROUP : oneadmin 
STATE : ACTIVE 
LCM_STATE : RUNNING 
RESCHED : No 
HOST : node1 
CLUSTER ID : -1 
START TIME : 11/21 13:23:21 
END TIME : - 
DEPLOY ID : one-225 

VIRTUAL MACHINE MONITORING 
USED MEMORY : 1024M 
USED CPU : 0 
NET_TX : 1K 
NET_RX : 71K 

PERMISSIONS 
OWNER : um- 
GROUP : --- 
OTHER : --- 

VM DISKS 
ID TARGET IMAGE TYPE SAVE SAVE_AS 
0 sda Ubuntu 12.04 Clone file NO - 

VM NICS 
ID NETWORK VLAN BRIDGE IP MAC 
0 Public Range 1 no public 193.111.185.139 02:00:c1:6f:b9:8b 
fe80::400:c1ff:fe6f:b98b 

VIRTUAL MACHINE HISTORY 
SEQ HOST ACTION DS START TIME PROLOG 
0 node3 live-migrate 0 11/21 13:23:38 0d 00h03m 0h01m34s 
1 node1 live-migrate 0 11/21 13:27:02 0d 00h02m 0h00m00s 
2 node3 live-migrate 0 11/21 13:29:32 0d 01h04m 0h00m00s 
3 node2 live-migrate 0 11/21 14:32:51 0d 00h01m 0h00m00s 
4 node1 none 0 11/21 14:33:52 0d 00h05m 0h00m00s 

VIRTUAL MACHINE TEMPLATE 
CONTEXT=[ 
DISK_ID="1", 
ETH0_DNS="8.8.8.8", 
ETH0_GATEWAY="193.111.185.1", 
ETH0_IP="193.111.185.139", 
ETH0_MASK="255.255.255.0", 
ETH0_NETWORK="193.111.185.0", 
HOSTNAME="demo", 
NETWORK="YES", 
TARGET="hda", 
VDC_CACHE_SIZE="1G" ] 
CPU="0.1" 
GRAPHICS=[ 
LISTEN="0.0.0.0", 
PORT="6125", 
TYPE="VNC" ] 
MEMORY="1024" 
RAW=[ 
TYPE="kvm" ] 
TEMPLATE_ID="1" 
VMID="225" 


-- 
        Gareth Bult 
“The odds of hitting your target go up dramatically when you aim at it.” 



----- Original Message -----

From: "Ruben S. Montero" <rsmont...@opennebula.org> 
To: "Gareth Bult" <gar...@linux.co.uk> 
Cc: "users" <users@lists.opennebula.org> 
Sent: Wednesday, 20 November, 2013 12:36:57 PM 
Subject: Re: [one-users] Re; Snapshots ... 

Oops, my mistake, I meant the Storage tab... When you save a disk you can do it 
live or deferred: deferred waits until the VM is shut down, while live calls the 
cpds script to copy the disk to the datastore. 
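
For illustration, a rough sketch of the two modes from the CLI (the VM/disk IDs 
and image name here are placeholders, and the exact subcommand spelling varies 
between OpenNebula releases, so check onevm --help on your version): 

# Deferred disk save (placeholder IDs): the copy is scheduled now but only 
# performed once the VM is shut down. 
onevm disk-snapshot 225 0 "ubuntu-backup" 

# Live disk save: the TM's cpds script copies the disk to the image 
# datastore while the VM keeps running. 
onevm disk-snapshot 225 0 "ubuntu-backup" --live 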


On Wed, Nov 20, 2013 at 1:41 PM, Gareth Bult <gar...@linux.co.uk> wrote: 



OK, I'd completely missed the capacity / cpds bit; indeed, I can't see any other 
snapshot option in the GUI, so I'm going to have to go back and read some more... 
however: 

Re: the bug, it looks like rendering; I've gone back to it now and it looks fine... 

# Host   Action        Reason  Chg time             Total time  Prolog time
0 node3  live-migrate  USER    19:08:46 19/11/2013  0d 00:01    0d 00:00
1 node2  live-migrate  USER    19:09:44 19/11/2013  0d 00:03    0d 00:00
2 node1  live-migrate  USER    19:12:56 19/11/2013  0d 00:01    0d 00:00
3 node3  live-migrate  USER    19:13:59 19/11/2013  0d 14:47    0d 00:00
4 node1  none          NONE    10:00:15 20/11/2013  0d 02:17    0d 00:00
I must admit to being a little confused; my "capacity" tab on VMs has only a 
"resize" button... which is inactive (?) 
(cpds sounds ideal if I can find out how to use it... :) ) 

-- 
        Gareth Bult 
“The odds of hitting your target go up dramatically when you aim at it.” 




From: "Ruben S. Montero" < rsmont...@opennebula.org > 
To: "Gareth Bult" < gar...@linux.co.uk > 
Cc: "users" < users@lists.opennebula.org > 
Sent: Wednesday, 20 November, 2013 11:20:09 AM 
Subject: Re: [one-users] Re; Snapshots ... 

Hi 

As you know, there are two types of snapshots, system and disk. Disk-only 
snapshots are handled through the capacity tab in Sunstone and eventually by 
the CPDS script in the TM. System snapshots are handled by the snapshot script 
of the VMM. 

There are no plans to redesign this; disk snapshotting can use a custom storage 
facility through "cpds", but system snapshots will be handled through the 
hypervisor.... 
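
In CLI terms, a hedged sketch of the split (assuming the 4.x command set; the 
VM ID and snapshot name are placeholders): 

# System snapshot: goes through the VMM snapshot script (and hence 
# libvirt/QEMU on KVM). 
onevm snapshot-create 225 "before-upgrade" 
onevm snapshot-revert 225 0     # roll back to snapshot 0 
onevm snapshot-delete 225 0     # discard snapshot 0 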

Note that system snapshots also require checkpointing the memory state of the 
system; that's the reason for requiring qcow2 with KVM. So I am not really sure 
how these two processes, memory and disk snapshot, can be orchestrated outside 
of libvirt, maybe via libvirt hooks? 
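
For what it's worth, a minimal sketch of the standard libvirt qemu hook 
interface (the script location and argument convention are standard libvirt 
behaviour; whether snapshot operations ever reach the hook depends on the 
libvirt version, so treat this purely as a speculative starting point): 

#!/bin/sh 
# /etc/libvirt/hooks/qemu -- libvirt invokes this with: 
#   $1 = guest name, $2 = operation, $3 = sub-operation, $4 = extra argument 
# The guest XML definition is supplied on stdin. 
GUEST="$1" 
OPERATION="$2" 

case "$OPERATION" in 
    prepare|start|started) 
        # e.g. prepare storage-side state before the guest runs 
        ;; 
    stopped|release) 
        # e.g. tidy up storage-side state after the guest stops 
        ;; 
esac 
exit 0 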

About the bug, I am wondering if this is a rendering issue or something deeper, 
could you send the output of: 

onevm show <VM_ID> -x 

Cheers 

Ruben 





On Wed, Nov 20, 2013 at 11:19 AM, Gareth Bult <gar...@linux.co.uk> wrote: 

Hi, 

It seems that the snapshot facility relies on the libvirt snapshot facility, 
which at the moment relies entirely upon the QEMU snapshot facility, which 
means you can really only snapshot QCOW images. Before I start to modify 
remotes/vmm/kvm/snapshot*, is there a way, or are there any plans, to move 
snapshot functionality to the drivers, so that we can use a custom snapshot 
facility on a per-storage-facility basis? 

Case in point: 

At the moment there seems to be a script which does this: 

virsh --connect $LIBVIRT_URI snapshot-create-as $DOMAIN (which is QCOW2 only?) 

I would like it to be able to handle this: 

vdc-tool -n ON_IM_81 --mksnap "First Snapshot" 
> :: Snapshot created [First Snapshot] 

vdc-tool -n ON_IM_81 --lssnap 
+----------+--------------------------------------+----------+----------+----------------------+
| UniqueID | Snapshot Name                        | Size     | Blocks   | Created@             |
+----------+--------------------------------------+----------+----------+----------------------+
| 1        | First Snapshot                       | 662.53M  | 370404   | 20 Oct 2013 09:55:08 |
|          | 6c0ca1c0-9d62-474a-83fc-369fa01d4068 |          |          | Root: 1196295401     |
+----------+--------------------------------------+----------+----------+----------------------+
Number of snapshot blocks used (370404), taking 662.53M 
Current committed blocks = 652079 
Current blocks (0) and current size (0.00b) 

Obviously the vdc-tool output can be tweaked as necessary... I would like to be 
able to integrate this into libvirt, but the snapshot API in libvirt still 
appears to be on the drawing board, whereas I already have a working / usable 
snapshot facility... any thoughts? 
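
To make the idea concrete, a hypothetical sketch of what the snapshot_create 
side could look like if it shelled out to vdc-tool instead of virsh (the $1 
argument convention is inferred from the stock script's use of $DOMAIN, and 
map_domain_to_vdc is an imaginary site-specific helper, not part of 
OpenNebula): 

#!/bin/bash 
# Hypothetical body for a custom remotes/vmm/kvm/snapshot_create. 
DOMAIN="$1"                    # assumed: libvirt domain name, e.g. one-225 
LABEL="one-snap-$(date +%s)" 

# Assumed site-specific mapping from libvirt domain to vdc image name 
# (e.g. one-225 -> ON_IM_81). 
VDC_NAME=$(map_domain_to_vdc "$DOMAIN") 

vdc-tool -n "$VDC_NAME" --mksnap "$LABEL" || exit 1 

# Print the snapshot identifier so the driver can reference it later. 
echo "$LABEL" 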

Incidentally, I think I just spotted a bug in the Placement log: 

# Host   Action        Reason  Chg time             Total time  Prolog time
0 node3  live-migrate  USER    19:08:46 19/11/2013  0d 00:01    0d 00:00
1 node2  live-migrate  USER    19:09:44 19/11/2013  0d 00:03    0d 00:00
2 node1  live-migrate  USER    19:12:56 19/11/2013  0d 00:01    0d 00:00
3 node3  live-migrate  USER    19:13:59 19/11/2013  0d 14:47    0d 00:00
4 node1  none          NONE    10:00:15 20/11/2013  30d 23:55   0d 00:00

We've accumulated 30d of total time overnight?! 
(Yes, the clocks are in sync...) 

-- 
Gareth Bult 
“The odds of hitting your target go up dramatically when you aim at it.” 





-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - Flexible Enterprise Cloud Made Simple 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 






-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - Flexible Enterprise Cloud Made Simple 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
