[
https://issues.apache.org/jira/browse/CLOUDSTACK-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738691#comment-13738691
]
sadhu suresh commented on CLOUDSTACK-4304:
------------------------------------------
mysql> select * from snapshots where id=13 and volume_id=76\G;
id: 13
data_center_id: 1
account_id: 2
domain_id: 1
volume_id: 76
disk_offering_id: 1
status: BackedUp
path: NULL
name: normal_ROOT-57_20130813224146
uuid: 419d03c5-c5f4-4ffb-8214-1fde944d0397
snapshot_type: 0
type_description: MANUAL
size: 8589934592
created: 2013-08-13 22:41:46
removed: NULL
backup_snap_id: NULL
swift_id: NULL
sechost_id: NULL
prev_snap_id: NULL
hypervisor_type: KVM
version: 2.2
s3_id: NULL
1 row in set (0.00 sec)
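Note that path and backup_snap_id are NULL even though the snapshot status is BackedUp; in the 4.2 storage framework the backup location is tracked separately. A quick cross-check (a sketch, assuming the 4.2 snapshot_store_ref table and its column names) to confirm the backup actually sits on the NFS secondary store:

mysql> -- where does the backup of snapshot 13 live (store_role should be Image, i.e. secondary storage)?
mysql> select store_id, store_role, install_path, state from snapshot_store_ref where snapshot_id = 13;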
After performing create volume from snapshot, the new volume record looks like this:
id: 89
account_id: 2
domain_id: 1
pool_id: 20
last_pool_id: NULL
instance_id: NULL
device_id: NULL
name: normalvolume
uuid: 893e729f-aaed-438f-abc2-b5d662230078
size: 1599209472
folder: NULL
path: 2cb08bc5-3e14-4bd8-bc23-afdc9e47072d
pod_id: NULL
data_center_id: 1
iscsi_name: NULL
host_ip: NULL
volume_type: DATADISK
pool_type: NULL
disk_offering_id: 1
template_id: 4
first_snapshot_backup_uuid: NULL
recreatable: 0
created: 2013-08-14 00:03:54
attached: NULL
updated: 2013-08-14 00:13:24
removed: NULL
state: Ready
chain_info: NULL
update_count: 2
disk_type: NULL
vm_snapshot_chain_size: NULL
iso_id: 0
display_volume: 1
format: QCOW2
min_iops: NULL
max_iops: NULL
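The volume row already hints at the problem: the format is still reported as QCOW2, yet the volume was allocated on pool_id 20. A quick cross-check (a sketch against the storage_pool table; column names assumed from the 4.2 schema) shows which kind of pool the new volume landed on:

mysql> -- is pool 20 the NFS pool the source volume lived on, or the RBD pool?
mysql> select id, name, pool_type, host_address, path from storage_pool where id = 20;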
Whereas in the UI it shows the storage as "RBD" instead of NORMAL (the NFS-tagged storage), as the listVolumes response confirms:
http://10.147.59.110:8080/client/api?command=listVolumes&id=893e729f-aaed-438f-abc2-b5d662230078&response=json&sessionkey=3oaouwLLGveZlrYIn0zpMe4e394%3D&_=1376420708190
{ "listvolumesresponse" : { "count":1 ,"volume" : [
{"id":"893e729f-aaed-438f-abc2-b5d662230078","name":"normalvolume","zoneid":"6ec00b45-7913-4095-944b-d0fa16adb84a","zonename":"zone111","type":"DATADISK","size":1599209472,"created":"2013-08-13T20:03:54-0400","state":"Ready","account":"admin","domainid":"a707b316-fe1d-11e2-9c5b-06a2f0000056","domain":"ROOT","storagetype":"shared","hypervisor":"KVM","diskofferingid":"95a4090d-5b0a-408b-9907-6d9ad1ae10e3","diskofferingname":"Small
Instance","diskofferingdisplaytext":"Small
Instance","storage":"rbd","destroyed":false,"isextractable":true,"tags":[],"displayvolume":true}
] } }
> ceph: failure to attach a volume created from a snapshot to the same instance
> ------------------------------------------------------------------------------
>
> Key: CLOUDSTACK-4304
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4304
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the
> default.)
> Components: Volumes
> Affects Versions: 4.2.0
> Reporter: sadhu suresh
>
> By default, when we perform create volume from snapshot (where the source
> volume is NFS based), the volume is internally converted from QCOW2 to RAW
> (RBD) format.
> Steps to reproduce:
> 1. Add NFS-based and RBD-based primary storage.
> 2. Add storage tags to both primary storages and create compute offerings for
> both, e.g. NFS: "normal", RBD: "RBD" (the tag check sketched after these
> steps shows which pool the new volume actually lands on).
> 3. Select the NFS-based compute offering and deploy a VM.
> 4. Once the VM is up, select the ROOT volume and take a snapshot.
> 5. Once the snapshot succeeds, create a volume from the snapshot. Here the
> disk is internally converted from QCOW2 to RAW format even though the
> snapshot's source volume is NFS based.
> 6. Once the volume is created, try to attach the newly created data disk to
> the same VM created in step 3.
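> A quick way to verify the placement mismatch (a sketch, assuming the 4.2
> schema where pool storage tags are stored as rows in storage_pool_details):
> compare the tag on the offering used for the new volume with the tags on the
> pool it was placed on.
> mysql> -- tag requested by the offering behind the new volume (disk_offering_id 1)
> mysql> select id, name, tags from disk_offering where id = 1;
> mysql> -- tags carried by the pool the volume actually landed on (pool_id 20)
> mysql> select pool_id, name, value from storage_pool_details where pool_id = 20;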
> Actual result:
> Attaching the volume fails with error code 530: "Failed to attach volume:
> normalvolume to VM: normal; org.libvirt.LibvirtException: internal error
> unable to execute QEMU command 'device_add': Duplicate ID 'virtio-disk1' for
> device"
> http://10.147.59.110:8080/client/api?command=attachVolume&id=893e729f-aaed-438f-abc2-b5d662230078&virtualMachineId=361dc403-7e3f-4977-86ba-0589481f8259&response=json&sessionkey=3oaouwLLGveZlrYIn0zpMe4e394%3D&_=1376419838455
> { "queryasyncjobresultresponse" :
> {"accountid":"d4df5456-fe1d-11e2-9c5b-06a2f0000056","userid":"d4dff794-fe1d-11e2-9c5b-06a2f0000056","cmd":"org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd","jobstatus":2,"jobprocstatus":0,"jobresultcode":530,"jobresulttype":"object","jobresult":{"errorcode":530,"errortext":"Failed
> to attach volume: normalvolume to VM: normal; org.libvirt.LibvirtException:
> internal error unable to execute QEMU command 'device_add': Duplicate ID
> 'virtio-disk1' for
> device"},"created":"2013-08-13T20:18:47-0400","jobid":"cd3446fe-b082-4c90-8cd6-30036ccda59e"}
> }
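> The "Duplicate ID 'virtio-disk1'" error points to a device-id collision: the
> agent log below shows the new DATADISK being attached at diskSeq 1, which
> maps to virtio-disk1, so the domain apparently already has a disk at that id.
> A quick look at what CloudStack thinks is attached to the VM (a sketch; the
> instance id 57 is assumed from the VM name i-2-57-VM):
> mysql> -- which volumes are mapped to this VM, and at which device_id?
> mysql> select id, name, volume_type, device_id, state, removed from volumes where instance_id = 57;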
> Agent log:
>
> [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":7,"contextMap":{},"wait":0},"result":true,"contextMap":{},"wait":0}}]
> }
> 2013-08-14 00:20:34,385 TRACE [utils.nio.NioConnection] (Agent-Selector:null)
> Keys Processing: 1
> 2013-08-14 00:20:34,386 TRACE [utils.nio.NioConnection] (Agent-Selector:null)
> Reading from: Socket[addr=/10.147.59.110,port=8250,localport=47605]
> 2013-08-14 00:20:34,386 TRACE [utils.nio.Link] (Agent-Selector:null) Packet
> length is 758
> 2013-08-14 00:20:34,386 TRACE [utils.nio.Link] (Agent-Selector:null) Done
> with packet: 737
> 2013-08-14 00:20:34,386 TRACE [utils.nio.NioConnection] (Agent-Selector:null)
> Keys Done Processing.
> 2013-08-14 00:20:34,387 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-5:null) Request:Seq 7-1066226815: { Cmd , MgmtId:
> 7296881000534, via: 7, Ver: v1, Flags: 100011,
> [{"org.apache.cloudstack.storage.command.AttachCommand":{"disk":{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"893e729f-aaed-438f-abc2-b5d662230078","volumeType":"DATADISK","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"8e7c6fb5-c4d7-38f9-9cb1-d34d3bba4957","id":20,"poolType":"RBD","host":"10.147.41.3","path":"cloudstack","port":6789}},"name":"normalvolume","size":1599209472,"path":"2cb08bc5-3e14-4bd8-bc23-afdc9e47072d","volumeId":89,"accountId":2,"format":"QCOW2","id":89,"hypervisorType":"KVM"}},"diskSeq":1,"type":"DATADISK"},"vmName":"i-2-57-VM","_storageHost":"10.147.41.3","_storagePort":6789,"_managed":false,"contextMap":{},"wait":0}}]
> }
> 2013-08-14 00:20:34,387 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-5:null) Processing command:
> org.apache.cloudstack.storage.command.AttachCommand
> 2013-08-14 00:20:34,759 DEBUG [kvm.storage.KVMStorageProcessor]
> (agentRequest-Handler-5:null) Attaching device: <disk device='disk'
> type='network'>
> <driver name='qemu' type='raw' cache='none' />
> <source protocol='rbd'
> name='cloudstack/2cb08bc5-3e14-4bd8-bc23-afdc9e47072d'>
> <host name='10.147.41.3' port='6789'/>
> </source>
> <auth username='admin'>
> <secret type='ceph' uuid='8e7c6fb5-c4d7-38f9-9cb1-d34d3bba4957'/>
> </auth>
> <target dev='vdb' bus='virtio'/>
> </disk>
> 2013-08-14 00:20:34,766 WARN [kvm.storage.KVMStorageProcessor]
> (agentRequest-Handler-5:null) Failed to attach device to i-2-57-VM: internal
> error unable to execute QEMU command 'device_add': Duplicate ID
> 'virtio-disk1' for device
> 2013-08-14 00:20:34,767 DEBUG [kvm.storage.KVMStorageProcessor]
> (agentRequest-Handler-5:null) Failed to attach volume:
> 2cb08bc5-3e14-4bd8-bc23-afdc9e47072d, due to org.libvirt.LibvirtException:
> internal error unable to execute QEMU command 'device_add': Duplicate ID
> 'virtio-disk1' for device
> 2013-08-14 00:20:34,767 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-5:null) Seq 7-1066226815: { Ans: , MgmtId:
> 7296881000534, via: 7, Ver: v1, Flags: 10,
> [{"org.apache.cloudstack.storage.command.AttachAnswer":{"result":false,"details":"org.libvirt.LibvirtException:
> internal error unable to execute QEMU command 'device_add': Duplicate ID
> 'virtio-disk1' for device","contextMap":{},"wait":0}}] }
> 2013-08-14 00:20:34,767 TRACE [utils.nio.Link] (agentRequest-Handler-5:null)
> Sending packet of length 299
> 2013-08-14 00:20:34,767 TRACE [utils.nio.NioConnection] (Agent-Selector:null)
> Keys Processing: 0
> 2013-08-14 00:20:34,768 TRACE [utils.nio.NioConnection] (Agent-Selector:null)
> Keys Done Processing.
> 2013-08-14 00:20:34,768 TRACE [utils.nio.NioConnection] (Agent-Selector:null)
> To
>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira