Hi Ilya,

Just testing this for the first time, so I'm not sure if this has ever
worked. I will raise this in JIRA and post it to the dev list.

Thanks!

On Wed, Oct 22, 2014 at 4:40 PM, ilya musayev <[email protected]>
wrote:

> Do you recall which release of ACS had this working? Or is this
> something you are trying now that fails?
>
> Please open a JIRA issue here: https://issues.apache.org/jira/browse/CLOUDSTACK
>
> Mark the issue as major in JIRA, and post it to the dev list with a
> reference to the JIRA issue ID.
>
>
>
> On 10/22/14, 5:17 AM, cs user wrote:
>
>> Apologies, I'm a new member of the list and was having problems replying to
>> an existing message! :-)
>>
>> We're having the same issue as France: XenServer 6.01 with the latest
>> hotfixes, CloudStack 4.3 and 4.4. A snapshot of a volume on local disk is to
>> be transferred to S3, but when the Xen host attempts the PUT request the
>> snapshot is no longer present, so the request fails. Snapshots of volumes
>> stored on primary storage work fine and are uploaded to the S3-backed
>> secondary storage as expected.
>>
>> We also tried upgrading the hosts to XenServer 6.2, but it still has the
>> same issue. In another environment with CloudStack 4.3 and XenServer 6.01
>> (no recent Xen hotfixes and no S3-backed secondary storage), local snapshots
>> work fine.
>>
>> We see this in the management server logs:
>>
>> (I have removed references to any hosts from the logs)
>>
>> WARN  [c.c.h.x.r.CitrixResourceBase] (DirectAgent-269:ctx-18eb6dbd) Task
>>
>>> failed! Task record:                 uuid:
>>> 54c09e4f-cf52-4bc2-7e3f-e3743a63ff92
>>>             nameLabel: Async.host.call_plugin
>>>       nameDescription:
>>>     allowedOperations: []
>>>     currentOperations: {}
>>>               created: Tue Oct 21 14:18:50 BST 2014
>>>              finished: Tue Oct 21 14:18:49 BST 2014
>>>                status: failure
>>>            residentOn: com.xensource.xenapi.Host@1dd85be5
>>>              progress: 1.0
>>>                  type: <none/>
>>>                result:
>>>             errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize,
>>> CommandException, 2]
>>>           otherConfig: {}
>>>             subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>>>              subtasks: []
>>> WARN  [c.c.h.x.r.CitrixResourceBase] (DirectAgent-269:ctx-18eb6dbd)
>>> callHostPlugin failed for cmd: getSnapshotSize with args snapshotUuid:
>>> b8b3b99a-66b3-461f-b5db-53ad0adec391, isISCSI: true, pri
>>> maryStorageSRUuid: 8649181d-bf39-40b9-4e88-9e1800f8a2bc,  due to Task
>>> failed! Task record:                 uuid:
>>> 54c09e4f-cf52-4bc2-7e3f-e3743a63ff92
>>>             nameLabel: Async.host.call_plugin
>>>       nameDescription:
>>>     allowedOperations: []
>>>     currentOperations: {}
>>>               created: Tue Oct 21 14:18:50 BST 2014
>>>              finished: Tue Oct 21 14:18:49 BST 2014
>>>                status: failure
>>>            residentOn: com.xensource.xenapi.Host@1dd85be5
>>>              progress: 1.0
>>>                  type: <none/>
>>>                result:
>>>             errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize,
>>> CommandException, 2]
>>>           otherConfig: {}
>>>             subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>>>              subtasks: []
>>> Task failed! Task record:                 uuid:
>>> 54c09e4f-cf52-4bc2-7e3f-e3743a63ff92
>>>             nameLabel: Async.host.call_plugin
>>>       nameDescription:
>>>     allowedOperations: []
>>>     currentOperations: {}
>>>               created: Tue Oct 21 14:18:50 BST 2014
>>>              finished: Tue Oct 21 14:18:49 BST 2014
>>>                status: failure
>>>            residentOn: com.xensource.xenapi.Host@1dd85be5
>>>              progress: 1.0
>>>                  type: <none/>
>>>                result:
>>>             errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize,
>>> CommandException, 2]
>>>           otherConfig: {}
>>>             subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>>>              subtasks: []
>>>          at
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.checkForSuccess(
>>> CitrixResourceBase.java:3293)
>>>
>>
>>
>> On the XenServer host we see the following in /var/log/cloud/vmops.log:
>>
>> 2014-10-20 16:21:59    DEBUG [root] #### VMOPS enter
>>
>>>   create_secondary_storage_folder ####
>>> 2014-10-20 16:21:59    DEBUG [root] create_secondary_storage_folder,
>>> args:
>>> {'newFolder': 'snapshots/2/257', 'remoteMountPath':
>>> 'hostname:/CS_TEST_SEC_01'}
>>> 2014-10-20 16:21:59    DEBUG [root] Successfully mounted
>>> hostname:/CS_TEST_SEC_01 to
>>> /var/run/cloud_mount/0d932145-892f-4960-8153-a30f4132fdc3
>>> 2014-10-20 16:21:59    DEBUG [root] Successfully unmounted
>>> /var/run/cloud_mount/0d932145-892f-4960-8153-a30f4132fdc3
>>> 2014-10-20 16:21:59    DEBUG [root] #### VMOPS exit
>>>   create_secondary_storage_folder ####
>>> 2014-10-20 16:22:03    DEBUG [root] #### VMOPS enter  gethostvmstats ####
>>> 2014-10-20 16:22:03    DEBUG [statsLogger] Calling URL:
>>> http://localhost/rrd_updates?session_id=OpaqueRef:79767c0a-
>>> a353-4f9c-cbb4-fba8a124bb83&host=true&cf=AVERAGE&interval=
>>> 60&start=1413818423
>>> 2014-10-20 16:22:03    DEBUG [statsLogger] Size of returned XML: 6408
>>> 2014-10-20 16:22:03    DEBUG [root] #### VMOPS exit  gethostvmstats ####
>>> 2014-10-20 16:22:36    DEBUG [root] #### VMOPS enter  gethostvmstats ####
>>> 2014-10-20 16:22:36    DEBUG [statsLogger] Calling URL:
>>> http://localhost/rrd_updates?session_id=OpaqueRef:693d7e20-
>>> 000d-81ee-58e5-10b795e5a330&host=false&cf=AVERAGE&
>>> interval=60&start=1413818456
>>> 2014-10-20 16:22:36    DEBUG [statsLogger] Size of returned XML: 6408
>>> 2014-10-20 16:22:36    DEBUG [root] #### VMOPS exit  gethostvmstats ####
>>> 2014-10-20 16:22:36    DEBUG [root] primarySRPath:
>>> /dev/VG_XenStorage-5e7f09b4-de48-81f1-509c-b0d4dc3571eb
>>> 2014-10-20 16:22:38    DEBUG [root] #### VMOPS enter
>>> deleteSnapshotBackup
>>> ####
>>> 2014-10-20 16:22:38    DEBUG [root] Calling deleteSnapshotBackup with
>>> {'localMountPoint':
>>> '/var/run/cloud_mount/ccd200be-0636-32f6-ad98-fb8e2fa8c70c',
>>> 'backupUUID':
>>> '94f0826b-8bd4-4a75-ae06-5b9724bd3780', 'secondaryStorageMountPath':
>>> 'hostname:/CS_TEST_SEC_01', 'path': 'snapshots/2/257'}
>>> 2014-10-20 16:22:38    DEBUG [root] Successfully mounted
>>> hostname:/CS_TEST_SEC_01/snapshots to
>>> /var/run/cloud_mount/ccd200be-0636-32f6-ad98-fb8e2fa8c70c
>>> 2014-10-20 16:22:38    DEBUG [root] Chdired to
>>> /var/run/cloud_mount/ccd200be-0636-32f6-ad98-fb8e2fa8c70c/2/257
>>> 2014-10-20 16:22:38    DEBUG [root] checking existence of
>>> 94f0826b-8bd4-4a75-ae06-5b9724bd3780.vhd
>>> 2014-10-20 16:22:38    DEBUG [root] backupVHD
>>> 94f0826b-8bd4-4a75-ae06-5b9724bd3780.vhd exists.
>>> 2014-10-20 16:22:38    DEBUG [root] #### VMOPS exit  deleteSnapshotBackup
>>> ####
>>>
>>
>>
>> On the XenServer host we see the following in /var/log/cloud/s3xen.log:
>>
>>
>> 2014-10-21 12:36:57    DEBUG [root] #### VMOPS enter s3 #### ####
>>
>>> 2014-10-21 12:36:57    DEBUG [root] #### VMOPS Operation put on file
>>> /dev/VG_XenStorage-b1802ff2-5a63-d1e7-04de-dea7ba7eab27/
>>> VHD-0cef2f39-03e8-458d-bf19-7a2294c40ac7
>>> from/in bucket cs-testing-01 key
>>> snapshots/2/257/VHD-0cef2f39-03e8-458d-bf19-7a2294c40ac7 ####
>>> 2014-10-21 12:36:57    DEBUG [root] #### VMOPS Traceback (most recent
>>> call
>>> last):
>>>    File "/etc/xapi.d/plugins/s3xen", line 414, in s3
>>>      client.put(bucket, key, filename, maxSingleUploadBytes)
>>>    File "/etc/xapi.d/plugins/s3xen", line 325, in put
>>>      raise Exception(
>>> Exception: Attempt to put
>>> /dev/VG_XenStorage-b1802ff2-5a63-d1e7-04de-dea7ba7eab27/
>>> VHD-0cef2f39-03e8-458d-bf19-7a2294c40ac7
>>> that does not exist.
>>>   ####
>>> 2014-10-21 12:36:57    DEBUG [root] #### VMOPS exit s3 with result false
>>> ####
>>>
>>
>>
>> It appears that Xen is deleting the file on the local filesystem before the
>> S3 script has a chance to upload it. However, the snapshot does appear to
>> have been successfully transferred to the secondary storage staging area,
>> after which the S3 transfer is attempted from the local LVM volume. When
>> this fails, the snapshot is deleted, as you can see from the logs above.
>>
>> It appears that either the issue with Xen deleting the snapshot needs to be
>> resolved (has this behavior changed recently?), or the PUT request to S3
>> needs to send the file located in the staging area rather than the file on
>> local disk, as happens with the primary storage volume snapshots.
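>>
>> As a minimal, hypothetical sketch of that idea (this is not the actual
>> /etc/xapi.d/plugins/s3xen code, and the staging_path parameter is an
>> assumption), the put path could fall back to the staging-area copy when the
>> local LVM path is already gone, reusing the client.put(bucket, key,
>> filename, maxSingleUploadBytes) call shown in the traceback above:
>>
>> import os
>>
>> def put_snapshot(client, bucket, key, primary_path, staging_path,
>>                  max_single_upload_bytes):
>>     # Illustration only: upload whichever copy of the snapshot still
>>     # exists, instead of failing when XenServer has already removed the
>>     # local LVM volume.
>>     for candidate in (primary_path, staging_path):
>>         if candidate and os.path.exists(candidate):
>>             client.put(bucket, key, candidate, max_single_upload_bytes)
>>             return True
>>     raise Exception("Attempt to put %s that does not exist." % primary_path)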
>>
>> Thanks in advance for any help with this issue.
>>
>>
>>
>>  ---------- Forwarded message ----------
>>> From: France <[email protected]>
>>> To: [email protected]
>>> Cc:
>>> Date: Wed, 8 Oct 2014 12:45:13 +0200
>>> Subject: Fwd: ACS 4.1.1 -> 4.3.1 = errorInfo: [XENAPI_PLUGIN_FAILURE,
>>> getSnapshotSize, CommandException, 2] && vhd-util: libvhd::vhd_open:
>>> failed
>>> to open
>>> /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> VHD-3d732e11-8697-4a45-9518-44d4154ee6d6:
>>> -2
>>> Sending it here; maybe someone here will respond.
>>>
>>> Begin forwarded message:
>>>
>>>  From: France <[email protected]>
>>>> Subject: ACS 4.1.1 -> 4.3.1 = errorInfo: [XENAPI_PLUGIN_FAILURE,
>>>>
>>> getSnapshotSize, CommandException, 2] && vhd-util: libvhd::vhd_open:
>>> failed
>>> to open
>>> /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> VHD-3d732e11-8697-4a45-9518-44d4154ee6d6:
>>> -2
>>>
>>>> Date: 7 Oct 2014 19:04:15 GMT+2
>>>> To: "[email protected]" <[email protected]>
>>>> Reply-To: [email protected]
>>>>
>>>> Hi guys,
>>>>
>>>> After upgrading ACS from 4.1.1 to 4.3.1, upgrading XS 6.0.2 to the latest
>>>> hotfixes, and manually replacing NFSSR.py with the one from ACS 4.3.1, I
>>>> see this error, amongst other errors.
>>>
>>>> The snapshot is created successfully. I have created templates from it
>>>> and used them to create new VMs, but there is an error in the log. (Also,
>>>> snapshots are not working on volumes which had snapshots before the
>>>> upgrade, but I will deal with that issue later. The two issues might even
>>>> be related.)
>>>
>>>> What should I check to provide you with more info about the issue, so we
>>>> can address it?
>>>
>>>> Google search came up empty for this issue. :-( Please help me sort this
>>>> out.
>>>
>>>> To me, it looks like Xen removed
>>>> /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/VHD-3d732e11-8697-4a45-9518-44d4154ee6d6
>>>> after the copy to secondary storage but before CloudStack ordered the
>>>> removal.
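>>>>
>>>> The vhd_open failure in the subject line (-2) looks like -ENOENT, i.e.
>>>> the device node is already gone by the time vhd-util runs. As a rough,
>>>> hypothetical sketch of that race (this is not the actual CloudStack
>>>> plugin code), one could check whether the snapshot path seen in the logs
>>>> still exists before shelling out to vhd-util:
>>>>
>>>> import os
>>>>
>>>> def snapshot_path(sr_uuid, snapshot_uuid, is_iscsi):
>>>>     # Path layout as seen in the logs: LVM-backed SRs expose the VHD as
>>>>     # an LV under /dev, NFS SRs as a .vhd file under the SR mount point.
>>>>     if is_iscsi:
>>>>         return "/dev/VG_XenStorage-%s/VHD-%s" % (sr_uuid, snapshot_uuid)
>>>>     return "/var/run/sr-mount/%s/%s.vhd" % (sr_uuid, snapshot_uuid)
>>>>
>>>> def snapshot_still_present(sr_uuid, snapshot_uuid, is_iscsi):
>>>>     # If XenServer has already removed the snapshot, getSnapshotSize-style
>>>>     # calls on this path can only fail ("no such file", errno 2).
>>>>     return os.path.exists(snapshot_path(sr_uuid, snapshot_uuid, is_iscsi))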
>>>
>>>> Should I open a bug for it?
>>>>
>>>>
>>>> This is where I believe the snapshot was ordered:
>>>> /////////////////////////////////////
>>>> 2014-10-07 16:51:16,919 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
>>>>
>>> (http-6443-exec-42:ctx-6a64da65 ctx-e324e472) submit async job-2171,
>>> details: AsyncJobVO {id:2171, userId: 28, accountId: 30, instanceType:
>>> Snapshot, instanceId: 977, cmd:
>>> org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd,
>>> cmdInfo:
>>> {"id":"977","response":"json","sessionkey”:"XXXXX","
>>> cmdEventType":"SNAPSHOT.CREATE","ctxUserId":"28","
>>> httpmethod":"GET","_":"1412693528063","volumeid":"
>>> 72d5b956-fbaf-44c2-8aeb-df49354ddbb3","quiescevm":"
>>> false","ctxAccountId":"30","ctxStartEventId":"60842"},
>>> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
>>> result: null, initMsid: 95545481387, completeMsid: null, lastUpdated:
>>> null,
>>> lastPolled: null, created: null}
>>>
>>>> 2014-10-07 16:51:16,920 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e) Add job-2171 into job monitoring
>>>
>>>> 2014-10-07 16:51:16,920 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e) Executing AsyncJobVO {id:2171, userId: 28,
>>> accountId: 30, instanceType: Snapshot, instanceId: 977, cmd:
>>> org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd,
>>> cmdInfo:
>>> {"id":"977","response":"json","sessionkey”:"XXXXX","
>>> cmdEventType":"SNAPSHOT.CREATE","ctxUserId":"28","
>>> httpmethod":"GET","_":"1412693528063","volumeid":"
>>> 72d5b956-fbaf-44c2-8aeb-df49354ddbb3","quiescevm":"
>>> false","ctxAccountId":"30","ctxStartEventId":"60842"},
>>> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
>>> result: null, initMsid: 95545481387, completeMsid: null, lastUpdated:
>>> null,
>>> lastPolled: null, created: null}
>>>
>>>> 2014-10-07 16:51:16,920 DEBUG [c.c.a.ApiServlet]
>>>>
>>> (http-6443-exec-42:ctx-6a64da65 ctx-e324e472) ===END===  XX.XX.XX.XX --
>>> GET
>>> command=createSnapshot&volumeid=72d5b956-fbaf-44c2-
>>> 8aeb-df49354ddbb3&quiescevm=false&response=json&sessionkey=XXXXXX%3D&_=
>>> 1412693528063
>>>
>>>> 2014-10-07 16:51:16,928 DEBUG [c.c.u.AccountManagerImpl]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Access to
>>> Acct[7ff6cd6b-7400-4d44-980b-9dc3115264eb-XXXXXX] granted to
>>> Acct[7ff6cd6b-7400-4d44-980b-9dc3115264eb-XXXX] by DomainChecker
>>>
>>>> 2014-10-07 16:51:16,953 INFO  [o.a.c.a.c.u.s.CreateSnapshotCmd]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) VOLSS: createSnapshotCmd
>>> starts:1412693476953
>>>
>>>> 2014-10-07 16:51:17,290 DEBUG [c.c.a.t.Request]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 2-1824254062: Sending  {
>>> Cmd , MgmtId: 95545481387, via: 2(x2.XXX), Ver: v1, Flags: 100011,
>>> [{"org.apache.cloudstack.storage.command.CreateObjectCommand":{"data":{
>>> "org.apache.cloudstack.storage.to.SnapshotObjectTO":{
>>> "volume":{"uuid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","
>>> volumeType":"ROOT","dataStore":{"org.apache.cloudstack.
>>> storage.to.PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-
>>> 91ee-5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":
>>> 21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","
>>> volumeId":556,"vmName":"i-30-457-VM","accountId":30,"
>>> format":"VHD","id":556,"deviceId":0,"hypervisorType":"
>>> XenServer"},"dataStore":{"org.apache.cloudstack.storage.to.
>>> PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-
>>> 5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"vmName":"i-30-457-VM","name"
>>> :"MInecraftDebian_ROOT-457_20141007145115","
>>> hypervisorType":"XenServer","id":977,"quiescevm":false,"
>>> physicalSize":0}},"wait":0}}]
>>> }
>>>
>>>> 2014-10-07 16:51:17,291 DEBUG [c.c.a.t.Request]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 2-1824254062:
>>> Executing:  {
>>> Cmd , MgmtId: 95545481387, via: 2(x2.XXX), Ver: v1, Flags: 100011,
>>> [{"org.apache.cloudstack.storage.command.CreateObjectCommand":{"data":{
>>> "org.apache.cloudstack.storage.to.SnapshotObjectTO":{
>>> "volume":{"uuid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","
>>> volumeType":"ROOT","dataStore":{"org.apache.cloudstack.
>>> storage.to.PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-
>>> 91ee-5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":
>>> 21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","
>>> volumeId":556,"vmName":"i-30-457-VM","accountId":30,"
>>> format":"VHD","id":556,"deviceId":0,"hypervisorType":"
>>> XenServer"},"dataStore":{"org.apache.cloudstack.storage.to.
>>> PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-
>>> 5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"vmName":"i-30-457-VM","name"
>>> :"MInecraftDebian_ROOT-457_20141007145115","
>>> hypervisorType":"XenServer","id":977,"quiescevm":false,"
>>> physicalSize":0}},"wait":0}}]
>>> }
>>>
>>>> 2014-10-07 16:51:17,291 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-25:ctx-a5197893) Seq 2-1824254062: Executing request
>>>
>>>> 2014-10-07 16:51:18,352 DEBUG [c.c.a.m.AgentManagerImpl]
>>>>
>>> (AgentManager-Handler-7:null) Ping from 21
>>>
>>>> 2014-10-07 16:51:20,075 DEBUG [c.c.a.ApiServlet]
>>>>
>>> (http-6443-exec-34:ctx-c9163aa8) ===START===  XX.XX.XX.XX -- GET
>>> command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-
>>> c41390ea3747&response=json&sessionkey=XXXXX%3D&_=1412693532730
>>>
>>>> 2014-10-07 16:51:20,165 DEBUG [c.c.a.ApiServlet]
>>>>
>>> (http-6443-exec-34:ctx-c9163aa8 ctx-25e4cfa1) ===END===  XX.XX.XX.XX --
>>> GET
>>> command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-
>>> c41390ea3747&response=json&sessionkey=XXXXXX%3D&_=1412693532730
>>>
>>>> 2014-10-07 16:51:20,238 DEBUG [c.c.a.m.AgentManagerImpl]
>>>>
>>> (AgentManager-Handler-12:null) SeqA 27-139339: Processing Seq
>>> 27-139339:  {
>>> Cmd , MgmtId: -1, via: 27, Ver: v1, Flags: 11,
>>> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"
>>> :{"_proxyVmId":473,"_loadInfo":"{\n
>>> \"connections\": []\n}","wait":0}}] }
>>>
>>>> 2014-10-07 16:51:20,281 DEBUG [c.c.a.m.AgentManagerImpl]
>>>>
>>> (AgentManager-Handler-12:null) SeqA 27-139339: Sending Seq 27-139339:  {
>>> Ans: , MgmtId: 95545481387, via: 27, Ver: v1, Flags: 100010,
>>> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
>>>
>>>> 2014-10-07 16:51:21,767 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-25:ctx-a5197893) Seq 2-1824254062: Response Received:
>>>
>>>> 2014-10-07 16:51:21,767 DEBUG [c.c.a.t.Request]
>>>>
>>> (DirectAgent-25:ctx-a5197893) Seq 2-1824254062: Processing:  { Ans: ,
>>> MgmtId: 95545481387, via: 2, Ver: v1, Flags: 10,
>>> [{"org.apache.cloudstack.storage.command.CreateObjectAnswer":{"data":{"
>>> org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"
>>> d38b38f3-ebda-4cf7-9701-4ba22409644c","id":0,"quiescevm":false,"
>>> physicalSize":0}},"result":true,"wait":0}}]
>>> }
>>>
>>>> 2014-10-07 16:51:21,768 DEBUG [c.c.a.t.Request]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 2-1824254062: Received:
>>> {
>>> Ans: , MgmtId: 95545481387, via: 2, Ver: v1, Flags: 10, {
>>> CreateObjectAnswer } }
>>>
>>>> 2014-10-07 16:51:22,156 DEBUG [o.a.c.s.m.AncientDataMotionStrategy]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) copyAsync inspecting src type
>>> SNAPSHOT copyAsync inspecting dest type SNAPSHOT
>>>
>>>> 2014-10-07 16:51:22,201 DEBUG [c.c.a.t.Request]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 1-949601544: Sending  {
>>> Cmd
>>> , MgmtId: 95545481387, via: 1(x3.XXX), Ver: v1, Flags: 100011,
>>> [{"org.apache.cloudstack.storage.command.CopyCommand":{
>>> "srcTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"
>>> d38b38f3-ebda-4cf7-9701-4ba22409644c","volume":{"uuid"
>>> :"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","volumeType":"
>>> ROOT","dataStore":{"org.apache.cloudstack.storage.to.
>>> PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-
>>> 5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":
>>> 21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","
>>> volumeId":556,"vmName":"i-30-457-VM","accountId":30,"
>>> format":"VHD","id":556,"deviceId":0,"hypervisorType":"
>>> XenServer"},"dataStore":{"org.apache.cloudstack.storage.to.
>>> PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-
>>> 5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"vmName":"i-30-457-VM","name"
>>> :"MInecraftDebian_ROOT-457_20141007145115","
>>> hypervisorType":"XenServer","id":977,"quiescevm":false,"
>>> physicalSize":0}},"destTO":{"org.apache.cloudstack.storage.
>>> to.SnapshotObjectTO":{"path":"snapshots/30/556","volume":{"
>>> uuid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","
>>> volumeType":"ROOT","dataStore":{"org.apache.cloudstack.
>>> storage.to.PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-
>>> 91ee-5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":
>>> 21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","
>>> volumeId":556,"vmName":"i-30-457-VM","accountId":30,"
>>> format":"VHD","id":556,"deviceId":0,"hypervisorType":"
>>> XenServer"},"dataStore":{"com.cloud.agent.api.to.NfsTO":{"_
>>> url":"nfs://s6.XXX/mnt/rootPool/nfsSecondary","_role"
>>> :"Image"}},"vmName":"i-30-457-VM","name":"MInecraftDebian_
>>> ROOT-457_20141007145115","hypervisorType":"XenServer","
>>> id":977,"quiescevm":false,"physicalSize":0}},"executeInSequence":false,"
>>> options":{"fullSnapshot":"true"},"wait":21600}}]
>>> }
>>>
>>>> 2014-10-07 16:51:22,201 DEBUG [c.c.a.t.Request]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 1-949601544: Executing:
>>> {
>>> Cmd , MgmtId: 95545481387, via: 1(x3.XXX), Ver: v1, Flags: 100011,
>>> [{"org.apache.cloudstack.storage.command.CopyCommand":{
>>> "srcTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"
>>> d38b38f3-ebda-4cf7-9701-4ba22409644c","volume":{"uuid"
>>> :"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","volumeType":"
>>> ROOT","dataStore":{"org.apache.cloudstack.storage.to.
>>> PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-
>>> 5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":
>>> 21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","
>>> volumeId":556,"vmName":"i-30-457-VM","accountId":30,"
>>> format":"VHD","id":556,"deviceId":0,"hypervisorType":"
>>> XenServer"},"dataStore":{"org.apache.cloudstack.storage.to.
>>> PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-
>>> 5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"vmName":"i-30-457-VM","name"
>>> :"MInecraftDebian_ROOT-457_20141007145115","
>>> hypervisorType":"XenServer","id":977,"quiescevm":false,"
>>> physicalSize":0}},"destTO":{"org.apache.cloudstack.storage.
>>> to.SnapshotObjectTO":{"path":"snapshots/30/556","volume":{"
>>> uuid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","
>>> volumeType":"ROOT","dataStore":{"org.apache.cloudstack.
>>> storage.to.PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-
>>> 91ee-5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"
>>> s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","
>>> port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:
>>> storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-
>>> 2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":
>>> 21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","
>>> volumeId":556,"vmName":"i-30-457-VM","accountId":30,"
>>> format":"VHD","id":556,"deviceId":0,"hypervisorType":"
>>> XenServer"},"dataStore":{"com.cloud.agent.api.to.NfsTO":{"_
>>> url":"nfs://s6.XXX/mnt/rootPool/nfsSecondary","_role"
>>> :"Image"}},"vmName":"i-30-457-VM","name":"MInecraftDebian_
>>> ROOT-457_20141007145115","hypervisorType":"XenServer","
>>> id":977,"quiescevm":false,"physicalSize":0}},"executeInSequence":false,"
>>> options":{"fullSnapshot":"true"},"wait":21600}}]
>>> }
>>>
>>>> 2014-10-07 16:51:22,202 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Seq 1-949601544: Executing request
>>>
>>>> 2014-10-07 16:51:23,072 DEBUG [c.c.a.ApiServlet]
>>>>
>>> (http-6443-exec-33:ctx-7d98629a) ===START===  XX.XX.XX.XX -- GET
>>> command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-
>>> c41390ea3747&response=json&sessionkey=XXXX%3D&_=1412693535726
>>>
>>>> 2014-10-07 16:51:23,091 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Creating a not shared SR for
>>> nfs://s6.XXX/mnt/rootPool/nfsSecondary/snapshots/30/556
>>>
>>>> 2014-10-07 16:51:23,254 DEBUG [c.c.a.ApiServlet]
>>>>
>>> (http-6443-exec-33:ctx-7d98629a ctx-b1c48235) ===END===  XX.XX.XX.XX --
>>> GET
>>> command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-
>>> c41390ea3747&response=json&sessionkey=XXXXXD&_=1412693535726
>>>
>>>> 2014-10-07 16:51:23,954 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Checking
>>> 844c542f-2085-350d-8be6-2c2452eb3728 or SR
>>> 197c3580-fc82-35e5-356d-1f0909a9cfd9 on
>>> XS[f6d699aa-f58f-449e-9a51-6e965562f178-XX.XX.023]
>>>
>>>> 2014-10-07 16:51:23,965 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023
>>> OpaqueRef:72228e56-6e60-e1dc-677d-d1dd171045c7: Created a SR; UUID is
>>> 197c3580-fc82-35e5-356d-1f0909a9cfd9 device config is
>>> {serverpath=/mnt/rootPool/nfsSecondary/snapshots/30/556, server=s6.XXX}
>>>
>>>> /////////////////////////////////////
>>>>
>>>> At that time the x3 hypervisor wrote the following to its messages log:
>>>> /////////////////////////////////////
>>>> Oct  7 16:51:34 x3 multipathd: dm-24: add map (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: dm-24: add map (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: dm-25: add map (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: dm-25: add map (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: dm-26: add map (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: dm-27: add map (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: dm-27: add map (uevent)
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: tapdisk-control: init, 10 x 4k buffers
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: I/O queue driver: lio
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: tapdisk-log: started, level 0
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/
>>> VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c
>>> version: tap 0x00010003, b: 10240, a: 0, f: 0, n: 0
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: opened image
>>>>
>>> /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/
>>> VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c
>>> (1 users, state: 0x00000003, type: 4, ro)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7
>>> version: tap 0x00010003, b: 10240, a: 227, f: 1, n: 0
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: opened image
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7
>>> (1 users, state: 0x00000003, type: 4, ro)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81
>>> version: tap 0x00010003, b: 10240, a: 9422, f: 8614, n: 0
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: opened image
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81
>>> (1 users, state: 0x00000003, type: 4, ro)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66
>>> version: tap 0x00010003, b: 10240, a: 770, f: 372, n: 0
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: opened image
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66
>>> (1 users, state: 0x00000003, type: 4, ro)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: VBD CHAIN:
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/
>>> VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c:
>>> type:vhd(4) storage:lvm(3)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7:
>>> type:vhd(4) storage:lvm(3)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81:
>>> type:vhd(4) storage:lvm(3)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66:
>>> type:vhd(4) storage:lvm(3)
>>>
>>>> Oct  7 16:51:34 x3 tapdisk[1677]: bdev: capacity=41943040
>>>>
>>> sector_size=512/512 flags=0
>>>
>>>> Oct  7 16:51:34 x3 multipathd: tdg: add path (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: tdg: failed to store path info
>>>> Oct  7 16:51:34 x3 multipathd: uevent trigger error
>>>> Oct  7 16:51:34 x3 kernel: block tdg: sector-size: 512/512 capacity:
>>>>
>>> 41943040
>>>
>>>> Oct  7 16:51:34 x3 kernel: blkback: event-channel 85
>>>> Oct  7 16:51:34 x3 kernel: blkback: ring-ref 8
>>>> Oct  7 16:51:34 x3 kernel: blkback: protocol 1 (x86_32-abi)
>>>> Oct  7 16:51:34 x3 multipathd: xvda: add path (uevent)
>>>> Oct  7 16:51:34 x3 multipathd: xvda: failed to store path info
>>>> Oct  7 16:51:34 x3 multipathd: uevent trigger error
>>>> Oct  7 16:51:34 x3 kernel: blkfront: xvda: barriers enabled
>>>> Oct  7 16:51:34 x3 kernel:  xvda: xvda1 xvda2 < xvda5 >
>>>> Oct  7 16:51:35 x3 vbd.uevent[add](backend/vbd/0/51712): wrote
>>>>
>>> /xapi/0/hotplug/vbd/51712/hotplug = 'online'
>>>
>>>> Oct  7 16:51:35 x3 vbd.uevent[add](backend/vbd/0/51712): wrote
>>>>
>>> backend/vbd/0/51712/hotplug-status = 'connected'
>>>
>>>> Oct  7 16:51:37 x3 tapdisk[2033]: tapdisk-control: init, 10 x 4k buffers
>>>> Oct  7 16:51:37 x3 tapdisk[2033]: I/O queue driver: lio
>>>> Oct  7 16:51:37 x3 tapdisk[2033]: tapdisk-log: started, level 0
>>>> Oct  7 16:51:37 x3 tapdisk[2033]:
>>>>
>>> /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> 3d732e11-8697-4a45-9518-44d4154ee6d6.vhd
>>> version: tap 0x00010003, b: 10240, a: 0, f: 0, n: 87
>>>
>>>> Oct  7 16:51:37 x3 tapdisk[2033]: opened image
>>>>
>>> /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> 3d732e11-8697-4a45-9518-44d4154ee6d6.vhd
>>> (1 users, state: 0x00000001, type: 4, rw)
>>>
>>>> Oct  7 16:51:37 x3 tapdisk[2033]: VBD CHAIN:
>>>> Oct  7 16:51:37 x3 tapdisk[2033]:
>>>>
>>> /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> 3d732e11-8697-4a45-9518-44d4154ee6d6.vhd:
>>> type:vhd(4) storage:nfs(1)
>>>
>>>> Oct  7 16:51:37 x3 tapdisk[2033]: bdev: capacity=41943040
>>>>
>>> sector_size=512/512 flags=0
>>>
>>>> Oct  7 16:51:37 x3 multipathd: tdi: add path (uevent)
>>>> Oct  7 16:51:37 x3 multipathd: tdi: failed to store path info
>>>> Oct  7 16:51:37 x3 multipathd: uevent trigger error
>>>> Oct  7 16:51:37 x3 kernel: block tdi: sector-size: 512/512 capacity:
>>>>
>>> 41943040
>>>
>>>> Oct  7 16:51:37 x3 multipathd: xvdb: add path (uevent)
>>>> Oct  7 16:51:37 x3 multipathd: xvdb: failed to store path info
>>>> Oct  7 16:51:37 x3 multipathd: uevent trigger error
>>>> Oct  7 16:51:37 x3 kernel: blkback: event-channel 87
>>>> Oct  7 16:51:37 x3 kernel: blkback: ring-ref 69
>>>> Oct  7 16:51:37 x3 kernel: blkback: protocol 1 (x86_32-abi)
>>>> Oct  7 16:51:37 x3 kernel: blkfront: xvdb: barriers enabled
>>>> Oct  7 16:51:37 x3 kernel:  xvdb: unknown partition table
>>>> Oct  7 16:51:37 x3 vbd.uevent[add](backend/vbd/0/51728): wrote
>>>>
>>> /xapi/0/hotplug/vbd/51728/hotplug = 'online'
>>>
>>>> Oct  7 16:51:37 x3 vbd.uevent[add](backend/vbd/0/51728): wrote
>>>>
>>> backend/vbd/0/51728/hotplug-status = 'connected'
>>>
>>>> Oct  7 16:51:38 x3 sparse_dd: [ info|x3.XXX|0||sparse_dd] streaming from
>>>>
>>> raw /dev/xvda using BAT from
>>> /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/
>>> VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c
>>> (relative to None) to raw file:///dev/xvdb
>>>
>>>> /////////////////////////////////////
>>>>
>>>> This is the actual error, which I believe comes after the copy of the
>>>> snapshot is made to secondary storage:
>>>
>>>> /////////////////////////////////////
>>>> 2014-10-07 17:58:49,920 WARN  [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Task failed! Task record:
>>>   uuid: 1258a9a9-1b7b-027c-5bfd-a210407b8ba5
>>>
>>>>            nameLabel: Async.host.call_plugin
>>>>      nameDescription:
>>>>    allowedOperations: []
>>>>    currentOperations: {}
>>>>              created: Tue Oct 07 17:58:48 CEST 2014
>>>>             finished: Tue Oct 07 17:58:49 CEST 2014
>>>>               status: failure
>>>>           residentOn: com.xensource.xenapi.Host@c612622b
>>>>             progress: 1.0
>>>>                 type: <none/>
>>>>               result:
>>>>            errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize,
>>>>
>>> CommandException, 2]
>>>
>>>>          otherConfig: {}
>>>>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>>>>             subtasks: []
>>>>
>>>> 2014-10-07 17:58:49,929 WARN  [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) callHostPlugin failed for cmd:
>>> getSnapshotSize with args snapshotUuid:
>>> 3d732e11-8697-4a45-9518-44d4154ee6d6, isISCSI: true,
>>> primaryStorageSRUuid:
>>> 197c3580-fc82-35e5-356d-1f0909a9cfd9,  due to Task failed! Task record:
>>>               uuid: 1258a9a9-1b7b-027c-5bfd-a210407b8ba5
>>>
>>>>            nameLabel: Async.host.call_plugin
>>>>      nameDescription:
>>>>    allowedOperations: []
>>>>    currentOperations: {}
>>>>              created: Tue Oct 07 17:58:48 CEST 2014
>>>>             finished: Tue Oct 07 17:58:49 CEST 2014
>>>>               status: failure
>>>>           residentOn: com.xensource.xenapi.Host@c612622b
>>>>             progress: 1.0
>>>>                 type: <none/>
>>>>               result:
>>>>            errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize,
>>>>
>>> CommandException, 2]
>>>
>>>>          otherConfig: {}
>>>>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>>>>             subtasks: []
>>>>
>>>> Task failed! Task record:                 uuid:
>>>>
>>> 1258a9a9-1b7b-027c-5bfd-a210407b8ba5
>>>
>>>>            nameLabel: Async.host.call_plugin
>>>>      nameDescription:
>>>>    allowedOperations: []
>>>>    currentOperations: {}
>>>>              created: Tue Oct 07 17:58:48 CEST 2014
>>>>             finished: Tue Oct 07 17:58:49 CEST 2014
>>>>               status: failure
>>>>           residentOn: com.xensource.xenapi.Host@c612622b
>>>>             progress: 1.0
>>>>                 type: <none/>
>>>>               result:
>>>>            errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize,
>>>>
>>> CommandException, 2]
>>>
>>>>          otherConfig: {}
>>>>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>>>>             subtasks: []
>>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.checkForSuccess(
>>> CitrixResourceBase.java:3764)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.
>>> callHostPluginAsync(CitrixResourceBase.java:3971)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.
>>> getSnapshotSize(XenServerStorageProcessor.java:1161)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.
>>> backupSnapshot(XenServerStorageProcessor.java:1348)
>>>
>>>>        at
>>>>
>>> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(
>>> StorageSubsystemCommandHandlerBase.java:90)
>>>
>>>>        at
>>>>
>>> com.cloud.storage.resource.StorageSubsystemCommandHandler
>>> Base.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:50)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(
>>> CitrixResourceBase.java:609)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(
>>> XenServer56Resource.java:59)
>>>
>>>>        at
>>>>
>>> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(
>>> DirectAgentAttache.java:216)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(
>>> ManagedContextRunnable.java:49)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(
>>> DefaultManagedContext.java:56)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>> callWithContext(DefaultManagedContext.java:103)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>> runWithContext(DefaultManagedContext.java:53)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(
>>> ManagedContextRunnable.java:46)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>
>>>>        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>        at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>>> ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>>> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(
>>> ThreadPoolExecutor.java:1146)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>> ThreadPoolExecutor.java:615)
>>>
>>>>        at java.lang.Thread.run(Thread.java:701)
>>>> 2014-10-07 17:58:49,945 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023
>>> OpaqueRef:72228e56-6e60-e1dc-677d-d1dd171045c7: Removing SR
>>>
>>>> 2014-10-07 17:58:49,976 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023
>>> OpaqueRef:417a2b3b-8042-359e-6394-180cbf82f600: Unplugging pbd
>>>
>>>> 2014-10-07 17:58:50,423 DEBUG [c.c.a.ApiServlet]
>>>>
>>> (http-6443-exec-42:ctx-b31e9f9b) ===START===  XX.XX.XX.XX -- GET
>>> command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-
>>> c41390ea3747&response=json&sessionkey=vnXXXXXXX%3D&_=1412697583219
>>>
>>>> 2014-10-07 17:58:50,446 DEBUG [c.c.a.ApiServlet]
>>>>
>>> (http-6443-exec-42:ctx-b31e9f9b ctx-b14e1b8b) ===END===  XX.XX.XX.XX --
>>> GET
>>> command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-
>>> c41390ea3747&response=json&sessionkey=vnXXXXXXX%3D&_=1412697583219
>>>
>>>> 2014-10-07 17:58:50,529 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023
>>> OpaqueRef:72228e56-6e60-e1dc-677d-d1dd171045c7: Forgetting
>>>
>>>> 2014-10-07 17:58:50,535 DEBUG [c.c.s.StatsCollector]
>>>>
>>> (StatsCollector-3:ctx-222d1581) VmStatsCollector is running...
>>>
>>>> 2014-10-07 17:58:50,555 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-418:ctx-665da9a5) Seq 1-949601730: Executing request
>>>
>>>> 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 0.98
>>>
>>>> 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 7.2124999999999995
>>>
>>>> 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 1.3299999999999998
>>>
>>>> 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 0.27
>>>
>>>> 2014-10-07 17:58:51,013 WARN  [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-418:ctx-665da9a5) Error while collecting disk stats from :
>>>
>>>> You gave an invalid object reference.  The object may have recently been
>>>>
>>> deleted.  The class parameter gives the type of reference given, and the
>>> handle parameter echoes the bad value given.
>>>
>>>>        at com.xensource.xenapi.Types.checkResponse(Types.java:209)
>>>>        at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$
>>> XenServerConnection.dispatch(XenServerConnectionPool.java:909)
>>>
>>>>        at
>>>>
>>> com.xensource.xenapi.VBDMetrics.getIoReadKbs(VBDMetrics.java:210)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.
>>> getVmStats(CitrixResourceBase.java:2863)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.
>>> execute(CitrixResourceBase.java:2763)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(
>>> CitrixResourceBase.java:493)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(
>>> XenServer56Resource.java:59)
>>>
>>>>        at
>>>>
>>> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(
>>> DirectAgentAttache.java:216)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(
>>> ManagedContextRunnable.java:49)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(
>>> DefaultManagedContext.java:56)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>> callWithContext(DefaultManagedContext.java:103)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>> runWithContext(DefaultManagedContext.java:53)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(
>>> ManagedContextRunnable.java:46)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>
>>>>        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>        at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>>> ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>>> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(
>>> ThreadPoolExecutor.java:1146)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>> ThreadPoolExecutor.java:615)
>>>
>>>>        at java.lang.Thread.run(Thread.java:701)
>>>> 2014-10-07 17:58:51,036 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-418:ctx-665da9a5) Seq 1-949601730: Response Received:
>>>
>>>> 2014-10-07 17:58:51,036 DEBUG [c.c.a.t.Request]
>>>>
>>> (StatsCollector-3:ctx-222d1581) Seq 1-949601730: Received:  { Ans: ,
>>> MgmtId: 95545481387, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
>>>
>>>> 2014-10-07 17:58:51,061 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Seq 2-1824254287: Executing request
>>>
>>>> 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.53
>>>
>>>> 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 37.46
>>>
>>>> 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.62
>>>
>>>> 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 3.375
>>>
>>>> 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.54
>>>
>>>> 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 60.28
>>>
>>>> 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.63
>>>
>>>> 2014-10-07 17:58:51,660 WARN  [c.c.h.x.r.CitrixResourceBase]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Error while collecting disk stats from :
>>>
>>>> You gave an invalid object reference.  The object may have recently been
>>>>
>>> deleted.  The class parameter gives the type of reference given, and the
>>> handle parameter echoes the bad value given.
>>>
>>>>        at com.xensource.xenapi.Types.checkResponse(Types.java:209)
>>>>        at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$
>>> XenServerConnection.dispatch(XenServerConnectionPool.java:909)
>>>
>>>>        at
>>>>
>>> com.xensource.xenapi.VBDMetrics.getIoReadKbs(VBDMetrics.java:210)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.
>>> getVmStats(CitrixResourceBase.java:2863)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.
>>> execute(CitrixResourceBase.java:2763)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(
>>> CitrixResourceBase.java:493)
>>>
>>>>        at
>>>>
>>> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(
>>> XenServer56Resource.java:59)
>>>
>>>>        at
>>>>
>>> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(
>>> DirectAgentAttache.java:216)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(
>>> ManagedContextRunnable.java:49)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(
>>> DefaultManagedContext.java:56)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>> callWithContext(DefaultManagedContext.java:103)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
>>> runWithContext(DefaultManagedContext.java:53)
>>>
>>>>        at
>>>>
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(
>>> ManagedContextRunnable.java:46)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>
>>>>        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>        at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>>> ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$
>>> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(
>>> ThreadPoolExecutor.java:1146)
>>>
>>>>        at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>> ThreadPoolExecutor.java:615)
>>>
>>>>        at java.lang.Thread.run(Thread.java:701)
>>>> 2014-10-07 17:58:51,683 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-165:ctx-3c5b0bdc) Seq 2-1824254287: Response Received:
>>>
>>>> 2014-10-07 17:58:51,683 DEBUG [c.c.a.t.Request]
>>>>
>>> (StatsCollector-3:ctx-222d1581) Seq 2-1824254287: Received:  { Ans: ,
>>> MgmtId: 95545481387, via: 2, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
>>>
>>>> 2014-10-07 17:58:51,708 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-16:ctx-3d6b458a) Seq 3-1392049173: Executing request
>>>
>>>> 2014-10-07 17:58:51,733 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Successfully destroyed snapshot on volume:
>>> 5f54e049-7336-46bf-88be-c364624504f9 execept this current snapshot
>>> d38b38f3-ebda-4cf7-9701-4ba22409644c
>>>
>>>> 2014-10-07 17:58:51,734 DEBUG [c.c.a.m.DirectAgentAttache]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Seq 1-949601544: Response Received:
>>>
>>>> 2014-10-07 17:58:51,734 DEBUG [c.c.a.t.Request]
>>>>
>>> (DirectAgent-307:ctx-a2d8b59b) Seq 1-949601544: Processing:  { Ans: ,
>>> MgmtId: 95545481387, via: 1, Ver: v1, Flags: 10,
>>> [{"org.apache.cloudstack.storage.command.CopyCmdAnswer"
>>> :{"newData":{"org.apache.cloudstack.storage.to.
>>> SnapshotObjectTO":{"path":"snapshots/30/556/3d732e11-
>>> 8697-4a45-9518-44d4154ee6d6","id":0,"quiescevm":false,"
>>> physicalSize":0}},"result":true,"wait":0}}]
>>> }
>>>
>>>> 2014-10-07 17:58:51,734 DEBUG [c.c.a.t.Request]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 1-949601544: Received:  {
>>> Ans: , MgmtId: 95545481387, via: 1, Ver: v1, Flags: 10, { CopyCmdAnswer
>>> } }
>>>
>>>> 2014-10-07 17:58:51,784 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
>>>>
>>> (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Complete async job-2171,
>>> jobStatus: SUCCEEDED, resultCode: 0, result:
>>> org.apache.cloudstack.api.response.SnapshotResponse/
>>> snapshot/{"id":"a5df5f35-c5f6-4644-a569-9bd4a26e500d","
>>> account":"martinG","domainid":"1268ca9a-7ec5-4154-9839-
>>> 9b90699dfa4b","domain":"ROOT","snapshottype":"MANUAL","
>>> volumeid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","
>>> volumename":"ROOT-457","volumetype":"ROOT","created":"
>>> 2014-10-07T16:51:15+0200","name":"MInecraftDebian_ROOT-
>>> 457_20141007145115","intervaltype":"MANUAL","state"
>>> :"Allocated","tags":[],"revertable":false}
>>>
>>>> /////////////////////////////////////
>>>>
>>>> At that time the x3 hypervisor wrote this to its messages log:
>>>> /////////////////////////////////////
>>>> Oct  7 17:58:42 x3 multipathd: xvdb: remove path (uevent)
>>>> Oct  7 17:58:42 x3 multipathd: xvdb: spurious uevent, path not in
>>>> pathvec
>>>> Oct  7 17:58:42 x3 multipathd: uevent trigger error
>>>> Oct  7 17:58:42 x3 vbd.uevent[remove](backend/vbd/0/51728): removed
>>>>
>>> /xapi/0/hotplug/vbd/51728/hotplug
>>>
>>>> Oct  7 17:58:42 x3 vbd.uevent[remove](backend/vbd/0/51728): removed
>>>>
>>> backend/vbd/0/51728/hotplug-status
>>>
>>>> Oct  7 17:58:43 x3 multipathd: tdi: remove path (uevent)
>>>> Oct  7 17:58:43 x3 multipathd: tdi: spurious uevent, path not in pathvec
>>>> Oct  7 17:58:43 x3 multipathd: uevent trigger error
>>>> Oct  7 17:58:43 x3 tapdisk[2033]: gaps written/skipped: 0/0
>>>> Oct  7 17:58:43 x3 tapdisk[2033]:
>>>>
>>> /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> 3d732e11-8697-4a45-9518-44d4154ee6d6.vhd:
>>> b: 10240, a: 9510, f: 9510, n: 39029120
>>>
>>>> Oct  7 17:58:43 x3 tapdisk[2033]: closed image
>>>>
>>> /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> 3d732e11-8697-4a45-9518-44d4154ee6d6.vhd
>>> (0 users, state: 0x00000000, type: 4)
>>>
>>>> Oct  7 17:58:43 x3 tapdisk[2033]: tapdisk-log: closing after 0 errors
>>>> Oct  7 17:58:43 x3 tapdisk[2033]: tapdisk-syslog: 11 messages, 893
>>>>
>>> bytes, xmits: 12, failed: 0, dropped: 0
>>>
>>>> Oct  7 17:58:43 x3 tapdisk[2033]: tapdisk-control: draining 1
>>>> connections
>>>> Oct  7 17:58:43 x3 tapdisk[2033]: tapdisk-control: done
>>>> Oct  7 17:58:44 x3 ovs-vswitchd: 46807|timeval|WARN|9 ms poll interval
>>>>
>>> (0 ms user, 0 ms system) is over 14 times the weighted mean interval 1 ms
>>> (54312674 samples)
>>>
>>>> Oct  7 17:58:44 x3 ovs-vswitchd: 46808|coverage|INFO|Skipping details of
>>>>
>>> duplicate event coverage for hash=735285a1 in epoch 54312674
>>>
>>>> Oct  7 17:58:44 x3 multipathd: xvda: remove path (uevent)
>>>> Oct  7 17:58:44 x3 multipathd: xvda: spurious uevent, path not in
>>>> pathvec
>>>> Oct  7 17:58:44 x3 multipathd: uevent trigger error
>>>> Oct  7 17:58:44 x3 vbd.uevent[remove](backend/vbd/0/51712): removed
>>>>
>>> /xapi/0/hotplug/vbd/51712/hotplug
>>>
>>>> Oct  7 17:58:44 x3 vbd.uevent[remove](backend/vbd/0/51712): removed
>>>>
>>> backend/vbd/0/51712/hotplug-status
>>>
>>>> Oct  7 17:58:45 x3 multipathd: tdg: remove path (uevent)
>>>> Oct  7 17:58:45 x3 multipathd: tdg: spurious uevent, path not in pathvec
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
>>>> Oct  7 17:58:45 x3 multipathd: uevent trigger error
>>>> Oct  7 17:58:45 x3 tapdisk[1677]:
>>>>
>>> /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/
>>> VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c:
>>> b: 10240, a: 0, f: 0, n: 0
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: closed image
>>>>
>>> /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/
>>> VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c
>>> (0 users, state: 0x00000002, type: 4)
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
>>>> Oct  7 17:58:45 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7:
>>> b: 10240, a: 227, f: 1, n: 0
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: closed image
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7
>>> (0 users, state: 0x00000002, type: 4)
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
>>>> Oct  7 17:58:45 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81:
>>> b: 10240, a: 9422, f: 8614, n: 0
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: closed image
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81
>>> (0 users, state: 0x00000002, type: 4)
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
>>>> Oct  7 17:58:45 x3 tapdisk[1677]:
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66:
>>> b: 10240, a: 770, f: 372, n: 0
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: closed image
>>>>
>>> /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--
>>> f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66
>>> (0 users, state: 0x00000002, type: 4)
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: tapdisk-log: closing after 0 errors
>>>> Oct  7 17:58:46 x3 tapdisk[1677]: tapdisk-syslog: 14 messages, 1425
>>>>
>>> bytes, xmits: 15, failed: 0, dropped: 0
>>>
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: tapdisk-control: draining 1
>>>> connections
>>>> Oct  7 17:58:45 x3 tapdisk[1677]: tapdisk-control: done
>>>> Oct  7 17:58:46 x3 multipathd: dm-24: remove map (uevent)
>>>> Oct  7 17:58:46 x3 multipathd: dm-24: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:46 x3 multipathd: dm-24: remove map (uevent)
>>>> Oct  7 17:58:46 x3 multipathd: dm-24: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:46 x3 multipathd: dm-27: remove map (uevent)
>>>> Oct  7 17:58:46 x3 multipathd: dm-27: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:46 x3 multipathd: dm-27: remove map (uevent)
>>>> Oct  7 17:58:46 x3 multipathd: dm-27: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:46 x3 multipathd: dm-25: remove map (uevent)
>>>> Oct  7 17:58:46 x3 multipathd: dm-25: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:47 x3 multipathd: dm-25: remove map (uevent)
>>>> Oct  7 17:58:47 x3 multipathd: dm-25: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:47 x3 multipathd: dm-26: remove map (uevent)
>>>> Oct  7 17:58:47 x3 multipathd: dm-26: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:47 x3 multipathd: dm-26: remove map (uevent)
>>>> Oct  7 17:58:47 x3 multipathd: dm-26: devmap not registered, can't
>>>> remove
>>>> Oct  7 17:58:49 x3 vhd-util: libvhd::vhd_open: failed to open
>>>>
>>> /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/
>>> VHD-3d732e11-8697-4a45-9518-44d4154ee6d6:
>>> -2
>>>
>>>> /////////////////////////////////////
>>>>
>>>> vhd-util binaries on x3:
>>>>
>>>> [root@x3 log]# find / -name vhd-util
>>>> /usr/sbin/vhd-util
>>>> /opt/xensource/bin/vhd-util
>>>> /opt/cloud/bin/vhd-util
>>>> find: /proc/7815/net: Invalid argument
>>>> [root@x3 log]# md5sum /usr/sbin/vhd-util
>>>> c849eae34a76f18163ffeea7ae1008fe  /usr/sbin/vhd-util
>>>> [root@x3 log]# md5sum /opt/xensource/bin/vhd-util
>>>> 2f3b434842d25d9672cc0a75d103ae90  /opt/xensource/bin/vhd-util
>>>> [root@x3 log]# md5sum /opt/cloud/bin/vhd-util
>>>> 2f3b434842d25d9672cc0a75d103ae90  /opt/cloud/bin/vhd-util
>>>>
>>>> Also in XenCenter I can still see
>>>>
>>> MInecraftDebian_ROOT-457_20141007145115.
>>>
>>>> Thank you for your time.
>>>> Regards,
>>>> F.
>>>>
>>>>
>>>
>>>
>
>
