Hi Andrei:

I tested the vdi-resize with the local cache disabled on a cloned VDI and
it worked fine. The problem happens only if the local-cache flag is
enabled.
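
For anyone hitting the same issue, here is a minimal sketch of that workaround. This is only an illustration, not a tested procedure: it assumes the `allow-caching` VDI parameter (set via `xe vdi-param-set`) is what controls local caching, it reuses the VDI UUID from the log below as a placeholder, and it just prints the xe commands instead of running them:

```shell
#!/bin/sh
# Hypothetical workaround sketch: disable local caching on the cloned
# VDI before resizing it. The UUID is a placeholder taken from the log;
# substitute your own. This script only prints the commands.
VDI_UUID="5d095db5-4135-47f7-8118-3d9449965799"

# xe vdi-resize accepts a size in bytes (or a suffixed value like 40GiB);
# 40 GiB in bytes, matching the '42949672960' seen in the vdi_resize log:
SIZE_BYTES=$((40 * 1024 * 1024 * 1024))

echo "xe vdi-param-set uuid=$VDI_UUID allow-caching=false"
echo "xe vdi-resize uuid=$VDI_UUID disk-size=$SIZE_BYTES"
```

Re-enabling caching afterwards would presumably hit the same size mismatch again, since the cache image is snapshotted from the unresized parent.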

Cheers

On Fri, Mar 25, 2011 at 10:21 AM, Andrei Lifchits
<[email protected]> wrote:
> Hi Marco,
>
> Thanks for providing the info. I was able to reproduce this, and it's
> actually behaving the way it's supposed to: resizing of cloned images is
> simply unsupported. What happens is that local cache images are
> automatically snapshotted off the parent VHD, which is never affected by
> the resize operation, so the virtual size of the cache image no longer
> matches that of the resized original. We should probably explicitly
> disallow vdi-resize in such situations - sorry for the confusion!
>
> Andrei
>
> -----Original Message-----
> From: Marco Sinhoreli [mailto:[email protected]]
> Sent: Thursday, March 24, 2011 9:30 PM
> To: Andrei Lifchits
> Cc: Jonathan Davies; xen-api
> Subject: Re: [Xen-API] Resizing VDI with Local Caching enabled
>
> Hi Andrei:
>
> I have 4 hosts in the pool. I did it on the pool master.
>
> If you need more info, please let me know.
>
> Thanks
>
>
>
> On Thu, Mar 24, 2011 at 4:15 PM, Andrei Lifchits <[email protected]> 
> wrote:
>> Hi Marco,
>>
>> Would you mind also providing detailed repro steps for this? How many hosts 
>> do you have in your pool? Which host did you do it on?
>>
>> Thanks,
>> Andrei
>>
>> -----Original Message-----
>> From: Marco Sinhoreli [mailto:[email protected]]
>> Sent: Thursday, March 24, 2011 5:04 PM
>> To: Jonathan Davies
>> Cc: Andrei Lifchits; xen-api
>> Subject: Re: [Xen-API] Resizing VDI with Local Caching enabled
>>
>> Hi Jonathan:
>>
>> Below is the content of /var/log/SMlog:
>>
>> <log>
>>
>> [6667] 2011-03-24 13:54:15.913428       ['/usr/sbin/tap-ctl', 'stats',
>> '-p', '28818', '-m', '0'] [6667] 2011-03-24 13:54:15.917045        = 0
>> [6904] 2011-03-24 13:55:11.249673       ['/usr/sbin/tap-ctl', 'list']
>> [6904] 2011-03-24 13:55:11.253768        = 0 [6904] 2011-03-24
>> 13:55:11.259516       ['/usr/sbin/tap-ctl', 'stats', '-p', '26360',
>> '-m', '2'] [6904] 2011-03-24 13:55:11.263313        = 0 [6904]
>> 2011-03-24 13:55:11.263955       ['/usr/sbin/tap-ctl', 'stats', '-p',
>> '28818', '-m', '0'] [6904] 2011-03-24 13:55:11.267715        = 0
>> [7257] 2011-03-24 13:56:06.571060       ['/usr/sbin/tap-ctl', 'list']
>> [7257] 2011-03-24 13:56:06.574950        = 0 [7257] 2011-03-24
>> 13:56:06.580724       ['/usr/sbin/tap-ctl', 'stats', '-p', '26360',
>> '-m', '2'] [7257] 2011-03-24 13:56:06.584472        = 0 [7257]
>> 2011-03-24 13:56:06.585132       ['/usr/sbin/tap-ctl', 'stats', '-p',
>> '28818', '-m', '0'] [7257] 2011-03-24 13:56:06.588925        = 0
>> [7538] 2011-03-24 13:57:01.902472       ['/usr/sbin/tap-ctl', 'list']
>> [7538] 2011-03-24 13:57:01.906591        = 0 [7538] 2011-03-24
>> 13:57:01.912364       ['/usr/sbin/tap-ctl', 'stats', '-p', '26360',
>> '-m', '2'] [7538] 2011-03-24 13:57:01.916432        = 0 [7538]
>> 2011-03-24 13:57:01.917099       ['/usr/sbin/tap-ctl', 'stats', '-p',
>> '28818', '-m', '0'] [7538] 2011-03-24 13:57:01.921214        = 0
>> [7931] 2011-03-24 13:57:57.242696       ['/usr/sbin/tap-ctl', 'list']
>> [7931] 2011-03-24 13:57:57.246488        = 0 [7931] 2011-03-24
>> 13:57:57.252225       ['/usr/sbin/tap-ctl', 'stats', '-p', '26360',
>> '-m', '2'] [7931] 2011-03-24 13:57:57.256070        = 0 [7931]
>> 2011-03-24 13:57:57.256718       ['/usr/sbin/tap-ctl', 'stats', '-p',
>> '28818', '-m', '0'] [7931] 2011-03-24 13:57:57.260511        = 0
>> [8329] 2011-03-24 13:58:52.568972       ['/usr/sbin/tap-ctl', 'list']
>> [8329] 2011-03-24 13:58:52.573106        = 0 [8329] 2011-03-24
>> 13:58:52.578847       ['/usr/sbin/tap-ctl', 'stats', '-p', '26360',
>> '-m', '2'] [8329] 2011-03-24 13:58:52.582838        = 0 [8329]
>> 2011-03-24 13:58:52.583487       ['/usr/sbin/tap-ctl', 'stats', '-p',
>> '28818', '-m', '0'] [8329] 2011-03-24 13:58:52.587380        = 0
>> [8523] 2011-03-24 13:59:47.905467       ['/usr/sbin/tap-ctl', 'list']
>> [8523] 2011-03-24 13:59:47.909332        = 0 [8523] 2011-03-24
>> 13:59:47.915058       ['/usr/sbin/tap-ctl', 'stats', '-p', '26360',
>> '-m', '2'] [8523] 2011-03-24 13:59:47.918973        = 0 [8523]
>> 2011-03-24 13:59:47.919625       ['/usr/sbin/tap-ctl', 'stats', '-p',
>> '28818', '-m', '0'] [8523] 2011-03-24 13:59:47.923533        = 0
>> [8766] 2011-03-24 14:00:16.874228       lock: acquired
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [8766] 2011-03-24 14:00:16.876175       ['/usr/sbin/td-util', 'query',
>> 'vhd', '-vpf',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/5d095db5-4135-
>> 47f7-8118-3d9449965799.vhd'] [8766] 2011-03-24 14:00:16.903674
>> SUCCESS [8766] 2011-03-24 14:00:16.904485       lock: released
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [8766] 2011-03-24 14:00:16.904600       vdi_deactivate {'sr_uuid':
>> 'b5631d2b-7931-c93c-39da-5a20978e823b', 'subtask_of':
>> 'OpaqueRef:c7c8309d-f5fb-a97c-6443-a8b1f4b3dabe', 'vdi_ref':
>> 'OpaqueRef:c602d033-57a7-bf8f-7d5d-ca76e2520402', 'vdi_on_boot':
>> 'persist', 'args': [], 'vdi_location':
>> '5d095db5-4135-47f7-8118-3d9449965799', 'host_ref':
>> 'OpaqueRef:0ed3e74b-aee7-e262-61cd-fe18a9a6d360', 'session_ref':
>> 'OpaqueRef:fcea3677-4e21-e61b-a171-4369039ffe22', 'device_config':
>> {'options': '', 'SRmaster': 'true', 'serverpath':
>> '/vol/vol6/orquestra', 'server': 'riofd06.globoi.com'}, 'command':
>> 'vdi_deactivate', 'vdi_allow_caching': 'true', 'sr_ref':
>> 'OpaqueRef:e5bbc8cc-f3d5-3cc5-3e11-2c68797c8c45', 'local_cache_sr':
>> '5dea02ae-a488-829b-92d2-3eb5c288b90f', 'vdi_uuid':
>> '5d095db5-4135-47f7-8118-3d9449965799'}
>> [8766] 2011-03-24 14:00:16.904773       blktap2.deactivate [8766]
>> 2011-03-24 14:00:16.904855       lock: acquired
>> /var/lock/sm/5d095db5-4135-47f7-8118-3d9449965799/vdi
>> [8766] 2011-03-24 14:00:16.913169       ['/usr/sbin/tap-ctl', 'list',
>> '-m', '2'] [8766] 2011-03-24 14:00:16.916386        = 0 [8766]
>> 2011-03-24 14:00:16.916680       ['/usr/sbin/tap-ctl', 'close', '-p',
>> '26360', '-m', '2'] [8766] 2011-03-24 14:00:16.992138        = 0
>> [8766] 2011-03-24 14:00:16.992272       ['/usr/sbin/tap-ctl',
>> 'detach', '-p', '26360', '-m', '2'] [8766] 2011-03-24 14:00:16.996854
>> = 0 [8766] 2011-03-24 14:00:16.997078       ['/usr/sbin/tap-ctl',
>> 'free', '-m', '2'] [8766] 2011-03-24 14:00:16.999694        = 0 [8766]
>> 2011-03-24 14:00:16.999870       tap.deactivate: Shut down
>> Tapdisk(vhd:/var/run/sr-mount/5dea02ae-a488-829b-92d2-3eb5c288b90f/5d0
>> 95db5-4135-47f7-8118-3d9449965799.vhdcache,
>> pid=26360, minor=2, state=R)
>> [8766] 2011-03-24 14:00:17.039695       ['/usr/sbin/vhd-util',
>> 'query', '--debug', '-p', '-n',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/5d095db5-4135-
>> 47f7-8118-3d9449965799.vhd'] [8766] 2011-03-24 14:00:17.052353
>> SUCCESS [8766] 2011-03-24 14:00:17.058312       Tearing down the cache
>> [8766] 2011-03-24 14:00:17.059648       ['/usr/sbin/td-util', 'query',
>> 'vhd', '-vpf',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/7dde9a41-45bf-
>> 4a4a-a17c-10b9ca93c41f.vhd'] [8766] 2011-03-24 14:00:17.102206
>> SUCCESS [8766] 2011-03-24 14:00:17.449727       lock: acquired
>> /var/lock/sm/7dde9a41-45bf-4a4a-a17c-10b9ca93c41f/cachesetup
>> [8766] 2011-03-24 14:00:17.449808       lock: released
>> /var/lock/sm/7dde9a41-45bf-4a4a-a17c-10b9ca93c41f/cachesetup
>> [8766] 2011-03-24 14:00:17.449863       lock: closed
>> /var/lock/sm/7dde9a41-45bf-4a4a-a17c-10b9ca93c41f/cachesetup
>> [8766] 2011-03-24 14:00:17.449926       lock: closed
>> /var/lock/sm/5dea02ae-a488-829b-92d2-3eb5c288b90f/sr
>> [8766] 2011-03-24 14:00:17.734256       ['/usr/sbin/td-util', 'query',
>> 'vhd', '-vpf',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/5d095db5-4135-
>> 47f7-8118-3d9449965799.vhd'] [8766] 2011-03-24 14:00:17.746639
>> SUCCESS [8766] 2011-03-24 14:00:17.762216       Removed host key
>> host_OpaqueRef:0ed3e74b-aee7-e262-61cd-fe18a9a6d360 for
>> 5d095db5-4135-47f7-8118-3d9449965799
>> [8766] 2011-03-24 14:00:17.762311       lock: released
>> /var/lock/sm/5d095db5-4135-47f7-8118-3d9449965799/vdi
>> [8766] 2011-03-24 14:00:17.762378       lock: closed
>> /var/lock/sm/5d095db5-4135-47f7-8118-3d9449965799/vdi
>> [8766] 2011-03-24 14:00:17.762456       lock: closed
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [8766] 2011-03-24 14:00:17.762628       lock: closed
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [8822] 2011-03-24 14:00:17.884668       lock: acquired
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [8822] 2011-03-24 14:00:17.886117       ['/usr/sbin/td-util', 'query',
>> 'vhd', '-vpf',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/5d095db5-4135-
>> 47f7-8118-3d9449965799.vhd'] [8822] 2011-03-24 14:00:17.902571
>> SUCCESS [8822] 2011-03-24 14:00:17.903878       vdi_detach {'sr_uuid':
>> 'b5631d2b-7931-c93c-39da-5a20978e823b', 'subtask_of':
>> 'OpaqueRef:c7c8309d-f5fb-a97c-6443-a8b1f4b3dabe', 'vdi_ref':
>> 'OpaqueRef:c602d033-57a7-bf8f-7d5d-ca76e2520402', 'vdi_on_boot':
>> 'persist', 'args': [], 'vdi_location':
>> '5d095db5-4135-47f7-8118-3d9449965799', 'host_ref':
>> 'OpaqueRef:0ed3e74b-aee7-e262-61cd-fe18a9a6d360', 'session_ref':
>> 'OpaqueRef:28d24a42-044e-091a-6b31-a49a5b80b565', 'device_config':
>> {'options': '', 'SRmaster': 'true', 'serverpath':
>> '/vol/vol6/orquestra', 'server': 'riofd06.globoi.com'}, 'command':
>> 'vdi_detach', 'vdi_allow_caching': 'true', 'sr_ref':
>> 'OpaqueRef:e5bbc8cc-f3d5-3cc5-3e11-2c68797c8c45', 'local_cache_sr':
>> '5dea02ae-a488-829b-92d2-3eb5c288b90f', 'vdi_uuid':
>> '5d095db5-4135-47f7-8118-3d9449965799'}
>> [8822] 2011-03-24 14:00:17.904166       lock: closed
>> /var/lock/sm/5d095db5-4135-47f7-8118-3d9449965799/vdi
>> [8822] 2011-03-24 14:00:17.904285       lock: released
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [8822] 2011-03-24 14:00:17.904565       lock: closed
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [8994] 2011-03-24 14:00:43.246717       ['/usr/sbin/tap-ctl', 'list']
>> [8994] 2011-03-24 14:00:43.250366        = 0 [8994] 2011-03-24
>> 14:00:43.256244       ['/usr/sbin/tap-ctl', 'stats', '-p', '28818',
>> '-m', '0'] [8994] 2011-03-24 14:00:43.260298        = 0 [9095]
>> 2011-03-24 14:01:12.462263       lock: acquired
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [9095] 2011-03-24 14:01:12.464090       ['/usr/sbin/td-util', 'query',
>> 'vhd', '-vpf',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/5d095db5-4135-
>> 47f7-8118-3d9449965799.vhd'] [9095] 2011-03-24 14:01:12.481012
>> SUCCESS [9095] 2011-03-24 14:01:12.482403       lock: released
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [9095] 2011-03-24 14:01:12.482584       vdi_resize {'sr_uuid':
>> 'b5631d2b-7931-c93c-39da-5a20978e823b', 'subtask_of':
>> 'OpaqueRef:4e9f7429-4c78-35ca-3c35-aa2ba7e87b03', 'vdi_ref':
>> 'OpaqueRef:c602d033-57a7-bf8f-7d5d-ca76e2520402', 'vdi_on_boot':
>> 'persist', 'args': ['42949672960'], 'vdi_location':
>> '5d095db5-4135-47f7-8118-3d9449965799', 'host_ref':
>> 'OpaqueRef:0ed3e74b-aee7-e262-61cd-fe18a9a6d360', 'session_ref':
>> 'OpaqueRef:f7ef4d98-e63d-d4cc-ea06-c1eefd721779', 'device_config':
>> {'options': '', 'SRmaster': 'true', 'serverpath':
>> '/vol/vol6/orquestra', 'server': 'riofd06.globoi.com'}, 'command':
>> 'vdi_resize', 'vdi_allow_caching': 'true', 'sr_ref':
>> 'OpaqueRef:e5bbc8cc-f3d5-3cc5-3e11-2c68797c8c45', 'local_cache_sr':
>> '5dea02ae-a488-829b-92d2-3eb5c288b90f', 'vdi_uuid':
>> '5d095db5-4135-47f7-8118-3d9449965799'}
>> [9095] 2011-03-24 14:01:12.482785       ['/usr/sbin/vhd-util',
>> 'resize', '--debug', '-s', '40960', '-n',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/5d095db5-4135-
>> 47f7-8118-3d9449965799.vhd', '-j',
>> '.journal-5d095db5-4135-47f7-8118-3d9449965799']
>> [9095] 2011-03-24 14:01:12.702535       SUCCESS [9095] 2011-03-24
>> 14:01:12.702738       ['/usr/sbin/vhd-util', 'query', '--debug', '-v',
>> '-n',
>> '/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/5d095db5-4135-
>> 47f7-8118-3d9449965799.vhd'] [9095] 2011-03-24 14:01:12.716394
>> SUCCESS [9095] 2011-03-24 14:01:12.749430       _db_update_sm_config:
>> 5d095db5-4135-47f7-8118-3d9449965799 del sm-config:vhd-parent [9095]
>> 2011-03-24 14:01:12.788894       lock: closed
>> /var/lock/sm/b5631d2b-7931-c93c-39da-5a20978e823b/sr
>> [9233] 2011-03-24 14:01:38.575962       ['/usr/sbin/tap-ctl', 'list']
>> [9233] 2011-03-24 14:01:38.579594        = 0 [9233] 2011-03-24
>> 14:01:38.585400       ['/usr/sbin/tap-ctl', 'stats', '-p', '28818',
>> '-m', '0'] [9233] 2011-03-24 14:01:38.589243        = 0
>>
>> </log>
>>
>> On Thu, Mar 24, 2011 at 12:01 PM, Jonathan Davies 
>> <[email protected]> wrote:
>>> Hi Marco,
>>>
>>> No, that's not the expected behaviour...
>>>
>>> Andrei (cc'd) has not been able to reproduce this. Are you able to provide 
>>> /var/log/SMlog and some more detailed reproduction steps?
>>>
>>> Thanks,
>>>
>>> Jonathan
>>>
>>>> -----Original Message-----
>>>> From: [email protected] [mailto:xen-api-
>>>> [email protected]] On Behalf Of Marco Sinhoreli
>>>> Sent: 23 March 2011 15:03
>>>> To: xen-api
>>>> Subject: [Xen-API] Resizing VDI with Local Caching enabled
>>>>
>>>> Hello all:
>>>>
>>>> I'm trying to resize a VDI cloned from a gold image. I'm using storage
>>>> local caching in my tests. After the resize, the virtual machine won't
>>>> boot. Below is the exception:
>>>>
>>>>
>>>> <output>
>>>> [root@server ~]# vdi=$(xe vbd-list vm-uuid=fa537d17-1b07-b472-74c7-67b2a2f5561f device=xvda params=vdi-uuid --minimal)
>>>> [root@server ~]# xe vdi-resize uuid=$vdi disk-size=40GiB
>>>> [root@server ~]# echo $?
>>>> 0
>>>> [root@server ~]# xe vm-start uuid=fa537d17-1b07-b472-74c7-67b2a2f5561f
>>>> There was an SR backend failure.
>>>> status: non-zero exit
>>>> stdout:
>>>> stderr: Traceback (most recent call last):
>>>>   File "/opt/xensource/sm/NFSSR", line 253, in ?
>>>>     SRCommand.run(NFSSR, DRIVER_INFO)
>>>>   File "/opt/xensource/sm/SRCommand.py", line 246, in run
>>>>     ret = cmd.run(sr)
>>>>   File "/opt/xensource/sm/SRCommand.py", line 94, in run
>>>>     return self._run_locked(sr)
>>>>   File "/opt/xensource/sm/SRCommand.py", line 131, in _run_locked
>>>>     return self._run(sr, target)
>>>>   File "/opt/xensource/sm/SRCommand.py", line 183, in _run
>>>>     caching_params)
>>>>   File "/opt/xensource/sm/blktap2.py", line 1280, in activate
>>>>     if self._activate_locked(sr_uuid, vdi_uuid, caching_params):
>>>>   File "/opt/xensource/sm/blktap2.py", line 29, in wrapper
>>>>     ret = op(self, *args)
>>>>   File "/opt/xensource/sm/blktap2.py", line 1312, in
>>>> _activate_locked
>>>>     dev_path = self._activate(sr_uuid, vdi_uuid, caching_params)
>>>>   File "/opt/xensource/sm/blktap2.py", line 1326, in _activate
>>>>     dev_path = self.setup_cache(sr_uuid, vdi_uuid, caching_params)
>>>>   File "/opt/xensource/sm/blktap2.py", line 1468, in setup_cache
>>>>     local_sr_uuid, scratch_mode)
>>>>   File "/opt/xensource/sm/blktap2.py", line 1550, in _setup_cache
>>>>     standby=scratch_mode)
>>>>   File "/opt/xensource/sm/blktap2.py", line 711, in launch_on_tap
>>>>     raise TapdiskFailed(cls.Arg(_type, path), ctl)
>>>> blktap2.TapdiskFailed:
>>>> Tapdisk(vhd:/var/run/sr-mount/01cbd019-bd89-1296-a4bf-fd9d412f64f4/05732396-9466-4828-9926-2217a10a18c9.vhdcache):
>>>> ['/usr/sbin/tap-ctl', 'open', '-p', '21483', '-m', '3', '-a',
>>>> 'vhd:/var/run/sr-mount/01cbd019-bd89-1296-a4bf-fd9d412f64f4/05732396-9466-4828-9926-2217a10a18c9.vhdcache',
>>>> '-e', '0', '-2',
>>>> 'vhd:/var/run/sr-mount/b5631d2b-7931-c93c-39da-5a20978e823b/05732396-9466-4828-9926-2217a10a18c9.vhd']
>>>> failed: status=22, pid=21485, errmsg=
>>>> </output>
>>>>
>>>> Is this a bug, or could I have done something that isn't permitted?
>>>>
>>>> Thoughts?
>>>>
>>>>
>>>> Cheers!
>>>>
>>>> --
>>>> Marco Sinhoreli
>>>>
>>>> _______________________________________________
>>>> xen-api mailing list
>>>> [email protected]
>>>> http://lists.xensource.com/mailman/listinfo/xen-api
>>>
>>
>>
>>
>> --
>> Marco Sinhoreli
>>
>
>
>
> --
> Marco Sinhoreli
>



-- 
Marco Sinhoreli

