One other experiment: running blkdiscard directly against the RBD block
device completely clears it, to the point where the rbd diff method
reports 0 blocks utilized.  So to summarize:

- ESXi sending UNMAP through SCST does not seem to release storage from
the RBD device (this is with the BLOCKIO handler, which is supposed to
support UNMAP)

- blkdiscard run directly against the RBD device does release the space
(see the sketch below)
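
For reference, here is roughly how this was checked (the pool/image and
device names below are examples only; the awk one-liner is the usage-check
method from the ceph.com article referenced further down in this thread):

  # Real space consumed by the image, per the rbd diff method
  rbd diff rbd/lun1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'

  # Discard every block on the mapped RBD device (destructive)
  blkdiscard /dev/rbd0

  # The same usage check now reports 0
  rbd diff rbd/lun1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'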

--
Alex Gorbachev
Storcium


On Wed, Jul 27, 2016 at 11:55 AM, Alex Gorbachev <a...@iss-integration.com> wrote:
> Hi Vlad,
>
> On Mon, Jul 25, 2016 at 10:44 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>> Hi,
>>
>> I would suggest rebuilding SCST in debug mode (after "make 2debug"), then,
>> before issuing the UNMAP command, enabling "scsi" and "debug" logging for
>> the scst and scst_vdisk modules:
>>
>>   echo add scsi >/sys/kernel/scst_tgt/trace_level
>>   echo "add scsi" >/sys/kernel/scst_tgt/handlers/vdisk_fileio/trace_level
>>   echo "add debug" >/sys/kernel/scst_tgt/handlers/vdisk_fileio/trace_level
>>
>> Then check in the kernel logs whether, for the UNMAP command,
>> vdisk_unmap_range() reports calling blkdev_issue_discard().
>>
>> To double-check, you might also add a trace statement just before the
>> blkdev_issue_discard() call in vdisk_unmap_range().
>
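> For reference, the trace flags were enabled roughly along these lines (a
> sketch; since these devices are exported with BLOCKIO, the vdisk_blockio
> handler directory is used here instead of vdisk_fileio, assuming its
> trace_level attribute behaves the same, as both handlers come from the
> scst_vdisk module):
>
>   echo "add scsi" >/sys/kernel/scst_tgt/trace_level
>   echo "add scsi" >/sys/kernel/scst_tgt/handlers/vdisk_blockio/trace_level
>   echo "add debug" >/sys/kernel/scst_tgt/handlers/vdisk_blockio/trace_level
>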
> With the debug settings on, I am seeing the below output - this means
> that discard is being sent to the backing (RBD) device, correct?
>
> Including the ceph-users list to see if there is a reason RBD is not
> processing this discard/unmap.
>
> Thank you,
> --
> Alex Gorbachev
> Storcium
>
> Jul 26 08:23:38 e1 kernel: [  858.324715] [20426]: scst:
> scst_cmd_done_local:2272:cmd ffff88201b552940, status 0, msg_status 0,
> host_status 0, driver_status 0, resp_data_len 0
> Jul 26 08:23:38 e1 kernel: [  858.324740] [20426]:
> vdisk_parse_offset:2930:cmd ffff88201b552c00, lba_start 0, loff 0,
> data_len 24
> Jul 26 08:23:38 e1 kernel: [  858.324743] [20426]:
> vdisk_unmap_range:3810:Unmapping lba 61779968 (blocks 8192)
> Jul 26 08:23:38 e1 kernel: [  858.336218] [20426]: scst:
> scst_cmd_done_local:2272:cmd ffff88201b552c00, status 0, msg_status 0,
> host_status 0, driver_status 0, resp_data_len 0
> Jul 26 08:23:38 e1 kernel: [  858.336232] [20426]:
> vdisk_parse_offset:2930:cmd ffff88201b552ec0, lba_start 0, loff 0,
> data_len 24
> Jul 26 08:23:38 e1 kernel: [  858.336234] [20426]:
> vdisk_unmap_range:3810:Unmapping lba 61788160 (blocks 8192)
> Jul 26 08:23:38 e1 kernel: [  858.351446] [20426]: scst:
> scst_cmd_done_local:2272:cmd ffff88201b552ec0, status 0, msg_status 0,
> host_status 0, driver_status 0, resp_data_len 0
> Jul 26 08:23:38 e1 kernel: [  858.351468] [20426]:
> vdisk_parse_offset:2930:cmd ffff88201b553180, lba_start 0, loff 0,
> data_len 24
> Jul 26 08:23:38 e1 kernel: [  858.351471] [20426]:
> vdisk_unmap_range:3810:Unmapping lba 61796352 (blocks 8192)
> Jul 26 08:23:38 e1 kernel: [  858.373407] [20426]: scst:
> scst_cmd_done_local:2272:cmd ffff88201b553180, status 0, msg_status 0,
> host_status 0, driver_status 0, resp_data_len 0
> Jul 26 08:23:38 e1 kernel: [  858.373422] [20426]:
> vdisk_parse_offset:2930:cmd ffff88201b553440, lba_start 0, loff 0,
> data_len 24
> Jul 26 08:23:38 e1 kernel: [  858.373424] [20426]:
> vdisk_unmap_range:3810:Unmapping lba 61804544 (blocks 8192)
>
> Jul 26 08:24:04 e1 kernel: [  884.170201] [6290]: scst_cmd_init_done:829:CDB:
> Jul 26 08:24:04 e1 kernel: [  884.170202]
> (h)___0__1__2__3__4__5__6__7__8__9__A__B__C__D__E__F
> Jul 26 08:24:04 e1 kernel: [  884.170205]    0: 42 00 00 00 00 00 00
> 00 18 00 00 00 00 00 00 00   B...............
> Jul 26 08:24:04 e1 kernel: [  884.170268] [6290]: scst:
> scst_parse_cmd:1312:op_name <UNMAP> (cmd ffff88201b556300),
> direction=1 (expected 1, set yes), lba=0, bufflen=24, data len 24,
> out_bufflen=0, (expected len data 24, expected len DIF 0, out expected
> len 0), flags=0x80260, internal 0, naca 0
> Jul 26 08:24:04 e1 kernel: [  884.173983] [20426]: scst:
> scst_cmd_done_local:2272:cmd ffff88201b556b40, status 0, msg_status 0,
> host_status 0, driver_status 0, resp_data_len 0
> Jul 26 08:24:04 e1 kernel: [  884.173998] [20426]:
> vdisk_parse_offset:2930:cmd ffff88201b556e00, lba_start 0, loff 0,
> data_len 24
> Jul 26 08:24:04 e1 kernel: [  884.174001] [20426]:
> vdisk_unmap_range:3810:Unmapping lba 74231808 (blocks 8192)
> Jul 26 08:24:04 e1 kernel: [  884.174224] [6290]: scst:
> scst_cmd_init_done:828:NEW CDB: len 16, lun 16, initiator
> iqn.1995-05.com.vihl2.ibft, target iqn.2008-10.net.storcium:scst.1,
> queue_type 1, tag 4005936 (cmd ffff88201b5565c0, sess
> ffff880ffa2c0000)
> Jul 26 08:24:04 e1 kernel: [  884.174227] [6290]: scst_cmd_init_done:829:CDB:
> Jul 26 08:24:04 e1 kernel: [  884.174228]
> (h)___0__1__2__3__4__5__6__7__8__9__A__B__C__D__E__F
> Jul 26 08:24:04 e1 kernel: [  884.174231]    0: 42 00 00 00 00 00 00
> 00 18 00 00 00 00 00 00 00   B...............
> Jul 26 08:24:04 e1 kernel: [  884.174256] [6290]: scst:
> scst_parse_cmd:1312:op_name <UNMAP> (cmd ffff88201b5565c0),
> direction=1 (expected 1, set yes), lba=0, bufflen=24, data len 24,
> out_bufflen=0, (expected len data 24, expected len DIF 0, out expected
> len 0), flags=0x80260, internal 0, naca 0
>
>
>
>
>>
>> Alex Gorbachev wrote on 07/23/2016 08:48 PM:
>>> Hi Nick, Vlad, SCST Team,
>>>
>>>>>> I have been looking at using the rbd-nbd tool, so that the caching is
>>>>>> provided by librbd, and then use BLOCKIO with SCST. This will however
>>>>>> need some work on the SCST resource agents to ensure the librbd cache
>>>>>> is invalidated on ALUA state change.
>>>>>>
>>>>>> The other thing I have seen is this:
>>>>>>
>>>>>> https://lwn.net/Articles/691871/
>>>>>>
>>>>>> Which may mean FILEIO will support thin provisioning sometime in the
>>>>>> future???
>>>
>>> I have run this configuration (RBD with BLOCKIO via SCST) through
>>> VMware's UNMAP test.  Basically, it runs:
>>>
>>> esxcli storage vmfs unmap -l <datastore>
>>>
>>> But the space is not reclaimed from the RBD device, as can easily be
>>> seen by polling the actual usage (using the method in
>>> http://ceph.com/planet/real-size-of-a-ceph-rbd-image/).
>>>
>>> The SCST thin_provisioned attribute in sysfs shows 1.
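>>>
>>> For reference, the attribute was read from the device's sysfs entry,
>>> assuming the usual SCST sysfs layout (the device name here is only an
>>> example):
>>>
>>>   cat /sys/kernel/scst_tgt/devices/lun1/thin_provisioned
>>>   1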
>>>
>>> Something appears not to be passing the reclaim action through to RBD.
>>>
>>> Thank you,
>>> Alex
>>
>>