On 4/16/2015 4:46 PM, Akinobu Mita wrote:
2015-04-16 17:52 GMT+09:00 Sagi Grimberg <sa...@dev.mellanox.co.il>:
On 4/15/2015 7:10 PM, Martin K. Petersen wrote:

"Sagi" == Sagi Grimberg <sa...@dev.mellanox.co.il> writes:


Since commit 436f4a0a ("loopback: Add fabric_prot_type attribute
support"), when a WRITE_SAME command with WRPROTECT=0 is executed,
sbc_dif_generate() is called but cmd->t_prot_sg is NULL, as the block
layer didn't allocate it for WRITE_SAME.


Sagi> Actually this is a bug. Why didn't the initiator allocate
Sagi> integrity meta-data for WRITE_SAME? Looking at the code it looks
Sagi> like it should.

We don't issue WRITE SAME with PI so there is no prot SGL.


Is there a specific reason why we don't?

This affects not only WRITE SAME requests from the block device but
also READ/WRITE requests with PROTECT=0 issued via SG_IO.


This is specific to loopback, which uses target_submit_cmd_map_sgls().
Other fabrics would allocate SGLs per I/O and the core would allocate
protection SGLs as well.

So isn't it appropriate to allocate the prot SGL in
target_write_prot_action() (and mark se_cmd->se_cmd_flags so it is
released at deallocation time)?


I'd say that, given this is specific to loopback, tcm_loop needs to
be fixed... But specifically for WRITE_SAME, I'd be careful about
allocating a single 8-byte protection buffer because, as Martin said,
unlike the data block, the protection field may change from sector to
sector (the ref_tag in Type 1).
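
For reference, the 8-byte PI tuple is a 16-bit guard tag, a 16-bit
app tag and a 32-bit ref tag, all big-endian; for Type 1 the ref tag
carries the low 32 bits of the target LBA, so even though WRITE_SAME
repeats a single data block, every sector needs a different tuple. A
rough userspace illustration (the struct and values below are made up
for illustration, not the kernel's definitions):

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>		/* htons()/htonl() for the big-endian fields */

struct pi_tuple {		/* illustrative 8-byte T10 PI tuple */
	uint16_t guard;		/* CRC16 of the data block */
	uint16_t app_tag;
	uint32_t ref_tag;	/* Type 1: low 32 bits of the LBA */
};

int main(void)
{
	uint64_t lba = 1000;		/* arbitrary starting LBA */
	unsigned int sectors = 4;

	for (unsigned int i = 0; i < sectors; i++) {
		struct pi_tuple t = {
			.guard   = htons(0x1234),	/* same data block, same guard */
			.app_tag = 0,
			.ref_tag = htonl((uint32_t)(lba + i)),	/* differs per sector */
		};
		printf("sector %u: ref_tag 0x%08x\n", i, ntohl(t.ref_tag));
	}
	return 0;
}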

So allocating a single 8-byte buffer would take its toll in the
backend (the iblock backend would need to allocate all the protection
information and add it to the bio anyway, and file/rd would need to do
multiple writes).

It might be better, for the special WRITE_SAME case, to allocate an
8 * sectors protection SGL and set it up (incrementing the ref_tag for
Type 1). This way, the backend code can stay the same (other than
enabling WRITE_SAME with PI in iblock).
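
Roughly along these lines (a userspace sketch with made-up names, not
the target core API; the CRC16 below uses what I believe is the
T10-DIF polynomial, 0x8BB7). Since the data block repeats, the guard
can be computed once and only the ref_tag is advanced while filling
the sectors * 8 byte protection area:

#include <stdint.h>
#include <stdlib.h>
#include <arpa/inet.h>

/* Bitwise CRC16 over the data block, T10-DIF polynomial 0x8BB7. */
static uint16_t crc16_t10(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++)
			crc = (crc & 0x8000) ? (crc << 1) ^ 0x8bb7 : crc << 1;
	}
	return crc;
}

struct pi_tuple {		/* illustrative 8-byte PI tuple, as above */
	uint16_t guard;
	uint16_t app_tag;
	uint32_t ref_tag;
};

/*
 * Fill a sectors * 8 byte protection area for a Type 1 WRITE_SAME of
 * 'block' starting at 'lba': one guard computation, per-sector ref_tag.
 */
static void fill_write_same_pi(struct pi_tuple *prot, unsigned int sectors,
			       const uint8_t *block, size_t block_size,
			       uint64_t lba)
{
	uint16_t guard = htons(crc16_t10(block, block_size));

	for (unsigned int i = 0; i < sectors; i++) {
		prot[i].guard   = guard;			/* identical for all sectors */
		prot[i].app_tag = 0;
		prot[i].ref_tag = htonl((uint32_t)(lba + i));	/* incrementing ref_tag */
	}
}

int main(void)
{
	uint8_t block[512] = { 0 };	/* the single repeated data block */
	unsigned int sectors = 8;
	struct pi_tuple *prot = calloc(sectors, sizeof(*prot));

	if (!prot)
		return 1;
	fill_write_same_pi(prot, sectors, block, sizeof(block), 2048);
	free(prot);
	return 0;
}

In the kernel the filled area would of course live in the protection
SGL handed to the backend, but the per-sector setup above is the gist
of the idea.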

Sagi.