Currently these functions are implemented in the scsi layer, but their
actual place should be the block layer since T10-PI is a general data
integrity feature that is used in the nvme protocol as well.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
drivers/nvme/host/core.c | 23 +--
drivers/nvme/host/nvme.h | 9 +-
drivers/nvme/host/pci.c | 75 +---
3 files changed, 23 insertions(+), 84 deletions(-)
diff --git a/drivers/nvme/host/core.c b/dr
On 7/24/2018 4:54 AM, Martin K. Petersen wrote:
Christoph,
+void blk_integrity_dif_prepare(struct request *rq, u8 protection_type,
+ u32 ref_tag)
+{
Maybe call this blk_t10_pi_prepare?
The rest of these functions have a blk_integrity_ prefix. So either
stick
Currently this function is implemented in the scsi layer, but its
actual place should be the block layer since T10-PI is a general
data integrity feature that is used in the nvme protocol as well.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v1:
- check status of nvme_req instead of converting to BLK_STS
---
drivers/nvme/host/core.c | 18
drivers/nvme/host/nvme.h | 9 +-
drivers/nvme/host/pci.c | 75 +---
3 files ch
.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v1 (Christoph, Martin and Keith comments):
- moved the functions to t10-pi.c
- updated tuple size
- changed local variables scope
- remove/add new lines
---
block/t10-pi.c
On 7/24/2018 4:55 PM, Christoph Hellwig wrote:
+/*
+ * The virtual start sector is the one that was originally submitted
+ * by the block layer. Due to partitioning, MD/DM cloning, etc. the
+ * actual physical start sector is likely to be different. Remap
+ * protection information to match t
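The remap described in the comment above can be sketched in plain C. This is an illustrative model only: the tuple layout uses native-endian fields (the real t10_pi_tuple uses big-endian `__be16`/`__be32`), and the "leave non-matching tags alone" rule is a simplification of the kernel's escape-value handling.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified 8-byte DIF tuple: guard, application tag, reference tag. */
struct pi_tuple {
	uint16_t guard_tag;
	uint16_t app_tag;
	uint32_t ref_tag;
};

/*
 * Remap each interval's reference tag from the virtual start sector
 * (what the submitter used) to the physical start sector (after
 * partition/MD/DM remapping). Tuples whose ref tag does not carry the
 * expected virtual value are left untouched.
 */
static void pi_remap_ref_tags(struct pi_tuple *pi, unsigned int intervals,
			      uint32_t virt_ref, uint32_t phys_ref)
{
	for (unsigned int i = 0; i < intervals; i++) {
		if (pi[i].ref_tag == virt_ref + i)
			pi[i].ref_tag = phys_ref + i;
	}
}
```

The complete path does the inverse remap (physical back to virtual) on READ completion, so the submitter always observes the tags it wrote.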
On 7/24/2018 11:33 PM, Keith Busch wrote:
On Tue, Jul 24, 2018 at 04:33:41PM +0300, Max Gurtovoy wrote:
+void t10_pi_prepare(struct request *rq, u8 protection_type)
+{
+ const int tuple_sz = rq->q->integrity.tuple_size;
+ u32 ref_tag = t10_pi_ref_tag(rq);
+ struct bi
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v2:
- add Reviewed-by tag
changes from v1:
- check status of nvme_req instead of converting to BLK_STS
---
drivers/nvme/host/core.c | 18
drivers/nvme/host/nvme.h | 9 +-
drivers/nvme/host/
.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v2:
- convert comments to kerneldoc format
- removed SCSI specific comment
- fix kmap_atomic/kunmap_atomic addresses
- fix iteration over t10_pi_tuple's
changes fr
Currently this function is implemented in the scsi layer, but its
actual place should be the block layer since T10-PI is a general
data integrity feature that is used in the nvme protocol as well.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by
On 7/25/2018 2:22 PM, Christoph Hellwig wrote:
+ pmap = kmap_atomic(iv.bv_page) + iv.bv_offset;
+ p = pmap;
Maybe:
pmap = p = kmap_atomic(iv.bv_page) + iv.bv_offset;
+ for (j = 0; j < iv.bv_len; j +=
Currently this function is implemented in the scsi layer, but its
actual place should be the block layer since T10-PI is a general
data integrity feature that is used in the nvme protocol as well.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
drivers/nvme/host/core.c | 18
drivers/nvme/host/nvme.h | 9 +-
drivers/nvme/host/pci.c | 75 +---
3 files changed, 20 insertions(+), 82 deletions(-)
diff --git a/drivers/nvme/host/core.c b/dr
.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v3:
- kmap_atomic/kunmap_atomic the same address
- declare pi struct inside the inner for loop
- check "intervals" inside the for loop condition
changes from v2:
On 7/27/2018 6:38 PM, Jens Axboe wrote:
On 7/27/18 9:21 AM, Jens Axboe wrote:
On 7/25/18 9:46 AM, Max Gurtovoy wrote:
Currently this function is implemented in the scsi layer, but its
actual place should be the block layer since T10-PI is a general
data integrity feature that is us
Currently this function is implemented in the scsi layer, but its
actual place should be the block layer since T10-PI is a general
data integrity feature that is used in the nvme protocol as well.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Signed-off-by
.
Suggested-by: Christoph Hellwig
Cc: Jens Axboe
Cc: Martin K. Petersen
Reviewed-by: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v4:
- added Martin's Reviewed-by.
changes from v3:
- kmap_atomic/kunmap_atomic the same address
- declare pi struct inside the inner for
Reviewed-by: Martin K. Petersen
Acked-by: Keith Busch
Signed-off-by: Max Gurtovoy
---
changes from v4:
- Added Martin's and Keith's signatures
---
drivers/nvme/host/core.c | 18
drivers/nvme/host/nvme.h | 9 +-
drivers/nvme/h
Hi Sagi,
did you have a chance to look at Israel's and my fixes that we
attached to the first thread?
there are a few issues with this approach. For example, in case you don't
have a "free" CPU in the mask for Qi and you take a CPU from the Qi+j mask.
Also in case we have a non-symmetrical affinity
On 11/10/2018 5:13 PM, Jens Axboe wrote:
A polled queue doesn't trigger interrupts, so it's always safe
to grab the queue lock without disabling interrupts.
Jens,
can you share the performance gain from this change?
Cc: Keith Busch
Cc: linux-n...@lists.infradead.org
Signed-o
On 3/13/2019 7:55 PM, Christoph Hellwig wrote:
From: James Smart
For some nvme commands, when issued by the nvme core layer, there
is an internal buffer which can cause blk_rq_payload_bytes() to
return a non-zero value yet there is no actual/real command payload
and sg list. An example is the
On 3/7/2019 3:56 AM, Sagi Grimberg wrote:
net_dim.h lib exposes an implementation of the DIM algorithm for
dynamically-tuned interrupt moderation for networking interfaces.
We need the same behavior for any block CQ. The main motivation is
to benefit from maximized completion rate and re
On 3/18/2019 11:24 AM, Yamin Friedman wrote:
On 3/14/2019 1:45 PM, Max Gurtovoy wrote:
On 3/7/2019 3:56 AM, Sagi Grimberg wrote:
net_dim.h lib exposes an implementation of the DIM algorithm for
dynamically-tuned interrupt moderation for networking interfaces.
We need the same behavior for
On 3/18/2019 1:08 PM, Max Gurtovoy wrote:
On 3/18/2019 11:24 AM, Yamin Friedman wrote:
On 3/14/2019 1:45 PM, Max Gurtovoy wrote:
On 3/7/2019 3:56 AM, Sagi Grimberg wrote:
net_dim.h lib exposes an implementation of the DIM algorithm for
dynamically-tuned interrupt moderation for networking
On 3/18/2019 11:34 PM, Sagi Grimberg wrote:
As we discussed, let's check with RDMA maintainers if it's better to
extend alloc_cq API or create alloc_cq_dim API function.
Sagi/Christoph,
how about adding a module param per ULP? As we use register_always
today, create a use_dimm module param
_GPL(blk_mq_rdma_map_queues);
Otherwise, Looks good.
Reviewed-by: Max Gurtovoy
Any feedback is welcome.
Hi Sagi,
the patchset looks good and of course we can add support for more
drivers in the future.
Have you run any performance testing with the nvmf initiator?
Sagi Grimberg (6):
mlx5: convert to generic pci_alloc_irq_vectors
mlx5: move affinity hints assig
2142.8K/2152.2K 1395.5K/1374.2K
Signed-off-by: Max Gurtovoy
---
block/blk-mq-cpumap.c | 68 -
1 files changed, 22 insertions(+), 46 deletions(-)
diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 8e61e86..2cca4fc 100644
On 6/28/2017 5:38 PM, Sagi Grimberg wrote:
Hi Max,
Hi Sagi,
This patch performs sequential mapping between CPUs and queues.
In case the system has more CPUs than HWQs then there are still
CPUs to map to HWQs. In hyperthreaded system, map the unmapped CPUs
and their siblings to the same HW
On 6/28/2017 5:58 PM, Sagi Grimberg wrote:
+static int cpu_to_queue_index(unsigned int nr_queues, const int cpu,
+ const struct cpumask *online_mask)
{
-return cpu * nr_queues / nr_cpus;
+/*
+ * Non online CPU will be mapped to queue index 0.
+ */
+if (!
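The mapping change discussed in this thread can be sketched as follows. Both helper names are hypothetical, and the modulo wrap-around is a deliberate simplification of the sibling-aware sequential mapping in the actual patch:

```c
#include <assert.h>

/* Old blk-mq mapping: spreads CPUs across queues by scaling. */
static int cpu_to_queue_naive(int nr_queues, int nr_cpus, int cpu)
{
	return cpu * nr_queues / nr_cpus;
}

/*
 * Sequential mapping sketch: the first nr_queues CPUs each get their
 * own queue, and the remaining CPUs wrap around, so in a hyperthreaded
 * system thread siblings with adjacent IDs tend to share a HW queue.
 */
static int cpu_to_queue_seq(int nr_queues, int cpu)
{
	return cpu % nr_queues;
}
```

With 8 CPUs and 4 queues the naive mapping sends CPUs 0 and 1 both to queue 0, while the sequential mapping gives CPUs 0..3 their own queues before wrapping.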
336.6K/1355.1K/1361.6K
16k  690K/690K/691K
32k  348K/348K/348K
64k  174K/174K/174K
128k 87K/87K/87K
My conclusion is that Sagi's patch is correct (although we see a little
bit less performance: 100K-200K less for small block sizes), so you can add:
Tested-by: Max Gurtovoy
Nevertheless
On 11/21/2018 6:23 PM, Christoph Hellwig wrote:
Hi all,
this series optimizes a few bits in the block layer and nvme code
related to polling.
It starts by moving the queue types recently introduced entirely into
the block layer instead of requiring an indirect call for them.
It then switches
On 11/15/2018 7:16 PM, Sagi Grimberg wrote:
connect_work and err_work will be reused by nvme-tcp, so
move those into nvme_ctrl for rdma and fc to share.
Signed-off-by: Sagi Grimberg
looks good (and I see that more sharing should be made in the future as
well :) )
Reviewed-by: Max
sure regarding the INVALID_PARAM rc.
maybe use NVME_SC_INTERNAL ?
otherwise,
looks fine,
Reviewed-by: Max Gurtovoy
},
{ NVMF_TRTYPE_LOOP, "loop" },
};
looks good,
Reviewed-by: Max Gurtovoy
On 11/17/2018 10:15 PM, David Miller wrote:
From: Sagi Grimberg
Date: Thu, 15 Nov 2018 09:16:22 -0800
+static unsigned nvmet_tcp_recv_budget = 8;
+module_param_named(recv_budget, nvmet_tcp_recv_budget, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(recv_budget, "recvs budget");
+
+static unsigne
+static enum blk_eh_timer_return
+nvme_tcp_timeout(struct request *rq, bool reserved)
+{
+ struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+ struct nvme_tcp_ctrl *ctrl = req->queue->ctrl;
+ struct nvme_tcp_cmd_pdu *pdu = req->pdu;
+
+ dev_dbg(ctrl->ctrl.device,
+ "queue %d
On 11/27/2018 9:48 AM, Sagi Grimberg wrote:
This looks odd. It's not really the timeout handler's job to
call nvme_end_request here.
Well.. if we are not yet LIVE, we will not trigger error
recovery, which means nothing will complete this command so
something needs to do it...
I think that
hi Sagi,
+static inline void nvmet_tcp_put_cmd(struct nvmet_tcp_cmd *cmd)
+{
+ if (unlikely(cmd == &cmd->queue->connect))
+ return;
if you don't return the connect cmd to the list, please don't add it to it in
the first place (during alloc_cmd). And if you use it once, we migh
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index c81b40e..c290de0 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -322,6 +322,22 @@ int blk_mq_tagset_iter(struct blk_mq_tag_set
*set, void *data,
}
}
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ s
On 10/23/2017 5:51 PM, Christoph Hellwig wrote:
This helper allows stealing the uncompleted bios from a request so
that they can be reissued on another path.
Signed-off-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
---
block/blk-core.c | 20
include/l
On 11/9/2017 5:20 PM, Tony Yang wrote:
Hi, All
I downloaded the nvme with multipath kernel, The kernel version is
4.14, I encountered a problem, I use Mellanox connectx-3 infiniband
driver. Because the 4.14 kernel version is too new to install
infiniband driver, does anyone encounter with m
On 7/2/2017 2:56 PM, Sagi Grimberg wrote:
On 02/07/17 13:45, Max Gurtovoy wrote:
On 6/30/2017 8:26 PM, Jens Axboe wrote:
Hi Max,
Hi Jens,
I remembered you reporting this. I think this is a regression introduced
with the scheduling, since ->rqs[] isn't static anymore. ->
On 7/3/2017 3:03 PM, Ming Lei wrote:
On Mon, Jul 03, 2017 at 01:07:44PM +0300, Sagi Grimberg wrote:
Hi Ming,
Yeah, the above change is correct, for any canceling requests in this
way we should use blk_mq_quiesce_queue().
I still don't understand why blk_mq_flush_busy_ctxs should hit a NULL
On 7/5/2017 10:59 AM, Johannes Thumshirn wrote:
On Wed, Jun 28, 2017 at 03:44:40PM +0300, Max Gurtovoy wrote:
This patch performs sequential mapping between CPUs and queues.
In case the system has more CPUs than HWQs then there are still
CPUs to map to HWQs. In hyperthreaded system, map the
In case we use shared tags feature, blk_mq_alloc_tag_set might fail
during module initialization. Check the return value and default to run
without shared tag set before continuing. Also move the tagset
initialization process after defining the number of submission queues.
Signed-off-by: Max
On 7/6/2017 5:02 PM, Jens Axboe wrote:
On 07/06/2017 07:24 AM, Max Gurtovoy wrote:
In case we use shared tags feature, blk_mq_alloc_tag_set might fail
during module initialization. Check the return value and default to run
without shared tag set before continuing. Also move the tagset
In case we use shared tags feature, blk_mq_alloc_tag_set might fail
during module initialization. In that case, fail the load with the
suitable error code. Also move the tagset initialization process after
defining the number of submission queues.
Signed-off-by: Max Gurtovoy
---
Changes from v1
t ignore tagset allocation failures
the nvme_dev_add() function silently ignores failures.
In case blk_mq_alloc_tag_set fails, we hit NULL deref while
calling blk_mq_init_queue during nvme_alloc_ns with tagset == NULL.
Instead, we'll not issue the scan_work in case tagset allocation
failed and leave t
pt counter/q_usage_counter when
allocating rq failed")
Reported-by: Max Gurtovoy
Signed-off-by: Keith Busch
---
tested with 4.13-rc5+ using the following commands in a loop:
1. modprobe nvme
2. sleep 10
3. modprobe -r nvme
Looks good,
Tested-by: Max Gurtovoy
++--
6 files changed, 27 insertions(+), 13 deletions(-)
The series looks good to me,
Reviewed-by: Max Gurtovoy
BTW, if we talking about the reinit_tagset, can you explain the
motivation for dereg_mr and allocating a new MR for the RDMA transport layer?
We don't do it in iSER/SRP, so I wonder wh
hi all,
is there a way to drain a blk-mq based request queue (similar to
blk_drain_queue for non MQ) ?
I'm trying to fix the following situation:
Running DM-multipath over NVMEoF/RDMA block devices, toggling the switch
ports during traffic using fio and making sure the traffic never fails.
when t
On 2/22/2018 4:59 AM, Ming Lei wrote:
Hi Max,
Hi Ming,
On Tue, Feb 20, 2018 at 11:56:07AM +0200, Max Gurtovoy wrote:
hi all,
is there a way to drain a blk-mq based request queue (similar to
blk_drain_queue for non MQ) ?
Generally speaking, blk_mq_freeze_queue() should be fine to drain
(+)
Looks good,
Reviewed-by: Max Gurtovoy
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
From: Chaitanya Kulkarni
Change the return value for nvmet_add_async_event().
This change is needed for the target passthru code to generate async
events.
As a standalone commit it's not clear what its purpose is.
Please add some extra ex
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
This function will be needed by the upcoming passthru code.
Same here. As a standalone commit I can't take a lot from here.
Maybe should be squashed ?
[chaitanya.kulka...@wdc.com: this was factored out of a patch
originally authored by Chaita
, including executing Vendor Unique Commands.
+
config NVME_TARGET_LOOP
tristate "NVMe loopback device support"
depends on NVME_TARGET
Looks good,
Reviewed-by: Max Gurtovoy
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
nvme_ctrl_get_by_path() is analogous to blkdev_get_by_path() except it
gets a struct nvme_ctrl from the path to its char dev (/dev/nvme0).
It makes use of filp_open() to open the file and uses the private
data to obtain a pointer to the struct nvme_ct
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
This patch adds helper functions which are used in the NVMeOF configfs
when the user is configuring the passthru subsystem. Here we ensure
that only one subsys is assigned to each nvme_ctrl by using an xarray
on the cntlid.
[chaitanya.kulka...@wdc.co
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
This patch rejects any new connection to the passthru-ctrl if this
controller is already connected to a different host. At the time of
allocating the controller we check if the subsys associated with
the passthru ctrl is already connected to a host an
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
When CONFIG_NVME_TARGET_PASSTHRU is enabled, a 'passthru' directory will
be added to each subsystem. The directory is similar to a namespace
and has two attributes: device_path and enable. The user must set the
path to the nvme controller's char device and write
On 8/15/2019 7:06 PM, Logan Gunthorpe wrote:
On 2019-08-15 6:36 a.m., Max Gurtovoy wrote:
On 8/2/2019 2:45 AM, Logan Gunthorpe wrote:
This patch rejects any new connection to the passthru-ctrl if this
controller is already connected to a different host. At the time of
allocating the
On 8/22/2019 3:09 AM, Sagi Grimberg wrote:
I don't understand why we don't limit a regular ctrl to single access
and we do it for the PT ctrl.
I guess the block layer helps to sync between multiple accesses in
parallel, but we can do it as well.
Also, let's say you limit the access to this
During nvme_loop_queue_rq error flow, one must call nvme_cleanup_cmd since
it's symmetric to nvme_setup_cmd.
Signed-off-by: Max Gurtovoy
---
drivers/nvme/target/loop.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/l
Signed-off-by: Max Gurtovoy
---
block/blk-core.c | 6 ++
block/blk-mq.c | 4
block/t10-pi.c | 11 ---
drivers/nvme/host/core.c | 28 +++-
drivers/scsi/sd.c| 28 ++--
drivers/scsi/sd.h| 1
Make the error flow symmetric to the good flow by moving the call to
nvme_cleanup_cmd from nvme_rdma_unmap_data function.
Signed-off-by: Max Gurtovoy
---
drivers/nvme/host/rdma.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers
The nvme_cleanup_cmd function should be called to avoid resource leakage
(it's the opposite to nvme_setup_cmd). Fix the error flow during command
submission and also fix the missing call in command completion.
Signed-off-by: Max Gurtovoy
---
drivers/nvme/host/tcp.c | 11 +--
1
On 9/4/2019 8:49 AM, Christoph Hellwig wrote:
On Tue, Sep 03, 2019 at 01:21:59PM -0600, Jens Axboe wrote:
On 9/3/19 1:11 PM, Sagi Grimberg wrote:
+ if (blk_integrity_rq(req) && req_op(req) == REQ_OP_READ &&
+ error == BLK_STS_OK)
+ t10_pi_complete(req,
+
On 9/4/2019 8:54 AM, Christoph Hellwig wrote:
On Tue, Sep 03, 2019 at 12:15:48PM -0700, Sagi Grimberg wrote:
The nvme_cleanup_cmd function should be called to avoid resource leakage
(it's the opposite to nvme_setup_cmd). Fix the error flow during command
submission and also fix the missing cal
Signed-off-by: Max Gurtovoy
---
changes from v1:
- separate from nvme_cleanup command patches
- introduce blk_integrity_interval_shift to avoid div in fast path
---
block/blk-core.c | 6 ++
block/blk-mq.c | 4
block/blk-settings.c | 1 +
block/t10-pi.c
Only type 1 and type 2 have a reference tag by definition.
Suggested-by: Keith Busch
Signed-off-by: Max Gurtovoy
---
block/t10-pi.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/block/t10-pi.c b/block/t10-pi.c
index 7d9a151..088c3c7 100644
--- a/block/t10-pi.c
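The rule behind this patch ("only Type 1 and Type 2 have a reference tag") boils down to a type check before any remap. A minimal sketch, with a hypothetical helper name and enum values matching the T10 DIF types:

```c
#include <assert.h>
#include <stdbool.h>

/* The four T10 DIF protection types. */
enum pi_type {
	PI_TYPE0 = 0,	/* metadata without protection information */
	PI_TYPE1 = 1,	/* guard + app tag + ref tag */
	PI_TYPE2 = 2,	/* guard + app tag + ref tag */
	PI_TYPE3 = 3,	/* no ref tag checking */
};

/*
 * Only Type 1 and Type 2 define a reference tag by definition, so only
 * those types may have their ref tags remapped; touching the metadata
 * of Type 0 or Type 3 would corrupt it.
 */
static bool pi_needs_ref_tag_remap(enum pi_type type)
{
	return type == PI_TYPE1 || type == PI_TYPE2;
}
```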
Use block layer definition instead of re-defining it with the same
values.
Suggested-by: Christoph Hellwig
Signed-off-by: Max Gurtovoy
---
drivers/nvme/host/core.c | 12 ++--
include/linux/nvme.h | 3 ---
2 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/nvme
Signed-off-by: Max Gurtovoy
---
changes from v2:
- remove local variable for protection_type
- remove remapping between NVMe T10 definition to blk definition
- added patches 2/3 and 3/3
- remove pi_type from ns structure
changes from v1:
- separate from nvme_cleanup command patches
- introduce
On 9/5/2019 11:52 PM, Sagi Grimberg wrote:
Use block layer definition instead of re-defining it with the same
values.
The nvme_setup_rw is fine, but nvme_init_integrity gets values from
the controller id structure so I think it will be better to stick with
the enums that are referenced in t
Use block layer definition instead of re-defining it with the same
values.
Suggested-by: Christoph Hellwig
Reviewed-by: Christoph Hellwig
Signed-off-by: Max Gurtovoy
---
changes from v3:
- added Reviewed-by signature
---
drivers/nvme/host/core.c | 12 ++--
include/linux/nvme.h
Signed-off-by: Max Gurtovoy
---
changes from v3:
- fix > 80 liner
- move the protection_type assignment into nvme_update_disk_info
- added a comment regarding dps and DIF type values
- drop redundant externs from t10-pi.h
changes from v2:
- remove local variable for protection_t
Only type 1 and type 2 have a reference tag by definition.
Suggested-by: Keith Busch
Signed-off-by: Max Gurtovoy
---
changes from v3:
- added blk_integrity_need_remap helper
---
block/t10-pi.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/block/t10-pi.c
On 9/9/2019 5:49 AM, Martin K. Petersen wrote:
Keith,
At least for nvme, type 0 means you have meta data but not for
protection information,
Yeah, NVMe does not support DIX Type 0.
so remapping the place the where reference tag exists for other PI
types corrupts the metadata.
But the devi
On 9/9/2019 5:21 AM, Martin K. Petersen wrote:
Hi Max!
Hi Martin,
@@ -309,7 +308,7 @@ static void sd_set_flush_flag(struct scsi_disk *sdkp)
{
struct scsi_disk *sdkp = to_scsi_disk(dev);
- return sprintf(buf, "%u\n", sdkp->protection_type);
+ return sprintf(buf, "%u\n"
On 9/10/2019 5:29 AM, Martin K. Petersen wrote:
Max,
Hi Martin,
thanks for the great explanation !
maybe we can add profiles to type0 and type2 in the future and have
more readable code.
It's a deliberate feature that we treat DIX Type 0, 1, and 2 the
same. It's very common to mix and m
On 9/11/2019 4:16 AM, Martin K. Petersen wrote:
Max,
I guess Type 1 and Type 3 mirrors can work because Type 3 doesn't have
a ref tag, right ?
It will work but you'll lose ref tag checking on the Type 3 side of the
mirror. So not exactly desirable. And in our experience, the ref tag is
hugel
Signed-off-by: Max Gurtovoy
---
block/t10-pi.c | 28 ++--
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/block/t10-pi.c b/block/t10-pi.c
index 0c00946..7fed587 100644
--- a/block/t10-pi.c
+++ b/block/t10-pi.c
@@ -27,7 +27,7 @@ static __be16 t10_pi_ip_fn
.complete_fn callbacks within the integrity profile that each type
can implement according to its needs.
Suggested-by: Christoph Hellwig
Suggested-by: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v4:
- added .prepare_fn and .complete_fn callbacks
- removed patches 2/3 and 3/3 from
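The .prepare_fn/.complete_fn approach mentioned in the changelog can be sketched as a struct of per-type hooks. The field names mirror the description above, but the struct shape, signatures, and stub bodies are simplifications for illustration, not the kernel's blk_integrity_profile:

```c
#include <assert.h>

struct request;	/* opaque in this sketch */

/*
 * Hypothetical per-type integrity profile: instead of one switch over
 * the protection type in the core, each type registers its own
 * prepare/complete behavior.
 */
struct integrity_profile {
	const char *name;
	int (*prepare_fn)(struct request *rq);
	int (*complete_fn)(struct request *rq);
};

/* Type 1/2: remap ref tags (stubbed to return a marker value here). */
static int type1_prepare(struct request *rq)  { (void)rq; return 1; }
static int type1_complete(struct request *rq) { (void)rq; return 1; }

/* Type 3 has no ref tag, so its hooks are no-ops. */
static int type3_prepare(struct request *rq)  { (void)rq; return 0; }
static int type3_complete(struct request *rq) { (void)rq; return 0; }

static const struct integrity_profile t10_pi_type1 = {
	.name = "T10-DIF-TYPE1",
	.prepare_fn = type1_prepare,
	.complete_fn = type1_complete,
};

static const struct integrity_profile t10_pi_type3 = {
	.name = "T10-DIF-TYPE3",
	.prepare_fn = type3_prepare,
	.complete_fn = type3_complete,
};
```

The core then calls `profile->prepare_fn(rq)` unconditionally, and the no-op Type 3 hooks replace the explicit type checks.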
Reviewed-by: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v5:
- added Reviewed-by signature
---
block/t10-pi.c | 28 ++--
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/block/t10-pi.c b/block/t10-pi.c
index 0c00946..7fed587 100644
.complete_fn callbacks within the integrity profile that each type
can implement according to its needs.
Suggested-by: Christoph Hellwig
Suggested-by: Martin K. Petersen
Reviewed-by: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v5:
- removed extra new lines
- use q pointer
Replace all hard-coded values with T10_PI_TYPES to make the code more
readable.
Reviewed-by: Christoph Hellwig
Reviewed-by: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v6:
- added Reviewed-by signature
- added commit message
changes from v5:
- added Reviewed-by
.complete_fn callbacks within the integrity profile that each type
can implement according to its needs.
Suggested-by: Christoph Hellwig
Reviewed-by: Christoph Hellwig
Suggested-by: Martin K. Petersen
Reviewed-by: Martin K. Petersen
Signed-off-by: Max Gurtovoy
---
changes from v6:
- added Reviewed
On 9/16/2019 7:12 PM, Jens Axboe wrote:
On 9/16/19 9:44 AM, Max Gurtovoy wrote:
Currently the t10_pi_prepare/t10_pi_complete functions are called during the
NVMe and SCSI layers' command preparation/completion, but their actual
place should be the block layer since T10-PI is a general data
st robot
Signed-off-by: Max Gurtovoy
---
block/t10-pi.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/t10-pi.c b/block/t10-pi.c
index 0c0120a..57f304a 100644
--- a/block/t10-pi.c
+++ b/block/t10-pi.c
@@ -79,6 +79,9 @@ static blk_status_t t10_pi_verify(struct blk_i
On 9/20/2019 9:05 AM, Nathan Chancellor wrote:
On Thu, Sep 19, 2019 at 03:57:19PM +0200, Arnd Bergmann wrote:
Changing the switch() statement to symbolic constants made
the compiler (at least clang-9, did not check gcc) notice that
there is one enum value that is not handled here:
block/t10-p
On 9/22/2019 2:29 AM, Jens Axboe wrote:
On 9/21/19 4:54 PM, Martin K. Petersen wrote:
Jens,
block/t10-pi.c: In function 't10_pi_verify':
block/t10-pi.c:62:3: warning: enumeration value 'T10_PI_TYPE0_PROTECTION'
not handled in switch [-Wswitch]
switch (type) {
to
silence compiler warnings. Seems backwards.
I agree that enough energy was wasted here :)
Attached some proposal to fix this warning.
Let me know if you want me to send it to the mailing list
From 058b2e2da4ada6d27287533a7228abd80de17248 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy
Date: Sun, 22 Sep 20
In this operation set the driver data of the hctx to point to the virtio
block queue. By doing so, we can use this reference in the and reduce
the number of operations in the fast path.
Signed-off-by: Max Gurtovoy
---
drivers/block/virtio_blk.c | 42 --
1
On 01/08/2024 18:13, Michael S. Tsirkin wrote:
On Thu, Aug 01, 2024 at 06:11:37PM +0300, Max Gurtovoy wrote:
In this operation set the driver data of the hctx to point to the virtio
block queue. By doing so, we can use this reference in the and reduce
in the ?
sorry for the typo
On 01/08/2024 18:29, Michael S. Tsirkin wrote:
On Thu, Aug 01, 2024 at 06:17:21PM +0300, Max Gurtovoy wrote:
On 01/08/2024 18:13, Michael S. Tsirkin wrote:
On Thu, Aug 01, 2024 at 06:11:37PM +0300, Max Gurtovoy wrote:
In this operation set the driver data of the hctx to point to the virtio
On 01/08/2024 20:56, Stefan Hajnoczi wrote:
On Thu, Aug 01, 2024 at 06:56:44PM +0300, Max Gurtovoy wrote:
On 01/08/2024 18:43, Michael S. Tsirkin wrote:
On Thu, Aug 01, 2024 at 06:39:16PM +0300, Max Gurtovoy wrote:
On 01/08/2024 18:29, Michael S. Tsirkin wrote:
On Thu, Aug 01, 2024 at 06
On 03/08/2024 15:39, Michael S. Tsirkin wrote:
On Sat, Aug 03, 2024 at 01:07:27AM +0300, Max Gurtovoy wrote:
On 01/08/2024 20:56, Stefan Hajnoczi wrote:
On Thu, Aug 01, 2024 at 06:56:44PM +0300, Max Gurtovoy wrote:
On 01/08/2024 18:43, Michael S. Tsirkin wrote:
On Thu, Aug 01, 2024 at 06
Set the driver data of the hardware context (hctx) to point directly to
the virtio block queue. This cleanup improves code readability and
reduces the number of dereferences in the fast path.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Max Gurtovoy
---
drivers/block/virtio_blk.c | 42
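The shape of this virtio_blk change can be sketched with stand-in types (the struct and function names below are illustrative, not the real blk-mq/virtio_blk definitions): cache the per-queue pointer once when the hardware context is initialized, so the submission fast path does a single load instead of re-deriving it from the request queue.

```c
#include <assert.h>

/* Illustrative stand-ins for struct blk_mq_hw_ctx and the vblk queue. */
struct virtio_queue { int index; };
struct hw_ctx { void *driver_data; };

/* .init_hctx analogue: runs once per hardware queue at setup time. */
static int sketch_init_hctx(struct hw_ctx *hctx, struct virtio_queue *vq)
{
	hctx->driver_data = vq;	/* cache the queue pointer once */
	return 0;
}

/* .queue_rq analogue: hot path, one dereference reaches the queue. */
static int sketch_queue_rq(struct hw_ctx *hctx)
{
	struct virtio_queue *vq = hctx->driver_data;
	return vq->index;
}
```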
Hi Marek,
On 12/09/2024 9:46, Marek Szyprowski wrote:
Dear All,
On 08.08.2024 00:41, Max Gurtovoy wrote:
Set the driver data of the hardware context (hctx) to point directly to
the virtio block queue. This cleanup improves code readability and
reduces the number of dereferences in the fast