On Thu, 10 Aug 2017 13:02:33 -0400
Waiman Long wrote:
> The lockdep code had reported the following unsafe locking scenario:
>
>        CPU0                    CPU1
>
> lock(s_active#228);
>
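The quoted lockdep report is truncated above. As an illustration only (not the
code from the report), the shape lockdep warns about here is the classic ABBA
ordering, with the kernfs s_active reference playing the role of one of the
two locks:

/*
 * Illustrative ABBA sketch; lock names are placeholders, lock_a standing
 * in for the s_active#228 reference from the report.
 */
#include <linux/mutex.h>

static DEFINE_MUTEX(lock_a);            /* role of s_active#228 */
static DEFINE_MUTEX(lock_b);            /* the lock taken in the other order */

static void cpu0_path(void)
{
        mutex_lock(&lock_a);            /* CPU0 takes A first ... */
        mutex_lock(&lock_b);            /* ... then waits for B */
        mutex_unlock(&lock_b);
        mutex_unlock(&lock_a);
}

static void cpu1_path(void)
{
        mutex_lock(&lock_b);            /* CPU1 takes B first ... */
        mutex_lock(&lock_a);            /* ... then waits for A: deadlock */
        mutex_unlock(&lock_a);
        mutex_unlock(&lock_b);
}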
On 08/07/2017 06:37 AM, Anton Volkov wrote:
> The early device registration made a race possible that can lead to
> disks being allocated with wrong minors.
>
> This patch moves the device registration further down the loop_init
> function to make the race infeasible.
>
> Found by Linux Driver Verification project (linuxtesting.org).
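For context, here is a simplified sketch of the reordering Anton describes,
assuming the registration being moved is the /dev/loop-control misc device in
loop_init(); this is an illustration of the idea, not the actual diff:

/*
 * Simplified sketch of the reordering being described; not the actual diff.
 * The key point: /dev/loop-control (misc_register()) and the block region
 * only become reachable from userspace after max_part/part_shift and the
 * device count are final, so a racing LOOP_CTL_ADD or open cannot allocate
 * a disk with a stale minor layout.
 */
static int __init loop_init_sketch(void)
{
        int i, err, nr;
        unsigned long range;
        struct loop_device *lo;

        /* 1) finish computing part_shift from max_part (validation elided) */
        part_shift = max_part > 0 ? fls(max_part) : 0;

        if (max_loop) {
                nr = max_loop;
                range = max_loop << part_shift;
        } else {
                nr = 8;                         /* assumed built-in default */
                range = 1UL << MINORBITS;
        }

        /* 2) only now register /dev/loop-control (moved down by the patch) */
        err = misc_register(&loop_misc);
        if (err < 0)
                return err;

        if (register_blkdev(LOOP_MAJOR, "loop")) {
                err = -EIO;
                goto misc_out;
        }

        blk_register_region(MKDEV(LOOP_MAJOR, 0), range, THIS_MODULE,
                            loop_probe, NULL, NULL);

        /* 3) pre-create the configured number of devices */
        mutex_lock(&loop_index_mutex);
        for (i = 0; i < nr; i++)
                loop_add(&lo, i);
        mutex_unlock(&loop_index_mutex);

        printk(KERN_INFO "loop: module loaded\n");
        return 0;

misc_out:
        misc_deregister(&loop_misc);
        return err;
}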
I present a lot of information here; hopefully it is not too much.
Perhaps this should be put in a bugzilla. I'm not familiar with CFQ or
other I/O schedulers, so I have been doing more generic debugging.
PROBLEM:
I am observing some undesirable behavior in the CFQ I/O scheduler. I am seeing
On 08/14/2017 02:40 PM, Keith Busch wrote:
> blk_mq_get_request() does not release the caller's queue usage counter
> when allocation fails. The caller still needs to account for its own
> queue usage when it is unable to allocate a request.
Thanks Keith, applied.
--
Jens Axboe
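A minimal sketch of the caller-side rule Keith states, modeled loosely on
blk_mq_alloc_request() of that era (illustrative, not the actual fix): the
caller takes its own q_usage_counter reference with blk_queue_enter() and
must drop it itself even when blk_mq_get_request() returns NULL.

/*
 * Illustrative sketch; blk_mq_get_request() only cleans up the reference
 * it takes internally, so a caller that did blk_queue_enter() must still
 * pair it with blk_queue_exit() when no request could be allocated.
 */
static struct request *example_alloc_request(struct request_queue *q,
                                             unsigned int op, unsigned int flags)
{
        struct blk_mq_alloc_data alloc_data = { .flags = flags };
        struct request *rq;
        int ret;

        ret = blk_queue_enter(q, flags & BLK_MQ_REQ_NOWAIT); /* caller's own usage ref */
        if (ret)
                return ERR_PTR(ret);

        rq = blk_mq_get_request(q, NULL, op, &alloc_data);

        /*
         * Pair our blk_queue_enter() above.  On success the request keeps
         * the queue pinned through the reference taken inside
         * blk_mq_get_request(); on failure nothing else will drop our
         * reference for us.
         */
        blk_queue_exit(q);
        if (!rq)
                return ERR_PTR(-EWOULDBLOCK);

        return rq;
}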
On 8/14/2017 11:40 PM, Keith Busch wrote:
blk_mq_get_request() does not release the caller's queue usage counter
when allocation fails. The caller still needs to account for its own
queue usage when it is unable to allocate a request.
Fixes: 1ad43c0078b7 ("blk-mq: don't leak preempt
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/core.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 4344adff7134..19aa68f1fb4a 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
Looks good,
Reviewed-by: Sagi Grimberg
Rip out all of the controller and queue control-plane code and only
maintain queue alloc/free/start/stop and tagset alloc/free.
Signed-off-by: Sagi Grimberg
---
This patch failed to generate a nice diff :(
drivers/nvme/target/loop.c | 443
This code is replicated across several transports; prepare
to move it to the core.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/rdma.c | 81
1 file changed, 47 insertions(+), 34 deletions(-)
diff --git
We're going to call it from the core, so split the
nr_io_queues setting out to the call-sites.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/rdma.c | 30 +++---
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/rdma.c
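As a rough illustration of what such a call-site could look like (names and
details are assumptions, not the actual diff), the transport picks
nr_io_queues itself and records it on the controller before invoking the
generic queue setup:

/*
 * Hedged sketch only, not the actual patch: the transport call-site decides
 * how many I/O queues it wants and records the result; the generic queue
 * setup then simply consumes ctrl->queue_count.
 */
static int example_set_io_queues(struct nvme_ctrl *ctrl)
{
        unsigned int nr_io_queues;

        /* transport policy: no more I/O queues than online CPUs */
        nr_io_queues = min(ctrl->opts->nr_io_queues, num_online_cpus());

        /*
         * The count would be negotiated with the device here (the core
         * helper nvme_set_queue_count() does this via Set Features); error
         * handling is elided in this sketch.
         */
        if (!nr_io_queues)
                return -ENOMEM;

        ctrl->queue_count = nr_io_queues + 1;   /* +1 for the admin queue */
        return 0;
}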
Handle controller setup (probe), reset and delete in nvme-core and
rip it out of nvme-rdma.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/core.c | 296 +++
drivers/nvme/host/nvme.h | 11 ++
drivers/nvme/host/rdma.c | 290
This is the third part of the attempt to centralize controller reset,
delete and fabrics error recovery in nvme core.
As a reminder, the motivation is to move as much of the duplicated logic
in the various nvme transports into common code as possible.
We strive to have nvme core and fabrics
Rip out the nvme-rdma equivalent.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 103 +++
drivers/nvme/host/fabrics.h | 1 +
drivers/nvme/host/rdma.c    | 114 +++-
3 files changed, 110
The core will eventually call this type of callout
to allocate/stop/free the HW admin queue.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/rdma.c | 81
1 file changed, 55 insertions(+), 26 deletions(-)
diff --git
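To make the direction concrete, here is a hedged sketch of the kind of
callout table the core could eventually drive for the HW admin queue; the
struct and member names below are illustrative assumptions, not the
interface actually proposed in the series:

/*
 * Hypothetical per-transport callouts for the HW admin queue.  The names
 * are made up for this sketch; struct nvme_ctrl comes from nvme.h.
 */
struct example_admin_queue_ops {
        int  (*alloc_admin_queue)(struct nvme_ctrl *ctrl);
        int  (*start_admin_queue)(struct nvme_ctrl *ctrl);
        void (*stop_admin_queue)(struct nvme_ctrl *ctrl);
        void (*free_admin_queue)(struct nvme_ctrl *ctrl);
};

/* How generic core code could drive a transport through such callouts. */
static int example_setup_admin_queue(struct nvme_ctrl *ctrl,
                                     const struct example_admin_queue_ops *ops)
{
        int ret;

        ret = ops->alloc_admin_queue(ctrl);     /* transport allocates HW resources */
        if (ret)
                return ret;

        ret = ops->start_admin_queue(ctrl);     /* e.g. connect/enable the queue */
        if (ret)
                ops->free_admin_queue(ctrl);

        return ret;
}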
We'd like to split the generic part out, so rearrange the code to ease
the split. post_configure will be called after basic controller
configuration and identification have occurred.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/rdma.c | 89
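A hedged sketch of the ordering described above; nvme_enable_ctrl() and
nvme_init_identify() are existing core helpers, while post_configure stands
in for the transport callout and is an assumed name:

/*
 * Illustrative ordering only, not the actual patch: basic controller
 * configuration, then identification, and only then the transport's
 * post_configure callout.
 */
static int example_configure_admin(struct nvme_ctrl *ctrl,
                                   int (*post_configure)(struct nvme_ctrl *))
{
        int ret;

        ret = nvme_enable_ctrl(ctrl, ctrl->cap);  /* basic controller configuration */
        if (ret)
                return ret;

        ret = nvme_init_identify(ctrl);           /* identify controller and namespaces */
        if (ret)
                return ret;

        return post_configure(ctrl);              /* transport-specific follow-up */
}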
We're trying to make admin queue configuration generic, so
move the rdma specifics to the queue allocation (based on
the queue index passed).
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/rdma.c | 37 +
1 file changed, 21 insertions(+),
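Roughly, the idea is that the queue-allocation path looks at the queue index
and applies the admin-queue specifics itself when the index is 0; a
simplified sketch (field names as in the rdma.c of that era, logic heavily
trimmed):

/*
 * Simplified sketch, not the actual diff: queue index 0 is the admin
 * queue, so the allocation path applies its specifics (e.g. fixed depth)
 * without the caller having to special-case it.
 */
static int example_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
                                    int idx, size_t queue_size)
{
        struct nvme_rdma_queue *queue = &ctrl->queues[idx];

        if (idx == 0)
                queue_size = NVME_AQ_DEPTH;     /* admin queue specifics live here now */

        queue->ctrl = ctrl;
        queue->queue_size = queue_size;

        /* ... rdma_create_id(), CQ/QP and buffer setup elided ... */
        return 0;
}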
In theory, all fabric transports can/should use these.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/core.c | 4
drivers/nvme/host/nvme.h | 3 +++
drivers/nvme/host/rdma.c | 29 +++--
3 files changed, 18 insertions(+), 18 deletions(-)
diff