On Fri, Nov 03, 2017 at 02:42:50AM +0000, Bart Van Assche wrote:
> On Fri, 2017-11-03 at 10:12 +0800, Ming Lei wrote:
> > [root@ibclient srp-test]# ./run_tests
> > modprobe: FATAL: Module target_core_mod is in use.
>
> LIO must be unloaded before srp-test software is started.
Yeah, I can make thi
On Fri, 2017-11-03 at 10:12 +0800, Ming Lei wrote:
> [root@ibclient srp-test]# ./run_tests
> modprobe: FATAL: Module target_core_mod is in use.
LIO must be unloaded before srp-test software is started.
Bart.
Hi Laurence,
On Thu, Nov 02, 2017 at 09:16:04PM -0400, Laurence Oberman wrote:
> Hi Ming
> I have used Bart's tests on my test bed; they should run fine.
> Are you using my SRP setup?
Yeah, I am using your SRP setup.
Once the three directories were created, I saw the new failure,
and both ib/srp a
On Fri, 2017-11-03 at 08:15 +0800, Ming Lei wrote:
> > On Thu, Nov 02, 2017 at 11:54:57PM +0000, Bart Van Assche wrote:
> > On Fri, 2017-11-03 at 07:48 +0800, Ming Lei wrote:
> > > Could you please share your srp-tests script? I may find a IB/SRP system
> > > to see if I can reproduce this issue and
On Thu, Nov 02, 2017 at 11:54:57PM +0000, Bart Van Assche wrote:
> On Fri, 2017-11-03 at 07:48 +0800, Ming Lei wrote:
> > Could you please share your srp-tests script? I may find a IB/SRP system
> > to see if I can reproduce this issue and figure out one solution.
>
> Please have a look at https://github.com/bvanassche/srp-test.
On Fri, 2017-11-03 at 07:48 +0800, Ming Lei wrote:
> Could you please share your srp-tests script? I may find a IB/SRP system
> to see if I can reproduce this issue and figure out one solution.
Please have a look at https://github.com/bvanassche/srp-test.
Bart.
On Thu, Nov 02, 2017 at 11:43:55PM +0000, Bart Van Assche wrote:
> On Fri, 2017-11-03 at 07:38 +0800, Ming Lei wrote:
> > On Thu, Nov 02, 2017 at 03:57:05PM +0000, Bart Van Assche wrote:
> > > On Wed, 2017-11-01 at 08:21 -0600, Jens Axboe wrote:
> > > > Fixed that up, and applied these two patches
On Fri, 2017-11-03 at 07:38 +0800, Ming Lei wrote:
> On Thu, Nov 02, 2017 at 03:57:05PM +0000, Bart Van Assche wrote:
> > On Wed, 2017-11-01 at 08:21 -0600, Jens Axboe wrote:
> > > Fixed that up, and applied these two patches as well.
> >
> > Hello Jens,
> >
> > Recently I noticed that a test sys
On Thu, Nov 02, 2017 at 03:57:05PM +0000, Bart Van Assche wrote:
> On Wed, 2017-11-01 at 08:21 -0600, Jens Axboe wrote:
> > Fixed that up, and applied these two patches as well.
>
> Hello Jens,
>
> Recently I noticed that a test system sporadically hangs during boot (Dell
> PowerEdge R720 that bo
We do this by adding a helper that returns the ns_head for a device
node that can be either the per-controller or the per-subsystem block
device node, and otherwise reuse all the existing code.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Sagi Grimberg
Reviewed-by: Joha
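As a rough illustration of the helper described above, here is a userspace sketch: the struct layouts and the `is_multipath_node` discriminator are invented for illustration and are not the kernel's actual fields, but the shape of the lookup is the same — resolve the ns_head no matter which kind of block device node the I/O arrived on.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub structs standing in for the kernel's nvme_ns / nvme_ns_head. */
struct nvme_ns_head { int instance; };
struct nvme_ns { struct nvme_ns_head *head; };

struct gendisk {
    bool is_multipath_node;   /* hypothetical discriminator */
    void *private_data;       /* nvme_ns or nvme_ns_head */
};

/* Return the ns_head regardless of which node type the disk is. */
static struct nvme_ns_head *disk_to_ns_head(struct gendisk *disk)
{
    if (disk->is_multipath_node)
        return (struct nvme_ns_head *)disk->private_data;
    return ((struct nvme_ns *)disk->private_data)->head;
}
```

Both node flavors then funnel into the same namespace-level code path.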
That way we can also poll non-blk-mq queues. Mostly needed for
the NVMe multipath code, but could also be useful elsewhere.
Signed-off-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
---
block/blk-core.c | 11 +++
block/blk-mq.c | 14 +-
drivers/
Hi all,
this series adds support for multipathing, that is accessing nvme
namespaces through multiple controllers to the nvme core driver.
It is a very thin and efficient implementation that relies on
close cooperation with other bits of the nvme driver, and few small
and simple block helpers.
C
This patch adds native multipath support to the nvme driver. For each
namespace we create only a single block device node, which can be used
to access that namespace through any of the controllers that refer to it.
The gendisk for each controller's path to the namespace still exists
inside the kernel
This allows us to manage the various unique namespace identifiers
together instead of needing various variables and arguments.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
---
drivers/nvme/host/core.c | 69 +++
Introduce a new struct nvme_ns_head that holds information about an actual
namespace, unlike struct nvme_ns, which only holds the per-controller
namespace information. For private namespaces there is a 1:1 relation of
the two, but for shared namespaces this lets us discover all the paths to
it. F
With this flag a driver can create a gendisk that can be used for I/O
submission inside the kernel, but which is not registered as a
user-facing block device. This will be useful for the NVMe multipath
implementation.
Signed-off-by: Christoph Hellwig
---
block/genhd.c | 68 +++
This adds a new nvme_subsystem structure so that we can track multiple
controllers that belong to a single subsystem. For now we only use it
to store the NQN, and to check that we don't have duplicate NQNs unless
the involved subsystems support multiple controllers.
Includes code originally from
The hidden gendisks introduced in the next patch need to keep the dev
field in their struct device empty so that udev won't try to create
block device nodes for them. To support that, rewrite disk_devt to
look at the major and first_minor fields in the gendisk itself instead
of looking into the str
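The idea can be sketched in plain userspace C: pack the dev_t from the gendisk's own fields, so a hidden disk that leaves both at zero simply reports no devt. The MKDEV encoding below mirrors the kernel's 20-bit minor split; the struct is a simplified stand-in.

```c
#include <assert.h>

/* Kernel-style dev_t packing: 12-bit major, 20-bit minor. */
#define MINORBITS 20
#define MKDEV(ma, mi) (((unsigned int)(ma) << MINORBITS) | (unsigned int)(mi))

struct gendisk {
    int major;
    int first_minor;
};

/* Derive the devt from the gendisk itself, not an embedded struct
 * device; a hidden disk with major == first_minor == 0 yields devt 0. */
static unsigned int disk_devt(const struct gendisk *disk)
{
    return MKDEV(disk->major, disk->first_minor);
}
```

With this, udev never sees a devt for hidden disks and so never creates nodes for them.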
This helper allows stealing the uncompleted bios from a request so
that they can be reissued on another path.
Signed-off-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
---
block/blk-core.c | 21 +
include/linux/blkdev.h | 2 ++
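A minimal userspace model of what such a bio-stealing helper does — splice the request's bio chain onto the tail of a caller-provided list and leave the request empty. The struct definitions are illustrative stubs, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

struct bio { struct bio *bi_next; };
struct bio_list { struct bio *head, *tail; };
struct request { struct bio *bio, *biotail; };

/* Move all uncompleted bios from rq onto list; rq ends up empty so the
 * caller can reissue the bios on another queue/path. */
static void steal_bios(struct bio_list *list, struct request *rq)
{
    if (rq->bio) {
        if (list->tail)
            list->tail->bi_next = rq->bio;
        else
            list->head = rq->bio;
        list->tail = rq->biotail;
        rq->bio = NULL;
        rq->biotail = NULL;
    }
}
```

Because it is a pointer splice of an already-linked chain, the operation is O(1) regardless of how many bios the request carries.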
Set aside a bit in the request/bio flags for driver use.
Signed-off-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
Reviewed-by: Johannes Thumshirn
---
include/linux/blk_types.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/blk_types.h b/
This helper allows reinserting a bio into a new queue without much
overhead, but requires all queue limits to be the same for the upper
and lower queues, and it does not provide any recursion prevention.
Signed-off-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
Reviewed-by: Javier González
R
Hi Jens,
these patches add the block layer helpers / tweaks for NVMe multipath
support. Can you review them for inclusion?
There have been no functional changes relative to the versions posted
with the previous nvme multipath patchset.
This flag should be before the operation-specific REQ_NOUNMAP bit.
Signed-off-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
Reviewed-by: Johannes Thumshirn
---
include/linux/blk_types.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/inc
We don't need a frozen queue to update the chunk_size, which is just a
hint, and moving it a little earlier will allow for some better code
reuse with the multipath code.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
dif
Split out the code that applies the calculated value to a given
disk/queue into a new helper that can be reused by the multipath code.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 49 +---
1 file changed, 26 insertions(+), 23 deletions(
This is safe because the queue is always frozen when we revalidate, and
it simplifies both the existing code as well as the multipath
implementation.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 40 ++--
1 file changed, 10 insertions(+), 30
With multipath we don't want a hard DNR bit on a request that is cancelled
by a controller reset, but instead want to be able to retry it on another
path. To achieve this, don't always set the DNR bit when the queue is
dying in nvme_cancel_request, but defer that decision to
nvme_req_needs_retry.
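The retry decision can be sketched as a simple predicate. This is a userspace model: the DNR bit value matches the NVMe status encoding, but `queue_dying` and `have_other_path` are hypothetical stand-ins for state the driver tracks elsewhere.

```c
#include <assert.h>
#include <stdbool.h>

#define NVME_SC_DNR 0x4000u  /* Do Not Retry bit in the NVMe status field */

struct req_model {
    unsigned int status;
    bool queue_dying;      /* request cancelled by a controller reset */
    bool have_other_path;  /* multipath: another controller can serve it */
};

/* A hard DNR always wins; a dying queue is only final when there is no
 * other path to retry the request on. */
static bool req_needs_retry(const struct req_model *req)
{
    if (req->status & NVME_SC_DNR)
        return false;
    if (req->queue_dying && !req->have_other_path)
        return false;
    return true;
}
```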
To allow reusing this function for the multipath node.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index bc27f6603861..8702e49c5c45 100644
---
To allow reusing this function for the multipath node.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 33 +
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8702e49c5c45..bfd
Hi all,
below are a couple cleanup patches to prepare for the next version
of the nvme multipath code.
On Mon, Oct 30, 2017 at 11:37:55AM +0800, Guan Junxiong wrote:
> > + head->disk->flags = GENHD_FL_EXT_DEVT;
> > + sprintf(head->disk->disk_name, "nvme%dn%d",
> > + ctrl->subsys->instance, nsid);
>
> Is it okay to use head->instance instead of nsid for disk name nvme#n# ?
> Be
Tang--
On 11/01/2017 08:08 PM, tang.jun...@zte.com.cn wrote:
> From: Tang Junhui
>
> Journal buckets form a circular buffer; the buckets can look like
> YYYNNNYY, which means the first valid journal is in
> the 7th bucket and the latest valid journal is in the third bucket. In
> this case, if we do not try we
On Wed, 2017-11-01 at 08:21 -0600, Jens Axboe wrote:
> Fixed that up, and applied these two patches as well.
Hello Jens,
Recently I noticed that a test system sporadically hangs during boot (Dell
PowerEdge R720 that boots from a hard disk connected to a MegaRAID SAS adapter)
and also that srp-tes
Hi Jens,
This patchset avoids allocating a driver tag beforehand for the flush rq
when an I/O scheduler is in use. The flush rq is then no longer treated
specially wrt. get/put driver tag, and the code gets cleaned up a lot:
for example, reorder_tags_to_front() is removed, and we needn't worry
about request order in the dispatch list fo
From: Jianchao Wang
When freeing the driver tag of the next rq with an I/O scheduler
configured, it gets the first entry of the list; however, at that
moment the failed rq has been requeued at the head of the list, so
the rq it gets is the failed rq, not the next rq.
Free the driver tag of the next rq before th
In the following patch, we will use RQF_FLUSH_SEQ to decide:
1) if the flag isn't set, the flush rq needs to be inserted via
blk_insert_flush()
2) otherwise, the flush rq needs to be dispatched directly since
it is in the flush machinery now.
So we use blk_mq_request_bypass_insert() for requests of by
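The routing rule above boils down to a single flag test. A userspace sketch (the flag's bit position is illustrative, not the kernel's actual value):

```c
#include <assert.h>

#define RQF_FLUSH_SEQ (1u << 4)   /* illustrative bit position */

enum route { ROUTE_INSERT_FLUSH, ROUTE_DISPATCH_DIRECT };

/* No RQF_FLUSH_SEQ yet: hand the rq to blk_insert_flush(). Flag set:
 * the rq is already inside the flush machinery, dispatch it directly. */
static enum route flush_rq_route(unsigned int rq_flags)
{
    if (!(rq_flags & RQF_FLUSH_SEQ))
        return ROUTE_INSERT_FLUSH;
    return ROUTE_DISPATCH_DIRECT;
}
```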
Block flush needs this function without running the queue, so introduce
the parameter.
Signed-off-by: Ming Lei
---
block/blk-core.c | 2 +-
block/blk-mq.c | 5 +++--
block/blk-mq.h | 2 +-
3 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index bb4
blk_insert_flush() should only insert the request, since a queue run
always follows it.
In case of bypassing the flush, we don't need to run the queue because
every blk_insert_flush() is followed by one queue run.
Signed-off-by: Ming Lei
---
block/blk-flush.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
di
In case of an I/O scheduler we always pre-allocate one driver tag before
calling blk_insert_flush(), and the flush request will be marked as
RQF_FLUSH_SEQ once it is in the flush machinery.
So if RQF_FLUSH_SEQ isn't set, we call blk_insert_flush() to handle
the request, otherwise the flush request is dispatch
We need this helper to put the driver tag for the flush rq, since we will
not share the tag in the flush request sequence in the following patch
in case an I/O scheduler is applied.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 32
block/blk-mq.h | 33 ++
The idea behind this is simple:
1) for the none scheduler, a driver tag has to be borrowed for the flush
rq, otherwise we may run out of tags and cause an IO hang. And get/put
driver tag is actually a noop, so reordering tags isn't necessary at all.
2) for a real I/O scheduler, we needn't allocate a driver tag befor
On Thu, 2017-11-02 at 10:36 +0800, Hongxu Jia wrote:
> Apply this patch, and the test on my platform is passed.
Thank you for having tested this patch. Does this mean that you are OK
with adding the following to this patch: Tested-by: Hongxu Jia
?
Bart.
On 11/01/2017 04:31 PM, Bart Van Assche wrote:
> The changes introduced through commit 82ed4db499b8 assume that the
> sense buffer pointer in struct scsi_request is initialized for all
> requests - passthrough and filesystem requests. Hence make sure
> that that pointer is initialized for filesyste
On 11/02/2017 05:42 AM, Arnd Bergmann wrote:
> Like many storage drivers, skd uses an unsigned 32-bit number for
> interchanging the current time with the firmware. This will overflow in
> y2106 and is otherwise safe.
>
> However, the get_seconds() function is generally considered deprecated
> sin
OK, thanks.
-----Original Message-----
From: chenxiang (M)
Sent: 2 November 2017 20:34
To: Zouming (IT); linux-block@vger.kernel.org;
ax...@fb.com
Cc: wangzhoumengjian
Subject: Re: [bug report after v4.5-rc1]block: When the scsi device has a timeout
IO, the scsi device is stuck when it is deleted
On 2017/11/2 20:16, Z
On 2017/11/2 20:16, Zouming (IT) wrote:
1. Steps to reproduce:
(1) Send IO on the device /dev/sdx.
(2) Simulate a lost IO.
(3) Use the command below to delete the scsi device before the IO times out:
echo 1 > /sys/class/sdx/device/delete
2. The stack of the delete thread is below:
[] msleep+0x2f/0x40
[] __blk_dr
1. Steps to reproduce:
(1) Send IO on the device /dev/sdx.
(2) Simulate a lost IO.
(3) Use the command below to delete the scsi device before the IO times out:
echo 1 > /sys/class/sdx/device/delete
2. The stack of the delete thread is below:
[] msleep+0x2f/0x40
[] __blk_drain_queue+0xa4/0x170
[] blk_clean
Like many storage drivers, skd uses an unsigned 32-bit number for
interchanging the current time with the firmware. This will overflow in
y2106 and is otherwise safe.
However, the get_seconds() function is generally considered deprecated
since the behavior is different between 32-bit and 64-bit ar
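A tiny demonstration of the y2106 point: a 64-bit seconds count truncated to u32 wraps 2^32 seconds after the 1970 epoch, which lands in the year 2106.

```c
#include <assert.h>
#include <stdint.h>

/* Model of handing the current time to firmware as a u32: the value is
 * simply truncated, so it wraps 2^32 seconds after 1970. */
static uint32_t fw_seconds(int64_t now_since_epoch)
{
    return (uint32_t)now_since_epoch;
}
```

2^32 seconds is roughly 136 years, so the wrap year is 1970 + 136 = 2106.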