On Wed, Sep 20, 2017 at 04:55:46PM +0200, Christoph Hellwig wrote:
> On Wed, Sep 20, 2017 at 01:09:59PM +0200, Johannes Thumshirn wrote:
> > Using your patchset and doing the same double connect trick I get the same
> > two block devices of course.
> >
> > How do I connect using both paths?
>
>
On Wed, Sep 20, 2017 at 04:54:36PM +0200, Christoph Hellwig wrote:
> On Wed, Sep 20, 2017 at 10:36:43AM +0200, Johannes Thumshirn wrote:
> > Being one of the persons who has to backport a lot of NVMe code to older
> > kernels I'm not a huge fan of renaming nvme_ns.
>
> The churn is my main
Christoph,
> I'm really not sure we should check for -EREMOTEIO specifically, but
> Martin, who is more familiar with the SCSI code, might be able to
> correct me; I'd feel safer about checking for any error, which is
> what the old code did.
>
> Except for that the patch looks fine to me.
We
On 09/21/2017 09:29 AM, Christoph Hellwig wrote:
> So the check change here looks good to me.
>
> I don't like the duplicate code; can you look into sharing
> the new segment checks between the two functions and the existing
> instance in ll_merge_requests_fn by passing say two struct bio
When accounting nr_phys_segments while merging bios into a request,
only segment merges within an individual bio are considered, not
across all the bios in the request. This makes the nr_phys_segments
of the request larger than the real number when the segments of
adjacent bios in the request are contiguous and mergeable. The
nr_phys_segments of the request
So the check change here looks good to me.
I don't like the duplicate code; can you look into sharing
the new segment checks between the two functions and the existing
instance in ll_merge_requests_fn by passing say two struct bio *bio1
and struct bio *bio2 pointers instead of using req->bio
On Wed, Sep 20, 2017 at 06:58:22PM -0400, Keith Busch wrote:
> > + sprintf(head->disk->disk_name, "nvme/ns%d", head->instance);
>
> If you name it 'nvme/ns<#>', kobject_set_name_vargs is going to change
> that '/' into a '!', so the sysfs entry is named 'nvme!ns<#>'. Not a
> big deal I suppose, but
On Mon, Sep 18, 2017 at 04:14:53PM -0700, Christoph Hellwig wrote:
This is awesome! Looks great, just a minor comment:
> + sprintf(head->disk->disk_name, "nvme/ns%d", head->instance);
Naming it 'nvme/ns<#>', kobject_set_name_vargs is going to change that
'/' into a '!', so the sysfs entry
On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> Ming Lei - 28.08.17, 20:58:
> > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > Hi.
> > >
> > > Here is disk setup for QEMU VM:
> > >
> > > ===
> > > [root@archmq ~]# smartctl -i /dev/sda
> > > …
> >
On Wed, Sep 20, 2017 at 07:25:02PM +0200, Martin Steigerwald wrote:
> Ming Lei - 28.08.17, 21:32:
> > On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > > Ming Lei - 28.08.17, 20:58:
> > > > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > > > Hi.
> >
On 09/20/2017 03:29 PM, Christoph Hellwig wrote:
> Hi Jens,
>
> a couple nvme fixes for -rc2 are below:
>
> - fixes for the Fibre Channel host/target to fix spec compliance
> - allow a zero keep alive timeout
> - make the debug printk for broken SGLs work better
> - fix queue zeroing during
Hi Jens,
a couple nvme fixes for -rc2 are below:
- fixes for the Fibre Channel host/target to fix spec compliance
- allow a zero keep alive timeout
- make the debug printk for broken SGLs work better
- fix queue zeroing during initialization
The following changes since commit
From: Omar Sandoval
When the request is completed, lo_complete_rq() checks cmd->use_aio.
However, if this is in fact an aio request, cmd->use_aio will have
already been reused as cmd->ref by lo_rw_aio*. Fix it by not using a
union. On x86_64, there's a hole after the union
Use appropriate memory free calls based on allocation type used and
also fix number of times free is called if kmalloc fails.
Signed-off-by: Rakesh Pandit
---
drivers/lightnvm/pblk-init.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git
On 2017/9/20 5:40 PM, Michael Lyle wrote:
> On Wed, Sep 20, 2017 at 3:28 AM, Coly Li wrote:
>> Even if the read request failed on file system metadata, because stale
>> data will finally be provided to the kernel file system code, it is
>> probable that the file system won't complain either.
>
On 2017/9/20 6:07 PM, Kent Overstreet wrote:
> On Wed, Sep 20, 2017 at 06:24:33AM +0800, Coly Li wrote:
>> When bcache does read I/Os, for example in writeback or writethrough mode,
>> if a read request on the cache device fails, bcache will try to recover
>> the request by reading from the cached
On Wed, 20 Sep 2017 13:09:31 -0600
Jens Axboe wrote:
>
> I'll take it through my tree, and I'll prune some of that comment
> as well (which should be a commit message thing, not a code comment).
>
Agreed, and thanks.
-- Steve
On Wed, Sep 20, 2017 at 09:28:34PM +0300, Rakesh Pandit wrote:
> Hi Javier,
>
> one small issue I found for error path while going through changes:
>
> On Mon, Jun 26, 2017 at 11:57:17AM +0200, Javier González wrote:
> ..
> > +static int pblk_lines_alloc_metadata(struct pblk *pblk)
> > +{
[..]
>
On 09/20/2017 01:35 PM, Christoph Hellwig wrote:
>> +/*
>> + * When reading or writing the blktrace sysfs files, the references to the
>> + * opened sysfs or device files should prevent the underlying block device
>> + * from being removed. So no further delete protection is really needed.
>> + *
On 09/20/2017 12:47 PM, Bart Van Assche wrote:
> blk_mq_get_tag() can modify data->ctx. This means that in the
> error path of blk_mq_get_request() data->ctx should be passed to
> blk_mq_put_ctx() instead of local_ctx.
It's just a cosmetic thing; the only part that matters is that
we balance the
blk_mq_get_tag() can modify data->ctx. This means that in the
error path of blk_mq_get_request() data->ctx should be passed to
blk_mq_put_ctx() instead of local_ctx.
Fixes: 1ad43c0078b7 ("blk-mq: don't leak preempt counter/q_usage_counter
when allocating rq failed")
Signed-off-by: Bart
Hi Javier,
one small issue I found for error path while going through changes:
On Mon, Jun 26, 2017 at 11:57:17AM +0200, Javier González wrote:
..
> +static int pblk_lines_alloc_metadata(struct pblk *pblk)
> +{
> + struct pblk_line_mgmt *l_mg = &pblk->l_mg;
> + struct pblk_line_meta *lm = &pblk->lm;
Christoph,
Can you give an acked-by for this patch?
Jens,
You want to take this through your tree, or do you want me to?
If you want it, here's my:
Acked-by: Steven Rostedt (VMware)
-- Steve
On Wed, 20 Sep 2017 13:26:11 -0400
Waiman Long wrote:
The lockdep code had reported the following unsafe locking scenario:

  CPU0                    CPU1
  ----                    ----
  lock(s_active#228);
                          lock(&bdev->bd_mutex/1);
                          lock(s_active#228);
  lock(&bdev->bd_mutex);

 *** DEADLOCK ***
Ming Lei - 28.08.17, 21:32:
> On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > Ming Lei - 28.08.17, 20:58:
> > > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > > Hi.
> > > >
> > > > Here is disk setup for QEMU VM:
[…]
> > > > In words: 2 virtual
On Wed, Sep 20, 2017 at 06:24:33AM +0800, Coly Li wrote:
> When bcache does read I/Os, for example in writeback or writethrough mode,
> if a read request on the cache device fails, bcache will try to recover
> the request by reading from the cached device. If the data on the cached
> device is not synced
On 09/20/2017 08:38 AM, Josef Bacik wrote:
> On Fri, May 05, 2017 at 10:25:18PM -0400, Josef Bacik wrote:
>> In testing we noticed that nbd would spew if you ran a fio job against
>> the raw device itself. This is because fio calls a block device
>> specific ioctl, however the block layer will
On Wed, Sep 20, 2017 at 10:36:43AM +0200, Johannes Thumshirn wrote:
> Being one of the persons who has to backport a lot of NVMe code to older
> kernels I'm not a huge fan of renaming nvme_ns.
The churn is my main worry. Well, and that I don't have a really good
name for what currently is
On Fri, May 05, 2017 at 10:25:18PM -0400, Josef Bacik wrote:
> In testing we noticed that nbd would spew if you ran a fio job against
> the raw device itself. This is because fio calls a block device
> specific ioctl, however the block layer will first pass this back to the
> driver ioctl handler
On Tue, Sep 19, 2017 at 10:32 PM, Christoph Hellwig wrote:
> On Wed, Sep 06, 2017 at 07:38:10PM +0200, Ilya Dryomov wrote:
>> sd_config_write_same() ignores ->max_ws_blocks == 0 and resets it to
>> permit trying WRITE SAME on older SCSI devices, unless ->no_write_same
>> is set.
On Tue, Sep 19, 2017 at 02:37:50PM -0600, Jens Axboe wrote:
> On 09/02/2017 09:17 AM, Ming Lei wrote:
> > @@ -142,18 +178,31 @@ void blk_mq_sched_dispatch_requests(struct
> > blk_mq_hw_ctx *hctx)
> > if (!list_empty(&rq_list)) {
> > blk_mq_sched_mark_restart_hctx(hctx);
> >
Hi Christoph,
I wanted to test your patches, but I fail to see how to set it up.
I do have a host with two RDMA HCAs connected to the target (Linux), for "normal
dm-mpath" test I do nvme connect with the host traddr argument for both of the
HCAs and get two nvme block devices which I can
On 2017/9/20 8:59 AM, Michael Lyle wrote:
> Coly--
>
> It's an interesting changeset.
Hi Mike,
Yes, it's interesting :-) It fixes a silent database data corruption in
our product kernel. The most dangerous point is that it happens silently
even when in-data checksums are used; this issue was detected by
On Mon, Sep 18, 2017 at 04:14:53PM -0700, Christoph Hellwig wrote:
> This patch adds initial multipath support to the nvme driver. For each
> namespace we create a new block device node, which can be used to access
> that namespace through any of the controllers that refer to it.
>
> Currently
On 13 September 2017 at 13:40, Adrian Hunter wrote:
> Currently the host can be claimed by a task. Change this so that the host
> can be claimed by a context that may or may not be a task. This provides
> for the host to be claimed by a block driver queue to support
On Mon, Sep 18, 2017 at 04:14:52PM -0700, Christoph Hellwig wrote:
> Introduce a new struct nvme_ns_head [1] that holds information about
> an actual namespace, unlike struct nvme_ns, which only holds the
> per-controller namespace information. For private namespaces there
> is a 1:1 relation of
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn Storage
jthumsh...@suse.de                    +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham
On Wed, Sep 20, 2017 at 08:18:41AM +0800, Tony Yang wrote:
> Hi , Christoph
>
> I use the above code to recompile the kernel. The following error
> occurred. I can't find the blk_steal_bios function. What's the reason
> for that? Hope to get your help, thank you
Hi Tony,
have you pulled it
This function is used by the block layer queue to bail out of
requests if the current request is towards an RPMB
"block device".
This was done to avoid boot-time scanning of this "block
device", which was never really a block device, thus duct-taping
over the fact that it was badly engineered.
The RPMB partition on the eMMC devices is a special area used
for storing cryptographically safe information signed by a
special secret key. To write and read records from this special
area, authentication is needed.
The RPMB area is *only* and *exclusively* accessed using
ioctl()s from
Coly--
It's an interesting changeset.
I am not positive it will work in practice -- the most likely
objects to be cached are filesystem metadata. Won't most filesystems
fall apart if some of their data structures revert to an earlier
point in time?
Mike
On Tue, Sep 19, 2017 at 3:24 PM,