On Sat, Jun 17, 2017 at 01:59:49PM -0600, Jens Axboe wrote:
> Reviewed-by: Andreas Dilger
> Signed-off-by: Jens Axboe
> ---
> fs/block_dev.c | 2 ++
> fs/direct-io.c | 2 ++
> fs/iomap.c | 1 +
> 3 files changed, 5 insertions(+)
>
> diff --git
Can you add linux-nvme for the next repost?
As said before I think we should rely on implicit streams allocation,
as that will make the whole patch a lot simpler, and it solves the issue
that your current patch will take away your 4 streams from the general
pool on every controller that supports
On Sat, Jun 17, 2017 at 01:59:48PM -0600, Jens Axboe wrote:
> We have a pwritev2(2) interface based on passing in flags. Add an
> fcntl interface for querying these flags, and also for setting them
> as well:
>
> F_GET_RW_HINT Returns the read/write hint set. Right now it
>
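For reference, the fcntl interface that eventually landed uses F_GET_RW_HINT/F_SET_RW_HINT with RWH_WRITE_LIFE_* values. A minimal userspace sketch (the fallback #defines cover older headers; the numeric values come from the uapi and the helpers simply wrap fcntl(2)):

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Fall back to the uapi values for older userspace headers. */
#ifndef F_GET_RW_HINT
#define F_GET_RW_HINT	1035	/* F_LINUX_SPECIFIC_BASE + 11 */
#define F_SET_RW_HINT	1036	/* F_LINUX_SPECIFIC_BASE + 12 */
#endif
#ifndef RWH_WRITE_LIFE_SHORT
#define RWH_WRITE_LIFE_SHORT	2
#endif

/* Set a write life time hint on an open fd; returns 0 on success,
 * -1 when the kernel does not support write hints. */
static int set_write_hint(int fd, uint64_t hint)
{
	return fcntl(fd, F_SET_RW_HINT, &hint);
}

/* Read back the currently set hint; returns 0 on success, -1 if
 * unsupported. */
static int get_write_hint(int fd, uint64_t *hint)
{
	return fcntl(fd, F_GET_RW_HINT, hint);
}
```

Note that on kernels without write-hint support both calls fail with EINVAL, so callers should treat -1 as "unsupported" rather than fatal.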
On Sat, Jun 17, 2017 at 09:11:30AM -0600, Jens Axboe wrote:
> I have two samples here, and I just tested, and both of them want it
> assigned with nsid=0x or they will fail the writes... So I'd say
> we're better off ensuring we do allocate them globally.
That's clearly against the spec.
On Sat, Jun 17, 2017 at 01:59:47PM -0600, Jens Axboe wrote:
> Add four flags for the pwritev2(2) system call, allowing an application
> to give the kernel a hint about what on-media life times can be
> expected from a given write.
>
> The intent is for these values to be relative to each other,
> +/*
> + * Write life time hint values.
> + */
> +enum rw_hint {
> + WRITE_LIFE_NONE = 0,
> + WRITE_LIFE_SHORT,
> + WRITE_LIFE_MEDIUM,
> + WRITE_LIFE_LONG,
> + WRITE_LIFE_EXTREME,
> +};
> +
> +#define RW_HINT_MASK 0x7 /* 3 bits */
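The 3-bit mask suggests the hint is meant to be packed into a flags word. A userspace sketch of how such packing would work (the shift position and helper names here are illustrative, not part of the proposed uapi):

```c
/* Mirror of the proposed write life time hint values. */
enum rw_hint {
	WRITE_LIFE_NONE = 0,
	WRITE_LIFE_SHORT,
	WRITE_LIFE_MEDIUM,
	WRITE_LIFE_LONG,
	WRITE_LIFE_EXTREME,
};

#define RW_HINT_MASK	0x7	/* 3 bits */
#define RW_HINT_SHIFT	4	/* illustrative position in the flags word */

/* Clear any previous hint bits, then insert the new hint. */
static unsigned int pack_hint(unsigned int flags, enum rw_hint hint)
{
	return (flags & ~(RW_HINT_MASK << RW_HINT_SHIFT)) |
		(((unsigned int)hint & RW_HINT_MASK) << RW_HINT_SHIFT);
}

/* Extract the hint value back out of the flags word. */
static enum rw_hint unpack_hint(unsigned int flags)
{
	return (flags >> RW_HINT_SHIFT) & RW_HINT_MASK;
}
```

Three bits comfortably hold the five defined values while leaving room for future hints, which is presumably why the mask is 0x7.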
FYI, exposing enums in a uapi is
> +static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl, bool
> remove)
> {
> + nvme_rdma_stop_queue(&ctrl->queues[0]);
> + if (remove) {
> + blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
> + blk_cleanup_queue(ctrl->ctrl.admin_q);
> +
On Sun, Jun 18, 2017 at 06:21:37PM +0300, Sagi Grimberg wrote:
> We have all we need in these functions now that these
> are aware if we are doing a full instantiation/removal.
>
> For that we move nvme_rdma_configure_admin_queue to avoid
> a forward declaration, and we add blk_mq_ops forward
On Sun, Jun 18, 2017 at 06:21:58PM +0300, Sagi Grimberg wrote:
> we are going to need the name for the core routine...
I think we should just pick this up ASAP as a prep patch..
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sun, Jun 18, 2017 at 06:22:02PM +0300, Sagi Grimberg wrote:
> Signed-off-by: Sagi Grimberg
Could use a changelog.
Ming: does this solve your problem of not seeing the new queues
after a qemu CPU hotplug + reset?
> ---
> drivers/nvme/host/core.c | 3 +++
> 1 file changed,
Currently we still default to bouncing all highmem pages for block
drivers. This series defaults to no bouncing and instead adds calls
to blk_queue_bounce_limit to those drivers that need it. It also
has a few cleanups in that area.
pktcdvd is a make_request based stacking driver and thus doesn't have any
addressing limits on its own. It also doesn't use bio_data() or
page_address(), so it doesn't need a lowmem bounce either.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/block/pktcdvd.c | 2 --
1 file c
This moves the knowledge about bouncing out of the callers into the
block core (just like we do for the normal I/O path), and allows us to
unexport blk_queue_bounce.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-map.c | 7 +++
block/bo
For historical reasons we default to bouncing highmem pages for all block
queues. But the blk-mq drivers are easy to audit to ensure that we don't
need this - scsi and mtip32xx set explicit limits and everyone else doesn't
have any particular ones.
Signed-off-by: Christoph Hellwig <h...@lst
We only call blk_queue_bounce for request-based drivers, so stop messing
with it for make_request based drivers.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 5 +
block/blk-mq.c | 5 +
block/blk-settings.c
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk.h| 13 +
block/bounce.c | 1 +
include/linux/blkdev.h | 13 -
3 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/block/blk.h b/block/blk.h
index 83c8e1100525..4576fb
On Sun, Jun 18, 2017 at 06:21:35PM +0300, Sagi Grimberg wrote:
> In case we reconnect with inflight admin IO we
> need to make sure that the connect comes before
> the admin command. This can be only achieved by
> using a separate request queue for admin connects.
Use up a few more lines of the
On Sun, Jun 18, 2017 at 06:22:03PM +0300, Sagi Grimberg wrote:
> Signed-off-by: Sagi Grimberg
The subject sounds odd and it could use a changelog. But I'd love to
pick this change up ASAP as it's the right thing to do..
BLK_BOUNCE_ANY is the default now, so the call is superfluous.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/mmc/core/queue.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 7f20298d892b..b659a28c8018 100644
--- a/d
Now that all queue allocators come without a bounce limit by default,
dm doesn't have to override this anymore.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/md/dm.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index fbd06b9f9467..402946
Instead move it to the callers. Those that either don't use bio_data() or
page_address() or are specific to architectures that do not support highmem
are skipped.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c| 5 -
drivers/block/aoe/aoeblk.c | 1 +
d
And just move it into scsi_transport_sas which needs it due to low-level
drivers directly dereferencing bio_data, and into blk_init_queue_node,
which will need a further push into the callers.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c
On Mon, Jun 19, 2017 at 03:49:43PM +0800, Ye Xiaolong wrote:
> On 06/19, Christoph Hellwig wrote:
> >On Mon, Jun 19, 2017 at 02:03:18PM +0800, kernel test robot wrote:
> >>
> >> Greeting,
> >>
> >> FYI, we noticed a -4% regression of fio.write_bw_
On Mon, Jun 19, 2017 at 10:49:15AM +0300, Sagi Grimberg wrote:
> However, you raise a valid point, I think I added this before we
> had the queue_is_ready protection, which will reject the command
> if the queue is not LIVE (unless its a connect). I think the reason
> its still in is that I tested
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
> We can do that, but this tries to eliminate duplicate code as
> much as possible. It's not like the convention is unprecedented...
It's fairly nasty to follow. OTOH I like your overall cleanup,
so I guess I shouldn't complain about the initial patches too much
but just possibly do another pass
Just wire up the generic TCG OPAL infrastructure to the SCSI disk driver
and the Security In/Out commands.
Note that I don't know of any actual SCSI disks that do support TCG OPAL,
but this is required to support ATA disks through libata.
Signed-off-by: Christoph Hellwig <h...@lst
Hi all,
this patch adds TCG Opal support to the scsi disk driver. As far as I know
only SATA disks actually support OPAL, and as Martin fears RSOC-related
regressions the support is conditional on a flag in struct scsi_device,
which so far only libata sets.
Because of that we should merge the
On Sun, Jun 18, 2017 at 06:21:39PM +0300, Sagi Grimberg wrote:
> This should pair with nvme_rdma_stop_queue. While this
> is not a complete 1x1 reverse, it still pairs up pretty
> well because in fabrics we don't have a disconnect capsule
> but we simply teardown the transport association.
Looks
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sun, Jun 18, 2017 at 06:21:42PM +0300, Sagi Grimberg wrote:
> No need to queue an extra work to indirect controller
> uninit and put the final reference.
Maybe my memory is a little vague, but didn't we need the work_struct
for something? At least it would serialize all the removals for
Looks fine. I'd be happy to take this as an early cleanup.
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sun, Jun 18, 2017 at 06:21:44PM +0300, Sagi Grimberg wrote:
> Will be used when we centralize control flows. only
> rdma for now.
Should we at some point move the tag_sets themselves to the generic
ctrl instead of just pointers?
On Sun, Jun 18, 2017 at 06:21:45PM +0300, Sagi Grimberg wrote:
> Will be used in centralized code later. only rdma
> for now.
It would be great to initialize it early on for all transports, and
then just use the stored field instead of re-reading CAP in various
places.
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine, but how about doing this early in the series? There's
quite a bit of churn around this code.
On Sun, Jun 18, 2017 at 06:21:51PM +0300, Sagi Grimberg wrote:
> We're trying to make admin queue configuration generic, so
> move the rdma specifics to the queue allocation (based on
> the queue index passed).
Needs at least a comment, and probably factoring into a little
helper. And once we
On Sun, Jun 18, 2017 at 06:21:54PM +0300, Sagi Grimberg wrote:
> We intend for these handlers to become generic, thus add them to
> the nvme core controller struct.
Do you remember why we actually need all the different work items?
We need err_work to recover from RDMA QP-level errors. But how
> +static void nvme_free_io_queues(struct nvme_ctrl *ctrl)
> +{
> + int i;
> +
> + for (i = 1; i < ctrl->queue_count; i++)
> + ctrl->ops->free_hw_queue(ctrl, i);
> +}
> +
> +void nvme_stop_io_queues(struct nvme_ctrl *ctrl)
> +{
> + int i;
> +
> + for (i = 1; i <
On Mon, Jun 19, 2017 at 11:03:36AM +0300, Sagi Grimberg wrote:
>
>> The subject sounds odd and it could use a changelog. But I'd love to
>> pick this change up ASAP as it's the right thing to do..
>
> How? where would you place it? there is no nvme_configure_admin_queue in
> nvme-core.
Doh.
On Tue, Jun 06, 2017 at 11:58:02AM +0200, Christoph Hellwig wrote:
> On Mon, Jun 05, 2017 at 08:48:00PM -0400, Martin K. Petersen wrote:
> > For WRITE SAME, scsi_report_opcode() is gated not only by
> > sdev->no_report_opcodes but by sdev->no_write_same.
> >
> &g
Looks fine for now:
Reviewed-by: Christoph Hellwig <h...@lst.de>
But rather sooner than later we need to make this path at least go
through the normal end_request processing. Without that we're just
bound to run into problems like we had with the tag changes again
when the driver is
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Hi Dmitry,
can you resend this series? I really think we should get this into
4.12 at least.
ore.c
> @@ -730,7 +730,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t
> gfp_mask, int node_id)
> if (q->id < 0)
> goto fail_q;
>
> - q->bio_split = bioset_create(BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
> + q->bio_split = bioset_create(BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS |
> BIOSET_NEED_RESCUER);
Please avoid > 80 char lines.
Otherwise looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
although I think we really should kill off the block level bouncing
rather sooner than later.
On Thu, Apr 13, 2017 at 10:23:10PM -0400, Martin K. Petersen wrote:
> The other thing that keeps me a bit on the fence is that a bunch of the
> plumbing to handle a bio with a payload different from bi_size is needed
> for the copy offload token. I'm hoping to have those patches ready for
> 4.13.
On Tue, May 02, 2017 at 12:21:23PM +0200, Jan Kara wrote:
> it makes sense to treat REQ_FUA and REQ_PREFLUSH ops as synchronous in
> op_is_sync() since callers cannot rely on this anyway... Thoughts?
I'm fine with treating them as sync.
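With that change, op_is_sync() in the kernel ends up treating reads as always synchronous and writes as synchronous when any of SYNC, FUA or PREFLUSH is set. A userspace sketch of the resulting predicate (the flag values below are illustrative placeholders, not the real blk_types.h encodings):

```c
#include <stdbool.h>

/* Illustrative flag values -- the real definitions live in
 * include/linux/blk_types.h and use different bit positions. */
#define REQ_OP_READ	0u
#define REQ_OP_WRITE	1u
#define REQ_OP_MASK	0xffu
#define REQ_SYNC	(1u << 11)
#define REQ_FUA		(1u << 13)
#define REQ_PREFLUSH	(1u << 14)

/* Sketch of op_is_sync() with Jan's proposal applied: reads are always
 * sync, writes are sync if any of SYNC, FUA or PREFLUSH is set. */
static bool op_is_sync(unsigned int op)
{
	return (op & REQ_OP_MASK) == REQ_OP_READ ||
		(op & (REQ_SYNC | REQ_FUA | REQ_PREFLUSH));
}
```

The point of the thread is exactly this: since callers cannot rely on FUA/PREFLUSH writes completing asynchronously anyway, classifying them as sync simplifies the callers.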
This looks reasonable to me, although of course I don't have a way
to test it.
Any reason for the move from ->end_io_data to ->special? I thought
that ->special was something we'd get rid of sooner or later now
that we can have additional per-cmd data even for !mq.
On Thu, Jun 01, 2017 at 02:08:04PM +, Bart Van Assche wrote:
> The first eight patches in this series do not depend on any block layer
> changes.
> Do you want me to repost this patches or can you perhaps queue these without a
> repost?
It would be great if you could repost them, also for
t afs_uuid.
The V1 uuid interpretation in struct form isn't really useful to the
rest of the kernel, and not really compatible with it either, so move it
back to AFS instead of polluting the global uuid.h.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
fs/afs/cmservice.c | 16
Hi all,
this series, which is a combined effort from Amir, Andy and me introduces
new uuid_t and guid_t type names that are less confusing than the existing
types, adds new helpers for them and starts switching the fs code over to
it. Andy has additional patches on top to convert many of the
Does it work fine if you call sb_init_dio_done_wq unconditionally?
> +struct blk_zoned {
> + unsigned int nr_zones;
> + unsigned long *seq_zones;
> +};
> +
> struct blk_zone_report_hdr {
> unsigned int nr_zones;
> u8 padding[60];
> @@ -492,6 +497,10 @@ struct request_queue {
> struct blk_integrity integrity;
>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Hi Anish,
I looked over the code a bit, and I'm rather confused by the newly
added commands. Which controller supports them? Also the NVMe
working group went down a very different way with the ALUA approach,
which uses different grouping concepts and doesn't require path
activations - for Linux
On Fri, Sep 15, 2017 at 07:06:34PM +0900, Damien Le Moal wrote:
> __blk_mq_debugfs_rq_show() and blk_mq_debugfs_rq_show() are exported
> symbols but are only declared in the block-internal file
> block/blk-mq-debugfs.h, which is not cleanly accessible to files outside
> of the block directory.
>
Same as for patch 1: this should stay local to block/ - we don't
want random drivers to grow I/O schedulers.
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Sep 18, 2017 at 09:45:57AM -0600, Michael Moy wrote:
> The write hint needs to be copied to the mapped filesystem
> so it can be passed down to the nvme device driver.
>
> v2: fix tabs in the email
If you want the write hint for buffered I/O you need to set it on the
inode using
On Tue, Sep 19, 2017 at 11:58:34AM -0400, Waiman Long wrote:
> I was trying not to add a new mutex to a structure just for blktrace as
> it is an optional feature that is enabled only if the
> CONFIG_BLK_DEV_IO_TRACE config option is defined and it will only need
> to be taken occasionally.
So?
On Tue, Sep 19, 2017 at 12:15:57PM -0400, Meng Xu wrote:
> Hi Christoph,
>
> By saying not copying the byte twice, did you mean
> copy_from_user(req->cmd, sic->data + sizeof(opcode), cmdlen -
> sizeof(opcode)) ?
>
> Does it affect the how req->cmd will be used later?
> If no, I'll submit another
On Wed, Sep 06, 2017 at 07:38:10PM +0200, Ilya Dryomov wrote:
> sd_config_write_same() ignores ->max_ws_blocks == 0 and resets it to
> permit trying WRITE SAME on older SCSI devices, unless ->no_write_same
> is set. This means blkdev_issue_zeroout() must cope with WRITE SAME
> failing with
On Thu, Sep 21, 2017 at 07:22:17AM +0200, Johannes Thumshirn wrote:
> > But head also has connotations in the SAN world. Maybe nvme_ns_chain?
>
> I know that's why I didn't really like it all too much in the first place as
> well. For nvme_ns_chain, it's not a chain really (the list itself is a
On Wed, Sep 13, 2017 at 02:40:00PM +0300, Adrian Hunter wrote:
> Non-CQE blk-mq showed a 3% decrease in sequential read performance. This
> seemed to be coming from the inferior latency of running work items compared
> with a dedicated thread. Hacking blk-mq workqueue to be unbound reduced the
>
On Wed, Sep 20, 2017 at 10:36:43AM +0200, Johannes Thumshirn wrote:
> Being one of the persons who has to backport a lot of NVMe code to older
> kernels I'm not a huge fan of renaming nvme_ns.
The churn is my main worry. Well, and that I don't have a really good
name for what currently is
Hi Jens,
a couple nvme fixes for -rc2 are below:
- fixes for the Fibre Channel host/target to fix spec compliance
- allow a zero keep alive timeout
- make the debug printk for broken SGLs work better
- fix queue zeroing during initialization
The following changes since commit
On Thu, Sep 21, 2017 at 07:23:45AM +0200, Johannes Thumshirn wrote:
> Ah OK, we maybe should update nvme-cli to recognize it as well then. I just
> looked at the output of nvme list and obviously didn't find it.
Overloading the new per-subsystem nodes into nvme list would be
very confusing I
This looks ok to me, but do we even need to keep the special
cases above? Is there anything relying on the safe but not very
useful ioctls?
Condensing the thing down to:
int scsi_verify_blk_ioctl(struct block_device *bd, unsigned int cmd)
{
if (bd && bd == bd->bd_contains)
On Wed, Sep 20, 2017 at 06:58:22PM -0400, Keith Busch wrote:
> > + sprintf(head->disk->disk_name, "nvme/ns%d", head->instance);
>
> Naming it 'nvme/ns<#>', kobject_set_name_vargs is going to change that
> '/' into a '!', so the sysfs entry is named 'nvme!ns<#>'. Not a big
> deal I suppose, but
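The behaviour Keith refers to is that kobject_set_name_vargs() rewrites any '/' in the name so it is representable in sysfs. A minimal sketch of that sanitization step (function name here is illustrative; the real logic sits inside kobject_set_name_vargs in lib/kobject.c):

```c
#include <string.h>

/* Sketch of what kobject_set_name_vargs() does to a name containing
 * '/': sysfs entries cannot contain path separators, so each '/' is
 * replaced with '!' in place. */
static void kobject_sanitize_name(char *name)
{
	char *s;

	while ((s = strchr(name, '/')))
		*s = '!';
}
```

So a disk named "nvme/ns1" shows up in sysfs as "nvme!ns1", which is the mismatch being pointed out.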
So the check change here looks good to me.
I don't like like the duplicate code, can you look into sharing
the new segment checks between the two functions and the existing
instance in ll_merge_requests_fn by passing say two struct bio *bio1
and struct bio *bio2 pointer instead of using req->bio
On Tue, Sep 12, 2017 at 05:38:05PM +0900, Damien Le Moal wrote:
> struct blk_zoned {
> unsigned int nr_zones;
> unsigned long *seq_zones;
> };
>
> struct request_queue {
> ...
> #ifdef CONFIG_BLK_DEV_ZONED
> struct blk_zoned zoned;
> #endif
> ...
Do we even need a
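The proposed seq_zones field is a bitmap with one bit per zone, marking sequential-write-required zones. A userspace sketch of the accessors such a bitmap implies (helper names are illustrative, not from the patch):

```c
#include <limits.h>
#include <stdbool.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)

/* Mirror of the proposed structure: one bit per zone. */
struct blk_zoned {
	unsigned int nr_zones;
	unsigned long *seq_zones;
};

/* Mark zone @zno as sequential-write-required. */
static void set_seq_zone(struct blk_zoned *z, unsigned int zno)
{
	z->seq_zones[zno / BITS_PER_LONG] |= 1UL << (zno % BITS_PER_LONG);
}

/* Test whether zone @zno requires sequential writes. */
static bool zone_is_seq(const struct blk_zoned *z, unsigned int zno)
{
	return z->seq_zones[zno / BITS_PER_LONG] &
		(1UL << (zno % BITS_PER_LONG));
}
```

The attraction of a per-queue bitmap is that the hot path can answer "is this write to a sequential zone?" without issuing a report-zones command.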
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Thu, Sep 07, 2017 at 01:54:34PM +0200, Christoph Hellwig wrote:
> Now fully away, with my fair share of coffee and lunch:
>
>
> Two fixups for the recent bsg-lib fixes, should go into 4.13 stable as
> well.
Jens, can you look at this for Linux 4.14?
The target sqhdr fix is the critical one that we really want
before -rc2. But we've also accumulated a fair batch of small FC and
RDMA fixes as well.
The following changes since commit cd9e0a08e4f6173f9d7a469cabd09938fc4f0e25:
block: fix a crash caused by wrong API (2017-09-21
On Tue, Sep 19, 2017 at 08:49:12AM -0400, Waiman Long wrote:
> On 09/18/2017 08:01 PM, Christoph Hellwig wrote:
> > Taking a look at this it seems like using a lock in struct block_device
> > isn't the right thing to do anyway - all the action is on fields in
> > struct blk_tr
> diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
> index 7440de4..971044d 100644
> --- a/block/scsi_ioctl.c
> +++ b/block/scsi_ioctl.c
> @@ -466,6 +466,12 @@ int sg_scsi_ioctl(struct request_queue *q, struct
> gendisk *disk, fmode_t mode,
> if (copy_from_user(req->cmd, sic->data,
On Tue, Sep 19, 2017 at 08:55:59AM +0800, jianchao.wang wrote:
> > But can you elaborate a little more on how this was found and if there
> > is a way to easily reproduce it, say for a blktests test case?
> >
> It is found when I made the patch of
> 'block: consider merge of segments when merge bio
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 193 +--
drivers/nvme/host/lightnvm.c | 14 ++--
drivers/nvme/host/nvme.h | 21 -
3 files changed, 193 insertions(+), 35 deletions(-)
diff --git a/drivers/nvme/host/core.c b/d
This flag should be before the operation-specific REQ_NOUNMAP bit.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
include/linux/blk_types.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index a2d2aa
needs to be implemented at the controller
level.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 264 ---
drivers/nvme/host/nvme.h | 11 ++
2 files changed, 259 insertions(+), 16 deletions(-)
diff --git a/drivers/nvm
This allows us to manage the various unique namespace identifiers
together instead of needing various variables and arguments.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 69 +++-
drivers/nvme/host/nvme.
This adds a new nvme_subsystem structure so that we can track multiple
controllers that belong to a single subsystem. For now we only use it
to store the NQN, and to check that we don't have duplicate NQNs unless
the involved subsystems support multiple controllers.
Signed-off-by: Christoph
This helper allows reinserting a bio into a new queue without much
overhead, but requires all queue limits to be the same for the upper
and lower queues, and it does not provide any recursion preventions.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c
This helper allows stealing the uncompleted bios from a request so
that they can be reissued on another path.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
---
block/blk-core.c | 20
include/linux/b
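Conceptually the helper splices the request's bio chain onto a caller-provided list and clears the request, so the bios can be resubmitted on another path. A simplified userspace sketch with a singly-linked bio list (the structure layouts here are stripped down for illustration):

```c
#include <stddef.h>

/* Stripped-down stand-ins for the kernel structures. */
struct bio {
	struct bio *bi_next;
	int id;
};

struct bio_list {
	struct bio *head, *tail;
};

struct request {
	struct bio *bio;	/* first uncompleted bio */
	struct bio *biotail;	/* last bio in the chain */
};

/* Sketch of the steal operation: append the request's whole bio chain
 * to @list in O(1), then detach it from the request. */
static void steal_bios(struct bio_list *list, struct request *rq)
{
	if (!rq->bio)
		return;
	if (list->tail)
		list->tail->bi_next = rq->bio;
	else
		list->head = rq->bio;
	list->tail = rq->biotail;
	rq->bio = rq->biotail = NULL;
}
```

Keeping a tail pointer is what makes the splice constant-time, which matters when failing over many inflight requests at once.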
Set aside a bit in the request/bio flags for driver use.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
include/linux/blk_types.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index acc2f3cdc2fc..7ec2ed097a8a
Hi all,
this series adds support for multipathing, that is accessing nvme
namespaces through multiple controllers to the nvme core driver.
It is a very thin and efficient implementation that relies on
close cooperation with other bits of the nvme driver, and few small
and simple block helpers.
Keith Busch <keith.bu...@intel.com>
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d470f031e27f..5589f67d2cd8 100644
--- a/drivers/nvme/host/c
> Signed-off-by: Jianchao Wang <jianchao.w.w...@oracle.com>
This looks fine to me:
Reviewed-by: Christoph Hellwig <h...@lst.de>
But can you elaborate a little more on how this was found and if there
is a way to easily reproduce it, say for a blktests test case?
Don't rename it to a way too long name. Either add a separate mutex
for your purpose (unless there is interaction between freezing and
blktrace, which I doubt), or properly comment the usage.
Taking a look at this it seems like using a lock in struct block_device
isn't the right thing to do anyway - all the action is on fields in
struct blk_trace, so having a lock inside that would make a lot more
sense.
It would also help to document what exactly we're actually protecting.
Bart, Ming:
can you guys please work a little better together? We've now got two
patchsets that are getting very similar.
Bart, please at least CC Ming when you send out the patches.
Ming - instead of sending a separate series right after Bart a
differential series would be nice. This also