Re: [PATCH 1/4] bcache: finish incremental GC

2018-04-12 Thread Coly Li
On 2018/4/13 11:12 AM, tang.jun...@zte.com.cn wrote: > Hi Coly, > > Hello Coly, > >> On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: >>> From: Tang Junhui >>> >>> In GC thread, we record the latest GC key in gc_done, which is expected >>> to be used for incremental

Re: [PATCH 2/4] bcache: calculate the number of incremental GC nodes according to the total of btree nodes

2018-04-12 Thread tang . junhui
Hi Coly >Hi Junhui, > >> btree_node_prefetch(b, k); >> +/* >> + * initiallize c->gc_stats.nodes >> + * for incremental GC >> + */ >> +b->c->gc_stats.nodes++; >

Re: [PATCH 1/4] bcache: finish incremental GC

2018-04-12 Thread tang . junhui
Hi Coly, Hello Coly, > On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > > From: Tang Junhui > > > > In GC thread, we record the latest GC key in gc_done, which is expected > > to be used for incremental GC, but in currently code, we didn't realize > > it. When GC

Re: usercopy whitelist woe in scsi_sense_cache

2018-04-12 Thread Kees Cook
On Thu, Apr 12, 2018 at 3:47 PM, Kees Cook wrote: > After fixing up some build issues in the middle of the 4.16 cycle, I > get an unhelpful bisect result of commit 0a4b6e2f80aa ("Merge branch > 'for-4.16/block'"). Instead of letting the test run longer, I'm going > to

Re: [PATCH v3] blk-mq: Avoid that submitting a bio concurrently with device removal triggers a crash

2018-04-12 Thread Joseph Qi
On 18/4/11 07:02, Bart Van Assche wrote: > Because blkcg_exit_queue() is now called from inside blk_cleanup_queue() > it is no longer safe to access cgroup information during or after the > blk_cleanup_queue() call. Hence protect the generic_make_request_checks() > call with blk_queue_enter() /

Re: sr: get/drop reference to device in revalidate and check_events

2018-04-12 Thread Martin K. Petersen
Jens, > We can't just use scsi_cd() to get the scsi_cd structure, we have > to grab a live reference to the device. For both callbacks, we're > not inside an open where we already hold a reference to the device. Applied to 4.17/scsi-fixes, thanks! -- Martin K. Petersen Oracle Linux

Re: System hung in I/O when booting with sd card

2018-04-12 Thread Bart Van Assche
On 04/12/18 18:38, Shawn Lin wrote: I think your patch solve this. Thanks. Tested-by: Shawn Lin Thanks for the testing! Bart.

Re: [PATCH 4/4] bcache: fix I/O significant decline while backend devices registering

2018-04-12 Thread tang . junhui
Hi Coly, > On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > > From: Tang Junhui > > > > I attached several backend devices in the same cache set, and produced lots > > of dirty data by running small rand I/O writes in a long time, then I > > continue run I/O in the

Re: 4.15.14 crash with iscsi target and dvd

2018-04-12 Thread Ming Lei
On Thu, Apr 12, 2018 at 09:43:02PM -0400, Wakko Warner wrote: > Ming Lei wrote: > > On Tue, Apr 10, 2018 at 08:45:25PM -0400, Wakko Warner wrote: > > > Sorry for the delay. I reverted my change, added this one. I didn't > > > reboot, I just unloaded and loaded this one. > > > Note: /dev/sr1 as

Re: [PATCH] bcache: simplify the calculation of the total amount of flash dirty data

2018-04-12 Thread tang . junhui
Hi Coly, > On 2018/4/12 3:21 PM, tang.jun...@zte.com.cn wrote: > > From: Tang Junhui > > > > Currently we calculate the total amount of flash only devices dirty data > > by adding the dirty data of each flash only device under registering > > locker. It is very

Re: System hung in I/O when booting with sd card

2018-04-12 Thread Shawn Lin
Hi Bart, On 2018/4/12 17:05, Shawn Lin wrote: Hi Bart, On 2018/4/12 9:54, Bart Van Assche wrote: On Thu, 2018-04-12 at 09:48 +0800, Shawn Lin wrote: I ran into 2 times that my system hung here when booting with a ext4 sd card. No sure how to reproduce it but it seems doesn't matter with the

Re: usercopy whitelist woe in scsi_sense_cache

2018-04-12 Thread Kees Cook
On Thu, Apr 12, 2018 at 3:01 PM, Kees Cook wrote: > On Thu, Apr 12, 2018 at 12:04 PM, Oleksandr Natalenko > wrote: >> Hi. >> >> On čtvrtek 12. dubna 2018 20:44:37 CEST Kees Cook wrote: >>> My first bisect attempt gave me commit 5448aca41cd5

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Bart Van Assche
On Thu, 2018-04-12 at 12:09 -0700, t...@kernel.org wrote: > On Thu, Apr 12, 2018 at 06:56:26PM +, Bart Van Assche wrote: > > On Thu, 2018-04-12 at 11:11 -0700, t...@kernel.org wrote: > > > * Right now, blk_queue_enter/exit() doesn't have any annotations. > > > IOW, there's no way for paths

Re: usercopy whitelist woe in scsi_sense_cache

2018-04-12 Thread Kees Cook
On Thu, Apr 12, 2018 at 12:04 PM, Oleksandr Natalenko wrote: > Hi. > > On čtvrtek 12. dubna 2018 20:44:37 CEST Kees Cook wrote: >> My first bisect attempt gave me commit 5448aca41cd5 ("null_blk: wire >> up timeouts"), which seems insane given that null_blk isn't even

Re: [PATCH v2] block: ratelimite pr_err on IO path

2018-04-12 Thread Martin K. Petersen
Jack, > + pr_err_ratelimited("%s: ref tag error at > location %llu (rcvd %u)\n", I'm a bit concerned about dropping records of potential data loss. Also, what are you doing that compels all these to be logged? This should be a very rare occurrence. -- Martin K.

[PATCH] target: Fix Fortify_panic kernel exception

2018-04-12 Thread Bryant G. Ly
[ 496.212783] [ cut here ] [ 496.212784] kernel BUG at /build/linux-hwe-edge-ojNirv/linux-hwe-edge-4.15.0/lib/string.c:1052! [ 496.212789] Oops: Exception in kernel mode, sig: 5 [#1] [ 496.212791] LE SMP NR_CPUS=2048 NUMA pSeries [ 496.212795] Modules linked in:

Re: [PATCH v2] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Alexandru Moise
On Thu, Apr 12, 2018 at 01:11:11PM -0600, Bart Van Assche wrote: > Several block drivers call alloc_disk() followed by put_disk() if > something fails before device_add_disk() is called without calling > blk_cleanup_queue(). Make sure that also for this scenario a request > queue is dissociated

[PATCH v2] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Bart Van Assche
Several block drivers call alloc_disk() followed by put_disk() if something fails before device_add_disk() is called without calling blk_cleanup_queue(). Make sure that also for this scenario a request queue is dissociated from the cgroup controller. This patch avoids that loading the parport_pc,

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread t...@kernel.org
Hello, On Thu, Apr 12, 2018 at 06:56:26PM +, Bart Van Assche wrote: > On Thu, 2018-04-12 at 11:11 -0700, t...@kernel.org wrote: > > * Right now, blk_queue_enter/exit() doesn't have any annotations. > > IOW, there's no way for paths which need enter locked to actually > > assert that. If

Re: usercopy whitelist woe in scsi_sense_cache

2018-04-12 Thread Oleksandr Natalenko
Hi. On čtvrtek 12. dubna 2018 20:44:37 CEST Kees Cook wrote: > My first bisect attempt gave me commit 5448aca41cd5 ("null_blk: wire > up timeouts"), which seems insane given that null_blk isn't even built > in the .config. I managed to get the testing automated now for a "git > bisect run ...",

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Bart Van Assche
On Thu, 2018-04-12 at 11:11 -0700, t...@kernel.org wrote: > * Right now, blk_queue_enter/exit() doesn't have any annotations. > IOW, there's no way for paths which need enter locked to actually > assert that. If we're gonna protect more things with queue > enter/exit, it gotta be annotated.

[PATCH] loop: fix aio/dio end of request clearing

2018-04-12 Thread Jens Axboe
If we read more than the user asked for, we zero fill the last part. But the current code assumes that the request has just one bio, and since that's not guaranteed to be true, we can run into a situation where we attempt to advance a bio by a bigger amount than its size. Handle the zero filling
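The bug described above is an advance past the end of a single bio. A minimal userspace model of the corrected behavior (segments standing in for bios; all names here are illustrative, not the loop driver's actual code) clamps the per-segment advance so a short read spanning several segments zero-fills only the unread tail of each one:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of the fix: zero-fill the tail of a request that
 * spans several segments ("bios"), advancing each segment by at most
 * its own length instead of assuming one segment holds everything. */
struct seg { unsigned char *buf; size_t len; };

/* Zero everything past byte offset `done` (bytes actually read)
 * to the end of the request. Returns the number of bytes zeroed. */
static size_t zero_fill_tail(struct seg *segs, size_t nsegs, size_t done)
{
    size_t zeroed = 0;
    for (size_t i = 0; i < nsegs; i++) {
        /* clamp the advance to this segment's size */
        size_t skip = done < segs[i].len ? done : segs[i].len;
        done -= skip;
        memset(segs[i].buf + skip, 0, segs[i].len - skip);
        zeroed += segs[i].len - skip;
    }
    return zeroed;
}
```

With two 4-byte segments and 6 bytes read, only the last 2 bytes of the second segment are cleared; the naive single-bio assumption would instead try to advance the first segment by the full remainder.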

Re: usercopy whitelist woe in scsi_sense_cache

2018-04-12 Thread Kees Cook
On Wed, Apr 11, 2018 at 5:03 PM, Kees Cook wrote: > On Wed, Apr 11, 2018 at 3:47 PM, Kees Cook wrote: >> On Tue, Apr 10, 2018 at 8:13 PM, Kees Cook wrote: >>> I'll see about booting with my own kernels, etc, and try to narrow

[PATCH v2] block: do not use interruptible wait anywhere

2018-04-12 Thread Alan Jenkins
When blk_queue_enter() waits for a queue to unfreeze, or unset the PREEMPT_ONLY flag, do not allow it to be interrupted by a signal. The PREEMPT_ONLY flag was introduced later in commit 3a0a529971ec ("block, scsi: Make SCSI quiesce and resume work reliably"). Note the SCSI device is resumed

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread t...@kernel.org
Hello, On Thu, Apr 12, 2018 at 04:29:09PM +, Bart Van Assche wrote: > Any code that submits a bio or request needs blk_queue_enter() / > blk_queue_exit() anyway. Please have a look at the following commit - you will > see that that commit reduces the number of blk_queue_enter() / >

Re: [PATCH] block: do not use interruptible wait anywhere

2018-04-12 Thread Bart Van Assche
On Thu, 2018-04-12 at 17:23 +0100, Alan Jenkins wrote: > @@ -947,14 +946,12 @@ int blk_queue_enter(struct request_queue *q, > blk_mq_req_flags_t flags) >*/ > smp_rmb(); > > - ret = wait_event_interruptible(q->mq_freeze_wq, > +

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Bart Van Assche
On Thu, 2018-04-12 at 09:12 -0700, t...@kernel.org wrote: > > Did you perhaps mean blkg_lookup_create()? That function has one caller, > > namely blkcg_bio_issue_check(). The only caller of that function is > > generic_make_request_checks(). A patch was posted on the linux-block mailing > > list

[PATCH] block: do not use interruptible wait anywhere

2018-04-12 Thread Alan Jenkins
When blk_queue_enter() waits for a queue to unfreeze, or unset the PREEMPT_ONLY flag, do not allow it to be interrupted by a signal. The PREEMPT_ONLY flag was introduced later in commit 3a0a529971ec ("block, scsi: Make SCSI quiesce and resume work reliably"). Note the SCSI device is resumed

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread t...@kernel.org
Hello, On Thu, Apr 12, 2018 at 04:03:52PM +, Bart Van Assche wrote: > On Thu, 2018-04-12 at 08:37 -0700, Tejun Heo wrote: > > On Thu, Apr 12, 2018 at 08:09:17AM -0600, Bart Van Assche wrote: > > > I have retested hotunplugging by rerunning the srp-test software. It > > > seems like you

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Bart Van Assche
On Thu, 2018-04-12 at 08:37 -0700, Tejun Heo wrote: > On Thu, Apr 12, 2018 at 08:09:17AM -0600, Bart Van Assche wrote: > > I have retested hotunplugging by rerunning the srp-test software. It > > seems like you overlooked that this patch does not remove the > > blkcg_exit_queue() call from

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Tejun Heo
Hello, Bart. On Thu, Apr 12, 2018 at 08:09:17AM -0600, Bart Van Assche wrote: > I have retested hotunplugging by rerunning the srp-test software. It > seems like you overlooked that this patch does not remove the > blkcg_exit_queue() call from blk_cleanup_queue()? If a device is > hotunplugged it

Re: [PATCH 4/4] bcache: fix I/O significant decline while backend devices registering

2018-04-12 Thread Coly Li
On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > From: Tang Junhui > > I attached several backend devices in the same cache set, and produced lots > of dirty data by running small rand I/O writes in a long time, then I > continue run I/O in the others cached devices,

Re: [PATCH 3/4] bcache: notify allocator to prepare for GC

2018-04-12 Thread Coly Li
On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > From: Tang Junhui > > Since no new bucket can be allocated during GC, and front side I/Os would > run out of all the buckets, so notify allocator to pack the free_inc queue > full of buckets before GC, then we could

Re: [PATCH 2/4] bcache: calculate the number of incremental GC nodes according to the total of btree nodes

2018-04-12 Thread Coly Li
On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > From: Tang Junhui > > This patch base on "[PATCH] bcache: finish incremental GC". > > Since incremental GC would stop 100ms when front side I/O comes, so when > there are many btree nodes, if GC only processes
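The rationale quoted here — size each GC increment relative to the total btree node count, rather than a fixed constant, so large btrees don't accumulate too many 100ms pauses — can be sketched as below. The divisor and floor are illustrative assumptions, not the patch's actual constants:

```c
#include <assert.h>

/* Sketch of sizing an incremental-GC batch from the total node count
 * (counted into gc_stats.nodes in patch 2). Constants are assumed,
 * chosen only to illustrate the proportional-with-a-floor shape. */
#define GC_BATCH_DIVISOR 100u  /* assumed: ~1% of nodes per increment */
#define GC_MIN_BATCH     100u  /* assumed floor so small btrees finish fast */

static unsigned int gc_nodes_per_increment(unsigned int total_nodes)
{
    unsigned int n = total_nodes / GC_BATCH_DIVISOR;
    return n < GC_MIN_BATCH ? GC_MIN_BATCH : n;
}
```

A btree of 100000 nodes then needs at most ~100 increments (and ~100 pauses) regardless of size, whereas a constant batch would scale the number of pauses linearly with the btree.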

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Bart Van Assche
On 04/12/18 07:51, Tejun Heo wrote: On Wed, Apr 11, 2018 at 07:58:52PM -0600, Bart Van Assche wrote: Several block drivers call alloc_disk() followed by put_disk() if something fails before device_add_disk() is called without calling blk_cleanup_queue(). Make sure that also for this scenario a

Re: [PATCH 1/4] bcache: finish incremental GC

2018-04-12 Thread Coly Li
On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > From: Tang Junhui > > In GC thread, we record the latest GC key in gc_done, which is expected > to be used for incremental GC, but in currently code, we didn't realize > it. When GC runs, front side IO would be blocked

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread t...@kernel.org
On Thu, Apr 12, 2018 at 06:58:21AM -0700, t...@kernel.org wrote: > Cool, can we just factor out the queue lock from those drivers? It's > just really messy to be switching locks like we do in the cleanup > path. So, looking at a couple drivers, it looks like all we'd need is a struct which

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread t...@kernel.org
Hello, On Thu, Apr 12, 2018 at 03:56:51PM +0200, h...@lst.de wrote: > On Thu, Apr 12, 2018 at 06:48:12AM -0700, t...@kernel.org wrote: > > > Which sounds like a very good reason not to use a driver controller > > > lock for internals like blkcq. > > > > > > In fact splitting the lock used for

Re: [PATCH] blk-mq: fix race between complete and BLK_EH_RESET_TIMER

2018-04-12 Thread Tejun Heo
On Thu, Apr 12, 2018 at 07:05:13AM +0800, Ming Lei wrote: > > Not really because aborted_gstate right now doesn't have any memory > > barrier around it, so nothing ensures blk_add_timer() actually appears > > before. We can either add the matching barriers in aborted_gstate > > update and when

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread h...@lst.de
On Thu, Apr 12, 2018 at 06:48:12AM -0700, t...@kernel.org wrote: > > Which sounds like a very good reason not to use a driver controller > > lock for internals like blkcq. > > > > In fact splitting the lock used for synchronizing access to queue > > fields from the driver controller lock used to

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Tejun Heo
On Wed, Apr 11, 2018 at 07:58:52PM -0600, Bart Van Assche wrote: > Several block drivers call alloc_disk() followed by put_disk() if > something fails before device_add_disk() is called without calling > blk_cleanup_queue(). Make sure that also for this scenario a request > queue is dissociated

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread t...@kernel.org
Hello, On Thu, Apr 12, 2018 at 03:14:40PM +0200, h...@lst.de wrote: > > At least the SCSI ULP drivers drop the last reference to a disk after > > the blk_cleanup_queue() call. As explained in the description of commit > > a063057d7c73, removing a request queue from blkcg must happen before > >

Re: [PATCH v4] blk-mq: Fix race conditions in request timeout handling

2018-04-12 Thread t...@kernel.org
Hello, Israel. On Thu, Apr 12, 2018 at 11:59:11AM +0300, Israel Rukshin wrote: > On 4/12/2018 12:31 AM, t...@kernel.org wrote: > >Hey, again. > > > >On Wed, Apr 11, 2018 at 10:07:33AM -0700, t...@kernel.org wrote: > >>Hello, Israel. > >> > >>On Wed, Apr 11, 2018 at 07:16:14PM +0300, Israel

Re: [PATCH v5] blk-mq: Avoid that a completion can be ignored for BLK_EH_RESET_TIMER

2018-04-12 Thread Christoph Hellwig
Looks good: Reviewed-by: Christoph Hellwig In addition to all the arguments in the changelog the diffstat is a pretty clear indicator that a straight forward state machine is exactly what we want.

Re: [PATCH v5] blk-mq: Avoid that a completion can be ignored for BLK_EH_RESET_TIMER

2018-04-12 Thread Christoph Hellwig
On Wed, Apr 11, 2018 at 10:11:05AM +0800, Ming Lei wrote: > On Tue, Apr 10, 2018 at 03:01:57PM -0600, Bart Van Assche wrote: > > The blk-mq timeout handling code ignores completions that occur after > > blk_mq_check_expired() has been called and before blk_mq_rq_timed_out() > > has reset

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread h...@lst.de
On Thu, Apr 12, 2018 at 11:52:11AM +, Bart Van Assche wrote: > On Thu, 2018-04-12 at 07:34 +0200, Christoph Hellwig wrote: > > On Wed, Apr 11, 2018 at 07:58:52PM -0600, Bart Van Assche wrote: > > > Several block drivers call alloc_disk() followed by put_disk() if > > > something fails before

Re: [PATCH] bcache: simplify the calculation of the total amount of flash dirty data

2018-04-12 Thread Coly Li
On 2018/4/12 3:21 PM, tang.jun...@zte.com.cn wrote: > From: Tang Junhui > > Currently we calculate the total amount of flash only devices dirty data > by adding the dirty data of each flash only device under registering > locker. It is very inefficient. > > In this

Re: [PATCH v3] blk-mq: Avoid that submitting a bio concurrently with device removal triggers a crash

2018-04-12 Thread Bart Van Assche
On 04/12/18 00:27, Christoph Hellwig wrote: On Tue, Apr 10, 2018 at 05:02:40PM -0600, Bart Van Assche wrote: Because blkcg_exit_queue() is now called from inside blk_cleanup_queue() it is no longer safe to access cgroup information during or after the blk_cleanup_queue() call. Hence protect the

[PATCH V3] blk-mq: fix race between complete and BLK_EH_RESET_TIMER

2018-04-12 Thread Ming Lei
The normal request completion can be done before or during handling BLK_EH_RESET_TIMER, and this race may cause the request to never be completed since driver's .timeout() may always return BLK_EH_RESET_TIMER. This issue can't be fixed completely by driver, since the normal completion can be done
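The invariant this race fix is after — exactly one of the normal completion and the timeout path may claim the request, and a completion that arrives while the timer is being reset must not be lost — can be modeled with a single atomic state and compare-and-swap. This is a toy model of the invariant, not blk-mq's actual gstate/generation mechanics:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model: a request state that only one path may claim. */
enum rq_state { RQ_IN_FLIGHT, RQ_COMPLETED, RQ_TIMED_OUT };

/* Atomically move the request out of IN_FLIGHT; returns true only
 * for the one caller that wins the race. The loser observes the
 * winner's state and must back off instead of acting on the request. */
static bool rq_claim(atomic_int *state, int to)
{
    int expected = RQ_IN_FLIGHT;
    return atomic_compare_exchange_strong(state, &expected, to);
}
```

If the timeout handler only re-arms the timer after a failed claim (i.e. after observing RQ_COMPLETED), the completion can no longer be silently dropped even when the driver keeps returning BLK_EH_RESET_TIMER.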

Re: [PATCH] block: Ensure that a request queue is dissociated from the cgroup controller

2018-04-12 Thread Bart Van Assche
On Thu, 2018-04-12 at 07:34 +0200, Christoph Hellwig wrote: > On Wed, Apr 11, 2018 at 07:58:52PM -0600, Bart Van Assche wrote: > > Several block drivers call alloc_disk() followed by put_disk() if > > something fails before device_add_disk() is called without calling > > blk_cleanup_queue(). Make

[PATCH 2/2] bcache: add code comments for bset.c

2018-04-12 Thread Coly Li
This patch tries to add code comments in bset.c, to make some tricky code and designment to be more comprehensible. Most information of this patch comes from the discussion between Kent and I, he offers very informative details. If there is any mistake of the idea behind the code, no doubt that's

[PATCH 1/2] bcache: remove unncessary code in bch_btree_keys_init()

2018-04-12 Thread Coly Li
Function bch_btree_keys_init() initializes b->set[].size and b->set[].data to zero. As the code comments indicates, these code indeed is unncessary, because both struct btree_keys and struct bset_tree are nested embedded into struct btree, when struct btree is filled with 0 bits by kzalloc() in
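The reasoning in this patch — structs nested by value inside a kzalloc()'d struct are already all-zero, so explicitly assigning zero to their fields is dead code — has a direct userspace analogue with calloc(). The struct names below are illustrative stand-ins for bcache's btree/btree_keys/bset_tree nesting:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace analogue: a zeroing allocator (kzalloc in the kernel,
 * calloc here) leaves every embedded struct all-zero, so code like
 * `b->set[i].size = 0; b->set[i].data = NULL;` after allocation
 * is redundant. */
struct bset_tree_like { unsigned int size; void *data; };

struct btree_like {
    int level;
    struct bset_tree_like set[4]; /* embedded, like bset_tree in btree */
};

static struct btree_like *alloc_btree_like(void)
{
    return calloc(1, sizeof(struct btree_like)); /* all bytes zeroed */
}
```

(On every platform Linux runs on, all-bits-zero reads back as 0 / NULL for these field types, which is what makes the explicit re-initialization removable.)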

Re: 4.15.14 crash with iscsi target and dvd

2018-04-12 Thread Ming Lei
On Tue, Apr 10, 2018 at 08:45:25PM -0400, Wakko Warner wrote: > Ming Lei wrote: > > Sure, thanks for your sharing. > > > > Wakko, could you test the following patch and see if there is any > > difference? > > > > -- > > diff --git a/drivers/target/target_core_pscsi.c > >

Re: [PATCH V2] blk-mq: fix race between complete and BLK_EH_RESET_TIMER

2018-04-12 Thread Ming Lei
On Thu, Apr 12, 2018 at 10:38:56AM +0800, jianchao.wang wrote: > Hi Ming > > On 04/12/2018 07:38 AM, Ming Lei wrote: > > +* > > +* Cover complete vs BLK_EH_RESET_TIMER race in slow path with > > +* helding queue lock. > > */ > > hctx_lock(hctx, _idx); > > if

[PATCH v2] block: ratelimite pr_err on IO path

2018-04-12 Thread Jack Wang
From: Jack Wang This avoid soft lockup below: [ 2328.328429] Call Trace: [ 2328.328433] vprintk_emit+0x229/0x2e0 [ 2328.328436] ? t10_pi_type3_verify_ip+0x20/0x20 [ 2328.328437] printk+0x52/0x6e [ 2328.328439] t10_pi_verify+0x9e/0xf0 [ 2328.328441]
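The pr_err_ratelimited() call the patch switches to suppresses messages once a burst is exhausted within an interval, which is what breaks the printk storm in the soft-lockup trace above. A minimal userspace model of interval+burst ratelimiting (illustrative, not printk's actual `struct ratelimit_state` implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model in the spirit of pr_err_ratelimited(): allow up to
 * `burst` messages per `interval` time units, drop the rest. */
struct ratelimit {
    long interval;     /* window length, in arbitrary time units */
    int  burst;        /* messages allowed per window */
    long window_start; /* when the current window opened */
    int  printed;      /* messages emitted in the current window */
};

static bool ratelimit_allow(struct ratelimit *rl, long now)
{
    if (now - rl->window_start >= rl->interval) {
        rl->window_start = now;  /* open a new window */
        rl->printed = 0;
    }
    if (rl->printed < rl->burst) {
        rl->printed++;
        return true;             /* emit this message */
    }
    return false;                /* suppressed */
}
```

Martin's concern downthread still applies to any such scheme: suppressed messages are lost, so ratelimiting a data-integrity error trades log pressure against a record of potential data loss.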

Re: System hung in I/O when booting with sd card

2018-04-12 Thread Shawn Lin
Hi Bart, On 2018/4/12 9:54, Bart Van Assche wrote: On Thu, 2018-04-12 at 09:48 +0800, Shawn Lin wrote: I ran into 2 times that my system hung here when booting with a ext4 sd card. No sure how to reproduce it but it seems doesn't matter with the ext4 as I see it again with a vfat sd card, this

Re: [PATCH v4] blk-mq: Fix race conditions in request timeout handling

2018-04-12 Thread Israel Rukshin
On 4/12/2018 12:31 AM, t...@kernel.org wrote: Hey, again. On Wed, Apr 11, 2018 at 10:07:33AM -0700, t...@kernel.org wrote: Hello, Israel. On Wed, Apr 11, 2018 at 07:16:14PM +0300, Israel Rukshin wrote: Just noticed this one, this looks interesting to me as well. Israel, can you run your test

Re: [PATCH] block: ratelimite pr_err on IO path

2018-04-12 Thread Jinpu Wang
On Wed, Apr 11, 2018 at 7:07 PM, Elliott, Robert (Persistent Memory) wrote: >> -Original Message- >> From: linux-kernel-ow...@vger.kernel.org [mailto:linux-kernel- >> ow...@vger.kernel.org] On Behalf Of Jack Wang >> Sent: Wednesday, April 11, 2018 8:21 AM >> Subject:

Re: [PATCH] block/amiflop: Don't log an error message for an invalid ioctl

2018-04-12 Thread Geert Uytterhoeven
On Thu, Apr 12, 2018 at 3:23 AM, Finn Thain wrote: > Do as the swim3 driver does and just return -ENOTTY. > > Cc: Geert Uytterhoeven > Cc: linux-m...@lists.linux-m68k.org > Signed-off-by: Finn Thain Reviewed-by:

Re: [PATCH 0/4] bcache: incremental GC and dirty data init

2018-04-12 Thread tang . junhui
Hi Coly, > On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > > Hi maintainers and folks, > > > > Some patches of this patch set have been sent before, they are not merged > > yet, and I add two new patches to solve some issues I found while testing. > > since They are interdependent, so I

[PATCH] bcache: simplify the calculation of the total amount of flash dirty data

2018-04-12 Thread tang . junhui
From: Tang Junhui Currently we calculate the total amount of flash only devices dirty data by adding the dirty data of each flash only device under registering locker. It is very inefficient. In this patch, we add a member flash_dev_dirty_sectors in struct cache_set to

Re: [PATCH V2] blk-mq: fix race between complete and BLK_EH_RESET_TIMER

2018-04-12 Thread Ming Lei
Hi Jianchao, On Thu, Apr 12, 2018 at 10:38:56AM +0800, jianchao.wang wrote: > Hi Ming > > On 04/12/2018 07:38 AM, Ming Lei wrote: > > +* > > +* Cover complete vs BLK_EH_RESET_TIMER race in slow path with > > +* helding queue lock. > > */ > > hctx_lock(hctx, _idx); > > if

Re: [PATCH 0/4] bcache: incremental GC and dirty data init

2018-04-12 Thread Coly Li
On 2018/4/12 2:38 PM, tang.jun...@zte.com.cn wrote: > Hi maintainers and folks, > > Some patches of this patch set have been sent before, they are not merged > yet, and I add two new patches to solve some issues I found while testing. > since They are interdependent, so I make a patch set and

[PATCH 0/4] bcache: incremental GC and dirty data init

2018-04-12 Thread tang . junhui
Hi maintainers and folks, Some patches of this patch set have been sent before, they are not merged yet, and I add two new patches to solve some issues I found while testing. since They are interdependent, so I make a patch set and resend them. [PATCH 1/4] bcache: finish incremental GC [PATCH

[PATCH 3/4] bcache: notify allocator to prepare for GC

2018-04-12 Thread tang . junhui
From: Tang Junhui Since no new bucket can be allocated during GC, and front side I/Os would run out of all the buckets, so notify allocator to pack the free_inc queue full of buckets before GC, then we could have enough buckets for front side I/Os during GC period.
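The scheme described — since no buckets can be allocated during GC, pack the free_inc queue to capacity beforehand so front-side I/O can drain it while GC runs — reduces to topping a fixed-size queue up before the pause. A sketch with a plain counter standing in for bcache's FIFO of buckets (names assumed for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: top the free_inc queue up to capacity before GC starts,
 * so allocations during GC are served from the pre-filled queue. */
struct free_inc_like {
    size_t size;  /* capacity of the queue */
    size_t used;  /* buckets currently queued */
};

/* Returns how many buckets the allocator had to add to fill it. */
static size_t prefill_before_gc(struct free_inc_like *q)
{
    size_t need = q->size - q->used;
    q->used = q->size;  /* pack the queue full */
    return need;
}
```

The queue's capacity then bounds how much front-side write traffic GC can absorb before allocations stall, which is why the prefill happens before, not during, the GC pass.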

[PATCH 1/4] bcache: finish incremental GC

2018-04-12 Thread tang . junhui
From: Tang Junhui In GC thread, we record the latest GC key in gc_done, which is expected to be used for incremental GC, but in currently code, we didn't realize it. When GC runs, front side IO would be blocked until the GC over, it would be a long time if there is a lot
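Making GC actually incremental, as this patch describes, means remembering how far the last pass got (gc_done in bcache) and resuming from there after yielding to front-side I/O. A minimal model of that resume loop, with an index standing in for the recorded GC key (names assumed, not bcache's actual API):

```c
#include <assert.h>
#include <stddef.h>

/* Model of incremental GC: record progress in `done` (gc_done in
 * bcache) and resume from it, processing at most `batch` keys per
 * increment before yielding to front-side I/O. */
struct gc_state { size_t done; };  /* index of first unprocessed key */

/* Returns nonzero once the whole keyspace has been processed. */
static int gc_run_increment(struct gc_state *gc, size_t total, size_t batch)
{
    size_t end = gc->done + batch;
    gc->done = end < total ? end : total;
    return gc->done == total;
}
```

Without the resume (the "didn't realize it" the patch fixes), every pass would restart from key zero, so front-side I/O stays blocked for the full GC duration instead of only one increment at a time.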

[PATCH 4/4] bcache: fix I/O significant decline while backend devices registering

2018-04-12 Thread tang . junhui
From: Tang Junhui I attached several backend devices in the same cache set, and produced lots of dirty data by running small rand I/O writes in a long time, then I continue run I/O in the others cached devices, and stopped a cached device, after a mean while, I register

Re: [PATCH v3] blk-mq: Avoid that submitting a bio concurrently with device removal triggers a crash

2018-04-12 Thread Christoph Hellwig
On Tue, Apr 10, 2018 at 05:02:40PM -0600, Bart Van Assche wrote: > Because blkcg_exit_queue() is now called from inside blk_cleanup_queue() > it is no longer safe to access cgroup information during or after the > blk_cleanup_queue() call. Hence protect the generic_make_request_checks() > call