On Tue, Mar 22, 2022 at 04:16:07PM -0400, Mike Snitzer wrote:
> I did initially think it worthwhile to push the use of
> bio_alloc_percpu_cache() down to bio_alloc_bioset() rather than
> bio_alloc_clone() -- but I started slower with a more targeted change
> for DM's needs.
Note that the nvme
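For context, a rough sketch of the clone-path hook being discussed (simplified, not the actual kernel code; dm_alloc_clone_bio() is a made-up name, and the exact signature of bio_alloc_percpu_cache(), the helper factored out of bio_alloc_kiocb(), is an assumption):

#include <linux/bio.h>

/*
 * Simplified illustration only: consult the bioset's per-cpu cache from
 * the clone path when one exists and the source bio is polled, otherwise
 * fall back to the normal mempool-backed allocation.  Pushing this check
 * into bio_alloc_bioset() itself would make every bioset allocation
 * eligible, which is the broader change alluded to above.
 */
static struct bio *dm_alloc_clone_bio(struct block_device *bdev,
                                      struct bio *bio_src, gfp_t gfp,
                                      struct bio_set *bs)
{
        if (bs->cache && (bio_src->bi_opf & REQ_POLLED))
                return bio_alloc_percpu_cache(bdev, 0, bio_src->bi_opf,
                                              gfp, bs);
        return bio_alloc_bioset(bdev, 0, bio_src->bi_opf, gfp, bs);
}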
On Wed, Mar 23, 2022 at 06:38:22AM +0900, Ryusuke Konishi wrote:
> This looks like it happens because the GFP_KERNEL mask was removed
> along with mpage_alloc().
>
> The default value of the gfp flag is set to GFP_HIGHUSER_MOVABLE by
> inode_init_always().
> So, __GFP_HIGHMEM hits the gfp
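A minimal sketch of the masking that mpage_alloc() used to provide (the helper name is made up; the point is only that and-ing the mapping's gfp mask with GFP_KERNEL strips __GFP_HIGHMEM and __GFP_MOVABLE again):

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * inode_init_always() defaults the mapping gfp mask to
 * GFP_HIGHUSER_MOVABLE; constraining it with GFP_KERNEL drops
 * __GFP_HIGHMEM and __GFP_MOVABLE so metadata buffer allocations end up
 * in directly addressable, non-movable memory.
 */
static gfp_t metadata_gfp_mask(struct inode *inode)
{
        return mapping_gfp_constraint(inode->i_mapping, GFP_KERNEL);
}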
On Wed, Mar 23, 2022 at 07:42:48AM +0100, Christoph Hellwig wrote:
> On Wed, Mar 23, 2022 at 06:38:22AM +0900, Ryusuke Konishi wrote:
> > This looks like it happens because the GFP_KERNEL mask was removed
> > along with mpage_alloc().
> >
>
> > The default value of the gfp flag is set to
On 3/22/2022 10:45 PM, Christoph Hellwig wrote:
> On Tue, Mar 22, 2022 at 11:05:09PM +, Jane Chu wrote:
>>> This DAX_RECOVERY doesn't actually seem to be used anywhere here or
>>> in the subsequent patches. Did I miss something?
>>
>> dax_iomap_iter() uses the flag in the same patch,
>> +
Hi Linus,
These changes build on Jens' for-5.18 block tree because of various
changes that impacted DM and DM's need for bio_start_io_acct_time().
The following changes since commit bcd2be763252f3a4d5fc4d6008d4d96c601ee74b:

  block/bfq_wf2q: correct weight to ioprio (2022-02-16 20:09:14 -0700)
Add REQ_ALLOC_CACHE and set it in the %opf passed to bio_alloc_bioset to
inform bio_alloc_bioset (and any stacked block drivers) that the bio should
be allocated from the respective bioset's per-cpu alloc cache if possible.
This decouples access control to the alloc cache (via REQ_ALLOC_CACHE)
from actual
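From the caller's side, opting in would look roughly like this (the wrapper name is invented; the point is only that REQ_ALLOC_CACHE is OR-ed into the opf and acts as a hint that biosets without a cache simply ignore):

#include <linux/bio.h>

/*
 * Caller-side illustration: request allocation from the bioset's per-cpu
 * cache by setting REQ_ALLOC_CACHE in the opf.  Biosets initialized
 * without a cache, or stacked drivers that cannot honor it, fall back to
 * the normal allocation path.
 */
static struct bio *alloc_cached_bio(struct block_device *bdev,
                                    unsigned short nr_vecs,
                                    unsigned int opf, struct bio_set *bs)
{
        return bio_alloc_bioset(bdev, nr_vecs, opf | REQ_ALLOC_CACHE,
                                GFP_NOIO, bs);
}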
Hi Jens,
I ran with your suggestion and DM now sees a ~7% improvement in hipri
bio polling with io_uring (using dm-linear on null_blk, IOPS went from
900K to 966K).
Christoph,
I tried to address your review of the previous set. Patches 1 and 2 can
obviously be folded, but I left them split out for
A bioset's percpu cache may have broader utility in the future, but for
now keep it tightly coupled to QUEUE_FLAG_POLL.
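A minimal sketch of that coupling (simplified, not the actual DM code; the helper name is invented): only request BIOSET_PERCPU_CACHE from bioset_init() when the underlying queue advertises QUEUE_FLAG_POLL.

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Only ask bioset_init() for a per-cpu cache when the request queue
 * supports polling, since only REQ_POLLED bios may use the cache for now.
 */
static int init_clone_bioset(struct bio_set *bs, struct request_queue *q)
{
        int flags = 0;

        if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
                flags |= BIOSET_PERCPU_CACHE;

        return bioset_init(bs, BIO_POOL_SIZE, 0, flags);
}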
Signed-off-by: Mike Snitzer
---
 drivers/md/dm-table.c | 11 ---
 drivers/md/dm.c       |  6 +++---
 drivers/md/dm.h       |  4 ++--
 3 files changed, 13
These changes allow DM core to make full use of BIOSET_PERCPU_CACHE for
REQ_POLLED bios:

- Factor out bio_alloc_percpu_cache() from bio_alloc_kiocb() to allow
  use by bio_alloc_clone() too.
- Update bioset_init_from_src() to set BIOSET_PERCPU_CACHE if
  bio_src->cache is not NULL (sketched below).
- Move
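The bioset_init_from_src() change mentioned above boils down to carrying one more flag across; roughly (paraphrased, not the exact upstream diff):

#include <linux/bio.h>

/*
 * bioset_init_from_src() already translates the source bioset's
 * properties into bioset_init() flags, so propagating the per-cpu cache
 * is one more check: a DM device stacked on a cache-enabled bioset keeps
 * that capability.
 */
int bioset_init_from_src(struct bio_set *bs, struct bio_set *bio_src)
{
        int flags = 0;

        if (bio_src->bvec_pool.min_nr)
                flags |= BIOSET_NEED_BVECS;
        if (bio_src->rescue_workqueue)
                flags |= BIOSET_NEED_RESCUER;
        if (bio_src->cache)
                flags |= BIOSET_PERCPU_CACHE;

        return bioset_init(bs, bio_src->bio_pool.min_nr, bio_src->front_pad,
                           flags);
}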
Also change dm_io_complete() to use bio_clear_polled() so that it
properly clears all associated bio state (REQ_POLLED, BIO_PERCPU_CACHE,
etc).
This commit improves DM's hipri bio polling (REQ_POLLED) perf by ~7%.
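For reference, the helper being switched to looks roughly like this in the 5.18-era tree (paraphrased): clearing only REQ_POLLED would leave BIO_PERCPU_CACHE set on a bio that can no longer be polled for completion.

static inline void bio_clear_polled(struct bio *bio)
{
        /* can't support alloc cache if we turn off polling */
        bio_clear_flag(bio, BIO_PERCPU_CACHE);
        bio->bi_opf &= ~REQ_POLLED;
}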
Signed-off-by: Mike Snitzer
---
drivers/md/dm.c | 6 +++---
1 file changed, 3
On 3/23/22 1:45 PM, Mike Snitzer wrote:
> Hi Jens,
>
> I ran with your suggestion and DM now sees a ~7% improvement in hipri
> bio polling with io_uring (using dm-linear on null_blk, IOPS went from
> 900K to 966K).
>
> Christoph,
>
> I tried to address your review of the previous set. Patch 1
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git dm-5.19
head:   7f8ac95a6464b895e3d2b6175f7ee64a4c10fcfe
commit: 7f8ac95a6464b895e3d2b6175f7ee64a4c10fcfe [132/132] dm: push error handling down to __split_and_process_bio
config: x86_64-randconfig-a012-20220323
(https://download.01.org/0day-ci/archive/20220324/202203240638.crxqjfy5-...@intel.com/config)
compiler: s390-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
> dm: push error
> handling down to __split_and_process_bio
> config: s390-buildonly-randconfig-r005-20220323
> (https://download.01.org/0day-ci/archive/20220324/202203240638.crxqjfy5-...@intel.com/config)
> compiler: s390-linux-gcc (GCC) 11.2.0
> reproduce (this is a W=1 b
On Wed, Mar 09, 2022 at 01:03:26PM -0700, Uday Shankar wrote:
> When NVMe disks are added to the system, no uevent containing the
> DISK_RO property is generated. As a result, dm-* nodes backed by
> readonly NVMe disks will not have their RO state set properly. The
> result looks like this:
>
> $
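A hedged sketch of the kind of fix the report points toward (the helper name and hook point are assumptions; set_disk_ro(), disk_to_dev() and kobject_uevent_env() are existing kernel interfaces, and the DISK_RO property name comes from the report itself): emit a KOBJ_CHANGE uevent carrying DISK_RO when a disk's read-only state changes so udev rules and stacked dm-* devices can pick it up.

#include <linux/blkdev.h>
#include <linux/kobject.h>

/*
 * Sketch only: mark the gendisk read-only and announce the change via a
 * KOBJ_CHANGE uevent with a DISK_RO property.
 */
static void disk_announce_ro(struct gendisk *disk, bool read_only)
{
        char event[16];
        char *envp[] = { event, NULL };

        set_disk_ro(disk, read_only);
        snprintf(event, sizeof(event), "DISK_RO=%d", read_only ? 1 : 0);
        kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
}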