On Fri, Oct 19, 2018 at 10:53:49AM +0800, Ming Lei wrote:
> On Thu, Oct 18, 2018 at 05:22:19PM +0200, Christoph Hellwig wrote:
> > On Thu, Oct 18, 2018 at 08:11:23AM -0700, Matthew Wilcox wrote:
> > > On Thu, Oct 18, 2018 at 04:42:07PM +0200, Christoph Hellwig wrote:
> > > > This all seems quite
On 10/18/18 8:53 PM, Ming Lei wrote:
> On Thu, Oct 18, 2018 at 05:22:19PM +0200, Christoph Hellwig wrote:
>> On Thu, Oct 18, 2018 at 08:11:23AM -0700, Matthew Wilcox wrote:
>>> On Thu, Oct 18, 2018 at 04:42:07PM +0200, Christoph Hellwig wrote:
This all seems quite complicated.
I
On Thu, Oct 18, 2018 at 05:22:19PM +0200, Christoph Hellwig wrote:
> On Thu, Oct 18, 2018 at 08:11:23AM -0700, Matthew Wilcox wrote:
> > On Thu, Oct 18, 2018 at 04:42:07PM +0200, Christoph Hellwig wrote:
> > > This all seems quite complicated.
> > >
> > > I think the interface we'd want is more
On 10/18/18 8:06 PM, Ming Lei wrote:
> On Thu, Oct 18, 2018 at 07:52:59PM -0600, Jens Axboe wrote:
>> On 10/18/18 7:39 PM, Ming Lei wrote:
>>> On Thu, Oct 18, 2018 at 07:33:50PM -0600, Jens Axboe wrote:
On 10/18/18 7:28 PM, Ming Lei wrote:
> On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens
On Thu, Oct 18, 2018 at 07:52:59PM -0600, Jens Axboe wrote:
> On 10/18/18 7:39 PM, Ming Lei wrote:
> > On Thu, Oct 18, 2018 at 07:33:50PM -0600, Jens Axboe wrote:
> >> On 10/18/18 7:28 PM, Ming Lei wrote:
> >>> On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens Axboe wrote:
> On 10/18/18 7:18 AM,
On 10/18/18 7:39 PM, Ming Lei wrote:
> On Thu, Oct 18, 2018 at 07:33:50PM -0600, Jens Axboe wrote:
>> On 10/18/18 7:28 PM, Ming Lei wrote:
>>> On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens Axboe wrote:
On 10/18/18 7:18 AM, Ming Lei wrote:
> Now we only check if DMA IO buffer is aligned
On Thu, Oct 18, 2018 at 07:33:50PM -0600, Jens Axboe wrote:
> On 10/18/18 7:28 PM, Ming Lei wrote:
> > On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens Axboe wrote:
> >> On 10/18/18 7:18 AM, Ming Lei wrote:
> >>> Now we only check if DMA IO buffer is aligned to queue_dma_alignment()
> >>> for
On 10/18/18 7:28 PM, Ming Lei wrote:
> On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens Axboe wrote:
>> On 10/18/18 7:18 AM, Ming Lei wrote:
>>> Now we only check if DMA IO buffer is aligned to queue_dma_alignment()
>>> for pass-through request, and it isn't done for normal IO request.
>>>
>>> Given
On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens Axboe wrote:
> On 10/18/18 7:18 AM, Ming Lei wrote:
> > Now we only check if DMA IO buffer is aligned to queue_dma_alignment()
> > for pass-through request, and it isn't done for normal IO request.
> >
> > Given the check has to be done on each bvec,
We're only setting up the bounce bio sets if we happen
to need bouncing for regular HIGHMEM, not if we only need
it for ISA devices.
Reported-by: Ondrej Zary
Tested-by: Ondrej Zary
Signed-off-by: Jens Axboe
diff --git a/block/bounce.c b/block/bounce.c
index b30071ac4ec6..1356a2f4aae2 100644
On 10/18/18 7:15 AM, Christoph Hellwig wrote:
> Switch all remaining users of the legacy PCI DMA API to the
> generic DMA API.
Applied, thanks.
--
Jens Axboe
Am Mittwoch, 17. Oktober 2018, 08:21:51 CEST schrieb Christoph Hellwig:
> On Tue, Oct 16, 2018 at 08:26:31AM -0600, Jens Axboe wrote:
> > > Yes. Shall I send a patch with your suggestion or will you?
> >
> > Christoph should just fold it in, the bug only exists after his
> > change to it.
>
>
On Thu, 2018-10-18 at 21:18 +0800, Ming Lei wrote:
> Turns out q->dma_alignement should be stack limit because now bvec table
^^
dma_alignment?
> is immutalbe, the underlying queue's dma alignment has to be perceptible
^
>>> Your commit message doesn't really explain what you think the issue
>>> is. We have 4 write hint types, with a fifth being "none". Where
>>> are we exceeding these 5 hints?
>> WRITE_LIFE_EXTREME = RWH_WRITE_LIFE_EXTREME = 5,
>> will be ignored by queue_write_hint_{show,store}.
>Hmm, wonder
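The sizing issue under discussion can be sketched in plain C. The enum mirrors the uapi rw_hint values (0 through 5); the macro name is only illustrative, not necessarily what the patch uses:

```c
#include <assert.h>

/* The write-hint values run from 0 (not set) through 5 (EXTREME), so an
 * array indexed directly by hint needs six slots, not five. */
enum rw_hint {
	WRITE_LIFE_NOT_SET = 0,
	WRITE_LIFE_NONE,
	WRITE_LIFE_SHORT,
	WRITE_LIFE_MEDIUM,
	WRITE_LIFE_LONG,
	WRITE_LIFE_EXTREME,	/* = 5: the highest valid index */
};

/* Array size that accommodates every hint, including EXTREME. */
#define BLK_MAX_WRITE_HINTS (WRITE_LIFE_EXTREME + 1)
```

An array sized for only five entries would silently drop the EXTREME bucket in the debugfs show/store paths, which matches the symptom described above.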
On 10/18/18 9:30 AM, Avri Altman wrote:
>> -Original Message-
>> From: Jens Axboe
>> Sent: Thursday, October 18, 2018 5:38 PM
>> To: Avri Altman ; linux-block@vger.kernel.org
>> Cc: Avi Shchislowski ; Alex Lemberg
>>
>> Subject: Re: [PATCH] block: Make write_hints[] big enough
>>
>> On
On Thu, 2018-10-18 at 07:03 -0700, Matthew Wilcox wrote:
> On Thu, Oct 18, 2018 at 09:18:12PM +0800, Ming Lei wrote:
> > Filesystems may allocate io buffer from slab, and use this buffer to
> > submit bio. This way may break storage drivers if they have special
> > requirement on DMA alignment.
>
> -Original Message-
> From: Jens Axboe
> Sent: Thursday, October 18, 2018 5:38 PM
> To: Avri Altman ; linux-block@vger.kernel.org
> Cc: Avi Shchislowski ; Alex Lemberg
>
> Subject: Re: [PATCH] block: Make write_hints[] big enough
>
> On 10/17/18 10:51 AM, Avri Altman wrote:
> > Just
On Thu, Oct 18, 2018 at 08:11:23AM -0700, Matthew Wilcox wrote:
> On Thu, Oct 18, 2018 at 04:42:07PM +0200, Christoph Hellwig wrote:
> > This all seems quite complicated.
> >
> > I think the interface we'd want is more one that has a little
> > cache of a single page in the queue, and a little
On Thu, Oct 18, 2018 at 08:06:05AM -0700, Matthew Wilcox wrote:
> Can you name one that does require 512-byte alignment, preferably still
> in use? Or even >4-byte alignment. I just checked AHCI and that requires
> only 2-byte alignment.
Xen-blkfront, rsxx, various SD/MMC card readers for
On Thu, Oct 18, 2018 at 04:42:07PM +0200, Christoph Hellwig wrote:
> This all seems quite complicated.
>
> I think the interface we'd want is more one that has a little
> cache of a single page in the queue, and a little bitmap which
> sub-page size blocks of it are used.
>
> Something like
On Thu, Oct 18, 2018 at 04:05:51PM +0200, Christoph Hellwig wrote:
> On Thu, Oct 18, 2018 at 07:03:42AM -0700, Matthew Wilcox wrote:
> > Before we go down this road, could we have a discussion about what
> > hardware actually requires this? Storage has this weird assumption that
> > I/Os must be
On 10/18/18 8:43 AM, Christoph Hellwig wrote:
> On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens Axboe wrote:
>> On 10/18/18 7:18 AM, Ming Lei wrote:
>>> Now we only check if DMA IO buffer is aligned to queue_dma_alignment()
>>> for pass-through request, and it isn't done for normal IO request.
>>>
On Thu, Oct 18, 2018 at 08:27:28AM -0600, Jens Axboe wrote:
> On 10/18/18 7:18 AM, Ming Lei wrote:
> > Now we only check if DMA IO buffer is aligned to queue_dma_alignment()
> > for pass-through request, and it isn't done for normal IO request.
> >
> > Given the check has to be done on each bvec,
This all seems quite complicated.
I think the interface we'd want is more one that has a little
cache of a single page in the queue, and a little bitmap which
sub-page size blocks of it are used.
Something like (pseudo code minus locking):
void *blk_alloc_sector_buffer(struct block_device
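The interface sketched above can be fleshed out as a small userspace model: one cached page per queue plus a bitmap recording which 512-byte sub-blocks are handed out. The struct and function names here are illustrative, not the kernel API, and locking is omitted just as in the original pseudo code:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SZ 4096
#define BLK_SZ  512
#define NBLKS   (PAGE_SZ / BLK_SZ)	/* 8 blocks, one bit each */

struct sector_buf_cache {
	_Alignas(PAGE_SZ) unsigned char page[PAGE_SZ];
	uint8_t used;			/* bit i set => block i handed out */
};

/* Hand out the lowest free 512-byte block of the cached page. */
static void *sector_buf_alloc(struct sector_buf_cache *c)
{
	for (int i = 0; i < NBLKS; i++) {
		if (!(c->used & (1u << i))) {
			c->used |= 1u << i;
			return c->page + i * BLK_SZ;
		}
	}
	return 0;	/* cache exhausted; caller would fall back */
}

/* Return a block to the cache by clearing its bit. */
static void sector_buf_free(struct sector_buf_cache *c, void *p)
{
	int i = (int)(((unsigned char *)p - c->page) / BLK_SZ);
	c->used &= (uint8_t)~(1u << i);
}
```

In a real implementation the cache would hang off the request_queue and the bitmap update would sit under a spinlock; the page alignment guarantees every sub-block is 512-byte aligned.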
On 10/17/18 10:51 AM, Avri Altman wrote:
> Just stumbled over this.
> Looks like the write hints array in the request queue is not allotted
> with the required space to accommodate all the write hint types.
>
> fixes: f793dfd3f39a (blk-mq: expose write hints through debugfs)
Your commit message
On 10/18/18 1:21 AM, Jan Kara wrote:
> On Wed 17-10-18 10:29:22, Jens Axboe wrote:
>> On 10/17/18 4:05 AM, Jan Kara wrote:
>>> On Tue 16-10-18 11:35:59, Jens Axboe wrote:
On 10/15/18 1:44 PM, Paolo Valente wrote:
> Here are some old results with a very simple configuration:
>
On Thu, Oct 18, 2018 at 09:18:15PM +0800, Ming Lei wrote:
> This patch converts .dma_alignment into stacked limit, so the stack
> driver may get updated with underlying dma alignment, and allocate
> IO buffer as queue DMA aligned.
>
> Cc: Vitaly Kuznetsov
> Cc: Dave Chinner
> Cc: Linux FS Devel
On Thu, Oct 18, 2018 at 09:18:14PM +0800, Ming Lei wrote:
> Turns out q->dma_alignment should be a stacked limit because now the bvec table
> is immutable, the underlying queue's dma alignment has to be perceptible
> by stack driver, so IO buffer can be allocated as dma aligned before
> adding to bio.
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 42a46744c11b..d2dbd508cb6d 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -174,6 +174,8 @@ static struct bio *blk_bio_segment_split(struct
> request_queue *q,
> const unsigned max_sectors = get_max_io_size(q,
On 10/18/18 7:18 AM, Ming Lei wrote:
> Now we only check if DMA IO buffer is aligned to queue_dma_alignment()
> for pass-through request, and it isn't done for normal IO request.
>
> Given the check has to be done on each bvec, it isn't efficient to add the
> check in
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn                                          SUSE Labs
jthumsh...@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG
Looks good,
Reviewed-by: Johannes Thumshirn
On Thu, Oct 18, 2018 at 07:03:42AM -0700, Matthew Wilcox wrote:
> Before we go down this road, could we have a discussion about what
> hardware actually requires this? Storage has this weird assumption that
> I/Os must be at least 512 byte aligned in memory, and I don't know where
> this idea
On Thu, Oct 18, 2018 at 09:18:12PM +0800, Ming Lei wrote:
> Hi,
>
> Filesystems may allocate io buffer from slab, and use this buffer to
> submit bio. This way may break storage drivers if they have special
> requirement on DMA alignment.
Before we go down this road, could we have a discussion
Looks good,
Reviewed-by: Johannes Thumshirn
On 10/12/18 1:53 AM, Ming Lei wrote:
> blk_queue_split() does respect this limit via bio splitting, so no
> need to do that in blkdev_issue_discard(), then we can align to
> normal bio submit(bio_add_page() & submit_bio()).
>
> More importantly, this patch fixes one issue introduced in
This patch converts .dma_alignment into a stacked limit, so the stacking
driver may get updated with the underlying dma alignment and allocate the
IO buffer queue-DMA-aligned.
Cc: Vitaly Kuznetsov
Cc: Dave Chinner
Cc: Linux FS Devel
Cc: Darrick J. Wong
Cc: x...@vger.kernel.org
Cc: Dave Chinner
Cc:
Turns out q->dma_alignment should be a stacked limit: because the bvec table
is now immutable, the underlying queue's dma alignment has to be perceptible
by the stacking driver, so the IO buffer can be allocated DMA-aligned before
being added to the bio.
So this patch moves .dma_alignment into q->limits and prepares for
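A stacked limit of this kind would be merged roughly as follows. Since the value is kept as a mask (alignment - 1), OR-ing the top and bottom masks yields a mask at least as strict as both, and for power-of-two alignments this equals taking the larger one. The struct here is a simplified stand-in for the kernel's queue_limits, and the helper name is illustrative:

```c
#include <assert.h>

struct qlimits {
	unsigned int dma_alignment;	/* mask: alignment - 1 */
};

/* Merge the bottom device's alignment mask into the top (stacking)
 * device, mirroring the max()-style merging blk_stack_limits() does
 * for other limits. */
static void stack_dma_alignment(struct qlimits *top,
				const struct qlimits *bottom)
{
	top->dma_alignment |= bottom->dma_alignment;
}
```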
Hi,
Filesystems may allocate an IO buffer from slab and use this buffer to
submit a bio. This may break storage drivers if they have special
requirements on DMA alignment.
The patch 1 adds one warning if the io buffer isn't aligned to DMA
alignment.
The 2nd & 3rd patches make DMA alignment as
Now we only check whether the DMA IO buffer is aligned to queue_dma_alignment()
for pass-through requests; it isn't done for normal IO requests.
Given that the check has to be done on each bvec, it isn't efficient to add the
check in generic_make_request_checks().
This patch adds one WARN in
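The per-segment check being described can be sketched like this. In the kernel, queue_dma_alignment() returns the limit as a mask (alignment - 1), and a segment passes when neither its offset nor its length has bits below that mask set; this helper mirrors that convention (the function name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* A buffer segment respects the DMA alignment when both its offset and
 * its length are multiples of the alignment, i.e. no low bits covered
 * by the mask are set in either. */
static bool dma_aligned(unsigned int offset, unsigned int len,
			unsigned int dma_align_mask)
{
	return ((offset | len) & dma_align_mask) == 0;
}
```

Because this runs per bvec, doing it unconditionally in generic_make_request_checks() would add a loop over every segment of every bio, which is why the patch only warns.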
XFS may use kmalloc() to allocate an IO buffer; this may not respect the
request queue's DMA alignment limit and can cause data corruption.
This patch uses the introduced block layer helpers to allocate this
kind of IO buffer, and makes sure that DMA alignment is respected.
Cc: Vitaly Kuznetsov
Cc:
One big issue is that a buffer allocated from slab has to respect
the queue's DMA alignment limit.
This patch supports creating one kmem_cache for allocations smaller than
PAGE_SIZE, and makes sure that the allocation is aligned with the queue's
DMA alignment.
For >= PAGE_SIZE allocation, it should be
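A userspace stand-in for the helper described above: in the kernel the patch creates a kmem_cache with the queue's DMA alignment, while here C11 aligned_alloc() plays that role. The mask convention (alignment - 1) follows queue_dma_alignment(); the function name is illustrative, not the proposed API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Allocate a buffer honouring the queue's DMA alignment mask.  Kernel
 * code would use a kmem_cache created with this alignment instead. */
static void *blk_dma_alloc(size_t size, unsigned int dma_align_mask)
{
	size_t align = (size_t)dma_align_mask + 1;
	/* aligned_alloc() requires size to be a multiple of the alignment */
	size_t rounded = (size + align - 1) & ~(align - 1);
	return aligned_alloc(align, rounded);
}
```

For sub-PAGE_SIZE sizes this is where a dedicated kmem_cache pays off: kmalloc() guarantees only the slab's natural alignment, which may be smaller than the queue's DMA requirement.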
The PCI DMA API is deprecated, switch to the generic DMA API instead.
Signed-off-by: Christoph Hellwig
---
drivers/block/umem.c | 38 ++
1 file changed, 18 insertions(+), 20 deletions(-)
diff --git a/drivers/block/umem.c b/drivers/block/umem.c
index
The PCI DMA API is deprecated, switch to the generic DMA API instead.
Signed-off-by: Christoph Hellwig
---
drivers/block/sx8.c | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
index
This code has effectively been commented out since the first commit,
so remove it.
Signed-off-by: Christoph Hellwig
---
drivers/block/sx8.c | 29 +
1 file changed, 5 insertions(+), 24 deletions(-)
diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
index
The mtip32xx used an odd mix of the old PCI and the generic DMA API,
so switch it over to the generic API entirely.
Note that this also removes a weird fallback to just a 32-bit coherent
dma mask if the 64-bit dma mask doesn't work, as that can't even happen.
Signed-off-by: Christoph Hellwig
The PCI DMA API is deprecated, switch to the generic DMA API instead.
Also make use of the dma_set_mask_and_coherent helper to easily set
the streaming and coherent DMA masks together.
Signed-off-by: Christoph Hellwig
---
drivers/block/skd_main.c | 63
1
The PCI DMA API is deprecated, switch to the generic DMA API instead.
Signed-off-by: Christoph Hellwig
---
drivers/block/rsxx/core.c | 2 +-
drivers/block/rsxx/dma.c | 52 +++
2 files changed, 27 insertions(+), 27 deletions(-)
diff --git
Switch all remaining users of the legacy PCI DMA API to the
generic DMA API.
Add regression test for patch "block/loop: Use global lock for ioctl()
operation." where we can oops while traversing list of loop devices
backing newly created device.
Signed-off-by: Jan Kara
---
src/Makefile | 3 ++-
src/loop_change_fd.c | 48 +
Hello,
these two patches create two new tests for blktests as regression tests
for my recently posted loopback device fixes. More details in individual
patches.
Honza
Add test for setting partscan flag.
Signed-off-by: Jan Kara
---
src/Makefile | 3 ++-
src/loop_set_status_partscan.c | 45 ++
tests/loop/006 | 33 +++
tests/loop/006.out | 2 ++
4
On Wed 17-10-18 10:29:22, Jens Axboe wrote:
> On 10/17/18 4:05 AM, Jan Kara wrote:
> > On Tue 16-10-18 11:35:59, Jens Axboe wrote:
> >> On 10/15/18 1:44 PM, Paolo Valente wrote:
> >>> Here are some old results with a very simple configuration:
> >>>