Jens Axboe writes:
> BTW, quick guess is that it doesn't work so well with fixed buffers, as that
> hasn't been tested. You could try and remove IOCTX_FLAG_FIXEDBUFS from the
> test program and see if that works.
That results in a NULL pointer dereference. I'll stick to block device
testing.
Hi, Jens,
Jens Axboe writes:
> You can also find the patches in my aio-poll branch:
>
> http://git.kernel.dk/cgit/linux-block/log/?h=aio-poll
>
> or by cloning:
>
> git://git.kernel.dk/linux-block aio-poll
I made an xfs file system on a partition of an nvme device. I created a
1 GB file on
Jens Axboe writes:
> On 12/6/18 12:27 PM, Jeff Moyer wrote:
>> Jens Axboe writes:
>>
>>> It's 192 bytes, fairly substantial. Most items don't need to be cleared,
>>> especially not upfront. Clear the ones we do need to clear, and leave
>>> the othe
Jens Axboe writes:
> Plugging is meant to optimize submission of a string of IOs, if we don't
> have more than 2 being submitted, don't bother setting up a plug.
Is there really that much overhead in blk_{start|finish}_plug?
-Jeff
>
> Reviewed-by: Christoph Hellwig
> Signed-off-by: Jens Axboe
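The threshold being discussed can be modeled in userspace; the sketch below is illustrative only (the stub `start_plug()`/`finish_plug()` and the `submit_batch()` helper stand in for `blk_start_plug()`/`blk_finish_plug()` and the real submission loop, and are not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of the heuristic above: only set up a plug when more
 * than two I/Os are batched in one submission. */
static int plugs_started;

static void start_plug(void)  { plugs_started++; }
static void finish_plug(void) { }

/* Returns true if a plug was used for this batch of nr I/Os. */
static bool submit_batch(int nr)
{
    bool plugged = nr > 2;   /* the threshold from the patch description */

    if (plugged)
        start_plug();
    /* ... submit each of the nr iocbs here ... */
    if (plugged)
        finish_plug();
    return plugged;
}
```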
Jens Axboe writes:
> It's 192 bytes, fairly substantial. Most items don't need to be cleared,
> especially not upfront. Clear the ones we do need to clear, and leave
> the other ones for setup when the iocb is prepared and submitted.
What performance gains do you see from this?
-Jeff
>
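The optimization can be pictured with a small userspace model; the struct and field names below are illustrative stand-ins, not the kernel's actual request layout:

```c
#include <assert.h>
#include <string.h>

/* Model of the change above: rather than memset()ing the whole ~192-byte
 * request up front, clear only the fields the submission path reads;
 * everything else is filled in when the iocb is prepared. */
struct req_model {
    void *private_data;   /* inspected before submission: must be NULL */
    unsigned int flags;   /* inspected before submission: must be 0 */
    char deferred[184];   /* initialized later, during iocb prep */
};

static void req_init_fast(struct req_model *r)
{
    /* touch only what must be valid up front */
    r->private_data = NULL;
    r->flags = 0;
}
```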
Jens Axboe writes:
> From: Christoph Hellwig
>
> This new method is used to explicitly poll for I/O completion for an
> iocb. It must be called for any iocb submitted asynchronously (that
> is with a non-null ki_complete) which has the IOCB_HIPRI flag set.
>
> The method is assisted by a new
Jens Axboe writes:
> From: Christoph Hellwig
>
> Just call blk_poll on the iocb cookie, we can derive the block device
> from the inode trivially.
Does this work for multi-device file systems?
-Jeff
>
> Reviewed-by: Johannes Thumshirn
> Signed-off-by: Christoph Hellwig
> Signed-off-by:
Jens Axboe writes:
>>> A limit of 4M is imposed as the largest buffer we currently support.
>>> There's nothing preventing us from going larger, but we need some cap,
>>> and 4M seemed like it would definitely be big enough.
>>
>> Doesn't this mean that a user can pin a bunch of memory?
Hi, Jens,
Jens Axboe writes:
> If we have fixed user buffers, we can map them into the kernel when we
> setup the io_context. That avoids the need to do get_user_pages() for
> each and every IO.
>
> To utilize this feature, the application must set both
> IOCTX_FLAG_USERIOCB, to provide iocb's
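The cost trade-off being described can be modeled in userspace; in this illustrative sketch a counter stands in for the page-pinning cost, and the struct and helper names are invented for the example:

```c
#include <assert.h>

/* Model of the trade-off above: with fixed buffers the pages are pinned
 * once at io_context setup instead of via get_user_pages() per I/O. */
static int pin_calls;

static void pin_user_buffer(void) { pin_calls++; }

struct ctx_model { int fixedbufs; int pinned; };

static void ctx_setup(struct ctx_model *c, int fixedbufs)
{
    c->fixedbufs = fixedbufs;
    c->pinned = 0;
    if (fixedbufs) {          /* map the buffers up front */
        pin_user_buffer();
        c->pinned = 1;
    }
}

static void submit_io(struct ctx_model *c)
{
    if (!c->pinned)           /* otherwise: get_user_pages() every I/O */
        pin_user_buffer();
}
```

With fixed buffers, the pin cost stays constant no matter how many I/Os are submitted.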
Jens Axboe writes:
> On 5/30/18 9:06 AM, Jeff Moyer wrote:
>> Hi, Jens,
>>
>> Jens Axboe writes:
>>
>>> On 5/30/18 2:49 AM, Christoph Hellwig wrote:
>>>> While I really don't want drivers to change the I/O schedule themselves
>>>>
Hi, Jens,
Jens Axboe writes:
> On 5/30/18 2:49 AM, Christoph Hellwig wrote:
>> While I really don't want drivers to change the I/O schedule themselves
>> we have a class of devices (zoned) that don't work at all with certain
>> I/O schedulers. The kernel not choosing something sane and
Hi, Jens,
Jens Axboe <ax...@kernel.dk> writes:
> On 5/25/18 3:14 PM, Jeff Moyer wrote:
>> Bryan Gurney reported I/O errors when using dm-zoned with a host-managed
>> SMR device. It turns out he was using CFQ, which is the default.
>> Unfortunately, as of v4.16, only t
only submitting 1 I/O per zone). Change our
defaults to provide a working configuration.
Reported-by: Bryan Gurney <bgur...@redhat.com>
Signed-off-by: Jeff Moyer <jmo...@redhat.com>
---
block/blk-sysfs.c | 24
1 file changed, 24 insertions(+)
diff --git a/bl
Bryan Gurney reported I/O errors when using dm-zoned with a host-managed
SMR device. It turns out he was using CFQ, which is the default.
Unfortunately, as of v4.16, only the deadline schedulers work well with
host-managed SMR devices. This series aatempts to switch the elevator
to deadline for
The next patch will add a caller that can't trigger module loads.
Also export this function for that caller.
Signed-off-by: Jeff Moyer <jmo...@redhat.com>
---
block/blk.h | 2 ++
block/elevator.c | 7 ---
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/block/blk.h b
adam.manzana...@wdc.com writes:
> From: Adam Manzanares <adam.manzana...@wdc.com>
>
> Now that kiocb has an ioprio field copy this over to the bio when it is
> created from the kiocb.
>
> Signed-off-by: Adam Manzanares <adam.manzana...@wdc.com>
Reviewed-by: Jeff Moy
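The change under review is a one-line propagation; a minimal userspace model of it, with illustrative stand-in structs rather than the kernel's kiocb and bio definitions, looks like this:

```c
#include <assert.h>

/* Model of the patch above: when a bio is built from a kiocb, the
 * per-I/O priority travels with it. */
struct kiocb_model { unsigned short ki_ioprio; };
struct bio_model   { unsigned short bi_ioprio; };

static void bio_from_kiocb(struct bio_model *bio,
                           const struct kiocb_model *iocb)
{
    bio->bi_ioprio = iocb->ki_ioprio;   /* the propagation being reviewed */
}
```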
adam.manzana...@wdc.com writes:
> From: Adam Manzanares
>
> Now that kiocb has an ioprio field copy this over to the bio when it is
> created from the kiocb.
>
> Signed-off-by: Adam Manzanares
> ---
> fs/block_dev.c | 1 +
> 1 file changed, 1
> Reviewed-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Jeff Moyer <jmo...@redhat.com>
> ---
> block/ioprio.c | 22 --
> include/linux/ioprio.h | 2 ++
> 2 files changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/block/ioprio.c b/bloc
adam.manzana...@wdc.com writes:
> From: Adam Manzanares <adam.manzana...@wdc.com>
>
> Now that kiocb has an ioprio field copy this over to the bio when it is
> created from the kiocb during direct IO.
>
> Signed-off-by: Adam Manzanares <adam.manzana...@wdc.com>
adam.manzana...@wdc.com writes:
> From: Adam Manzanares <adam.manzana...@wdc.com>
>
> Now that kiocb has an ioprio field copy this over to the bio when it is
> created from the kiocb.
>
> Signed-off-by: Adam Manzanares <adam.manzana...@wdc.com>
Reviewed-by:
>
> Signed-off-by: Adam Manzanares <adam.manzana...@wdc.com>
Reviewed-by: Jeff Moyer <jmo...@redhat.com>
> ---
> drivers/block/loop.c | 3 +++
> fs/aio.c | 16
> include/linux/fs.h | 3 +++
> includ
Hi, Adam,
adam.manzana...@wdc.com writes:
> From: Adam Manzanares
>
> This is the per-I/O equivalent of the ioprio_set system call.
> See the following link for performance implications on a SATA HDD:
> https://lkml.org/lkml/2016/12/6/495
>
> First patch factors
Hi, Adam,
adam.manzana...@wdc.com writes:
> From: Adam Manzanares
>
> This is the per-I/O equivalent of the ioprio_set system call.
>
> When IOCB_FLAG_IOPRIO is set on the iocb aio_flags field, then we set the
> newly added kiocb ki_ioprio field to the value in the iocb
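From userspace, the interface described here can be sketched roughly as below. This is a hypothetical example, not code from the patch series: the helper name is invented, and the priority encoding (class in the upper bits, shift of 13) mirrors what `ioprio_set()` and the kernel's `IOPRIO_PRIO_VALUE()` macro use:

```c
#include <linux/aio_abi.h>
#include <string.h>

#ifndef IOCB_FLAG_IOPRIO
#define IOCB_FLAG_IOPRIO (1 << 1)   /* pre-4.18 headers lack this */
#endif
#define IOPRIO_CLASS_SHIFT 13       /* class lives in the upper bits */

/* Prepare an aio read that carries a per-I/O priority: set
 * IOCB_FLAG_IOPRIO in aio_flags and the encoded value in aio_reqprio. */
static void prep_pread_with_prio(struct iocb *cb, int fd, void *buf,
                                 size_t len, int prio_class, int prio_level)
{
    memset(cb, 0, sizeof(*cb));
    cb->aio_lio_opcode = IOCB_CMD_PREAD;
    cb->aio_fildes = fd;
    cb->aio_buf = (__u64)(unsigned long)buf;
    cb->aio_nbytes = len;
    cb->aio_flags = IOCB_FLAG_IOPRIO;
    cb->aio_reqprio = (__s16)((prio_class << IOPRIO_CLASS_SHIFT) | prio_level);
}
```

The prepared iocb would then be handed to `io_submit()` as usual; the kernel rejects the flag if it lacks support.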
Kirill Tkhai writes:
>> I think you just need to account the completion ring.
>
> A request of struct aio_kiocb type consumes much more memory, than
> struct io_event does. Shouldn't we account it too?
Not in my opinion. The completion ring is the part that gets pinned
Kirill Tkhai writes:
> On 05.12.2017 00:52, Tejun Heo wrote:
>> Hello, Kirill.
>>
>> On Tue, Dec 05, 2017 at 12:44:00AM +0300, Kirill Tkhai wrote:
>>>> Can you please explain how this is a fundamental resource which can't
>>>> be controlled otherwise?
>>>
>>> Currently,
Kirill Tkhai writes:
> Hi, Benjamin,
>
> On 04.12.2017 19:52, Benjamin LaHaise wrote:
>> Hi Kirill,
>>
>> On Mon, Dec 04, 2017 at 07:12:51PM +0300, Kirill Tkhai wrote:
>>> Hi,
>>>
>>> this patch set introduces accounting of aio_nr and aio_max_nr per blkio cgroup.
>>> It may
Hou Tao writes:
> Hi all,
>
> We need to throttle the O_DIRECT IO on data and metadata device
> of a dm-thin pool and encounter some problems. If we set the
> limitation on the root blkcg, the throttle works. If we set the
> limitation on a child blkcg, the throttle doesn't
Hi, Slava,
Slava Dubeyko writes:
>> The data is lost, that's why you're getting an ECC. It's tantamount
>> to -EIO for a disk block access.
>
> I see the three possible cases here:
> (1) bad block has been discovered (no remap, no recovering) -> data is
> lost; -EIO
Jan Kara writes:
> On Tue 17-01-17 15:14:21, Vishal Verma wrote:
>> Your note on the online repair does raise another tangentially related
>> topic. Currently, if there are badblocks, writes via the bio submission
>> path will clear the error (if the hardware is able to remap the
Slava Dubeyko writes:
>> Well, the situation with NVM is more like with DRAM AFAIU. It is quite
>> reliable
>> but given the size the probability *some* cell has degraded is quite high.
>> And similar to DRAM you'll get MCE (Machine Check Exception) when you try
>>
Christoph Hellwig <h...@infradead.org> writes:
> On Tue, Jan 17, 2017 at 09:54:27AM -0500, Jeff Moyer wrote:
>> I spoke with Dave before the holidays, and he indicated that
>> PMEM_IMMUTABLE would be an acceptable solution to allowing applications
>> to flush data co
"Darrick J. Wong" writes:
>> - Whenever you mount a filesystem with DAX, it spits out a message that says
>> "DAX enabled. Warning: EXPERIMENTAL, use at your own risk". What criteria
>> needs to be met for DAX to no longer be considered experimental?
>
> For XFS I'd
Hi, Kashyap,
I'm CC-ing Kent, seeing how this is his code.
Kashyap Desai writes:
> Objective of this patch is -
>
> To move code used in bcache module in block layer which is used to
> find IO stream. Reference code @drivers/md/bcache/request.c
>
Christoph Hellwig writes:
> On Thu, Jan 12, 2017 at 05:13:52PM +0900, Damien Le Moal wrote:
>> (3) Any other idea ?
>
> Do nothing and ignore the problem. This whole idea is so braindead that
> the person coming up with the T10 language should be shot. Either a device
> has 511
don't read inode->i_blkbits multiple times") for the reasoning, and
commit b87570f5d3496 ("Fix a crash when block device is read and block
size is changed at the same time") for a more detailed problem
description and reproducer.
Fixes: 20ce44d545844
Signed-off-by: Jeff Moyer <j
Additionally, don't assign directly to disk->queue, otherwise
blk_put_queue (called via put_disk) will choke (panic) on the errno
stored there.
Bug found by code inspection after Omar found a similar issue in
virtio_blk. Compile-tested only.
Signed-off-by: Jeff Moyer <jmo...@redhat.com>
Hannes Reinecke writes:
> At LSF I'd like to discuss
> - Do we consider blktrace (and any other tracepoint in eg SCSI) as a
> stable API?
I don't have a strong opinion on this.
> - How do we go about modifying blktrace?
Blktrace has a version number associated with trace events.
Jens Axboe <ax...@kernel.dk> writes:
> On 01/03/2017 03:51 PM, Jeff Moyer wrote:
>>
>> /sys/block/<dev>/queue/io_poll is a boolean. Fix the docs.
>>
>> Signed-off-by: Jeff Moyer <jmo...@redhat.com>
>>
>> diff --git a/Documentation/block/queue-
/sys/block/<dev>/queue/io_poll is a boolean. Fix the docs.
Signed-off-by: Jeff Moyer <jmo...@redhat.com>
diff --git a/Documentation/block/queue-sysfs.txt b/Documentation/block/queue-sysfs.txt
index 5164215..c0a3bb5 100644
--- a/Documentation/block/queue-sysfs.txt
+++ b/Documentation/block
Ross Zwisler writes:
> FWIW I think BRD has this same issue where we get block_bio_queue tracepoint
> events but not block_bio_complete. Solving this in bio_endio() would fix that
> driver as well.
Yeah, there are several other drivers that will benefit.
> Where
Christoph Hellwig <h...@infradead.org> writes:
> On Wed, Nov 09, 2016 at 02:43:58PM -0500, Jeff Moyer wrote:
>> But on the issue side, we have different trace actions: Q vs. I. On the
>> completion side, we just have C. You'd end up getting two C events for
>> e
Christoph Hellwig <h...@infradead.org> writes:
> On Wed, Nov 09, 2016 at 02:31:30PM -0500, Jeff Moyer wrote:
>> bio_endio is still called for request_fn drivers, so you'd see two
>> completion events for those drivers if we did that, no?
>
> We'd see the bio_endio trace
Christoph Hellwig <h...@infradead.org> writes:
> On Wed, Nov 09, 2016 at 02:08:33PM -0500, Jeff Moyer wrote:
>> Right now, any of the above three drivers will report Q events in
>> blktrace but no corresponding C events. Fix it.
>
> It seems like that trace point shou
Damien Le Moal writes:
> +	if (!is_power_of_2(zone_blocks)) {
> +		if (sdkp->first_scan)
> +			sd_printk(KERN_NOTICE, sdkp,
> +				  "Devices with non power of 2 zone "
> +				  "size are
Damien Le Moal writes:
> diff --git a/Documentation/ABI/testing/sysfs-block
> b/Documentation/ABI/testing/sysfs-block
> index 75a5055..ee2d5cd 100644
> --- a/Documentation/ABI/testing/sysfs-block
> +++ b/Documentation/ABI/testing/sysfs-block
> @@ -251,3 +251,16 @@
Eric Wheeler <bca...@lists.ewheeler.net> writes:
> [+cc Mikulas Patocka, Jeff Moyer; Do either of you have any input on Lars'
> commentary related to patchwork #'s 9204125 and 7398411 and BZ#119841? ]
Sorry, I don't have any time to look at this right now.
Cheers,
Jeff
>
> O