On Thu, Nov 30, 2017 at 2:34 AM, Dave Chinner wrote:
> On Thu, Nov 30, 2017 at 12:48:15AM +0100, Rafael J. Wysocki wrote:
>> On Thu, Nov 30, 2017 at 12:23 AM, Luis R. Rodriguez
>> wrote:
>> > There are use cases where we wish to traverse the superblock
On Thu, Nov 30, 2017 at 12:48:15AM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 30, 2017 at 12:23 AM, Luis R. Rodriguez wrote:
> > There are use cases where we wish to traverse the superblock list
> > but also capture errors, in which case we want to avoid having
> > our
On Thu, Nov 30, 2017 at 12:48:15AM +0100, Rafael J. Wysocki wrote:
> On Thu, Nov 30, 2017 at 12:23 AM, Luis R. Rodriguez wrote:
> > +int iterate_supers_excl(int (*f)(struct super_block *, void *), void *arg)
> > +{
> > + struct super_block *sb, *p = NULL;
> > + int
We now track legacy requests with .q_usage_counter as of commit 055f6e18e08f
("block: Make q_usage_counter also track legacy requests"), but that
commit never runs and drains the legacy queue before waiting for this counter
to become zero, so an IO hang is caused when pulling the disk during IO testing.
On Thu, Nov 30, 2017 at 12:23 AM, Luis R. Rodriguez wrote:
> There are use cases where we wish to traverse the superblock list
> but also capture errors, in which case we want to avoid having
> our callers issue a lock themselves, since we can do the locking for
> the
[3]
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux-next.git/log/?h=20171129-fs-freeze-cleanup
[4]
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git/log/?h=20171129-fs-freeze-cleanup
Luis R. Rodriguez (11):
fs: provide unlocked helper for freeze_super()
fs
The question of whether or not a superblock is frozen needs to be
augmented in the future to account for differences between a user
initiated freeze and a kernel initiated freeze done automatically
on behalf of the kernel.
Provide helpers that can be used instead, so that we don't
have to
Userspace can initiate a freeze call using ioctls. If the kernel decides
to freeze a filesystem later it must be able to distinguish if userspace
had initiated the freeze, so that it does not unfreeze it later
automatically on resume.
Likewise if the kernel is initiating a freeze on its own it
freeze_super() holds a write lock; however, we wish to also enable
callers which already hold the write lock. To do this, provide a helper
and make freeze_super() use it. This way, all that freeze_super() does
now is lock handling and active count management.
This introduces no functional changes.
thaw_super() holds a write lock; however, we wish to also enable
callers which already hold the write lock. To do this, provide a helper
and make thaw_super() use it. This way, all that thaw_super() does
now is lock handling and active count management.
This introduces no functional changes.
There are use cases where we wish to traverse the superblock list
but also capture errors, in which case we want to avoid having
our callers issue a lock themselves, since we can do the locking for
the callers. Provide an iterate_supers_excl() which calls a function
with the write lock held. If
This removes superfluous freezer calls, as they are no longer needed
now that the VFS performs filesystem freeze/thaw if the filesystem
supports it.
The following Coccinelle rule was used:
spatch --sp-file fs-freeze-cleanup.cocci --in-place fs/$FS/
@ has_freeze_fs @
identifier
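The rule above is cut off in this preview. As an illustrative sketch only, a semantic patch of this shape might first match filesystems that implement .freeze_fs, then drop a now-redundant freezer call in those filesystems (all identifiers here are assumptions, not the actual rule):

```coccinelle
// Illustrative sketch only -- the real fs-freeze-cleanup.cocci rule
// is truncated above. First, match filesystems with .freeze_fs set.
@ has_freeze_fs @
identifier super_ops;
expression freeze_op;
@@

struct super_operations super_ops = {
	.freeze_fs = freeze_op,
};

// Then remove a freezer call made superfluous by VFS-level freezing.
@ depends on has_freeze_fs @
@@

-	set_freezable();
```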
This uses the existing filesystem freeze and thaw callbacks to
freeze each filesystem on suspend/hibernation and thaw upon resume.
This is needed so that we properly stop IO in flight, without races,
after userspace has been frozen. Without this we rely on
kthread freezing and its semantics
On Wed, Oct 04, 2017 at 01:03:54AM, Bart Van Assche wrote:
> On Wed, 2017-10-04 at 02:47 +0200, Luis R. Rodriguez wrote:
> > 3) Lookup for kthreads which seem to generate IO -- address / review if
> > removal of the freezer API can be done somehow with a quescing. This
> > is
On Sun, Nov 26, 2017 at 02:10:53PM +0100, Richard Weinberger wrote:
> MAX_SG is 64, used for blk_queue_max_segments(). This comes from
> a0044bdf60c2 ("uml: batch I/O requests"). Is this still a good/sane
> value for blk-mq?
blk-mq itself doesn't change the tradeoff.
> The driver does IO
On 11/29/2017 08:18 PM, Christian Borntraeger wrote:
> Works fine under KVM with virtio-blk, but still hangs during boot in an LPAR.
> FWIW, the system not only has scsi disks via fcp but also DASDs as a boot
> disk.
> Seems that this is the place where the system stops. (see the sysrq-t output
Works fine under KVM with virtio-blk, but still hangs during boot in an LPAR.
FWIW, the system not only has scsi disks via fcp but also DASDs as a boot disk.
Seems that this is the place where the system stops. (see the sysrq-t output
at the bottom).
"[0.247484] Linux version
On 11/29/2017 09:16 AM, Christoph Hellwig wrote:
> Hi Jens,
>
> a few more nvme updates for 4.15. A single small PCIe fix, and a number
> of patches for RDMA that are a little larger than what I'd like to see
> for -rc2, but they fix important issues seen in the wild.
Looks good to me, pulled.
Use blk_cleanup_queue() to shutdown the queue when the driver is removed,
and instead get an extra reference to the queue to prevent the queue being
freed before the final mmc_blk_put().
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 17 -
Make mmc_pre_req() and mmc_post_req() available to the card drivers. Later
patches will make use of this.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/core.c | 31 ---
drivers/mmc/core/core.h | 31 +++
2 files
Until mmc has blk-mq support fully implemented and tested, add a parameter
use_blk_mq, set to true if config option MMC_MQ_DEFAULT is selected, which
it is by default.
Signed-off-by: Adrian Hunter
---
drivers/mmc/Kconfig | 10 ++
drivers/mmc/core/core.c |
Define and use a blk-mq queue. Discards and flushes are processed
synchronously, but reads and writes asynchronously. In order to support
slow DMA unmapping, DMA unmapping is not done until after the next request
is started. That means the request is not completed until then. If there is
no next
Add CQE support to the block driver, including:
- optionally using DCMD for flush requests
- "manually" issuing discard requests
- issuing read / write requests to the CQE
- supporting block-layer timeouts
- handling recovery
- supporting re-tuning
CQE offers 25% - 50%
For blk-mq, add support for completing requests directly in the ->done
callback. That means that error handling and urgent background operations
must be handled by recovery_work in that case.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 129
Recovery is simpler to understand if it is only used for errors. Create a
separate function for card polling.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 29 -
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git
Check error bits and save the exception bit when polling card busy.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 37 -
1 file changed, 28 insertions(+), 9 deletions(-)
diff --git a/drivers/mmc/core/block.c
Pedantically, ensure the status is checked for the last time after the full
timeout has passed.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/mmc/core/block.c
According to the specification, total access time is derived from both TAAC
and NSAC, which means the timeout should add both timeout_ns and
timeout_clks. Host drivers do that, so make the block driver do that too.
Signed-off-by: Adrian Hunter
---
Set a 10 second timeout for polling write request busy state. Note, mmc
core is setting a 3 second timeout for SD cards, and SDHCI has long had a
10 second software timer to timeout the whole request, so 10 seconds should
be ample.
Signed-off-by: Adrian Hunter
---
There are only a few things the recovery needs to do. Primarily, it just
needs to:
  - Determine the number of bytes transferred
  - Get the card back to transfer state
  - Determine whether to retry
There are also a couple of additional features:
  - Reset the card before the
The block driver's blk-mq paths do not use mmc_start_areq(). In order to
remove mmc_start_areq() entirely, start by removing it from mmc_test.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/mmc_test.c | 122
1 file
Remove config option MMC_MQ_DEFAULT and parameter mmc_use_blk_mq, so that
blk-mq must be used always.
Signed-off-by: Adrian Hunter
---
drivers/mmc/Kconfig | 10 --
drivers/mmc/core/core.c | 7 ---
drivers/mmc/core/core.h | 2 --
drivers/mmc/core/host.c
Remove code no longer needed after the switch to blk-mq.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 723 +--
drivers/mmc/core/block.h | 2 -
drivers/mmc/core/queue.c | 240 +---
Remove code no longer needed after the switch to blk-mq.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/bus.c | 2 -
drivers/mmc/core/core.c | 185 +--
drivers/mmc/core/core.h | 8 --
drivers/mmc/core/host.h | 5
On Wed, Nov 15, 2017 at 2:50 PM, Adrian Hunter wrote:
> On 14/11/17 23:17, Linus Walleij wrote:
>> We have the following risk factors:
>>
>> - Observed performance degradation of 1% (on x86 SDHI I guess)
>> - The kernel crashes if SD card is removed (both patch sets)
>
>