On 10/17/2017 10:18 PM, Bart Van Assche wrote:
> On Tue, 2017-10-17 at 08:33 +0200, Hannes Reinecke wrote:
>> How do you ensure that PREEMPT requests are not stuck in the queue
>> _behind_ non-PREEMPT requests?
>> Once they are in the queue the requests are already allocated, so your
>> deferred
On 10/17/2017 05:40 PM, Bart Van Assche wrote:
> On Tue, 2017-10-17 at 17:28 -0600, Jens Axboe wrote:
>> On 10/17/2017 05:26 PM, Bart Van Assche wrote:
>>> It is known that during the resume following a hibernate, especially when
>>> using an md RAID1 array created on top of SCSI devices,
On Tue, 2017-10-17 at 17:28 -0600, Jens Axboe wrote:
> On 10/17/2017 05:26 PM, Bart Van Assche wrote:
> > It is known that during the resume following a hibernate, especially when
> > using an md RAID1 array created on top of SCSI devices, sometimes the system
> > hangs instead of coming up
Several block layer and NVMe core functions accept a combination
of BLK_MQ_REQ_* flags through the 'flags' argument but there is
no verification at compile time whether the right type of block
layer flags is passed. Make it possible for sparse to verify this.
This patch does not change any functionality.
A side effect of this patch is that the GFP mask that is passed to
several allocation functions in the legacy block layer is changed
from GFP_KERNEL into __GFP_DIRECT_RECLAIM.
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin
Introduce md_stop_all_writes() because the next patch will add
a second caller for this function. This patch does not change
any functionality.
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Reviewed-by: Shaohua Li
The contexts from which a SCSI device can be quiesced or resumed are:
* Writing into /sys/class/scsi_device/*/device/state.
* SCSI parallel (SPI) domain validation.
* The SCSI device power management methods. See also scsi_bus_pm_ops.
It is essential during suspend and resume that neither the
Hello Jens,
It is known that during the resume following a hibernate, especially when
using an md RAID1 array created on top of SCSI devices, sometimes the system
hangs instead of coming up properly. This patch series fixes that
problem. These patches have been tested on top of the block layer
Some people use the md driver on laptops and use the suspend and
resume functionality. Since it is essential that submitting of
new I/O requests stops before a hibernation image is created,
interrupt the md resync and reshape actions if the system is
being frozen. Note: the resync and reshape will
From: Ming Lei
This patch makes it possible to pause request allocation for
the legacy block layer by calling blk_mq_freeze_queue() and
blk_mq_unfreeze_queue().
Signed-off-by: Ming Lei
[ bvanassche: Combined two patches into one, edited a comment and
Set RQF_PREEMPT if BLK_MQ_REQ_PREEMPT is passed to
blk_get_request_flags().
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin Steigerwald
Cc: Christoph Hellwig
Cc: Ming Lei
This flag will be used in the next patch to let the block layer
core know whether or not a SCSI request queue has been quiesced.
This matters because a quiesced SCSI queue processes only RQF_PREEMPT requests.
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
This avoids confusion with the pm notifier that will be added
through a later patch.
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Reviewed-by: Shaohua Li
Reviewed-by: Hannes Reinecke
Tested-by:
Convert blk_get_request(q, op, __GFP_RECLAIM) into
blk_get_request_flags(q, op, BLK_MQ_REQ_PREEMPT). This patch does not
change any functionality.
Signed-off-by: Bart Van Assche
Tested-by: Martin Steigerwald
Acked-by: David S. Miller
On 10/11/2017 11:39 AM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> When we're getting a domain token, if we fail to get a token on our
> first attempt, we put the current hardware queue on a wait queue and
> then try again just in case a token was freed after our initial
On Tue, 2017-10-17 at 08:43 +0800, Ming Lei wrote:
> On Mon, Oct 16, 2017 at 04:29:04PM -0700, Bart Van Assche wrote:
> > [ ... ]
> > int
> > scsi_device_quiesce(struct scsi_device *sdev)
> > {
> > + struct request_queue *q = sdev->request_queue;
> > int err;
> >
> > + /*
> > + *
On 10/17/2017 10:45 AM, Linus Walleij wrote:
> On Tue, Oct 17, 2017 at 2:45 PM, Paolo Valente
> wrote:
>
>> one of the most time-consuming operations needed by some blkg_*stats_*
>> functions is, e.g., find_next_bit, for which we don't see any trivial
>> replacement.
>
On Tue, Oct 17, 2017 at 2:45 PM, Paolo Valente wrote:
> one of the most time-consuming operations needed by some blkg_*stats_*
> functions is, e.g., find_next_bit, for which we don't see any trivial
> replacement.
So this is one of the things that often falls down to a
On 10/17/2017 10:09 AM, Jens Axboe wrote:
> On 10/16/2017 06:03 PM, Kees Cook wrote:
>> On Fri, Oct 6, 2017 at 7:19 AM, Jens Axboe wrote:
>>> On 10/05/2017 05:13 PM, Kees Cook wrote:
In preparation for unconditionally passing the struct timer_list pointer to
all timer
On 10/16/2017 06:03 PM, Kees Cook wrote:
> On Fri, Oct 6, 2017 at 7:19 AM, Jens Axboe wrote:
>> On 10/05/2017 05:13 PM, Kees Cook wrote:
>>> In preparation for unconditionally passing the struct timer_list pointer to
>>> all timer callbacks, switch to using the new timer_setup()
On 17/10/2017 06:12, Ming Lei wrote:
On Tue, Oct 17, 2017 at 01:04:16PM +0800, Ming Lei wrote:
Hi Jens,
The 1st patch runs the idle hctx after a delay in scsi_mq_get_budget(),
so that we keep the same behaviour as before, and it can be
thought of as a fix.
The 2nd patch cleans up RESTART, and
On Tue, 2017-10-17 at 08:14 +0200, Hannes Reinecke wrote:
> On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> > [ ... ]
> > void target_free_sgl(struct scatterlist *sgl, int nents)
> > {
> > - struct scatterlist *sg;
> > - int count;
> > -
> > - for_each_sg(sgl, sg, nents, count)
> > -
On Tue, 2017-10-17 at 08:21 +0200, Hannes Reinecke wrote:
> On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> > Signed-off-by: Bart Van Assche
> > Reviewed-by: Johannes Thumshirn
> > Cc: linux-s...@vger.kernel.org
> > Cc: Martin K. Petersen
+Ulf Hansson, Mark Brown, Linus Walleij
> On 17 Oct 2017, at 12:11, Paolo Valente
> wrote:
>
> Hi Tejun, all,
> in our work for reducing bfq overhead, we bumped into an unexpected
> fact: the functions blkg_*stats_*, invoked in bfq to update cgroups
On Tue, Oct 17, 2017 at 08:33:36AM +0200, Hannes Reinecke wrote:
> On 10/17/2017 01:29 AM, Bart Van Assche wrote:
> > The contexts from which a SCSI device can be quiesced or resumed are:
> > * Writing into /sys/class/scsi_device/*/device/state.
> > * SCSI parallel (SPI) domain validation.
> > *
On Tue, Oct 17, 2017 at 08:38:01AM +0200, Hannes Reinecke wrote:
> On 10/17/2017 03:29 AM, Ming Lei wrote:
> > On Mon, Oct 16, 2017 at 01:30:09PM +0200, Hannes Reinecke wrote:
> >> On 10/13/2017 07:29 PM, Ming Lei wrote:
> >>> On Fri, Oct 13, 2017 at 05:08:52PM +, Bart Van Assche wrote:
>
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn    Storage
jthumsh...@suse.de    +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham
Thanks Bart,
Reviewed-by: Johannes Thumshirn
Looks good,
Reviewed-by: Christoph Hellwig
On 10/17/2017 03:29 AM, Ming Lei wrote:
> On Mon, Oct 16, 2017 at 01:30:09PM +0200, Hannes Reinecke wrote:
>> On 10/13/2017 07:29 PM, Ming Lei wrote:
>>> On Fri, Oct 13, 2017 at 05:08:52PM +, Bart Van Assche wrote:
On Sat, 2017-10-14 at 00:45 +0800, Ming Lei wrote:
> On Fri, Oct 13,
On 10/17/2017 01:29 AM, Bart Van Assche wrote:
> Several block layer and NVMe core functions accept a combination
> of BLK_MQ_REQ_* flags through the 'flags' argument but there is
> no verification at compile time whether the right type of block
> layer flags is passed. Make it possible for sparse
On 10/17/2017 01:32 AM, Bart Van Assche wrote:
> blk_mq_get_tag() can modify data->ctx. This means that in the
> error path of blk_mq_get_request() data->ctx should be passed to
> blk_mq_put_ctx() instead of local_ctx. Note: since blk_mq_put_ctx()
> ignores its argument, this patch does not change
On 10/17/2017 01:29 AM, Bart Van Assche wrote:
> The contexts from which a SCSI device can be quiesced or resumed are:
> * Writing into /sys/class/scsi_device/*/device/state.
> * SCSI parallel (SPI) domain validation.
> * The SCSI device power management methods. See also scsi_bus_pm_ops.
>
> It
On 10/17/2017 01:29 AM, Bart Van Assche wrote:
> This flag will be used in the next patch to let the block layer
> core know whether or not a SCSI request queue has been quiesced.
> A quiesced SCSI queue processes only RQF_PREEMPT requests.
>
> Signed-off-by: Bart Van Assche
On 10/17/2017 01:29 AM, Bart Van Assche wrote:
> Set RQF_PREEMPT if BLK_MQ_REQ_PREEMPT is passed to
> blk_get_request_flags().
>
> Signed-off-by: Bart Van Assche
> Tested-by: Martin Steigerwald
> Cc: Christoph Hellwig
> Cc: Ming Lei
On 10/17/2017 01:29 AM, Bart Van Assche wrote:
> A side effect of this patch is that the GFP mask that is passed to
> several allocation functions in the legacy block layer is changed
> from GFP_KERNEL into __GFP_DIRECT_RECLAIM.
>
> Signed-off-by: Bart Van Assche
>
On 10/17/2017 01:28 AM, Bart Van Assche wrote:
> From: Ming Lei
>
> This patch makes it possible to pause request allocation for
> the legacy block layer by calling blk_mq_freeze_queue() and
> blk_mq_unfreeze_queue().
>
> Signed-off-by: Ming Lei
> [
On 10/17/2017 01:28 AM, Bart Van Assche wrote:
> Some people use the md driver on laptops and use the suspend and
> resume functionality. Since it is essential that submitting of
> new I/O requests stops before a hibernation image is created,
> interrupt the md resync and reshape actions if the
On 10/17/2017 01:28 AM, Bart Van Assche wrote:
> This avoids confusion with the pm notifier that will be added
> through a later patch.
>
> Signed-off-by: Bart Van Assche
> Reviewed-by: Johannes Thumshirn
> Reviewed-by: Shaohua Li
>
On 10/17/2017 01:28 AM, Bart Van Assche wrote:
> Introduce md_stop_all_writes() because the next patch will add
> a second caller for this function. This patch does not change
> any functionality.
>
> Signed-off-by: Bart Van Assche
> Reviewed-by: Johannes Thumshirn
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Use the sgl_alloc_order() and sgl_free_order() functions instead
> of open coding these functions.
>
> Signed-off-by: Bart Van Assche
> Reviewed-by: Johannes Thumshirn
> Cc: linux-s...@vger.kernel.org
>
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Signed-off-by: Bart Van Assche
> Reviewed-by: Johannes Thumshirn
> Cc: linux-s...@vger.kernel.org
> Cc: Martin K. Petersen
> Cc: Anil Ravindranath
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Use the sgl_alloc_order() and sgl_free_order() functions instead
> of open coding these functions.
>
> Signed-off-by: Bart Van Assche
> Reviewed-by: Johannes Thumshirn
> Cc: linux-s...@vger.kernel.org
>
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Use the sgl_alloc_order() and sgl_free() functions instead of open
> coding these functions.
>
> Signed-off-by: Bart Van Assche
> Cc: Nicholas A. Bellinger
> Cc: Christoph Hellwig
> Cc:
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Use the sgl_alloc() and sgl_free() functions instead of open coding
> these functions.
>
> Signed-off-by: Bart Van Assche
> Reviewed-by: Johannes Thumshirn
> Cc: Keith Busch
>
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Use the sgl_alloc() and sgl_free() functions instead of open coding
> these functions.
>
> Signed-off-by: Bart Van Assche
> Reviewed-by: Johannes Thumshirn
> Cc: Keith Busch
>
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Use the sgl_alloc() and sgl_free() functions instead of open coding
> these functions.
>
> Signed-off-by: Bart Van Assche
> Cc: Ard Biesheuvel
> Cc: Herbert Xu
> ---
On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> Many kernel drivers contain code that allocates and frees both a
> scatterlist and the pages that populate that scatterlist.
> Introduce functions in lib/scatterlist.c that perform these tasks
> instead of duplicating this functionality in multiple