Re: [PATCH] blk: clean up plug

2015-04-17 Thread Jeff Moyer
Shaohua Li <s...@fb.com> writes:

>> Also, Jens or Shaohua or anyone, please review my blk-mq plug fix (patch
>> 1/2 of aforementioned thread).  ;)
>
> You are not alone :), I posted 2 times too
http://marc.info/?l=linux-kernel&m=142627559617005&w=2

Oh, sorry!  I think Jens had mentioned that you had a patch that touched
that code.  I took a quick look at it, and I think the general idea is
good.  I'll take a closer look next week, and I'll also give it to our
performance team for testing.  Hopefully I can follow-up on that patch
by the end of next week.

Cheers,
Jeff


Re: [PATCH] blk: clean up plug

2015-04-17 Thread Shaohua Li
On Fri, Apr 17, 2015 at 04:54:40PM -0400, Jeff Moyer wrote:
> Shaohua Li <s...@fb.com> writes:
> 
> > The current code looks like an inner plug gets flushed with
> > blk_finish_plug(); actually it's a nop. All requests/callbacks are
> > added to current->plug, while only the outermost plug is assigned to
> > current->plug. So an inner plug always has an empty request/callback
> > list, which makes blk_flush_plug_list() a nop. This tries to make the
> > code clearer.
> >
> > Signed-off-by: Shaohua Li <s...@fb.com>
> 
> Hi, Shaohua,
> 
> I agree that this looks like a clean-up with no behavioral change, and
> it looks good to me.  However, it does make me scratch my head about the
> numbers I was seeing.  Here's the table from that other email thread[1]:
> 
> device|  vanilla   |patch1  |   dio-noplug  |  noflush-nested
> --+++---+-
> rssda | 701,684|1,168,527   |   1,342,177   |   1,297,612
>   | 100%   | +66%   |+91%   |+85%
> vdb0  | 358,264|  902,913   | 906,850   | 922,327
>   | 100%   |+152%   |   +153%   |   +157%
> 
> Patch1 refers to the first patch in this series, which fixes the merge
> logic for single-queue blk-mq devices.  Each column after that includes
> that first patch.  In dio-noplug, I removed the blk_plug from the
> direct-io code path (so there is no nesting at all).  This is a control,
> since it is what I expect the outcome of the noflush-nested column to
> actually be.  Then, the noflush-nested column leaves the blk_plug in
> place in the dio code, but includes the patch that prevents nested
> blk_plug's from being flushed.  All numbers are the average of 5 runs.
> With the exception of the vanilla run on rssda (the first run was
> faster, causing the average to go up), the standard deviation is very
> small.
> 
> For the dio-noplug column, if the inner plug really was a noop, then why
> would we see any change in performance?  Like I said, I agree with your
> reading of the code and the patch.  Color me confused.  I'll poke at it
> more next week.  For now, I think your patch is fine.
> 
> Reviewed-by: Jeff Moyer <jmo...@redhat.com>

Thanks! I don't know why your second change makes a difference either.
 
> Also, Jens or Shaohua or anyone, please review my blk-mq plug fix (patch
> 1/2 of aforementioned thread).  ;)

You are not alone :), I posted 2 times too
http://marc.info/?l=linux-kernel&m=142627559617005&w=2


Thanks,
Shaohua


Re: [PATCH] blk: clean up plug

2015-04-17 Thread Jeff Moyer
Shaohua Li <s...@fb.com> writes:

> The current code looks like an inner plug gets flushed with
> blk_finish_plug(); actually it's a nop. All requests/callbacks are
> added to current->plug, while only the outermost plug is assigned to
> current->plug. So an inner plug always has an empty request/callback
> list, which makes blk_flush_plug_list() a nop. This tries to make the
> code clearer.
>
> Signed-off-by: Shaohua Li <s...@fb.com>

Hi, Shaohua,

I agree that this looks like a clean-up with no behavioral change, and
it looks good to me.  However, it does make me scratch my head about the
numbers I was seeing.  Here's the table from that other email thread[1]:

device|  vanilla   |patch1  |   dio-noplug  |  noflush-nested
--+++---+-
rssda | 701,684|1,168,527   |   1,342,177   |   1,297,612
  | 100%   | +66%   |+91%   |+85%
vdb0  | 358,264|  902,913   | 906,850   | 922,327
  | 100%   |+152%   |   +153%   |   +157%

Patch1 refers to the first patch in this series, which fixes the merge
logic for single-queue blk-mq devices.  Each column after that includes
that first patch.  In dio-noplug, I removed the blk_plug from the
direct-io code path (so there is no nesting at all).  This is a control,
since it is what I expect the outcome of the noflush-nested column to
actually be.  Then, the noflush-nested column leaves the blk_plug in
place in the dio code, but includes the patch that prevents nested
blk_plug's from being flushed.  All numbers are the average of 5 runs.
With the exception of the vanilla run on rssda (the first run was
faster, causing the average to go up), the standard deviation is very
small.

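For reference, the nesting in question looks roughly like the sketch
below. This is only an illustration of the call pattern; the two
functions are made-up stand-ins for the outer caller and the dio path,
not the actual fs/direct-io.c code:

#include <linux/blkdev.h>

static void dio_path(void)              /* stand-in for the dio code */
{
        struct blk_plug inner;

        blk_start_plug(&inner);         /* nested: current->plug stays the outer plug */
        /* bio submissions here queue on current->plug, i.e. the outer plug */
        blk_finish_plug(&inner);        /* inner != current->plug, so effectively a nop */
}

static void outer_path(void)            /* stand-in for the caller that already plugged */
{
        struct blk_plug outer;

        blk_start_plug(&outer);         /* current->plug = &outer */
        dio_path();
        blk_finish_plug(&outer);        /* flushes everything that accumulated */
}

The dio-noplug kernel simply deletes the inner start/finish pair.
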
For the dio-noplug column, if the inner plug really was a noop, then why
would we see any change in performance?  Like I said, I agree with your
reading of the code and the patch.  Color me confused.  I'll poke at it
more next week.  For now, I think your patch is fine.

Reviewed-by: Jeff Moyer <jmo...@redhat.com>

Also, Jens or Shaohua or anyone, please review my blk-mq plug fix (patch
1/2 of aforementioned thread).  ;)

-Jeff

[1] https://lkml.org/lkml/2015/4/16/366

> ---
>  block/blk-core.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 794c3e7..d3161f3 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -3018,21 +3018,20 @@ void blk_start_plug(struct blk_plug *plug)
>  {
>         struct task_struct *tsk = current;
>
> +       /*
> +        * If this is a nested plug, don't actually assign it.
> +        */
> +       if (tsk->plug)
> +               return;
> +
>         INIT_LIST_HEAD(&plug->list);
>         INIT_LIST_HEAD(&plug->mq_list);
>         INIT_LIST_HEAD(&plug->cb_list);
> -
>         /*
> -        * If this is a nested plug, don't actually assign it. It will be
> -        * flushed on its own.
> +        * Store ordering should not be needed here, since a potential
> +        * preempt will imply a full memory barrier
>          */
> -       if (!tsk->plug) {
> -               /*
> -                * Store ordering should not be needed here, since a potential
> -                * preempt will imply a full memory barrier
> -                */
> -               tsk->plug = plug;
> -       }
> +       tsk->plug = plug;
>  }
>  EXPORT_SYMBOL(blk_start_plug);
>
> @@ -3179,10 +3178,11 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>
>  void blk_finish_plug(struct blk_plug *plug)
>  {
> +       if (plug != current->plug)
> +               return;
>         blk_flush_plug_list(plug, false);
>
> -       if (plug == current->plug)
> -               current->plug = NULL;
> +       current->plug = NULL;
>  }
>  EXPORT_SYMBOL(blk_finish_plug);
>


[PATCH] blk: clean up plug

2015-04-16 Thread Shaohua Li
The current code looks like an inner plug gets flushed with
blk_finish_plug(); actually it's a nop. All requests/callbacks are added
to current->plug, while only the outermost plug is assigned to
current->plug. So an inner plug always has an empty request/callback
list, which makes blk_flush_plug_list() a nop. This tries to make the
code clearer.

Signed-off-by: Shaohua Li <s...@fb.com>
---
 block/blk-core.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 794c3e7..d3161f3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3018,21 +3018,20 @@ void blk_start_plug(struct blk_plug *plug)
 {
        struct task_struct *tsk = current;

+       /*
+        * If this is a nested plug, don't actually assign it.
+        */
+       if (tsk->plug)
+               return;
+
        INIT_LIST_HEAD(&plug->list);
        INIT_LIST_HEAD(&plug->mq_list);
        INIT_LIST_HEAD(&plug->cb_list);
-
        /*
-        * If this is a nested plug, don't actually assign it. It will be
-        * flushed on its own.
+        * Store ordering should not be needed here, since a potential
+        * preempt will imply a full memory barrier
         */
-       if (!tsk->plug) {
-               /*
-                * Store ordering should not be needed here, since a potential
-                * preempt will imply a full memory barrier
-                */
-               tsk->plug = plug;
-       }
+       tsk->plug = plug;
 }
 EXPORT_SYMBOL(blk_start_plug);

@@ -3179,10 +3178,11 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)

 void blk_finish_plug(struct blk_plug *plug)
 {
+       if (plug != current->plug)
+               return;
        blk_flush_plug_list(plug, false);

-       if (plug == current->plug)
-               current->plug = NULL;
+       current->plug = NULL;
 }
 EXPORT_SYMBOL(blk_finish_plug);
 
-- 
1.8.1

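For completeness, here is a minimal sketch of how a caller uses this
API. submit_all() is a hypothetical helper, not from this patch, and
submit_bio(rw, bio) is the two-argument form this kernel uses:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/fs.h>

/* Hypothetical helper: submit a batch of bios under one plug. */
static void submit_all(struct bio **bios, int n)
{
        struct blk_plug plug;
        int i;

        blk_start_plug(&plug);          /* with this patch: returns early if nested */
        for (i = 0; i < n; i++)
                submit_bio(WRITE, bios[i]);     /* requests queue on current->plug */
        blk_finish_plug(&plug);         /* with this patch: a nop unless we own current->plug */
}

Nested or not, requests land on the outermost plug and are flushed when
that plug finishes; the patch makes this the explicit contract instead
of a side effect of flushing empty lists.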