Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-11 Thread Kent Overstreet
On Fri, Oct 11, 2013 at 05:16:42PM -0400, Mike Snitzer wrote:
> On Fri, Oct 11 2013 at 12:13am -0400,
> Kent Overstreet  wrote:
> 
> > On Sun, Oct 06, 2013 at 08:14:10PM -0400, Mike Snitzer wrote:
> > > 
> > > Please fold this fix into your for-jens branch, thanks.  (Could be that
> > > by the time Jens takes your immutable biovec changes we'll need to
> > > rebase but at least it won't slip through the cracks).
> > 
> > Thanks! I knew that bio chaining patch was going to cause a few issues
> > like this, but it seems too useful to pass up. Anything else come up/any
> > comments?
> 
> I'm wondering if there might be a cleaner way to hide the quirky nature
> of the bio chaining with saved/restored bi_end_io.  Joe Thornber
> implemented utility wrappers local to dm-cache, see:
> http://people.redhat.com/msnitzer/patches/upstream/dm-cache-for-v3.13/dm-cache-utils-for-hooking-bios.patch
> 
> It would be natural to elevate something like this to the block layer and
> then bury the quirk of needing to bump bi_remaining when the bi_end_io
> is restored in unhook_bio().

Hmm, that might not be a bad idea. I suppose there are a decent number
of places scattered around (like the bio integrity code) that are simple
enough that there's no point in cloning the bio, they just want to hook
into the completion...

If I get time I may grep around and see what sort of consolidation
that'd enable.
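For discussion's sake, a minimal user-space sketch of what such generic helpers might look like. This is not the real kernel API: it uses a simplified stand-in for struct bio, and while the hook_bio()/unhook_bio() names follow Joe Thornber's dm-cache utility patch linked above, the rest is illustrative only.

```c
#include <stddef.h>

/* Simplified stand-in for struct bio: only the fields the hooking
 * pattern touches.  In the kernel, bi_remaining is an atomic_t. */
struct bio {
	void (*bi_end_io)(struct bio *);
	int bi_remaining;
	void *bi_private;
};

/* Saved completion state, analogous to the per-bio hook state in the
 * dm-cache utility patch. */
struct bio_hook {
	void (*saved_bi_end_io)(struct bio *);
	void *saved_bi_private;
};

/* Redirect completion of @bio to @end_io, remembering the old handler. */
static void hook_bio(struct bio_hook *h, struct bio *bio,
		     void (*end_io)(struct bio *), void *private)
{
	h->saved_bi_end_io = bio->bi_end_io;
	h->saved_bi_private = bio->bi_private;
	bio->bi_end_io = end_io;
	bio->bi_private = private;
}

/* Restore the original handler.  This is where the quirk gets buried:
 * bi_remaining must be bumped so the next bio_endio() call actually
 * runs the restored bi_end_io rather than treating the bio as done. */
static void unhook_bio(struct bio_hook *h, struct bio *bio)
{
	bio->bi_end_io = h->saved_bi_end_io;
	bio->bi_private = h->saved_bi_private;
	bio->bi_remaining++;	/* atomic_inc(&bio->bi_remaining) in the kernel */
}

/* Example handlers, purely for demonstration. */
static void orig_endio(struct bio *bio) { (void)bio; }
static void intercept_endio(struct bio *bio) { (void)bio; }
```

Burying the increment inside unhook_bio() would mean callers like writethrough_endio() and process_prepared_mapping() couldn't forget it.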

> Aside from that idea, your immutable biovec changes look sane to me.  I
> like how much was able to be removed from DM core.  I don't see any
> remaining problems that stand out.  Feel free to add the following to
> your DM patches in the series:
> 
>  Reviewed-by: Mike Snitzer 

Great, thanks!

> However, I did collect various style nits in the DM code during my
> review.  I'd like to see these changes applied:

I'll fold them in :)

> 
> Author: Mike Snitzer 
> Date:   Fri Sep 27 18:14:38 2013 -0400
> 
> dm: style nits and comment for immutable biovec changes
> ---
>  drivers/md/dm-cache-target.c |   16 ++--
>  drivers/md/dm-crypt.c|8 ++--
>  drivers/md/dm-io.c   |3 +--
>  drivers/md/dm-snap.c |8 
>  drivers/md/dm-thin.c |3 +--
>  drivers/md/dm-verity.c   |3 +--
>  drivers/md/dm.c  |6 ++
>  include/linux/dm-io.h|2 +-
>  8 files changed, 22 insertions(+), 27 deletions(-)
> 
> diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
> index a9acec6..1ccbfff 100644
> --- a/drivers/md/dm-cache-target.c
> +++ b/drivers/md/dm-cache-target.c
> @@ -571,13 +571,13 @@ static void remap_to_cache(struct cache *cache, struct bio *bio,
>  
>   bio->bi_bdev = cache->cache_dev->bdev;
>   if (!block_size_is_power_of_two(cache))
> - bio->bi_iter.bi_sector = (from_cblock(cblock) *
> -   cache->sectors_per_block) +
> - sector_div(bi_sector, cache->sectors_per_block);
> + bio->bi_iter.bi_sector =
> + (from_cblock(cblock) * cache->sectors_per_block) +
> + sector_div(bi_sector, cache->sectors_per_block);
>   else
> - bio->bi_iter.bi_sector = (from_cblock(cblock) <<
> -   cache->sectors_per_block_shift) |
> - (bi_sector & (cache->sectors_per_block - 1));
> + bio->bi_iter.bi_sector =
> + (from_cblock(cblock) << cache->sectors_per_block_shift) |
> + (bi_sector & (cache->sectors_per_block - 1));
>  }
>  
>  static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio)
> @@ -666,6 +666,10 @@ static void writethrough_endio(struct bio *bio, int err)
>   struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT);
>   bio->bi_end_io = pb->saved_bi_end_io;
>  
> + /*
> +  * Must bump bi_remaining to allow bio to complete with
> +  * restored bi_end_io.
> +  */
>   atomic_inc(&bio->bi_remaining);
>  
>   if (err) {
> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> index e0b902f..4db7e48 100644
> --- a/drivers/md/dm-crypt.c
> +++ b/drivers/md/dm-crypt.c
> @@ -648,12 +648,10 @@ static void crypt_convert_init(struct crypt_config *cc,
>  {
>   ctx->bio_in = bio_in;
>   ctx->bio_out = bio_out;
> -
>   if (bio_in)
>   ctx->iter_in = bio_in->bi_iter;
>   if (bio_out)
>   ctx->iter_out = bio_out->bi_iter;
> -
>   ctx->cc_sector = sector + cc->iv_offset;
>   init_completion(&ctx->restart);
>  }
> @@ -752,8 +750,7 @@ static int crypt_convert(struct crypt_config *cc,
>  
>   atomic_set(&ctx->cc_pending, 1);
>  
> - while (ctx->iter_in.bi_size &&
> -ctx->iter_out.bi_size) {
> + while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
>  
>   crypt_alloc_req(cc, ctx);
>  
> @@ -1676,8 +1673,7 @@ static int crypt_map(struct dm_target *ti, 

Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-11 Thread Mike Snitzer
On Fri, Oct 11 2013 at 12:13am -0400,
Kent Overstreet  wrote:

> On Sun, Oct 06, 2013 at 08:14:10PM -0400, Mike Snitzer wrote:
> > 
> > Please fold this fix into your for-jens branch, thanks.  (Could be that
> > by the time Jens takes your immutable biovec changes we'll need to
> > rebase but at least it won't slip through the cracks).
> 
> Thanks! I knew that bio chaining patch was going to cause a few issues
> like this, but it seems too useful to pass up. Anything else come up/any
> comments?

I'm wondering if there might be a cleaner way to hide the quirky nature
of the bio chaining with saved/restored bi_end_io.  Joe Thornber
implemented utility wrappers local to dm-cache, see:
http://people.redhat.com/msnitzer/patches/upstream/dm-cache-for-v3.13/dm-cache-utils-for-hooking-bios.patch

It would be natural to elevate something like this to the block layer and
then bury the quirk of needing to bump bi_remaining when the bi_end_io
is restored in unhook_bio().

Aside from that idea, your immutable biovec changes look sane to me.  I
like how much was able to be removed from DM core.  I don't see any
remaining problems that stand out.  Feel free to add the following to
your DM patches in the series:

 Reviewed-by: Mike Snitzer 

However, I did collect various style nits in the DM code during my
review.  I'd like to see these changes applied:

Author: Mike Snitzer 
Date:   Fri Sep 27 18:14:38 2013 -0400

dm: style nits and comment for immutable biovec changes
---
 drivers/md/dm-cache-target.c |   16 ++--
 drivers/md/dm-crypt.c|8 ++--
 drivers/md/dm-io.c   |3 +--
 drivers/md/dm-snap.c |8 
 drivers/md/dm-thin.c |3 +--
 drivers/md/dm-verity.c   |3 +--
 drivers/md/dm.c  |6 ++
 include/linux/dm-io.h|2 +-
 8 files changed, 22 insertions(+), 27 deletions(-)

diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index a9acec6..1ccbfff 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -571,13 +571,13 @@ static void remap_to_cache(struct cache *cache, struct bio *bio,
 
bio->bi_bdev = cache->cache_dev->bdev;
if (!block_size_is_power_of_two(cache))
-   bio->bi_iter.bi_sector = (from_cblock(cblock) *
- cache->sectors_per_block) +
-   sector_div(bi_sector, cache->sectors_per_block);
+   bio->bi_iter.bi_sector =
+   (from_cblock(cblock) * cache->sectors_per_block) +
+   sector_div(bi_sector, cache->sectors_per_block);
else
-   bio->bi_iter.bi_sector = (from_cblock(cblock) <<
- cache->sectors_per_block_shift) |
-   (bi_sector & (cache->sectors_per_block - 1));
+   bio->bi_iter.bi_sector =
+   (from_cblock(cblock) << cache->sectors_per_block_shift) |
+   (bi_sector & (cache->sectors_per_block - 1));
 }
 
 static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio)
@@ -666,6 +666,10 @@ static void writethrough_endio(struct bio *bio, int err)
struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT);
bio->bi_end_io = pb->saved_bi_end_io;
 
+   /*
+* Must bump bi_remaining to allow bio to complete with
+* restored bi_end_io.
+*/
	atomic_inc(&bio->bi_remaining);
 
if (err) {
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index e0b902f..4db7e48 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -648,12 +648,10 @@ static void crypt_convert_init(struct crypt_config *cc,
 {
ctx->bio_in = bio_in;
ctx->bio_out = bio_out;
-
if (bio_in)
ctx->iter_in = bio_in->bi_iter;
if (bio_out)
ctx->iter_out = bio_out->bi_iter;
-
ctx->cc_sector = sector + cc->iv_offset;
	init_completion(&ctx->restart);
 }
@@ -752,8 +750,7 @@ static int crypt_convert(struct crypt_config *cc,
 
	atomic_set(&ctx->cc_pending, 1);
 
-   while (ctx->iter_in.bi_size &&
-  ctx->iter_out.bi_size) {
+   while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
 
crypt_alloc_req(cc, ctx);
 
@@ -1676,8 +1673,7 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
return DM_MAPIO_REMAPPED;
}
 
-   io = crypt_io_alloc(cc, bio,
-   dm_target_offset(ti, bio->bi_iter.bi_sector));
+   io = crypt_io_alloc(cc, bio, dm_target_offset(ti, bio->bi_iter.bi_sector));
 
if (bio_data_dir(io->base_bio) == READ) {
if (kcryptd_io_read(io, GFP_NOWAIT))
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index 9cc1f3a..b2b8a10 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -307,8 +307,7 @@ static void do_region(int rw, unsigned region, struct 

Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-10 Thread Kent Overstreet
On Sun, Oct 06, 2013 at 08:14:10PM -0400, Mike Snitzer wrote:
> On Fri, Oct 04 2013 at  1:07pm -0400,
> Mike Snitzer  wrote:
>  
> > With your latest fix I was able to create a thin device and format with
> > XFS.  Unfortunately, when I tried to run the thinp-test-suite the very
> > first BasicTests test (test_dd_benchmark) fails -- need to look closer
> > but it would seem to me the thinp saved bio_endio path isn't happy.  We
> > likely need an appropriately placed atomic_inc(&bio->bi_remaining); like
> > you did in dm-cache-target.c
> > 
> > [ cut here ]
> > kernel BUG at fs/bio.c:1722!
> ...
> > Call Trace:
> >  [] process_prepared_mapping+0x79/0x150 [dm_thin_pool]
> >  [] process_prepared+0x87/0xa0 [dm_thin_pool]
> >  [] do_worker+0x33/0x60 [dm_thin_pool]
> >  [] process_one_work+0x182/0x3b0
> >  [] worker_thread+0x120/0x3a0
> >  [] ? manage_workers+0x160/0x160
> >  [] kthread+0xce/0xe0
> >  [] ? kthread_freezable_should_stop+0x70/0x70
> >  [] ret_from_fork+0x7c/0xb0
> >  [] ? kthread_freezable_should_stop+0x70/0x70
> > Code: 1f 84 00 00 00 00 00 48 8b 57 10 83 e2 01 0f 44 f1 eb cd 0f 1f 40 00 
> > 48 8b 7f 50 48 85 ff 74 dd 8b 57 44 48 8d 47 44 85 d2 7f ac <0f> 0b eb fe 
> > 0f 1f 84 00 00 00 00 00 55 48 89 e5 66 66 66 66 90 
> > RIP  [] bio_endio+0x74/0x80
> >  RSP 
> > ---[ end trace acb5a7d638591b7b ]---
> 
> Please fold this fix into your for-jens branch, thanks.  (Could be that
> by the time Jens takes your immutable biovec changes we'll need to
> rebase but at least it won't slip through the cracks).

Thanks! I knew that bio chaining patch was going to cause a few issues
like this, but it seems too useful to pass up. Anything else come up/any
comments?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-06 Thread Mike Snitzer
On Fri, Oct 04 2013 at  1:07pm -0400,
Mike Snitzer  wrote:
 
> With your latest fix I was able to create a thin device and format with
> XFS.  Unfortunately, when I tried to run the thinp-test-suite the very
> first BasicTests test (test_dd_benchmark) fails -- need to look closer
> but it would seem to me the thinp saved bio_endio path isn't happy.  We
> likely need an appropriately placed atomic_inc(&bio->bi_remaining); like
> you did in dm-cache-target.c
> 
> [ cut here ]
> kernel BUG at fs/bio.c:1722!
...
> Call Trace:
>  [] process_prepared_mapping+0x79/0x150 [dm_thin_pool]
>  [] process_prepared+0x87/0xa0 [dm_thin_pool]
>  [] do_worker+0x33/0x60 [dm_thin_pool]
>  [] process_one_work+0x182/0x3b0
>  [] worker_thread+0x120/0x3a0
>  [] ? manage_workers+0x160/0x160
>  [] kthread+0xce/0xe0
>  [] ? kthread_freezable_should_stop+0x70/0x70
>  [] ret_from_fork+0x7c/0xb0
>  [] ? kthread_freezable_should_stop+0x70/0x70
> Code: 1f 84 00 00 00 00 00 48 8b 57 10 83 e2 01 0f 44 f1 eb cd 0f 1f 40 00 48 
> 8b 7f 50 48 85 ff 74 dd 8b 57 44 48 8d 47 44 85 d2 7f ac <0f> 0b eb fe 0f 1f 
> 84 00 00 00 00 00 55 48 89 e5 66 66 66 66 90 
> RIP  [] bio_endio+0x74/0x80
>  RSP 
> ---[ end trace acb5a7d638591b7b ]---

Please fold this fix into your for-jens branch, thanks.  (Could be that
by the time Jens takes your immutable biovec changes we'll need to
rebase but at least it won't slip through the cracks).

---
 drivers/md/dm-thin.c |8 ++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index a654024..1abb4a2 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -611,8 +611,10 @@ static void cell_defer_no_holder(struct thin_c *tc, struct dm_bio_prison_cell *c
 
 static void process_prepared_mapping_fail(struct dm_thin_new_mapping *m)
 {
-   if (m->bio)
+   if (m->bio) {
m->bio->bi_end_io = m->saved_bi_end_io;
+       atomic_inc(&m->bio->bi_remaining);
+   }
cell_error(m->tc->pool, m->cell);
	list_del(&m->list);
mempool_free(m, m->tc->pool->mapping_pool);
@@ -626,8 +628,10 @@ static void process_prepared_mapping(struct dm_thin_new_mapping *m)
int r;
 
bio = m->bio;
-   if (bio)
+   if (bio) {
bio->bi_end_io = m->saved_bi_end_io;
+       atomic_inc(&bio->bi_remaining);
+   }
 
if (m->err) {
cell_error(pool, m->cell);


Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-04 Thread Mike Snitzer
On Thu, Oct 03 2013 at  6:50pm -0400,
Mike Snitzer  wrote:

> On Thu, Oct 03 2013 at  5:45pm -0400,
> Kent Overstreet  wrote:
> 
> > On Wed, Oct 02, 2013 at 11:23:21PM -0400, Mike Snitzer wrote:
> > > Great, thanks for finding this, I'll test and review the code further.
> > 
> > Cool - let me know if you find anything else (how thoroughly do you
> > think you've tested it so far?)
> 
> Not very, the issue you resolved was a blocker for DM.  But I haven't
> even verified your fix yet though... will do tomorrow with more
> extensive testing using the dm-thinp and dm-cache test-suite.

With your latest fix I was able to create a thin device and format with
XFS.  Unfortunately, when I tried to run the thinp-test-suite the very
first BasicTests test (test_dd_benchmark) fails -- need to look closer
but it would seem to me the thinp saved bio_endio path isn't happy.  We
likely need an appropriately placed atomic_inc(&bio->bi_remaining); like
you did in dm-cache-target.c

[ cut here ]
kernel BUG at fs/bio.c:1722!
invalid opcode:  [#1] SMP
Modules linked in: dm_cache_cleaner dm_cache_mq dm_cache dm_thin_pool
dm_bio_prison dm_persistent_data dm_bufio dm_mod xfs exportfs libcrc32c
ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4
target_core_iblock target_core_file target_core_pscsi target_core_mod
configfs bnx2fc fcoe 8021q libfcoe libfc garp stp llc scsi_transport_fc
scsi_tgt sunrpc cpufreq_ondemand ipt_REJECT nf_conntrack_ipv4
nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6
nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i
cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp libiscsi_tcp
libiscsi scsi_transport_iscsi vhost_net macvtap macvlan vhost tun
kvm_intel kvm iTCO_wdt iTCO_vendor_support microcode i2c_i801 lpc_ich
mfd_core igb i2c_algo_bit i2c_core i7core_edac edac_core ixgbe dca ptp
pps_core mdio sg ses enclosure acpi_cpufreq freq_table ext4 jbd2 mbcache
sr_mod cdrom pata_acpi ata_generic ata_piix sd_mod crc_t10dif
crct10dif_common megaraid_sas [last unloaded: dm_cache_mq]
CPU: 0 PID: 342 Comm: kworker/u24:7 Tainted: G        W    3.12.0-rc2.snitm+ #76
Hardware name: FUJITSU PRIMERGY RX300 S6/D2619, BIOS 6.00 Rev. 1.10.2619.N1 05/24/2011
Workqueue: dm-thin do_worker [dm_thin_pool]
task: 88032cd88ab0 ti: 88032c5e6000 task.ti: 88032c5e6000
RIP: 0010:[]  [] bio_endio+0x74/0x80
RSP: 0018:88032c5e7d48  EFLAGS: 00010246
RAX: 88032f10a784 RBX: 88032d097450 RCX: 
RDX:  RSI:  RDI: 88032f10a740
RBP: 88032c5e7d48 R08: 88033fc12d00 R09: 
R10:  R11: 0002 R12: 88032f10a740
R13: 88032def5880 R14: 88032d0c7c00 R15: 88032d457a05
FS:  () GS:88033fc0() knlGS:
CS:  0010 DS:  ES:  CR0: 8005003b
CR2: 003bfea13000 CR3: 01a0b000 CR4: 07f0
Stack:
 88032c5e7d98 a05f2ef9 88032c5e7da8 
 88032d0c7c00 88032c5e7da8 88032d0c7d98 88032d0974a8
 88032d0c7d14 88032d457a05 88032c5e7dd8 a05f2ba7
Call Trace:
 [] process_prepared_mapping+0x79/0x150 [dm_thin_pool]
 [] process_prepared+0x87/0xa0 [dm_thin_pool]
 [] do_worker+0x33/0x60 [dm_thin_pool]
 [] process_one_work+0x182/0x3b0
 [] worker_thread+0x120/0x3a0
 [] ? manage_workers+0x160/0x160
 [] kthread+0xce/0xe0
 [] ? kthread_freezable_should_stop+0x70/0x70
 [] ret_from_fork+0x7c/0xb0
 [] ? kthread_freezable_should_stop+0x70/0x70
Code: 1f 84 00 00 00 00 00 48 8b 57 10 83 e2 01 0f 44 f1 eb cd 0f 1f 40 00 48 
8b 7f 50 48 85 ff 74 dd 8b 57 44 48 8d 47 44 85 d2 7f ac <0f> 0b eb fe 0f 1f 84 
00 00 00 00 00 55 48 89 e5 66 66 66 66 90 
RIP  [] bio_endio+0x74/0x80
 RSP 
---[ end trace acb5a7d638591b7b ]---
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-03 Thread Mike Snitzer
On Thu, Oct 03 2013 at  5:45pm -0400,
Kent Overstreet <k...@daterainc.com> wrote:

> On Wed, Oct 02, 2013 at 11:23:21PM -0400, Mike Snitzer wrote:
> > Great, thanks for finding this, I'll test and review the code further.
> 
> Cool - let me know if you find anything else (how thoroughly do you
> think you've tested it so far?)

Not very, the issue you resolved was a blocker for DM.  But I haven't
even verified your fix yet though... will do tomorrow with more
extensive testing using the dm-thinp and dm-cache test-suite.


Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-03 Thread Kent Overstreet
On Wed, Oct 02, 2013 at 11:23:21PM -0400, Mike Snitzer wrote:
> Great, thanks for finding this, I'll test and review the code further.

Cool - let me know if you find anything else (how thoroughly do you
think you've tested it so far?)




Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-02 Thread Mike Snitzer
On Wed, Oct 02 2013 at 11:17pm -0400,
Kent Overstreet <k...@daterainc.com> wrote:

> On Sat, Sep 28, 2013 at 12:59:09AM -0400, Mike Snitzer wrote:
> > Hey Kent,
> > 
> > I haven't been able to pinpoint the issue yet, but using your for-jens
> > branch, if I create a dm-thin volume with this lvm command:
> > lvcreate -L20G -V20G -T vg/pool --name thinlv
> > 
> > and try to format /dev/vg/thinlv with XFS the kernel warns and then
> > hangs with the following:
> 
> Thanks, for whatever reason ext4 wasn't tickling that codepath but xfs
> does. I folded the fix into my for-jens branch, here's what changed:
> 
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index b60b350..79cee1a 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1098,7 +1098,7 @@ static void clone_bio(struct dm_target_io *tio, struct bio *bio,
>   bio_integrity_clone(clone, bio, GFP_NOIO);
>  
>   bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
> - bio->bi_iter.bi_size = to_bytes(len);
> + clone->bi_iter.bi_size = to_bytes(len);
>  
>   if (bio_integrity(bio))
>   bio_integrity_trim(clone, 0, len);

The above change matches my findings during review (I clearly should've
shared it sooner)

> @@ -1267,7 +1267,7 @@ static int __split_and_process_non_flush(struct clone_info *ci)
>   if (!dm_target_is_valid(ti))
>   return -EIO;
>  
> - len = min_t(unsigned, max_io_len(ci->sector, ti), bio_sectors(bio));
> + len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
>  
>   __clone_and_map_data_bio(ci, ti, ci->sector, len);
>  

Great, thanks for finding this, I'll test and review the code further.


Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-10-02 Thread Kent Overstreet
On Sat, Sep 28, 2013 at 12:59:09AM -0400, Mike Snitzer wrote:
> Hey Kent,
> 
> I haven't been able to pinpoint the issue yet, but using your for-jens
> branch, if I create a dm-thin volume with this lvm command:
> lvcreate -L20G -V20G -T vg/pool --name thinlv
> 
> and try to format /dev/vg/thinlv with XFS the kernel warns and then
> hangs with the following:

Thanks, for whatever reason ext4 wasn't tickling that codepath but xfs
does. I folded the fix into my for-jens branch, here's what changed:

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index b60b350..79cee1a 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1098,7 +1098,7 @@ static void clone_bio(struct dm_target_io *tio, struct bio *bio,
bio_integrity_clone(clone, bio, GFP_NOIO);
 
bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
-   bio->bi_iter.bi_size = to_bytes(len);
+   clone->bi_iter.bi_size = to_bytes(len);
 
if (bio_integrity(bio))
bio_integrity_trim(clone, 0, len);
@@ -1267,7 +1267,7 @@ static int __split_and_process_non_flush(struct clone_info *ci)
if (!dm_target_is_valid(ti))
return -EIO;
 
-   len = min_t(unsigned, max_io_len(ci->sector, ti), bio_sectors(bio));
+   len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
 
__clone_and_map_data_bio(ci, ti, ci->sector, len);
 




Re: [PATCH 16/22] dm: Refactor for new bio cloning/splitting

2013-09-27 Thread Mike Snitzer
On Wed, Aug 07 2013 at  5:54pm -0400,
Kent Overstreet <k...@daterainc.com> wrote:

> We need to convert the dm code to the new bvec_iter primitives which
> respect bi_bvec_done; they also allow us to drastically simplify dm's
> bio splitting code.
> 
> Also kill bio_sector_offset(), dm was the only user and it doesn't make
> much sense anymore.
> 
> Signed-off-by: Kent Overstreet <k...@daterainc.com>
> Cc: Jens Axboe <ax...@kernel.dk>
> Cc: Alasdair Kergon <a...@redhat.com>
> Cc: dm-de...@redhat.com
> ---
>  drivers/md/dm.c | 170 
> ++--
>  fs/bio.c|  38 
>  include/linux/bio.h |   1 -
>  3 files changed, 18 insertions(+), 191 deletions(-)
> 
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index 5544af7..696269d 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1050,7 +1050,6 @@ struct clone_info {

<snip>

>  /*
>   * Creates a bio that consists of range of complete bvecs.
>   */
>  static void clone_bio(struct dm_target_io *tio, struct bio *bio,
> -   sector_t sector, unsigned short idx,
> -   unsigned short bv_count, unsigned len)
> +   sector_t sector, unsigned len)
>  {
>   struct bio *clone = &tio->clone;
> - unsigned trim = 0;
>  
>   __bio_clone(clone, bio);
> - bio_setup_sector(clone, sector, len);
> - bio_setup_bv(clone, idx, bv_count);
>  
> - if (idx != bio->bi_iter.bi_idx ||
> - clone->bi_iter.bi_size < bio->bi_iter.bi_size)
> - trim = 1;
> - clone_bio_integrity(bio, clone, idx, len, 0, trim);
> + if (bio_integrity(bio))
> + bio_integrity_clone(clone, bio, GFP_NOIO);
> +
> + bio_advance(clone, (sector - clone->bi_iter.bi_sector) << 9);
> + bio->bi_iter.bi_size = len << 9;
> +
> + if (bio_integrity(bio))
> + bio_integrity_trim(clone, 0, len);
>  }
>  
>  static struct dm_target_io *alloc_tio(struct clone_info *ci,
> @@ -1182,10 +1137,7 @@ static int __send_empty_flush(struct clone_info *ci)
>  }
>  
>  static void __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
> -  sector_t sector, int nr_iovecs,
> -  unsigned short idx, unsigned short bv_count,
> -  unsigned offset, unsigned len,
> -  unsigned split_bvec)
> +  sector_t sector, unsigned len)
>  {
>   struct bio *bio = ci->bio;
>   struct dm_target_io *tio;
> @@ -1199,11 +1151,8 @@ static void __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti
>   num_target_bios = ti->num_write_bios(ti, bio);
>  
> - for (target_bio_nr = 0; target_bio_nr < num_target_bios; target_bio_nr++) {
> - tio = alloc_tio(ci, ti, nr_iovecs, target_bio_nr);
> - if (split_bvec)
> - clone_split_bio(tio, bio, sector, idx, offset, len);
> - else
> - clone_bio(tio, bio, sector, idx, bv_count, len);
> + tio = alloc_tio(ci, ti, 0, target_bio_nr);
> + clone_bio(tio, bio, sector, len);
>   __map_bio(tio);
>   }
>  }

Hey Kent,

I haven't been able to pinpoint the issue yet, but using your for-jens
branch, if I create a dm-thin volume with this lvm command:
lvcreate -L20G -V20G -T vg/pool --name thinlv

and try to format /dev/vg/thinlv with XFS the kernel warns and then
hangs with the following:

WARNING: CPU: 0 PID: 11789 at include/linux/bio.h:202 bio_advance+0xd0/0xe0()
Attempted to advance past end of bvec iter
Modules linked in: dm_thin_pool dm_bio_prison dm_persistent_data dm_bufio
libcrc32c skd(O) ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4
target_core_iblock target_core_file target_core_pscsi target_core_mod configfs
bnx2fc fcoe libfcoe 8021q libfc garp stp llc scsi_transport_fc scsi_tgt sunrpc
cpufreq_ondemand ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter
ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack
ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi
cxgb3 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi dm_mirror
dm_region_hash dm_log vhost_net macvtap macvlan vhost tun kvm_intel kvm
iTCO_wdt iTCO_vendor_support microcode i2c_i801 lpc_ich mfd_core igb
i2c_algo_bit i2c_core i7core_edac edac_core ixgbe dca ptp pps_core mdio dm_mod
ses enclosure sg acpi_cpufreq freq_table ext4 jbd2 mbcache sr_mod cdrom
pata_acpi ata_generic ata_piix sd_mod crc_t10dif crct10dif_common megaraid_sas
CPU: 0 PID: 11789 Comm: mkfs.xfs Tainted: GW  O 3.12.0-rc2.snitm+ #74
Hardware name: FUJITSU PRIMERGY RX300 S6/D2619, BIOS 6.00 Rev. 1.10.2619.N1 05/24/2011
 00ca 8803313156a8 8151e8e8 00ca
 8803313156f8 8803313156e8 8104c23c 8803
 8802dd524220 0400 8802ddfb9680 
