Re: [PATCH 2/2] loop: Add PF_LESS_THROTTLE to block/loop device thread.

2017-06-16 Thread Christoph Hellwig
why isn't loop using kthread_create_worker()?  Why isn't the less
throttle a flag to kthread_create_worker()?  I hate all this open
coding..
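
For reference, the conversion being suggested would presumably look something like the sketch below.  kthread_create_worker() takes a flags argument (only KTW_FREEZABLE exists today), so the KTW_LESS_THROTTLE flag shown here is hypothetical, and lo->worker would become a pointer to a kthread_worker rather than an embedded struct:

/*
 * Hypothetical sketch, not an existing API: KTW_LESS_THROTTLE does not
 * exist; it would have to be added next to KTW_FREEZABLE and honoured
 * inside kthread_worker_fn().
 */
static int loop_prepare_queue(struct loop_device *lo)
{
	lo->worker = kthread_create_worker(KTW_LESS_THROTTLE,
					   "loop%d", lo->lo_number);
	if (IS_ERR(lo->worker))
		return PTR_ERR(lo->worker);
	return 0;
}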


[PATCH 2/2] loop: Add PF_LESS_THROTTLE to block/loop device thread.

2017-06-15 Thread NeilBrown
When a filesystem is mounted from a loop device, writes are
throttled by balance_dirty_pages() twice: once when writing
to the filesystem and once when the loop_handle_cmd() writes
to the backing file.  This double-throttling can trigger
positive feedback loops that create significant delays.  The
throttling at the lower level is seen by the upper level as
a slow device, so it throttles extra hard.

The PF_LESS_THROTTLE flag was created to handle exactly this
circumstance, though with an NFS filesystem mounted from a
local NFS server.  It reduces the throttling on the lower
layer so that it can proceed largely unthrottled.
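
The flag takes effect in the dirty-threshold calculation in
mm/page-writeback.c (domain_dirty_limits()): tasks with
PF_LESS_THROTTLE, like realtime tasks, are given roughly a 25%
higher dirty limit, so the loop worker keeps making progress
after ordinary writers have already been throttled.  A simplified
paraphrase, not the exact kernel code:

/*
 * Simplified paraphrase of the check in domain_dirty_limits(),
 * mm/page-writeback.c: flagged (and realtime) tasks get a ~25%
 * larger dirty threshold than ordinary tasks.
 */
if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
	bg_thresh += bg_thresh / 4;
	thresh += thresh / 4;
}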

To demonstrate this, create a filesystem on a loop device
and write (e.g. with dd) several large files which combine
to consume significantly more than the limit set by
/proc/sys/vm/dirty_ratio or dirty_bytes.  Measure the total
time taken.

When I do this directly on a device (no loop device) the
total time for several runs (mkfs, mount, write 200 files,
umount) is fairly stable: 28-35 seconds.
When I do this over a loop device the times are much worse
and less stable: 52-460 seconds, half below 100 seconds and
half above.
When I apply this patch, the times become stable again,
though not as fast as the no-loop-back case: 53-72 seconds.

There may be room for further improvement as the total overhead still
seems too high, but this is a big improvement.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Ming Lei 
Suggested-by: Michal Hocko 
Acked-by: Michal Hocko 
Signed-off-by: NeilBrown 
---
 drivers/block/loop.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 9c457ca6c55e..6ed7c4506951 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -843,10 +843,16 @@ static void loop_unprepare_queue(struct loop_device *lo)
 	kthread_stop(lo->worker_task);
 }
 
+static int loop_kthread_worker_fn(void *worker_ptr)
+{
+	current->flags |= PF_LESS_THROTTLE;
+	return kthread_worker_fn(worker_ptr);
+}
+
 static int loop_prepare_queue(struct loop_device *lo)
 {
 	kthread_init_worker(&lo->worker);
-	lo->worker_task = kthread_run(kthread_worker_fn,
+	lo->worker_task = kthread_run(loop_kthread_worker_fn,
 			&lo->worker, "loop%d", lo->lo_number);
 	if (IS_ERR(lo->worker_task))
 		return -ENOMEM;

