On Tue, Aug 29, 2017 at 08:13:39AM -0700, Shaohua Li wrote:
> On Tue, Aug 29, 2017 at 05:56:05PM +0800, Ming Lei wrote:
> > On Thu, Aug 24, 2017 at 12:24:53PM -0700, Shaohua Li wrote:
> > > From: Shaohua Li <s...@fb.com>
> > > 
> > > Currently loop disables merge. While it makes sense for buffer IO mode,
> > > directio mode can benefit from request merge. Without merge, loop could
> > > send small size IO to underlayer disk and harm performance.
> > 
> > Hi Shaohua,
> > 
> > IMO no matter if merge is used, loop always sends page by page
> > to VFS in both dio or buffer I/O.
> 
> Why do you think so?

do_blockdev_direct_IO() still maps the pages from the iov_iter one by
one, so when you see bigger requests, I guess it might be the plug
merge that is doing the work.
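
For reference, do_blockdev_direct_IO() wraps the submission in a plug,
roughly like below (simplified and from memory, so the details may not
match fs/direct-io.c exactly):

	struct blk_plug plug;

	blk_start_plug(&plug);
	/* do_direct_IO() walks the iov_iter page by page and submits bios */
	retval = do_direct_IO(dio, &sdio, &map_bh);
	blk_finish_plug(&plug);

Contiguous bios queued while the plug is held can be merged into
bigger requests before they are dispatched to the low level disk.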

>  
> > Also if merge is enabled on loop, that means merge is run
> > on both loop and low level block driver, and not sure if we
> > can benefit from that.
> 
> why does merge still happen in low level block driver?

Because the scheduler is still working on the low level disk. My
question is: if the scheduler on loop can merge these requests, why
doesn't the scheduler on the low level disk merge them already?
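
Just so we are talking about the same thing: today loop_add() disables
merging on the loop queue, roughly like below (from memory, the exact
code may differ):

	/*
	 * It doesn't make sense to enable merge because the I/O
	 * submitted to backing file is handled page by page.
	 */
	queue_flag_set_unlocked(QUEUE_FLAG_NOMERGES, lo->lo_queue);

and your patch presumably keeps that only for the buffered I/O path
while allowing merge in dio mode.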

> 
> > 
> > So Could you provide some performance data about this patch?
> 
> In my virtual machine, a workload improves from ~20M/s to ~50M/s. And I
> clearly see the request size becomes bigger.

Could you share with us what the low level disk is?

-- 
Ming
