On Thu, Feb 04, 2021 at 08:30:01PM +0800, Anand Jain wrote:
> 
> Hi Michal,
> 
>  Did you get any chance to run the evaluation with this patchset?
> 
> Thanks, Anand
> 

Hi Anand,

Yes, I have now tested your policies. Sorry for the late response.

For the singlethreaded test:

  [global]
  name=btrfs-raid1-seqread
  filename=btrfs-raid1-seqread
  rw=read
  bs=64k
  direct=0
  numjobs=1
  time_based=0

  [file1]
  size=10G
  ioengine=libaio

results are:

- raid1c3 with 3 HDDs:
  3 x Seagate Barracuda ST2000DM008 (2TB)
  * pid policy
    READ: bw=215MiB/s (226MB/s), 215MiB/s-215MiB/s (226MB/s-226MB/s),
    io=10.0GiB (10.7GB), run=47537-47537msec
  * latency policy
    READ: bw=219MiB/s (229MB/s), 219MiB/s-219MiB/s (229MB/s-229MB/s),
    io=10.0GiB (10.7GB), run=46852-46852msec
  * device policy - didn't test it here, I guess it doesn't make sense
    to check it on non-mixed arrays ;)
- raid1c3 with 2 HDDs and 1 SSD:
  2 x Seagate Barracuda ST2000DM008 (2TB)
  1 x Crucial CT256M550SSD1 (256GB)
  * pid policy
    READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s),
    io=10.0GiB (10.7GB), run=46749-46749msec
  * latency policy
    READ: bw=517MiB/s (542MB/s), 517MiB/s-517MiB/s (542MB/s-542MB/s),
    io=10.0GiB (10.7GB), run=19823-19823msec
  * device policy
    READ: bw=517MiB/s (542MB/s), 517MiB/s-517MiB/s (542MB/s-542MB/s),
    io=10.0GiB (10.7GB), run=19810-19810msec
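
To put the mixed-array numbers in perspective, here is a quick
back-of-the-envelope comparison of the run times reported above
(a sketch using only the figures from this mail):

```python
# Single-threaded run times (msec) on the 2 HDD + 1 SSD raid1c3
# array, taken from the fio READ lines above.
runtimes_ms = {"pid": 46749, "latency": 19823, "device": 19810}

# Speedup of each policy relative to the pid policy.
for policy, ms in runtimes_ms.items():
    print(f"{policy}: {runtimes_ms['pid'] / ms:.2f}x")
# pid: 1.00x, latency: 2.36x, device: 2.36x
```

So on this mixed array both latency and device policies finish the
same read in well under half the time the pid policy needs.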

For the multithreaded test:

  [global]
  name=btrfs-raid1-seqread
  filename=btrfs-raid1-seqread
  rw=read
  bs=64k
  direct=0
  numjobs=8
  time_based=0

  [file1]
  size=10G
  ioengine=libaio

results are:

- raid1c3 with 3 HDDs:
  3 x Seagate Barracuda ST2000DM008 (2TB)
  * pid policy
    READ: bw=1608MiB/s (1686MB/s), 201MiB/s-201MiB/s (211MB/s-211MB/s),
    io=80.0GiB (85.9GB), run=50948-50949msec
  * latency policy
    READ: bw=1515MiB/s (1588MB/s), 189MiB/s-189MiB/s (199MB/s-199MB/s),
    io=80.0GiB (85.9GB), run=54081-54084msec
- raid1c3 with 2 HDDs and 1 SSD:
  2 x Seagate Barracuda ST2000DM008 (2TB)
  1 x Crucial CT256M550SSD1 (256GB)
  * pid policy
    READ: bw=1843MiB/s (1932MB/s), 230MiB/s-230MiB/s (242MB/s-242MB/s),
    io=80.0GiB (85.9GB), run=44449-44450msec
  * latency policy
    READ: bw=4213MiB/s (4417MB/s), 527MiB/s-527MiB/s (552MB/s-552MB/s),
    io=80.0GiB (85.9GB), run=19444-19446msec
  * device policy
    READ: bw=4196MiB/s (4400MB/s), 525MiB/s-525MiB/s (550MB/s-550MB/s),
    io=80.0GiB (85.9GB), run=19522-19522msec
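
The same kind of comparison for the multithreaded runs, this time on
aggregate bandwidth (again a sketch, using only the numbers above):

```python
# Aggregate READ bandwidth (MiB/s) from the multithreaded runs above.
bw = {
    "3xHDD":       {"pid": 1608, "latency": 1515},
    "2xHDD+1xSSD": {"pid": 1843, "latency": 4213, "device": 4196},
}

# Ratio of each policy's aggregate bandwidth to the pid policy's.
for array, policies in bw.items():
    for policy, mib_s in policies.items():
        print(f"{array} {policy}: {mib_s / policies['pid']:.2f}x vs pid")
# 3xHDD latency: 0.94x; 2xHDD+1xSSD latency: 2.29x, device: 2.28x
```

This shows the same pattern: a ~6% regression on the all-HDD array,
but a ~2.3x win on the mixed one.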

To sum it up - I think that your policies are indeed a very good match
for mixed (nonrot and rot) arrays.

On all-HDD arrays they perform either slightly better or slightly
worse than the pid policy, depending on the test.

I've just sent out my proposal of a roundrobin policy, which seems to
give better performance than your policies on all-HDD arrays (and
better than the pid policy in all cases):
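
For anyone wanting to reproduce these runs, the read policy is
switched at runtime through sysfs. A rough sketch follows; the exact
file name and the accepted policy strings depend on which patchset
revision is applied, so treat the path below as an assumption:

```shell
# Assumed sysfs knob exposed by the read-policy patches; the exact
# path and accepted values may differ between patchset revisions.
FSID=$(findmnt -no UUID /mnt)            # UUID of the mounted btrfs fs
POLICY="/sys/fs/btrfs/$FSID/read_policy"

cat "$POLICY"             # list available policies, current one in brackets
echo latency > "$POLICY"  # switch the read policy for this filesystem
```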

https://patchwork.kernel.org/project/linux-btrfs/patch/20210209203041.21493-7-mroste...@suse.de/

Cheers,
Michal