Hi Anand,

I tested this policy with fio and dstat. Overall it performs really
well. On my raid1c3 array with two HDDs and one SSD (the SSD being the
last device), I'm getting the following results.


Michal,

 Thank you for verifying. More below...

With direct=0:

   Run status group 0 (all jobs):
      READ: bw=3560MiB/s (3733MB/s), 445MiB/s-445MiB/s (467MB/s-467MB/s),
      io=3129GiB (3360GB), run=900003-900013msec

With direct=1:

   Run status group 0 (all jobs):
      READ: bw=520MiB/s (545MB/s), 64.9MiB/s-65.0MiB/s (68.1MB/s-68.2MB/s),
      io=457GiB (490GB), run=900001-900001msec

However, I was also running dstat at the same time, and I noticed that
the reads sometimes stop for ~15-20 seconds. For example:

   ----system---- --dsk/sdb-- --dsk/sdc-- --dsk/sdd--
   20-01 00:37:21|   0     0 :   0     0 : 509M    0
   20-01 00:37:22|   0     0 :   0     0 : 517M    0
   20-01 00:37:23|   0     0 :   0     0 : 507M    0
   20-01 00:37:24|   0     0 :   0     0 : 518M    0
   20-01 00:37:25|   0     0 :   0     0 :  22M    0
   20-01 00:37:26|   0     0 :   0     0 :   0     0
   20-01 00:37:27|   0     0 :   0     0 :   0     0
   20-01 00:37:28|   0     0 :   0     0 :   0     0
   20-01 00:37:29|   0     0 :   0     0 :   0     0
   20-01 00:37:30|   0     0 :   0     0 :   0     0
   20-01 00:37:31|   0     0 :   0     0 :   0     0
   20-01 00:37:32|   0     0 :   0     0 :   0     0
   20-01 00:37:33|   0     0 :   0     0 :   0     0
   20-01 00:37:34|   0     0 :   0     0 :   0     0
   20-01 00:37:35|   0     0 :   0     0 :   0     0
   20-01 00:37:36|   0     0 :   0     0 :   0     0
   20-01 00:37:37|   0     0 :   0     0 :   0     0
   20-01 00:37:38|   0     0 :   0     0 :   0     0
   20-01 00:37:39|   0     0 :   0     0 :   0     0
   20-01 00:37:40|   0     0 :   0     0 :   0     0
   20-01 00:37:41|   0     0 :   0     0 :   0     0
   20-01 00:37:42|   0     0 :   0     0 :   0     0
   20-01 00:37:43|   0     0 :   0     0 :   0     0
   20-01 00:37:44|   0     0 :   0     0 :   0     0
   20-01 00:37:45|   0     0 :   0     0 :   0     0
   20-01 00:37:46|   0     0 :   0     0 :  55M    0
   20-01 00:37:47|   0     0 :   0     0 : 516M    0
   20-01 00:37:48|   0     0 :   0     0 : 515M    0
   20-01 00:37:49|   0     0 :   0     0 : 516M    0
   20-01 00:37:50|   0     0 :   0     0 : 520M    0
   20-01 00:37:51|   0     0 :   0     0 : 520M    0
   20-01 00:37:52|   0     0 :   0     0 : 514M    0

Here is the full log:

https://susepaste.org/16928336

I never noticed that happening with the PID policy. Could that be
caused by reading the part stats for all CPUs while selecting the
mirror?
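
(For context, below is a rough, self-contained userspace sketch of what
"reading the part stats for all CPUs" amounts to. This is not the btrfs
or block-layer code; the CPU count, the average-latency heuristic and
the sample numbers are made up for illustration. The point is only that
each stat read is a summation loop over every possible CPU, repeated
per mirror for every read that gets dispatched.)

#include <stdint.h>
#include <stdio.h>

#define NR_POSSIBLE_CPUS 64   /* assumed CPU count; stands in for nr_cpu_ids */
#define NR_MIRRORS        3   /* raid1c3: three copies to choose from */

/* Per-CPU slice of the stats kept for one block device (simplified). */
struct disk_stats {
        uint64_t read_ios;
        uint64_t read_nsecs;
};

struct mirror {
        const char *name;
        struct disk_stats percpu[NR_POSSIBLE_CPUS];
};

/* The part_stat_read()-style walk: sum one counter over every possible CPU. */
static uint64_t stat_sum(const struct mirror *m,
                         uint64_t (*field)(const struct disk_stats *))
{
        uint64_t sum = 0;

        for (int cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
                sum += field(&m->percpu[cpu]);
        return sum;
}

static uint64_t read_ios(const struct disk_stats *s)   { return s->read_ios; }
static uint64_t read_nsecs(const struct disk_stats *s) { return s->read_nsecs; }

/*
 * Pick the mirror with the lowest average completed-read latency (the
 * heuristic here is only an illustration, not necessarily what the
 * policy does).  Each call performs NR_MIRRORS * NR_POSSIBLE_CPUS * 2
 * additions -- pure in-memory CPU work per read.
 */
static int select_mirror(const struct mirror mirrors[NR_MIRRORS])
{
        uint64_t best_avg = UINT64_MAX;
        int best = 0;

        for (int i = 0; i < NR_MIRRORS; i++) {
                uint64_t ios   = stat_sum(&mirrors[i], read_ios);
                uint64_t nsecs = stat_sum(&mirrors[i], read_nsecs);
                uint64_t avg   = ios ? nsecs / ios : 0;

                if (avg < best_avg) {
                        best_avg = avg;
                        best = i;
                }
        }
        return best;
}

int main(void)
{
        struct mirror mirrors[NR_MIRRORS] = {
                { .name = "sdb (HDD)" },
                { .name = "sdc (HDD)" },
                { .name = "sdd (SSD)" },
        };

        /* Made-up history: the SSD completed its reads in far less time. */
        mirrors[0].percpu[0] = (struct disk_stats){ 100, 900000000ULL };
        mirrors[1].percpu[1] = (struct disk_stats){ 100, 850000000ULL };
        mirrors[2].percpu[2] = (struct disk_stats){ 100,  90000000ULL };

        printf("selected mirror: %s\n", mirrors[select_mirror(mirrors)].name);
        return 0;
}

Compiled with plain gcc it just prints the mirror the made-up stats
favor; the per-CPU walk itself never touches the disks, which is worth
keeping in mind when judging whether it could explain a 15-20 s window
of zero disk I/O.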


 I ran the fio tests again, this time with dstat in another window. I
 don't see any such stalls; the read numbers stayed continuous until
 fio finished. Could you please check with the fio command below? Also,
 could you please share your fio command options?

fio \
--filename=/btrfs/largefile \
--directory=/btrfs \
--filesize=50G \
--size=50G \
--bs=64k \
--ioengine=libaio \
--rw=read \
--direct=1 \
--numjobs=1 \
--group_reporting \
--thread \
--name iops-test-job

 Is it system-specific?

Thanks.
Anand


Michal

