Hi Tejun,

Sorry for the delayed reply, I was on vacation last week.

The problem still exists in the current code of 4.16.0-rc2;
detailed test results are below. If further information is needed, please let me know.

Thanks.

———————————————————————————————————————————————
Both read and write bps are limited to 10 MB/s in the blkio cgroup (v1 & v2)

$ uname -r
4.16.0-rc2+
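
For completeness, the 10 MB/s limits were presumably configured along these lines. The cgroup names and mount points here are assumptions for illustration; the device number 8:16 and the 10485760 bps value are taken from the output below.

```shell
# cgroup v1 blkio: cap both directions at 10 MB/s for device 8:16
# ("test" is an assumed cgroup name)
echo "8:16 10485760" > /sys/fs/cgroup/blkio/test/blkio.throttle.read_bps_device
echo "8:16 10485760" > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device

# cgroup v2: the equivalent limits via io.max
# (the runs below set rbps and wbps one at a time, as shown by `cat io.max`)
echo "8:16 rbps=10485760 wbps=10485760" > /sys/fs/cgroup/test/io.max
```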


[Without this patch]----------------------------------------

CGROUP V1 (direct write):

$ dd if=/dev/zero of=/mnt/sdb1/20/test bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.402 s, 10.5 MB/s

8:16 Read 16384
8:16 Write 2684354560
8:16 Sync 2684370944
8:16 Async 0
8:16 Total 2684370944

CGROUP V1 (read):

$ dd if=/mnt/sdb1/20/test of=/dev/zero bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.412 s, 10.5 MB/s

8:16 Read 4831838208
8:16 Write 0
8:16 Sync 4831838208
8:16 Async 0
8:16 Total 4831838208
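
The scale of the overcounting in the v1 numbers above is easy to check: dd actually transferred 1 GiB (1073741824 bytes) in every run, so the counters are inflated by a clean multiple.

```python
# dd transferred exactly 1 GiB in each run
actual = 1024 * 1024 * 1024  # 1073741824 bytes

# v1 direct write: io_service_bytes reports 2684354560 written bytes,
# i.e. the same bio bytes were counted 2.5x on average
assert 2684354560 / actual == 2.5

# v1 read: 4831838208 reported bytes is 4.5x the real transfer
assert 4831838208 / actual == 4.5
```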


CGROUP V2 (direct write):

$ cat io.max
8:16 rbps=max wbps=10485760 riops=max wiops=max

$ dd if=/dev/zero of=/mnt/sdb1/20/test bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.408 s, 10.5 MB/s

8:16 rbytes=24576 wbytes=2684354560 rios=5 wios=4096


CGROUP V2 (buffered write):

$ dd if=/dev/zero of=/mnt/sdb1/20/test bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.637822 s, 1.7 GB/s

8:16 rbytes=0 wbytes=4831838208 rios=0 wios=4096

CGROUP V2 (read):

$ cat io.max
8:16 rbps=10485760 wbps=max riops=max wiops=max

$ dd if=/mnt/sdb1/20/test of=/dev/zero bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.409 s, 10.5 MB/s

8:16 rbytes=4831846400 wbytes=0 rios=4097 wios=0
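
The v2 counters show the same inflation factors as v1, plus a few KB of extra I/O (presumably metadata; that attribution is my assumption, not something the logs prove):

```python
actual = 1024 * 1024 * 1024  # 1 GiB actually transferred by dd

# v2 direct write: same 2.5x inflation as v1
assert 2684354560 == int(2.5 * actual)

# v2 buffered write: 4.5x inflation
assert 4831838208 == int(4.5 * actual)

# v2 read: 4.5x inflation plus 8192 extra bytes on top
assert 4831846400 - int(4.5 * actual) == 8192
```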

[With this patch]----------------------------------------

CGROUP V1 (direct write):

$ dd if=/dev/zero of=/mnt/sdb1/20/test bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.402 s, 10.5 MB/s

8:16 Read 24576
8:16 Write 1073741824
8:16 Sync 1073766400
8:16 Async 0
8:16 Total 1073766400

CGROUP V1 (read):

$ dd if=/mnt/sdb1/20/test of=/dev/zero bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.406 s, 10.5 MB/s

8:16 Read 1073741824
8:16 Write 0
8:16 Sync 1073741824
8:16 Async 0
8:16 Total 1073741824

CGROUP V2 (direct write):

$ dd if=/dev/zero of=/mnt/sdb1/20/test bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.407 s, 10.5 MB/s

8:16 rbytes=16384 wbytes=1073741824 rios=4 wios=1024


CGROUP V2 (buffered write):

$ dd if=/dev/zero of=/mnt/sdb1/20/test bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.650783 s, 1.6 GB/s

8:16 rbytes=0 wbytes=1073741824 rios=0 wios=512

CGROUP V2 (read):

$ dd if=/mnt/sdb1/20/test of=/dev/zero bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 102.411 s, 10.5 MB/s

8:16 rbytes=1077314048 wbytes=0 rios=572 wios=0
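
With the patch applied, the write counters match the real transfer exactly, and the read counter carries only a small extra on top of it (plausibly readahead or metadata; again, an assumption on my part):

```python
actual = 1024 * 1024 * 1024  # 1 GiB actually transferred by dd

# with the patch, v1 and v2 write counters equal the real transfer
assert 1073741824 == actual

# v2 read rbytes exceeds the transfer by ~3.4 MiB, not by a 4.5x multiple
assert 1077314048 - actual == 3572224
```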


———————————————————————————————————————————————


> On Feb 13, 2018, at 10:43 PM, Tejun Heo <t...@kernel.org> wrote:
> 
> On Tue, Feb 13, 2018 at 02:45:50PM +0800, Chengguang Xu wrote:
>> In current throttling/upper limit policy of blkio cgroup
>> blkio.throttle.io_service_bytes does not exactly represent
>> the number of bytes issued to the disk by the group, sometimes
>> this number could be counted multiple times of real bytes.
>> This fix introduces BIO_COUNTED flag to avoid multiple counting
>> for same bio.
>> 
>> Signed-off-by: Chengguang Xu <cgxu...@icloud.com>
> 
> We had a series of fixes / changes for this problem during the last
> cycle.  Can you please see whether the current linus master has the
> same problem.
> 
> Thanks.
> 
> -- 
> tejun