On 2015/6/17 16:24, Ilya Dryomov wrote:
> On Wed, Jun 17, 2015 at 6:04 AM, juncheng bai <[email protected]> wrote:
>> Hi. Yeah, you are right: with the default max_segments, the request size
>> can still reach the object size, because the bi_phys_segments of a bio can
>> be recounted; it is just not guaranteed. I want to fully understand
>> bi_phys_segments, so I hope you can give me some information. Thanks.
>> The test information is shown below.
>>
>> The systemtap script:
>>
>> global greq=0;
>>
>> probe kernel.function("bio_attempt_back_merge")
>> {
>>     greq = pointer_arg(2);
>> }
>>
>> probe kernel.function("bio_attempt_back_merge").return
>> {
>>     printf("after req addr:%p req segments:%d req offset:%lu req length:%lu\n",
>>         greq,
>>         @cast(greq, "request")->nr_phys_segments,
>>         @cast(greq, "request")->__sector * 512,
>>         @cast(greq, "request")->__data_len);
>> }
>>
>> probe kernel.function("blk_mq_start_request")
>> {
>>     printf("req addr:%p nr_phys_segments:%d, offset:%lu len:%lu\n",
>>         pointer_arg(1),
>>         @cast(pointer_arg(1), "request")->nr_phys_segments,
>>         @cast(pointer_arg(1), "request")->__sector * 512,
>>         @cast(pointer_arg(1), "request")->__data_len);
>> }
>>
>> Test command:
>>
>> dd if=/dev/zero of=/dev/rbd0 bs=4M count=2 oflag=direct seek=100
>>
>> Case one: blk_queue_max_segments(q, 256);
>> The output of stap:
>>
>> after req addr:0xffff880ff60a08c0 req segments:73 req offset:419430400 req length:2097152
>> after req addr:0xffff880ff60a08c0 req segments:73 req offset:419430400 req length:2097152
>> after req addr:0xffff880ff60a0a80 req segments:186 req offset:421527552 req length:1048576
>> req addr:0xffff880ff60a08c0 nr_phys_segments:73, offset:419430400 len:2097152
>> req addr:0xffff880ff60a0a80 nr_phys_segments:186, offset:421527552 len:1048576
>> req addr:0xffff880ff60a0c40 nr_phys_segments:232, offset:422576128 len:1048576
>> after req addr:0xffff880ff60a0c40 req segments:73 req offset:423624704 req length:2097152
>> after req addr:0xffff880ff60a0c40 req segments:73 req offset:423624704 req length:2097152
>> after req addr:0xffff880ff60a0e00 req segments:186 req offset:425721856 req length:1048576
>> req addr:0xffff880ff60a0c40 nr_phys_segments:73, offset:423624704 len:2097152
>> req addr:0xffff880ff60a0e00 nr_phys_segments:186, offset:425721856 len:1048576
>> req addr:0xffff880ff60a0fc0 nr_phys_segments:232, offset:426770432 len:1048576
>>
>> Case two: blk_queue_max_segments(q, segment_size / PAGE_SIZE);
>> The output of stap:
>>
>> after req addr:0xffff88101c9a0000 req segments:478 req offset:419430400 req length:4194304
>> req addr:0xffff88101c9a0000 nr_phys_segments:478, offset:419430400 len:4194304
>> after req addr:0xffff88101c9a0000 req segments:478 req offset:423624704 req length:4194304
>> req addr:0xffff88101c9a0000 nr_phys_segments:478, offset:423624704 len:4194304
>>
>> 1. The block layer decides the size of a request based on the settings of
>>    max_sectors and max_segments.
>> 2. We have already set max_sectors to an object's size, so we should also
>>    set max_segments so that bios can merge into a request of that size.
>
> Yeah, I also tried to explain this in the commit description [1].
> Initially I had BIO_MAX_PAGES in there, and realistically I still think
> it's enough for most cases, but discussion with you made me consider the
> readv/writev case, so I changed it in my patch to max_hw_sectors
> (i.e. segment_size / SECTOR_SIZE) - this ensures that max_segments will
> never be a limiting factor, even in theory.
>
> [1] https://github.com/ceph/ceph-client/commit/2d8006795564fbc0fa68d75758f605fe9f7a108e
>
> Thanks,
>
>                 Ilya
Yeah, I agree with you. Thanks.

----
juncheng bai
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
