Hi,

we're using Ceph as S3-compatible storage to serve static files (mostly
css/js/images plus some videos), and I've noticed what looks like huge
read amplification on the index pool.

Incoming traffic is around 15k req/sec (mostly sub-1MB requests), but
the index pool is getting hammered:

pool pl-war1.rgw.buckets.index id 10
  client io 632 MiB/s rd, 277 KiB/s wr, 129.92k op/s rd, 415 op/s wr

pool pl-war1.rgw.buckets.data id 11
  client io 4.5 MiB/s rd, 6.8 MiB/s wr, 640 op/s rd, 1.65k op/s wr

so the index pool is seeing nearly an order of magnitude more read ops
(129.92k op/s) than we have incoming requests (~15k/s), i.e. roughly
8-9 index reads per S3 request.
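
For reference, the per-pool numbers above are the client IO reported by
ceph osd pool stats; to watch them live I just do something like:

  watch -n1 'ceph osd pool stats pl-war1.rgw.buckets.index'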

We're running 15.2.3, with nothing special in terms of tuning aside
from disabling some logging so as not to overflow the logs.
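Concretely that's just a few debug/log knobs turned down in ceph.conf,
roughly along these lines (from memory, exact knobs and values may
differ):

  [global]
  debug_ms  = 0/0
  debug_rgw = 0/0
  debug_osd = 0/0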

We had a similar test cluster on 12.x (on way slower hardware) taking
similar traffic, and we never observed a difference of that magnitude.

When I enable debug on an affected OSD, all I get is a spam of:

2020-06-17T12:35:05.700+0200 7f80694c4700 10 bluestore(/var/lib/ceph/osd/ceph-20) omap_get_header 10.51_head oid #10:8b1a34d8:::.dir.88d4f221-0da5-444d-81a8-517771278350.454759.8.214:head# = 0
2020-06-17T12:35:05.700+0200 7f80694c4700 10 bluestore(/var/lib/ceph/osd/ceph-20) get_omap_iterator 10.51_head #10:8b1a34d8:::.dir.88d4f221-0da5-444d-81a8-517771278350.454759.8.214:head#
  [the get_omap_iterator line repeats 7 more times for this object]
2020-06-17T12:35:05.704+0200 7f80694c4700 10 bluestore(/var/lib/ceph/osd/ceph-20) omap_get_header 10.51_head oid #10:8b0d75b0:::.dir.88d4f221-0da5-444d-81a8-517771278350.454759.8.222:head# = 0
2020-06-17T12:35:05.704+0200 7f80694c4700 10 bluestore(/var/lib/ceph/osd/ceph-20) get_omap_iterator 10.51_head #10:8b0d75b0:::.dir.88d4f221-0da5-444d-81a8-517771278350.454759.8.222:head#
  [repeats 7 more times]
2020-06-17T12:35:05.716+0200 7f806d4cc700 10 bluestore(/var/lib/ceph/osd/ceph-20) omap_get_header 10.51_head oid #10:8b5ed205:::.dir.88d4f221-0da5-444d-81a8-517771278350.454759.8.151:head# = 0
2020-06-17T12:35:05.716+0200 7f806d4cc700 10 bluestore(/var/lib/ceph/osd/ceph-20) get_omap_iterator 10.51_head #10:8b5ed205:::.dir.88d4f221-0da5-444d-81a8-517771278350.454759.8.151:head#
  [repeats 7 more times]
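
So each hit on a bucket index shard object (.dir.<bucket marker>.<shard>)
shows up as one omap_get_header followed by eight get_omap_iterator
calls back to back, which roughly matches the 8-9 index ops per request
above. For reference, that debug level was bumped at runtime over the
admin socket with something like:

  ceph daemon osd.20 config set debug_bluestore 10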



-- 
Mariusz Gronczewski (XANi) <[email protected]>
GnuPG: 0xEA8ACE64
http://devrandom.pl