Re: [ceph-users] determining the source of io in the cluster
> I can see that the read ops come from the pool where we store VM
> volumes, but I can't trace the issue to a particular volume.

You can use this script: https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl

Note that it works with Filestore only. I adapted it for Bluestore for my own use, but only quickly, and the result isn't pretty.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
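(For Bluestore, where there are no per-object files on disk to walk, one rough alternative - an illustrative sketch, not the adapted script mentioned above - is to pull recent ops from the OSDs, e.g. via `ceph daemon osd.N dump_historic_ops`, and tally the image IDs embedded in the object names. For format-2 RBD images the data objects are named `rbd_data.<image-id>.<object-number>`, and the `<image-id>` can be matched to a volume through the `block_name_prefix` field of `rbd info`.)

```python
import re
from collections import Counter

# Data objects of format-2 RBD images are named
# rbd_data.<image-id>.<hex object number>; the image id can be matched
# to a volume via the block_name_prefix shown by `rbd info`.
RBD_DATA_RE = re.compile(r'rbd_data\.([0-9a-f]+)\.')

def tally_images(object_names):
    """Count how many ops touched each RBD image id."""
    counts = Counter()
    for name in object_names:
        m = RBD_DATA_RE.search(name)
        if m:
            counts[m.group(1)] += 1
    return counts

# Fabricated object names, shaped like those in dump_historic_ops output:
ops = [
    "rbd_data.1f2e3d4c5b6a.0000000000000042",
    "rbd_data.1f2e3d4c5b6a.0000000000000043",
    "rbd_data.aabbccddeeff.0000000000000001",
]
print(tally_images(ops).most_common())
# [('1f2e3d4c5b6a', 2), ('aabbccddeeff', 1)]
```

The image with the highest tally across the busiest OSDs is a good first suspect.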
Re: [ceph-users] determining the source of io in the cluster
As that is a small cluster, I hope you still don't have a lot of instances running... You can add "admin socket" to the client configuration section and then read performance information via that socket. IIRC it reports total bytes and total ops, but it should be simple to read it twice and calculate the difference. This will generate one socket per mounted volume (hence the "I hope you don't have many").

On Mon, Dec 18, 2017 at 4:36 PM, Josef Zelenka wrote:
> Hi everyone,
>
> we have recently deployed a Luminous (12.2.1) cluster on Ubuntu - three OSD
> nodes and three monitors; every OSD node has 3x 2TB SSD plus an NVMe drive
> for the block DB. We use it as a backend for our OpenStack cluster, so we
> store volumes there. In the last few days, the read ops rose to around
> 10k-25k constantly (it fluctuates between those two) and don't seem to go
> down. I can see that the read ops come from the pool where we store VM
> volumes, but I can't trace the issue to a particular volume. Is that even
> possible? Any experience with debugging this? Any info or advice is greatly
> appreciated.
>
> Thanks
>
> Josef Zelenka
>
> Cloudevelops
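(Since the admin-socket counters are cumulative, you have to sample `perf dump` twice and diff. A minimal sketch of that calculation, assuming the librbd-style counter name "rd" per image section - treat the exact key names and the section name below as assumptions and check them against your own `perf dump` output:)

```python
def iops_delta(dump_a, dump_b, interval_s):
    """Given two `perf dump` samples (parsed JSON) taken interval_s
    seconds apart, return read-ops/s per perf section (librbd creates
    one section per mounted image). The counter name "rd" is an
    assumption - verify it against your own dump."""
    rates = {}
    for section, before in dump_a.items():
        after = dump_b.get(section, {})
        if "rd" in before and "rd" in after:
            rates[section] = (after["rd"] - before["rd"]) / interval_s
    return rates

# Two fabricated samples, 10 seconds apart, for one hypothetical volume:
a = {"librbd-abc123-volumes-vm1-disk": {"rd": 1000, "rd_bytes": 4096000}}
b = {"librbd-abc123-volumes-vm1-disk": {"rd": 1500, "rd_bytes": 6144000}}
print(iops_delta(a, b, 10))  # {'librbd-abc123-volumes-vm1-disk': 50.0}
```

Running this over all client sockets and sorting the result should point at the volume driving the 10k-25k reads.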
Re: [ceph-users] determining the source of io in the cluster
Quoting Josef Zelenka (josef.zele...@cloudevelops.com):
> Hi everyone,
>
> we have recently deployed a Luminous (12.2.1) cluster on Ubuntu - three OSD
> nodes and three monitors; every OSD node has 3x 2TB SSD plus an NVMe drive
> for the block DB. We use it as a backend for our OpenStack cluster, so we
> store volumes there. In the last few days, the read ops rose to around
> 10k-25k constantly (it fluctuates between those two) and don't seem to go
> down. I can see that the read ops come from the pool where we store VM
> volumes, but I can't trace the issue to a particular volume. Is that even
> possible? Any experience with debugging this? Any info or advice is greatly
> appreciated.

Ceph has no "QoS" as of yet. You might want to collect the libvirt data from your domains (assuming you are using libvirt/KVM) with:

virsh domblkstat domain-id device

and see how it changes over time. That should give you an idea which VM uses the most IO. OpenStack may also expose metrics on the amount of IOPS VMs are doing.

Gr. Stefan

--
| BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351
| GPG: 0xD14839C6 +31 318 648 688 / i...@bit.nl
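(`virsh domblkstat` prints cumulative counters, one "device counter value" line each, so here too you sample twice and diff. A small parsing sketch - the sample output is fabricated, but the `rd_req`/`wr_req` field names follow virsh's documented output:)

```python
def parse_domblkstat(text):
    """Parse `virsh domblkstat <domain> <device>` output into a dict.
    Each line is "<device> <counter> <value>", e.g. "vda rd_req 120450"."""
    stats = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 3:
            _dev, key, value = parts
            stats[key] = int(value)
    return stats

# Fabricated sample output for one device of one domain:
sample = """\
vda rd_req 120450
vda rd_bytes 49336320
vda wr_req 3021
vda wr_bytes 12374016
"""
before = parse_domblkstat(sample)
# Take a second sample some seconds later and diff to get read IOPS:
#   iops = (after["rd_req"] - before["rd_req"]) / interval
print(before["rd_req"])  # 120450
```

Looping this over `virsh list --name` and ranking domains by the `rd_req` delta would single out the noisy VM.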
[ceph-users] determining the source of io in the cluster
Hi everyone,

we have recently deployed a Luminous (12.2.1) cluster on Ubuntu - three OSD nodes and three monitors; every OSD node has 3x 2TB SSD plus an NVMe drive for the block DB. We use it as a backend for our OpenStack cluster, so we store volumes there. In the last few days, the read ops rose to around 10k-25k constantly (it fluctuates between those two) and don't seem to go down. I can see that the read ops come from the pool where we store VM volumes, but I can't trace the issue to a particular volume. Is that even possible? Any experience with debugging this? Any info or advice is greatly appreciated.

Thanks

Josef Zelenka

Cloudevelops