Hi,

At least it used to be like that - I'm not sure if that has changed. I
believe this is also part of why it is advised to go with the same kind of
hardware and setup if possible.

Since RBD images are at least spread as objects throughout the cluster,
you'll probably have to wait for a slow disk when reading. Writes still go
journal -> disk, so if you have an SSD journal in front of the SATA disks
you probably won't notice it that much unless you're doing lots of and/or
heavy writes.
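
A rough sketch of what that spreading looks like, using the
python-rados/python-rbd bindings (the conf path, pool name and image name
are just placeholders) - a single image is carved into many fixed-size
objects and each object can land on a different OSD, so one slow disk can
show up in any read:

    import rados, rbd

    # connect to the cluster - conf path and pool/image names are placeholders
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    img = rbd.Image(ioctx, 'myimage')
    info = img.stat()
    print("image of %d bytes is split into %d objects of %d bytes each"
          % (info['size'], info['num_objs'], info['obj_size']))
    print("object name prefix: %s" % info['block_name_prefix'])

    img.close()
    ioctx.close()
    cluster.shutdown()

You can then feed one of those object names to 'ceph osd map <pool> <object>'
to see which OSDs it actually sits on.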

You can peek into it via the admin socket and get some perf stats for each
OSD (IIRC it is 'perf dump' you want). You could set something up to poll
at given intervals, graph it, and probably spot trends/slow disks that way.
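
Something like this is roughly what I mean by polling (the osd id, socket
path and interval are just examples, and the counter names move around a
bit between versions - dump the whole thing once to see what yours has):

    import json, subprocess, time

    SOCKET = '/var/run/ceph/ceph-osd.0.asok'   # adjust to the osd you want

    def perf_dump(sock):
        # same output as running 'ceph --admin-daemon <sock> perf dump' by hand
        out = subprocess.check_output(
            ['ceph', '--admin-daemon', sock, 'perf', 'dump'])
        return json.loads(out)

    while True:
        stats = perf_dump(SOCKET)
        osd = stats.get('osd', {})
        # these come back as avgcount/sum pairs you can turn into averages
        print(osd.get('op_r_latency'), osd.get('op_w_latency'))
        time.sleep(60)

Graph the per-osd latencies next to each other and a slow disk should stick
out.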

I think it is a manual process to locate a slow drive and either drop it
from the cluster or give it a lower weight.
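
If you do find one, it's just the usual commands (the osd id and weight
here are placeholders - check 'ceph osd tree' first to see what weights
you're working with):

    import subprocess

    OSD_ID = 12   # placeholder - the slow osd

    # give it a lower crush weight so it holds less data...
    subprocess.check_call(
        ['ceph', 'osd', 'crush', 'reweight', 'osd.%d' % OSD_ID, '0.5'])

    # ...or mark it out to drop it from data placement entirely
    # subprocess.check_call(['ceph', 'osd', 'out', str(OSD_ID)])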

If possible, I'd suggest toying with something like fio/bonnie++ in a guest
and running some tests with and without the osd/node in question - you'll
know for certain then.
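
For example, inside the guest, something along these lines (device path and
numbers are just examples - point it at a scratch disk, not one with data
you care about), once with the suspect osd in and once with it out or
reweighted down:

    import subprocess

    # 60s of 4k direct random reads against a scratch rbd-backed device
    subprocess.check_call([
        'fio',
        '--name=rbd-latency-test',
        '--filename=/dev/vdb',     # placeholder scratch device in the guest
        '--rw=randread',
        '--bs=4k',
        '--iodepth=16',
        '--ioengine=libaio',
        '--direct=1',
        '--runtime=60',
        '--time_based',
    ])

Compare latency/IOPS between the two runs and you'll see how much that
node actually costs you.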

Cheers,
Martin


On Tue, Jan 28, 2014 at 4:22 PM, Gautam Saxena <[email protected]> wrote:

> If one node which happens to have a single RAID 0 hard disk is "slow", would
> that impact the whole Ceph cluster? That is, when VMs interact with the rbd
> pool to read and write data, would the KVM client "wait" for that slow
> hard disk/node to return with the requested data, thus making that slow
> hard disk/node the ultimate bottleneck? Or, would KVM/Ceph be smart enough to
> get the needed data from whichever node is ready to serve it up? That is,
> KVM/Ceph would request all possible OSDs to return data, but if one OSD is
> "done" with its request, it can choose to return more data that the "slow"
> hard disk/node still hasn't returned... I'm trying to decide whether to
> remove the slow hard disk/node from the Ceph cluster (depending on how Ceph
> works).
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
