[email protected] said:
> Eeek, I was going through my email backlog and came across this thread again.
> Everything here does look good; the data distribution, etc., is pretty
> reasonable. If you're still testing, we can at least get a rough idea of
> the sorts of IO the OSD is doing by looking at the perfcounters out of
> the admin socket:
>
>     ceph --admin-daemon /path/to/socket perf dump
>
> (I believe the default path is /var/run/ceph/ceph-osd.*.asok)
Hi Greg,
Thanks for your help. I've been experimenting with other things,
so the cluster has a different arrangement now, but the performance
seems to be about the same. I've now broken down the RAID arrays into
JBOD disks, and I'm running one OSD per disk, recklessly ignoring
the warning about syncfs being missing. (Performance doesn't seem
any better or worse than it was before when rsyncing a large directory
of small files.) I've also added another OSD node into the mix, with
a different disk controller.
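In case it helps to picture the new layout, each OSD now sits on its
own disk along roughly these lines (device names, mount points, and
ids below are just placeholders, not the exact ones in use here):

    # one filesystem per disk, one OSD data directory per filesystem
    mkfs.xfs /dev/sdb
    mkdir -p /data/osd.100
    mount /dev/sdb /data/osd.100
    # ceph.conf points "osd data" for osd.100 at that mount point;
    # then the OSD is initialized and started as usual:
    ceph-osd -i 100 --mkfs --mkjournal
    service ceph start osd.100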
For what it's worth, here are "perf dump" outputs for a
couple of OSDs running on the old and new hardware, respectively:
http://ayesha.phys.virginia.edu/~bryan/perf.osd.200.txt
http://ayesha.phys.virginia.edu/~bryan/perf.osd.100.txt
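These are just the output of the admin-socket "perf dump" you
suggested; a loop along these lines should collect one from every OSD
on a host (assuming the default asok path):

    # grab a perfcounter dump from every OSD admin socket on this host
    for sock in /var/run/ceph/ceph-osd.*.asok; do
        id=$(basename $sock .asok)           # e.g. "ceph-osd.100"
        ceph --admin-daemon $sock perf dump > /tmp/perf.$id.txt
    done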
If you could take a look at them and let me know if you see
anything enlightening, I'd really appreciate it.
Thanks,
Bryan
--
========================================================================
Bryan Wright |"If you take cranberries and stew them like
Physics Department | applesauce, they taste much more like prunes
University of Virginia | than rhubarb does." -- Groucho
Charlottesville, VA 22901|
(434) 924-7218 | [email protected]
========================================================================