Without looking at your screencast - some thoughts:

Two mons means increasing the failure probability, not reducing it: if you lose one
mon, the other mon will stop working. This is intentional. You need at least three
mons to form a quorum, so using Ceph with only two monitor nodes is a bad idea.
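Just as a sketch (hostnames and addresses below are made up, adjust to your
environment), a three-mon setup in ceph.conf would look roughly like this:

[mon.a]
    host = node-a
    mon addr = 192.168.1.10:6789

[mon.b]
    host = node-b
    mon addr = 192.168.1.11:6789

[mon.c]
    host = node-c
    mon addr = 192.168.1.12:6789

With three mons you can lose any single one and the remaining two still form a
majority; "ceph quorum_status" shows which mons are currently in quorum.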
As for the distribution across disks: that is governed by your CRUSH map.
Did you tweak it in any respect? What is the disk usage (df) of the OSDs?
How do you monitor disk usage? Maybe you are using the wrong tool.
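To get a quick picture (assuming the default OSD data paths under
/var/lib/ceph/osd/, adjust to your setup), something like:

ceph osd tree
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
df -h /var/lib/ceph/osd/*

"ceph osd tree" shows where the OSDs sit in the CRUSH hierarchy, the decompiled
crushmap.txt shows the placement rules in use, and df shows whether data is
really landing on only one disk.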

Wolfgang
________________________________
From: [email protected] [[email protected]] on behalf of Shaun Reitan [[email protected]]
Sent: Friday, 01 March 2013 03:45
To: [email protected]
Subject: [ceph-users] Writes to only one OSD?


I'm doing some basic testing of Ceph to see if it will fit our needs.  My
current test deployment consists of 2 servers, each running a single monitor
process and 4 OSDs, with separate 1Gbit public and private networks.  I then have
another server that is just acting as a client using rados/rbd.  I have
collectl -sD -oT running on each storage server so that I can monitor the stats
of each disk.  On the client server I'm running the following...

rbd bench-write benchdisk --io-size 256 --io-threads 256

and then I'm watching the collectl stats.  What I'm seeing is only a single
disk/OSD being used on each storage host.  Every so often I see another disk show
some IO, but nothing consistent, and it's only ever one other drive.  I'm seeing
this on both storage servers.  What would cause this?  Shouldn't all the disks
be doing some work?
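In case it helps, here's roughly how I'd check where objects from the bench pool
actually land (the object name below is just a placeholder):

rados -p rbd df
rados -p rbd ls | head
ceph osd map rbd <object-name>

"ceph osd map" prints the placement group and the OSDs a given object maps to,
so it should show whether everything is really ending up on the same OSDs.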

BTW, my Ceph configs are basic, nothing special, and all the drives in the storage
servers are the same make/model.

Here's a screencast showing everything running for 2 minutes:

http://unix-scripts.com/tmpdizzle/ceph-watch-1.mp4

--
Shaun R.
