On 01/29/2013 02:56 AM, femi anjorin wrote:
Please can anyone advise on what exactly a Ceph production
environment should look like, and what the configuration files should
be? My hardware includes the following:

Server A, B, C configuration:
CPU - Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz
RAM - 16 GB
Hard drive - 500 GB
SSD - 120 GB

Server D, E, F, G, H, J configuration:
CPU - Intel(R) Atom(TM) CPU D525 @ 1.80GHz
RAM - 4 GB
Boot drive - 320 GB
SSD - 120 GB
Storage drives - 16 x 2 TB

I am thinking of these configurations, but I am not sure:
Server A - MDS and MON
Server B - MON
Server C - MON
Server D, E, F, G, H, J - OSD


Those 16 GB of RAM on the monitor nodes versus the 4 GB of RAM on the OSD nodes look backwards to me. The OSDs tend to require much more RAM, for instance during recovery, while the monitor is not as heavy on memory. If a cluster grows significantly large, the in-memory maps may grow a lot too, but that alone shouldn't be the reason you give a monitor 16 GB and an OSD node only 4 GB.
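As a rough sanity check (these are the usual rule-of-thumb figures from the Ceph hardware recommendations, not measurements of your setup): an OSD daemon wants on the order of 500 MB of RAM in normal operation and roughly 1 GB per 1 TB of stored data during recovery. With 16 OSDs of 2 TB each on one host, that is already around 8 GB at rest and far more than 4 GB under recovery, whereas a monitor is usually comfortable with a gigabyte or two.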

Furthermore, I see you have 16 x 2 TB storage drives. Is that per OSD node? I'm assuming that's what you're aiming for, so how many OSDs were you thinking of running on the same host? Usually we go for one OSD per drive (see the sketch below), but you might have something else in mind. I am not an expert on server configuration, but my point is that, if you are going to run more than one OSD on the same host, your RAM looks smaller than what I would envision.
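For illustration only, here is a minimal sketch of what one OSD per drive looks like in ceph.conf (the host name, OSD IDs and mount points below are made up; adapt them to your nodes):

    [osd.0]
        host = server-d
        ; first data drive mounted at /var/lib/ceph/osd/ceph-0

    [osd.1]
        host = server-d
        ; second data drive mounted at /var/lib/ceph/osd/ceph-1

    ; ...and so on, one [osd.N] section per data drive (16 per node)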

BTW, I am not sure whether you're placing SSDs on the monitor/mds nodes with the same intent as on the OSD nodes (keeping the OSD journal, maybe?), but if you do intend to keep those daemons' journals on them, you should know that the monitor and the MDS don't keep a journal on local disk. The monitors do keep a store on disk, but the MDS doesn't even do that; it keeps its data directly on the OSDs and whatever it needs in memory.
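If the SSDs on the OSD nodes are meant for the OSD journals, that is typically expressed per OSD in ceph.conf. Again, just a sketch; the device paths are examples for a partitioned 120 GB SSD, not something taken from your setup:

    [osd.0]
        host = server-d
        osd journal = /dev/sdq1    ; SSD partition reserved for osd.0's journal

    [osd.1]
        host = server-d
        osd journal = /dev/sdq2    ; SSD partition reserved for osd.1's journal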

  -Joao




