Hello,

On Tue, 30 Dec 2014 08:12:21 +1000 Lindsay Mathieson wrote:

> On Mon, 29 Dec 2014 11:12:06 PM Christian Balzer wrote:
> > Is that a private cluster network just between Ceph storage nodes or is
> > this for all ceph traffic (including clients)?
> > The latter would probably be better; a private cluster network twice as
> > fast as the client one isn't particularly helpful 99% of the time.
> 
> 
> The latter - all ceph traffic including clients (qemu rbd).
> 
Very good. ^.^

> > > 3rd Node
> > > 
> > >  - Monitor only, for quorum
> > > 
> > > - Intel Nuc
> > > - 8GB RAM
> > > - CPU: Celeron N2820
> > 
> > Uh oh, a bit weak for a monitor. Where does the OS live (on this and
> > the other nodes)? The leveldb (/var/lib/ceph/..) of the monitors likes
> > it fast, SSDs preferably.
> 
> On a SSD (all the nodes have OS on SSD).
> 
Good.

> Looks like I misunderstood the purpose of the monitors, I presumed they
> were just for monitoring node health. They do more than that?
> 
They keep the cluster maps, and the pgmap in particular is of course very
busy. All that action happens in: /var/lib/ceph/mon/<monitorname>/store.db/ .

In addition, monitors log like there's no tomorrow, which also strains the OS
storage.
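To get a feel for how busy the monitor store and logs actually are, something
like the following can be run on a monitor node. This is just a sketch; the
paths are the default locations and may differ on your install:

```shell
# Size of the monitor's leveldb store (default path assumed;
# the actual directory name under /var/lib/ceph/mon/ varies per monitor):
du -sh /var/lib/ceph/mon/*/store.db

# Size of the monitor logs, which can grow quickly at default log levels
# (default log location assumed):
du -sh /var/log/ceph/ceph-mon.*.log
```

Watching these numbers over a day or two makes it obvious why a fast SSD
under /var/lib/ceph and /var/log helps.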

> 
> > The closer it is to the current storage nodes, the better.
> > The slowest OSD in a cluster can impede all (most of) the others.
> 
> Closer as in similar hardware specs?
> 
Ayup. The less variation, the better and the more predictable things
become.
Again, having 1 node slow down 2 fast nodes is not what you want.
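If you suspect one OSD is dragging the others down, Ceph's built-in per-OSD
latency counters are a good first look (exact output format varies by
release):

```shell
# Per-OSD commit and apply latencies in milliseconds.
# An OSD that is consistently a large outlier here points at the
# slow disk or node holding the rest of the cluster back.
ceph osd perf
```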

Christian
-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
