On Thu, 23 Feb 2012, Tommi Virtanen wrote:
> On Thu, Feb 23, 2012 at 01:15, <[email protected]> wrote:
> > hello here,
> >
> > I have tried to set up ceph 0.41 in a simple configuration:
> > 3 nodes, each running mon, mds & osd, with replication level 3 for the
> > data & metadata pools.
> > Each node mounts ceph locally via ceph-fuse.
> > The cluster seems to run fine until one of the nodes goes down for a
> > simple reboot.
> > Then all mount points become inaccessible, data transfers hang, and the
> > cluster stops working.
> >
> > What is the purpose of the ceph software if such a simple case does not
> > work?
> 
> You have a replication factor of 3, and 3 OSDs. If one of them is
> down, the replication factor of 3 cannot be satisfied anymore. You
> need either more nodes, or a smaller replication factor.
> 
> Ceph is not an eventually consistent system; building a POSIX
> filesystem on top of one is pretty much impossible. With Ceph, all
> replicas are always kept up to date.
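
If going the smaller-replication-factor route, the pool size can be 
changed on a live cluster; a minimal sketch, assuming the default 'data' 
and 'metadata' pool names and that the 'ceph osd pool set' command is 
available in this release:

    # drop the replication factor of both filesystem pools from 3 to 2
    ceph osd pool set data size 2
    ceph osd pool set metadata size 2

With size 2 and three OSDs, the two surviving nodes can still hold a 
full set of replicas while the third one reboots.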

Just to clarify: what should have happened is that after a few seconds (20 
by default?) the stopped ceph-osd is marked down and life continues with 2 
replicas.  'ceph -s' or 'ceph health' will report some PGs in the 
'degraded' state.
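
If those timings need tuning, the relevant knobs (assuming the option 
names haven't changed in this release) are the heartbeat grace period, 
which controls how long an unresponsive ceph-osd can stay silent before 
being marked down, and the down-out interval, which controls how long a 
down OSD waits before being marked out and having its data re-replicated 
elsewhere.  Something like this in the [global] section of ceph.conf:

    [global]
        ; mark an unresponsive ceph-osd down after this many seconds
        osd heartbeat grace = 20
        ; after this many more seconds, mark a down OSD out and start
        ; re-replicating its data to the surviving OSDs
        mon osd down out interval = 300

The values shown are only illustrative; check the defaults for your 
version before changing them.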

sage
