On Thu, Feb 23, 2012 at 11:00 AM, Tommi Virtanen
<[email protected]> wrote:
> On Thu, Feb 23, 2012 at 01:15, Дениска-редиска <[email protected]> wrote:
>> hello there,
>>
>> i have tried to set up ceph 0.41 in a simple configuration:
>> 3 nodes, each running a mon, mds & osd, with replication level 3 for
>> the data & metadata pools.
>> Each node mounts ceph locally via ceph-fuse.
>> The cluster seems to run well until one of the nodes goes down for a
>> simple reboot.
>> Then all mount points become inaccessible, data transfers hang, and
>> the cluster stops working.
>>
>> What is the purpose of the ceph software if such a simple case does
>> not work?
>
> You have a replication factor of 3, and 3 OSDs. If one of them is
> down, the replication factor of 3 cannot be satisfied anymore. You
> need either more nodes, or a smaller replication factor.
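
(For reference, the replica count is a per-pool setting; a rough sketch
of shrinking it to 2, assuming the stock ceph CLI, whose exact syntax
may have differed back on 0.41:

    # Lower the replication factor of the default CephFS pools so that
    # two surviving OSDs can still satisfy every placement group:
    ceph osd pool set data size 2
    ceph osd pool set metadata size 2

New pools can pick that up by default via "osd pool default size = 2"
in the [global] section of ceph.conf.)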
>
> Ceph is not an eventually consistent system; building a POSIX
> filesystem on top of one is pretty much impossible. With Ceph, all
> replicas are always kept up to date.

Actually, the OSDs will happily run in degraded mode (well, not
happily; they will complain, but they will run). However, if you have
3 active MDSes and you kill one of them without a standby available,
you will lose access to part of your tree. That's probably what
happened here...
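
(As a sketch of the fix, assuming the mds commands of that era: any
ceph-mds daemon beyond the active count acts as a standby, so capping
the cluster at one active MDS leaves the other two as standbys:

    # Keep a single active MDS rank; the remaining ceph-mds daemons
    # become standbys and can take over when the active one dies.
    ceph mds set_max_mds 1

On current releases the equivalent is "ceph fs set <fs> max_mds 1".)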
-Greg