Dear developers!

Today I tried Ceph on my new Linux cluster.
So far I use one monitor-only server and two storage-only nodes with 2 TB
volumes each (6-disk RAID-0), all connected via Gigabit Ethernet,
with openSUSE 11.0 everywhere (in the future -- up to 20 storage nodes).
This is an experimental cluster, so I'd be glad to keep testing your system on it.
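
For reference, the layout is roughly the following. I'm writing the config
from memory, so take it as a sketch: the hostnames, addresses, and data paths
are placeholders, and putting the MDS on the monitor box is just how I
arranged it, not a requirement.

    [mon.0]
        host = mon1
        mon addr = 192.168.0.10:6789
    [mds.0]
        host = mon1
    [osd.0]
        host = osd1
        osd data = /data/osd0    ; btrfs on the 6-disk RAID-0
    [osd.1]
        host = osd2
        osd data = /data/osd1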

The feature I liked most: when I switched off one of the two storage
nodes, all data was still accessible!
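
Concretely, the check went along these lines (mount point and file name
are just examples):

    # write a test file while both storage nodes are up
    dd if=/dev/zero of=/mnt/ceph/testfile bs=1M count=100
    md5sum /mnt/ceph/testfile

    # power off one storage node, then verify the same data is still readable
    md5sum /mnt/ceph/testfile
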
This is a major reason why I'd prefer this solution over others (like
Lustre, Coda, etc., which are much more complex to set up, not to
mention GFS and OCFS).
With both storage nodes running I get read/write throughput of about
20 MB/s (I believe that is due to debugging checks; I built with default
options from the default Git branch).
I used btrfs for the OSD volumes.
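
The numbers come from simple sequential tests along these lines (paths and
sizes are placeholders; I drop the page cache before reading so the read is
not served from cache):

    # sequential write, about 20 MB/s
    dd if=/dev/zero of=/mnt/ceph/bigfile bs=1M count=1024 conv=fdatasync

    # drop the client page cache, then sequential read, also about 20 MB/s
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/ceph/bigfile of=/dev/null bs=1M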

The only bug so far: when the monitor server goes down while the Ceph
volume is still mounted, the mount stalls. Nothing happens even when the
monitor server comes up again -- I get messages like:

    ceph: mds0 session probably timed out, requesting mds map
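
The steps to reproduce, roughly (the monitor address is a placeholder):

    # mount the filesystem with the kernel client
    mount -t ceph 192.168.0.10:/ /mnt/ceph

    # power off the monitor server; any access to the mount now hangs
    ls /mnt/ceph

    # power the monitor back on -- the mount stays stalled, and the
    # kernel log keeps repeating the message quoted above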

Sincerely, Vitaly.


