Considering the silence, I take it this is all still under heavy development, but since 0.19 has been named "stable" I thought it was time to ask again. In any case, it looks like it's time to test. :)
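For reference, a minimal sketch of how the kernel client might be mounted for the kind of read-mostly store described below; the monitor address (192.168.0.1) and mount point (/mnt/ceph) are placeholders, not details from this thread:

    # Assumes the ceph kernel module is available and a monitor is reachable
    # at the placeholder address 192.168.0.1; /mnt/ceph is also a placeholder.
    modprobe ceph
    mkdir -p /mnt/ceph
    mount -t ceph 192.168.0.1:/ /mnt/ceph

For a read-mostly workload the consumers could presumably also mount it read-only with -o ro.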
On Tuesday 09 March 2010, Anton wrote:
> Sage, do you think Ceph is now OK (stable enough) to be at least
> sure that uploaded data will not be corrupted (and could be
> recovered in the case of a fault), for use as a simple file store,
> say in the scenario of "upload/modify rarely, read often"? Right
> now I use Gluster as a clustered FS, but the overhead of 1+1
> replication in Gluster is just HUGE.
>
> Regards,
> Anton.
>
> On Wednesday 03 March 2010, Sage Weil wrote:
> > Hi all,
> >
> > We've pushed out some stable updates to the master branches
> > of the git trees and tagged a 0.19.1 stable release.
> > For the server side, the fixes are minor: mainly a
> > problem with /etc/init.d/ceph when /bin/sh is dash
> > (Ubuntu) and a rare mds crash. The kernel client
> > fixes are a bit more significant: mainly a bad write(2)
> > return value and a crash when snapshots were deleted.
> >
> > sage
> >
> > -- server side ------------------
> >
> > Sage Weil (8):
> >       debian: mount.ceph in /sbin, not /usr/sbin
> >       Makefile: include debian/
> >       Makefile: fix /sbin hack
> >       mds: fix sessionmap decoding
> >       init-ceph: don't barf on dash when no command
> >       cauthtool: --caps fn alone is a command
> >       debian: new release, push, build, publish scripts
> >       0.19.1
> >
> > Yehuda Sadeh (1):
> >       automake: fix mount sbin dir when configured with prefix
> >
> >
> > -- kernel client -----------------
> >
> > Alexander Beregalov (1):
> >       ceph: move dereference after NULL test
> >
> > Sage Weil (5):
> >       ceph: fix handle_forward parsing
> >       ceph: reset front len on return to msgpool; BUG on mismatched front iov
> >       ceph: set osd request message front length correctly
> >       ceph: fix osdmap decoding when pools include (removed) snaps
> >       ceph: v0.19.1 stable release
> >
> > Yehuda Sadeh (1):
> >       ceph: don't clobber write return value when using O_SYNC