Same story here. I'm currently a frustrated Gluster user waiting for the
message saying it's safe to store data with Ceph in production.
Regards,
Roland
2010/3/9 Anton <anton.va...@gmail.com>
> Sage, do you think Ceph is now stable enough that we can
> at least be sure uploaded data will not be corrupted (and
> could be recovered in the case of a fault) when used as a
> simple file store - say, in an "upload/modify rarely,
> read often" scenario? Right now I use Gluster as a clustered
> FS, but the overhead of 1+1 replication in Gluster is just HUGE.
>
> Regards,
> Anton.
>
> On Wednesday 03 March 2010, Sage Weil wrote:
> > Hi all,
> >
> > We've pushed out some stable updates to the master branches
> > of the git trees and tagged a 0.19.1 stable release.
> > For the server side, the fixes are minor: mainly a
> > problem with /etc/init.d/ceph when /bin/sh is dash
> > (ubuntu) and a rare mds crash. The kernel client fixes
> > are a bit more significant: mainly a bad write(2) return
> > value and a crash when snapshots were deleted.
> >
> > sage
> >
> > -- server side ------------------
> >
> > Sage Weil (8):
> > debian: mount.ceph in /sbin, not /usr/sbin
> > Makefile: include debian/
> > Makefile: fix /sbin hack
> > mds: fix sessionmap decoding
> > init-ceph: don't barf on dash when no command
> > cauthtool: --caps fn alone is a command
> > debian: new release, push, build, publish scripts
> > 0.19.1
> >
> > Yehuda Sadeh (1):
automake: fix mount sbin dir when configured with prefix
> >
> >
> > -- kernel client -----------------
> >
> > Alexander Beregalov (1):
> > ceph: move dereference after NULL test
> >
> > Sage Weil (5):
> > ceph: fix handle_forward parsing
> > ceph: reset front len on return to msgpool; BUG on mismatched front iov
> > ceph: set osd request message front length correctly
> > ceph: fix osdmap decoding when pools include (removed) snaps
> > ceph: v0.19.1 stable release
> >
> > Yehuda Sadeh (1):
> > ceph: don't clobber write return value when using O_SYNC
> >
> >
> > ------------------------------------------------------------------------------
> > Download Intel® Parallel Studio Eval
> > Try the new software tools for yourself. Speed compiling,
> > find bugs proactively, and fine-tune applications for
> > parallel performance. See why Intel Parallel Studio got
> > high marks during beta. http://p.sf.net/sfu/intel-sw-dev
> > _______________________________________________
> > Ceph-devel mailing list
> > Ceph-devel@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/ceph-devel
> >
>
>
>
>
--
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: rol...@jotta.no