We deployed a ZFS on Linux server (on Scientific Linux) in the past week. Pretty simple stuff... the only non-default options are atime=off and recordsize=64K (which may be wrong, though some posts about ZFS and AFS suggest it).
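In case it is useful, the dataset setup is basically just the following (the pool name, raidz layout and device names below are placeholders, not our actual configuration):

    zpool create vicepool raidz2 sdb sdc sdd sde sdf sdg       # placeholder layout, ours differs
    zfs create -o atime=off -o recordsize=64K \
               -o mountpoint=/vicepa vicepool/vicepa           # the two non-default options mentioned above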
About Ceph, we had a test server serving an RBD /vicep partition. And it worked. We're still building up the Ceph cluster (primarily to provide OpenStack Cinder volumes), and once it is in production we plan to run a few virtualized AFS servers with Ceph volumes behind them. All of this is in testing, and though we've not had any deal-breaking incidents, the long-term stability is still in question. (A rough sketch of the RBD setup is at the bottom of this mail.)

--
Dan
CERN IT

Steven Presser <[email protected]> wrote:

Out of pure curiosity, does anyone care to share experiences from running OpenAFS on ZFS? If anyone is running OpenAFS on top of, or in a cluster which also uses, Ceph, would you care to share your experience as well as your architecture?

Background: I have 4 Thumpers (SunFire X4500s) with 48 TB a pop and am wondering how best to set up my storage layer. This cluster will both serve user files and be the backend for a VM cluster.

Thanks,
Steve

On 06/14/2013 06:13 AM, Robert Milkowski wrote:
>> >> ... And am I right in thinking that volumes shouldn't just show up
>> >> as being corrupt like this? Should I be looking harder for some
>> >> kind of hardware problem?
>> >
>> > Volumes shouldn't just show up as corrupt like that, yes.
>>
>> It now looks like it's a hardware problem with the SAN storage for
>> that viceb partition. Ouch.
>
> And this is one of the reasons why ZFS is so cool :)
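P.S. The RBD-backed vice partition on the test server was set up roughly along these lines (pool and image names below are placeholders rather than what we actually use, the size is just an example, and the device name will vary):

    ceph osd pool create afs-test 128                     # placeholder pool
    rbd create --pool afs-test --size 2048000 vicepb      # ~2 TB image
    rbd map --pool afs-test vicepb                        # kernel RBD; shows up as e.g. /dev/rbd0
    mkfs.ext4 /dev/rbd0                                   # any local filesystem the fileserver supports
    mount /dev/rbd0 /vicepb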
