Re: [ceph-users] the state of cephfs in giant

2014-10-30 Thread Florian Haas
Hi Sage, sorry to be late to this thread; I just caught this one as I was reviewing the Giant release notes. A few questions below: On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil s...@newdream.net wrote: [...] * ACLs: implemented, tested for kernel client. not implemented for ceph-fuse. [...]
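
A quick way to exercise that kernel-client ACL support; this is a minimal sketch, assuming a kernel built with CephFS ACL support, and the monitor address, secret file, user name and paths are all placeholders:

    # kernel-client mount with POSIX ACLs enabled
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,acl

    # grant a second user read/write on one file, then verify
    setfacl -m u:alice:rw /mnt/cephfs/shared.txt
    getfacl /mnt/cephfs/shared.txt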

Re: [ceph-users] the state of cephfs in giant

2014-10-30 Thread John Spray
On Thu, Oct 30, 2014 at 10:55 AM, Florian Haas flor...@hastexo.com wrote: * ganesha NFS integration: implemented, no test coverage. I understood from a conversation I had with John in London that flock() and fcntl() support had recently been added to ceph-fuse, can this be expected to Just
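
Whether those locks actually hold across clients is easy to spot-check with the util-linux flock(1) utility, which calls flock(2) under the hood; a minimal sketch, assuming two machines both mount the same filesystem via ceph-fuse, with /mnt/cephfs/locktest as a placeholder path:

    # on either client, create the file to lock
    touch /mnt/cephfs/locktest

    # client A: take an exclusive flock() and hold it for a minute
    flock -x /mnt/cephfs/locktest -c 'echo "A holds the lock"; sleep 60'

    # client B: a non-blocking attempt should fail while A holds the lock,
    # showing that the lock is enforced cluster-wide rather than per-mount
    flock -xn /mnt/cephfs/locktest -c 'echo "B got the lock"' || echo "lock held elsewhere"

A similar check for fcntl() byte-range locks needs a small program, since there is no stock CLI for them.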

Re: [ceph-users] the state of cephfs in giant

2014-10-30 Thread Sage Weil
On Thu, 30 Oct 2014, Florian Haas wrote: Hi Sage, sorry to be late to this thread; I just caught this one as I was reviewing the Giant release notes. A few questions below: On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil s...@newdream.net wrote: [...] * ACLs: implemented, tested for kernel

Re: [ceph-users] the state of cephfs in giant

2014-10-16 Thread Ric Wheeler
On 10/15/2014 08:43 AM, Amon Ott wrote: On 14.10.2014 16:23, Sage Weil wrote: On Tue, 14 Oct 2014, Amon Ott wrote: On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... *

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Amon Ott
On 14.10.2014 16:23, Sage Weil wrote: On Tue, 14 Oct 2014, Amon Ott wrote: On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... * Either the kernel client (kernel 3.17 or

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Stijn De Weirdt
We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse or libcephfs) clients are in good working order. Thanks for all the work and

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Amon Ott
On 15.10.2014 14:11, Ric Wheeler wrote: On 10/15/2014 08:43 AM, Amon Ott wrote: On 14.10.2014 16:23, Sage Weil wrote: On Tue, 14 Oct 2014, Amon Ott wrote: On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Sage Weil
On Wed, 15 Oct 2014, Amon Ott wrote: On 15.10.2014 14:11, Ric Wheeler wrote: On 10/15/2014 08:43 AM, Amon Ott wrote: On 14.10.2014 16:23, Sage Weil wrote: On Tue, 14 Oct 2014, Amon Ott wrote: On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Alphe Salas
For the humble Ceph user that I am, it is really hard to follow which version of which product will get the changes I require. Let me explain myself. I use Ceph in my company, which is specialised in disk recovery; my company needs a flexible, easy-to-maintain, trustworthy way to store the data from the

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Thomas Lemarchand
Thanks for this information. I plan to use CephFS on Giant with a production workload, knowing the risks and keeping a hot backup close by. I hope to be able to provide useful feedback. My cluster is made of 7 servers (3 mons, 3 OSD hosts (27 OSDs in total), 1 MDS). I use ceph-fuse on the clients. You wrote about

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Sage Weil
On Tue, 14 Oct 2014, Amon Ott wrote: On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse or

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Sage Weil
On Tue, 14 Oct 2014, Thomas Lemarchand wrote: Thanks for this information. I plan to use CephFS on Giant with a production workload, knowing the risks and keeping a hot backup close by. I hope to be able to provide useful feedback. My cluster is made of 7 servers (3 mons, 3 OSD hosts (27 OSDs in total),

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Amon Ott
On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse or libcephfs) clients are in good working

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Sage Weil
On Tue, 14 Oct 2014, Amon Ott wrote: On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse or

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Alphe Salas
Hello Sage, the last time I used CephFS it showed strange behaviour when used in conjunction with an NFS re-export of the CephFS mount point: I experienced a partial, random disappearance of folders from the tree. According to people on the mailing list it was a kernel module bug (I was not using ceph-fuse), do
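
For context, the re-export being described is knfsd serving out a kernel-client CephFS mount; a minimal sketch of that setup, with the mount point and fsid picked arbitrarily as placeholders:

    # /etc/exports on the gateway that has CephFS mounted at /mnt/cephfs
    # (an explicit fsid is given because CephFS has no block-device UUID
    #  for knfsd to derive one from)
    /mnt/cephfs  *(rw,no_root_squash,no_subtree_check,fsid=101)

    # reload the export table
    exportfs -ra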

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Sage Weil
This sounds like any number of readdir bugs that Zheng has fixed over the last 6 months. sage On Tue, 14 Oct 2014, Alphe Salas wrote: Hello Sage, the last time I used CephFS it showed strange behaviour when used in conjunction with an NFS re-export of the CephFS mount point, I experienced a

[ceph-users] the state of cephfs in giant

2014-10-13 Thread Sage Weil
We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. What we've been working on: * better mds/cephfs health reports to the monitor * mds journal dump/repair tool * many kernel and ceph-fuse/libcephfs client bug fixes * file
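
For anyone who wants to poke at the journal dump/repair tool mentioned in that list, a rough sketch of the Giant-era invocations from memory; check cephfs-journal-tool --help on your build before relying on the exact subcommands:

    # always keep a copy of the MDS journal before attempting any repair
    cephfs-journal-tool journal export /root/mds.journal.backup

    # look for damage
    cephfs-journal-tool journal inspect

    # destructive last resort: salvage what dentries can be recovered, then reset
    cephfs-journal-tool event recover_dentries summary
    cephfs-journal-tool journal reset

The improved health reports should show up in the ordinary ceph status / ceph health detail output, so no extra tooling is needed there.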

Re: [ceph-users] the state of cephfs in giant

2014-10-13 Thread Wido den Hollander
On 13-10-14 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. What we've been working on: * better mds/cephfs health reports to the monitor * mds journal dump/repair tool * many kernel and

Re: [ceph-users] the state of cephfs in giant

2014-10-13 Thread Sage Weil
On Mon, 13 Oct 2014, Wido den Hollander wrote: On 13-10-14 20:16, Sage Weil wrote: With Giant, we are at a point where we would ask that everyone try things out for any non-production workloads. We are very interested in feedback around stability, usability, feature gaps, and performance.

Re: [ceph-users] the state of cephfs in giant

2014-10-13 Thread Eric Eastman
I would be interested in testing the Samba VFS and Ganesha NFS integration with CephFS. Are there any notes on how to configure these two interfaces with CephFS? Eric We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of

Re: [ceph-users] the state of cephfs in giant

2014-10-13 Thread Sage Weil
On Mon, 13 Oct 2014, Eric Eastman wrote: I would be interested in testing the Samba VFS and Ganesha NFS integration with CephFS. Are there any notes on how to configure these two interfaces with CephFS? For samba, based on https://github.com/ceph/ceph-qa-suite/blob/master/tasks/samba.py#L106
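
Roughly, that qa task drives smbd against a share along these lines; this is a sketch only, assuming a Samba build that includes the vfs_ceph module, and the share name, path and cephx user below are placeholders rather than anything the task mandates:

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        # let smbd mediate share modes itself instead of the local kernel
        kernel share modes = no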

Re: [ceph-users] the state of cephfs in giant

2014-10-13 Thread Jeff Bailey
On 10/13/2014 4:56 PM, Sage Weil wrote: On Mon, 13 Oct 2014, Eric Eastman wrote: I would be interested in testing the Samba VFS and Ganesha NFS integration with CephFS. Are there any notes on how to configure these two interfaces with CephFS? For ganesha I'm doing something like: FSAL {
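
The preview cuts off at the FSAL block. As a starting point, a minimal CephFS export in ganesha.conf looks roughly like the following; it is a sketch of the general format, not Jeff's actual configuration, and the Export_ID and paths are placeholders:

    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/";
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;
        }
    }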