Re: [ceph-users] CEPH_FSAL Nfs-ganesha

2019-01-30 Thread solarflow99
Can you do HA on the NFS shares?

On Wed, Jan 30, 2019 at 9:10 AM David C  wrote:

> Hi Patrick
>
> Thanks for the info. If I did multiple exports, how does that work in
> terms of the cache settings defined in ceph.conf? Are those settings per
> CephFS client or a shared cache? I.e. if I've defined client_oc_size, would
> that be per export?
>
> Cheers,
>
> On Tue, Jan 15, 2019 at 6:47 PM Patrick Donnelly 
> wrote:
>
>> On Mon, Jan 14, 2019 at 7:11 AM Daniel Gryniewicz 
>> wrote:
>> >
>> > Hi.  Welcome to the community.
>> >
>> > On 01/14/2019 07:56 AM, David C wrote:
>> > > Hi All
>> > >
>> > > I've been playing around with nfs-ganesha 2.7 exporting a CephFS
>> > > filesystem; it seems to be working pretty well so far. A few
>> > > questions:
>> > >
>> > > 1) The docs say "For each NFS-Ganesha export, FSAL_CEPH uses a
>> > > libcephfs client,..." [1]. For argument's sake, if I have ten top-level
>> > > dirs in my CephFS namespace, is there any value in creating a separate
>> > > export for each directory? Will that potentially give me better
>> > > performance than a single export of the entire namespace?
>> >
>> > I don't believe there are any advantages from the Ceph side.  From the
>> > Ganesha side, you configure permissions, client ACLs, squashing, and so
>> > on, on a per-export basis, so you'll need different exports if you need
>> > different settings for each top-level directory.  If they can all use
>> > the same settings, one export is probably better.
>>
>> There may be a performance impact (good or bad) from having separate
>> exports for CephFS. Each export instantiates a separate instance of
>> the CephFS client, which has its own bookkeeping and set of
>> capabilities issued by the MDS. Also, each client instance has a
>> separate big lock (potentially a big deal for performance). If the
>> data for each export is disjoint (no hard links or shared inodes) and
>> the NFS server is expected to have a lot of load, breaking out the
>> exports can have a positive impact on performance. If there are hard
>> links, then the clients associated with the exports will potentially
>> fight over capabilities, which will add to request latency.
>>
>> --
>> Patrick Donnelly
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CEPH_FSAL Nfs-ganesha

2019-01-30 Thread David C
Hi Patrick

Thanks for the info. If I did multiple exports, how does that work in terms
of the cache settings defined in ceph.conf? Are those settings per CephFS
client or a shared cache? I.e. if I've defined client_oc_size, would that
be per export?
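
For context, the relevant bit of my ceph.conf currently looks something like
this (the values are just what I'm experimenting with, not a recommendation):

    [client]
        # object cacher size for the libcephfs client
        # (the default is ~200 MB, which is what seemed conservative to me)
        client_oc_size = 1073741824
        # dirty-data limit for the same object cacher
        client_oc_max_dirty = 536870912

I'm assuming each setting applies to whichever libcephfs instance reads the
file, but that's exactly the part I'm unsure about with multiple exports.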

Cheers,

On Tue, Jan 15, 2019 at 6:47 PM Patrick Donnelly 
wrote:

> On Mon, Jan 14, 2019 at 7:11 AM Daniel Gryniewicz  wrote:
> >
> > Hi.  Welcome to the community.
> >
> > On 01/14/2019 07:56 AM, David C wrote:
> > > Hi All
> > >
> > > I've been playing around with nfs-ganesha 2.7 exporting a CephFS
> > > filesystem; it seems to be working pretty well so far. A few questions:
> > >
> > > 1) The docs say "For each NFS-Ganesha export, FSAL_CEPH uses a
> > > libcephfs client,..." [1]. For argument's sake, if I have ten top-level
> > > dirs in my CephFS namespace, is there any value in creating a separate
> > > export for each directory? Will that potentially give me better
> > > performance than a single export of the entire namespace?
> >
> > I don't believe there are any advantages from the Ceph side.  From the
> > Ganesha side, you configure permissions, client ACLs, squashing, and so
> > on, on a per-export basis, so you'll need different exports if you need
> > different settings for each top-level directory.  If they can all use
> > the same settings, one export is probably better.
>
> There may be a performance impact (good or bad) from having separate
> exports for CephFS. Each export instantiates a separate instance of
> the CephFS client, which has its own bookkeeping and set of
> capabilities issued by the MDS. Also, each client instance has a
> separate big lock (potentially a big deal for performance). If the
> data for each export is disjoint (no hard links or shared inodes) and
> the NFS server is expected to have a lot of load, breaking out the
> exports can have a positive impact on performance. If there are hard
> links, then the clients associated with the exports will potentially
> fight over capabilities, which will add to request latency.
>
> --
> Patrick Donnelly
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CEPH_FSAL Nfs-ganesha

2019-01-15 Thread Patrick Donnelly
On Mon, Jan 14, 2019 at 7:11 AM Daniel Gryniewicz  wrote:
>
> Hi.  Welcome to the community.
>
> On 01/14/2019 07:56 AM, David C wrote:
> > Hi All
> >
> > I've been playing around with nfs-ganesha 2.7 exporting a CephFS
> > filesystem; it seems to be working pretty well so far. A few questions:
> >
> > 1) The docs say "For each NFS-Ganesha export, FSAL_CEPH uses a
> > libcephfs client,..." [1]. For argument's sake, if I have ten top-level
> > dirs in my CephFS namespace, is there any value in creating a separate
> > export for each directory? Will that potentially give me better
> > performance than a single export of the entire namespace?
>
> I don't believe there are any advantages from the Ceph side.  From the
> Ganesha side, you configure permissions, client ACLs, squashing, and so
> on, on a per-export basis, so you'll need different exports if you need
> different settings for each top-level directory.  If they can all use
> the same settings, one export is probably better.

There may be a performance impact (good or bad) from having separate
exports for CephFS. Each export instantiates a separate instance of
the CephFS client, which has its own bookkeeping and set of
capabilities issued by the MDS. Also, each client instance has a
separate big lock (potentially a big deal for performance). If the
data for each export is disjoint (no hard links or shared inodes) and
the NFS server is expected to have a lot of load, breaking out the
exports can have a positive impact on performance. If there are hard
links, then the clients associated with the exports will potentially
fight over capabilities, which will add to request latency.
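
As a rough illustration, breaking two disjoint top-level directories into
their own exports would look something like this in ganesha.conf (paths and
IDs are made up; untested sketch):

    EXPORT {
        Export_ID = 101;
        Path = "/projects";       # disjoint CephFS subtree
        Pseudo = "/projects";
        Access_Type = RW;
        FSAL { Name = CEPH; }     # gets its own libcephfs instance
    }

    EXPORT {
        Export_ID = 102;
        Path = "/scratch";
        Pseudo = "/scratch";
        Access_Type = RW;
        FSAL { Name = CEPH; }     # separate client: separate caps and big lock
    }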

-- 
Patrick Donnelly
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CEPH_FSAL Nfs-ganesha

2019-01-14 Thread Paul Emmerich
We've found that more aggressive prefetching in the Ceph client can
help with some poorly behaving legacy applications (I don't know the
option off the top of my head, but it's documented).
It can also be useful to disable logging (even the in-memory logs) if
you do a lot of IOPS (that's debug client and debug ms mostly).
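
Roughly what I mean, in the [client] section of the ceph.conf that Ganesha
uses (option names from memory, so double-check them against the docs):

    [client]
        # drop even the in-memory log gathering for high-IOPS workloads
        debug client = 0/0
        debug ms = 0/0
        # more aggressive prefetching for sequential readers
        client_readahead_max_periods = 8
        client_readahead_min = 1048576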

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Jan 14, 2019 at 4:11 PM Daniel Gryniewicz  wrote:
>
> Hi.  Welcome to the community.
>
> On 01/14/2019 07:56 AM, David C wrote:
> > Hi All
> >
> > I've been playing around with nfs-ganesha 2.7 exporting a CephFS
> > filesystem; it seems to be working pretty well so far. A few questions:
> >
> > 1) The docs say "For each NFS-Ganesha export, FSAL_CEPH uses a
> > libcephfs client,..." [1]. For argument's sake, if I have ten top-level
> > dirs in my CephFS namespace, is there any value in creating a separate
> > export for each directory? Will that potentially give me better
> > performance than a single export of the entire namespace?
>
> I don't believe there are any advantages from the Ceph side.  From the
> Ganesha side, you configure permissions, client ACLs, squashing, and so
> > on, on a per-export basis, so you'll need different exports if you need
> > different settings for each top-level directory.  If they can all use
> the same settings, one export is probably better.
>
> >
> > 2) Tuning: are there any recommended parameters to tune? So far I've
> > found I had to increase client_oc_size, which seemed quite conservative.
>
> Ganesha is just a standard libcephfs client, so any tuning you'd make on
> any other cephfs client also applies to Ganesha.  I'm not aware of
> anything in particular, but I've never deployed it for anything other
> than testing.
>
> Daniel
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CEPH_FSAL Nfs-ganesha

2019-01-14 Thread Daniel Gryniewicz

Hi.  Welcome to the community.

On 01/14/2019 07:56 AM, David C wrote:

Hi All

I've been playing around with nfs-ganesha 2.7 exporting a CephFS
filesystem; it seems to be working pretty well so far. A few questions:


1) The docs say "For each NFS-Ganesha export, FSAL_CEPH uses a
libcephfs client,..." [1]. For argument's sake, if I have ten top-level
dirs in my CephFS namespace, is there any value in creating a separate
export for each directory? Will that potentially give me better 
performance than a single export of the entire namespace?


I don't believe there are any advantages from the Ceph side.  From the 
Ganesha side, you configure permissions, client ACLs, squashing, and so 
on, on a per-export basis, so you'll need different exports if you need
different settings for each top-level directory.  If they can all use
the same settings, one export is probably better.
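
To make that concrete, the settings I mean are the per-EXPORT ones in
ganesha.conf, roughly like this (a sketch only, adjust to your environment):

    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RO;             # default for clients not matched below
        Squash = Root_Squash;         # squashing is configured per export
        FSAL {
            Name = CEPH;
            # User_Id = "ganesha";    # cephx user, if not using the default
        }
        CLIENT {
            Clients = 192.168.1.0/24; # per-export client ACL
            Access_Type = RW;
            Squash = No_Root_Squash;
        }
    }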




2) Tuning: are there any recommended parameters to tune? So far I've 
found I had to increase client_oc_size, which seemed quite conservative.


Ganesha is just a standard libcephfs client, so any tuning you'd make on 
any other cephfs client also applies to Ganesha.  I'm not aware of 
anything in particular, but I've never deployed it for anything other 
than testing.
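
If it helps: if I remember the option name right, the CEPH block in
ganesha.conf is where you point Ganesha at the ceph.conf it should use, so
any [client] tuning goes in that file:

    CEPH {
        # path is just the common default; check the sample config
        # shipped with Ganesha
        Ceph_Conf = "/etc/ceph/ceph.conf";
    }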


Daniel
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] CEPH_FSAL Nfs-ganesha

2019-01-14 Thread David C
Hi All

I've been playing around with nfs-ganesha 2.7 exporting a CephFS
filesystem; it seems to be working pretty well so far. A few questions:

1) The docs say "For each NFS-Ganesha export, FSAL_CEPH uses a libcephfs
client,..." [1]. For argument's sake, if I have ten top-level dirs in my
CephFS namespace, is there any value in creating a separate export for each
directory? Will that potentially give me better performance than a single
export of the entire namespace?

2) Tuning: are there any recommended parameters to tune? So far I've found
I had to increase client_oc_size, which seemed quite conservative.

Thanks
David

[1] http://docs.ceph.com/docs/mimic/cephfs/nfs/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com