Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-24 Thread David Disseldorp
On Thu, 24 May 2018 15:13:09 +0200, Daniel Baumann wrote:

> On 05/24/2018 02:53 PM, David Disseldorp wrote:
> >> [ceph_test]
> >> path = /ceph-kernel
> >> guest ok = no
> >> delete readonly = yes
> >> oplocks = yes
> >> posix locking = no  
> 
> jftr, we use the following to disable all locking (on samba 4.8.2):
> 
>   oplocks = False
>   level2 oplocks = False
>   kernel oplocks = no

oplocks aren't locks per se - they allow the client to cache data
locally (leases in SMB2+), which often improves application
performance. That said, if the same share path is accessible via NFS or
native CephFS then oplocks / leases should be disabled, until proper
vfs_ceph lease support is implemented via the delegation API.
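
For such a multi-protocol path, a minimal sketch of the relevant
smb.conf settings might look like the following (untested; the share
name and path are just placeholders, parameter placement per
smb.conf(5)):

  [global]
  # SMB2+ leases are controlled globally
  smb2 leases = no

  [multi_proto_share]
  path = /ceph-kernel
  # don't grant oplocks, as NFS / native CephFS clients can modify
  # the same files behind Samba's back
  oplocks = no
  level2 oplocks = no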

Cheers, David


Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-24 Thread Jake Grimmett
Hi David,

Many thanks for your help :)

I'm using Scientific Linux 7.5, and thus samba-4.7.1-6.el7.x86_64.

I've added these settings to the share:

aio read size = 1
aio write size = 1

...and after restarting samba, Helios LanTest didn't show any real changes.
I will test from a Linux machine later and see if I/O improves there.

Glad to hear CTDB will work with posix locks off; I will start testing
this next week.

Oh, the RADOS object lock is definitely worth investigating... thanks
for this too :)

all the best,

Jake

On 24/05/18 13:53, David Disseldorp wrote:
> Hi Jake,
> 
> On Thu, 24 May 2018 13:17:16 +0100, Jake Grimmett wrote:
> 
>> Hi Daniel, David,
>>
>> Many thanks to both of you for your advice.
>>
>> Sorry not to reply to the list, but I'm subscribed to the digest and my
>> mail client will not reply to individual threads - I've switched back to
>> regular.
> 
> No worries, cc'ing the list in this response.
> 
>> As to this issue, I've turned off posix locking, which has improved
>> write speeds - here are the old benchmarks plus new figures.
>>
>> i.e. Using Helios LanTest 6.0.0 on OS X.
>>
>> Create 300 Files
>>  Cephfs (kernel) > samba (no Posix locks)
>>   average  3600 ms
>>  Cephfs (kernel) > samba. average 5100 ms
>>  Isilon  > CIFS  average 2600 ms
>>  ZFS > samba  average  121 ms
>>
>> Remove 300 files
>>  Cephfs (kernel) > samba (no Posix locks)
>>   average  2200 ms
>>  Cephfs (kernel) > samba. average 2100 ms
>>  Isilon  > CIFS  average  900 ms
>>  ZFS > samba  average  421 ms
>>
>> Write 300MB to file
>>  Cephfs (kernel) > samba (no Posix locks)
>>   average  53 MB/s
>>  Cephfs (kernel) > samba. average 25 MB/s
>>  Isilon  > CIFS  average  17.9 MB/s
>>  ZFS > samba  average  64.4 MB/s
>>
>>
>> Settings as follows:
>> [global]
>> (snip)
>> smb2 leases = yes
>>
>>
>> [ceph_test]
>> path = /ceph-kernel
>> guest ok = no
>> delete readonly = yes
>> oplocks = yes
>> posix locking = no
> 
> Which version of Samba are you using here? If it's relatively recent
> (4.6+), please rerun with asynchronous I/O enabled via:
>   [share]
>   aio read size = 1
>   aio write size = 1
> 
> ...these settings are the default with Samba 4.8+. AIO won't help the
> file creation / deletion benchmarks, but there should be a positive
> effect on read/write performance.
> 
>> Disabling all locking (locking = no) gives some further speed improvements.
>>
>> File locking hopefully will not be an issue...
>>
>> We are not exporting this share via NFS. The shares will only be used by
>> single clients (Windows or OSX Desktops) as a backup location.
>>
>> Specifically, each machine has a separate SMB-mounted folder, to which
>> it writes using either ChronoSync or Max SyncUp.
>>
>> One other point...
>> Will CTDB work with "posix locking = no"?
>> It would be great if CTDB works, as I'd like to have several SMB heads
>> to load-balance the clients.
> 
> Yes, it shouldn't affect CTDB. Clustered FS POSIX locks are used by CTDB
> for split-brain avoidance, and are separate from Samba's
> client-lock <-> POSIX-lock mapping.
> (https://wiki.samba.org/index.php/Configuring_the_CTDB_recovery_lock)
> FYI, CTDB is now also capable of using RADOS objects for the recovery
> lock:
> https://ctdb.samba.org/manpages/ctdb_mutex_ceph_rados_helper.7.html
> 
> Cheers, David
> 



Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-24 Thread Daniel Baumann
Hi,

On 05/24/2018 02:53 PM, David Disseldorp wrote:
>> [ceph_test]
>> path = /ceph-kernel
>> guest ok = no
>> delete readonly = yes
>> oplocks = yes
>> posix locking = no

jftr, we use the following to disable all locking (on samba 4.8.2):

  oplocks = False
  level2 oplocks = False
  kernel oplocks = no

Regards,
Daniel


Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-24 Thread David Disseldorp
Hi Jake,

On Thu, 24 May 2018 13:17:16 +0100, Jake Grimmett wrote:

> Hi Daniel, David,
> 
> Many thanks to both of you for your advice.
> 
> Sorry not to reply to the list, but I'm subscribed to the digest and my
> mail client will not reply to individual threads - I've switched back to
> regular.

No worries, cc'ing the list in this response.

> As to this issue, I've turned off posix locking, which has improved
> write speeds - here are the old benchmarks plus new figures.
> 
> i.e. Using Helios LanTest 6.0.0 on OS X.
> 
> Create 300 Files
>  Cephfs (kernel) > samba (no Posix locks)
>average  3600 ms
>  Cephfs (kernel) > samba. average 5100 ms
>  Isilon   > CIFS  average 2600 ms
>  ZFS > samba   average  121 ms
> 
> Remove 300 files
>  Cephfs (kernel) > samba (no Posix locks)
>average  2200 ms
>  Cephfs (kernel) > samba. average 2100 ms
>  Isilon   > CIFS  average  900 ms
>  ZFS > samba   average  421 ms
> 
> Write 300MB to file
>  Cephfs (kernel) > samba (no Posix locks)
>average  53 MB/s
>  Cephfs (kernel) > samba. average 25 MB/s
>  Isilon   > CIFS  average  17.9 MB/s
>  ZFS > samba   average  64.4 MB/s
> 
> 
> Settings as follows:
> [global]
> (snip)
> smb2 leases = yes
> 
> 
> [ceph_test]
> path = /ceph-kernel
> guest ok = no
> delete readonly = yes
> oplocks = yes
> posix locking = no

Which version of Samba are you using here? If it's relatively recent
(4.6+), please rerun with asynchronous I/O enabled via:
[share]
aio read size = 1
aio write size = 1

...these settings are the default with Samba 4.8+. AIO won't help the
file creation / deletion benchmarks, but there should be a positive
effect on read/write performance.
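
Combined with the share settings you already posted, a sketch of the
section might then look like this (only a sketch; values just mirror
what's already in this thread):

  [ceph_test]
  path = /ceph-kernel
  guest ok = no
  delete readonly = yes
  oplocks = yes
  posix locking = no
  # async I/O for any request of at least 1 byte
  # (these become the defaults from 4.8 onward)
  aio read size = 1
  aio write size = 1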

> Disabling all locking (locking = no) gives some further speed improvements.
> 
> File locking hopefully will not be an issue...
> 
> We are not exporting this share via NFS. The shares will only be used by
> single clients (Windows or OSX Desktops) as a backup location.
> 
> Specifically, each machine has a separate SMB-mounted folder, to which
> it writes using either ChronoSync or Max SyncUp.
> 
> One other point...
> Will CTDB work with "posix locking = no"?
> It would be great if CTDB works, as I'd like to have several SMB heads
> to load-balance the clients.

Yes, it shouldn't affect CTDB. Clustered FS POSIX locks are used by CTDB
for split-brain avoidance, and are separate from Samba's
client-lock <-> POSIX-lock mapping.
(https://wiki.samba.org/index.php/Configuring_the_CTDB_recovery_lock)
FYI, CTDB is now also capable of using RADOS objects for the recovery
lock:
https://ctdb.samba.org/manpages/ctdb_mutex_ceph_rados_helper.7.html
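
As a rough sketch of what that looks like on a 4.7/4.8-era CTDB (the
helper path varies by distro/build, and the cluster/user/pool/object
arguments below are only placeholders - see the man page above):

  # /etc/ctdb/ctdbd.conf
  # "!" tells ctdbd to run a mutex helper rather than take an fcntl lock
  CTDB_RECOVERY_LOCK="!/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.admin cephfs_metadata ctdb_reclock"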

Cheers, David


Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-22 Thread David Disseldorp
Hi Daniel and Jake,

On Mon, 21 May 2018 22:46:01 +0200, Daniel Baumann wrote:

> Hi
> 
> On 05/21/2018 05:38 PM, Jake Grimmett wrote:
> > Unfortunately we have a large number (~200) of Windows and Macs clients
> > which need CIFS/SMB  access to cephfs.  
> 
> we too, which is why we're (partially) exporting cephfs over samba too,
> 1.5y in production now.
> 
> for us, cephfs-over-samba is significantly slower than cephfs directly
> too, but it's not really an issue here (basically, if people use a
> windows client here, they're already on the slow track anyway).
> 
> we had to do two things to get it working reliably though:
> 
> a) disable all locking on samba (otherwise "opportunistic locking" on
> windows clients killed all MDS daemons within hours (kraken at that time))

Have you seen this on more recent versions? Please raise a bug if so -
client induced MDS outages would be a pretty serious issue.

If your share path is isolated from non-samba clients (as you mention
below), then allowing clients to cache reads / writes locally via
oplocks / SMB2+ leases should offer significant performance
improvements.
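
On an isolated share that essentially comes down to the settings
already shown elsewhere in this thread, roughly (sketch only; share
name and path are placeholders):

  [global]
  smb2 leases = yes

  [samba_only_share]
  path = /ceph-kernel/samba-only
  oplocks = yes
  # level2 oplocks default to yes anyway
  level2 oplocks = yes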

> b) only allow writes to a specific space on cephfs, reserved for samba
> (with luminous; otherwise, we'd have problems with data consistency on
> cephfs with people writing the same files from linux->cephfs and
> samba->cephfs concurrently). my hunch is that samba caches writes and
> doesn't give them back appropriately.

If you're sharing a kernel CephFS mount, then the Linux page cache will
be used for Samba share I/O, but Samba will fully abide by client sync
requests if "strict sync = yes" (default in Samba 4.7+).

> > Finally, is the vfs_ceph module for Samba useful? It doesn't seem to be
> > widely available pre-compiled for RHEL derivatives. Can anyone
> > comment on their experiences using vfs_ceph, or point me to a CentOS 7.x
> > repo that has it?  
> 
> we use debian, with backported kernel and backported samba, which has
> vfs_ceph pre-compiled. however, we couldn't make vfs_ceph work at all -
> the snapshot patterns just don't seem to match/align (and nothing we
> tried seemed to work).

vfs_ceph doesn't support snapshots at this stage. I hope to work on this
feature in the near future, so that it's fully integrated with the
Windows Explorer Previous Versions UI.
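
For anyone who wants to experiment with vfs_ceph anyway, a rough
sketch of a share using it (module options per vfs_ceph(8); the cephx
user id and paths are placeholders, and you need a Samba build that
ships the module):

  [ceph_vfs_test]
  # with vfs_ceph the path is within CephFS itself - no kernel or
  # FUSE mount of the filesystem is needed on the gateway
  path = /
  vfs objects = ceph
  ceph:config_file = /etc/ceph/ceph.conf
  ceph:user_id = samba
  # share modes can't be enforced via the local kernel when going
  # through libcephfs, so keep them out of the kernel
  kernel share modes = no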

Cheers, David


Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-21 Thread Daniel Baumann
Hi

On 05/21/2018 05:38 PM, Jake Grimmett wrote:
> Unfortunately we have a large number (~200) of Windows and Macs clients
> which need CIFS/SMB  access to cephfs.

we too, which is why we're (partially) exporting cephfs over samba too,
1.5y in production now.

for us, cephfs-over-samba is significantly slower than cephfs directly
too, but it's not really an issue here (basically, if people use a
windows client here, they're already on the slow track anyway).

we had to do two things to get it working reliably though:

a) disable all locking on samba (otherwise "opportunistic locking" on
windows clients killed all MDS daemons within hours (kraken at that time))

b) only allow writes to a specific space on cephfs, reserved for samba
(with luminous; otherwise, we'd have problems with data consistency on
cephfs with people writing the same files from linux->cephfs and
samba->cephfs concurrently). my hunch is that samba caches writes and
doesn't give them back appropriately.

> Finally, is the vfs_ceph module for Samba useful? It doesn't seem to be
> widely available pre-compiled for RHEL derivatives. Can anyone
> comment on their experiences using vfs_ceph, or point me to a CentOS 7.x
> repo that has it?

we use debian, with backported kernel and backported samba, which has
vfs_ceph pre-compiled. however, we couldn't make vfs_ceph work at all -
the snapshot patterns just don't seem to match/align (and nothing we
tried seemed to work).

Regards,
Daniel


[ceph-users] samba gateway experiences with cephfs ?

2018-05-21 Thread Jake Grimmett
Dear All,

Excited to see snapshots finally becoming a stable feature in cephfs :)

Unfortunately we have a large number (~200) of Windows and Macs clients
which need CIFS/SMB  access to cephfs.

Nonetheless, snapshots have prompted us to start testing ceph to see
if we can use it as a scale-out NAS...

cephfs native performance on our test setup appears good; however, tests
accessing it via samba have been slightly disappointing, especially with
small file I/O. Large file I/O is fair, but could still be improved.

Using Helios LanTest 6.0.0 on OS X.

Create 300 Files
 Cephfs (kernel) > samba. average 5100 ms
 Isilon > CIFS  average 2600 ms
 ZFS > samba average  121 ms

Remove 300 files
 Cephfs (kernel) > samba. average 2100 ms
 Isilon > CIFS  average  900 ms
 ZFS > samba average  421 ms

Write 300MB to file
 Cephfs (kernel) > samba. average 25 MB/s
 Isilon > CIFS  average  17.9 MB/s
 ZFS > samba average  64.4 MB/s

Hardware Used:
CephFS: five-node dual-Xeon cluster (120 BlueStore OSDs, 4 x NVMe for
CephFS metadata, bulk data EC 4+1), Scientific Linux 7.5, ceph
12.2.5, kernel client (fuse significantly slower).
Isilon: 6 years old, 8 x NL108
ZFS: SL 6.4 on a Dell R730XD, 24 x 1.8TB drives

Ceph Samba gateway is a separate machine: dual Xeon, 40Gb ethernet,
128GB RAM, also running SL 7.5.

Finally, is the vfs_ceph module for Samba useful? It doesn't seem to be
widely available pre-compiled for RHEL derivatives. Can anyone
comment on their experiences using vfs_ceph, or point me to a CentOS 7.x
repo that has it?

many thanks for all and any advice,

Jake
