Re: [ceph-users] Samba vfs_ceph or kernel client

2019-05-16 Thread David Disseldorp
Hi Maged, On Fri, 10 May 2019 18:32:15 +0200, Maged Mokhtar wrote: > What is the recommended way for Samba gateway integration: using > vfs_ceph or mounting CephFS via the kernel client? I tested the kernel > solution in a CTDB setup and it gave good performance; does it have any > limitations
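
For reference, a minimal sketch of the two integration styles being compared; the share names, paths and cephx user below are placeholders, not details from this thread:

    # Option 1: CephFS mounted with the kernel client, exported as a plain path
    [cephfs_kernel]
        path = /mnt/cephfs/share
        read only = no

    # Option 2: direct access through the vfs_ceph module, no local mount needed
    [cephfs_vfs]
        vfs objects = ceph
        path = /share
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no

With vfs_ceph, "kernel share modes = no" is needed because the share does not sit on a locally mounted filesystem.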

Re: [ceph-users] CephFS Snapshots in Mimic

2018-07-31 Thread David Disseldorp
Hi Kenneth, On Tue, 31 Jul 2018 16:44:36 +0200, Kenneth Waegeman wrote: > Hi all, > > I updated an existing Luminous cluster to Mimic 13.2.1. All daemons were > updated, so I did ceph osd require-osd-release mimic, so everything > seems up to date. > > I want to try the snapshots in Mimic,
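
A hedged sketch of what enabling and exercising snapshots typically looks like; the filesystem name, mount point and snapshot name are illustrative:

    # snapshots still default to off, enable them per filesystem
    ceph fs set cephfs allow_new_snaps true

    # create and list a snapshot through the hidden .snap directory
    mkdir /mnt/cephfs/mydir/.snap/before-change
    ls /mnt/cephfs/mydir/.snap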

Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-24 Thread David Disseldorp
On Thu, 24 May 2018 15:13:09 +0200, Daniel Baumann wrote: > On 05/24/2018 02:53 PM, David Disseldorp wrote: > >> [ceph_test] > >> path = /ceph-kernel > >> guest ok = no > >> delete readonly = yes > >> oplocks = yes > >> posix locking = no
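
For readability, the quoted share definition laid out as it would appear in smb.conf:

    [ceph_test]
        path = /ceph-kernel
        guest ok = no
        delete readonly = yes
        oplocks = yes
        posix locking = no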

Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-24 Thread David Disseldorp
Hi Jake, On Thu, 24 May 2018 13:17:16 +0100, Jake Grimmett wrote: > Hi Daniel, David, > > Many thanks for both of your advice. > > Sorry not to reply to the list, but I'm subscribed to the digest and my > mail client will not reply to individual threads - I've switched back to > regular. No

Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-22 Thread David Disseldorp
Hi Daniel and Jake, On Mon, 21 May 2018 22:46:01 +0200, Daniel Baumann wrote: > Hi > > On 05/21/2018 05:38 PM, Jake Grimmett wrote: > > Unfortunately we have a large number (~200) of Windows and Macs clients > > which need CIFS/SMB access to cephfs. > > we too, which is why we're

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-12 Thread David Disseldorp
Hi Maged, On Mon, 12 Mar 2018 20:41:22 +0200, Maged Mokhtar wrote: > I was thinking we would get the block request then loop down to all its > osd requests and cancel those using the same osd request cancel > function. Until we can be certain of termination, I don't think it makes sense to

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-12 Thread David Disseldorp
On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote: > 2) I understand that before switching the path, the initiator will send a > TMF ABORT; can we pass this down to the same abort_request() function > in osd_client that is used for osd_request_timeout expiry? IIUC, the existing

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-07 Thread David Disseldorp
Hi shadowlin, On Wed, 7 Mar 2018 23:24:42 +0800, shadow_lin wrote: > Is it safe to use active/active multipath if using the SUSE kernel with > target_core_rbd? > Thanks. A cross-gateway failover race condition similar to what Mike described is currently possible with active/active target_core_rbd.
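
Until that race is resolved, a failover (active/passive) path policy on the initiator is the safer choice. A minimal multipath.conf sketch; the vendor/product strings are assumptions and must match whatever the gateway actually reports in its INQUIRY data:

    devices {
        device {
            vendor  "SUSE"                    # placeholder - check the target
            product "RBD"                     # placeholder
            path_grouping_policy "failover"   # one active path at a time
            path_checker "tur"
            failback "manual"
        }
    }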

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread David Disseldorp
On Thu, 1 Mar 2018 09:11:21 -0500, Jason Dillaman wrote: > It's very high on our priority list to get a solution merged in the > upstream kernel. There was a proposal to use DLM to distribute the PGR > state between target gateways (a la the SCST target) and it's quite > possible that would have

Re: [ceph-users] How to use vfs_ceph

2017-12-22 Thread David Disseldorp
On Fri, 22 Dec 2017 12:10:18 +0100, Felix Stolte wrote: > I am using Samba 4.6.7 (shipped with Ubuntu 17.10). I've got it working > now by copying the ceph.client.admin.keyring to /etc/ceph (I'm very > unhappy with that). The ceph:user_id smb.conf functionality was first shipped with Samba
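
For anyone hitting the same problem, the usual way to avoid shipping the admin keyring is a dedicated cephx identity plus the ceph:user_id share parameter; a hedged sketch, with the user name, pool and share layout as assumptions:

    # create a restricted cephx user for Samba
    ceph auth get-or-create client.samba \
        mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data' \
        -o /etc/ceph/ceph.client.samba.keyring

    # smb.conf share using that identity instead of client.admin
    [cephfs]
        vfs objects = ceph
        path = /
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no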

Re: [ceph-users] Ceph+RBD+ISCSI = ESXI issue

2017-12-04 Thread David Disseldorp
Hi Nigel, On Fri, 1 Dec 2017 13:32:43 +, nigel davies wrote: > Ceph version 10.2.5 > > I have had a Ceph cluster going for a few months, with iSCSI servers that > are linked to Ceph by RBD. > > All of a sudden I am seeing the ESXi server lose the iSCSI data > store (disk space

Re: [ceph-users] Ceph-ISCSI

2017-10-12 Thread David Disseldorp
On Wed, 11 Oct 2017 14:03:59 -0400, Jason Dillaman wrote: > On Wed, Oct 11, 2017 at 1:10 PM, Samuel Soulard > wrote: > > Hmmm, if you fail over the identity of the LIO configuration including PGRs > > (I believe they are files on disk), this would work, no? Using an 2

Re: [ceph-users] Ceph-ISCSI

2017-10-11 Thread David Disseldorp
Hi Jason, Thanks for the detailed write-up... On Wed, 11 Oct 2017 08:57:46 -0400, Jason Dillaman wrote: > On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López > wrote: > > > As far as I am able to understand there are 2 ways of setting iscsi for > > ceph > > > > 1- using

Re: [ceph-users] State of play for RDMA on Luminous

2017-08-28 Thread David Disseldorp
Hi Florian, On Wed, 23 Aug 2017 10:26:45 +0200, Florian Haas wrote: > - In case there is no such support in the kernel yet: What's the current > status of RDMA support (and testing) with regard to > * libcephfs? > * the Samba Ceph VFS? On the client side, SMB3 added an SMB-Direct

Re: [ceph-users] Does cephfs guarantee client cache consistency for file data?

2017-04-19 Thread David Disseldorp
Hi, On Wed, 19 Apr 2017 08:19:50 +, 许雪寒 wrote: > I’m new to cephfs. I wonder whether cephfs guarantees client cache consistency > for file content. For example, if client A reads some data of file X, then > client B modifies X’s content in the range that A read, will A be > notified of

Re: [ceph-users] rbd iscsi gateway question

2017-04-06 Thread David Disseldorp
On Thu, 6 Apr 2017 14:27:01 +0100, Nick Fisk wrote: ... > > I'm not too sure what you're referring to WRT the spiral of death, but we did > > patch some LIO issues encountered when a command was aborted while > > outstanding at the LIO backstore layer. > > These specific fixes are carried in the

Re: [ceph-users] rbd iscsi gateway question

2017-04-06 Thread David Disseldorp
Hi, On Thu, 6 Apr 2017 13:31:00 +0100, Nick Fisk wrote: > > I believe there > > was a request to include it in the mainline kernel but it did not happen, > > probably waiting for the TCMU solution, which will be a better/cleaner design. Indeed, we're proceeding with TCMU as a future upstream-acceptable

Re: [ceph-users] Fwd: Upgrade Woes on suse leap with OBS ceph.

2017-02-24 Thread David Disseldorp
Hi, On Thu, 23 Feb 2017 21:07:41 -0800, Schlacta, Christ wrote: > So hopefully when the suse ceph team get 11.2 released it should fix this, > yes? Please raise a bug at bugzilla.opensuse.org, so that we can track this for the next openSUSE maintenance update. Cheers, David

[ceph-users] [RFC] rbdmap unmap - unmap all, or only RBDMAPFILE listed images?

2017-02-15 Thread David Disseldorp
Hi, I'm working on an rbdmap change https://github.com/ceph/ceph/pull/13361, and would appreciate some input from existing users. Currently "rbdmap map" maps any RBD images listed in the rbdmap config file (RBDMAPFILE), whereas "rbdmap unmap" unmaps all mapped RBD images, regardless of whether
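
For context, RBDMAPFILE normally points at /etc/ceph/rbdmap, one image per line; a hedged example of the file and the two operations under discussion (pool and image names are placeholders):

    # /etc/ceph/rbdmap
    rbd/vm-disk-1   id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
    rbd/vm-disk-2   id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

    rbdmap map      # maps only the images listed above
    rbdmap unmap    # currently unmaps all mapped RBD images, listed or not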

Re: [ceph-users] cephfs quota

2016-12-16 Thread David Disseldorp
Hi Matthew, On Fri, 16 Dec 2016 12:30:06 +, Matthew Vernon wrote: > Hello, > On 15/12/16 10:25, David Disseldorp wrote: > > > Are you using the Linux kernel CephFS client (mount.ceph), or the > > userspace ceph-fuse back end? Quota enforcement is perfor

Re: [ceph-users] cephfs quota

2016-12-16 Thread David Disseldorp
On Fri, 16 Dec 2016 12:48:39 +0530, gjprabu wrote: > Now we have mounted the client using ceph-fuse and it is still allowing me to put data > above the limit (100MB). Below are the quota details. > > > > getfattr -n ceph.quota.max_bytes test > > # file: test > > ceph.quota.max_bytes="1" > > >

Re: [ceph-users] cephfs quota

2016-12-15 Thread David Disseldorp
Hi Prabu, On Thu, 15 Dec 2016 13:11:50 +0530, gjprabu wrote: > We are using ceph version 10.2.4 (Jewel) and the data is mounted > with the cephfs file system on linux. We are trying to set quotas for directories > and files but it didn't work with the document below. I have set 100mb for >
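
A hedged sketch of how quotas are normally set and checked on Jewel; the paths are placeholders, and note that on this release enforcement happens in the userspace client (ceph-fuse/libcephfs), not in the kernel mount:

    # set a 100 MB quota on a directory (value is in bytes)
    setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/test

    # check it
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/test

    # mount with ceph-fuse for enforcement; if I recall correctly, Jewel may also
    # need "client quota = true" in the [client] section of ceph.conf
    ceph-fuse /mnt/cephfs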

Re: [ceph-users] Prevent cephfs clients from mount and browsing "/"

2016-12-05 Thread David Disseldorp
Hi Martin, On Mon, 5 Dec 2016 13:27:01 +0100, Martin Palma wrote: > Ok, just discovered that with the fuse client, we have to add the '-r > /path' option, to treat that as root. So I assume the caps 'mds allow > r' is only needed if we also want to be able to mount the directory > with the
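
A hedged illustration of the combination being discussed, path-restricted MDS caps plus the fuse client's -r option; the client name, path and pool are placeholders:

    # cap the client to a subtree; read caps on / are only needed to browse above it
    ceph auth get-or-create client.project1 \
        mon 'allow r' \
        mds 'allow rw path=/project1' \
        osd 'allow rw pool=cephfs_data'

    # mount with /project1 treated as the client's root
    ceph-fuse -n client.project1 -r /project1 /mnt/project1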

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread David Disseldorp
Hi Maged, Thanks for the announcement - good luck with the project! One comment... On Mon, 17 Oct 2016 13:37:29 +0200, Maged Mokhtar wrote: > if you are referring to clustering reservations through VAAI. We are using > upstream code from SUSE Enterprise Storage which adds clustered support for

Re: [ceph-users] mount -t ceph

2016-04-27 Thread David Disseldorp
Hi Tom, On Wed, 27 Apr 2016 20:17:51 +, Deneau, Tom wrote: > I was using SLES 12, SP1 which has 3.12.49 > > It did have a /usr/sbin/mount.ceph command but using it gave > modprobe: FATAL: Module ceph not found. > failed to load ceph kernel module (1) The SLES 12 SP1 kernel doesn't
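
On a kernel that does ship the module, the usual sequence looks roughly like this; the monitor address, user and secret file are placeholders:

    # confirm the cephfs kernel module is available before calling mount.ceph
    modprobe ceph && lsmod | grep '^ceph'

    # kernel-client mount (mount -t ceph invokes mount.ceph)
    mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret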

Re: [ceph-users] ocfs2 for OSDs?

2013-09-11 Thread David Disseldorp
Hi Sage, On Wed, 11 Sep 2013 09:18:13 -0700 (PDT) Sage Weil s...@inktank.com wrote: > REFLINKs (inode-based writeable snapshots) This is the one item on this list I see that the ceph-osds could take real advantage of; it would make object clones triggered by things like RBD snapshots