Re: [ceph-users] CephFS "move" operation

2018-05-25 Thread Ric Wheeler
quot;foo", "security.selinux", "system_u:object_r:fusefs_t:s0", 255) > = 30 > > - > But I can assure it's only a single filesystem, and a single ceph-fuse > client running. > > Sa

Re: [ceph-users] CephFS "move" operation

2018-05-25 Thread Ric Wheeler
at 14:57, Ric Wheeler wrote: > > Is this move between directories on the same file system? > It is, we only have a single CephFS in use. There's also only a single ceph-fuse client running. > What's different, though, are the ACLs set for the source and target directory

Re: [ceph-users] CephFS "move" operation

2018-05-25 Thread Ric Wheeler
Is this move between directories on the same file system? Rename as a system call only works within a single file system. The user-space mv command falls back to a copy when source and target are not on the same file system. Regards, Ric On Fri, May 25, 2018, 8:51 AM John Spray wrote: > On Fri, May 25, 2018
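
To make the rename-versus-copy distinction concrete, here is a minimal C sketch (not from the thread) of the behaviour Ric describes: rename(2) fails with EXDEV across file systems, which is exactly what pushes user-space mv into copy-plus-unlink. Both paths are hypothetical:

/* Minimal sketch: rename(2) succeeds only within one file system.
 * Across mount points it fails with EXDEV, which is what makes
 * user-space mv fall back to copy + unlink. Paths are hypothetical. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (rename("/mnt/cephfs/dir_a/file", "/mnt/other_fs/file") == -1) {
        if (errno == EXDEV)
            fprintf(stderr, "rename: cross-filesystem move not possible (EXDEV)\n");
        else
            fprintf(stderr, "rename: %s\n", strerror(errno));
        return 1;
    }
    puts("renamed within a single file system");
    return 0;
}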

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Ric Wheeler
On 02/28/2018 10:06 AM, Max Cuttins wrote: On 28/02/2018 15:19, Jason Dillaman wrote: On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini wrote: I was building Ceph in order to use it with iSCSI. But I just saw from the docs that it needs: CentOS 7.5 (which is not

Re: [ceph-users] Debugging fstrim issues

2018-01-29 Thread Ric Wheeler
I might have missed something in the question. Fstrim does not free up space at the user level that you see with a normal df. It is meant to let the block device know about all of the space unused by the file system. Regards, Ric On Jan 29, 2018 11:56 AM, "Wido den Hollander"
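
For reference, what fstrim does under the hood is issue the FITRIM ioctl against a mounted file system so the block device is told which ranges are unused. A minimal C sketch, using a hypothetical mount point, might look like this:

/* Minimal sketch of what fstrim does: issue the FITRIM ioctl on a mounted
 * file system so the block device learns which ranges are unused.
 * "/mnt/xfs" is a hypothetical mount point. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* FITRIM, struct fstrim_range */

int main(void)
{
    struct fstrim_range range = {
        .start  = 0,
        .len    = UINT64_MAX,  /* whole file system */
        .minlen = 0,
    };

    int fd = open("/mnt/xfs", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, FITRIM, &range) < 0) {
        perror("ioctl(FITRIM)");
        close(fd);
        return 1;
    }
    /* On return, range.len holds the number of bytes actually trimmed. */
    printf("trimmed %llu bytes\n", (unsigned long long)range.len);
    close(fd);
    return 0;
}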

Re: [ceph-users] Ceph 4Kn Disk Support

2017-11-23 Thread Ric Wheeler
In any modern distribution, you should be fine. Regards, Ric On Nov 23, 2017 9:55 AM, "Hüseyin ÇOTUK" wrote: > Hello Everyone, > We are considering buying 4Kn block-size disks to use with Ceph. These disks report native 4 kB blocks to the OS rather than using 512-byte
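
A quick way to check what a drive actually reports to the OS is to ask the block layer for its logical and physical sector sizes. The sketch below is illustrative only and uses a placeholder device name; a 4Kn drive should report 4096 for both values, while a 512e drive reports 512 logical / 4096 physical:

/* Minimal sketch: check what block sizes a disk reports to the OS.
 * "/dev/sdb" is a placeholder device name. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* BLKSSZGET, BLKPBSZGET */

int main(void)
{
    int fd = open("/dev/sdb", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;
    unsigned int physical = 0;
    if (ioctl(fd, BLKSSZGET, &logical) < 0 || ioctl(fd, BLKPBSZGET, &physical) < 0) {
        perror("ioctl");
        close(fd);
        return 1;
    }
    printf("logical sector size: %d, physical block size: %u\n", logical, physical);
    close(fd);
    return 0;
}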

Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-14 Thread Ric Wheeler
On 09/14/2017 11:17 AM, Ronny Aasen wrote: On 14. sep. 2017 00:34, James Okken wrote: Thanks Ronny! Exactly the info I need, and kind of what I thought the answer would be as I was typing and thinking more clearly about what I was asking. I was just hoping Ceph would work like this since the

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Ric Wheeler
On 08/02/2016 07:26 PM, Ilya Dryomov wrote: This seems to reflect the granularity (4194304), which matches the 8192 pages (8192 x 512 = 4194304). However, there is no alignment value. > Can discard_alignment be specified with RBD? It's exported as a read-only sysfs attribute, just like
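
Since the attributes Ilya mentions are plain sysfs files, they can simply be read from /sys. The following sketch assumes a mapped rbd0 device and the standard block-device sysfs layout; treat the exact paths as an assumption rather than something confirmed in the thread:

/* Minimal sketch: the discard attributes mentioned above are plain sysfs
 * files and can simply be read. "rbd0" is a placeholder device name, and
 * the exact sysfs layout is assumed from standard block devices. */
#include <stdio.h>

static void print_attr(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");
    if (!f || !fgets(buf, sizeof(buf), f)) {
        fprintf(stderr, "could not read %s\n", path);
        if (f) fclose(f);
        return;
    }
    printf("%s: %s", path, buf);   /* sysfs values normally end with '\n' */
    fclose(f);
}

int main(void)
{
    print_attr("/sys/block/rbd0/queue/discard_granularity");
    print_attr("/sys/block/rbd0/discard_alignment");
    return 0;
}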

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-29 Thread Ric Wheeler
On 03/29/2016 04:53 PM, Nick Fisk wrote: -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ric Wheeler Sent: 29 March 2016 14:40 To: Nick Fisk <n...@fisk.me.uk>; 'Sage Weil' <s...@newdream.net> Cc: ceph-users@lists.ceph.com; d

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-29 Thread Ric Wheeler
On 03/29/2016 04:35 PM, Nick Fisk wrote: One thing I picked up on when looking at dm-cache for caching RBDs is that it wasn't really designed to be used as a writeback cache for new writes, in the way you would expect a traditional writeback cache to work. It seems all the policies

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-29 Thread Ric Wheeler
On 03/29/2016 01:35 PM, Van Leeuwen, Robert wrote: If you try to look at the rbd device under dm-cache from another host, of course any data that was cached on the dm-cache layer will be missing since the dm-cache device itself is local to the host you wrote the data from originally. And here

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-29 Thread Ric Wheeler
On 03/29/2016 10:06 AM, Van Leeuwen, Robert wrote: On 3/27/16, 9:59 AM, "Ric Wheeler" <rwhee...@redhat.com> wrote: On 03/16/2016 12:15 PM, Van Leeuwen, Robert wrote: My understanding of how a writeback cache should work is that it should only take a few seconds for write

Re: [ceph-users] Local SSD cache for ceph on each compute node.

2016-03-27 Thread Ric Wheeler
On 03/16/2016 12:15 PM, Van Leeuwen, Robert wrote: My understanding of how a writeback cache should work is that it should only take a few seconds for writes to be streamed onto the network and is focussed on resolving the speed issue of small sync writes. The writes would be bundled into

Re: [ceph-users] how ceph osd handle ios sent from crashed ceph client

2016-03-09 Thread Ric Wheeler
On 03/08/2016 08:09 PM, Jason Dillaman wrote: librbd provides crash-consistent IO. It is still up to your application to provide its own consistency by adding barriers (flushes) where necessary. If you flush your IO, once that flush completes you are guaranteed that your previous IO is
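
As an illustration of the application-side barrier Jason describes (not code from the thread), the pattern is simply: write, flush, and only rely on the data once the flush has returned. A minimal C sketch against a hypothetical mapped RBD device:

/* Minimal sketch of the application-side barrier described above: write,
 * then flush, and only treat the data as durable once the flush returns.
 * "/dev/rbd0" is a hypothetical mapped RBD device; the same pattern applies
 * to a file on a file system backed by RBD. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/rbd0", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    const char msg[] = "critical record";
    if (write(fd, msg, sizeof(msg)) != (ssize_t)sizeof(msg)) {
        perror("write");
        close(fd);
        return 1;
    }

    /* The barrier: nothing before this point is guaranteed durable until
     * fdatasync() returns successfully. */
    if (fdatasync(fd) < 0) {
        perror("fdatasync");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}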

Re: [ceph-users] xfs corruption

2016-03-07 Thread Ric Wheeler
you please suggest such a RAID card? We are on the verge of deciding whether to use hardware or software RAID, because our OpenStack cluster uses full SSD storage (local RAID 10) and my manager wants to utilize hardware RAID with SSD disks. On Mon, Mar 7, 2016 at 10:04 AM, Ric Wheeler

Re: [ceph-users] xfs corruption

2016-03-07 Thread Ric Wheeler
configuration is raid 0 or raid 1. On Mon, Mar 7, 2016 at 9:21 AM, Ric Wheeler <rwhee...@redhat.com> wrote: It is perfectly reasonable and common to use hardware RAID cards in writeback mode under XFS (and under Ceph) if you configure them properly.

Re: [ceph-users] xfs corruption

2016-03-06 Thread Ric Wheeler
It is perfectly reasonable and common to use hardware RAID cards in writeback mode under XFS (and under Ceph) if you configure them properly. The key thing is that with the writeback cache enabled, you need to make sure that the S-ATA drives' own write cache is disabled. Also make sure that

Re: [ceph-users] OSD size and performance

2016-01-04 Thread Ric Wheeler
I am not sure why you want to layer a clustered file system (OCFS2) on top of Ceph RBD. Seems like a huge overhead and a ton of complexity. Better to use CephFS if you want Ceph at the bottom, or to just use iSCSI LUNs under OCFS2. Regards, Ric On 01/04/2016 10:28 AM, Srinivasula Maram

Re: [ceph-users] xfs corruption, data disaster!

2015-05-11 Thread Ric Wheeler
On 05/05/2015 04:13 AM, Yujian Peng wrote: Emmanuel Florac eflorac@... writes: On Mon, 4 May 2015 07:00:32 + (UTC), Yujian Peng pengyujian5201314 at 126.com wrote: I'm encountering a data disaster. I have a ceph cluster with 145 OSDs. The data center had a power problem yesterday, and

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-05 Thread Ric Wheeler
On 04/05/2015 11:22 AM, Nick Fisk wrote: Hi Justin, I'm doing iSCSI HA. Myself and several others have had troubles with LIO and Ceph, so until the problems are fixed, I wouldn't recommend that approach. But hopefully it will become the best solution in the future. If you need iSCSI, currently

Re: [ceph-users] xfs/nobarrier

2014-12-29 Thread Ric Wheeler
On 12/27/2014 02:32 AM, Lindsay Mathieson wrote: I see a lot of people mount their XFS OSDs with nobarrier for extra performance; it certainly makes a huge difference on my small system. However, I don't do it, as my understanding is that this runs a risk of data corruption in the event of power

Re: [ceph-users] the state of cephfs in giant

2014-10-16 Thread Ric Wheeler
On 10/15/2014 08:43 AM, Amon Ott wrote: On 14.10.2014 16:23, Sage Weil wrote: On Tue, 14 Oct 2014, Amon Ott wrote: On 13.10.2014 20:16, Sage Weil wrote: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... *