Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Roger Brown
Ah, yes. This cluster has had all all the versions of Luminous on it. Started with Kraken and went to every Luminous release candidate to date. So I guess I'll just do the `ceph osd pool application enable` commands and be done with it. I appreciate your assistance. Roger On Fri, Aug 4, 2017

Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Gregory Farnum
Roger, was this a test cluster that was already running Luminous? The auto-assignment logic won't work in that case (it's already got the CEPH_RELEASE_LUMINOUS feature set which we're using to run it). I'm not sure if there's a good way to do that upgrade that's worth the effort. -Greg On Fri,

[ceph-users] broken parent/child relationship

2017-08-04 Thread Shawn Edwards
I have a child rbd that doesn't acknowledge its parent. this is with Kraken (11.2.0) The misbehaving child was 'flatten'ed from its parent, but now I can't remove the snapshot because it thinks it has a child still. root@tyr-ceph-mon0:~# rbd snap ls

[ceph-users] Ceph activities at LCA

2017-08-04 Thread Leonardo Vaz
Dear Cephers, As most of you know the deadline for submitting talks on LCA (Linux Conf Australia) is on this Saturday (Aug 4) and we would like to know if anyone here is planning to participate the conference and present talks on Ceph. I was just talking with Sage and besides the talks submitted

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread Marc Roos
I am still on 12.1.1, it is still a test 3 node cluster, nothing much happening. 2nd node had some issues a while ago, I had an osd.8 that didn’t want to start so I replaced it. -Original Message- From: David Turner [mailto:drakonst...@gmail.com] Sent: vrijdag 4 augustus 2017

Re: [ceph-users] cephfs increase max file size

2017-08-04 Thread Brady Deetz
https://www.spinics.net/lists/ceph-users/msg36285.html On Aug 4, 2017 8:28 AM, "Rhian Resnick" wrote: > Morning, > > > We ran into an issue with the default max file size of a cephfs file. Is > it possible to increase this value to 20 TB from 1 TB without recreating > the file

Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Gregory Farnum
Yes. https://github.com/ceph/ceph/blob/master/src/mon/OSDMonitor.cc#L1069 On Fri, Aug 4, 2017 at 9:14 AM David Turner wrote: > Should they be auto-marked if you upgraded an existing cluster to Luminous? > > On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum

Re: [ceph-users] application not enabled on pool

2017-08-04 Thread David Turner
Should they be auto-marked if you upgraded an existing cluster to Luminous? On Fri, Aug 4, 2017 at 12:13 PM Gregory Farnum wrote: > All those pools should have been auto-marked as owned by rgw though. We do > have a ticket around that (http://tracker.ceph.com/issues/20891)

Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Gregory Farnum
All those pools should have been auto-marked as owned by rgw though. We do have a ticket around that (http://tracker.ceph.com/issues/20891) but so far it's just confusing. -Greg On Fri, Aug 4, 2017 at 9:07 AM Roger Brown wrote: > Got it, thanks! > > On Fri, Aug 4, 2017 at

Re: [ceph-users] application not enabled on pool

2017-08-04 Thread Roger Brown
Got it, thanks! On Fri, Aug 4, 2017 at 9:48 AM David Turner wrote: > In the 12.1.2 release notes it stated... > > Pools are now expected to be associated with the application using them. > Upon completing the upgrade to Luminous, the cluster will attempt to >

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread David Turner
It _should_ be enough. What happened in your cluster recently? Power Outage, OSD failures, upgrade, added new hardware, any changes at all. What is your Ceph version? On Fri, Aug 4, 2017 at 11:22 AM Marc Roos wrote: > > I have got a placement group inconsistency, and

Re: [ceph-users] application not enabled on pool

2017-08-04 Thread David Turner
In the 12.1.2 release notes it stated... Pools are now expected to be associated with the application using them. Upon completing the upgrade to Luminous, the cluster will attempt to associate existing pools to known applications (i.e. CephFS, RBD, and RGW). In-use pools that are not

[ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread Marc Roos
I have got a placement group inconsistency, and saw some manual where you can export and import this on another osd. But I am getting an export error on every osd. What does this export_files error -5 actually mean? I thought 3 copies should be enough to secure your data. > PG_DAMAGED

[ceph-users] application not enabled on pool

2017-08-04 Thread Roger Brown
Is this something new in Luminous 12.1.2, or did I break something? Stuff still seems to function despite the warnings. $ ceph health detail POOL_APP_NOT_ENABLED application not enabled on 14 pool(s) application not enabled on pool 'default.rgw.buckets.non-ec' application not enabled on

Re: [ceph-users] cephfs increase max file size

2017-08-04 Thread Roger Brown
Woops, nvm my last. My eyes deceived me. On Fri, Aug 4, 2017 at 8:21 AM Roger Brown wrote: > Did you really mean to say "increase this value to 20 TB from 1 TB"? > > > On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick wrote: > >> Morning, >> >> >> We ran

Re: [ceph-users] cephfs increase max file size

2017-08-04 Thread Roger Brown
Did you really mean to say "increase this value to 20 TB from 1 TB"? On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick wrote: > Morning, > > > We ran into an issue with the default max file size of a cephfs file. Is > it possible to increase this value to 20 TB from 1 TB without

[ceph-users] cephfs increase max file size

2017-08-04 Thread Rhian Resnick
Morning, We ran into an issue with the default max file size of a cephfs file. Is it possible to increase this value to 20 TB from 1 TB without recreating the file system? Rhian Resnick Assistant Director Middleware and HPC Office of Information Technology Florida Atlantic University

Re: [ceph-users] Rados lib object clone api

2017-08-04 Thread Muthusamy Muthiah
Thank you Greg, I will look into it and I hope the self managed and pool snapshot will work for Erasure pool also, we predominantly use Erasure coding. Thanks, Muthu On Wednesday, 2 August 2017, Gregory Farnum wrote: > On Tue, Aug 1, 2017 at 8:29 AM Muthusamy Muthiah < >

Re: [ceph-users] expanding cluster with minimal impact

2017-08-04 Thread bruno.canning
Hi Laszlo, I've used Dan's script to deploy 9 storage nodes (36 x 6TB data disks/node) into our dev cluster as practice for deployment into our production cluster. The script performs very well. In general, disruption to a cluster (e.g. impact on client I/O) is minimised by osd_max_backfills

Re: [ceph-users] expanding cluster with minimal impact

2017-08-04 Thread Dan van der Ster
Hi Laszlo, The script defaults are what we used to do a large intervention (the default delta weight is 0.01). For our clusters going any faster becomes disruptive, but this really depends on your cluster size and activity. BTW, in case it wasn't clear, to use this script for adding capacity you