[ceph-users] How do I setpolicy to deny deletes for a bucket

2019-05-29 Thread Priya Sehgal
I want to deny deletes on one of my buckets. I tried to run "s3cmd setpolicy" with two configs (JSON files). I do not get any error code, and when I run getpolicy I see the same JSON. However, when I try to delete objects in the bucket, the deletes still succeed. Please let me know
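
For reference, a minimal sketch of the kind of deny-delete policy under discussion, applied with s3cmd. The bucket name is a placeholder, and whether RGW actually enforces it depends on the RGW release and on the Principal matching the requesting user:

    cat > deny-delete.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": ["arn:aws:s3:::mybucket/*"]
      }]
    }
    EOF
    s3cmd setpolicy deny-delete.json s3://mybucket
    s3cmd info s3://mybucket    # the Policy shown here is what getpolicy returns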

[ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-29 Thread Mike Cave
Good afternoon, I’m about to expand my cluster from 380 to 480 OSDs (5 nodes with 20 disks per node) and am trying to determine the best way to go about this task. I deployed the cluster with ceph ansible and everything worked well. So I’d like to add the new nodes with ceph ansible as well.
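
One common approach (an assumption on my part, not necessarily what this thread settles on) is to have the new OSDs join with CRUSH weight 0 via osd_crush_initial_weight in ceph-ansible's ceph_conf_overrides, then raise their weight gradually once they are up:

    # add to group_vars/all.yml (merge into ceph_conf_overrides if it already exists)
    cat >> group_vars/all.yml <<'EOF'
    ceph_conf_overrides:
      osd:
        osd_crush_initial_weight: 0
    EOF
    # after the playbook has deployed the new nodes, ramp each OSD in small steps;
    # the OSD id and target weight below are placeholders
    ceph osd crush reweight osd.380 1.0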

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-05-29 Thread J. Eric Ivancich
Hi Wido, When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired and be ready for clean-up. If after running `radosgw-admin gc process` the same entries appear in `radosgw-admin gc
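
For anyone following along, the commands and flag referred to above (syntax as in Luminous/Mimic):

    radosgw-admin gc list                 # only entries whose expiration has passed
    radosgw-admin gc list --include-all   # every queued entry, expired or not
    radosgw-admin gc process              # run garbage collection immediately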

Re: [ceph-users] Nfs-ganesha with rados_kv backend

2019-05-29 Thread Jeff Layton
On Wed, 2019-05-29 at 13:49 +, Stolte, Felix wrote: > Hi, > > is anyone running an active-passive nfs-ganesha cluster with cephfs backend > and using the rados_kv recovery backend? My setup runs fine, but takeover is > giving me a headache. On takeover I see the following messages in

Re: [ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Gregory Farnum
These OSDs are far too small at only 10GiB for the balancer to try and do any work. It's not uncommon for metadata like OSDMaps to exceed that size in error states and in any real deployment a single PG will be at least that large. There are probably parameters you can tweak to try and make it

Re: [ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Oliver Freyermuth
Hi Tarek, that's good news, glad my hunch was correct :-). On 29.05.19 at 19:31, Tarek Zegar wrote: > Hi Oliver > > Here is the output of the active mgr log after I toggled balancer off / on, I > grep'd out only "balancer" as it was far too verbose (see below). When I look > at ceph osd df I

Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Paul Emmerich
On Wed, May 29, 2019 at 11:37 AM Robert Sander wrote: > Hi, > > On 29.05.19 at 11:19, Martin Verges wrote: > > > > We have identified the performance settings in the BIOS as a major > > factor > > > > could you share your insights what options you changed to increase > > performance and

Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Paul Emmerich
On Wed, May 29, 2019 at 9:36 AM Robert Sander wrote: > On 24.05.19 at 14:43, Paul Emmerich wrote: > > * SSD model? Lots of cheap SSDs simply can't handle more than that > The customer currently has 12 Micron 5100 1.92 TB (Micron_5100_MTFDDAK1) > SSDs and will get a batch of Micron 5200 in the

Re: [ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Tarek Zegar
Hi Oliver Here is the output of the active mgr log after I toggled balancer off / on, I grep'd out only "balancer" as it was far too verbose (see below). When I look at ceph osd df I see it optimized :) I would like to understand two things however, why is "prepared 0/10 changes" zero if it

Re: [ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Oliver Freyermuth
Hi Tarek, On 29.05.19 at 18:49, Tarek Zegar wrote: > Hi Oliver, > > Thank you for the response, I did ensure that min-client-compat-level is > indeed Luminous (see below). I have no kernel-mapped rbd clients. Ceph > versions reports mimic. Also below is the output of ceph balancer status.

Re: [ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Tarek Zegar
Hi Oliver, Thank you for the response, I did ensure that min-client-compat-level is indeed Luminous (see below). I have no kernel-mapped rbd clients. Ceph versions reports mimic. Also below is the output of ceph balancer status. One thing to note, I did enable the balancer after I already

Re: [ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Oliver Freyermuth
Hi Tarek, what's the output of "ceph balancer status"? In case you are using "upmap" mode, you must make sure to have a min-client-compat-level of at least Luminous: http://docs.ceph.com/docs/mimic/rados/operations/upmap/ Of course, please be aware that your clients must be recent enough
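
The checks and the setting in question look roughly like this (command names as in Luminous/Mimic):

    ceph balancer status
    ceph osd dump | grep require_min_compat_client
    ceph features                                    # which client releases are actually connected
    ceph osd set-require-min-compat-client luminous  # required before upmap mode will work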

Re: [ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Marc Roos
I had this with the balancer active and "crush-compat": MIN/MAX VAR: 0.43/1.59 STDDEV: 10.81. By increasing the pg count of some pools (from 8 to 64) and deleting empty pools, I got to MIN/MAX VAR: 0.59/1.28 STDDEV: 6.83 (do not want to go to upmap yet). -Original Message-
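
For reference, raising pg_num on a pool looks like the following; the pool name and target value are placeholders, and on releases before Nautilus pgp_num has to be bumped as well:

    ceph osd pool set mypool pg_num 64
    ceph osd pool set mypool pgp_num 64
    ceph osd df        # re-check MIN/MAX VAR and STDDEV afterwards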

[ceph-users] Balancer: uneven OSDs

2019-05-29 Thread Tarek Zegar
Can anyone help with this? Why can't I optimize this cluster? The pg counts and data distribution are way off. I enabled the balancer plugin and even tried to manually invoke it, but it won't allow any changes. Looking at ceph osd df, it's not even at all. Thoughts?
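
For context, enabling the balancer and invoking it by hand typically looks like this ("myplan" is a placeholder plan name; the mode depends on your cluster and clients):

    ceph mgr module enable balancer
    ceph balancer mode upmap        # or crush-compat
    ceph balancer on
    # manual one-shot run:
    ceph balancer optimize myplan
    ceph balancer show myplan
    ceph balancer eval myplan       # score of the proposed distribution
    ceph balancer execute myplan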

Re: [ceph-users] Meaning of Ceph MDS / Rank in "Stopped" state.

2019-05-29 Thread Wesley Dillingham
On further thought, I'm now thinking this is telling me which rank is stopped (2), not that two ranks are stopped. I guess I am still curious about why this information is retained here, and whether rank 2 can be made active again. If so, would this be cleaned up out of "stopped"? The state diagram

Re: [ceph-users] [events] Ceph Day Netherlands July 2nd - CFP ends June 3rd

2019-05-29 Thread Mike Perez
Hi everyone, This is the last week to submit for the Ceph Day Netherlands CFP ending June 3rd: https://ceph.com/cephdays/netherlands-2019/ https://zfrmz.com/E3ouYm0NiPF1b3NLBjJk -- Mike Perez (thingee) On Thu, May 23, 2019 at 10:12 AM Mike Perez wrote: > > Hi everyone, > > We will be having

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-29 Thread Jake Grimmett
Thank you for a lot of detailed and useful information :) I'm tempted to ask a related question on SSD endurance... If 60GB is the sweet spot for each DB/WAL partition, and the SSD has spare capacity, for example, I'd budgeted 266GB per DB/WAL. Would it then be better to make a 60GB "sweet
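
One way to get a fixed 60GB per OSD while leaving the rest of the SSD free is to pre-create fixed-size LVs and hand them to ceph-volume explicitly. A sketch, assuming LVM is used for the DB devices; device, VG and LV names are placeholders:

    pvcreate /dev/nvme0n1
    vgcreate ceph-db /dev/nvme0n1
    lvcreate -L 60G -n db-sda ceph-db
    ceph-volume lvm create --data /dev/sda --block.db ceph-db/db-sda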

[ceph-users] Nfs-ganesha with rados_kv backend

2019-05-29 Thread Stolte, Felix
Hi, is anyone running an active-passive nfs-ganesha cluster with cephfs backend and using the rados_kv recovery backend? My setup runs fine, but takeover is giving me a headache. On takeover I see the following messages in ganeshas log file: 29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 :
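
For readers following along, the ganesha.conf pieces involved look roughly like this (parameter names as in nfs-ganesha 2.7-era documentation; userid, pool and nodeid values are placeholders):

    cat >> /etc/ganesha/ganesha.conf <<'EOF'
    NFSv4 {
        RecoveryBackend = rados_kv;
    }
    RADOS_KV {
        ceph_conf = "/etc/ceph/ceph.conf";
        userid    = "ganesha";
        pool      = "nfs-ganesha";
        nodeid    = "cephgw-e2-1";
    }
    EOF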

Re: [ceph-users] Trigger (hot) reload of ceph.conf

2019-05-29 Thread Wido den Hollander
On 5/29/19 11:41 AM, Johan Thomsen wrote: > Hi, > > It doesn't look like SIGHUP causes the osd's to trigger conf reload from > files? Is there any other way I can do that, without restarting? > No, there isn't. I suggest you look into the new config store which is in Ceph since the Mimic
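
For reference, both the Mimic+ config store and the older injectargs route allow changing options at runtime without a restart (the option name and value below are just an example):

    ceph config set osd osd_max_backfills 2               # persisted in the mon config store
    ceph config get osd.0 osd_max_backfills
    ceph tell osd.* injectargs '--osd_max_backfills 2'    # runtime only, lost on restart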

[ceph-users] Trigger (hot) reload of ceph.conf

2019-05-29 Thread Johan Thomsen
Hi, It doesn't look like SIGHUP causes the osd's to trigger conf reload from files? Is there any other way I can do that, without restarting? I prefer having most of my config in files, but it's annoying that I need to cause the cluster to go into HEALTH_WARN in order to reload them. Thanks for

Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Robert Sander
Hi, On 29.05.19 at 11:19, Martin Verges wrote: > > We have identified the performance settings in the BIOS as a major > factor > > could you share your insights what options you changed to increase > performance and could you provide numbers to it? Most default performance settings

Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Andrei Mikhailovsky
It would be interesting to learn the improvements types and the BIOS changes that helped you. Thanks > From: "Martin Verges" > To: "Robert Sander" > Cc: "ceph-users" > Sent: Wednesday, 29 May, 2019 10:19:09 > Subject: Re: [ceph-users] performance in a small cluster > Hello Robert, >> We

Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Martin Verges
Hello Robert, We have identified the performance settings in the BIOS as a major factor > could you share your insights what options you changed to increase performance and could you provide numbers to it? Many thanks in advance -- Martin Verges Managing director Mobile: +49 174 9335695

Re: [ceph-users] inconsistent number of pools

2019-05-29 Thread Jan Fajerski
On Tue, May 28, 2019 at 11:50:01AM -0700, Gregory Farnum wrote: You're the second report I've seen of this, and while it's confusing, you should be able to resolve it by restarting your active manager daemon. Maybe this is related? http://tracker.ceph.com/issues/40011 On Sun, May 26,
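
Failing over the active mgr, as suggested above, is a one-liner; the mgr name is a placeholder and can be read from "ceph -s" or "ceph mgr dump":

    ceph mgr dump | grep active_name
    ceph mgr fail mgr-host-1      # a standby takes over as active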

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-29 Thread Mattia Belluco
On 5/29/19 5:40 AM, Konstantin Shalygin wrote: > block.db should be 30Gb or 300Gb - anything between is pointless. There > is described why: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html Following some discussions we had at the past Cephalocon I beg to differ on

[ceph-users] Global Data Deduplication

2019-05-29 Thread Felix Hüttner
Hi everyone, We are currently using Ceph as the backend for our OpenStack block storage. For backups of these disks we thought about also using Ceph (just with HDDs instead of SSDs). As we will have some volumes that will be backed up daily and that will probably not change too often, I searched

Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Robert Sander
On 24.05.19 at 14:43, Paul Emmerich wrote: > * SSD model? Lots of cheap SSDs simply can't handle more than that The customer currently has 12 Micron 5100 1.92 TB (Micron_5100_MTFDDAK1) SSDs and will get a batch of Micron 5200 in the next few days. We have identified the performance settings in the

Re: [ceph-users] is rgw crypt default encryption key long term supported ?

2019-05-29 Thread Scheurer François
Hello Casey Thank you for your reply. To close this subject, one last question. Do you know if it is possible to rotate the key defined by "rgw_crypt_default_encryption_key=" ? Best Regards Francois Scheurer From: Casey Bodley Sent: Tuesday, May

[ceph-users] Large OMAP object in RGW GC pool

2019-05-29 Thread Wido den Hollander
Hi, I've got a Ceph cluster with this status: health: HEALTH_WARN 3 large omap objects After looking into it I see that the issue comes from objects in the '.rgw.gc' pool. Investigating it I found that the gc.* objects have a lot of OMAP keys: for OBJ in $(rados -p .rgw.gc
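
A loop along these lines (a reconstruction, not necessarily the exact command truncated above) shows how many OMAP keys each gc.* object carries:

    for OBJ in $(rados -p .rgw.gc ls); do
        echo -n "$OBJ: "
        rados -p .rgw.gc listomapkeys "$OBJ" | wc -l
    done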

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-29 Thread Burkhard Linke
Hi, On 5/29/19 8:25 AM, Konstantin Shalygin wrote: We have a similar setup, but 24 disks and 2x P4800X. And the 375GB NVME drives are _not_ large enough: *snipsnap* Your block.db is 29Gb, should be 30Gb to prevent spillover to slow backend. Well, it's the usual gigabyte vs. gibibyte

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-29 Thread Konstantin Shalygin
We have a similar setup, but 24 disks and 2x P4800X. And the 375GB NVME drives are _not_ large enough: 2019-05-29 07:00:00.000108 mon.bcf-03 [WRN] overall HEALTH_WARN BlueFS spillover detected on 22 OSD(s) root@bcf-10:~# parted

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-29 Thread Burkhard Linke
Hi, On 5/29/19 5:23 AM, Frank Yu wrote: Hi Jake, I have the same question about the size of DB/WAL for OSDs. My situation: 12 OSDs per node, 8 TB (maybe 12 TB later) per OSD, Intel NVMe SSD (Optane P4800X) 375 GB per node, which means DB/WAL can use about 30 GB per OSD (8 TB). I mainly use CephFS