Re: [ceph-users] Unable to list rbd block images in nautilus dashboard

2019-04-04 Thread Wes Cilldhaire
Hi Lenz, Thanks for responding. I suspected that the number of rbd images might have had something to do with it, so I cleaned up old disposable VM images I am no longer using, taking the list down from ~30 to 16: 2 in the EC pool on HDDs and the rest on the replicated SSD pool. They vary in

[ceph-users] typo in news for PG auto-scaler

2019-04-04 Thread Lars Täuber
Hi everybody! There is a small mistake in the news about the PG autoscaler https://ceph.com/rados/new-in-nautilus-pg-merging-and-autotuning/ The command $ ceph osd pool set foo target_ratio .8 should actually be $ ceph osd pool set foo target_size_ratio .8 Thanks for this great improvement!
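
For reference, the corrected command from the post, plus the status command that shows the resulting ratios ("foo" is the example pool name from the blog post; autoscale-status assumes Nautilus with the pg_autoscaler module enabled):

  $ ceph osd pool set foo target_size_ratio .8
  $ ceph osd pool autoscale-status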

Re: [ceph-users] Unable to list rbd block images in nautilus dashboard

2019-04-04 Thread Lenz Grimmer
Hi Wes, On 4/4/19 9:23 PM, Wes Cilldhaire wrote: > Can anyone at all please confirm whether this is expected behaviour / > a known issue, or give any advice on how to diagnose this? As far as > I can tell my mon and mgr are healthy. All rbd images have > object-map and fast-diff enabled. My g

Re: [ceph-users] Unable to list rbd block images in nautilus dashboard

2019-04-04 Thread Wes Cilldhaire
Hi again, Can anyone at all please confirm whether this is expected behaviour / a known issue, or give any advice on how to diagnose this? As far as I can tell my mon and mgr are healthy. All rbd images have object-map and fast-diff enabled. > I've been having an issue with the dashboard b
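
A quick way to confirm the feature flags Wes mentions is to check each image directly with the rbd CLI; a minimal sketch, assuming a pool named "rbd" and an image named "vm-disk-1" (both hypothetical):

  $ rbd info rbd/vm-disk-1                                  # look for object-map, fast-diff in the "features" line
  $ rbd feature enable rbd/vm-disk-1 object-map fast-diff   # if they are missing
  $ rbd object-map rebuild rbd/vm-disk-1                    # rebuild an object map flagged as invalid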

Re: [ceph-users] CephFS and many small files

2019-04-04 Thread Gregory Farnum
On Mon, Apr 1, 2019 at 4:04 AM Paul Emmerich wrote: > > There are no problems with mixed bluestore_min_alloc_size; that's an > abstraction layer lower than the concept of multiple OSDs. (Also, you > always have that when mixing SSDs and HDDs) > > I'm not sure about the real-world impacts of a lowe
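
For context, bluestore_min_alloc_size (and its _hdd/_ssd variants) is only read when an OSD is created, so mixed values across OSDs simply mean each OSD keeps whatever it was built with. A minimal sketch of checking and changing it, assuming a Mimic-or-later cluster with centralized config and a hypothetical osd.0:

  $ ceph daemon osd.0 config get bluestore_min_alloc_size_hdd   # value the running OSD was created with
  $ ceph config set osd bluestore_min_alloc_size_hdd 4096       # only affects OSDs created after this point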

Re: [ceph-users] Wrong certificate delivered on https://ceph.io/

2019-04-04 Thread Gregory Farnum
I believe our community manager Mike is in charge of that? On Wed, Apr 3, 2019 at 6:49 AM Raphaël Enrici wrote: > > Dear all, > > is there somebody in charge of the ceph hosting here, or someone who > knows the guy who knows another guy who may know... > > Saw this while reading the FOSDEM 2019 p

Re: [ceph-users] Disable cephx with centralized configs

2019-04-04 Thread Gregory Farnum
I think this got dealt with on irc, but for those following along at home: I think the problem here is that you've set the central config to disable authentication, but the client doesn't know what those config options look like until it's connected — which it can't do, because it's demanding encr
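
In other words, options that gate the initial connection have to live in the client's local ceph.conf (or be passed on the command line); they cannot come from the centralized config store. A minimal sketch of the local settings involved, assuming cephx is being disabled cluster-wide:

  [global]
      auth_cluster_required = none
      auth_service_required = none
      auth_client_required = none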

[ceph-users] Poor cephfs (ceph_fuse) write performance in Mimic

2019-04-04 Thread Andras Pataki
Hi cephers, I'm working through our testing cycle to upgrade our main ceph cluster from Luminous to Mimic, and I ran into a problem with ceph_fuse. With the Luminous ceph_fuse, a single client can pretty much max out a 10Gbps network connection writing sequentially on our cluster.
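
A simple way to reproduce this kind of sequential-write comparison between the two ceph_fuse versions is a single-threaded, large-block write; a minimal sketch using fio, with the mount point, file size, and job name all hypothetical:

  $ fio --name=seqwrite --directory=/mnt/cephfs --rw=write --bs=4M --size=16G --numjobs=1 --ioengine=libaio --end_fsync=1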

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-04 Thread Iain Buclaw
On Wed, 3 Apr 2019 at 09:41, Iain Buclaw wrote: > > On Tue, 19 Feb 2019 at 10:11, Iain Buclaw wrote: > > > > > > # ./radosgw-gc-bucket-indexes.sh master.rgw.buckets.index | wc -l > > 7511 > > > > # ./radosgw-gc-bucket-indexes.sh secondary1.rgw.buckets.index | wc -l > > 3509 > > > > # ./radosgw-gc
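
The stale index instances that the script counts can also be inspected with radosgw-admin directly; a minimal sketch, with the caveat that in a multisite setup cleanup is only supported from the master zone, so the removal step is an assumption to verify against your release notes:

  $ radosgw-admin reshard stale-instances list
  $ radosgw-admin reshard stale-instances rm    # master zone only; check your release first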

Re: [ceph-users] BADAUTHORIZER in Nautilus

2019-04-04 Thread Sage Weil
On Thu, 4 Apr 2019, Shawn Edwards wrote: > It was disabled in a fit of genetic debugging. I've now tried to revert > all config settings related to auth and signing to defaults. > > I can't seem to change the auth_*_required settings. If I try to remove > them, they stay set. If I try to change

Re: [ceph-users] x pgs not deep-scrubbed in time

2019-04-04 Thread Michael Sudnick
Thanks, I'll mess around with them and see what I can do. -Michael On Thu, 4 Apr 2019 at 05:58, Alexandru Cucu wrote: > Hi, > > You are limited by your drives so not much can be done but it should > at least catch up a bit and reduce the number of pgs that have not > been deep scrubbed in time

Re: [ceph-users] BADAUTHORIZER in Nautilus

2019-04-04 Thread Shawn Edwards
It was disabled in a fit of genetic debugging. I've now tried to revert all config settings related to auth and signing to defaults. I can't seem to change the auth_*_required settings. If I try to remove them, they stay set. If I try to change them, I get both the old and new settings: root@t
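
When an option appears to stick after removal, it is usually defined in more than one place (the local ceph.conf as well as the mon config store), which would also explain seeing both old and new values at once. A minimal sketch of checking both, assuming a Nautilus cluster:

  $ ceph config dump | grep auth                  # values held in the mon config store
  $ grep auth_ /etc/ceph/ceph.conf                # values still set locally on the node
  $ ceph config rm global auth_cluster_required   # remove the centralized override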

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-04-04 Thread Dan van der Ster
There are several more fixes queued up for v12.2.12: 16b7cc1bf9 osd/OSDMap: add log for better debugging 3d2945dd6e osd/OSDMap: calc_pg_upmaps - restrict optimization to origin pools only ab2dbc2089 osd/OSDMap: drop local pool filter in calc_pg_upmaps 119d8cb2a1 crush: fix upmap overkill 0729a7887

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-04-04 Thread Kári Bertilsson
Yeah, I agree... the auto balancer is definitely doing a poor job for me. I have been experimenting with this for weeks and I can achieve much better optimization than the balancer by looking at "ceph osd df tree" and manually running various ceph upmap commands. Too bad this is tedious work, and tend
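
The manual procedure Kári describes boils down to picking an overfull and an underfull OSD from the df output and moving individual PG mappings between them; a minimal sketch with hypothetical PG and OSD ids:

  $ ceph osd df tree                       # find overfull / underfull OSDs
  $ ceph osd pg-upmap-items 1.7f 121 34    # remap PG 1.7f from osd.121 to osd.34
  $ ceph osd rm-pg-upmap-items 1.7f        # drop the exception again if it backfires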

Re: [ceph-users] BADAUTHORIZER in Nautilus

2019-04-04 Thread Sage Weil
That log shows 2019-04-03 15:39:53.299 7f3733f18700 10 monclient: tick 2019-04-03 15:39:53.299 7f3733f18700 10 cephx: validate_tickets want 53 have 53 need 0 2019-04-03 15:39:53.299 7f3733f18700 20 cephx client: need_tickets: want=53 have=53 need=0 2019-04-03 15:39:53.299 7f3733f18700 10 monclie

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-04-04 Thread Iain Buclaw
On Mon, 18 Mar 2019 at 16:42, Dan van der Ster wrote: > > The balancer optimizes # PGs / crush weight. That host looks already > quite balanced for that metric. > > If the balancing is not optimal for a specific pool that has most of > the data, then you can use the `optimize myplan ` param. > >F
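
For completeness, the per-pool plan workflow referenced above looks roughly like this (plan and pool names are hypothetical):

  $ ceph balancer optimize myplan mypool   # restrict the optimization to one pool
  $ ceph balancer show myplan              # inspect the proposed upmap changes
  $ ceph balancer execute myplan           # apply them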

Re: [ceph-users] How to tune Ceph RBD mirroring parameters to speed up replication

2019-04-04 Thread huxia...@horebdata.cn
Thanks a lot, Jason. How much performance loss should I expect by enabling rbd mirroring? I really need to minimize any performance impact while using this disaster recovery feature. Will a dedicated journal on Intel Optane NVMe help? If so, how big should it be? Cheers, Samuel huxia
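
Regarding a dedicated journal device: the journaling feature can place its journal objects in a separate (e.g. NVMe-backed) pool, which is the usual way to take the journal off the slower data pool; a minimal sketch, with pool and image names hypothetical:

  $ rbd feature enable vms/vm-disk-1 journaling --journal-pool nvme-journal
  $ rbd journal info --pool vms --image vm-disk-1   # confirm where the journal now lives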

Re: [ceph-users] x pgs not deep-scrubbed in time

2019-04-04 Thread Alexandru Cucu
Hi, You are limited by your drives so not much can be done, but it should at least catch up a bit and reduce the number of pgs that have not been deep scrubbed in time. On Wed, Apr 3, 2019 at 8:13 PM Michael Sudnick wrote: > > Hi Alex, > > I'm okay myself with the number of scrubs performed, wo
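
The knobs usually involved trade client I/O for scrub throughput, so treat the values below as a hedged starting point, assuming a release with centralized config (otherwise set them in ceph.conf):

  $ ceph config set osd osd_max_scrubs 2                  # allow more concurrent scrubs per OSD
  $ ceph config set osd osd_deep_scrub_interval 1209600   # stretch the deep-scrub deadline to 2 weeks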