[ceph-users] Re: Question about ceph-balancer and OSD reweights

2020-02-28 Thread Joe Comeau
A while ago - before the ceph balancer, probably on Jewel - we had a bunch of disks with different reweights to help control PG placement. We upgraded to Luminous. All our disks are the same, so we set them all back to 1.0 and let them fill accordingly. Then we ran the balancer about 4-5 times, letting each run
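A minimal sketch of that sequence (the OSD ids, the plan name, and the upmap mode are placeholders, and manual plans are just one way to do the "4-5 passes"):

    # enable the module first if needed (Luminous): ceph mgr module enable balancer
    # put the legacy override reweights back to 1.0
    ceph osd reweight osd.12 1.0
    ceph osd reweight osd.37 1.0

    # then run the balancer in discrete passes, checking the score each time
    ceph balancer mode upmap
    ceph balancer optimize pass1
    ceph balancer eval pass1
    ceph balancer execute pass1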

[ceph-users] Re: Question about ceph-balancer and OSD reweights

2020-02-28 Thread shubjero
I talked to some guys on IRC about going back to the non-1 reweight OSDs and setting them to 1. I went from a standard deviation of 2+ to 0.5. Awesome.

On Wed, Feb 26, 2020 at 10:08 AM shubjero wrote:
> Right, but should I be proactively returning any reweighted OSDs that are not 1.
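A quick way to see which OSDs still carry an override reweight and what the utilisation spread looks like (column layout may vary slightly between releases):

    # REWEIGHT column != 1.00000 means a legacy override is still in place
    ceph osd df tree
    # the last lines of `ceph osd df` report MIN/MAX VAR and STDDEV
    ceph osd df | tail -n 2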

[ceph-users] Re: Stately MDS Transitions

2020-02-28 Thread Gregory Farnum
On Fri, Feb 28, 2020 at 9:35 AM wrote:
> Marc;
>
> If I understand that command correctly, it tells MDS 'c' to disappear, the same as rebooting would, right?
>
> Let me just clarify something then...
>
> When I run ceph fs dump I get the following:
> 110248:

[ceph-users] Re: Stately MDS Transitions

2020-02-28 Thread DHilsbos
Marc;

If I understand that command correctly, it tells MDS 'c' to disappear, the same as rebooting would, right?

Let me just clarify something then...

When I run ceph fs dump I get the following:

110248: [v2:10.2.80.10:6800/1470324937,v1:10.2.80.10:6801/1470324937] 'S700041' mds.0.29
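For reading that output, the per-daemon state strings in the dump (up:active, up:standby-replay, up:standby) say which role each MDS currently holds; a hedged shortcut, assuming a Nautilus-era CLI:

    ceph fs dump | grep -E 'up:(active|standby)'
    # or the condensed per-rank view
    ceph fs status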

[ceph-users] Re: Cache tier OSDs crashing due to unfound hitset object 14.2.7

2020-02-28 Thread Lincoln Bryant
Hi all, I just want to report that I was able to recover from the incomplete PG after exporting the PGs and using the dangerous osd_find_best_info_ignore_history_les option on the appropriate OSDs :) Things seem okay for now, fingers crossed. When marking hit set archive objects lost, I was
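A rough sketch of that recovery path, with placeholder OSD and PG ids (the option really is dangerous and can cause data loss; this only illustrates the commands involved):

    # with the OSD stopped, export the PG as a safety copy
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 14.2c --op export --file /root/pg14.2c.export

    # let the OSD pick the best info while ignoring last-epoch-started history,
    # then restart the OSD so it re-peers
    ceph config set osd.12 osd_find_best_info_ignore_history_les true

    # hit set archive objects that stay unfound can be marked lost
    ceph pg 14.2c mark_unfound_lost delete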

[ceph-users] Re: continued warnings: Large omap object found

2020-02-28 Thread Seth Galitzer
Thanks for pointing that out, I must have missed it when searching earlier. I'll look forward to upgrading when 14.2.8 comes out and see if that addresses the issue.

Seth

On 2/27/20 11:37 PM, Brad Hubbard wrote:
> Check the thread titled "[ceph-users] Frequest LARGE_OMAP_OBJECTS in cephfs metadata
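In the meantime, the warning itself points at the offending pool/PG, and the trigger is a plain OSD threshold; a hedged sketch (daemon id and log path are the usual defaults, not taken from this thread):

    # which pool/PG tripped the warning
    ceph health detail
    grep -i 'large omap object' /var/log/ceph/ceph.log

    # the per-object key-count threshold that deep scrub compares against
    ceph daemon osd.0 config get osd_deep_scrub_large_omap_object_key_threshold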

[ceph-users] Re: Stately MDS Transitions

2020-02-28 Thread Marc Roos
ceph mds fail c?
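A hedged sketch of what that looks like in practice (the daemon name 'c' comes from this thread; the rest is generic):

    # fail the active daemon by name (or by rank); a standby takes over its rank
    ceph mds fail c
    # confirm who is active afterwards
    ceph fs status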

[ceph-users] Stately MDS Transitions

2020-02-28 Thread DHilsbos
All;

We just started really fiddling with CephFS on our production cluster (Nautilus - 14.2.5 / 14.2.6), and I have a question... Is there a command / set of commands that transitions a standby-replay MDS server to the active role, while swapping the active MDS to standby-replay, or even just
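As far as I know there is no single "swap" command; the usual pattern is to keep a standby-replay daemon per rank and fail the active one when you want roles to change. A sketch, assuming a Nautilus filesystem named "cephfs":

    # let a standby follow the active rank's journal (Nautilus syntax)
    ceph fs set cephfs allow_standby_replay true

    # failing rank 0 lets the standby-replay daemon take over almost immediately
    ceph mds fail 0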

[ceph-users] Best way to merge crush buckets?

2020-02-28 Thread Adrien Georget
Hi all,

I'm looking for the best way to merge/remap existing host buckets into one. I'm running a Ceph Nautilus cluster used as a Ceph Cinder backend with 2 pools, "volume-service" and "volume-recherche", both with dedicated OSDs:

    host cccephnd00x-service {
        id -2   # do
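Two hedged ways to do that kind of merge (all names, ids and weights below are placeholders, and either route will trigger data movement):

    # route 1: re-home each OSD under the surviving host bucket
    ceph osd crush set osd.10 7.27739 root=default host=cccephnd00x

    # route 2: edit the CRUSH map offline and re-inject it
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # merge the host buckets by hand in crushmap.txt, then:
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new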

[ceph-users] Re: SSD considerations for block.db and WAL

2020-02-28 Thread Eneko Lacunza
Hi Christian,

On 27/2/20 at 20:08, Christian Wahl wrote:
> Hi everyone,
>
> we currently have 6 OSDs with 8TB HDDs split across 3 hosts. The main usage is KVM images. To improve speed we planned on putting the block.db and WAL onto NVMe SSDs. The plan was to put 2x1TB in each host. One option
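For the provisioning side, a minimal ceph-volume sketch for one HDD plus an NVMe DB partition (device names are placeholders; when only --block.db is given, the WAL is co-located with the DB automatically):

    # single OSD: data on the HDD, RocksDB (and WAL) on the NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

    # or provision a whole host in one go, letting ceph-volume carve up the DB device
    ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc --db-devices /dev/nvme0n1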