Re: [ceph-users] PG problem after reweight (1 PG active+remapped) [solved]

2018-12-19 Thread Athanasios Panterlis
Hi, On 12/3/18 4:21 PM, Athanasios Panterlis wrote: > Hi Wido, > > Yeap its quite old, since

Re: [ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Wido den Hollander
> Hi, > > How old is this cluster? As th

Re: [ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Athanasios Panterlis
, Nasos Panterlis. Hi, How old is this cluster? As this might be a CRUSH

Re: [ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Wido den Hollander
Hi, How old is this cluster? This might be a CRUSH tunables issue popping up. You can try (this might move a lot of data!):
$ ceph osd getcrushmap -o crushmap.backup
$ ceph osd crush tunables optimal
If things go wrong you always have the old CRUSH map:
$ ceph osd setcrushmap -i
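For reference, the procedure quoted above written out as a shell session; the file name crushmap.backup comes from the snippet itself, and the restore step is assumed to take that same backup file:

# Save the current CRUSH map so it can be restored later
$ ceph osd getcrushmap -o crushmap.backup
# Switch to optimal tunables; this can trigger a large rebalance
$ ceph osd crush tunables optimal
# If the rebalance causes trouble, restore the saved map (file name assumed from the backup step above)
$ ceph osd setcrushmap -i crushmap.backup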

[ceph-users] PG problem after reweight (1 PG active+remapped)

2018-12-03 Thread Athanasios Panterlis
Hi all, I am managing a typical small Ceph cluster that consists of 4 nodes, each with 7 OSDs (some in the hdd pool, some in the ssd pool). The cluster was healthy, but following some space issues due to uneven PG placement by Ceph, I tried some reweights on specific OSDs. Unfortunately the
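A sketch of what such per-OSD reweights typically look like; the OSD id 12 and the 0.9 weight are illustrative values, not taken from the thread:

# Lower the reweight of a single over-full OSD (id and value are hypothetical)
$ ceph osd reweight 12 0.9
# Alternatively, let Ceph pick reweight values based on space utilisation
$ ceph osd reweight-by-utilization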