Re: [ceph-users] Degraded data redundancy: NUM pgs undersized

2018-09-04 Thread Jörg Kastning
Hello Lothar,

Thanks for your reply.

On 04.09.2018 at 11:20, Lothar Gesslein wrote:
> By pure chance, 15 PGs are now actually replicated to all 3 OSDs, so they have enough copies (clean). But the placement is "wrong"; Ceph would like to move the data to different OSDs (remapped) if possible.
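Not part of the original message, but a minimal sketch of how one might check this on a Luminous-or-later cluster. The PG id 1.2a is only a placeholder, not a value from this thread:

~~~
# List PGs that are currently in the remapped state
$ sudo ceph pg ls remapped

# For a single PG, compare the "up" set (where CRUSH wants the copies)
# with the "acting" set (where the copies currently live)
$ sudo ceph pg map 1.2a
~~~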

Re: [ceph-users] Degraded data redundancy: NUM pgs undersized

2018-09-04 Thread Lothar Gesslein
On 09/04/2018 09:47 AM, Jörg Kastning wrote:
> My questions are:
>
> 1. What does active+undersized actually mean? I did not find anything
> about it in the documentation on docs.ceph.com.

http://docs.ceph.com/docs/master/rados/operations/pg-states/

active
  Ceph will process requests to the placement group.
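As a general aid (not taken from the thread), the usual way to drill into undersized PGs is to compare the pool's replication size with the number of failure domains the CRUSH rule can actually choose from. A minimal sketch, where the pool name mypool is a placeholder:

~~~
# Show exactly which PGs are undersized and why
$ sudo ceph health detail

# The pool wants this many replicas ...
$ sudo ceph osd pool get mypool size

# ... and CRUSH can only place them across this many hosts/OSDs
$ sudo ceph osd tree
$ sudo ceph osd crush rule dump
~~~

If the pool size is 3 but the rule's failure domain (e.g. host) only offers two choices, the PGs stay active+undersized until a third failure domain becomes available.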

[ceph-users] Degraded data redundancy: NUM pgs undersized

2018-09-04 Thread Jörg Kastning
Good morning folks,

As a newbie to Ceph, yesterday was the first time I configured my CRUSH map, added a CRUSH rule, and created my first pool using this rule. Since then I have been getting the status HEALTH_WARN with the following output:

~~~
$ sudo ceph status
  cluster:
    id:
~~~
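For readers following along, the steps described (adding a replicated CRUSH rule and creating a pool that uses it) roughly correspond to commands like these on a Luminous-or-later release. Rule name, pool name, and PG count are placeholders, not the values from this cluster:

~~~
# Create a replicated rule that picks one OSD per host under the default root
$ sudo ceph osd crush rule create-replicated myrule default host

# Create a pool with 64 PGs that uses this rule
$ sudo ceph osd pool create mypool 64 64 replicated myrule

# Verify which rule the pool is using
$ sudo ceph osd pool get mypool crush_rule
~~~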