See this thread:

http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html

(Wido -- should we kill the ceph-large list??)


On Wed, Jun 13, 2018 at 1:14 PM Marc Roos <[email protected]> wrote:
>
>
> I wonder whether this is a bug. Adding the hdd class to an all-hdd
> cluster should not result in roughly 60% of the objects being moved
> around.
>
>
> pool fs_data.ec21 id 53
>   3866523/6247464 objects misplaced (61.889%)
>   recovery io 93089 kB/s, 22 objects/s
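The misplaced percentage in the status line above is just the ratio of misplaced object instances to the total; a quick check with the numbers quoted there:

```python
# Numbers taken from the "objects misplaced" line above.
misplaced, total = 3866523, 6247464
pct = misplaced / total * 100
print(f"{pct:.3f}%")  # matches the 61.889% reported by ceph
```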
>
>
>
>
>
> -----Original Message-----
> From: Marc Roos
> Sent: woensdag 13 juni 2018 7:14
> To: ceph-users; k0ste
> Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd
> update necessary?
>
> I just added 'class hdd' here:
>
> rule fs_data.ec21 {
>         id 4
>         type erasure
>         min_size 3
>         max_size 3
>         step set_chooseleaf_tries 5
>         step set_choose_tries 100
>         step take default class hdd
>         step choose indep 0 type osd
>         step emit
> }
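The large rebalance after switching `step take default` to `step take default class hdd` is usually attributed to the class-specific "shadow" tree (`default~hdd`): its intermediate buckets get new CRUSH ids, and straw2 draws hash those ids, so most choices change even though the disks are identical. A toy sketch of that effect (this is not Ceph's actual straw2 code; the bucket ids and PG count are made up for illustration):

```python
import hashlib

def draw(pg, item_id):
    # Deterministic pseudo-random "straw" for a (pg, item) pair.
    h = hashlib.sha256(f"{pg}:{item_id}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def pick(pg, item_ids):
    # straw2-style selection: every item draws a straw, longest wins.
    return max(item_ids, key=lambda i: draw(pg, i))

# Hypothetical bucket ids: plain tree vs. the default~hdd shadow tree.
plain_ids  = [-2, -3, -4, -5]
shadow_ids = [-20, -21, -22, -23]   # same hosts, renumbered in the shadow tree
host_of = dict(zip(shadow_ids, plain_ids))

pgs = range(4096)
moved = sum(pick(pg, plain_ids) != host_of[pick(pg, shadow_ids)]
            for pg in pgs)
print(f"{moved / 4096:.0%} of placements changed")
```

Because the hashes for old and new ids are unrelated, roughly 1 - 1/n of placements land elsewhere (about 75% with four buckets), which is in line with the ~60% movement reported above. Later Ceph releases added `crushtool --reclassify` to perform this transition without moving data.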
>
>
> -----Original Message-----
> From: Konstantin Shalygin [mailto:[email protected]]
> Sent: woensdag 13 juni 2018 12:30
> To: Marc Roos; ceph-users
> Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd
> update necessary?
>
> On 06/13/2018 12:06 PM, Marc Roos wrote:
> > Shit, I added this class and now everything started backfilling (10%).
> > How is this possible? I only have hdds.
>
> This is normal when you change your crush and placement rules.
> Post your output and I will take a look:
>
> ceph osd crush tree
> ceph osd crush dump
> ceph osd pool ls detail
>
>
>
>
>
> k
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
