Hi,

this is well known; it was discussed on this list years ago as well.
One could argue that since changing the EC parameters of an existing
pool is not supported, you shouldn't change the profile. But the EC
profile is only referenced during pool creation, so you can edit the
profile and create *new* pools with it; those will show the new
profile accordingly in the pool details output. The more important
part, of course, is the crush rule: it defines where the chunks will
be placed.
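To illustrate, creating a fresh pool against the edited profile
should pick it up (pool name and PG count below are just
placeholders, not anything from your cluster):

    # New EC pool created from the edited profile;
    # 'my_new_ec' and the PG count 64 are placeholders
    ceph osd pool create my_new_ec 64 64 erasure my_ec_profile_datacenter

    # The new pool should list the new profile here
    ceph osd pool ls detail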
So you should be good, but I'd still double-check the PG placement
(ceph pg ls-by-pool <pool>) and confirm the required resiliency by
shutting down one or two datacenters.
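As a rough sketch of that check (the OSD id is whatever shows up in
an acting set on your cluster):

    # List PGs and their up/acting OSD sets for the pool
    ceph pg ls-by-pool my_data_ec

    # For an OSD from an acting set, show its crush location;
    # the datacenter bucket should appear there
    ceph osd find <osd-id>

    # And the overall tree with the datacenter buckets
    ceph osd tree

Each PG's six chunks should land on OSDs spread across the
datacenter buckets, matching the failure domain of the new rule.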
But please take this information with a grain of salt: it has been
years since I last looked into this, and we usually don't change EC
profiles. I'm just assuming that nothing has changed in this regard.
Regards,
Eugen
Quoting Niklas Hambüchen <m...@nh2.me>:
I made an EC 4+2 cluster with `crush-failure-domain=host`.
Later after adding more machines, I changed it from `host` to `datacenter`:
    ceph osd erasure-code-profile set my_ec_profile_datacenter \
        k=4 m=2 crush-failure-domain=datacenter crush-device-class=hdd

    ceph osd crush rule create-erasure rule_my_data_ec_datacenter \
        my_ec_profile_datacenter

    ceph osd pool set my_data_ec crush_rule rule_my_data_ec_datacenter
This seems to have worked, and `ceph osd pool get my_data_ec
crush_rule` outputs:
crush_rule: rule_my_data_ec_datacenter
But `ceph osd pool ls detail` still shows
pool 3 'my_data_ec' erasure profile my_ec_profile ...
with `my_ec_profile` instead of `my_ec_profile_datacenter`.
Is this a problem?
Who wins, the profile or the crush-rule?
If it's not a problem, it is at least confusing; can I fix it somehow?
Thanks!
Niklas
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io