How to force rgw to create its pools as EC?
ceph version 10.2.5
--
I updated the region and zone with a new cold-storage EC placement target:
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"data_pool": "default.rgw.buckets.data",
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0
}
},
{
"key": "EC-placement",
"val": {
"index_pool": ".rgw.EC.buckets.index",
"data_pool": ".rgw.EC.buckets",
"data_extra_pool": ".rgw.EC.buckets.extra",
"index_type": 0
}
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
},
{
"name": "EC-placement",
"tags": []
}
],
"default_placement": "default-placement",
Manual creation of the rgw pools (the extra pool should stay on non-EC, replicated disks):
ceph osd crush rule create-erasure EC-rule EC-profile
ceph osd pool create .rgw.EC.buckets.index 8 erasure EC-profile
ceph osd pool create .rgw.EC.buckets 64 erasure EC-profile
ceph osd pool create .rgw.EC.buckets.extra 8
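(The EC-profile referenced above must already exist before these commands; something along these lines creates it, with k=2 m=1 purely as example values:)
ceph osd erasure-code-profile set EC-profile k=2 m=1 ruleset-failure-domain=host
ceph osd erasure-code-profile get EC-profile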
radosgw-admin period update
radosgw-admin period update --commit
radosgw-admin regionmap update
systemctl restart [email protected]
Then I created a user whose default placement points at the EC target:
radosgw-admin metadata put user:bkp < bkp.md.json
with the following in the user metadata (I tried both variants):
"default_placement": "EC-placement",
"placement_tags": ["EC-placement"],
and also
"default_placement": "EC-placement",
"placement_tags": ["default-placement", "EC-placement"],
And still nothing gets saved to the EC pool!
The same thing happens if I delete the EC pools and let the client write first, so that rgw creates the data pool itself: the pool gets created from default-placement, i.e. replicated on HDD.
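(One way to confirm where the objects actually land is to check the per-pool object counts, e.g.:)
rados df
# or peek at the objects themselves, pool name from the config above:
rados -p default.rgw.buckets.data ls | head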
I tried to change the pool's rule from the default (replicated) one to the EC rule:
ceph osd pool set .rgw.EC.buckets crush_ruleset 2
->
Error EINVAL: (22) Invalid argument
So, how can I force rgw to create its pools as EC?
--
Petr Malkov