Hi,

We are running Ceph Squid. We currently have all our Ceph servers at one
physical location, but in different rooms, and we are looking at creating
a new pool with some servers placed at a different physical location.
Current crush map is this:

root default
    datacenter 714
        hosts
    datacenter HX1
        hosts
    datacenter UX1
        hosts
Sample rule:
{
    "rule_id": 0,
    "rule_name": "rbd_ec_data",
    "type": 3,
    "steps": [
        {
            "op": "set_chooseleaf_tries",
            "num": 5
        },
        {
            "op": "set_choose_tries",
            "num": 100
        },
        {
            "op": "take",
            "item": -24,
            "item_name": "default~hdd"
        },
        {
            "op": "choose_indep",
            "num": 0,
            "type": "datacenter"
        },
        {
            "op": "chooseleaf_indep",
            "num": 2,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
},
New crush map:

root default
    datacenter HVH
        room 714
            hosts
        room HX1
            hosts
        room UX1
            hosts
    datacenter TBA
        hosts
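As far as I know there is no CLI command to change an existing bucket's
type in place (datacenter -> room), so one way to get from the old
hierarchy to the new one is the offline edit workflow. A sketch, with
file names chosen for illustration:

```shell
# Export and decompile the current CRUSH map.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt by hand: change the type of the 714/HX1/UX1 buckets
# from datacenter to room, add the HVH and TBA datacenter buckets, move
# the rooms under HVH, and update the rule(s).

# Recompile; inject only after testing (see below).
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

Doing it all in one decompile/edit/recompile pass also means the
hierarchy change and the rule change land atomically in a single map
update.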
Edited rule, with the root changed from default to HVH and the failure
domain from datacenter to room:
{
    "rule_id": 0,
    "rule_name": "rbd_ec_data",
    "type": 3,
    "steps": [
        {
            "op": "set_chooseleaf_tries",
            "num": 5
        },
        {
            "op": "set_choose_tries",
            "num": 100
        },
        {
            "op": "take",
            "item": -24,
            "item_name": "HVH~hdd"
        },
        {
            "op": "choose_indep",
            "num": 0,
            "type": "room"
        },
        {
            "op": "chooseleaf_indep",
            "num": 2,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
},
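The edited rule can be sanity-checked offline with crushtool before
anything is injected into the cluster. A sketch, assuming the edited map
has been compiled to a file named crushmap.new and that the pool is EC
with k+m = 4 (adjust --num-rep to the actual pool size):

```shell
# Show where the rule would place PGs under the new hierarchy.
crushtool -i crushmap.new --test --rule 0 --num-rep 4 --show-mappings

# Only report mappings that fail (too few OSDs selected); no output
# here is what we want.
crushtool -i crushmap.new --test --rule 0 --num-rep 4 --show-bad-mappings
```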
Does that look right? Can we just set norebalance, move the hosts around
in the CRUSH map, edit our CRUSH rules, run pgremapper, and then unset
norebalance?
Best regards,
Torkil
--
Torkil Svensgaard
Sysadmin
MR-Forskningssektionen, afs. 714
DRCMR, Danish Research Centre for Magnetic Resonance
Hvidovre Hospital
Kettegård Allé 30
DK-2650 Hvidovre
Denmark
Tel: +45 386 22828
E-mail: [email protected]
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]