Hi,
I have created a separate CRUSH root for my SSD drives. All works well, but a reboot (or a restart of the OSD services) wipes out all my changes.

How can I persist my changes to the CRUSH map?

Here are some details.

Initial / default state - this is what I get back after a restart / reboot.
If I restart the services on just one server, the CRUSH placement specific to that server
is reverted.
The new root ( ssds) persists, though.
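
The revert described above matches Ceph's default behavior of each OSD updating its own CRUSH location on startup ("osd crush update on start" defaults to true, which moves the OSD back under root=default and its hostname). A minimal sketch of one way to stop that, assuming a standard /etc/ceph/ceph.conf on each OSD host (not verified on this cluster):

```shell
# Sketch, assumptions flagged: disable the automatic CRUSH relocation
# that OSDs perform on startup, then restart the OSDs. With this set,
# an OSD keeps whatever CRUSH location it already has in the map.
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
osd crush update on start = false
EOF
```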

ceph osd tree
ID  CLASS WEIGHT   TYPE NAME          STATUS REWEIGHT PRI-AFF
-16              0 root ssds
-17              0     host osd01-ssd
-18              0     host osd02-ssd
-19              0     host osd03-ssd
-20              0     host osd04-ssd
 -1       32.63507 root default
 -3        8.15877     host osd01
  4   hdd  1.85789         osd.4          up  1.00000 1.00000
  5   hdd  1.85789         osd.5          up  1.00000 1.00000
  6   hdd  1.85789         osd.6          up  1.00000 1.00000
  7   hdd  1.85789         osd.7          up  1.00000 1.00000
  0   ssd  0.72719         osd.0          up  1.00000 1.00000
 -5        8.15877     host osd02
  8   hdd  1.85789         osd.8          up  1.00000 1.00000
  9   hdd  1.85789         osd.9          up  1.00000 1.00000
 10   hdd  1.85789         osd.10         up  1.00000 1.00000
 11   hdd  1.85789         osd.11         up  1.00000 1.00000
  1   ssd  0.72719         osd.1          up  1.00000 1.00000
 -7        8.15877     host osd03
 12   hdd  1.85789         osd.12         up  1.00000 1.00000
 13   hdd  1.85789         osd.13         up  1.00000 1.00000
 14   hdd  1.85789         osd.14         up  1.00000 1.00000
 15   hdd  1.85789         osd.15         up  1.00000 1.00000
  2   ssd  0.72719         osd.2          up  1.00000 1.00000
 -9        8.15877     host osd04
 16   hdd  1.85789         osd.16         up  1.00000 1.00000
 17   hdd  1.85789         osd.17         up  1.00000 1.00000
 18   hdd  1.85789         osd.18         up  1.00000 1.00000
 19   hdd  1.85789         osd.19         up  1.00000 1.00000
  3   ssd  0.72719         osd.3          up  1.00000 1.00000


Changes made:

ceph osd crush add 0 0.72719 root=ssds
ceph osd crush set osd.0 0.72719 root=ssds host=osd01-ssd
ceph osd crush add 1 0.72719 root=ssds
ceph osd crush set osd.1 0.72719 root=ssds host=osd02-ssd
ceph osd crush add 2 0.72719 root=ssds
ceph osd crush set osd.2 0.72719 root=ssds host=osd03-ssd
ceph osd crush add 3 0.72719 root=ssds
ceph osd crush set osd.3 0.72719 root=ssds host=osd04-ssd
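
An alternative sketch to the commands above (an assumption on my part, using Ceph's standard per-OSD "crush location" option): instead of disabling the startup update, pin each SSD OSD's location in ceph.conf so the on-start update recreates the custom placement rather than reverting it:

```shell
# Sketch: give osd.0 an explicit startup CRUSH location so it lands
# under root=ssds on every restart. Repeat per SSD OSD (osd.1..osd.3)
# with the matching host=osd0X-ssd bucket on each server.
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd.0]
crush location = root=ssds host=osd01-ssd
EOF
```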


ceph osd tree
ID  CLASS WEIGHT   TYPE NAME          STATUS REWEIGHT PRI-AFF
-16        2.90875 root ssds
-17        0.72719     host osd01-ssd
  0   ssd  0.72719         osd.0          up  1.00000 1.00000
-18        0.72719     host osd02-ssd
  1   ssd  0.72719         osd.1          up  1.00000 1.00000
-19        0.72719     host osd03-ssd
  2   ssd  0.72719         osd.2          up  1.00000 1.00000
-20        0.72719     host osd04-ssd
  3   ssd  0.72719         osd.3          up  1.00000 1.00000
 -1       29.72632 root default
 -3        7.43158     host osd01
  4   hdd  1.85789         osd.4          up  1.00000 1.00000
  5   hdd  1.85789         osd.5          up  1.00000 1.00000
  6   hdd  1.85789         osd.6          up  1.00000 1.00000
  7   hdd  1.85789         osd.7          up  1.00000 1.00000
 -5        7.43158     host osd02
  8   hdd  1.85789         osd.8          up  1.00000 1.00000
  9   hdd  1.85789         osd.9          up  1.00000 1.00000
 10   hdd  1.85789         osd.10         up  1.00000 1.00000
 11   hdd  1.85789         osd.11         up  1.00000 1.00000
 -7        7.43158     host osd03
 12   hdd  1.85789         osd.12         up  1.00000 1.00000
 13   hdd  1.85789         osd.13         up  1.00000 1.00000
 14   hdd  1.85789         osd.14         up  1.00000 1.00000
 15   hdd  1.85789         osd.15         up  1.00000 1.00000
 -9        7.43158     host osd04
 16   hdd  1.85789         osd.16         up  1.00000 1.00000
 17   hdd  1.85789         osd.17         up  1.00000 1.00000
 18   hdd  1.85789         osd.18         up  1.00000 1.00000
 19   hdd  1.85789         osd.19         up  1.00000 1.00000
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
