Hello,

right now we have multiple HDD-only clusters, with either Filestore journals
on SSDs or, on newer installations, WAL and DB on SSDs.

I plan to extend our Ceph clusters with SSDs to provide SSD-only pools. In
Luminous we have device classes, so I should be able to do this without
editing the CRUSH map by hand.

The device class documentation says I can create "new" pools that use only
SSDs, for example:

ceph osd crush rule create-replicated fast default host ssd
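Before touching any pool, a quick sanity check of the classes and rules might look like this (these are the Luminous CLI commands; `fast` is just the rule name from the example above):

```shell
# List the device classes CRUSH currently knows about (e.g. hdd, ssd)
ceph osd crush class ls

# List the OSDs that carry a given class
ceph osd crush class ls-osd ssd

# List all CRUSH rules, then inspect the new one in detail
ceph osd crush rule ls
ceph osd crush rule dump fast
```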


What happens if I apply such a rule to an existing pool, but with the hdd
device class? I wasn't able to test this yet in our staging cluster and
wanted to ask what the right way to do this is.

I want to set an existing pool called volumes to use only OSDs with the hdd
class. Right now all OSDs have HDDs, so in theory the pool should not use
newly created SSD OSDs once all the existing ones carry the hdd class, right?
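As far as I understand, Luminous auto-detects the class when an OSD starts, but it can be checked and, if needed, set by hand. A sketch (osd.12 is a made-up ID):

```shell
# Show the OSD tree with the CLASS column to confirm every OSD is tagged hdd
ceph osd tree

# If an OSD is missing its class or has the wrong one, clear and reset it
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class hdd osd.12
```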

So for the existing pool, running:

ceph osd crush rule create-replicated volume-hdd-only default host hdd
ceph osd pool set volumes crush_rule volume-hdd-only

should be the way to go, right?
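For completeness, afterwards I would verify that the pool actually picked up the rule (rule and pool names as above):

```shell
# Confirm which CRUSH rule the pool now uses
ceph osd pool get volumes crush_rule
```

My assumption is that switching the crush_rule of a populated pool can remap PGs and trigger data movement, so I would want to do this in a quiet window.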


Regards,

Enrico

-- 

*Enrico Kern*
VP IT Operations

[email protected]
+49 (0) 30 555713017 / +49 (0) 152 26814501
skype: flyersa
LinkedIn Profile <https://www.linkedin.com/in/enricokern>


<https://www.glispa.com/>

*Glispa GmbH* | Berlin Office
Stromstr. 11-17  <https://goo.gl/maps/6mwNA77gXLP2>
Berlin, Germany, 10551  <https://goo.gl/maps/6mwNA77gXLP2>

Managing Director Din Karol-Gavish
Registered in Berlin
AG Charlottenburg | HRB 114678B
–––––––––––––––––––––––––––––
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
