Re: [ceph-users] Crush maps : split the root in two parts on an OSD node with same disks ?

2018-06-13 Thread Hervé Ballans
Thanks Janne for your reply. Here are the reasons which made me think of "physically" splitting the pools: 1) A different usage of the pools: the first one will be used for user home directories, with intensive read/write access. And the second one will be used for data storage/backup, with
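A minimal sketch of how two pools could be pinned to two separate CRUSH roots on Luminous or later; the root, rule and pool names below are placeholders, not the ones used in this thread:

    # one replicated rule per root (root names root-home / root-backup are assumed)
    ceph osd crush rule create-replicated rule-home root-home host
    ceph osd crush rule create-replicated rule-backup root-backup host

    # point each pool at its rule (pool names are placeholders)
    ceph osd pool set pool-home crush_rule rule-home
    ceph osd pool set pool-backup crush_rule rule-backup

With rules like these in place, the home-directory workload and the backup workload never share OSDs, which is the point of the split.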

Re: [ceph-users] Crush maps : split the root in two parts on an OSD node with same disks ?

2018-06-12 Thread Janne Johansson
On Tue 12 Jun 2018 at 15:06, Hervé Ballans <herve.ball...@ias.u-psud.fr> wrote: > Hi all, > > I have a cluster with 6 OSD nodes, each has 20 disks, all of the 120 > disks are strictly identical (model and size). > (The cluster is also composed of 3 MON servers on 3 other machines) > > For

[ceph-users] Crush maps : split the root in two parts on an OSD node with same disks ?

2018-06-12 Thread Hervé Ballans
Hi all, I have a cluster with 6 OSD nodes, each with 20 disks; all of the 120 disks are strictly identical (model and size). (The cluster is also composed of 3 MON servers on 3 other machines.) For design reasons, I would like to separate my cluster storage into 2 pools of 60 disks. My idea
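Since every node carries 20 identical disks, one way to get 2 x 60 OSDs is to split each node 10/10 between two roots using per-root host buckets. A rough sketch of the decompiled CRUSH map, with every name, ID and weight invented for illustration:

    host node1-pool1 {
            id -11
            alg straw2
            hash 0
            item osd.0 weight 1.000
            # ... osd.1 through osd.9
    }
    host node1-pool2 {
            id -12
            alg straw2
            hash 0
            item osd.10 weight 1.000
            # ... osd.11 through osd.19
    }
    # node2 .. node6 are split the same way

    root pool1 {
            id -21
            alg straw2
            hash 0
            item node1-pool1 weight 10.000
            # ... node2-pool1 through node6-pool1
    }
    root pool2 {
            id -22
            alg straw2
            hash 0
            item node1-pool2 weight 10.000
            # ... node2-pool2 through node6-pool2
    }

Each root then gets its own rule with host as the failure domain, so replicas still spread across physical nodes.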

[ceph-users] Crush Maps

2014-02-06 Thread McNamara, Bradley
I have a test cluster that is up and running. It consists of three mons and three OSD servers, with each OSD server having eight OSDs and two SSDs for journals. I'd like to move from the flat crushmap to a crushmap with typical depth using most of the predefined types. I have the current
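The usual workflow for reworking the hierarchy is to decompile the map, edit the text form, recompile and inject it. A sketch, with file and bucket names chosen here purely for illustration:

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt (add host/rack/root buckets and rules), then recompile
    crushtool -c crushmap.txt -o crushmap-new.bin
    crushtool -i crushmap-new.bin --test --show-statistics    # dry-run mapping check
    ceph osd setcrushmap -i crushmap-new.bin

    # alternatively, build the hierarchy online, one bucket at a time
    ceph osd crush add-bucket rack1 rack
    ceph osd crush move osdserver1 rack=rack1
    ceph osd crush move rack1 root=default

Injecting a map with a new layout will trigger data movement, so it is worth trying on the test cluster before touching anything that matters.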

Re: [ceph-users] Crush Maps

2014-02-06 Thread Daniel Schwager
Hello Bradley, in addition to your question, I'm interested in the following: 5) Can I change all 'type' IDs when adding a new type, host-slow, to distinguish between OSDs with the journal on the same HDD and those with a separate SSD? E.g. from: type 0 osd, type 1 host
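For illustration only (whether wholesale renumbering is safe is exactly the open question here): type IDs in the decompiled map only have to be unique, so a new type can also simply take the next free number, and a rule can then choose leaves by that type name. The type list below is trimmed and the numbering is illustrative:

    # types (trimmed; numbering illustrative)
    type 0 osd
    type 1 host
    type 2 rack
    type 3 root
    type 4 host-slow        # new: hosts whose OSDs keep the journal on the data HDD

    rule slow_replicated {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host-slow
            step emit
    }

This assumes the slow hosts are declared as host-slow buckets somewhere under the default root.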

[ceph-users] CRUSH maps for multiple switches

2013-05-08 Thread Gandalf Corvotempesta
Let's assume 20 OSD servers and 4x 12-port switches, 2 for the public network and 2 for the cluster network. No link between the public switches and no link between the cluster switches. The first 10 OSD servers are connected to public switch1 and the other 10 are connected to public switch2. The same applies for
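A hedged sketch of how such a topology could be expressed in the CRUSH map: declare a switch-level bucket type and make it the failure domain, so replicas land on servers behind different switches. All names, IDs and weights below are invented, and the host buckets (osdserver1 .. osdserver20) are assumed to be defined as usual:

    type 0 osd
    type 1 host
    type 2 switch       # custom type; reusing the stock "rack" type works just as well
    type 3 root

    switch switch1 {
            id -2
            alg straw
            hash 0
            item osdserver1 weight 1.000
            # ... osdserver2 through osdserver10
    }
    switch switch2 {
            id -3
            alg straw
            hash 0
            item osdserver11 weight 1.000
            # ... osdserver12 through osdserver20
    }
    root default {
            id -1
            alg straw
            hash 0
            item switch1 weight 10.000
            item switch2 weight 10.000
    }

    rule across_switches {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type switch
            step emit
    }

With only two switch buckets this suits a replication size of 2; a third replica would need a different rule or more switch-level buckets.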