Re: [ceph-users] Crush maps : split the root in two parts on an OSD node with same disks ?

2018-06-13 Thread Hervé Ballans

Thanks Janne for your reply.

Here are the reasons that made me think of "physically" splitting the pools:

1) A different usage of the pools: the first one will be used for user
home directories, with intensive read/write access. The second one will
be used for data storage/backup, with essentially read access once the
data is stored.


2) Both pools will be CephFS data pools. On each MON node I have 2 NVMe
SSDs dedicated to the CephFS metadata pools. Here too, I thought about
splitting the NVMe devices in the CRUSH map in order to have 2 different
metadata pools. To sum up, the first data pool is associated with a
metadata pool made of one NVMe per MON node, and the second data pool is
associated with a metadata pool made of the second NVMe of each MON node.
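One way to express that metadata split without a second root is custom
device classes (Luminous or later). This is only a rough sketch: the osd
ids, the class names "nvme-a"/"nvme-b" and the rule/pool names are made
up for illustration.

    # Assumption: osd.120 is the first NVMe and osd.121 the second NVMe of
    # one MON node (real ids will differ); repeat for the other MON nodes.
    ceph osd crush rm-device-class osd.120
    ceph osd crush set-device-class nvme-a osd.120
    ceph osd crush rm-device-class osd.121
    ceph osd crush set-device-class nvme-b osd.121

    # One rule per class under the existing default root, then one
    # metadata pool per rule.
    ceph osd crush rule create-replicated meta-a default host nvme-a
    ceph osd crush rule create-replicated meta-b default host nvme-b
    ceph osd pool create cephfs_metadata_a 64 64 replicated meta-a
    ceph osd pool create cephfs_metadata_b 64 64 replicated meta-b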


And a third reason:

3) Since all the disks are identical, I understand that they are
automatically placed under the same root of the CRUSH map, right?
Actually, I don't know how to create 2 pools from this default
configuration (and I thought it was not possible).


In fact, if this third point solves my issue, it could indeed simplify
my pool design (see the sketch below).
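The same device-class trick sketched above for the NVMe would cover this
point as well, without creating a second root. Again only a sketch: the
osd ids, class/rule/pool names and pg counts are placeholders.

    # Assumption: on each OSD node, the first 10 osd ids form group "a"
    # and the next 10 form group "b" (real ids will differ per host).
    ceph osd crush rm-device-class osd.0    # the class must be cleared before changing it
    ceph osd crush set-device-class hdd-a osd.0
    ceph osd crush rm-device-class osd.10
    ceph osd crush set-device-class hdd-b osd.10
    # ...repeat for the remaining OSDs of each group on every node...

    # One CRUSH rule per class under the existing default root, then one
    # pool per rule.
    ceph osd crush rule create-replicated rule-a default host hdd-a
    ceph osd crush rule create-replicated rule-b default host hdd-b
    ceph osd pool create cephfs_data_a 1024 1024 replicated rule-a
    ceph osd pool create cephfs_data_b 1024 1024 replicated rule-b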


Regards,
Hervé

On 12/06/2018 at 16:52, Janne Johansson wrote:
On Tue, 12 Jun 2018 at 15:06, Hervé Ballans <herve.ball...@ias.u-psud.fr> wrote:


Hi all,

I have a cluster with 6 OSD nodes, each with 20 disks; all 120 disks
are strictly identical (model and size).
(The cluster is also composed of 3 MON servers on 3 other machines)

For design reasons, I would like to separate my cluster storage into 2
pools of 60 disks.

My idea is to modify the crushmap on each node in order to split the
root of the hierarchy into two groups, i.e. 10 disks of each OSD node
for the first pool and the other 10 disks of each node for the second
pool.

I already did that on another cluster with 2 sets of disks of different
technology (HDD vs SSD), inspired by:

https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/

But is it relevant to do that when we have a set of identical disks?


You could do it this way, but you could also just run two pools over
the same 120 OSD disks. Perhaps if you stated the end goal you are
trying to reach, it would be easier to figure out whether it's relevant
or not?

The storage admin in me thinks you spread load and risk better if all
120 disks get used for both pools, but you might have a specific reason
and if so, may we know it?


--
May the most significant bit of your life be positive.





Re: [ceph-users] Crush maps : split the root in two parts on an OSD node with same disks ?

2018-06-12 Thread Janne Johansson
On Tue, 12 Jun 2018 at 15:06, Hervé Ballans <herve.ball...@ias.u-psud.fr> wrote:

> Hi all,
>
> I have a cluster with 6 OSD nodes, each with 20 disks; all 120 disks
> are strictly identical (model and size).
> (The cluster is also composed of 3 MON servers on 3 other machines)
>
> For design reasons, I would like to separate my cluster storage into 2
> pools of 60 disks.
>
> My idea is to modify the crushmap on each node in order to split the
> root of the hierarchy into two groups, i.e. 10 disks of each OSD node
> for the first pool and the other 10 disks of each node for the second
> pool.
>
> I already did that on another cluster with 2 sets of disks of different
> technology (HDD vs SSD), inspired by:
>
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
>
> But is it relevant to do that when we have a set of identical disks?
>

You could do it this way, but you could also just run two pools over
the same 120 OSD disks. Perhaps if you stated the end goal you are
trying to reach, it would be easier to figure out whether it's relevant
or not?

The storage admin in me thinks you spread load and risk better if all
120 disks get used for both pools, but you might have a specific reason
and if so, may we know it?

-- 
May the most significant bit of your life be positive.


[ceph-users] Crush maps : split the root in two parts on an OSD node with same disks ?

2018-06-12 Thread Hervé Ballans

Hi all,

I have a cluster with 6 OSD nodes, each with 20 disks; all 120 disks
are strictly identical (model and size).

(The cluster is also composed of 3 MON servers on 3 other machines)

For design reasons, I would like to separate my cluster storage into 2
pools of 60 disks.


My idea is to modify the crushmap on each node in order to split the
root of the hierarchy into two groups, i.e. 10 disks of each OSD node
for the first pool and the other 10 disks of each node for the second
pool.


I already did that on another cluster with 2 sets of disks of different
technology (HDD vs SSD), inspired by:
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
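For reference, the manual crushmap round-trip that approach relies on
looks roughly like this (file names are arbitrary):

    ceph osd getcrushmap -o crushmap.bin         # dump the compiled map
    crushtool -d crushmap.bin -o crushmap.txt    # decompile it to text
    # edit crushmap.txt: add a second root with its own host buckets and
    # a matching rule, then move half of the OSDs of each node under it
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin     # inject the modified map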


But is it relevant to do that when we have a set of identical disks?

Thanks in advance for your advice,
Hervé

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com