Adding back the list; didn't realise I omitted it in my first reply :)

On Fri, Jul 3, 2015 at 7:54 PM, Paul Evans <[email protected]> wrote:

>  On Jul 3, 2015, at 9:17 AM, Adrien Gillard <[email protected]>
> wrote:
>
>  I was also thinking of going 6+2 with 9 hosts, but the cluster would
> definitely be too large :) It may be considered when I need to add
> hosts: a new 6+2 pool with the backups done on that one. But that is a
> long way off :)
>
> See my final comment below...
>
>
>   This is the kind of feedback I am looking for :)
>
> Glad to help.
>
>
>  Yes, it will be accessed via RBD. I didn't know the write limitations
> required a cache tier. So I will need at least two pools in the
> cluster, one replicated for the cache and one EC as a backend for cold
> data?
>
>
>  The pools can overlay the same disks, which causes double the writes, but
> that is the price we pay for the current EC implementation. If you have 230TB
> RAW, you could allocate roughly 2x your daily ingest to the cache pool, and
> use the rest for the EC pool. Example: you ingest 5TB/day…create a
> Replicated Pool (size=3) of 30TB, leaving 200TB RAW for the EC pool. No
> need to allocate distinct disks for the Replicated pool.
>
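Paul's sizing above works out roughly as follows (a sketch; the 230TB RAW, 5TB/day ingest, 2x-ingest cache, size=3 and 6+2 figures come from this thread, while the usable-EC line is my own extrapolation):

```python
# Back-of-the-envelope check of the cache/EC split described above.
raw_tb = 230                 # total RAW capacity
daily_ingest_tb = 5          # backup ingest per day
replica_size = 3             # replicated cache pool, size=3

cache_usable_tb = 2 * daily_ingest_tb           # ~2x daily ingest
cache_raw_tb = cache_usable_tb * replica_size   # 30 TB RAW for the cache
ec_raw_tb = raw_tb - cache_raw_tb               # 200 TB RAW left for EC

k, m = 6, 2                                     # 6+2 erasure coding
ec_usable_tb = ec_raw_tb * k / (k + m)          # ~150 TB usable

print(cache_raw_tb, ec_raw_tb, ec_usable_tb)    # 30 200 150.0
```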

Okay, so this ends up with behaviour similar to a traditional SAN that
integrates tiering technology, except that you use the same disks. This
also addresses my concerns about performance, in a way.
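For reference, the overlay setup discussed above maps onto Ceph's cache-tiering commands roughly like this (a sketch only; the pool names, PG counts and target_max_bytes value are placeholders I made up, not from the thread):

```shell
# Create a 6+2 erasure-coded base pool and a replicated cache pool
# on the same disks, then stack them as a cache tier.
ceph osd erasure-code-profile set backup-profile k=6 m=2
ceph osd pool create backup-ec 2048 2048 erasure backup-profile
ceph osd pool create backup-cache 1024 1024 replicated
ceph osd pool set backup-cache size 3

# Attach the replicated pool as a writeback cache in front of the EC pool.
ceph osd tier add backup-ec backup-cache
ceph osd tier cache-mode backup-cache writeback
ceph osd tier set-overlay backup-ec backup-cache

# Basic cache sizing/behaviour knobs (values are illustrative).
ceph osd pool set backup-cache hit_set_type bloom
ceph osd pool set backup-cache target_max_bytes 30000000000000   # ~30 TB
```

RBD clients then address the EC pool directly; the overlay makes writes land in the replicated cache and flush to the EC backend.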

>
>
>  Lastly, regarding cluster throughput: EC seems to require a bit more
> CPU and memory than straight replication, which begs the question of how
> much RAM and CPU you are putting into the chassis. With proper amounts,
> you should be able to hit your throughput targets.
>
>  Yes, I have read about that. I was thinking 64 GB of RAM (maybe
> overkill, even with the 1 GB of RAM per TB? but I would rather have an
> optimal RAM configuration in terms of DIMMs / channels / CPU) and 2x8 Intel
> cores per host (around 2 GHz per core). As the cluster will be used for
> backups, the goal is not to be limited by the storage backend during the
> backup window overnight. I do not expect much load during daytime.
>
>
>  64G is “OK” provided you tune the system well and DON'T add extra
> services onto your OSD nodes. If you'll also have 3 of them acting as
> MONs, more memory is advised (probably 96-128G).
>

At the moment I am planning to have a smaller dedicated node for the
master monitor (~8 cores, 32G RAM, SSD) and virtual machines for MONs 2
and 3 (with enough resources and a virtual disk on SSD).
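As a quick sanity check on the 1 GB-of-RAM-per-TB rule of thumb mentioned above (a sketch; it assumes the 230TB RAW is spread evenly over the 9 hosts discussed earlier in the thread):

```python
# Per-host RAM estimate using the ~1 GB RAM per TB of OSD storage rule.
raw_tb = 230
hosts = 9
tb_per_host = raw_tb / hosts        # ~25.6 TB of OSD storage per host
ram_rule_gb = tb_per_host * 1.0     # ~26 GB by the rule of thumb

# 64 GB therefore leaves headroom for OSD recovery spikes and the OS,
# but less so if MON daemons share the node.
print(round(tb_per_host, 1), round(ram_rule_gb))   # 25.6 26
```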

>  Best regards,
>   Paul
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
