+1 for 18TB and all SSD - if you need decent IOPS with a cluster this
size, then all SSDs are the way to go.
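A rough sketch of carving out an SSD-only pool, assuming a Ceph release with OSD device classes (Luminous onward; older releases need a hand-built CRUSH hierarchy instead). The rule and pool names here are illustrative, not anything from this thread:

```shell
# Create a CRUSH rule that only selects OSDs tagged with the "ssd"
# device class, replicating across hosts:
ceph osd crush rule create-replicated ssd-only default host ssd

# Create a replicated pool (128 PGs - size to your OSD count) that
# uses that rule, so its data lands only on SSD-backed OSDs:
ceph osd pool create rbd-ssd 128 128 replicated ssd-only
```

PG counts are workload-dependent; the pg calculator or autoscaler is a better guide than the numbers above.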


On Mon, Jun 27, 2016 at 11:47 AM, David <dclistsli...@gmail.com> wrote:
> Yes, you should definitely create different pools for different HDD types.
> Another decision you need to make is whether you want dedicated nodes for
> SSD or want to mix them in the same node. You need to ensure you have
> sufficient CPU and fat enough network links to get the most out of your
> SSDs.
>
> You can add multiple data pools to CephFS, so if you can identify the hot
> and cold data in your dataset you could do "manual" tiering as an
> alternative to using a cache tier.
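A minimal sketch of that manual-tiering idea using CephFS file layouts. The filesystem name, pool names, and mount path below are all hypothetical, and `ceph fs add_data_pool` assumes a reasonably recent release:

```shell
# Attach a second data pool (e.g. an HDD-backed "cold" pool) to the
# filesystem, alongside its default data pool:
ceph fs add_data_pool cephfs cephfs-cold

# Point a directory at the cold pool via the layout xattr; files
# created under it from now on are stored in cephfs-cold:
setfattr -n ceph.dir.layout.pool -v cephfs-cold /mnt/cephfs/archive
```

Note the layout only affects newly created files; existing files stay in the pool they were written to unless copied.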
>
> 18TB is a relatively small capacity - have you considered an all-SSD cluster?
>
> On Sun, Jun 26, 2016 at 10:18 AM, EM - SC <eyal.marantenb...@storecast.de>
> wrote:
>>
>> Hi,
>>
>> I'm new to ceph and in the mailing list, so hello all!
>>
>> I'm testing ceph, and the plan is to migrate our current 18TB storage
>> (zfs/nfs) to ceph. This will use CephFS, mounted by our backend
>> application.
>> We are also planning on using virtualisation (opennebula) with rbd for
>> images and, if it makes sense, using rbd for our oracle server.
>>
>> My question is about pools.
>> From what I read, I should create different pools for different disk
>> speeds (SAS, SSD, etc).
>> - What else should I consider for creating pools?
>> - should I create different pools for rbd, cephfs, etc?
>>
>> thanks in advance,
>> em
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>