Thanks for the answers.

SSD could be an option, but the idea is to grow (if business goes well)
well beyond those 18TB.
I am having second thoughts, however, after reading some negative comments
saying that CephFS doesn't perform very well with very large directories
containing many subdirectories (which is our case).

The big picture here is that we are moving to a new datacenter.
Currently our NAS is on ZFS and holds the 18TB of content for our application.
We would like to move away from NFS and use the Ceph object gateway, but
that will require dev time, which we will only have after the DC migration.

So the idea was to go to CephFS just to migrate off our current ZFS
NAS, and then eventually migrate that data to the object gateway. But
I'm starting to believe it is better to set up a ZFS NAS in the new
DC and migrate directly from ZFS to the object gateway once we are in the
new DC.
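
If we do end up splitting SAS and SSD into separate pools, as David suggests
below, my understanding is it would look roughly like this on a recent
release with CRUSH device classes (rule and pool names here are just
placeholders, not anything we have decided on):

  # one CRUSH rule per device class, then a pool bound to each rule
  ceph osd crush rule create-replicated ssd-rule default host ssd
  ceph osd crush rule create-replicated hdd-rule default host hdd
  ceph osd pool create fast-pool 128 128 replicated ssd-rule
  ceph osd pool create slow-pool 128 128 replicated hdd-rule

That way the RBD images for the VMs could sit on the SSD-backed pool while
the bulk of the CephFS data stays on spinners.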

Brian :: wrote:
> +1 for 18TB and all SSD - If you need any decent IOPS with a cluster
> this size then all SSDs are the way to go.
>
>
> On Mon, Jun 27, 2016 at 11:47 AM, David <dclistsli...@gmail.com> wrote:
>> Yes you should definitely create different pools for different HDD types.
>> Another decision you need to make is whether you want dedicated nodes for
>> SSD or want to mix them in the same node. You need to ensure you have
>> sufficient CPU and fat enough network links to get the most out of your
>> SSDs.
>>
>> You can add multiple data pools to Cephfs so if you can identify the hot and
>> cold data in your dataset you could do "manual" tiering as an alternative to
>> using a cache tier.
>>
>> 18TB is a relatively small capacity, have you considered an all-SSD cluster?
>>
>> On Sun, Jun 26, 2016 at 10:18 AM, EM - SC <eyal.marantenb...@storecast.de>
>> wrote:
>>> Hi,
>>>
>>> I'm new to ceph and in the mailing list, so hello all!
>>>
>>> I'm testing ceph and the plan is to migrate our current 18TB storage
>>> (zfs/nfs) to ceph. This will be using CephFS and mounted in our backend
>>> application.
>>> We are also planning on using virtualisation (opennebula) with rbd for
>>> images and, if it makes sense, use rbd for our oracle server.
>>>
>>> My question is about pools.
>>> From what I've read, I should create different pools for different disk speeds
>>> (SAS, SSD, etc.).
>>> - What else should I consider for creating pools?
>>> - should I create different pools for rbd, cephfs, etc?
>>>
>>> thanks in advance,
>>> em
>>>
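
Regarding David's point above about multiple data pools and "manual" tiering
in CephFS, my understanding is the wiring would be roughly this (the fs name,
pool name and mount point below are just placeholders):

  # attach an extra data pool to the filesystem, then point a hot
  # directory's layout at it; new files under it land in that pool
  ceph fs add_data_pool cephfs fast-pool
  setfattr -n ceph.dir.layout.pool -v fast-pool /mnt/cephfs/hot

Existing files stay in their original pool until rewritten, so the hot/cold
split would only apply to new data unless we copy things across.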

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
