Re: [ceph-users] Should I use different pool?

2016-06-28 Thread EM - SC
Thanks for the answers. SSD could be an option, but the idea is to grow (if business goes well) from those 18TB. I am concerned, however, after reading some negative comments, that CephFS doesn't perform very well with very large directories containing many subdirectories (which is our case). The big
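
For what it's worth, large directories in CephFS relate to MDS directory fragmentation, which was still experimental in Jewel-era releases. A hedged ceph.conf sketch; the option name is taken from Jewel-era documentation, so treat it as an assumption and verify it against your release:

    [mds]
    # Experimental in Jewel: split large directories into fragments so the
    # MDS does not have to manage each huge directory as one monolithic object
    mds bal frag = true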

Re: [ceph-users] Should I use different pool?

2016-06-28 Thread Brian ::
+1 for 18TB and all SSD - if you need any decent IOPS with a cluster this size then all SSDs are the way to go. On Mon, Jun 27, 2016 at 11:47 AM, David wrote: > Yes you should definitely create different pools for different HDD types. > Another decision you need to

Re: [ceph-users] Should I use different pool?

2016-06-27 Thread Kanchana. P
The Calamari URL displays the error below: New Calamari Installation. This appears to be the first time you have started Calamari and there are no clusters currently configured. 3 Ceph servers are connected to Calamari, but no Ceph cluster has been created yet. Please use ceph-deploy to create a cluster;
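
For reference, a minimal ceph-deploy bootstrap that gets a cluster to the point where Calamari can discover it might look like this; hostnames and device paths are placeholders, not taken from the original post:

    # Run from the admin node; mon1/osd1/osd2 are hypothetical hostnames
    ceph-deploy new mon1
    ceph-deploy install mon1 osd1 osd2
    ceph-deploy mon create-initial
    # Hypothetical devices; prepare and activate one OSD per disk
    ceph-deploy osd prepare osd1:/dev/sdb osd2:/dev/sdb
    ceph-deploy osd activate osd1:/dev/sdb1 osd2:/dev/sdb1
    # Re-register the nodes with Calamari once the cluster exists
    ceph-deploy calamari connect mon1 osd1 osd2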

Re: [ceph-users] Should I use different pool?

2016-06-27 Thread David
Yes you should definitely create different pools for different HDD types. Another decision you need to make is whether you want dedicated nodes for SSD or want to mix them in the same node. You need to ensure you have sufficient CPU and fat enough network links to get the most out of your SSDs.
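
As a sketch, separating pools by device type comes down to one CRUSH rule per type plus one pool per rule. Pool names and PG counts below are placeholders, and the device-class syntax is from later (Luminous+) releases; on a Jewel-era cluster you would edit the CRUSH map by hand to the same effect:

    # CRUSH rules that select only SSD- or HDD-backed OSDs
    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd crush rule create-replicated hdd-rule default host hdd
    # Separate pools bound to each rule
    ceph osd pool create fast-pool 128 128 replicated ssd-rule
    ceph osd pool create bulk-pool 512 512 replicated hdd-rule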

Re: [ceph-users] Should I use different pool?

2016-06-26 Thread Oliver Dzombic
Hi Em, it's highly recommended to put the journals on SSDs; see https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ --- Also, if you want speed, it's highly recommended to use a cache tier --- Create the pool with a not too much
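
The journal test from the linked post boils down to a single-threaded O_DSYNC write benchmark, and a basic writeback cache tier is a handful of commands. Device path, pool names, and sizes below are placeholders:

    # Test whether an SSD is journal-suitable (per the linked blog post)
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test

    # Sketch of a writeback cache tier in front of a hypothetical base pool
    ceph osd pool create cache-pool 128
    ceph osd tier add base-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay base-pool cache-pool
    ceph osd pool set cache-pool hit_set_type bloom
    ceph osd pool set cache-pool target_max_bytes 100000000000  # ~100 GB cap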

[ceph-users] Should I use different pool?

2016-06-26 Thread EM - SC
Hi, I'm new to Ceph and to the mailing list, so hello all! I'm testing Ceph and the plan is to migrate our current 18TB storage (zfs/nfs) to Ceph. This will use CephFS, mounted in our backend application. We are also planning on using virtualisation (opennebula) with rbd for images and,
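
For context, mounting CephFS in the backend and creating an RBD image for an OpenNebula datastore look roughly like this; the monitor address, secret file, and pool/image names are hypothetical:

    # Kernel client mount of CephFS (placeholder monitor and keyfile)
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret
    # or the FUSE client
    ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs

    # A 50 GB RBD image for a VM disk (size is in MB by default)
    rbd create vms/one-image --size 51200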