I have some experience with Kingstons - which model do you plan to use?

Shorter version: don't use Kingstons. For anything. Ever.

Jan

> On 30 Sep 2015, at 11:24, Andrija Panic <andrija.pa...@gmail.com> wrote:
> 
> Make sure to check this blog page 
> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
> since I'm not sure whether you are just playing around with Ceph or
> planning it for production with good performance.
> My experience with SSDs as journals: a Samsung 850 PRO gives 200 IOPS of
> sustained writes vs. 18,000 IOPS of sustained writes for an Intel S3500 -
> so you can see the difference...
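> 
> If I remember right, the test from that blog post is roughly this
> (replace /dev/sdX with your journal SSD; be careful, writing to the raw
> device is destructive):
> 
> fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
>     --numjobs=1 --iodepth=1 --runtime=60 --time_based \
>     --group_reporting --name=journal-test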
> 
> regards
> 
> On 30 September 2015 at 11:17, Jiri Kanicky <j...@ganomi.com> wrote:
> Thanks to all for the responses. Great thread with a lot of info.
> 
> I will go with the 3 partitions on the Kingston SSD for 3 OSDs on each node.
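> 
> I'm planning to do it with ceph-disk, something like this (assuming
> /dev/sda is the SSD and /dev/sdb-/dev/sdd are the three data HDDs;
> device names will differ per node), which should carve one journal
> partition per OSD out of the SSD, sized by "osd journal size":
> 
> ceph-disk prepare /dev/sdb /dev/sda
> ceph-disk prepare /dev/sdc /dev/sda
> ceph-disk prepare /dev/sdd /dev/sda
> ceph-disk activate /dev/sdb1    # and the same for the other data disks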
> 
> Thanks
> Jiri
> 
> On 30/09/2015 00:38, Lionel Bouton wrote:
> Hi,
> 
> On 29/09/2015 13:32, Jiri Kanicky wrote:
> Hi Lionel.
> 
> Thank you for your reply. In this case I am considering creating a
> separate partition for each disk on the SSD drive. It would be good to
> know what the performance difference is, because creating partitions
> is kind of a waste of space.
> The difference is hard to guess: filesystems need more CPU power than
> raw block devices, for example, so if you don't have much CPU power this
> can make a significant difference. Filesystems might put more load on
> your storage too (for example, ext3/4 with data=journal will at least
> double the disk writes). So there's a lot to consider, and nothing will
> be faster for journals than a raw partition. LVM logical volumes come a
> close second because, usually (if you simply use LVM to create your
> logical volumes and don't try to use anything else like snapshots),
> they don't change access patterns and need almost no CPU power.
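> 
> For reference, moving an existing OSD's journal to a raw partition is
> usually something like this (a rough sketch assuming OSD 0 and a
> dedicated SSD partition /dev/sda1; better to symlink via
> /dev/disk/by-partuuid/ so the name survives reboots):
> 
> ceph osd set noout                # avoid rebalancing during the move
> service ceph stop osd.0
> ceph-osd -i 0 --flush-journal     # write out the old journal
> ln -sf /dev/sda1 /var/lib/ceph/osd/ceph-0/journal
> ceph-osd -i 0 --mkjournal         # initialize the new journal
> service ceph start osd.0
> ceph osd unset noout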
> 
> One more question: is it a good idea to move the journals for 3 OSDs to a
> single SSD, considering that if the SSD fails the whole node with 3 HDDs
> will be down?
> If your SSDs are working well with Ceph and aren't cheap models dying
> under heavy writes, yes. I use one 200GB DC3710 SSD for 6 7200rpm SATA
> OSDs (using 60GB of it for the 6 journals) and it works very well (they
> were a huge performance boost compared to our previous use of internal
> journals).
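> 
> (That works out to roughly 10 GB per journal; the matching ceph.conf
> setting would be something like
> 
> [osd]
> osd journal size = 10240    # size in MB
> 
> though the right size depends on your write throughput and the filestore
> sync interval.)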
> Some SSDs are slower than HDDs for Ceph journals though (there has been
> a lot of discussion on this subject on this mailing list).
> 
> Thinking about it, leaving the journal on each OSD might be safer,
> because a journal on one disk does not affect the other disks (OSDs). Or
> do you think that having the journals on the SSD is a better trade-off?
> You will put significantly more stress on your HDDs by leaving the
> journals on them, and good SSDs are far more robust than HDDs, so if you
> pick Intel DC or equivalent SSDs for the journals your infrastructure
> might even be more robust than one using internal journals (HDDs drop
> like flies when you have hundreds of them). There are other components
> able to take down all your OSDs: the disk controller, the CPU, the
> memory, the power supply, ... So adding one robust SSD shouldn't change
> the overall availability much (you must check the wear level, though,
> and choose models according to the amount of writes you want them to
> support over their lifetime).
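> 
> (Checking the wear level is easy with smartctl, for example:
> 
> smartctl -A /dev/sda | egrep -i 'wear'
> 
> On Intel DC drives the Media_Wearout_Indicator attribute counts down
> from 100; attribute names vary by vendor.)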
> 
> The main reason for journals on SSD is performance anyway. If your setup
> is already fast enough without them, I wouldn't try to add SSDs.
> Otherwise, if the OSDs you already need for your storage capacity
> objectives can't reach the level of performance you require, go SSD.
> 
> Best regards,
> 
> Lionel
> 
> 
> -- 
> 
> Andrija Panić

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
