Re: [ceph-users] Case where a separate Bluestore WAL/DB device crashes...

2018-03-02 Thread Hervé Ballans
Thanks Jonathan, your feedback is really interesting. It reassures me about adding separate SSDs for the WAL/DB partitions. I now have to implement a new Ceph cluster with 6 OSD nodes (each containing 22 SAS 10k OSDs). Following the recommendations on
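
For what it's worth, here is a minimal sketch of how one OSD with its data on a SAS disk and its DB/WAL on a shared SSD can be created with ceph-volume. The device paths and the SSD partition are hypothetical, and the exact invocation depends on your release and deployment tooling:

    # Hypothetical devices: /dev/sdc is one of the 22 SAS 10k data disks,
    # /dev/nvme0n1p1 is a partition reserved for this OSD on the shared SSD.
    # With only --block.db given, the WAL is kept inside the DB partition.
    ceph-volume lvm create --bluestore \
        --data /dev/sdc \
        --block.db /dev/nvme0n1p1

Repeated per data disk (with a different SSD partition each time), this gives the layout discussed in this thread.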

Re: [ceph-users] Case where a separate Bluestore WAL/DB device crashes...

2018-03-01 Thread Jonathan Proulx
On Thu, Mar 01, 2018 at 04:57:59PM +0100, Hervé Ballans wrote: :Can we find recent benchmarks on this performance issue related to the location of WAL/DBs? I don't have benchmarks, but I have some anecdotes. We previously had 4T NL-SAS (7.2k) filestore data drives with journals on SSD (5:1
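
Lacking published numbers, one way to turn this kind of anecdote into a benchmark is to compare small synchronous random writes (the journal/WAL pattern) on the two device classes with fio. A rough sketch, assuming /dev/sdX is a spare NL-SAS disk and /dev/nvme0n1 a spare SSD; both paths are hypothetical and the test overwrites the device, so never point it at a disk that holds data:

    # Destructive test -- spare/blank devices only.
    for dev in /dev/sdX /dev/nvme0n1; do
        fio --name=wal-pattern --filename=$dev \
            --ioengine=libaio --direct=1 --sync=1 \
            --rw=randwrite --bs=4k --iodepth=1 \
            --runtime=30 --time_based
    done

The gap in 4k sync-write IOPS between the two devices is essentially the headroom a separate WAL/DB device buys you.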

Re: [ceph-users] Case where a separate Bluestore WAL/DB device crashes...

2018-03-01 Thread Hervé Ballans
Indeed, that makes sense, thanks! So, just for my own thinking: for a new Bluestore deployment, we really have to ask ourselves whether separating the WAL/DB significantly increases performance. If the WAL/DB is on the same device as the Bluestore data
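
A quick way to see how this plays out on an existing OSD is to check where its RocksDB actually lives and how much BlueFS is consuming there. A sketch, assuming osd.0 and that the metadata/counter names below are present (they may vary slightly between releases):

    # Does osd.0 have a dedicated DB partition, and on which device?
    ceph osd metadata 0 | grep -E 'bluefs|partition_path'

    # How much space RocksDB/BlueFS uses -- run on the host carrying osd.0.
    ceph daemon osd.0 perf dump | grep -E 'db_(total|used)_bytes'

If the DB turns out to sit on the same spinner as the data, every metadata update competes with client I/O for the same disk heads, which is exactly the case the question is about.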

Re: [ceph-users] Case where a separate Bluestore WAL/DB device crashes...

2018-03-01 Thread Caspar Smit
s/aren't/are/ :) Kind regards, Caspar Smit System Engineer SuperNAS Dorsvlegelstraat 13 1445 PA Purmerend t: (+31) 299 410 414 e: caspars...@supernas.eu w: www.supernas.eu 2018-03-01 16:31 GMT+01:00 David Turner : > This aspect of osds has not changed from

Re: [ceph-users] Case where a separate Bluestore WAL/DB device crashes...

2018-03-01 Thread David Turner
This aspect of osds has not changed from filestore with SSD journals to bluestore with DB and WAL on SSDs. If the SSD fails, all osds using it aren't lost and need to be removed from the cluster and recreated with a new drive. You can never guarantee data integrity on bluestore or filestore if
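
To make the consequence concrete, here is a rough sketch of the rebuild sequence after a shared DB/WAL SSD dies, for one affected OSD. The id and device paths are hypothetical, and the exact commands depend on the release and on how the cluster was deployed; run them only once you accept losing and re-creating these OSDs:

    ID=12                      # hypothetical id of an OSD that used the dead SSD
    systemctl stop ceph-osd@$ID
    ceph osd out $ID
    ceph osd purge $ID --yes-i-really-mean-it   # remove it from CRUSH, auth and the osd map
    ceph-volume lvm zap --destroy /dev/sdc      # wipe the old data disk
    # then recreate it against a partition on the replacement SSD:
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme1n1p1

Repeat for every OSD that had its DB/WAL on the failed device; the data itself is recovered by normal replication or EC from the rest of the cluster.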