Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-26 Thread Eneko Lacunza
Hi, On 25/11/18 at 18:23, Виталий Филиппов wrote: Ok... That's better than the previous thread with the file download, where the topic starter suffered from a normal only-metadata-journaled fs... Thanks for the link, it would be interesting to repeat similar tests. Although I suspect it shouldn't

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Martin Verges
Hello Anton, we have had some bad experience with consumer disks. They tend to fail quite early and sometimes have extremely poor performance in Ceph workloads. If possible, spend some money on reliable Samsung PM/SM863a SSDs. However, a customer of ours uses the WD Blue 1TB SSDs and seems to be quite

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Vitaliy Filippov
At least when I run a simple O_SYNC random 4k write test with a random Intel 545s SSD plugged in through a USB3-SATA adapter (UASP), pull the USB cable out, and then recheck the written data, everything is good and nothing is lost (however, iops are of course low, 1100-1200) -- With best regards,
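For anyone wanting to repeat this, a minimal sketch of that kind of test in C (the device path, test-area size, and write count are my assumptions; point it at a disposable disk):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BS     4096    /* 4k blocks, as in the test above */
#define BLOCKS 25600   /* 100 MiB test area (assumption) */
#define WRITES 1000    /* number of random writes (assumption) */

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]); return 1; }
    /* O_SYNC: every write must be durable before the call returns */
    int fd = open(argv[1], O_WRONLY | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    char buf[BS];
    memset(buf, 0xA5, BS);  /* known pattern so the data can be rechecked later */
    srand(42);              /* fixed seed: offsets are reproducible for the recheck */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < WRITES; i++) {
        off_t off = (off_t)(rand() % BLOCKS) * BS;  /* random 4k-aligned offset */
        if (pwrite(fd, buf, BS, off) != BS) { perror("pwrite"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d O_SYNC 4k writes in %.2fs = %.0f iops\n", WRITES, s, WRITES / s);
    close(fd);
    return 0;
}

After pulling the cable, the same fixed seed lets you re-derive the offsets and verify that the 0xA5 pattern survived at each of them.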

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Виталий Филиппов
Ok... That's better than the previous thread with the file download, where the topic starter suffered from a normal only-metadata-journaled fs... Thanks for the link, it would be interesting to repeat similar tests. Although I suspect it shouldn't be that bad... at least not all desktop SSDs are that

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Jesper Krogh
On 25 Nov 2018, at 15.17, Vitaliy Filippov wrote: > All disks (HDDs and SSDs) have cache and may lose non-transactional writes that are in-flight. However, any adequate disk handles fsyncs (i.e. SATA FLUSH CACHE commands). So transactional writes should never be lost, and in Ceph ALL

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Vitaliy Filippov
Ceph issues fsyncs all the time... and, of course, it has journaling :) (fsync alone is of course not sufficient). With enterprise SSDs, which have capacitors, fsync just becomes a no-op, and thus transactional write performance becomes the same as non-transactional (i.e. 10+ times faster
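To make the distinction concrete, a minimal sketch of a "transactional" write loop in C (the file name and record count are my assumptions, not anything Ceph-specific): each record is followed by an fsync, which is exactly where a consumer SSD stalls on a full cache flush while a capacitor-backed one returns almost immediately.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* journal.bin is a placeholder name for illustration only */
    int fd = open("journal.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char rec[4096];
    memset(rec, 0, sizeof rec);

    for (int i = 0; i < 100; i++) {
        if (write(fd, rec, sizeof rec) != (ssize_t)sizeof rec) { perror("write"); return 1; }
        /* The flush that makes the write transactional; it reaches the
         * device as a SATA FLUSH CACHE (or NVMe Flush) command. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }
    }
    close(fd);
    return 0;
}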

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Vitaliy Filippov
"the real risk is the lack of power loss protection. Data can be corrupted on unclean shutdowns" It's not! Lack of "advanced power loss protection" only means lower iops with fsync, but not the possibility of data corruption. "Advanced power loss protection" is basically the synonym for

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread jesper
>> the real risk is the lack of power loss protection. Data can be corrupted on unclean shutdowns > it's not! lack of "advanced power loss protection" only means lower iops with fsync, but not the possibility of data corruption > "advanced power loss protection" is basically the synonym

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-25 Thread Vitaliy Filippov
On 24 Nov 2018, at 18.09, Anton Aleksandrov wrote: We plan to have data on a dedicated disk in each node, and my question is about the WAL/DB for Bluestore. How bad would it be to place it on the system consumer SSD? How big a risk is it that everything will get "slower than using spinning HDD for the

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-24 Thread Jesper Krogh
> On 24 Nov 2018, at 18.09, Anton Aleksandrov wrote: > We plan to have data on a dedicated disk in each node, and my question is about the WAL/DB for Bluestore. How bad would it be to place it on the system consumer SSD? How big a risk is it that everything will get "slower than using spinning HDD

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-24 Thread Ashley Merrick
As it’s consumer / old hardware, I am guessing you’ll only be using 1Gbps for the network. If so, that will definitely be your bottleneck across the whole environment, with both client and replication data sharing a single 1Gbps link. Your SSDs will sit mostly idle; if you have 10Gbps then different
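Back-of-the-envelope, assuming 3x replication: 1Gbps is roughly 118MB/s of usable bandwidth. A client write crosses the primary OSD's link once inbound and twice more outbound to the two replicas, so even on a full-duplex link the replication traffic alone caps sustained writes to one node at around 59MB/s, and less in practice once reads and heartbeats share the wire. Any SSD, consumer or not, can outrun that.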

[ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-24 Thread Anton Aleksandrov
Hello community, We are building a Ceph cluster on pretty old (but free) hardware. We will have 12 nodes with 1 OSD per node and will migrate data from a single RAID5 setup, so our traffic is not very intense; we basically need more space and the possibility to expand it. We plan to have data on