Thanks for the info.

On Thu, Sep 6, 2018 at 7:03 PM Darius Kasparavičius <daz...@gmail.com>
wrote:

> Hello,
>
> I'm currently running a similar setup: bluestore OSDs with one NVMe
> device for the db/wal. That NVMe device is not large enough to support a
> 160GB db partition per OSD, so I'm stuck with 50GB each. So far I haven't
> had any issues with slowdowns or crashes.
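(A quick aside on the arithmetic: the per-OSD db size in a setup like this simply falls out of dividing the shared NVMe capacity by the number of OSDs behind it. A rough Python sketch; the capacity and OSD count below are made-up placeholders, not the actual numbers from this setup:)

    # Placeholder numbers only -- the actual NVMe size and OSD count
    # are not stated in this thread.
    nvme_capacity_gb = 500      # assumed size of the shared db/wal device
    osds_per_host = 10          # assumed number of OSDs sharing it
    recommended_db_gb = 160     # per-OSD figure quoted in this thread

    per_osd_db_gb = nvme_capacity_gb // osds_per_host
    print(f"{per_osd_db_gb} GB of db/wal per OSD "
          f"(vs. the recommended {recommended_db_gb} GB)")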
>
> The cluster is relatively idle: up to 10k IOPS at peak with a 50/50
> read/write IO distribution. Throughput is a different matter, though;
> it's more like 10:1, at 1 GBps/100 MBps.
>
> I have noticed that the best latencies I can get from RAID0 on SAS
> devices come from running them in write-back mode with readahead disabled
> on the controller. You may well see different results; I wish you luck
> with your testing.
>
>
> On Thu, Sep 6, 2018 at 4:14 PM David Turner <drakonst...@gmail.com> wrote:
> >
> > The official Ceph documentation recommendation for the db partition of a
> > 4TB bluestore OSD is 160GB each.
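(For what it's worth, that 160GB figure lines up with the roughly-4%-of-device-size guideline for block.db in the bluestore docs; the 4% factor below is my reading of those docs rather than anything stated in this thread. A quick sketch of the arithmetic:)

    # block.db sizing: roughly 4% of the data device (my reading of the
    # bluestore docs; treat the factor as an assumption).
    osd_capacity_gb = 4000      # a 4TB OSD, as in the recommendation above
    db_fraction = 0.04
    db_size_gb = osd_capacity_gb * db_fraction
    print(f"Suggested block.db size: {db_size_gb:.0f} GB")   # -> 160 GB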
> >
> > The Samsung 850 PRO is not an enterprise-class SSD. A quick search of
> > the ML will show which SSDs people are using.
> >
> > As was already suggested, the better option is an HBA as opposed to a
> > RAID controller. If you are set on your controllers, write-back is fine
> > as long as you have a BBU; otherwise you should be using write-through.
> >
> > On Thu, Sep 6, 2018, 8:54 AM Muhammad Junaid <junaid.fsd...@gmail.com>
> wrote:
> >>
> >> Thanks. Can you please clarify: if we use another enterprise-class SSD
> >> for the journal, should we enable the write-back caching available on
> >> the RAID controller for the journal device, or connect it as
> >> write-through? Regards.
> >>
> >> On Thu, Sep 6, 2018 at 4:50 PM Marc Roos <m.r...@f1-outsourcing.eu>
> wrote:
> >>>
> >>>
> >>>
> >>>
> >>> Do not use the Samsung 850 PRO for the journal.
> >>> Just use an LSI Logic HBA (e.g. SAS2308).
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: Muhammad Junaid [mailto:junaid.fsd...@gmail.com]
> >>> Sent: Thursday, 6 September 2018 13:18
> >>> To: ceph-users@lists.ceph.com
> >>> Subject: [ceph-users] help needed
> >>>
> >>> Hi there
> >>>
> >>> Hope everyone is doing well. I need urgent help with a Ceph cluster
> >>> design. We are planning a 3-OSD-node cluster to begin with. Details are
> >>> as follows:
> >>>
> >>> Servers: 3 x Dell R720xd
> >>> OS drives: 2 x 2.5" SSD
> >>> OSD drives: 10 x 3.5" SAS 7200rpm, 3/4 TB
> >>> Journal drives: 2 x Samsung 850 PRO SSD, 256GB each
> >>> RAID controller: PERC H710 (512MB cache)
> >>> OSD drives: RAID0 mode
> >>> Journal drives: JBOD mode
> >>> RocksDB: on the same journal drives
> >>>
> >>> My question is: is this setup good for a start? And the critical
> >>> question is: should we enable write-back caching on the controller for
> >>> the journal drives? Please suggest. Thanks in advance. Regards.
> >>>
> >>> Muhammad Junaid
> >>>
> >>>
> >>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
