Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-18 Thread Nick Fisk
> Here's some …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-17 Thread Milanov, Radoslav Nikiforov
It depends on what you expect your typical workload to be like. Ceph (and distributed storage in general) likes high io depths, so writes …
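Since several replies in this thread hinge on re-testing at a higher io depth, here is a minimal sketch of such a run using fio's rbd engine; the pool and image names (bench / bench-img) are placeholders, not taken from the thread:

    fio --name=4k-randwrite-qd32 --ioengine=rbd \
        --clientname=admin --pool=bench --rbdname=bench-img \
        --rw=randwrite --bs=4k --iodepth=32 \
        --time_based --runtime=120

Comparing iodepth=1 against iodepth=32 (and a few numjobs values) should show whether the 50% gap persists once the OSDs see more concurrent writes.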

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-16 Thread Mark Nelson
Did you happen to have a chance to try with a higher io depth? Mark  On 11/16/2017 09:53 AM, Milanov, Radoslav Nikiforov wrote: FYI, having 50GB block.db made n…

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-16 Thread Milanov, Radoslav Nikiforov
Did you happen to have a chance to try with a higher io depth? Mark  On 11/16/2017 09:53 AM, Milanov, Radoslav Nikiforov wrote: > FYI > Having 50GB block.db …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-16 Thread Mark Nelson
I'd probably say 50GB to leave some extra space over-provisioned. 50GB should definitely pr…

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-16 Thread Milanov, Radoslav Nikiforov
You have to configure the …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-15 Thread Maged Mokhtar
On 2017-11-14 21:54, Milanov, Radoslav Nikiforov wrote: > Hi > We have a 3-node, 27-OSD cluster running Luminous 12.2.1. > In the filestore configuration there are 3 SSDs used for journals of 9 OSDs on each host (1 SSD has 3 journal partitions for 3 OSDs). > I've converted filestore to …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread David Turner
You have to configure the size of the db partition in the config file for the cluster. If your db partition i…

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread Milanov, Radoslav Nikiforov
You have to configure the size of the db partition in the config file for the cluster. If your db partition is 1GB, then I can all but guarantee that you're using your HDD for yo…
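For reference, the db partition size David refers to is set in ceph.conf and is only honoured when an OSD is created; it does not resize an existing partition. A sketch, assuming the 50GB target discussed later in the thread:

    [osd]
    # ~50 GiB in bytes; read by ceph-disk/ceph-volume at OSD creation time
    bluestore block db size = 53687091200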

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread David Turner
> - Rado …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread Milanov, Radoslav Nikiforov
How big was your block.db partition for each OSD, and what size are your HDDs? Also, how full is your cluster? It's possib…
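One way to check whether the db is actually fitting on the SSD rather than spilling onto the HDD is the bluefs counters exposed by the OSD admin socket; a hedged example, with osd.0 as a placeholder:

    # db_used_bytes vs db_total_bytes shows how full block.db is;
    # non-zero slow_used_bytes means metadata has spilled to the slow (HDD) device
    ceph daemon osd.0 perf dump bluefs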

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread Milanov, Radoslav Nikiforov
How big were the writes in the Windows test and how much concurrency was there? Historically Bluestore does pretty well for us with small random writes, so your write results surprise me a bit. I suspect it's the low queue …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread David Turner
> …the OSDs, so additional tests are possible if that helps. > - Rado

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread Mark Nelson
Hi Radoslav, Is RBD cache enabled and in writeback mode? Do you have client-side readahead? Both are doing better for writes than you'd expect …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread Milanov, Radoslav Nikiforov
Hi Radoslav, Is RBD cache enabled and in writeback mode? Do you have client-side readahead? Both are doing better for writes than you'd expect from the native performance of the disks, assuming they are typical 7200RPM drives and you are using …

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread Mark Nelson
Hi Radoslav, Is RBD cache enabled and in writeback mode? Do you have client-side readahead? Both are doing better for writes than you'd expect from the native performance of the disks, assuming they are typical 7200RPM drives and you are using 3X replication (~150 IOPS * 27 / 3 = ~1350 …
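The figure in parentheses is the usual back-of-the-envelope estimate: ~150 IOPS per 7200RPM spindle, times 27 OSDs, divided by 3 replicas, gives roughly 1350 write IOPS for the whole cluster. The client-side settings Mark is asking about live in the [client] section of ceph.conf on the RBD client; a sketch with illustrative values (writeback caching is the default behaviour once rbd cache is enabled):

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    # readahead is client-side; the values below are examples, not recommendations
    rbd readahead max bytes = 4194304
    rbd readahead disable after bytes = 52428800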

[ceph-users] Bluestore performance 50% of filestore

2017-11-14 Thread Milanov, Radoslav Nikiforov
Hi, We have a 3-node, 27-OSD cluster running Luminous 12.2.1. In the filestore configuration there are 3 SSDs used for journals of 9 OSDs on each host (1 SSD has 3 journal partitions for 3 OSDs). I've converted filestore to bluestore by wiping 1 host at a time and waiting for recovery. SSDs now contain …
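As a rough illustration of the conversion described above (not the exact commands used in the thread), recreating one OSD as Bluestore with its block.db on an SSD partition could look like the following under Luminous, with /dev/sdc (HDD) and /dev/sdb1 (SSD partition) as placeholder devices:

    # wipe and recreate a single OSD as bluestore; device names are hypothetical
    ceph-volume lvm zap /dev/sdc
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdb1

Repeating this OSD by OSD, host by host, and waiting for recovery in between matches the procedure Radoslav describes.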