From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Milanov, Radoslav Nikiforov
Sent: 17 November 2017 22:56
To: Mark Nelson <mnel...@redhat.com>; David Turner <drakonst...@gmail.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

Here's some […]
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>; David Turner
<drakonst...@gmail.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

It depends on what you expect your typical workload to be like. Ceph (and
distributed storage in general) likes high io depths, so writes […]
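For concreteness, a fio job along these lines would drive a higher io depth against an RBD image. This is a sketch, not a command from the thread: the pool and image names are placeholders, and it assumes fio was built with the rbd ioengine.

```ini
; 4k random-write test at io depth 32 against RBD
; (pool/image/client names below are placeholders)
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimage
rw=randwrite
bs=4k
direct=1
time_based=1
runtime=60

[rbd-iodepth32]
iodepth=32
```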
To: David Turner <drakonst...@gmail.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

Did you happen to have a chance to try with a higher io depth?

Mark

On 11/16/2017 09:53 AM, Milanov, Radoslav Nikiforov wrote:
> FYI
>
> Having 50GB block.db made n[…]
From: David Turner <drakonst...@gmail.com>
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>
Cc: Mark Nelson <mnel...@redhat.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

I'd probably say 50GB to leave some extra space over-provisioned. 50GB
should definitely pr[…]
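In ceph.conf terms this is a one-line setting, read at OSD creation time when the partitions are laid out. A sketch with the 50GB from the discussion expressed in bytes:

```ini
[osd]
; sizes the block.db partition when the OSD is (re)created;
; value is in bytes -- 50 GB = 50 * 1024^3
bluestore_block_db_size = 53687091200
```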
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>
Cc: Mark Nelson <mnel...@redhat.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

You have to configure the size of the db partition in the config file for the
cluster. If your db partition is 1GB, then I can all but guarantee that
you're using your HDD for yo[…]
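One way to see whether an OSD really is falling back to the HDD for its DB is the bluefs counters in `ceph daemon osd.N perf dump`. A minimal sketch of reading that output, assuming the Luminous-era counter names `db_used_bytes` and `slow_used_bytes` and made-up sample numbers:

```python
import json

# Sketch: report whether BlueFS has spilled DB data onto the slow (HDD)
# device, from the JSON printed by `ceph daemon osd.N perf dump`.
# The "bluefs" counter names are as seen on Luminous.
def check_spillover(perf_dump_json):
    bluefs = json.loads(perf_dump_json)["bluefs"]
    return {
        "db_used_gb": bluefs["db_used_bytes"] / 2**30,
        "slow_used_gb": bluefs["slow_used_bytes"] / 2**30,
        "spilling": bluefs["slow_used_bytes"] > 0,
    }

# Made-up sample: 9 GiB of DB data, nothing on the slow device.
sample = json.dumps({"bluefs": {"db_used_bytes": 9 * 2**30,
                                "slow_used_bytes": 0}})
print(check_spillover(sample))
```

A non-zero `slow_used_bytes` would mean RocksDB data is living on the spinner despite the SSD partition.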
[…]

- Rado

From: David Turner [mailto:drakonst...@gmail.com]
Sent: Tuesday, November 14, 2017 4:40 PM
To: Mark Nelson <mnel...@redhat.com>
Cc: Milanov, Radoslav Nikiforov <rad...@bu.edu>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

How big was your block.db partition for each OSD and what size are your HDDs?
Also how full is your cluster? It's possib[…]
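For context on why the question matters: the upstream BlueStore documentation later settled on block.db of roughly 4% of the data device as a rule of thumb. That figure is the upstream guideline, not something from this thread; a quick helper to put the 50GB in perspective:

```python
# Rule-of-thumb block.db sizing: ~4% of the data device
# (upstream BlueStore guideline, not a figure from this thread).
def suggested_db_size_gb(hdd_size_gb, fraction=0.04):
    return hdd_size_gb * fraction

# By this rule a 4 TB (4000 GB) HDD wants roughly a 160 GB block.db,
# so a 50 GB db on such a drive is only about 1.25% of its capacity.
print(round(suggested_db_size_gb(4000)))
```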
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

How big were the writes in the Windows test and how much concurrency was
there? Historically bluestore does pretty well for us with small random
writes, so your write results surprise me a bit. I suspect it's the low queue […]
[…] the OSDs, so additional tests are possible if that helps.

- Rado

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson
Sent: Tuesday, November 14, 2017 4:04 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore

Hi Radoslav,

Is RBD cache enabled and in writeback mode? Do you have client side
readahead?

Both are doing better for writes than you'd expect from the native
performance of the disks assuming they are typical 7200RPM drives and
you are using 3X replication (~150 IOPS * 27 / 3 = ~1350 […]
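Mark's parenthetical is the standard back-of-the-envelope ceiling: every client write costs one disk write per replica, so the raw per-disk IOPS times the OSD count gets divided by the replica count. As a sketch:

```python
# Rough aggregate write-IOPS ceiling for a replicated pool: each client
# write turns into `replication` disk writes, so divide the raw disk IOPS.
def cluster_write_iops(osds, iops_per_disk=150, replication=3):
    return iops_per_disk * osds // replication

print(cluster_write_iops(27))  # ~150 IOPS * 27 OSDs / 3 replicas = 1350
```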
Hi,

We have a 3-node, 27-OSD cluster running Luminous 12.2.1.

In the filestore configuration there are 3 SSDs used for journals of 9 OSDs on
each host (1 SSD has 3 journal partitions for 3 OSDs).

I've converted filestore to bluestore by wiping 1 host at a time and waiting
for recovery. SSDs now contain […]