On 13/12/2018 15:10, Mark Nelson wrote:
> Hi Florian,
>
> On 12/13/18 7:52 AM, Florian Haas wrote:
>> On 02/12/2018 19:48, Florian Haas wrote:
>>> Hi Mark,
>>>
>>> just taking the liberty to follow up on this one, as I'd really like to
>>> get to the bottom of this.
>>>
>>> On 28/11/2018 16:53, Florian Haas wrote:
>>>> On 28/11/2018 15:52, Mark Nelson wrote:
>>>>> Option("bluestore_default_buffered_read", Option::TYPE_BOOL,
>>>>> Option::LEVEL_ADVANCED)
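
For anyone who wants to experiment with the option quoted above, a minimal
sketch of how it could be toggled (the values are illustrative only, not a
recommendation, and some BlueStore options only take full effect after an
OSD restart):

    # ceph.conf, on the OSD nodes
    [osd]
    bluestore_default_buffered_read = true

    # or injected at runtime, without editing ceph.conf:
    ceph tell 'osd.*' injectargs '--bluestore_default_buffered_read=true'
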
On 28/11/2018 15:52, Mark Nelson wrote:
>> Shifting over a discussion from IRC and taking the liberty to resurrect
>> an old thread, as I just ran into the same (?) issue. I see
>> *significantly* reduced performance on RBD reads, compared to writes
>> with the same parameters. "rbd bench ..."
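
The exact invocation is cut off above; the read-vs-write comparison being
described would look something like this (pool and image names are
placeholders):

    # random 4K writes
    rbd bench --io-type write --io-pattern rand --io-size 4K \
        --io-total 1G rbdpool/testimage
    # random 4K reads with otherwise identical parameters
    rbd bench --io-type read --io-pattern rand --io-size 4K \
        --io-total 1G rbdpool/testimage
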
On 11/28/18 8:36 AM, Florian Haas wrote:
> On 14/08/2018 15:57, Emmanuel Lacour wrote:
>> On 13/08/2018 16:58, Jason Dillaman wrote:
>>> See [1] for ways to tweak the bluestore cache sizes. I believe that by
>>> default, bluestore will not cache any data but instead will only
>>> attempt to cache its key/value store and metadata.

On 13/08/2018 16:58, Jason Dillaman wrote:
> See [1] for ways to tweak the bluestore cache sizes. [...]

I suppose so too, because the default ratio is to cache as ...
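
For reference, these are the kinds of knobs [1] points at; a sketch for a
Luminous-era OSD, with purely illustrative values:

    [osd]
    # total BlueStore cache per HDD-backed OSD, in bytes (here: 1 GiB)
    bluestore_cache_size_hdd = 1073741824
    # fraction of the cache for the RocksDB key/value cache
    bluestore_cache_kv_ratio = 0.4
    # fraction of the cache for onode metadata
    bluestore_cache_meta_ratio = 0.4
    # whatever remains (here: 20%) is available for caching object data
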
On Mon, Aug 13, 2018 at 10:44 AM Emmanuel Lacour wrote:
> On 13/08/2018 16:29, Jason Dillaman wrote:
>> For such a small benchmark (2 GiB), I wouldn't be surprised if you are
>> not just seeing the Filestore-backed OSDs hitting the page cache for
>> the reads whereas the Bluestore-backed OSDs need to actually hit the
>> disk. Are the two clusters ...
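
A standard way to take the page cache out of such a comparison (generic
Linux, nothing Ceph-specific) is to drop it on every OSD node between
benchmark runs:

    # as root: flush dirty pages, then drop the clean page cache,
    # dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches
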
On Mon, Aug 13, 2018 at 10:01 AM Emmanuel Lacour wrote:
> On 13/08/2018 15:55, Jason Dillaman wrote:
>> [...]
>
> so the problem seems to be located on the "rbd" side ...

That's a pretty big apples-to-oranges comparison (4KiB random IO to 4MiB
full-object IO). With your RBD workload, the OSDs will be seeking after
each 4KiB read but w/ your RADOS bench ...
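
To compare like with like, the IO sizes would have to match on both sides;
a sketch (pool and image names are placeholders):

    # rados bench with 4 KiB objects instead of the default 4 MiB,
    # keeping the objects around so a read pass can follow
    rados bench -p rbdpool 60 write -b 4096 -t 16 --no-cleanup
    rados bench -p rbdpool 60 rand -t 16

    # versus rbd bench issuing 4 KiB random reads within a single image
    rbd bench --io-type read --io-pattern rand --io-size 4K rbdpool/testimage
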
On 13/08/2018 15:21, Jason Dillaman wrote:
Is this a clean (new) cluster and RBD image you are using for your test or
has it been burned in? When possible (i.e. it has enough free space),
bluestore will essentially turn your random RBD image writes into
sequential writes. This optimization doesn't work for random reads unless
your read ...
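
One way to "burn in" an image before the read test, as asked about above
(a sketch, with placeholder names and an --io-total that should match the
image size):

    # fully populate the image with sequential writes
    rbd bench --io-type write --io-pattern seq --io-size 4M \
        --io-total 10G rbdpool/testimage
    # then measure random reads against real, allocated data
    rbd bench --io-type read --io-pattern rand --io-size 4K \
        --io-total 1G rbdpool/testimage
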
Dear ceph users,
I set up a new cluster:
- Debian stretch
- ceph 12.2.7
- 3 nodes with mixed mon/osd
- 4 x 4 TB HDD OSDs per node
- 2 SSDs per node, shared among the OSDs for db/wal
- each OSD disk alone in a RAID0 volume with WriteBack cache
Inside a VM I get really good writes (200 MB/s, 5k IOPS for direct 4K rand ...
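
The benchmark command is cut off above; a typical direct-IO test of this
kind from inside the VM would be something like the following with fio
(the device path and job parameters are assumptions, and the write test
destroys the device's contents):

    # 4K random writes, direct IO, bypassing the guest page cache
    fio --name=randwrite --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
    # the same parameters for random reads
    fio --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based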