Hi all.
I'm trying to change an OSD's KV backend using the instructions mentioned here:
http://pic.doit.com.cn/ceph/pdf/20180322/4/0401.pdf
But the ceph-osdomap-tool --check step fails with the following error:
ceph-osdomap-tool: /build/ceph-12.2.5/src/rocksdb/db/version_edit.h:188: void
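For context, the conversion those instructions describe boils down to an offline leveldb-to-rocksdb copy of the OSD's omap, followed by the check that fails above. A rough sketch of the steps, assuming a filestore OSD; the OSD id and paths are examples, and the exact ceph-kvstore-tool/ceph-osdomap-tool arguments should be verified against your Ceph version:

```shell
# Stop the OSD so nothing writes to the omap during conversion.
systemctl stop ceph-osd@0

OSD=/var/lib/ceph/osd/ceph-0

# Copy all keys from the old leveldb omap into a new rocksdb store
# (10000 keys per transaction; the last argument selects the output backend).
mv "$OSD/current/omap" "$OSD/current/omap.leveldb"
ceph-kvstore-tool leveldb "$OSD/current/omap.leveldb" \
    store-copy "$OSD/current/omap" 10000 rocksdb

# Sanity-check the converted omap before restarting the OSD.
ceph-osdomap-tool --omap-path "$OSD/current/omap" \
    --backend rocksdb --command check
```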
Hi,
We use a write-around cache tier with libradosstriper-based clients. We ran
into a bug that causes performance degradation:
http://tracker.ceph.com/issues/22528 . It is especially bad with many small
objects, no larger than one striper chunk, since such objects are promoted
on every read/write lock.
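For context on why chunk-sized objects are the worst case: libradosstriper spreads a logical object across RADOS objects in fixed-size stripe units, so an object no larger than one chunk lives entirely in its first RADOS object, and every locked read/write hits (and promotes) that single object. A sketch of the standard RAID0-style striping arithmetic, not the library's actual code; parameter names are illustrative:

```python
def chunk_for_offset(off, stripe_unit, stripe_count, object_size):
    """Map a byte offset in the striped logical object to
    (RADOS object index, offset inside that object).

    Standard RAID0-style Ceph striping; object_size must be a
    multiple of stripe_unit.
    """
    stripes_per_object = object_size // stripe_unit
    blockno = off // stripe_unit        # stripe-unit index overall
    stripeno = blockno // stripe_count  # which stripe (row)
    stripepos = blockno % stripe_count  # which object column in the row
    objectsetno = stripeno // stripes_per_object
    objectno = objectsetno * stripe_count + stripepos
    block_in_object = stripeno % stripes_per_object
    return objectno, block_in_object * stripe_unit + off % stripe_unit

# With stripe_unit=4, stripe_count=2, object_size=8: a logical object no
# larger than one stripe unit maps entirely onto RADOS object 0, so every
# read/write lock touches that one object.
print(chunk_for_offset(0, 4, 2, 8))   # → (0, 0)
print(chunk_for_offset(4, 4, 2, 8))   # → (1, 0): second unit, next object
print(chunk_for_offset(8, 4, 2, 8))   # → (0, 4): wraps back to object 0
```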
Thanks for the answers!
As it reduces caching efficiency, I've opened an issue:
http://tracker.ceph.com/issues/22528
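For anyone reproducing this, the promotion-related settings of a cache pool can be inspected with standard `ceph osd pool get` fields; the pool name `cachepool` below is a placeholder:

```shell
# List the promotion-related knobs of the cache pool (name is a placeholder).
for opt in hit_set_type hit_set_count hit_set_period \
           min_read_recency_for_promote min_write_recency_for_promote; do
    ceph osd pool get cachepool $opt
done

# The cache mode (e.g. writearound) shows up in the pool detail listing:
ceph osd pool ls detail | grep cachepool
```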
15.12.2017, 23:03, "Gregory Farnum" <gfar...@redhat.com>:
> On Thu, Dec 14, 2017 at 9:11 AM, Захаров Алексей <zakharov@yandex.ru> wrote:
>> Gregory Farnum <gfar...@redhat.com> wrote:
>>> Voluntary "locking" in RADOS is an "object class" operation. These are
>>> not part of the core API and cannot run on EC pools, so any operation
>>> using them will cause an immediate promotion.
>>>
>>> On Wed, Dec 13, 2017 at 4:02 AM Захаров Алексей <zakharov...
Hello,
I've found that when a client takes a lock on an object, Ceph ignores any
promotion settings and promotes the object immediately.
Is it a bug or a feature?
Is it configurable?
Hope for any help!
Ceph version: 10.2.10 and 12.2.2
We use libradosstriper-based clients.
Cache pool settings:
Hi, Nick
Thank you for the answer!
It's still unclear to me: do those options have no effect at all, or is the
disk thread used for some other operations?
09.11.2017, 04:18, "Nick Fisk":
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com]
Hello,
Today we use ceph jewel with:
osd disk thread ioprio class=idle
osd disk thread ioprio priority=7
and "nodeep-scrub" flag is set.
We want to change the scheduler from CFQ to deadline, so these options will
lose their effect.
I've tried to find out what operations are performed in "disk thread".
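For background on why the switch matters: these two options are applied via ioprio_set() on the OSD's disk thread, and I/O priorities are only honored by the CFQ scheduler, so they become no-ops under deadline. A sketch of checking and switching the scheduler; the device name is an example, and the sysfs change does not persist across reboots:

```shell
# Show the active scheduler for the OSD's disk (brackets mark the current one).
cat /sys/block/sda/queue/scheduler

# Switch to deadline; after this the ioprio options below have no effect,
# because only CFQ honors I/O priorities set via ioprio_set().
echo deadline > /sys/block/sda/queue/scheduler

# The jewel-era OSD options in question, injectable at runtime:
ceph tell osd.* injectargs \
    '--osd_disk_thread_ioprio_class=idle --osd_disk_thread_ioprio_priority=7'
```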