Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2018-01-01 Thread Christian Balzer
Hello,

On Tue, 2 Jan 2018 01:23:45 +0100 Ronny Aasen wrote:
> On 30.12.2017 15:41, Milanov, Radoslav Nikiforov wrote:
> > Performance as well - in my testing FileStore was much quicker than
> > BlueStore.
>
> with filestore you often have a ssd journal in front, this will often

[ceph-users] Question about librbd with qemu-kvm

2018-01-01 Thread 冷镇宇
Hi all, I am using librbd from Ceph 10.2.0 with qemu-kvm. When the virtual machine booted, I found that there is only one tp_librbd thread per rbd image, and the 4 KB read IOPS for one rbd image is only about 20,000. I'm wondering whether there are librbd settings in qemu that can add
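The ~20,000 IOPS figure is consistent with the single tp_librbd dispatch thread being the bottleneck: if each 4 KB read costs on the order of 50 µs of CPU time in that one thread, the thread can complete at most about 20 k ops/s no matter how deep the queue is. A back-of-envelope sketch (the 50 µs per-op cost is an assumed illustrative value, not a measurement from this thread):

```python
# Illustrative model: a single librbd dispatch thread serializes per-op work,
# so throughput is capped at 1 / per_op_time regardless of queue depth.

def single_thread_iops_cap(per_op_us: float) -> float:
    """Max IOPS one thread can sustain if each op costs per_op_us microseconds."""
    return 1_000_000 / per_op_us

# Assumed 50 us of thread time per 4 KB read -> ~20,000 IOPS ceiling,
# matching the figure reported in the post.
print(round(single_thread_iops_cap(50)))  # -> 20000
```

Since librbd at this vintage ran one tp_librbd thread per opened image, a common workaround was to attach several RBD images to the guest and stripe across them, giving each image its own dispatch thread.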

[ceph-users] in the same ceph cluster, why the object in the same osd some are 8M and some are 4M?

2018-01-01 Thread linghucongsong
Hi, all! I use ceph rbd for OpenStack; my ceph version is 10.2.7. I noticed a surprising thing: among the objects saved on the OSDs, the objects in some pgs are 8M and in some pgs are 4M. Can someone tell me why? Thanks!
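A likely explanation: RBD object size is a per-image property fixed at creation time. The image "order" gives an object size of 2^order bytes, so order 22 is 4 MiB (the rbd default) and order 23 is 8 MiB; `rbd info <image>` reports the order. Different OpenStack components can create images with different defaults (for instance, glance's rbd backend historically defaulted to an 8 MB chunk size while cinder used 4 MB), which would produce a mix of 4M and 8M objects in one cluster. The arithmetic, as a quick sketch (the glance/cinder defaults are offered as a plausible cause, not confirmed from this thread):

```python
# RBD stripes an image across objects of 2**order bytes each;
# the order is set at image creation (rbd create --order / --object-size).

def object_size_mib(order: int) -> int:
    """Object size in MiB for a given RBD image order."""
    return (2 ** order) // (1024 * 1024)

print(object_size_mib(22))  # -> 4   (rbd default order)
print(object_size_mib(23))  # -> 8
```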

Re: [ceph-users] PG active+clean+remapped status

2018-01-01 Thread 한승진
Are all OSDs the same version? I recently experienced a similar situation. I upgraded all OSDs to the exact same version and re-set the pool configuration, like below: ceph osd pool set min_size 5 I have a 5+2 erasure-coded pool. The important thing is not the value of min_size but the re-configuration, I think. I
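For reference, `ceph osd pool set` takes the pool name before the option name; a hedged sketch using a hypothetical pool name `ecpool`. For a k=5, m=2 erasure-coded pool, min_size 5 lets I/O continue with only the k data shards available (the more conservative value is k+1 = 6):

```shell
# "ecpool" is a hypothetical pool name; run against a live cluster.
# Inspect the current setting first:
ceph osd pool get ecpool min_size

# Allow I/O with only k=5 shards up (trades redundancy for availability):
ceph osd pool set ecpool min_size 5
```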

Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2018-01-01 Thread Ronny Aasen
On 30.12.2017 15:41, Milanov, Radoslav Nikiforov wrote:
> Performance as well - in my testing FileStore was much quicker than
> BlueStore.

With filestore you often have an SSD journal in front; this will often mask/hide slow spinning-disk write performance, until the journal size becomes the
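The masking effect described above can be sketched numerically: filestore acknowledges a write once it is committed to the SSD journal, so client latency tracks the SSD until the journal fills, after which sustained throughput falls back to the spinner's drain rate. A toy model with illustrative numbers (the SSD/HDD rates and journal size below are assumptions, not measurements):

```python
# Toy model of a filestore SSD journal in front of a spinning disk.
# Bursts are absorbed at SSD speed; the journal fills at the difference
# between SSD ingest and HDD drain, since draining happens concurrently.

def burst_seconds(journal_gib: float, ssd_mbs: float, hdd_mbs: float) -> float:
    """Seconds of SSD-speed writes before the journal fills."""
    fill_rate_mbs = ssd_mbs - hdd_mbs
    return journal_gib * 1024 / fill_rate_mbs

# Assumed: 10 GiB journal, 400 MB/s SSD ingest, 120 MB/s HDD drain.
print(round(burst_seconds(10, 400, 120), 1))  # -> 36.6
```

So on short benchmarks the journal makes filestore on spinners look fast; only sustained writes longer than this burst window expose the underlying disk, which fits the point the reply is making.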