…12:30, Jason Dillaman <jdill...@redhat.com> wrote:
On Thu, Nov 15, 2018 at 2:30 PM 赵贺东 <zhaohed...@gmail.com> wrote:
I tested on a 12-OSD cluster and changed objecter_inflight_op_bytes from 100 MB to 300 MB; performance did not seem to change noticeably. But at the beginning, librbd showed better performance…
…does this option not have an effect on krbd, only affect librbd?
Thanks.
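
For reference, a minimal sketch of the throttle override discussed here, assuming the client reads /etc/ceph/ceph.conf (the values are purely illustrative, not a recommendation):

# Append to the librbd client's ceph.conf and restart the client processes.
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
# default objecter_inflight_op_bytes is 100 MiB (104857600)
objecter_inflight_op_bytes = 314572800
# raise the matching op-count throttle as well (illustrative value)
objecter_inflight_ops = 8192
EOF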
> On 15 Nov 2018, at 3:50 PM, 赵贺东 wrote:
>
> Thank you for your suggestion.
> It really gives me a lot of inspiration.
>
>
> I will test as you suggest, and browse through src/common/config_opts.h
> to see if I can…
> …you'll pretty quickly hit the default "objecter_inflight_op_bytes = 100 MiB"
> limit, which will drastically slow (stall) librados. I would recommend
> re-testing librbd w/ a much higher throttle override.
> On Thu, Nov 15, 2018 at 11:34 AM 赵贺东 wrote:
>>
>> Th…
> …data about how your test is configured and run
> for us to have a good idea. IIRC librbd is often faster than krbd because it
> can support newer features and things, but krbd may have less overhead and is
> not dependent on the VM's driver configuration in QEMU...
>
> On Thu, Nov 15,
Hi cephers,
All our cluster OSDs are deployed on armhf.
Could someone say something about what reasonable performance rates are for
librbd vs. krbd?
Or the reasonable performance-loss range when we use librbd compared to krbd?
I googled a lot, but I could not find a solid criterion.
In fact, it…
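
(A common way to get an apples-to-apples librbd-vs-krbd number is to run the same fio job against both paths; a minimal sketch, where pool "rbd" and image "test-img" are placeholders:)

# librbd path: fio's rbd engine goes through librbd directly
fio --name=librbd-test --ioengine=rbd --pool=rbd --rbdname=test-img \
    --rw=write --bs=4M --iodepth=32 --runtime=60 --time_based

# krbd path: map the image through the kernel client, then test the block device
rbd map rbd/test-img
fio --name=krbd-test --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
    --rw=write --bs=4M --iodepth=32 --runtime=60 --time_based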
What is the size of your file? What about a big file?
If the file is big enough, it cannot be stored by only two OSDs.
If the file is very small, then, as you know, the object size is 4 MB, so it can be stored
as a single object on one primary OSD and its replica OSDs.
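
(You can see how a file maps onto objects by reading its layout attribute; a minimal sketch, with /mnt/cephfs/somefile as a placeholder path:)

# object_size defaults to 4 MiB, so a 1 GiB file spans roughly 256 objects
# spread over many PGs/OSDs, while a 100 KiB file lives in a single object.
getfattr -n ceph.file.layout /mnt/cephfs/somefile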
> On 2 Aug 2018, at 6:56 PM, Surya Bala wrote:
>
Hi Cephers,
One of our cluster's OSDs cannot start because a PG in the OSD cannot load the
infover_key from RocksDB; the log is below.
Could someone say something about this? Thank you, guys!
Log:
2018-06-26 15:09:16.036832 b66c6000 0 osd.41 3712 load_pgs
2056114 2018-06-26
>> …requirements and related problems will tell you that this simply isn't going
>> to work. The slightest hint of a problem will simply kill the OSD nodes with
>> OOM. Have you tried with smaller disks - like 1TB models (or even smaller,
>> like 256GB SSDs) and see if the same pr…
…te reply.
You are right; when the backend is BlueStore, there was OOM from time to time.
Now we will upgrade our HW to see whether we can avoid OOM.
Besides, after we upgraded the kernel from 4.4.39 to 4.4.120, the activating-osd XFS
error seems to be fixed.
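
(On memory-constrained armhf OSD nodes, one knob commonly used to reduce BlueStore's footprint is its cache size; a minimal sketch with illustrative values, not a recommendation:)

# Append to ceph.conf on the OSD hosts and restart the OSDs.
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
# shrink BlueStore's cache below its defaults to lower OOM risk (values illustrative)
bluestore_cache_size_hdd = 268435456
bluestore_cache_size_ssd = 268435456
EOF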
>
>
> On Tue, 6 Mar 2018 at 10:51, 赵贺东 …
Hi Brad,
Thank you for your attention.
> On 8 Mar 2018, at 4:47 PM, Brad Hubbard wrote:
>
> On Thu, Mar 8, 2018 at 5:01 PM, 赵贺东 wrote:
>> Hi All,
>>
>> Every time after we activate an OSD, we get "Structure needs cleaning" in
>> …
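
("Structure needs cleaning" is the message XFS reports for EUCLEAN, i.e. on-disk metadata inconsistency; a minimal diagnostic sketch, with the mount point and /dev/sdX1 as placeholders:)

# Check the OSD's XFS filesystem read-only (no repairs) while it is unmounted.
umount /var/lib/ceph/osd/ceph-12
xfs_repair -n /dev/sdX1    # -n: report problems only, change nothing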
Hello ceph-users,
It is a really, really, really tough problem for our team. We have investigated the problem for a long time and tried a lot of things, but we cannot solve it; even the concrete cause of the problem is still unclear to us! So, anyone, please give any solution/suggestion/opinion whatsoever…
> …e would work (as
> long as systemd is available)
>
> On Sat, Dec 30, 2017 at 9:11 PM, 赵贺东 <zhaohed...@gmail.com> wrote:
>> Hello Cary,
>>
>> Thank you for your detailed description, it’s really helpful for me!
>> I will have a try when I get back to my office!
> I am a Gentoo user and use OpenRC, so this may not apply to you.
> ==
> cd /etc/init.d/
> ln -s ceph ceph-osd.12
> /etc/init.d/ceph-osd.12 start
> rc-update add ceph-osd.12 default
>
> Cary
>
> On Fri, Dec 29, 2017 at 8:47 AM, 赵贺东 <zha…
...@gmail.com> wrote:
>
>
> You could add a file named /usr/sbin/systemctl and add:
> exit 0
> to it.
>
> Cary
>
> On Dec 28, 2017, at 18:45, 赵贺东 <zhaohed...@gmail.com> wrote:
>
>
> Hello ceph-users!
>
Hello ceph-users!
I am a Ceph user from China.
Our company deploys Ceph on ARM Ubuntu 14.04.
The Ceph version is Luminous 12.2.2.
When I try to activate an OSD with ceph-volume, I get the following error (the osd
prepare stage seems to work normally).
It seems that ceph-volume only works under systemd, but…
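
(Pulling together the two workarounds suggested earlier in this thread, stubbing out systemctl and driving the OSD through an init script, here is a minimal sketch; OSD id 12 is a placeholder:)

# 1. Make ceph-volume's systemctl calls no-ops on a non-systemd host.
cat > /usr/sbin/systemctl <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /usr/sbin/systemctl

# 2. On Gentoo/OpenRC (per Cary's suggestion), register the OSD with the ceph init script.
cd /etc/init.d/
ln -s ceph ceph-osd.12
/etc/init.d/ceph-osd.12 start
rc-update add ceph-osd.12 default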