At 12:30, Jason Dillaman <jdill...@redhat.com> wrote:
> On Thu, Nov 15, 2018 at 2:30 PM 赵贺东 <zhaohed...@gmail.com> wrote:
>> I tested on a 12-OSD cluster, changing objecter_inflight_op_bytes from
>> 100 MB to 300 MB; performance does not seem to change noticeably. But at
>> the beginning, librbd had better performance.
Does objecter_inflight_op_bytes have no effect on krbd, only on librbd?
Thanks.
> On Nov 15, 2018, at 3:50 PM, 赵贺东 wrote:
>
> Thank you for your suggestion.
> It really gives me a lot of inspiration.
>
>
> I will test per your suggestion, and browse through src/common/config_opts.h
> to see if I can [...]
>
> [...] You'll pretty
> quickly hit the default "objecter_inflight_op_bytes = 100 MiB" limit,
> which will drastically slow (stall) librados. I would recommend
> re-testing librbd w/ a much higher throttle override.
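For reference, a minimal sketch of what such a throttle override could look like in ceph.conf. The 300 MiB value is the one tested in this thread; the companion objecter_inflight_ops override is an assumption (its default is 1024), not something recommended in the thread:

```ini
[client]
# default is objecter_inflight_op_bytes = 104857600 (100 MiB);
# 314572800 is the 300 MiB override tested in this thread
objecter_inflight_op_bytes = 314572800
# companion op-count throttle (default 1024); raising it too is an
# assumption -- the thread only discusses the byte throttle
objecter_inflight_ops = 2048
```

The setting goes in the [client] section because the objecter throttles apply to the librados client side, not to the OSDs.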
> On Thu, Nov 15, 2018 at 11:34 AM 赵贺东 wrote:
>>
>> Thank you [...]
>
> You'll need to provide more data about how your test is configured and run
> for us to have a good idea. IIRC librbd is often faster than krbd because it
> can support newer features and things, but krbd may have less overhead and is
> not dependent on the VM's driver configuration.
Hi cephers,
All our cluster OSDs are deployed on armhf.
Could someone say what a reasonable performance ratio is for librbd vs. krbd?
Or what a reasonable range of performance loss is when using librbd compared
to krbd?
I googled a lot, but I could not find a solid criterion.
In fact, it con [...]
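For the librbd-vs-krbd comparison being asked about, one way to benchmark both paths against the same image is fio, which has a native rbd ioengine for the librbd path. This is a hedged sketch; the pool name `rbd`, image name `testimg`, and device `/dev/rbd0` are assumptions to adjust for your cluster:

```ini
; librbd (userspace) path: fio drives the image via librados/librbd
[librbd-write]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimg
rw=write
bs=4M
iodepth=32
direct=1

; krbd (kernel) path: map the image first ("rbd map rbd/testimg"),
; then point fio at the resulting block device
[krbd-write]
ioengine=libaio
filename=/dev/rbd0
rw=write
bs=4M
iodepth=32
direct=1
```

Running each job separately (fio --section=librbd-write jobs.fio, then --section=krbd-write) keeps the two paths from competing for the same OSDs during the comparison.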
> [...] and gives this result
>
> Regards
> Surya Balan
>
>
Hello,
A file maps to many objects, the objects map to many PGs (each PG has two
copies because your replication count is two), and the PGs map to many OSDs.
PGs can be distributed across all OSDs; they are not limited to only 2 OSDs.
A replication count of 2 only means each PG has 2 copies.
Hope this helps.
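The fan-out described above can be sketched in a few lines. This is purely illustrative: the hash and placement functions below are stand-ins, not Ceph's actual rjenkins hash or CRUSH, and the pool parameters (pg_num=128, 12 OSDs, size=2) are assumptions matching the cluster discussed in this thread:

```python
# Illustrative file -> objects -> PGs -> OSDs mapping (NOT real CRUSH).
import hashlib

PG_NUM = 128     # pool pg_num (assumed)
NUM_OSDS = 12    # 12-OSD cluster, as in this thread
REPLICAS = 2     # replication count 2 -> each PG has 2 copies

def object_to_pg(obj_name):
    # stand-in for Ceph's rjenkins hash + stable_mod
    h = int(hashlib.sha1(obj_name.encode()).hexdigest(), 16)
    return h % PG_NUM

def pg_to_osds(pg):
    # stand-in for CRUSH: pick REPLICAS distinct OSDs for this PG
    return [(pg + i) % NUM_OSDS for i in range(REPLICAS)]

# a file/image is striped into many objects; each object's PG lands on
# 2 OSDs, and the many PGs spread over all 12 OSDs
for obj in ("rbd_data.0000", "rbd_data.0001", "rbd_data.0002"):
    pg = object_to_pg(obj)
    print("%s -> pg %d -> osds %s" % (obj, pg, pg_to_osds(pg)))
```

The point the sketch makes is the one in the reply above: replication count 2 constrains copies per PG, not which OSDs the pool's PGs may use.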
> On Aug 2, 2018, at 3:43 PM, Surya Bala wrote:
>
> Hi folks,
Hi Cephers,
One of our cluster's OSDs cannot start because a PG on that OSD cannot load
infover_key from RocksDB; the log is below.
Could someone comment on this? Thank you!
Log:
2018-06-26 15:09:16.036832 b66c6000 0 osd.41 3712 load_pgs
2056114 2018-06-26 15:09:16.0369
>> [...] will tell you that this simply isn't going
>> to work. The slightest hint of a problem will simply kill the OSD nodes with
>> OOM. Have you tried with smaller disks - like 1TB models (or even smaller
>> like 256GB SSDs) and see if the same problem persists?
>>
>>
When the backend is BlueStore, there was OOM from time to time.
Now we will upgrade our HW to see whether we can avoid OOM.
Besides, after we upgraded the kernel from 4.4.39 to 4.4.120, the
activating-OSD xfs error seems to be fixed.
>
>
> On Tue, 6 Mar 2018 at 10:51, 赵贺东 <zhaohed...@gmail.com> wrote:
Thank you for your suggestions.
We will upgrade the Ubuntu distro and the Linux kernel to see whether the
problem still exists.
> On Mar 8, 2018, at 5:51 PM, Brad Hubbard wrote:
>
> On Thu, Mar 8, 2018 at 7:33 PM, 赵贺东 <zhaohed...@gmail.com> wrote:
>> Hi Brad,
>>
>> [...]
Hi Wido,
Thank you for attention!
> On Mar 8, 2018, at 4:21 PM, Wido den Hollander wrote:
>
>
>
> On 03/08/2018 08:01 AM, 赵贺东 wrote:
>> Hi All,
>> Every time after we activate osd, we got “Structure needs cleaning” in
>> /var/lib/ceph/osd/ceph-xxx/current/meta.
>>
Hi Brad,
Thank you for your attention.
> On Mar 8, 2018, at 4:47 PM, Brad Hubbard wrote:
>
> On Thu, Mar 8, 2018 at 5:01 PM, 赵贺东 wrote:
>> Hi All,
>>
>> Every time after we activate osd, we got “Structure needs cleaning” in
>> /var/lib/ceph/osd/ceph-xxx/current/meta.
Hi All,
Every time after we activate osd, we got “Structure needs cleaning” in
/var/lib/ceph/osd/ceph-xxx/current/meta.
/var/lib/ceph/osd/ceph-xxx/current/meta
# ls -l
ls: reading directory .: Structure needs cleaning
total 0
Could anyone comment on this error?
Thank you!
Hello ceph-users,
It is a really, really tough problem for our team. We have investigated the
problem for a long time and tried a lot of things, but we cannot solve it;
even the concrete cause of the problem is still unclear to us! So any
solution/suggestion/opinion whatever will [...]
> [...] (as long as systemd is available)
>
> On Sat, Dec 30, 2017 at 9:11 PM, 赵贺东 wrote:
>> Hello Cary,
>>
>> Thank you for your detailed description, it’s really helpful for me!
>> I will have a try when I get back to my office!
>>
>> Thank you for your attention!
>
> [...] apply to you.
> ==
> cd /etc/init.d/
> ln -s ceph ceph-osd.12
> /etc/init.d/ceph-osd.12 start
> rc-update add ceph-osd.12 default
>
> Cary
>
> On Fri, Dec 29, 2017 at 8:47 AM, 赵贺东 wrote:
>> Hello Cary!
>
>
> You could add a file named /usr/sbin/systemctl and add:
> exit 0
> to it.
>
> Cary
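Cary's workaround above can be sketched as a short script. On the real node the stub would go at /usr/sbin/systemctl as he describes; a scratch path is used here so the sketch is safe to run anywhere, and the ceph-osd@12 argument is just an example invocation:

```shell
# No-op systemctl stub: anything that shells out to systemctl on a
# non-systemd system will now succeed instead of failing.
DEST="$(mktemp -d)/systemctl"   # real target: /usr/sbin/systemctl
printf '#!/bin/sh\nexit 0\n' > "$DEST"
chmod +x "$DEST"
"$DEST" start ceph-osd@12 && echo "stub ok"
```

Note the stub silently ignores every systemctl command, so it should only live on hosts where nothing genuinely depends on systemd units.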
>
> On Dec 28, 2017, at 18:45, 赵贺东 <zhaohed...@gmail.com> wrote:
Hello ceph-users!
I am a Ceph user from China.
Our company deploys Ceph on arm Ubuntu 14.04.
The Ceph version is Luminous 12.2.2.
When I try to activate an OSD with ceph-volume, I get the following error
(the osd prepare stage seems to work normally).
It seems that ceph-volume only works under systemd, but Ubuntu 14.04 does not
use systemd.