I added one OSD node to the cluster and got 500 MB/s throughput on my
disks, 2 or 3 times better than before, but my latency rose about 5
times!
When I enable bluefs_buffered_io, the disk throughput drops to 200 MB/s
and my latency comes back down.
Is there any kernel config/tuning I should apply to get acceptable
latency without bluefs_buffered_io?
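
For reference, this is roughly how I am comparing the setting and the
kernel knobs on one node (just a rough sketch; "osd.0", the device name
"sdb" and the choice of sysctls are my own placeholders/guesses, not a
recommendation):

#!/usr/bin/env python3
# Rough sketch: dump bluefs_buffered_io plus the kernel writeback /
# readahead knobs I am experimenting with.
import subprocess

def ceph_get(option, who="osd.0"):
    # `ceph config get <who> <option>` reads the centralized config;
    # osd.0 is just an example daemon
    return subprocess.run(["ceph", "config", "get", who, option],
                          capture_output=True, text=True).stdout.strip()

def read_file(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

print("bluefs_buffered_io =", ceph_get("bluefs_buffered_io"))
print("osd_memory_target  =", ceph_get("osd_memory_target"))

# page-cache writeback behaviour (only matters when buffered IO is on)
for knob in ("dirty_ratio", "dirty_background_ratio", "vfs_cache_pressure"):
    print("vm." + knob, "=", read_file("/proc/sys/vm/" + knob))

# readahead on the OSD data device -- "sdb" is just a placeholder
print("read_ahead_kb =", read_file("/sys/block/sdb/queue/read_ahead_kb"))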

On Mon, Nov 23, 2020 at 3:52 PM Igor Fedotov <ifedo...@suse.de> wrote:

> Hi Seena,
>
> just to note  - this ticket might be relevant.
>
> https://tracker.ceph.com/issues/48276
>
>
> Mind leaving a comment there?
>
>
> Thanks,
>
> Igor
>
> On 11/23/2020 2:51 AM, Seena Fallah wrote:
> > Now one of my OSDs is hitting a segfault.
> > Here is the full trace: https://paste.ubuntu.com/p/4KHcCG9YQx/
> >
> > On Mon, Nov 23, 2020 at 2:16 AM Seena Fallah <seenafal...@gmail.com> wrote:
> >
> >> Hi all,
> >>
> >> After I upgraded from 14.2.9 to 14.2.14 my OSDs are using much less
> >> memory than before! I give each OSD a 6 GB memory target; before the
> >> upgrade about 20 GB of memory was free, and now, 24 hours after the
> >> upgrade, 104 GB of the 128 GB is free! Also, my OSD latency has
> >> increased.
> >> This happens in both the SSD and HDD tiers.
> >>
> >> Are there any upgrade notes I missed? Is it related to
> >> bluefs_buffered_io?
> >> If BlueFS does direct IO, shouldn't BlueFS/BlueStore use the targeted
> >> memory for its cache? And does this mean that before the upgrade the
> >> memory was being used by the kernel to buffer the IO rather than by
> >> ceph-osd itself?
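> >>
> >> To see where the memory actually sits, I have been comparing the
> >> OSD's resident memory against osd_memory_target and against the
> >> kernel page cache, roughly like this (a quick sketch; "osd.0" and
> >> picking the first ceph-osd PID on the host are just for
> >> illustration):
> >>
> >> import subprocess
> >>
> >> def ceph(*args):
> >>     # small helper around the ceph CLI; assumes it is in PATH
> >>     return subprocess.run(["ceph", *args], capture_output=True,
> >>                           text=True).stdout.strip()
> >>
> >> # configured memory target for an example OSD (bytes)
> >> print("osd_memory_target:",
> >>       ceph("config", "get", "osd.0", "osd_memory_target"))
> >>
> >> # resident memory of one ceph-osd process on this host (kB)
> >> pid = subprocess.run(["pgrep", "ceph-osd"], capture_output=True,
> >>                      text=True).stdout.split()[0]
> >> with open("/proc/" + pid + "/status") as f:
> >>     print(next(line for line in f if line.startswith("VmRSS")).strip())
> >>
> >> # kernel page cache (kB) -- this is where buffered BlueFS reads
> >> # would show up, rather than in the ceph-osd process itself
> >> with open("/proc/meminfo") as f:
> >>     print(next(line for line in f if line.startswith("Cached:")).strip())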
> >>
> >> Thanks.
> >>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
