Hi all.
I'm trying to deploy OpenStack with Ceph Kraken bluestore OSDs.
The deploy went well, but when I run ceph osd tree I see the wrong weight on
the bluestore disks.
ceph osd tree | tail
-3 0.91849 host krk-str02
23 0.00980 osd.23 up 1.0 1.0
24 0.90869
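If the CRUSH weight really was set wrong at OSD creation it can be corrected by
hand; a minimal sketch, where osd.23 and the target weight are placeholders for
your actual OSD id and the disk's size in TiB:
# check the current CRUSH weights
ceph osd tree
# set the CRUSH weight of the affected OSD (normally the disk size in TiB)
ceph osd crush reweight osd.23 0.90919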
I may be fooling myself, but as far as I know:
- Kraken introduces compression for RGW (not on the OSD level, not for rbd;
see the command sketch below)
- Kraken stabilizes bluestore, a new OSD format that introduces
compression on the OSD level
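For the RGW side, compression is configured per placement target in the zone; a
rough sketch following http://docs.ceph.com/docs/master/radosgw/compression/
(the zone and placement names below are the defaults, adjust to your setup):
radosgw-admin zone placement modify --rgw-zone=default \
    --placement-id=default-placement --compression=zlib
# only objects written after the change takes effect get compressed;
# existing objects are not rewritten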
On 06/06/2017 04:36, Daniel K wrote:
> Hi,
>
> I see several mentions that compressi
Hi,
I see several mentions that compression is available in Kraken for
bluestore OSDs; however, I can find almost nothing in the documentation
that indicates how to use it.
I've found:
- http://docs.ceph.com/docs/master/radosgw/compression/
- http://ceph.com/releases/v11-2-0-kraken-released/
I'm
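For the bluestore side, the settings are OSD config options; a rough sketch of
how to find and set them (option names have moved around between releases, so
verify against what your build actually exposes):
# list the compression-related options your ceph-osd build knows about
ceph daemon osd.0 config show | grep compression
# then set the relevant ones in ceph.conf under [osd] and restart the
# OSDs, for example (names and values here are illustrative):
#   bluestore_compression_algorithm = snappy
#   bluestore_compression_mode = aggressive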
I think it could be because of this:
http://tracker.ceph.com/issues/19407
The clients were meant to stop trying to send reports to the mgr when
it goes offline, but the monitor may not have been correctly updating
the mgr map to inform clients that the active mgr had gone offline.
John
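If you want to check what the monitors think, the mgr map can be dumped
directly; a small sketch (assuming the mgr CLI in your release matches the
current one):
# show the mgr map as the monitors see it (epoch, active mgr, standbys)
ceph mgr dump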
On Wed, M
Hello,
We manually fixed the issue; our analysis is below.
Due to high CPU utilisation we stopped ceph-mgr on all our clusters.
On one of our clusters we saw high memory usage by the OSDs, some greater than
5 GB, causing OOM and resulting in the processes being killed.
The memory was released immediately when the
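A rough way to watch per-OSD memory while this is happening (osd.0 is a
placeholder; heap stats needs the OSDs to be built with tcmalloc):
# resident set size of every ceph-osd process on the node
ps -eo pid,rss,comm | grep ceph-osd
# tcmalloc heap statistics for one OSD
ceph tell osd.0 heap stats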
Hello,
We are still facing the same memory leak issue even after setting
bluestore_cache_size to 100M; the ceph-osd process is still being killed by
the out-of-memory killer.
Mar 27 01:57:05 cn1 kernel: ceph-osd invoked oom-killer: gfp_mask=0x280da,
order=0, oom_score_adj=0
Mar 27 01:57:05 cn1 kernel: ceph-osd cpuset=/ m
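One thing worth ruling out is that the setting never reached the running
daemons; a quick check via the admin socket (osd.0 is a placeholder, run it on
the host where that OSD lives):
# confirm the value the running OSD is actually using
ceph daemon osd.0 config get bluestore_cache_size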
Hi,
Does anyone have a cluster of decent scale running on Kraken and bluestore?
How are you finding it? Have you had any big issues arise?
Was it running non-bluestore before, and have you noticed any improvement?
Reads? Writes? IOPS?
Ashley
Sent from my iPhone
Hello Shinobu,
We have already raised a ticket for this issue. FYI:
http://tracker.ceph.com/issues/18924
Thanks,
Jayaram
On Mon, Feb 20, 2017 at 12:36 AM, Shinobu Kinjo wrote:
> Please open a ticket at http://tracker.ceph.com, if you haven't yet.
>
> On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah
Please open a ticket at http://tracker.ceph.com, if you haven't yet.
On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah
wrote:
> Hi Wido,
>
> Thanks for the information; let us know if this is a bug.
> As a workaround we will go with a small bluestore_cache_size of 100 MB.
>
> Thanks,
> Muthu
>
> On
Hi Wido,
Thanks for the information; let us know if this is a bug.
As a workaround we will go with a small bluestore_cache_size of 100 MB.
Thanks,
Muthu
On 16 February 2017 at 14:04, Wido den Hollander wrote:
>
> > On 16 February 2017 at 7:19, Muthusamy Muthiah <
> muthiah.muthus...@gmail
> On 16 February 2017 at 7:19, Muthusamy Muthiah
> wrote:
>
>
> Thanks, Ilya Letkowski, for the information; we will change this value
> accordingly.
>
What I understand from yesterday's performance meeting is that this seems like
a bug. Lowering this buffer reduces memory, but the root-cause
Thanks, Ilya Letkowski, for the information; we will change this value
accordingly.
Thanks,
Muthu
On 15 February 2017 at 17:03, Ilya Letkowski
wrote:
> Hi, Muthusamy Muthiah
>
> I'm not totally sure that this is a memory leak.
> We had the same problems with bluestore on ceph v11.2.0.
> Reduce bluesto
Hi, Muthusamy Muthiah,
I'm not totally sure that this is a memory leak.
We had the same problems with bluestore on Ceph v11.2.0.
Reducing the bluestore cache helped us to solve it and stabilize OSD memory
consumption at around the 3 GB level.
Perhaps this will help you:
bluestore_cache_size = 104857600
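In ceph.conf that goes under the [osd] section, roughly like this (restart the
OSDs afterwards so the new value takes effect):
[osd]
bluestore_cache_size = 104857600   # 100 MiB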
On Wed, Fe
Hi All,
On all our 5-node clusters with Ceph 11.2.0 we are encountering memory leak
issues.
Cluster details: 5 nodes with 24/68 disks per node, EC 4+1, RHEL 7.2.
Some traces using sar are below; the memory utilisation graph is attached.
(16:54:42)[cn2.c1 sa] # sar -r
07:50:01 kbmemfree kbmemused %me