Hi,
Quoting Stefan Kooman (ste...@bit.nl):
> Hi,
>
> TL;DR: we see "used" memory grow indefinitely on our OSD servers,
> until the point that either 1) an OSD process gets killed by the OOM
> killer, or 2) an OSD aborts (probably because malloc cannot provide
> more RAM). I suspect a memory leak in the OSDs.
With default memory settings, the assumed memory requirement of Ceph is
about 1 GB of RAM per 1 TB of OSD size. Raising any of these settings
above their defaults will raise that baseline.
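A minimal sketch of that rule of thumb (the 1 GB per 1 TB figure from above; the function name and the `extra_gb` parameter are illustrative, not part of Ceph):

```python
def baseline_ram_gb(osd_size_tb, extra_gb=0):
    """Rule of thumb from this thread: ~1 GB of RAM per 1 TB of OSD size,
    plus whatever raised cache settings add on top of that baseline."""
    return osd_size_tb * 1 + extra_gb

# Example: a node with 18 x 8 TB OSDs at default settings:
print(baseline_ram_gb(18 * 8))  # -> 144
```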
On Tue, Mar 27, 2018 at 1:10 AM Alex Gorbachev wrote:
> On Mon, Mar 26, 2018 at 3:08 PM, Igor Fedotov wrote:
> > Hi Alex,
> >
> >
On Mon, Mar 26, 2018 at 3:08 PM, Igor Fedotov wrote:
> Hi Alex,
>
> I can see your bug report: https://tracker.ceph.com/issues/23462
>
> If your settings from there also apply to your comment here, then you
> have the BlueStore cache size limit set to 5 GB, which totals 90 GB of
> RAM for 18 OSDs for the BlueStore cache only.
On 03/26/2018 09:09 PM, Alex Gorbachev wrote:
I am seeing these entries under load - should be plenty of RAM on a
node with 128GB RAM and 18 OSDs
This is self-inflicted because you have increased:
bluestore_cache_size_hdd = 5368709120
* 18 OSDs = 96636764160 bytes (~90 GiB)
Look at your dashboards.
k
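The arithmetic behind that number, spelled out with the values taken from the messages above:

```python
# Cache limit raised to 5 GiB per OSD, 18 OSDs per node (from this thread).
bluestore_cache_size_hdd = 5368709120   # 5 GiB in bytes
num_osds = 18

total_cache = bluestore_cache_size_hdd * num_osds
print(total_cache)           # 96636764160 bytes
print(total_cache / 2**30)   # 90.0 GiB of cache alone on a 128 GB node
```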
Hi Alex,
I can see your bug report: https://tracker.ceph.com/issues/23462
if your settings from there are applicable for your comment here then
you have bluestore cache size limit set to 5 Gb that totals in 90 Gb RAM
for 18 OSD for BlueStore cache only.
There is also additional memory overh
On Wed, Mar 21, 2018 at 2:26 PM, Kjetil Joergensen wrote:
> I retract my previous statement(s).
>
> My current suspicion is that this isn't a leak so much as it is
> load-driven; after enough waiting, it generally seems to settle around
> some equilibrium. We do seem to sit on the mempools x 2
I retract my previous statement(s).
My current suspicion is that this isn't a leak so much as it is
load-driven; after enough waiting, it generally seems to settle around
some equilibrium. We do seem to sit on mempools x 2.4 ~ ceph-osd RSS,
which is on the higher side (I see documentation
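That observation can be turned into a quick sanity check: sum the mempool byte counts and compare against the process RSS. The 2.4 factor below is just the empirical ratio reported above, not a documented constant, and the function name is illustrative:

```python
def estimated_rss_bytes(mempool_bytes, factor=2.4):
    """Estimate ceph-osd resident memory from the total bytes accounted
    in its mempools, using the ~2.4x ratio observed in this thread."""
    return int(mempool_bytes * factor)

# 2 GiB accounted in mempools would suggest roughly 4.8 GiB resident:
print(estimated_rss_bytes(2 * 2**30))
```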
We don't run compression as far as I know, so that wouldn't be it. We do
actually run a mix of bluestore & filestore - due to the rest of the
cluster predating a stable bluestore by some amount.
12.2.2 -> 12.2.4 on 2018/03/10: I don't see any increase in memory usage.
No compression, of course
        "bytes": 64136
    },
    "buffer_anon": {
        "items": 76863,
        "bytes": 21327234
    },
    "buffer_meta": {
        "items": 910,
        "bytes": 80080
    },
    "osd": {
        "items": 328,
        "bytes
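The fragment above is part of a per-pool mempool dump (obtainable via the OSD admin socket, e.g. `ceph daemon osd.N dump_mempools`). A small sketch that sums the per-pool byte counts from such a dump; the sample dict below reuses numbers from the fragment loosely for illustration, and real dumps nest these pools under a top-level key depending on version:

```python
import json

# Sample shaped like the dump fragment above (illustrative numbers).
sample = json.loads("""{
    "buffer_anon": {"items": 76863, "bytes": 21327234},
    "buffer_meta": {"items": 910, "bytes": 80080},
    "osd": {"items": 328, "bytes": 64136}
}""")

# Total bytes currently accounted across all mempools:
total_bytes = sum(pool["bytes"] for pool in sample.values())
print(total_bytes)  # 21471450
```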
Hi,
addendum: We're running 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b).
The workload is a mix of 3xreplicated & ec-coded (rbd, cephfs, rgw).
-KJ
On Tue, Mar 6, 2018 at 3:53 PM, Kjetil Joergensen wrote:
> Hi,
>
> so.. +1
>
> We don't run compression as far as I know, so that wouldn't be
Hi,
so.. +1
We don't run compression as far as I know, so that wouldn't be it. We do
actually run a mix of bluestore & filestore - due to the rest of the
cluster predating a stable bluestore by some amount.
The interesting part is - the behavior seems to be specific to our
bluestore nodes.
Below
On Thu, Mar 1, 2018 at 5:37 PM, Subhachandra Chandra wrote:
> Even with bluestore we saw memory usage plateau at 3-4GB with 8TB drives
> filled to around 90%. One thing that does increase memory usage is the
> number of clients simultaneously sending write requests to a particular
> primary OSD if
Even with bluestore we saw memory usage plateau at 3-4GB with 8TB drives
filled to around 90%. One thing that does increase memory usage is the
number of clients simultaneously sending write requests to a particular
primary OSD if the write sizes are large.
Subhachandra
On Thu, Mar 1, 2018 at 6:1
With default memory settings, the general rule is 1 GB of RAM per 1 TB of
OSD. If you have a 4TB OSD, you should plan to have at least 4GB of RAM.
This was the recommendation for filestore OSDs, though in practice it was
a bit more memory than those OSDs needed. From what I've seen, this rule
is a little more appropriate with bluestore
Quoting Caspar Smit (caspars...@supernas.eu):
> Stefan,
>
> How many OSDs and how much RAM are in each server?
Currently 7 OSDs, 128 GB RAM. Max will be 10 OSDs in these servers. 12
cores (at least one core per OSD).
> bluestore_cache_size=6G will not mean each OSD is using max 6GB RAM right?
A
Stefan,
How many OSDs and how much RAM are in each server?
bluestore_cache_size=6G will not mean each OSD uses at most 6GB of RAM, right?
Our bluestore hdd OSDs with bluestore_cache_size at 1G use ~4GB of total
RAM. The cache is only one part of a bluestore OSD's memory usage.
Kind regards,
Caspar
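Caspar's data point (1G cache, ~4 GB resident) suggests budgeting well above the cache value itself. A hypothetical helper for planning per-node RAM; the ~3 GB non-cache overhead default is inferred from that single observation, not an official figure:

```python
def planned_ram_gb(num_osds, cache_gb, overhead_gb=3):
    """Per-node RAM budget: each OSD needs its cache plus a few GB of
    non-cache overhead (allocator, pglog, metadata, ...). The 3 GB
    default comes from the 1G-cache -> ~4GB-RSS report in this thread."""
    return num_osds * (cache_gb + overhead_gb)

# 10 OSDs with bluestore_cache_size=6G on a 128 GB node:
print(planned_ram_gb(10, 6))  # -> 90, leaving some headroom on 128 GB
```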
Hi Stefan,
can you disable compression and check whether memory is still leaking?
If it stops, then the issue is definitely somewhere along the "compress"
path.
Thanks,
Igor
On 2/28/2018 6:18 PM, Stefan Kooman wrote:
Hi,
TL;DR: we see "used" memory grow indefinitely on our OSD servers.
Until
Hi,
TL;DR: we see "used" memory grow indefinitely on our OSD servers,
until the point that either 1) an OSD process gets killed by the OOM
killer, or 2) an OSD aborts (probably because malloc cannot provide
more RAM). I suspect a memory leak in the OSDs.
We were running 12.2.2. We are now running 12.2.3.
Hi,
I'm using Ceph Jewel 10.2.2. I noticed that when I put multiple objects
of the same file, from the same user, into ceph-rgw (S3), the RAM usage
of ceph-osd increases and is never reduced. At the same time, the upload
speed drops significantly.
Please help me solve this problem.
Thanks!