Thanks, that's useful to know. I've pasted the output you asked for
below; thanks for taking a look.
Here's the output of dump_mempools:
{
    "mempool": {
        "by_pool": {
            "bloom_filter": {
                "items": 4806709,
                "bytes": 4806709
            },
            "bluestore_alloc": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_cache_data": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_cache_onode": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_cache_other": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_fsck": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_txc": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_writing_deferred": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_writing": {
                "items": 0,
                "bytes": 0
            },
            "bluefs": {
                "items": 0,
                "bytes": 0
            },
            "buffer_anon": {
                "items": 1303621,
                "bytes": 6643324694
            },
            "buffer_meta": {
                "items": 2397,
                "bytes": 153408
            },
            "osd": {
                "items": 0,
                "bytes": 0
            },
            "osd_mapbl": {
                "items": 0,
                "bytes": 0
            },
            "osd_pglog": {
                "items": 0,
                "bytes": 0
            },
            "osdmap": {
                "items": 8222,
                "bytes": 185840
            },
            "osdmap_mapping": {
                "items": 0,
                "bytes": 0
            },
            "pgmap": {
                "items": 0,
                "bytes": 0
            },
            "mds_co": {
                "items": 160660321,
                "bytes": 4080240182
            },
            "unittest_1": {
                "items": 0,
                "bytes": 0
            },
            "unittest_2": {
                "items": 0,
                "bytes": 0
            }
        },
        "total": {
            "items": 166781270,
            "bytes": 10728710833
        }
    }
}
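In case it helps, here's a quick Python sketch of how one could rank those
pools by size (just a rough helper, assuming the JSON above is saved to a
file; "mempools.json" is a made-up name, adjust to taste):

    import json

    # Load the saved output of "ceph daemon mds.x dump_mempools"
    with open("mempools.json") as f:
        pools = json.load(f)["mempool"]["by_pool"]

    # Print non-empty pools, largest first, in MiB
    for name, stats in sorted(pools.items(), key=lambda kv: kv[1]["bytes"], reverse=True):
        if stats["bytes"]:
            print("%-28s %10.1f MiB  (%d items)" % (name, stats["bytes"] / 2.0**20, stats["items"]))

Sorted that way, it's obvious that buffer_anon and mds_co dwarf everything else.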
And here's the heap_stats output:
MALLOC:    12418630040 (11843.3 MiB) Bytes in use by application
MALLOC: +      1310720 (    1.2 MiB) Bytes in page heap freelist
MALLOC: +    378986760 (  361.4 MiB) Bytes in central cache freelist
MALLOC: +      4713472 (    4.5 MiB) Bytes in transfer cache freelist
MALLOC: +     20722016 (   19.8 MiB) Bytes in thread cache freelists
MALLOC: +     62652416 (   59.8 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =  12887015424 (12290.0 MiB) Actual memory used (physical + swap)
MALLOC: +    309624832 (  295.3 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =  13196640256 (12585.3 MiB) Virtual address space used
MALLOC:
MALLOC:         921411              Spans in use
MALLOC:             20              Thread heaps in use
MALLOC:           8192              Tcmalloc page size
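A rough back-of-envelope from those two outputs (plain Python, numbers
copied straight from above):

    # mempool accounting vs. what tcmalloc says the application has allocated
    mempool_total   = 10728710833   # "total"/"bytes" from dump_mempools, ~10.0 GiB
    tcmalloc_in_use = 12418630040   # "Bytes in use by application",      ~11.6 GiB
    print("untracked: %.1f GiB" % ((tcmalloc_in_use - mempool_total) / 2.0**30))   # ~1.6 GiB

So mds_co on its own (~3.8 GiB) sits right around our 4 GB cache memory
limit; what I can't account for is the ~6.2 GiB in buffer_anon plus the
~1.6 GiB that tcmalloc reports in use but no mempool claims.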
On Wed, Aug 1, 2018 at 10:31 PM, Yan, Zheng <[email protected]> wrote:
> On Thu, Aug 2, 2018 at 3:36 AM Benjeman Meekhof <[email protected]> wrote:
>>
>> I've lately been encountering much higher than expected memory usage
>> on our MDS, which doesn't align with the cache memory limit even
>> accounting for potential over-runs. Our memory limit is 4GB but the
>> MDS process is steadily at around 11GB used.
>>
>> Coincidentally, we also have a new user heavily relying on hard links.
>> This led me to the following (old) document, which says "Hard links are
>> also supported, although in their current implementation each link
>> requires a small bit of MDS memory and so there is an implied limit
>> based on your available memory."
>> (https://ceph.com/geen-categorie/cephfs-mds-status-discussion/)
>>
>> Is that statement still correct, and could it potentially explain why
>> our memory usage appears so high? As far as I know this is a recent
>> development, and it does very closely correspond to a new user doing a
>> lot of hardlinking. We're on Ceph Mimic 13.2.1, though we first saw the
>> issue while still running 13.2.0.
>>
>
> That statement is no longer correct. What is the output of "ceph
> daemon mds.x dump_mempools" and "ceph tell mds.x heap stats"?
>
>
>> thanks,
>> Ben
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com