Hi,
I just took a look at ::kmastat on a few different hosts:
###
512 GB RAM
(cache lines: name, buf size, bufs in use, bufs total, memory in use, allocs succeeded, allocs failed;
vmem lines: name, memory in use, memory total, memory import, allocs succeeded, allocs failed)
kmem_slab_cache 72 44808715 44808720 3.11G 44834222 0
kmem_bufctl_cache 24 91214588 91214732 2.08G 91304910 0
kmem_va_4096 4K 4933423 4933440 18.8G 4962056 0
kmem_va_8192 8K 464952 464976 3.55G 476457 0
kmem_va_16384 16K 466813 466816 7.12G 467777 0
zfs_file_data_4096 4K 17205159 17205184 65.6G 17209189 0
zfs_file_data_8192 8K 23957041 23957056 183G 23957941 0
anon_cache 48 31681244 31698779 1.46G 2479954481 0
zio_cache 904 144 1603809 1.36G 507594403 0
zio_data_buf_4096 4K 17199415 17200444 65.6G 62800020 0
zio_buf_8192 8K 3 261028 1.99G 1119321164 0
zio_data_buf_8192 8K 23954209 23957041 183G 168397492 0
zio_buf_16384 16K 392807 466167 7.11G 807004683 0
zio_buf_131072 128K 12688 20714 2.53G 228536333 0
zio_data_buf_131072 128K 5870 26895 3.28G 2778607 0
dmu_buf_impl_t 184 33010375 33013365 6.00G 340780063 0
arc_buf_hdr_t_full 176 41664496 41668066 7.23G 212907854 0
arc_buf_t 48 41677165 41680691 1.92G 235402260 0
Total [kmem_msb] 5.37G 136757487 0
Total [kmem_va] 29.6G 5922600 0
Total [kmem_default] 32.5G 2348564905 0
Total [kmem_io_4G] 72K 4056 0
Total [zfs_file_data] 249G 41173769 0
Total [zfs_file_data_buf] 252G 304008873 0
heap 48.7G 15.5T 0 1923241 0
vmem_metadata 5.91G 5.91G 5.91G 1451285 0
vmem_seg 5.54G 5.54G 5.54G 1451115 0
kmem_metadata 5.80G 5.80G 5.80G 1407951 0
kmem_msb 5.37G 5.37G 5.37G 1407224 0
kmem_firewall_va 4.44G 4.44G 4.44G 1278567 0
kmem_oversize 4.44G 4.44G 4.44G 1278567 0
kmem_va 32.5G 32.5G 32.5G 266161 0
kmem_default 32.5G 32.5G 32.5G 5928357 0
###
###
128 GB RAM
kmem_slab_cache 72 8577543 21876965 1.52G 43097349 0
kmem_bufctl_cache 24 25470174 44978778 1.03G 81024812 0
kmem_va_4096 4K 2679996 2680288 10.2G 3691603 0
kmem_va_16384 16K 164112 222944 3.40G 1180984 0
zfs_file_data_4096 4K 7584636 13495072 51.5G 41093472 0
zio_data_buf_4096 4K 7513082 7584536 28.9G 639155166 0
zio_buf_16384 16K 163190 163754 2.50G 496175975 0
zio_buf_131072 128K 9981 10325 1.26G 372501742 0
dmu_buf_impl_t 184 7683463 21000609 3.81G 499528971 0
arc_buf_hdr_t_full 176 14661541 19820922 3.44G 693303872 0
Total [kmem_msb] 3.57G 129534748 0
Total [kmem_va] 14.6G 5037306 0
Total [kmem_default] 15.1G 423444326 0
Total [kmem_io_4G] 12K 27 0
Total [zfs_file_data] 51.5G 41420525 0
Total [zfs_file_data_buf] 29.0G 639952381 0
heap 24.1G 15.9T 0 1108151 0
vmem_metadata 3.02G 3.02G 3.02G 740843 0
vmem_seg 2.83G 2.83G 2.83G 740693 0
kmem_metadata 3.79G 3.79G 3.79G 937749 0
kmem_msb 3.57G 3.57G 3.57G 937072 0
kmem_firewall_va 1.13G 1.13G 1.13G 700961 0
kmem_oversize 1.13G 1.13G 1.13G 700961 0
kmem_va 16.1G 16.1G 16.1G 214509 0
kmem_default 15.1G 15.1G 15.1G 4750057 0
###
###
64 GB RAM
kmem_va_4096 4K 862244 862432 3.29G 1006947 0
kmem_va_8192 8K 87146 183952 1.40G 1201674 0
kmem_va_16384 16K 99997 131600 2.01G 366031 0
zfs_file_data_8192 8K 2692925 4103776 31.3G 5314893 0
zio_data_buf_8192 8K 2673475 2692925 20.5G 251004041 0
zio_buf_16384 16K 96685 99731 1.52G 266516682 0
arc_buf_hdr_t_full 176 5739187 7616026 1.32G 246448201 0
Total [kmem_va] 6.74G 2608893 0
Total [kmem_default] 6.54G 1481913652 0
Total [kmem_io_4G] 60K 7286 0
Total [kmem_io_2G] 20K 53 0
Total [zfs_file_data] 31.3G 5320557 0
Total [zfs_file_data_buf] 20.6G 251273793 0
heap 9.86G 954G 0 673579737 0
kmem_va 7.77G 7.77G 7.77G 109983 0
kmem_default 6.54G 6.54G 6.54G 2466175 0
###
I just copied the lines that I think should be interesting (just ignore the ZFS
stuff).
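To compare snapshots like the ones above over time, a small parser can rank the caches whose in-use memory grew the most. This is only a minimal sketch: it assumes the whitespace-separated column layout shown in the pastes, and that snapshots were captured with something like `echo ::kmastat | mdb -k > snap1.txt` on the host.

```python
# Compare two '::kmastat' snapshots and report the caches whose
# in-use memory grew the most between them (assumed column layout:
# name, buf size, bufs in use, bufs total, mem in use, allocs, fails).
import re

# Suffix multipliers for the "memory in use" column (e.g. "3.11G").
_UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def parse_size(tok):
    """Convert a kmastat size token like '3.11G' or '72' to bytes."""
    m = re.fullmatch(r"([0-9.]+)([KMGT]?)", tok)
    if not m:
        return None
    value, unit = m.groups()
    return float(value) * _UNITS.get(unit, 1)

def parse_kmastat(text):
    """Return {cache_name: bytes_in_use} for kmem cache lines."""
    usage = {}
    for line in text.splitlines():
        cols = line.split()
        # Cache lines have exactly 7 columns; vmem and 'Total' lines
        # have fewer and are skipped here.
        if len(cols) == 7:
            mem = parse_size(cols[4])
            if mem is not None:
                usage[cols[0]] = mem
    return usage

def top_growth(before, after, n=5):
    """Caches whose in-use memory grew the most between snapshots."""
    a, b = parse_kmastat(before), parse_kmastat(after)
    deltas = {name: mem - a.get(name, 0) for name, mem in b.items()}
    return sorted(deltas.items(), key=lambda kv: -kv[1])[:n]
```

Feeding it two snapshots taken before and after the kernel growth should highlight which caches account for the change, which is the kind of relative comparison discussed below.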
Greets
Kilian
________________________________________
From: Robert Mustacchi <[email protected]>
Sent: Friday, June 24, 2016 7:05 PM
To: [email protected]
Subject: Re: [smartos-discuss] Kernel RAM usage
On 6/24/16 8:24, Kilian Ries wrote:
> My question now is: is there any possibility to free up RAM to start a new
> VM? ZFS ARC is correctly shrinking, but kernel RAM usage doesn't shrink.
The answer here is that it depends on what's using that memory. For
example, if it's memory allocated due to socket buffers, then it won't
be reclaimed as long as that exists.
So here's what I'd recommend as next steps for debugging this. You can
use the ::kmastat dcmd with mdb to get a pretty good breakdown of where
the different groups of memory are being used. I think it might be
interesting to take a look at this under the same conditions in which you
were looking at ::memstat. There will obviously be a lot of growth, so I
think the trick will be to do some analysis where you look at the
relative ratios of what's changing and see what looks out of whack.
Depending on what that shows, we may be able to dig a bit deeper into
the question of what's allocating that memory.
Robert
-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now