Dear John,
Thank you for your response.
My cluster has 4616 PGs and 13 pools, as shown below:
in which default.rgw.buckets.data is an erasure-coded pool with the config:
+ pg_num=pgp_num=1792
+ size=12
+ erasure-code-profile:
directory=/usr/lib64/ceph/erasure-code
k=9
m=3
plugin=isa
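For reference, a profile with these settings can be created with the standard `ceph osd erasure-code-profile set` command. This is only a sketch; the profile name `rgw-ec-profile` is illustrative, not from this thread:

```shell
# Sketch: create an erasure-code profile matching the settings above.
# "rgw-ec-profile" is an illustrative name.
ceph osd erasure-code-profile set rgw-ec-profile \
    k=9 m=3 \
    plugin=isa \
    directory=/usr/lib64/ceph/erasure-code

# Verify the profile contents
ceph osd erasure-code-profile get rgw-ec-profile
```

With k=9 and m=3, each object is split into 9 data chunks plus 3 coding chunks, which matches the pool's size=12.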
You should subtract buffers and cache from the used memory to get a more
accurate picture of how much memory is actually unavailable to
processes. In this case that puts you at around 22G of used - or a better
term might be unavailable - memory. Buffers and cache can be reclaimed
when needed.
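The arithmetic above can be sketched as follows, using the figures from the "free -hw" output in this thread (the variable names are illustrative, not from any tool):

```python
# Rough sketch: estimate memory genuinely unavailable to processes
# by subtracting reclaimable buffers and cache from "used".
total_gb = 125          # Mem: total
used_gb = 58            # Mem: used (here including reclaimable cache)
buffers_gb = 0.0037     # ~3.7M of buffers, effectively negligible
cache_gb = 36           # page cache, reclaimable under memory pressure

truly_used_gb = used_gb - buffers_gb - cache_gb
print(round(truly_used_gb))  # -> 22
```

This matches the ~22G figure mentioned above: the kernel will release buffers and cache when applications need the memory, so the "available" column (60G here) is the more meaningful number.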
Hi Sam,
Thank you for your reply. I use the BTRFS file system on my OSDs.
Here is result of "*free -hw*":
          total    used    free   shared   buffers   cache   available
Mem:       125G     58G     31G     1.2M      3.7M     36G         60G
and "*ceph df*":
Hi Khang,
What file system do you use on your OSD nodes?
XFS always uses memory for caching data before writing it to disk,
so don't worry - it will always hold as much memory in your system as possible.
2017-02-15 10:35 GMT+07:00 Khang Nguyễn Nhật:
Hi all,
My Ceph OSDs are running on Fedora Server 24 with the following configuration:
128GB DDR3 RAM, Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 72 OSDs (8TB
per OSD). My cluster uses the Ceph Object Gateway with the S3 API. It
currently contains 500GB of data but has used > 50GB of RAM. I'm worried my
OSDs will die if i