Hi Sam,
Thanks for your reply. I use the BTRFS file system on my OSDs.
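For reference, something like the following should confirm the filesystem type backing each OSD data directory (assuming the default /var/lib/ceph/osd/ layout; adjust the path if your OSDs live elsewhere):

    df -T /var/lib/ceph/osd/ceph-*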
Here is the result of "*free -hw*":

              total        used        free      shared     buffers       cache   available
Mem:           125G         58G         31G        1.2M        3.7M         36G         60G
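In case it helps, here is a quick one-liner (just a sketch, untested on this box) to see how much of that memory is held by the ceph-osd processes themselves rather than by the page cache:

    # sum the resident memory (RSS, reported in KB) of all ceph-osd daemons
    ps -C ceph-osd -o rss= | awk '{s+=$1} END {printf "ceph-osd total RSS: %.1f GB\n", s/1024/1024}'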

and "*ceph df*":

GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    523T      522T        1539G          0.29
POOLS:
    NAME                         ID     USED     %USED     MAX AVAIL     OBJECTS
    ............
    default.rgw.buckets.data     92     597G      0.15          391T       84392
    ............

I collected this output a few minutes ago.
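As a rough sanity check on my side (just a back-of-envelope estimate, assuming most of the ~58G "used" shown by free belongs to the ceph-osd daemons and the ~500MB-per-daemon guideline quoted below still applies):

    58GB used / 72 OSDs ≈ 0.8GB per ceph-osd daemon

so the per-daemon footprint seems close to that guideline rather than runaway growth, if I am reading the numbers correctly.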

2017-02-15 10:50 GMT+07:00 Sam Huracan <[email protected]>:

> Hi Khang,
>
> What file system do you use on your OSD nodes?
> XFS always uses memory for caching data before writing it to disk.
>
> So don't worry; it will always hold on to as much memory in your system as possible.
>
>
>
> 2017-02-15 10:35 GMT+07:00 Khang Nguyễn Nhật <
> [email protected]>:
>
>> Hi all,
>> My Ceph OSDs are running on Fedora Server 24 with the following configuration:
>> 128GB DDR3 RAM, an Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, and 72 OSDs
>> (8TB per OSD). The cluster serves the Ceph Object Gateway with the S3 API. It
>> currently holds about 500GB of data but is already using more than 50GB of RAM,
>> and I'm worried my OSDs will die if I keep putting files into it. I have read
>> "OSDs do not require as much RAM for regular operations (e.g., 500MB of RAM per
>> daemon instance); however, during recovery they need significantly more RAM
>> (e.g., ~1GB per 1TB of storage per daemon)." in the Ceph Hardware
>> Recommendations. Can someone give me advice on this issue? Thanks.
>>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
