Even with bluestore we saw memory usage plateau at 3-4GB per OSD with
8TB drives filled to around 90%. One thing that does increase memory
usage is the number of clients simultaneously sending large write
requests to a particular primary OSD.
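
If you want to see where the memory actually goes, the admin socket is
the quickest check. A minimal sketch, assuming a Luminous-era build and
osd.0 running on the local node:

  # per-category memory accounting inside the OSD process
  ceph daemon osd.0 dump_mempools

In our experience the bluestore_cache_* pools and osd_pglog tend to be
the largest consumers, and buffer_anon grows with in-flight client
writes, which is where the many-writers effect above shows up.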

Subhachandra

On Thu, Mar 1, 2018 at 6:18 AM, David Turner <[email protected]> wrote:

> With default memory settings, the general rule is 1GB RAM per 1TB of
> OSD.  If you have a 4TB OSD, you should plan for at least 4GB of RAM.
> This was the recommendation for filestore OSDs, though in practice it
> was more memory than they needed.  From what I've seen, the rule fits
> bluestore more closely and should still be observed.
>
> Please note that memory usage in a HEALTH_OK cluster is not the same
> amount of memory that the daemons will use during recovery.  I have seen
> deployments with 4x memory usage during recovery.
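>
> As a rough worked example of that rule plus recovery headroom
> (hypothetical drive sizes, adjust to your hardware):
>
>   # 10 OSDs of 4TB each, at 1GB RAM per 1TB of OSD
>   echo $((10 * 4))      # 40 GB baseline in HEALTH_OK
>   echo $((10 * 4 * 4))  # 160 GB worst case if every OSD hit 4x in recovery
>
> Not every OSD will peak at once, but plan for comfortable headroom
> above the baseline.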
>
> On Thu, Mar 1, 2018 at 8:11 AM Stefan Kooman <[email protected]> wrote:
>
>> Quoting Caspar Smit ([email protected]):
>> > Stefan,
>> >
>> > How many OSD's and how much RAM are in each server?
>>
>> Currently 7 OSDs and 128 GB RAM. The max will be 10 OSDs in these
>> servers, with 12 cores (at least one core per OSD).
>>
>> > bluestore_cache_size=6G will not mean each OSD is using max 6GB RAM,
>> > right?
>>
>> Apparently. Surely they will use more RAM than just the cache to
>> function correctly. I figured 3 GB per OSD would be enough ...
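>>
>> For reference, a minimal sketch of the relevant ceph.conf settings
>> (option names as in Luminous; the _hdd/_ssd variants only apply while
>> bluestore_cache_size itself is left at its default of 0):
>>
>>   [osd]
>>   bluestore_cache_size = 6G
>>   # or, per device class, with the line above removed:
>>   # bluestore_cache_size_hdd = 1G
>>   # bluestore_cache_size_ssd = 3G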
>>
>> > Our bluestore HDD OSDs with bluestore_cache_size at 1G use ~4GB of
>> > total RAM. The cache is only a part of a bluestore OSD's memory usage.
>>
>> A factor of 4 is quite high, isn't it? What is all this RAM used for
>> besides cache? RocksDB?
>>
>> So how should I size the amount of RAM in an OSD server for 10
>> bluestore SSDs in a replicated setup?
>>
>> Thanks,
>>
>> Stefan
>>
>> --
>> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
>> | GPG: 0xD14839C6                   +31 318 648 688 / [email protected]
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
