It depends on which version of Ceph, but it's pretty normal under newer
versions.

There are a bunch of variables: how many PGs per OSD, how much data is in
the PGs, etc.  I'm a bit light on PGs (~60 per OSD) and heavy on data
(~3 TiB per OSD).  In the production cluster, under peak user traffic, my
OSDs use around 1 GiB of memory each.
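
If you want to compare against your own cluster, the two easy numbers to
pull are the resident size of each ceph-osd process and the per-OSD PG
count.  A rough sketch (the commands are illustrative; ceph osd df only
exists on newer releases):

    # Resident memory (RSS, in KiB) of the OSD daemons on this host
    ps -C ceph-osd -o pid,rss,args

    # Per-OSD utilization; on newer releases the PGS column shows
    # the PG count per OSD
    ceph osd df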

If there is some scrubbing, deep-scrubbing, or a recovery going on, I've
seen individual OSDs go as high as 4 GiB, which causes some problems...
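
When one balloons like that, it's worth checking whether it's live data
structures or just memory tcmalloc hasn't returned to the OS.  A sketch,
assuming your OSDs are built against tcmalloc (the default on most
packages) and using osd.12 as a stand-in id:

    # Dump allocator statistics for a single OSD
    ceph tell osd.12 heap stats

    # Ask tcmalloc to hand freed pages back to the kernel
    ceph tell osd.12 heap release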



On Thu, Nov 6, 2014 at 11:00 PM, 谢锐 <[email protected]> wrote:

> and take one OSD down, then run a stress test with fio.
>
> ------------------ Original ------------------
>
>
> From:  "谢锐"<[email protected]>;
>
> Date:  Fri, Nov 7, 2014 02:50 PM
>
> To:  "ceph-users"<[email protected]>;
>
>
> Subject:  [ceph-users] Is it normal that an OSD's memory exceeds 1 GB
> under a stress test?
>
>
> I set mon_osd_down_out_interval to two days and ran a stress test.  The
> memory of the OSD exceeded 1 GB.
