Thanks, Greg.  This is as I suspected. Ceph is full of subtleties and I wanted 
to be sure.
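
For the archives: one quick way to confirm which daemon actually consumes which option (a sketch assuming Mimic's centralized config and a running cluster; the daemon names below are just examples):

```shell
# Ask the config subsystem what each daemon type sees.
# mon_osd_cache_size is consumed by ceph-mon (the monitors' OSDMap cache):
ceph config get mon mon_osd_cache_size    # 500 as of 13.2.3

# osd_map_cache_size is consumed by ceph-osd (each OSD's own map cache):
ceph config get osd osd_map_cache_size

# Or query a specific running daemon via its admin socket:
ceph daemon mon.a config get mon_osd_cache_size
ceph daemon osd.0 config get osd_map_cache_size
```

So when budgeting memory for dense OSD nodes, only osd_map_cache_size matters there; mon_osd_cache_size affects the monitor hosts.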

-- aad


> 
> The osd_map_cache_size controls the OSD's cache of maps; the change in 13.2.3 
> is to the default for the monitors' cache.
> On Mon, Jan 7, 2019 at 8:24 AM Anthony D'Atri <[email protected]> wrote:
> 
> 
> > * The default memory utilization for the mons has been increased
> >  somewhat.  Rocksdb now uses 512 MB of RAM by default, which should
> >  be sufficient for small to medium-sized clusters; large clusters
> >  should tune this up.  Also, the `mon_osd_cache_size` has been
> >  increased from 10 OSDMaps to 500, which will translate to an
> >  additional 500 MB to 1 GB of RAM for large clusters, and much less
> >  for small clusters.
> 
> 
> Just so I don't perseverate on this: mon_osd_cache_size is a [mon] setting for 
> ceph-mon only?  Does it relate to osd_map_cache_size?  ISTR that in the past 
> the latter defaulted to 500; I had seen a presentation (I think from Dan) at 
> an OpenStack Summit advising its decrease, and it defaults to 50 now.
> 
> I like to be very clear about where additional memory is needed, especially 
> for dense systems.
> 
> -- Anthony
> 

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
