On Mon, Apr 28, 2014 at 1:26 AM, Gandalf Corvotempesta <[email protected]> wrote:
> 2014-04-27 23:20 GMT+02:00 Andrey Korolyov <[email protected]>:
>> For the record, ``rados df'' will give an object count. Would you mind
>> sending out your ceph.conf? I cannot imagine what parameter may raise
>> memory consumption so dramatically, so the config at a glance may reveal
>> some detail. A core dump should also be extremely useful (though it's
>> better to pass that to Inktank).
>
> http://pastie.org/pastes/9118130/text?key=vsjj5g4ybetbxu7swflyvq
>
> From the config below, I've removed the single mon definition to hide
> IPs and hostnames before posting to the ML.
>
> [global]
> auth cluster required = cephx
> auth service required = cephx
> auth client required = cephx
> fsid = 6b9916f9-c209-4f53-98c6-581adcdf0955
> osd pool default pg num = 8192
> osd pool default pgp num = 8192
> osd pool default size = 3
>
> [mon]
> mon osd down out interval = 600
> mon osd min down reporters = 7
>
> [osd]
> osd mkfs type = xfs
> osd journal size = 16384
> osd mon heartbeat interval = 30
> # Performance tuning
> filestore merge threshold = 40
> filestore split multiple = 8
> osd op threads = 8
> # Recovery tuning
> osd recovery max active = 5
> osd max backfills = 2
> osd recovery op priority = 2
Nothing looks wrong, except the heartbeat interval, which should probably be smaller for recovery considerations. Try ``ceph osd tell X heap release'', and if it does not change memory consumption, file a bug.

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
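The troubleshooting steps suggested in this thread could be run roughly as follows. This is a sketch against a live cluster, assuming the 2014-era ``ceph osd tell'' syntax quoted above and using OSD id 0 as a placeholder; substitute the OSD whose ceph-osd process shows high resident memory.

```shell
# Object count per pool, as suggested for gauging per-object overhead
rados df

# Inspect tcmalloc heap statistics before releasing (OSD id 0 is a placeholder)
ceph osd tell 0 heap stats

# Ask tcmalloc to return freed pages to the OS
ceph osd tell 0 heap release

# Compare resident set size of the OSD daemons before and after the release
ps -o pid=,rss=,comm= -C ceph-osd
```

If the resident size does not drop after ``heap release'', the memory is genuinely in use rather than held by the allocator, which is the case worth filing a bug for.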
