Re: [ceph-users] emperor -> firefly : Significant increase in RAM usage

2014-07-07 Thread Sylvain Munaut
Hi,

 Yesterday I finally updated our cluster to firefly (latest stable
 commit) and what's fairly apparent is a much higher RAM usage on the
 OSDs:

 http://i.imgur.com/qw9iKSV.png

 Has anyone noticed the same? A sudden 25% increase in idle RAM usage
 is hard to ignore...


So no one noticed the same?

Or do people just consider a 25% RAM bump per release acceptable?


Cheers,

   Sylvain
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] emperor -> firefly : Significant increase in RAM usage

2014-07-07 Thread Dane Elwell
Hi,

We actually saw a decrease in memory usage after upgrading to Firefly,
though we did reboot the nodes after the upgrade while we had the
maintenance window. This is with 216 OSDs total (32-40 per node):
http://i.imgur.com/BC7RuXJ.png

(The daily spikes were caused by a rogue updatedb process running
every morning, which has since been fixed.)
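
One way to fix that is to keep updatedb out of the OSD data
directories via PRUNEPATHS in /etc/updatedb.conf. A sketch, assuming
the default /var/lib/ceph layout (keep whatever entries your distro
already lists there and just append the Ceph path):

    # /etc/updatedb.conf: stop locate's updatedb from crawling the object store
    PRUNEPATHS="/tmp /var/spool /media /var/lib/ceph"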

Dane

On 7 July 2014 09:23, Wido den Hollander w...@42on.com wrote:
 On 07/07/2014 09:28 AM, Sylvain Munaut wrote:

 Hi,

 Yesterday I finally updated our cluster to firefly (latest stable
 commit) and what's fairly apparent is a much higher RAM usage on the
 OSDs:

 http://i.imgur.com/qw9iKSV.png

 Has anyone noticed the same? A sudden 25% increase in idle RAM usage
 is hard to ignore...



 So no one noticed the same?

 Or do people just consider a 25% RAM bump per release acceptable?


 I haven't seen that many Firefly production clusters yet. Most of them are
 still running Dumpling and are getting ready for the upgrade to Firefly.

 All the Firefly clusters I know of were installed fresh with Firefly,
 so it's hard to judge the effect of an upgrade.




 Cheers,

 Sylvain



 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on



Re: [ceph-users] emperor -> firefly : Significant increase in RAM usage

2014-07-07 Thread Sylvain Munaut
Hi,


 We actually saw a decrease in memory usage after upgrading to Firefly,
 though we did reboot the nodes after the upgrade while we had the
 maintenance window. This is with 216 OSDs total (32-40 per node):
 http://i.imgur.com/BC7RuXJ.png


Interesting. Is that cluster for RBD or RGW? My RBD OSDs are a bit
better behaved, but still showed this 25% bump in memory usage...



Here, the memory pretty much just grows continually.

This is the graph over the last year:

http://i.imgur.com/0NUFjpz.png

At the very beginning (~250 MB per process) those OSDs were empty,
freshly added. Then we changed the crushmap to map all of our RGW
pools to them; since then the memory just grows slowly, with a bump
at pretty much every update.
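
(If tcmalloc is just sitting on freed pages, part of a bump like that
can sometimes be handed back to the OS. Assuming the OSDs are built
with tcmalloc, something like:

    ceph tell osd.0 heap stats
    ceph tell osd.0 heap release

will show, and then release, memory the daemon has freed but not yet
returned to the system.)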

And this is a pretty small set of OSDs: there are only 8 OSD
processes across 4 nodes, storing barely 1 TB in 2.5 million objects,
split into 7 pools and 5376 PGs (some pools have size=3, others
size=2).

1.5 GB per OSD process seems a bit big to me.
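
As a rough sanity check, assuming an average replication factor of
about 2.5 across those pools: 5376 PGs x 2.5 copies / 8 OSDs is about
1680 PG copies per OSD, so 1.5 GB works out to roughly 0.9 MB per PG
copy, which still feels high.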


Cheers,

   Sylvain


Re: [ceph-users] emperor -> firefly : Significant increase in RAM usage

2014-07-07 Thread Gregory Farnum
We don't test explicitly for this, but I'm surprised to hear about a
jump of that magnitude. Do you have any more detailed profiling? Can
you generate some (with the tcmalloc heap dumps)?
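
A sketch of the usual procedure, assuming the OSD is linked against
tcmalloc (osd.0 and the dump path below are just examples):

    ceph tell osd.0 heap start_profiler
    ceph tell osd.0 heap dump
    ceph tell osd.0 heap stop_profiler
    google-pprof --text /usr/bin/ceph-osd \
        /var/log/ceph/osd.0.profile.0001.heap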
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

On Mon, Jul 7, 2014 at 3:03 AM, Sylvain Munaut
s.mun...@whatever-company.com wrote:
 Hi,


 We actually saw a decrease in memory usage after upgrading to Firefly,
 though we did reboot the nodes after the upgrade while we had the
 maintenance window. This is with 216 OSDs total (32-40 per node):
 http://i.imgur.com/BC7RuXJ.png


 Interesting. Is that cluster for RBD or RGW? My RBD OSDs are a bit
 better behaved, but still showed this 25% bump in memory usage...



 Here, the memory pretty much just grows continually.

 This is the graph over the last year:

 http://i.imgur.com/0NUFjpz.png

 At the very beginning (~250 MB per process) those OSDs were empty,
 freshly added. Then we changed the crushmap to map all of our RGW
 pools to them; since then the memory just grows slowly, with a bump
 at pretty much every update.

 And this is a pretty small set of OSDs: there are only 8 OSD
 processes across 4 nodes, storing barely 1 TB in 2.5 million
 objects, split into 7 pools and 5376 PGs (some pools have size=3,
 others size=2).

 1.5 GB per OSD process seems a bit big to me.


 Cheers,

Sylvain


[ceph-users] emperor -> firefly : Significant increase in RAM usage

2014-07-04 Thread Sylvain Munaut
Hi,


Yesterday I finally updated our cluster to firefly (latest stable
commit) and what's fairly apparent is a much higher RAM usage on the
OSDs:

http://i.imgur.com/qw9iKSV.png

Has anyone noticed the same? A sudden 25% increase in idle RAM usage
is hard to ignore...

Those OSDs are pretty much entirely dedicated to RGW data pools, FWIW.
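
(For anyone who wants to compare raw numbers rather than graphs,
per-OSD resident memory can be sampled with something like:

    ps -C ceph-osd -o pid,rss,args

where rss is reported in KiB.)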


Cheers,

Sylvain