jemalloc creates arenas across threads to reduce lock contention, regardless
of how busy the system is.
This causes memory usage to grow.
I think we need to look a bit more carefully at how we make use of:
pthread_create()
tcache
redzone
Etc...
I also think it's worth considering the chunk size.
There is a trade-off between performance and memory cost.
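The knobs above could in principle be exercised through jemalloc's
MALLOC_CONF environment variable before the OSD starts. A minimal sketch,
assuming a jemalloc 3.x/4.x build (option names differ in 5.x, where
lg_chunk was removed); the values are illustrative, not recommendations:

```shell
# Hedged sketch: tune jemalloc via MALLOC_CONF (names from jemalloc 3.x/4.x;
# verify against your build's documentation).
#   narenas  - cap the number of arenas (default scales with the CPU count)
#   tcache   - disable per-thread caches, trading speed for a smaller footprint
#   lg_chunk - log2 of the chunk size, e.g. 21 for 2MB chunks
export MALLOC_CONF="narenas:4,tcache:false,lg_chunk:21"
echo "$MALLOC_CONF"
```

Whether any of these actually helps the ~200MB/OSD gap would have to be
measured; they mainly trade allocation speed for footprint.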
I'm not sure how busy the system was.
There might have been a memory leak, i.e., blocks that were allocated with
malloc but never freed.
Did you check?
> it seems jemalloc is consuming ~200MB more memory/osd during IO run...
If I've missed anything, just point it out to me.
I'm really, really picky when it comes to performance tuning.
Shinobu
----- Original Message -----
From: "Somnath Roy" <[email protected]>
To: "Chad William Seys" <[email protected]>, "池信泽" <[email protected]>
Cc: [email protected]
Sent: Saturday, August 29, 2015 12:55:34 AM
Subject: Re: [ceph-users] RAM usage only very slowly decreases after cluster
recovery
Yeah, that means tcmalloc is probably caching those, as I suspected.
There is some discussion going on on that front, but unfortunately we
concluded to keep tcmalloc as the default; anybody who needs performance
should move to jemalloc.
One of the reasons is that jemalloc seems to be consuming ~200MB more memory
per OSD during an IO run...
But I think this is one of the serious tcmalloc issues we need to consider
as well. I posted these findings earlier on ceph-devel during my write-path
optimization investigation.
There are some settings in tcmalloc that should expedite this memory
release. I tried them, but they didn't work; I didn't dig further down that
route, though.
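For reference, one such knob is gperftools' release-rate setting; a hedged
sketch (the thread doesn't say which settings were tried, and the exact
effect depends on the gperftools version):

```shell
# Sketch of one tcmalloc setting that should speed up returning freed pages
# to the OS (gperftools; verify the name against your version).
# TCMALLOC_RELEASE_RATE: default 1.0; larger values make tcmalloc release
# unused pages to the kernel more aggressively.
export TCMALLOC_RELEASE_RATE=10
echo "tcmalloc release rate set to $TCMALLOC_RELEASE_RATE"
```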
Mark,
Did you observe similar tcmalloc behavior in your recovery experiment for
tcmalloc vs jemalloc?
Thanks & Regards
Somnath
-----Original Message-----
From: Chad William Seys [mailto:[email protected]]
Sent: Friday, August 28, 2015 7:58 AM
To: 池信泽
Cc: Somnath Roy; Haomai Wang; [email protected]
Subject: Re: [ceph-users] RAM usage only very slowly decreases after cluster
recovery
Thanks! 'ceph tell osd.* heap release' seems to have worked! Guess I'll
sprinkle it around my maintenance scripts.
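A minimal sketch of such a maintenance-script fragment; only the `ceph tell`
command itself comes from this thread, and the DRY_RUN guard is a
hypothetical addition so the fragment can be exercised without a live
cluster:

```shell
# Maintenance-script sketch: ask each OSD's allocator to hand freed pages
# back to the OS. With DRY_RUN set, only print the command; unset it to
# actually run against a cluster.
DRY_RUN=1
heap_release() {
    if [ -n "$DRY_RUN" ]; then
        echo "ceph tell osd.* heap release"
    else
        ceph tell 'osd.*' heap release
    fi
}
heap_release
```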
Somnath, is there a plan to make jemalloc the default in Ceph in the future?
Thanks!
Chad.