[ceph-users] tcmalloc use a lot of CPU

2015-08-17 Thread YeYin
Hi all, when I do a performance test with rados bench, I find that tcmalloc consumes a lot of CPU: Samples: 265K of event 'cycles', Event count (approx.): 104385445900 + 27.58% libtcmalloc.so.4.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans() + 15.25% libtcmalloc.so.4.1.0
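
For anyone wanting to reproduce this kind of observation, a minimal sketch (the pool name and the 60-second/16-thread bench parameters are arbitrary, and perf must be installed on the OSD host):

$ rados bench -p testpool 60 write -t 16                  # generate client write load
$ perf top -p $(pidof ceph-osd | awk '{print $1}')        # watch where one OSD burns CPU

If tcmalloc is the bottleneck, tcmalloc::CentralFreeList and tcmalloc::ThreadCache symbols should show up near the top, as in the samples quoted throughout this thread.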

Re: [ceph-users] tcmalloc use a lot of CPU

2015-08-17 Thread Alexandre DERUMIER
Hi, Is this phenomenon normal? Is there any idea about this problem? It's a known problem with tcmalloc (search the ceph mailing list). Starting the OSDs with the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=128M environment variable should help. Another way is to compile ceph with jemalloc instead of tcmalloc
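
For reference, a rough sketch of how that variable can be set; the file path depends on distribution and release (later messages in this archive mention /etc/default/ceph on Debian/Ubuntu and /etc/sysconfig/ceph on CentOS), so adjust for your platform:

# /etc/default/ceph or /etc/sysconfig/ceph -- environment file read by the ceph daemon unit/init files
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728    # 128 MB thread cache

# then restart the OSDs however your platform does it, e.g. on systemd hosts:
$ systemctl restart ceph-osd.target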

[ceph-users] Increase tcmalloc thread cache bytes - still recommended?

2018-07-19 Thread Robert Stanford
It seems that the Ceph community no longer recommends changing to jemalloc. However, this page also recommends doing what's in this email's subject: https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-story/ Is it still recommended to increase the tcmalloc thread cache bytes

Re: [ceph-users] tcmalloc use a lot of CPU

2015-08-17 Thread Mark Nelson
On 08/17/2015 07:03 AM, Alexandre DERUMIER wrote: Hi, Is this phenomenon normal? Is there any idea about this problem? It's a known problem with tcmalloc (search the ceph mailing list). Starting the OSDs with the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=128M environment variable should help. Note

[ceph-users] RE: tcmalloc use a lot of CPU

2015-08-17 Thread Межов Игорь Александрович
still have not got any positive results - TCMalloc usage is high. The usage is lowered to 10% when we disable CRC in messages, disable debug, and disable cephx auth, but this is of course not for production use. Also we got a different trace while performing FIO-RBD benchmarks on an ssd pool: --- 46,07

Re: [ceph-users] RE: tcmalloc use a lot of CPU

2015-08-17 Thread Luis Periquito
- TCMalloc usage is high. The usage is lowered to 10% when we disable CRC in messages, disable debug, and disable cephx auth, but this is of course not for production use. Also we got a different trace while performing FIO-RBD benchmarks on an ssd pool: --- 46,07% [kernel] [k

Re: [ceph-users] tcmalloc use a lot of CPU

2015-08-18 Thread Alexandre DERUMIER
Hi Mark, Yep! At least from what I've seen so far, jemalloc is still a little faster for 4k random writes, even compared to tcmalloc with the patch + 128MB thread cache. Should have some data soon (mostly just a reproduction of Sandisk and Intel's work). I will definitely switch to jemalloc from

[ceph-users] Ceph allocator and performance

2015-08-11 Thread Межов Игорь Александрович
faster, and yes, we got ~25k iops. We have low iowait (~1-3%), but surprisingly high user CPU activity (~70%). Perf top shows us that most calls are in the tcmalloc library: 19,61% libtcmalloc.so.4.2.2 [.] tcmalloc::CentralFreeList::FetchFromOneSpans(int, void**, void**) 15,53

Re: [ceph-users] Ceph allocator and performance

2015-08-11 Thread Jan Schermer
Hi, if you look in the archive you'll see I posted something similar about 2 months ago. You can try experimenting with 1) stock binaries - tcmalloc 2) LD_PRELOADed jemalloc 3) ceph recompiled with neither (glibc malloc) 4) ceph recompiled with jemalloc (?) We simply recompiled ceph
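
A rough sketch of option 2 under stated assumptions (Ubuntu library path, OSD id 0, jemalloc installed); note that later threads in this archive report LD_PRELOAD not taking effect when the binaries are built against tcmalloc, so verify what actually got mapped:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ceph-osd -i 0 -f        # run one OSD in the foreground
$ lsof -p $(pidof ceph-osd | awk '{print $1}') | grep -Ei 'jemalloc|tcmalloc'   # confirm which allocator is loaded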

[ceph-users] tcmalloc performance still relevant?

2018-07-17 Thread Robert Stanford
Looking here: https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-story/ I see that it was a good idea to change to JEMalloc. Is this still the case with up-to-date Linux and current Ceph?

Re: [ceph-users] Increase tcmalloc thread cache bytes - still recommended?

2018-07-19 Thread Gregory Farnum
I don't think that's a default recommendation — Ceph is doing more configuration of tcmalloc these days, tcmalloc has resolved a lot of bugs, and that was only ever a thing that mattered for SSD-backed OSDs anyway. -Greg On Thu, Jul 19, 2018 at 5:50 AM Robert Stanford wrote: > > It

[ceph-users] using jemalloc in trusty

2016-05-23 Thread Luis Periquito
tcmalloc: I still see the "tcmalloc::CentralFreeList::FetchFromSpans()" and its accompanying lines in perf top. Also from an lsof I can see the tcmalloc libraries being used, but not the jemalloc ones... Does anyone know what I'm doing wrong? I'm using the standard binaries from the r

Re: [ceph-users] Build version question

2016-11-29 Thread Brad Hubbard
d in this build? > For instance tcmalloc? tcmalloc is dynamically linked so it will use the version of the shared library installed on the host at runtime. $ ldd bin/ceph-osd|grep tcmalloc libtcmalloc.so.4 => /lib64/libtcmalloc.so.4 (0x7fc60daeb000) -- HTH, Brad

Re: [ceph-users] High Load and High Apply Latency

2017-12-20 Thread John Petrini
Hello, Looking at perf top it looks as though Ceph is spending most of its CPU cycles on tcmalloc. Looking around online I found that this is a known issue, and in fact I found this guide on how to increase the tcmalloc thread cache size: https://swamireddy.wordpress.com/2017/01/27/increase

[ceph-users] squeeze tcmalloc memory leak

2013-06-19 Thread Sage Weil
Hi all- Just a quick note to Debian squeeze users: in the course of debugging ceph-mon memory growth over time, we've determined that (at least in Stefan Priebe's environment) the tcmalloc (google perftools) library on Debian squeeze is leaking memory. If you are a Ceph user on squeeze

Re: [ceph-users] strange benchmark problem : restarting osd daemon improve performance from 100k iops to 300k iops

2015-04-24 Thread Milosz Tanski
In my experience jemalloc is much more proactive at returning memory to the OS, whereas tcmalloc in the default setting is much greedier about keeping/reusing memory. jemalloc tends to do better if your application benefits from a large page cache. Also, jemalloc's aggressive behavior is better if you're

Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-08-28 Thread Somnath Roy
Yeah, that means tcmalloc is probably caching those as I suspected. There is some discussion going on on that front, but unfortunately we concluded to keep tcmalloc as the default; if somebody needs performance they should move to jemalloc. One of the reasons is that jemalloc seems to consume ~200MB

Re: [ceph-users] Segfault in libtcmalloc.so.4.2.2

2016-05-13 Thread Somnath Roy
I am not sure about Debian, but for Ubuntu the latest tcmalloc is not incorporated until 3.16.0.50. You can use the attached program to detect whether your tcmalloc is okay or not. Do this: $ g++ -o gperftest tcmalloc_test.c -ltcmalloc $ TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=67108864 ./gperftest

Re: [ceph-users] jemalloc / Bluestore

2018-07-05 Thread Uwe Sauter
to tcmalloc. You know, so much to test and benchmark, so little time… Regards, Uwe Am 05.07.2018 um 19:08 schrieb Mark Nelson: Hi Uwe, As luck would have it we were just looking at memory allocators again and ran some quick RBD and RGW tests that stress memory allocation: https

Re: [ceph-users] Increase tcmalloc thread cache bytes - still recommended?

2018-07-19 Thread Mark Nelson
to increasing it over default though for SSDs. Mark On 07/19/2018 01:35 PM, Gregory Farnum wrote: I don't think that's a default recommendation — Ceph is doing more configuration of tcmalloc these days, tcmalloc has resolved a lot of bugs, and that was only ever a thing that mattered for SSD-backed OSDs

[ceph-users] RE: RE: tcmalloc use a lot of CPU

2015-08-18 Thread Межов Игорь Александрович
70% load on OSD drives - perf top shows 7,53% libtcmalloc.so.4.2.2 [.] tcmalloc::SLL_Next(void*) 1,86% libtcmalloc.so.4.2.2 [.] tcmalloc::CentralFreeList::FetchFromOneSpans(int, void**, void**) 1,51% libpthread-2.19.so

Re: [ceph-users] [luminous 12.2.2] Cluster write performance degradation problem(possibly tcmalloc related)

2017-12-26 Thread shadow_lin
I had disabled scrub before the test. 2017-12-27 shadow_lin From: Webert de Souza Lima <webert.b...@gmail.com> Sent: 2017-12-22 20:37 Subject: Re: [ceph-users] [luminous 12.2.2] Cluster write performance degradation problem (possibly tcmalloc related) To: "ceph-users"<ceph-users

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Jan Schermer
There were essentially three things we had to do for such a drastic drop: 1) recompile Ceph --without-tcmalloc 2) pin the OSDs to a specific NUMA zone - we had this for a long time and it really helped 3) migrate the OSD memory to the correct CPU with migratepages - we will use cgroups
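
A hedged sketch of steps 2 and 3 for a single OSD process; the PID lookup, CPU range, and NUMA node numbers are illustrative and depend entirely on your host's topology:

$ numactl --hardware                              # inspect the NUMA layout first
$ OSD_PID=$(pidof ceph-osd | awk '{print $1}')
$ taskset -acp 0-11 $OSD_PID                      # pin all of the OSD's threads to node 0's cores
$ migratepages $OSD_PID 1 0                       # move its already-allocated pages from node 1 to node 0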

Re: [ceph-users] [luminous 12.2.2] Cluster write performance degradation problem(possibly tcmalloc related)

2017-12-22 Thread Webert de Souza Lima
On Thu, Dec 21, 2017 at 12:52 PM, shadow_lin <shadow_...@163.com> wrote: > > After 18:00 suddenly the write throughput dropped and the osd latency > increased. TCmalloc started reclaiming the page heap freelist much more > frequently. All of this happened very fast and every osd

[ceph-users] Luminous and jemalloc

2018-03-23 Thread Xavier Trilla
Hi, Does anybody have information about using jemalloc with Luminous? From what I've seen on the mailing list and online, BlueStore crashes when using jemalloc. We've been running ceph with jemalloc since Hammer, as performance with tcmalloc was terrible (we run a quite big all-SSD cluster

Re: [ceph-users] jemalloc / Bluestore

2018-07-05 Thread Igor Fedotov
Hi Uwe, AFAIK jemalloc isn't recommended for use with BlueStore anymore. tcmalloc is the right way so far. Thanks, Igor On 7/5/2018 4:08 PM, Uwe Sauter wrote: Hi all, is using jemalloc still recommended for Ceph? There are multiple sites (e.g. https://ceph.com/geen-categorie/the-ceph

Re: [ceph-users] tcmalloc performance still relevant?

2018-07-17 Thread Uwe Sauter
I asked a similar question about 2 weeks ago, subject "jemalloc / Bluestore". Have a look at the archives Regards, Uwe Am 17.07.2018 um 15:27 schrieb Robert Stanford: > Looking here: > https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-st

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Jan Schermer
We already had migratepages in place before we disabled tcmalloc. It didn’t do much. Disabling tcmalloc made an immediate difference, but there were still spikes and the latency wasn’t that great (CPU usage was). Migrating memory helped a lot after that - it didn’t help (at least

Re: [ceph-users] Switching from tcmalloc

2015-06-26 Thread Alexandre DERUMIER
same as tcmalloc, maybe a little bit slower, but it's marginal. For reads, I was around 250k iops per osd with jemalloc vs 260k iops with tcmalloc. - Original Message - From: Mark Nelson mnel...@redhat.com To: ceph-users ceph-users@lists.ceph.com Sent: Thursday, 25 June 2015 18:25:26 Subject: Re

[ceph-users] Switching from tcmalloc

2015-06-24 Thread Jan Schermer
Can you guess when we did that? Still on dumpling, btw... http://www.zviratko.net/link/notcmalloc.png Jan

[ceph-users] OSD crash

2015-09-08 Thread Alex Gorbachev
40] 3: (tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned long, int)+0x103) [0x7faea067fac3] 4: (tcmalloc::ThreadCache::ListTooLong(tcmalloc::ThreadCache::FreeList*, unsigned long)+0x1b) [0x7faea067fb7b] 5: (operator delete(void*)+0x1f8) [0x7faea068ef68] 6: (std::_R

[ceph-users] jemalloc-enabled packages on trusty?

2016-01-20 Thread Zoltan Arnold Nagy
Hi, Has someone published prebuilt debs for trusty from hammer with jemalloc compiled in instead of tcmalloc, or does everybody need to compile it themselves? :-) Cheers, Zoltan

Re: [ceph-users] Segfault in libtcmalloc.so.4.2.2

2016-05-13 Thread Somnath Roy
What is the exact kernel version? Ubuntu has a new tcmalloc incorporated from the 3.16.0.50 kernel onwards. If you are using an older kernel than this, it is better to upgrade the kernel or try building the latest tcmalloc and see if this still happens there. Ceph is not packaging tcmalloc, it is using

[ceph-users] Build version question

2016-11-29 Thread McFarland, Bruce
Using the ceph version string, for example ceph version 10.2.2-118-g894a5f8 (894a5f8d878d4b267f80b90a4bffce157f2b4ba7), how would I determine the versions of the various dependencies used in this build? For instance tcmalloc? Thanks, Bruce

[ceph-users] jemalloc / Bluestore

2018-07-05 Thread Uwe Sauter
Hi all, is using jemalloc still recommended for Ceph? There are multiple sites (e.g. https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-story/) from 2015 where jemalloc is praised for higher performance but I found a bug report that Bluestore crashes when used with jemalloc

Re: [ceph-users] jemalloc / Bluestore

2018-07-05 Thread Mark Nelson
Hi Uwe, As luck would have it we were just looking at memory allocators again and ran some quick RBD and RGW tests that stress memory allocation: https://drive.google.com/uc?export=download=1VlWvEDSzaG7fE4tnYfxYtzeJ8mwx4DFg The gist of it is that tcmalloc looks like it's doing pretty well

Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-09-09 Thread Mark Nelson
On 08/28/2015 10:55 AM, Somnath Roy wrote: Yeah, that means tcmalloc is probably caching those as I suspected. There is some discussion going on on that front, but unfortunately we concluded to keep tcmalloc as the default; if somebody needs performance they should move to jemalloc. One

Re: [ceph-users] Restarting OSD leads to lower CPU usage

2015-06-11 Thread Somnath Roy
Yeah! Then it is the tcmalloc issue. If you are using the version that comes with the OS, the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES setting won't do anything. Try building the latest tcmalloc, set the env variable, and see if it improves or not. Also, you can try the latest ceph built with jemalloc

Re: [ceph-users] ceph-mon cpu usage

2015-07-23 Thread Gregory Farnum
Spans in use MALLOC: 21 Thread heaps in use MALLOC: 8192 Tcmalloc page size after that I ran the heap release and it went back to normal

Re: [ceph-users] Switching from tcmalloc

2015-06-25 Thread Jan Schermer
further. Maybe we could preload it now that tcmalloc support was disabled - it is supposed to be a drop-in replacement for glibc malloc() after all... Our memory usage stays exactly the same even after 2 days in production; only virtual memory jumped up immediately. Jan P.S. pinning scripts

Re: [ceph-users] Switching from tcmalloc

2015-06-25 Thread Dzianis Kahanovich
all related to distro. Mark Nelson writes: It would be really interesting if you could give jemalloc a try. Originally tcmalloc was used to get around some serious memory fragmentation issues in the OSD. You can read the original bug tracker entry from 5 years ago here: http://tracker.ceph.com

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Ben Hines
, Jan Schermer j...@schermer.cz wrote: There were essentially three things we had to do for such a drastic drop: 1) recompile Ceph --without-tcmalloc 2) pin the OSDs to a specific NUMA zone - we had this for a long time and it really helped 3) migrate the OSD memory to the correct CPU

Re: [ceph-users] ceph-mon cpu usage

2015-07-23 Thread Luis Periquito
Hi Greg, I've been looking at the tcmalloc issues, but they did seem to affect OSDs, and I do notice it in heavy read workloads (even after the patch and increasing TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728). This is affecting the mon process though. Looking at perf top I'm getting most

Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-09-05 Thread Shinobu Kinjo
t;cws...@physics.wisc.edu>, "池信泽" <xmdx...@gmail.com> Cc: ceph-users@lists.ceph.com Sent: Saturday, August 29, 2015 12:55:34 AM Subject: Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery Yeah, that means tcmalloc probably caching those as I suspected

Re: [ceph-users] ceph-mon high cpu usage, and response slow

2015-11-30 Thread Joao Eduardo Luis
ernel] [k] file_read_actor > 1.05% libc-2.15.so [.] 0x0015f24f > 0.92% libtcmalloc.so.0.1.0 [.] operator delete[](void*) > 0.59% libtcmalloc.so.0.1.0 [.] > tcmalloc::PageHeap::MergeIntoFreeList(tcmalloc::Span*) > 0.49% [kernel] [k

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Did you see what the effect of just restarting the OSDs was, before switching away from tcmalloc? I've noticed that there is usually a good drop for us just from restarting them. I don't think it is usually this drastic. - Robert LeBlanc GPG

Re: [ceph-users] Restarting OSD leads to lower CPU usage

2015-06-11 Thread Jan Schermer
Hi, I looked at it briefly before leaving, tcmalloc was at the top. I can provide a full listing tomorrow if it helps. 12.80% libtcmalloc.so.4.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans() 8.40% libtcmalloc.so.4.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc

Re: [ceph-users] using jemalloc in trusty

2016-05-24 Thread Alexandre DERUMIER
>>Is it true for Xenial too or only for Trusty? I don't want to rebuild Jewel on >>xenial hosts... Yes, for xenial (and debian wheezy/jessie too). I don't know why they left LD_PRELOAD commented out in /etc/default/ceph, because it really doesn't do anything if tcmalloc is present. y
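
For anyone checking their own trusty/xenial hosts, a sketch of the relevant line and a quick verification (the library path is the Ubuntu one, and the exact contents of /etc/default/ceph vary by release):

# /etc/default/ceph -- uncomment to ask the unit/init files to preload jemalloc
# LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1

$ lsof -p $(pidof ceph-osd | awk '{print $1}') | grep -Ei 'malloc'   # shows which allocator libraries are actually mapped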

Re: [ceph-users] Ceph on different OS version

2016-09-23 Thread Jaroslaw Owsiewski
aven't seen real issues, but a few which I could think of which > *potentially* might be a problem: > > - Different tcmalloc version > - Different libc versions > - Different kernel behavior with TCP connections > > Now, again, I haven't seen any problems, but these are the ones I

Re: [ceph-users] How to release Hammer osd RAM when compiled with jemalloc

2016-12-13 Thread Sage Weil
On Tue, 13 Dec 2016, Dong Wu wrote: > Hi all, > I have a cluster with nearly 1000 osds, and each osd already > occupies 2.5G physical memory on average, which causes 90% > memory usage on each host. When using tcmalloc, we can use "ceph tell osd.* release" > to release
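
The heap commands being referred to here are available when the daemons are linked against tcmalloc; a sketch of the single-daemon and cluster-wide forms (osd.0 is just an example id):

$ ceph tell osd.0 heap stats       # ask the allocator for its own view of the heap
$ ceph tell osd.0 heap release     # return freed-but-cached pages to the OS
$ ceph tell osd.\* heap release    # same thing across all OSDs (escape the * from the shell)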

Re: [ceph-users] Segfault in libtcmalloc.so.4.2.2

2016-05-13 Thread David
libtcmalloc-minimal4:amd64/jessie 2.2.1-0.2 uptodate > On 13 May 2016 at 16:02, Somnath Roy <somnath@sandisk.com> wrote: > > What is the exact kernel version? > Ubuntu has a new tcmalloc incorporated from the 3.16.0.50 kernel onwards. If you > are using an older kernel than this bet

Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)

2016-07-11 Thread Yan, Zheng
On Tue, Jul 12, 2016 at 1:07 AM, Gregory Farnum <gfar...@redhat.com> wrote: > Oh, is this one of your custom-built packages? Are they using > tcmalloc? That difference between VSZ and RSS looks like a glibc > malloc problem. > -Greg > ceph-fuse at http://download.ceph.com

Re: [ceph-users] using jemalloc in trusty

2016-05-23 Thread Luis Periquito
loc in trusty > > I've been running some tests with jewel, and wanted to enable jemalloc. > I noticed that the new jewel release now properly loads /etc/default/ceph and > has an option to use jemalloc. > > I've installed jemalloc and enabled the LD_PRELOAD option, however doing so

Re: [ceph-users] using jemalloc in trusty

2016-05-23 Thread Somnath Roy
eems that it's still using tcmalloc: I still see the "tcmalloc::CentralFreeList::FetchFromSpans()" and its accompanying lines in perf top. Also from an lsof I can see the tcmalloc libraries being used, but not the jemalloc ones... Does anyone know what I'm doing wrong? I'm usin

Re: [ceph-users] Luminous and jemalloc

2018-03-23 Thread Alexandre DERUMIER
Hi, I think it's no longer a problem since the async messenger is the default. The difference between jemalloc and tcmalloc is minimal now. Regards, Alexandre - Original Message - From: "Xavier Trilla" <xavier.tri...@silicontower.net> To: "ceph-users" <ceph-users@lis

[ceph-users] Memory Allocators and Ceph

2015-05-27 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 With all the talk of tcmalloc and jemalloc, I decided to do some testing of the different memory allocation technologies between KVM and Ceph. These tests were done on a pre-production system, so I've tried to remove some of the variance with many runs

Re: [ceph-users] Memory Allocators and Ceph

2015-05-27 Thread Mark Nelson
On 05/27/2015 12:40 PM, Robert LeBlanc wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA256 With all the talk of tcmalloc and jemalloc, I decided to do some testing of the different memory allocation technologies between KVM and Ceph. These tests were done on a pre-production system, so I've

Re: [ceph-users] using jemalloc in trusty

2016-05-24 Thread Alexandre DERUMIER
>>Given the messages in this thread, it seems that the jemalloc library isn't >>actually being used? But if so, why would it be loaded (and why would >>tcmalloc *also* be loaded)? I think this is because rocksdb is statically linked and uses tcmalloc, and from the jemalloc doc:

Re: [ceph-users] Restarting OSD leads to lower CPU usage

2015-06-11 Thread Jan Schermer
more performance, but we’re not ready for that yet either (but working on it :)) I’d expect the tcmalloc issue to manifest almost immediately? There are thousands of threads, hundreds of connections - surely it would manifest sooner? People were seeing regressions with just two clients

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-26 Thread Matthew Anderson
] wait_sb_inodes 0.55% ceph-osd ceph-osd [.] leveldb::Block::Iter::Valid() const 0.51% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache:: 0.50% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-27 Thread Samuel Just
[kernel.kallsyms] [k] wait_sb_inodes 0.55% ceph-osd ceph-osd [.] leveldb::Block::Iter::Valid() const 0.51% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache:: 0.50% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Jan Schermer
be solved with this. We graph latency, outstanding operations, you name it - I can share a few graphs with you tomorrow if I get the permission from my boss :-) Makes for a nice comparison with real workload to have one node tcmalloc-free and the others running vanilla ceph-osd. I guess I can

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 From what I understand, you probably got most of your reduction from co-locating your memory to the right NUMA nodes. tcmalloc/jemalloc should be much higher in performance because of how they hold memory in thread pools (less locking to allocate

Re: [ceph-users] ceph-mon cpu usage

2015-07-24 Thread Jan Schermer
Periquito periqu...@gmail.com wrote: Hi Greg, I've been looking at the tcmalloc issues, but they did seem to affect OSDs, and I do notice it in heavy read workloads (even after the patch and increasing TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728). This is affecting the mon process though

Re: [ceph-users] ceph-mon cpu usage

2015-07-24 Thread Mark Nelson
ceph-osd … The last time we tried it segfaulted after a few minutes, so YMMV and be careful. Jan On 23 Jul 2015, at 18:18, Luis Periquito periqu...@gmail.com mailto:periqu...@gmail.com wrote: Hi Greg, I've been looking at the tcmalloc issues, but did seem to affect

Re: [ceph-users] ceph-mon cpu usage

2015-07-24 Thread Luis Periquito
/libjemalloc.so.1 ceph-osd … The last time we tried it segfaulted after a few minutes, so YMMV and be careful. Jan On 23 Jul 2015, at 18:18, Luis Periquito periqu...@gmail.com wrote: Hi Greg, I've been looking at the tcmalloc issues, but did seem to affect osd's, and I do notice it in heavy read

Re: [ceph-users] using jemalloc in trusty

2016-05-23 Thread Somnath Roy
using jemalloc in trusty > > I've been running some tests with jewel, and wanted to enable jemalloc. > I noticed that the new jewel release now properly loads /etc/default/ceph and > has an option to use jemalloc. > > I've installed jemalloc and enabled the LD_PRELOAD option, howeve

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-27 Thread Matthew Anderson
[.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned long, int) 0.51% ceph-osd [kernel.kallsyms] [k] wait_sb_inodes 0.47% ceph-osd libpthread-2.15.so [.] pthread_mutex_unlock 0.47% ceph-osd libstdc++.so.6.0.16[.] std::string::assign

Re: [ceph-users] OSD crash

2015-09-22 Thread Alex Gorbachev
80).fault with nothing to send, going to standby > > 2015-09-07 14:56:16.948998 7fae643e8700 -1 *** Caught signal > (Segmentation > > fault) ** > > in thread 7fae643e8700 > > > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3) > > 1: /usr/bin/ceph-osd()

Re: [ceph-users] OSD crash

2015-09-22 Thread Brad Hubbard
09-07 14:56:16.948998 7fae643e8700 -1 *** Caught signal (Segmentation > fault) ** > in thread 7fae643e8700 > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3) > 1: /usr/bin/ceph-osd() [0xacb3ba] > 2: (()+0x10340) [0x7faea044e340] > 3: > (tcmalloc::ThreadCache::Re

Re: [ceph-users] Uneven CPU usage on OSD nodes

2015-03-23 Thread Gregory Farnum
-cpu' nodes have tcmalloc calls able to explain the cpu difference. We don't see them on 'low-cpu' nodes: 12,15% libtcmalloc.so.4.1.2 [.] tcmalloc::CentralFreeList::FetchFromSpans Huh. The tcmalloc (memory allocator) workload should be roughly the same across all nodes, especially

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Jan Schermer
...@lists.ceph.com] On Behalf Of Jan Schermer Sent: Wednesday, June 24, 2015 10:54 AM To: Ben Hines Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Switching from tcmalloc We did, but I don’t have the numbers. I have lots of graphs, though. We were mainly trying to solve the CPU usage, since our

Re: [ceph-users] Switching from tcmalloc

2015-06-24 Thread Somnath Roy
-users-boun...@lists.ceph.com] On Behalf Of Jan Schermer Sent: Wednesday, June 24, 2015 10:54 AM To: Ben Hines Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Switching from tcmalloc We did, but I don’t have the numbers. I have lots of graphs, though. We were mainly trying to solve the CPU

Re: [ceph-users] memory stats

2015-10-06 Thread Gregory Farnum
On Mon, Oct 5, 2015 at 10:40 PM, Serg M <it.se...@gmail.com> wrote: > What difference between memory statistics of "ceph tell {daemon}.{id} heap > stats" Assuming you're using tcmalloc (by default you are) this will get information straight from the memory allocator about
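
A quick way to put that side by side with what the kernel reports for the same daemon (osd.0 and the pidof lookup are illustrative):

$ ceph tell osd.0 heap stats                                     # allocator-level view (tcmalloc)
$ grep VmRSS /proc/$(pidof ceph-osd | awk '{print $1}')/status   # kernel-level resident set for one OSD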

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-09 Thread Jan Schermer
The memory gets used for additional PGs on the OSD. If you were to "swap" PGs between two OSDs, you'll get memory wasted on both of them because tcmalloc doesn't release it.* It usually gets stable after a few days, even during backfills, so it does get reused if needed. If for some r

Re: [ceph-users] BUG ON librbd or libc

2016-08-23 Thread Jason Dillaman
There was almost the exact same issue on the master branch right after the switch to cmake because tcmalloc was incorrectly (and partially) linked into librados/librbd. What occurred was that the std::list within ceph::buffer::ptr was allocated via tcmalloc but was freed within librados/librbd via

Re: [ceph-users] How to release Hammer osd RAM when compiled with jemalloc

2016-12-13 Thread Sage Weil
>> occupied 2.5G physical memory on average, which causes 90% > >> memory usage on each host. When using tcmalloc, we can use "ceph tell osd.* release" > >> to release unused memory, but in my cluster ceph is built with > >> jemalloc, so we can't use "ceph tell os

[ceph-users] [luminous 12.2.2] Cluster write performance degradation problem(possibly tcmalloc related)

2017-12-21 Thread shadow_lin
, the patterns are identical) https://pasteboard.co/GZfmfzo.png Graph of osd perf https://pasteboard.co/GZfmZNx.png There are some interesting findings in the graph. After 18:00 the write throughput suddenly dropped and the osd latency increased. TCmalloc started

Re: [ceph-users] Memory Allocators and Ceph

2015-05-27 Thread Haomai Wang
On Thu, May 28, 2015 at 1:40 AM, Robert LeBlanc rob...@leblancnet.us wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA256 With all the talk of tcmalloc and jemalloc, I decided to do some testing of the different memory allocation technologies between KVM and Ceph. These tests were done

Re: [ceph-users] Memory Allocators and Ceph

2015-05-27 Thread Robert LeBlanc
, May 27, 2015 at 11:59 AM, Haomai Wang wrote: On Thu, May 28, 2015 at 1:40 AM, Robert LeBlanc wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA256 With all the talk of tcmalloc and jemalloc, I decided to do some testing of the different memory allocation technologies between KVM and Ceph

Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-13 Thread Haomai Wang
ync messenger code base, do you have an > explanation of the behavior (like good performance with default tcmalloc) > Mark reported? Is it using a lot fewer threads overall than Simple? Originally the async messenger mainly wanted to solve the high thread count problem which limited the ceph clust

Re: [ceph-users] Ceph performance calculator

2016-07-26 Thread 席智勇
Me there are just too many factors that can have a huge impact > on latency and performance. Look at the tcmalloc/jemalloc threadcache > results from last year and how huge of a performance impact that alone can > have for example. You might think: "ok, that's one parameter that has a

Re: [ceph-users] Ceph performance calculator

2016-07-25 Thread Mark Nelson
performance is low enough and mostly static enough that minor code changes and new drive models probably won't ruin the model. For SSD/NVMe there are just too many factors that can have a huge impact on latency and performance. Look at the tcmalloc/jemalloc threadcache results from last year and how huge

Re: [ceph-users] monitor always seg fault after first restart

2014-06-18 Thread Joao Eduardo Luis
(a38fe1169b6d2ac98b427334c12d7cf81f809b74) 1: /usr/bin/ceph-mon() [0x89419d] 2: (()+0xf7c0) [0x7f202887e7c0] 3: (()+0x61c0) [0x7f2026b621c0] 4: (_ULx86_64_step()+0x9) [0x7f2026b632a9] 5: (()+0x393a5) [0x7f2028ac53a5] 6: (GetStackTrace(void**, int, int)+0xe) [0x7f2028ac4d1e] 7: (tcmalloc::PageHeap

Re: [ceph-users] monitor always seg fault after first restart

2014-06-18 Thread Jan Kalcic
)+0xe) [0x7f2028ac4d1e] 7: (tcmalloc::PageHeap::GrowHeap(unsigned long)+0x10f) [0x7f2028ab4b5f] 8: (tcmalloc::PageHeap::New(unsigned long)+0xbb) [0x7f2028ab52ab] 9: (tcmalloc::CentralFreeList::Populate()+0x7b) [0x7f2028ab30ab] 10: (tcmalloc::CentralFreeList::FetchFromOneSpansSafe(int, void

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-27 Thread Oliver Daudey
::Block::Iter::Valid() const 0.51% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache:: 0.50% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans() 0.47% ceph-osd libstdc++.so.6.0.16[.] 0x9ebc8 0.46

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-27 Thread Mark Nelson
ceph-osd [.] leveldb::Block::Iter::Valid() const 0.51% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache:: 0.50% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans() 0.47% ceph-osd libstdc++.so

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-27 Thread Ian Colle
libtcmalloc.so.0.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache:: 0.50% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans() 0.47% ceph-osd libstdc++.so.6.0.16[.] 0x9ebc8 0.46% ceph-osd libc-2.15.so [.] vfprintf

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-27 Thread Oliver Daudey
0.55% ceph-osd ceph-osd [.] leveldb::Block::Iter::Valid() const 0.51% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache:: 0.50% ceph-osd libtcmalloc.so.0.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans

Re: [ceph-users] Switching from tcmalloc

2015-06-25 Thread Dzianis Kahanovich
I use zone_reclaim_mode=7 all the time, set on boot (I use qemu with NUMA memory locking on the same nodes, so I need to keep proportional RAM use). Now I am trying to use migratepages for all ceph daemons (with tcmalloc built with -DTCMALLOC_SMALL_BUT_SLOW - to avoid OSD memory abuse). Here is my script (migrate to node

Re: [ceph-users] Issues compiling Ceph (master branch) on Debian Wheezy (armhf)

2014-07-25 Thread Owen Synge
Dear Deven, Another solution is to compile leveldb and ceph without tcmalloc support :) Ceph and leveldb work just fine without gperftools, and I have yet to benchmark how much performance benefit you get from google-perftools' tcmalloc as a replacement for glibc malloc. Best regards Owen

Re: [ceph-users] Uneven CPU usage on OSD nodes

2015-03-25 Thread Somnath Roy
Hi Fredrick, See my response inline. Thanks Regards Somnath From: f...@univ-lr.fr [mailto:f...@univ-lr.fr] Sent: Wednesday, March 25, 2015 8:07 AM To: Somnath Roy Cc: Ceph Users Subject: Re: [ceph-users] Uneven CPU usage on OSD nodes Hi Somnath, Thanks, the tcmalloc env variable trick

Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-08-27 Thread Somnath Roy
Slow memory release could also be because of tcmalloc. Tcmalloc doesn't release memory the moment the application issues a 'delete'; it caches it internally for future use. If it is not a production cluster and you have spare time to reproduce this, I would suggest building the Ceph code

Re: [ceph-users] using jemalloc in trusty

2016-05-24 Thread Joshua M. Boniface
253,0 223936 > 6829 /usr/lib/x86_64-linux-gnu/libjemalloc.so.1 Given the messages in this thread, it seems that the jemalloc library isn't actually being used? But if so, why would it be loaded (and why would tcmalloc *also* be loaded)? And if we still need to add explicit suppor

Re: [ceph-users] libjemalloc.so.1 not used?

2017-03-27 Thread Alexandre DERUMIER
you need to recompile ceph with jemalloc, without having the tcmalloc dev libraries installed. LD_PRELOAD has never worked for jemalloc and ceph - Original Message - From: "Engelmann Florian" <florian.engelm...@everyware.ch> To: "ceph-users" <ceph-users@lists.ceph.com> Sent
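
A rough sketch of such a rebuild on a Debian-style system; the --with-jemalloc flag is an assumption that depends on the Ceph release (autotools-based trees), so check ./configure --help for your version:

$ apt-get remove libgoogle-perftools-dev             # ensure configure cannot find the tcmalloc headers
$ apt-get install libjemalloc-dev
$ ./autogen.sh
$ ./configure --without-tcmalloc --with-jemalloc     # --with-jemalloc: verify it exists for your release
$ make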

Re: [ceph-users] High Load and High Apply Latency

2018-02-17 Thread Marc Roos
But that is already the default, no? (on CentOS 7 rpms) [@c03 ~]# cat /etc/sysconfig/ceph # /etc/sysconfig/ceph # # Environment file for ceph daemon systemd unit files. # # Increase tcmalloc cache size TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -Original Message- From: John

Re: [ceph-users] ARM v8

2014-12-22 Thread Loic Dachary
Hi, You will need to compile them from source. Depending on the distribution you may also want to compile google perftools and tcmalloc: I'm not sure why exactly, but that has been reported to help. I strongly advise that you run make check (or run-make-check.sh if you compile master

Re: [ceph-users] ARM v8

2014-12-22 Thread Ken Dreyer
On 12/22/2014 09:28 AM, Loic Dachary wrote: You will need to compile them from source. Depending on the distribution you may also want to compile google perftools and tcmalloc: I'm not sure why exactly, but that has been reported to help. I strongly advise that you run make check (or run-make

[ceph-users] Ceph erasure code benchmark failing

2015-07-01 Thread Nitin Saxena
--with-debug --without-tcmalloc --without-fuse; make Am I missing something here? Thanks in advance Nitin

Re: [ceph-users] Ceph erasure code benchmark failing

2015-07-01 Thread David Casier AEVOO
have checked out the master branch and compiled ceph with the following steps: ./autogen.sh; ./configure --with-debug --without-tcmalloc --without-fuse; make Am I missing something here? Thanks in advance Nitin
