It seems that starting the OSD with:

TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=128M /usr/bin/ceph-osd

fixes it.

I don't know if this is the right way to do it?
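
If this does turn out to be the right knob, one way to make it persistent (just a sketch on my side, assuming systemd-managed OSDs; the drop-in path and the explicit byte value are my own choices, not something confirmed in this thread) would be a unit drop-in such as /etc/systemd/system/ceph-osd@.service.d/tcmalloc.conf:

[Service]
# 128 MiB tcmalloc thread cache, written out in bytes (128 * 1024 * 1024)
Environment=TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728

followed by systemctl daemon-reload and restarting the ceph-osd@* units.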



----- Original Message -----
From: "aderumier" <aderum...@odiso.com>
To: "ceph-devel" <ceph-devel@vger.kernel.org>, "Somnath Roy" <somnath....@sandisk.com>
Sent: Monday, 27 April 2015 14:06:22
Subject: Hitting tcmalloc bug even with patch applied

Hi,

I'm hitting the tcmalloc bug even with the patch applied.
It mainly occurs when I benchmark with fio using a lot of jobs (20-40 jobs); an example invocation is below.
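
For reference, the sort of fio run that triggers it looks roughly like this (purely illustrative; the rbd pool/image names, block size, and iodepth are placeholders, not my exact job file):

fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=40 \
    --runtime=60 --time_based --group_reporting --name=tcmalloc-repro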

Do I need to tune something in the OSD environment variables?


I double-checked it with:

# g++ -o gperftest gperftest.c -ltcmalloc
# export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=67108864 
# ./gperftest 
Tcmalloc OK! Internal and Env cache size are same:67108864 
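
For context, gperftest.c is essentially just a check that the environment variable actually reached tcmalloc. A minimal reconstruction along those lines (not the original source; it assumes the gperftools MallocExtension API and its tcmalloc.max_total_thread_cache_bytes property) could look like:

// build: g++ -o gperftest gperftest.cc -ltcmalloc
#include <cstdio>
#include <cstdlib>
#include <gperftools/malloc_extension.h>

int main() {
    // What the environment asked for.
    const char* env = getenv("TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES");
    if (!env) {
        fprintf(stderr, "TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES is not set\n");
        return 1;
    }
    size_t env_bytes = strtoull(env, NULL, 10);

    // What tcmalloc actually applied internally.
    size_t internal_bytes = 0;
    if (!MallocExtension::instance()->GetNumericProperty(
            "tcmalloc.max_total_thread_cache_bytes", &internal_bytes)) {
        fprintf(stderr, "failed to query tcmalloc property\n");
        return 1;
    }

    if (internal_bytes == env_bytes) {
        printf("Tcmalloc OK! Internal and Env cache size are same:%zu\n", internal_bytes);
        return 0;
    }
    printf("Mismatch: internal=%zu env=%zu\n", internal_bytes, env_bytes);
    return 1;
}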


perf top 
------- 
10.04% libtcmalloc.so.4.1.2 [.] tcmalloc::ThreadCache::ReleaseToCentralCache 
8.19% libtcmalloc.so.4.1.2 [.] tcmalloc::CentralFreeList::FetchFromSpans 
3.89% libtcmalloc.so.4.1.2 [.] tcmalloc::CentralFreeList::ReleaseToSpans 
2.04% libtcmalloc.so.4.1.2 [.] tcmalloc::CentralFreeList::ReleaseListToSpans 
1.79% libtcmalloc.so.4.1.2 [.] operator new 
1.25% ceph-osd [.] ConfFile::load_from_buffer 
1.21% libtcmalloc.so.4.1.2 [.] operator delete 
1.14% [kernel] [k] _raw_spin_lock 
1.08% libstdc++.so.6.0.19 [.] std::basic_string<char, std::char_traits<char>, 
std::allocator<char> >::basic_string 
1.04% [kernel] [k] __schedule 
1.00% libpthread-2.17.so [.] pthread_mutex_trylock 
0.90% [kernel] [k] native_write_msr_safe 
0.89% [kernel] [k] __switch_to 
0.79% [kernel] [k] _raw_spin_lock_irqsave 
0.73% [kernel] [k] copy_user_enhanced_fast_string 



Regards, 

Alexandre 
