Yeah, I can see ceph-osd/ceph-mon built with jemalloc.
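
A quick way to double-check any of the binaries is to grep the ldd output; the
path below is just an example and may differ per distro:

# ldd /usr/bin/ceph-osd | grep -iE 'tcmalloc|jemalloc'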

Thanks & Regards
Somnath

-----Original Message-----
From: Stefan Priebe [mailto:s.pri...@profihost.ag] 
Sent: Wednesday, August 19, 2015 1:41 PM
To: Somnath Roy; Alexandre DERUMIER; Mark Nelson
Cc: ceph-devel
Subject: Re: Ceph Hackathon: More Memory Allocator Testing


On 19.08.2015 at 22:34, Somnath Roy wrote:
> But, you said you need to remove libcmalloc, *not* libtcmalloc...
> I saw librbd/librados are built with libcmalloc, not with libtcmalloc...
> So, are you saying to remove libtcmalloc (not libcmalloc) to enable jemalloc?

Ouch, my mistake. I read libtcmalloc - too late here.

My build (Hammer) says:
# ldd /usr/lib/librados.so.2.0.0
         linux-vdso.so.1 =>  (0x00007fff4f71d000)
         libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fafdb26c000)
         libboost_thread.so.1.49.0 => /usr/lib/libboost_thread.so.1.49.0 (0x00007fafdb24f000)
         libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fafdb032000)
         libcrypto++.so.9 => /usr/lib/libcrypto++.so.9 (0x00007fafda924000)
         libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007fafda71f000)
         librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fafda516000)
         libboost_system.so.1.49.0 => /usr/lib/libboost_system.so.1.49.0 (0x00007fafda512000)
         libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fafda20b000)
         libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fafd9f88000)
         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fafd9bfd000)
         libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fafd99e7000)
         /lib64/ld-linux-x86-64.so.2 (0x000056358ecfe000)

Only ceph-osd is linked against libjemalloc for me.

Stefan

> -----Original Message-----
> From: Stefan Priebe [mailto:s.pri...@profihost.ag]
> Sent: Wednesday, August 19, 2015 1:31 PM
> To: Somnath Roy; Alexandre DERUMIER; Mark Nelson
> Cc: ceph-devel
> Subject: Re: Ceph Hackathon: More Memory Allocator Testing
>
>
> On 19.08.2015 at 22:29, Somnath Roy wrote:
>> Hmm... we need to fix that as part of configure/Makefile, I guess (?).
>> Since we did the jemalloc integration originally, we can take ownership of
>> that, unless anybody sees a problem with enabling tcmalloc/jemalloc for
>> librbd/librados.
>>
>> << You have to remove libcmalloc out of your build environment to get this done
>>
>> How do I do that? I am using Ubuntu and can't afford to remove the libc*
>> packages.
>
> I always use a chroot to build packages, with only a minimal bootstrap plus
> the build deps installed. google-perftools, where libtcmalloc comes from, is
> not part of the Ubuntu "core/minimal" set.
>
> Stefan
>
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Stefan Priebe [mailto:s.pri...@profihost.ag]
>> Sent: Wednesday, August 19, 2015 1:18 PM
>> To: Somnath Roy; Alexandre DERUMIER; Mark Nelson
>> Cc: ceph-devel
>> Subject: Re: Ceph Hackathon: More Memory Allocator Testing
>>
>>
>>> On 19.08.2015 at 22:16, Somnath Roy wrote:
>>> Alexandre,
>>> I am not able to build librados/librbd by using the following config option.
>>>
>>> ./configure --without-tcmalloc --with-jemalloc
>>
>> Same issue here. You have to remove libcmalloc out of your build
>> environment to get this done.
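>>
>> Concretely, that means making sure the tcmalloc development package (from
>> google-perftools, per the correction above) is not installed in the build
>> environment, so configure cannot find it. The package name below is the
>> usual Ubuntu one and may differ by release:
>>
>> dpkg -l | grep -i perftools        # check whether it is present at all
>> sudo apt-get remove libgoogle-perftools-dev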
>>
>> Stefan
>>
>>
>>> It seems it is building the OSD/MON/MDS/RGW binaries with jemalloc enabled...
>>>
>>> root@emsnode10:~/ceph-latest/src# ldd ./ceph-osd
>>>            linux-vdso.so.1 =>  (0x00007ffd0eb43000)
>>>            libjemalloc.so.1 => /usr/lib/x86_64-linux-gnu/libjemalloc.so.1 (0x00007f5f92d70000)
>>>            .......
>>>
>>> root@emsnode10:~/ceph-latest/src/.libs# ldd ./librados.so.2.0.0
>>>            linux-vdso.so.1 =>  (0x00007ffed46f2000)
>>>            libboost_thread.so.1.55.0 => /usr/lib/x86_64-linux-gnu/libboost_thread.so.1.55.0 (0x00007ff687887000)
>>>            liblttng-ust.so.0 => /usr/lib/x86_64-linux-gnu/liblttng-ust.so.0 (0x00007ff68763d000)
>>>            libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ff687438000)
>>>            libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ff68721a000)
>>>            libnss3.so => /usr/lib/x86_64-linux-gnu/libnss3.so (0x00007ff686ee0000)
>>>            libsmime3.so => /usr/lib/x86_64-linux-gnu/libsmime3.so (0x00007ff686cb3000)
>>>            libnspr4.so => /usr/lib/x86_64-linux-gnu/libnspr4.so (0x00007ff686a76000)
>>>            libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007ff686871000)
>>>            librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007ff686668000)
>>>            libboost_system.so.1.55.0 => /usr/lib/x86_64-linux-gnu/libboost_system.so.1.55.0 (0x00007ff686464000)
>>>            libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007ff686160000)
>>>            libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ff685e59000)
>>>            libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ff685a94000)
>>>            libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ff68587e000)
>>>            liblttng-ust-tracepoint.so.0 => /usr/lib/x86_64-linux-gnu/liblttng-ust-tracepoint.so.0 (0x00007ff685663000)
>>>            liburcu-bp.so.1 => /usr/lib/liburcu-bp.so.1 (0x00007ff68545c000)
>>>            liburcu-cds.so.1 => /usr/lib/liburcu-cds.so.1 (0x00007ff685255000)
>>>            /lib64/ld-linux-x86-64.so.2 (0x00007ff68a0f6000)
>>>            libnssutil3.so => /usr/lib/x86_64-linux-gnu/libnssutil3.so (0x00007ff685029000)
>>>            libplc4.so => /usr/lib/x86_64-linux-gnu/libplc4.so (0x00007ff684e24000)
>>>            libplds4.so => /usr/lib/x86_64-linux-gnu/libplds4.so (0x00007ff684c20000)
>>>
>>> It is building with libcmalloc always...
>>>
>>> Did you change the ceph makefiles to build librbd/librados with jemalloc?
>>>
>>> Thanks & Regards
>>> Somnath
>>>
>>> -----Original Message-----
>>> From: ceph-devel-ow...@vger.kernel.org 
>>> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Alexandre 
>>> DERUMIER
>>> Sent: Wednesday, August 19, 2015 7:01 AM
>>> To: Mark Nelson
>>> Cc: ceph-devel
>>> Subject: Re: Ceph Hackathon: More Memory Allocator Testing
>>>
>>> Thanks Mark,
>>>
>>> The results match exactly what I have seen with tcmalloc 2.1 vs 2.4 vs
>>> jemalloc.
>>>
>>> And indeed tcmalloc performance, even with a bigger cache, seems to decrease
>>> over time.
>>>
>>> What is funny is that I see exactly the same behaviour on the client librbd
>>> side, with qemu and multiple iothreads.
>>>
>>> Switching both server and client to jemalloc currently gives me the best
>>> performance on small reads.
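>>>
>>> (One way to get jemalloc on the client side without rebuilding librbd is to
>>> preload it into the qemu process; a minimal sketch, assuming the Ubuntu
>>> library path and placeholder pool/image names:)
>>>
>>> LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 \
>>>     qemu-system-x86_64 -drive file=rbd:rbd/vm-disk,format=raw,if=virtio ...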
>>>
>>> ----- Original Message -----
>>> From: "Mark Nelson" <mnel...@redhat.com>
>>> To: "ceph-devel" <ceph-devel@vger.kernel.org>
>>> Sent: Wednesday, 19 August 2015 06:45:36
>>> Subject: Ceph Hackathon: More Memory Allocator Testing
>>>
>>> Hi Everyone,
>>>
>>> One of the goals at the Ceph Hackathon last week was to examine how to 
>>> improve Ceph Small IO performance. Jian Zhang presented findings showing a 
>>> dramatic improvement in small random IO performance when Ceph is used with 
>>> jemalloc. His results build upon Sandisk's original findings that the 
>>> default thread cache values are a major bottleneck in TCMalloc 2.1. To 
>>> further verify these results, we sat down at the Hackathon and configured 
>>> the new performance test cluster that Intel generously donated to the Ceph 
>>> community laboratory to run through a variety of tests with different 
>>> memory allocator configurations. I've since written the results of those 
>>> tests up in pdf form for folks who are interested.
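>>>
>>> (For reference, the thread cache ceiling in question is the one controlled
>>> by gperftools' TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES environment variable;
>>> the 128 MB value below is only an illustrative setting, not necessarily the
>>> value used in these tests:)
>>>
>>> # e.g. set in the OSD's environment before start-up
>>> TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -i 0 -f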
>>>
>>> The results are located here:
>>>
>>> http://nhm.ceph.com/hackathon/Ceph_Hackathon_Memory_Allocator_Testing.pdf
>>>
>>> I want to be clear that many other folks have done the heavy lifting here. 
>>> These results are simply a validation of the many tests that other folks 
>>> have already done. Many thanks to Sandisk and others for figuring this out 
>>> as it's a pretty big deal!
>>>
>>> Side note: Very little tuning was done during these tests beyond swapping
>>> the memory allocator and setting a couple of quick-and-dirty ceph tunables.
>>> It's quite possible that higher IOPS will be achieved as we really start
>>> digging into the cluster and learning what the bottlenecks are.
>>>
>>> Thanks,
>>> Mark
