I am not sure about Debian, but for Ubuntu the latest tcmalloc is not 
incorporated until 3.16.0.50.
You can use the attached program to detect whether your tcmalloc is okay. Do 
this:

$ g++ -o gperftest tcmalloc_test.c -ltcmalloc
$ TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=67108864 ./gperftest
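
For reference, the 67108864 passed above is 64 MiB, i.e. double the 33554432-byte (32 MiB) value that the attached program treats as tcmalloc's default; any value other than the default works for the test:

```shell
# 67108864 bytes = 64 MiB; 33554432 bytes = 32 MiB is the default the
# attached program rejects, since it cannot tell "honored" from "ignored".
echo $((64 * 1024 * 1024))   # prints 67108864
echo $((32 * 1024 * 1024))   # prints 33554432
```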

BTW, I am not saying the latest tcmalloc will fix the issue, but it is worth trying.

Thanks & Regards
Somnath

From: David [mailto:[email protected]]
Sent: Friday, May 13, 2016 7:49 AM
To: Somnath Roy
Cc: ceph-users
Subject: Re: [ceph-users] Segfault in libtcmalloc.so.4.2.2

Linux osd11.storage 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3 
(2016-01-17) x86_64 GNU/Linux

apt-show-versions linux-image-3.16.0-4-amd64
linux-image-3.16.0-4-amd64:amd64/jessie-updates 3.16.7-ckt20-1+deb8u3 
upgradeable to 3.16.7-ckt25-2

apt-show-versions libtcmalloc-minimal4
libtcmalloc-minimal4:amd64/jessie 2.2.1-0.2 uptodate



On 13 May 2016, at 16:02, Somnath Roy 
<[email protected]> wrote:

What is the exact kernel version?
Ubuntu has a new tcmalloc incorporated from the 3.16.0.50 kernel onwards. If you 
are using an older kernel than this, it is better to upgrade the kernel, or to 
build the latest tcmalloc and see whether this still happens there.
Ceph does not package tcmalloc; it uses the tcmalloc available with the distro.
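
Since Ceph links against whatever the distro provides, one way to see which tcmalloc an OSD actually loads is to inspect its dynamic dependencies. A minimal sketch, assuming `ceph-osd` is on the PATH (adjust the binary path for your install):

```shell
# Show which tcmalloc shared library the ceph-osd binary resolves to.
# If ceph-osd is absent or not linked against tcmalloc, say so instead.
osd_bin="$(command -v ceph-osd || true)"
if [ -n "$osd_bin" ]; then
    ldd "$osd_bin" | grep -i tcmalloc || echo "tcmalloc not linked"
else
    echo "ceph-osd not found on PATH"
fi
```

On the Debian system above, this should point at the distro's libtcmalloc-minimal4 (2.2.1), matching the apt-show-versions output in the reply.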

Thanks & Regards
Somnath

From: ceph-users [mailto:[email protected]] On Behalf Of David
Sent: Friday, May 13, 2016 6:13 AM
To: ceph-users
Subject: [ceph-users] Segfault in libtcmalloc.so.4.2.2

Hi,

We've been getting some segfaults in our newest Ceph cluster, running ceph 9.2.1-1 on 
Debian 8.3:
segfault at 0 ip 00007f27e85120f7 sp 00007f27cff9e860 error 4 in 
libtcmalloc.so.4.2.2

I saw there’s already a bug for this on the tracker: 
http://tracker.ceph.com/issues/15628
I don’t know how many others are affected by it. We stop and start the OSD to 
bring it up again, but it’s quite annoying.

I’m guessing this affects Jewel as well?

Kind Regards,

David Majchrzak


#include <iostream>
#include <cstdlib>
#ifdef HAVE_GPERFTOOLS_HEAP_PROFILER_H
#include <gperftools/heap-profiler.h>
#else
#include <google/heap-profiler.h>
#endif

#ifdef HAVE_GPERFTOOLS_MALLOC_EXTENSION_H
#include <gperftools/malloc_extension.h>
#else
#include <google/malloc_extension.h>
#endif

using namespace std;

// Detect the tcmalloc bug: a broken tcmalloc silently ignores the
// TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES environment variable.
int main()
{
  size_t tc_cache_sz;
  size_t env_cache_sz;
  char *env_cache_sz_str;
  int st;

  env_cache_sz_str = getenv("TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES");
  if (env_cache_sz_str) {
    env_cache_sz = strtoul(env_cache_sz_str, NULL, 0);
    if (env_cache_sz == 33554432) {
      // 33554432 (32 MiB) is tcmalloc's default, so the test cannot
      // distinguish "env var honored" from "env var ignored".
      cout << "TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES Value same as default:"
              " 33554432 export a different value for test" << endl;
      exit(EXIT_FAILURE);
    }
    // Ask tcmalloc what limit it actually applied and compare it with
    // the value requested via the environment.
    tc_cache_sz = 0;
    MallocExtension::instance()->
        GetNumericProperty("tcmalloc.max_total_thread_cache_bytes",
                           &tc_cache_sz);
    if (tc_cache_sz == env_cache_sz) {
      cout << "Tcmalloc OK! Internal and Env cache size are same:" <<
              tc_cache_sz << endl;
      st = EXIT_SUCCESS;
    } else {
      cout << "Tcmalloc BUG! TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES: "
              << env_cache_sz << " Internal Size: " << tc_cache_sz
              << " different" << endl;
      st = EXIT_FAILURE;
    }
  } else {
    cout << "TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES Env Not Set" << endl;
    st = EXIT_FAILURE;
  }
  exit(st);
}
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com