Re: cmake

2015-12-04 Thread Daniel Gryniewicz
On Fri, Dec 4, 2015 at 3:59 AM, Pete Zaitcev  wrote:
> On Thu, 3 Dec 2015 19:26:52 -0500 (EST)
> Matt Benjamin  wrote:
>
>> Could you share the branch you are trying to build?  (ceph/wip-5073 would 
>> not appear to be it.)
>
> It's the trunk with a few of my insignificant cleanups.
>
> But I found a fix: deleting the CMakeFiles/ and CMakeCache.txt let
> it run. Thanks again for the tip about the separate build directory.
>

FWIW, many cmake issues can be fixed by nuking the cmake-generated
files.  This is one of the big advantages of a separate build dir,
since the simplest way to do this is to nuke the build dir.
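
For example, with an out-of-source build the reset is a one-liner (a
sketch, assuming a build/ subdirectory under the source tree):

# All generated state (CMakeCache.txt, CMakeFiles/) stays in build/
$ mkdir build && cd build
$ cmake ..
$ make

# If the cmake state ever gets corrupted, nuke only the generated files:
$ cd .. && rm -rf build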

Daniel


Re: nfsv41 over AF_VSOCK (nfs-ganesha)

2015-10-23 Thread Daniel Gryniewicz
On Fri, Oct 23, 2015 at 9:27 AM, John Spray  wrote:
>  * NFS writes from the guest are lagging for about a minute before
> completing. My hunch is that this is something in the NFS client
> recovery stuff (in ganesha) that's not coping with vsock; the
> operations seem to complete at the point where the server declares
> itself "NOT IN GRACE".


Ganesha always starts in Grace, and will not process new clients until
it exits Grace, which would explain new writes stalling until the
server declares "NOT IN GRACE".  Existing clients should reconnect
fine, and new clients work fine after Grace is exited.
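
If it helps to confirm, you can watch for the transition in the ganesha
log (a sketch; the log path is an assumption for a typical install, and
"NOT IN GRACE" is the message you quoted):

# Follow the server log and watch for it leaving the grace period
$ tail -f /var/log/ganesha/ganesha.log | grep -i grace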

Dan


Re: branches! infernalis vs master, RIP next

2015-09-30 Thread Daniel Gryniewicz
On Tue, Sep 29, 2015 at 5:12 PM, Sage Weil  wrote:
>
>  1- Target any pull request with a bug fix that should go into infernalis
> at the infernalis branch.


So, currently, anything targeted at both infernalis and master should
have a pull request for infernalis only?  Or for both?
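
To make the question concrete, here is what I understand point 1 to
mean (just a sketch; branch and remote names are hypothetical):

# A bug fix bound for infernalis starts from the infernalis branch,
# and the pull request targets infernalis rather than master.
$ git fetch ceph
$ git checkout -b wip-myfix ceph/infernalis
# ...commit the fix, push, and open the PR against ceph:infernalis...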

Daniel


Re: make check bot failures (2 hours today)

2015-09-11 Thread Daniel Gryniewicz
Maybe periodically run git gc on the clone out-of-line?  Git runs it
occasionally when it thinks it's necessary, and that can take a while
on large and/or fragmented repos.
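
Something like this in cron would keep the cost out of the request path
(a sketch; the clone path is hypothetical):

# Repack/garbage-collect the bot's clone nightly, so a git fetch
# never has to pay the gc cost inline.
0 3 * * * git -C /home/bot/ceph-clone gc --quiet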

Daniel

On Fri, Sep 11, 2015 at 9:03 AM, Loic Dachary  wrote:
> Hi Ceph,
>
> The make check bot failed a number of pull request verifications today. Each 
> of them was flagged as a false negative (you should have received a short note 
> if your pull request was affected). The problem is now fixed[1] and all 
> should be back to normal. If you want to schedule another run, just rebase 
> your pull request and re-push it; the bot will notice.
>
> Sorry for the inconvenience and thanks for your patience :-)
>
> P.S. I'm not sure what it was exactly. Just that git fetch took too long to 
> answer and failed. Resetting the git clone from which the bot works fixed the 
> problem. It happened a few times in the past but did not show up in the past 
> six months or so.
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>


Re: Ceph Hackathon: More Memory Allocator Testing

2015-09-03 Thread Daniel Gryniewicz
I believe preloading should work fine.  It has been a common way to
debug buffer overruns with Electric Fence and similar tools for
years, and I have used it in large applications of a size similar
to Ceph.
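
One quick way to confirm a preload actually took effect in a running
process (a sketch; the library path matches Shinobu's example below,
and some_long_running_app stands in for a real binary):

# If the preload worked, the jemalloc shared object shows up
# in the live process's memory maps.
$ LD_PRELOAD=/usr/lib64/libjemalloc.so.1 some_long_running_app &
$ grep jemalloc /proc/$!/maps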

Daniel

On Thu, Sep 3, 2015 at 5:13 AM, Shinobu Kinjo  wrote:
>
> Preloading jemalloc after compiling with the default malloc
>
> $ cat hoge.c
> #include <stdlib.h>
>
> int main(void)
> {
>     /* allocate with whatever malloc is linked or preloaded */
>     int *ptr = malloc(sizeof(int) * 10);
>
>     if (ptr == NULL)
>         exit(EXIT_FAILURE);
>     free(ptr);
>     return 0;
> }
>
>
> $ gcc ./hoge.c
>
>
> $ ldd ./a.out
> linux-vdso.so.1 (0x7fffe17e5000)
> libc.so.6 => /lib64/libc.so.6 (0x7fc989c5f000)
> /lib64/ld-linux-x86-64.so.2 (0x55a718762000)
>
>
> $ nm ./a.out | grep malloc
>  U malloc@@GLIBC_2.2.5   // malloc loaded
>
>
> $ LD_PRELOAD=/usr/lib64/libjemalloc.so.1 \
> > ldd a.out
> linux-vdso.so.1 (0x7fff7fd36000)
> /usr/lib64/libjemalloc.so.1 (0x7fe6ffe39000)  // jemalloc loaded
> libc.so.6 => /lib64/libc.so.6 (0x7fe6ffa61000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7fe6ff844000)
> /lib64/ld-linux-x86-64.so.2 (0x560342ddf000)
>
>
> Logically it could work, but in the real world I'm not 100% sure it works for 
> large-scale applications.
>
> Shinobu
>
> - Original Message -
> From: "Somnath Roy" 
> To: "Alexandre DERUMIER" 
> Cc: "Sage Weil" , "Milosz Tanski" , 
> "Shishir Gowda" , "Stefan Priebe" 
> , "Mark Nelson" , "ceph-devel" 
> 
> Sent: Sunday, August 23, 2015 2:03:41 AM
> Subject: RE: Ceph Hackathon: More Memory Allocator Testing
>
> Need to see if the client is overriding the libraries built with a different 
> malloc library, I guess..
> I am not sure whether the benefit you are seeing in your case is because qemu 
> is more efficient with tcmalloc/jemalloc, or because of the entire client stack.
>
> -Original Message-
> From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
> Sent: Saturday, August 22, 2015 9:57 AM
> To: Somnath Roy
> Cc: Sage Weil; Milosz Tanski; Shishir Gowda; Stefan Priebe; Mark Nelson; 
> ceph-devel
> Subject: Re: Ceph Hackathon: More Memory Allocator Testing
>
> >>Wanted to know: is there any reason we didn't link the client libraries with 
> >>tcmalloc in the first place (but only linked the OSDs/mon/RGW)?
>
> Do we need to link the client libraries?
>
> I'm building qemu with jemalloc, and it seems to be enough.
>
>
>
> - Original Message -
> From: "Somnath Roy" 
> To: "Sage Weil" , "Milosz Tanski" 
> Cc: "Shishir Gowda" , "Stefan Priebe" 
> , "aderumier" , "Mark Nelson" 
> , "ceph-devel" 
> Sent: Saturday, August 22, 2015 18:15:36
> Subject: RE: Ceph Hackathon: More Memory Allocator Testing
>
> Yes, even today rocksdb is also linked with tcmalloc. That doesn't mean every 
> application using rocksdb needs to be built with tcmalloc.
> Sage,
> Wanted to know: is there any reason we didn't link the client libraries with 
> tcmalloc in the first place (but only linked the OSDs/mon/RGW)?
>
> Thanks & Regards
> Somnath
>
> -Original Message-
> From: Sage Weil [mailto:s...@newdream.net]
> Sent: Saturday, August 22, 2015 6:56 AM
> To: Milosz Tanski
> Cc: Shishir Gowda; Somnath Roy; Stefan Priebe; Alexandre DERUMIER; Mark 
> Nelson; ceph-devel
> Subject: Re: Ceph Hackathon: More Memory Allocator Testing
>
> On Fri, 21 Aug 2015, Milosz Tanski wrote:
> > On Fri, Aug 21, 2015 at 12:22 AM, Shishir Gowda
> >  wrote:
> > > Hi All,
> > >
> > > Have sent out a pull request which enables building librados/librbd with 
> > > either tcmalloc(as default) or jemalloc.
> > >
> > > Please find the pull request @
> > > https://github.com/ceph/ceph/pull/5628
> > >
> > > With regards,
> > > Shishir
> >
> > Unless I'm missing something here, this seems like the wrong thing to do.
> > Libraries that will be linked in by other external applications should
> > not have a 3rd-party malloc linked in there. That seems like an
> > application choice. At the very least, the default should not be to
> > link in a 3rd-party malloc.
>
> Yeah, I think you're right.
>
> Note that this isn't/wasn't always the case, though.. on precise, for 
> instance, libleveldb links libtcmalloc. They stopped doing this sometime 
> before trusty.
>
> sage
>