>> Even on a ccache *hit* both copies of the .o file wind up occupying
>> buffer cache space, because the ccached .o is read from disk [paging
>> it in] in order to write the .o file to the build output directory.
>> On a ccache miss the copy runs the other direction but you still wind
>> up with both sets of pages in the buffer cache.
>
> In the hit case I would have thought that the .o file you read would
> still create less memory pressure than the working memory of running
> the real compiler on that file?  Perhaps the difference is that the
> kernel knows that when the compiler exits, its anonymous pages can be
> thrown away, whereas it doesn't know which .o file it ought to retain.
>  So perhaps madvise might help.  (Just speculating.)

I'm curious about this.  I guess you'd use madvise to tell the kernel
that the .o you just wrote shouldn't be cached?  But presumably it
should be cached, because you're going to link your program against it.

Alternatively, you could use madvise to tell the kernel not to cache
the .o file read out of ccache's cache.  But if you re-compile, you
want ccache's cache to be in memory.

I'm not sure how one might win here without hardlinking.
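
The hardlink idea, by comparison, would replace the copy with a link()
call, so both paths share one inode and the buffer cache only ever
holds one set of pages.  Again just a sketch with hypothetical names;
link() fails if the cache and the build tree are on different
filesystems, so a real implementation still needs the copy as a
fallback:

    /* Sketch: publish the cached .o into the build tree as a hard link
     * instead of a copy, so there is only one inode and one set of
     * pages in the buffer cache. */
    #include <unistd.h>

    static int link_from_cache(const char *cached, const char *dest)
    {
        unlink(dest);            /* ignore failure; dest may not exist */
        if (link(cached, dest) == 0)
            return 0;
        /* e.g. EXDEV: cache and build tree on different filesystems;
         * a real implementation would fall back to copying here. */
        return -1;
    }

If I remember right, ccache already has an option along these lines
(CCACHE_HARDLINK), with the caveat that the object in your build tree
and the file in the cache become the same file, so anything that
modifies objects in place can corrupt the cache.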

-Justin

On Thu, Dec 2, 2010 at 4:24 PM, Martin Pool <m...@sourcefrog.net> wrote:
> On 3 December 2010 03:42, Christopher Tate <ct...@google.com> wrote:
>>> I'd love to know whether you also tried distcc for it, and if so what
>>> happened or what went wrong.  (Obviously it can only help for the
>>> C/C++ phases.)
>>
>> distcc can certainly help a great deal.  For us, it's a bit
>> problematic to use because more than half of our total build is
>> non-C/C++ that depends on the C/C++ targets [e.g. Java-language
>> modules that have partially native implementations],
>
> ... and you suspect that the Makefile dependencies are not solid
> enough to safely do a parallel build?
>
>> plus we have a
>> highly heterogeneous set of build machines: both Mac hosts and Linux,
>> not all the same distro of Linux, etc.  The inclusion of Macs in
>> particular makes distcc more of a pain to get up and running cleanly.
>
> That can certainly be a problem.
>
>>> I'm just trying to understand how this happens.  Is it that when
>>> ccache misses it writes out an object file both to the cache directory
>>> and into the build directory, and both will be in the buffer cache?
>>> So it's not so much that they're paged in, but that they're dirtied
>>> in memory and will still be held there.
>>
>> Even on a ccache *hit* both copies of the .o file wind up occupying
>> buffer cache space, because the ccached .o is read from disk [paging
>> it in] in order to write the .o file to the build output directory.
>> On a ccache miss the copy runs the other direction but you still wind
>> up with both sets of pages in the buffer cache.
>
> In the hit case I would have thought that the .o file you read would
> still create less memory pressure than the working memory of running
> the real compiler on that file?  Perhaps the difference is that the
> kernel knows that when the compiler exits, its anonymous pages can be
> thrown away, whereas it doesn't know which .o file it ought to retain.
>  So perhaps madvise might help.  (Just speculating.)
>
> --
> Martin