Thank you so much for your valuable input!!

On Sat, Mar 21, 2020 at 2:19 PM Joel Rosdahl <j...@rosdahl.net> wrote:
> On Tue, 17 Mar 2020 at 10:06, Steffen Dettmer via ccache
> <ccache@lists.samba.org> wrote:
> > As workaround for a special unrelated issue currently we redefine
> > __FILE__ (and try to remove that redefinition). I understand that
> > ccache still works thanks to CCACHE_BASEDIR even for __FILE__ usage
> > inside files. Is that correct?
> Yes, if basedir is a prefix of the source file path then __FILE__
> will expand to a relative path since ccache passes a relative source
> file path to the compiler.

Thanks for the confirmation. This now seems to work fine for us!

Just as an aside: isn't it common to build in some $builddir different
from the top-level $srcdir (e.g. with automake or cmake)? Wouldn't that
common case then need two base directories?
(For me it's no problem, since I just build in a subdirectory below
BASEDIR, but some teammates dislike this and build in a ramdisk or so.)
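For reference, here is a sketch of how a single base directory can cover
both trees, as long as the build directory lives below it (all paths
below are hypothetical examples, not our real setup):

```shell
# Single base directory covering both srcdir and builddir.
# ccache supports only one CCACHE_BASEDIR, so both trees must
# share the prefix for absolute paths to be rewritten to
# relative ones. Paths here are made-up examples:
#   srcdir:   /home/user/project/src
#   builddir: /home/user/project/build
export CCACHE_BASEDIR=/home/user/project
```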

> > I understood that CCACHE_SLOPPINESS=file_macro means that cache
> > results may be used even if __FILE__ is different, i.e. using a
> > __FILE__ from another user (fine for our usecases), is this correct?
> That used to be the case, but the file_macro sloppiness was removed in
> 3.7.6; see <https://ccache.dev/releasenotes.html#_ccache_3_7_6>.

Ahh, thanks for the pointer. I think I now remember that someone posted
about hunting a strange bug all the way down to the disassembly, only to
find something like that as the cause. Indeed, such a case can never be
paid back by reduced compilation times. Good that you fixed it.

> > How to find a reasonable max_size? For now I just arbitrarily picked
> > 25 GB (approximately the build tree size) and I never saw it "full"
> > according to ccache -s.
> "cache size" will never reach "max cache size", so that is not a good
> indicator of whether "max cache size" is enough. See
> <https://ccache.dev/manual/3.7.8.html#_automatic_cleanup> for details on
> how automatic cache cleanup works. The TLDR is that "cache size" will
> stay around 90% (assuming limit_multiple is the default 0.8) of "max
> cache size" in steady state. This is because each of the 16
> subdirectories will be between 80% and 100% full with uniform
> probability.

Thanks for your great explanation! I read this (and almost understood it
correctly, but only almost), and by "full" I meant more than 80%
(I only ever saw slightly over 50%).
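Just to check my understanding of the 90% figure: with each of the 16
subdirectories uniformly between 80% and 100% full, the expected overall
fill level is simply the mean of that interval. A quick sanity check:

```shell
# Each of the 16 subdirectories is between 80% and 100% full with
# uniform probability, so the expected overall fill level is the
# mean of that interval: (0.8 + 1.0) / 2 = 0.9, i.e. ~90% of
# "max cache size".
awk 'BEGIN { printf "expected fill: %.0f%%\n", (0.8 + 1.0) / 2 * 100 }'
```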

> Instead you can have a look at "cleanups performed". Each time a
> subdirectory gets full that counter will increase by 1.

Ahh, this of course is a great idea. I will watch that counter.
(Actually I wonder why I didn't think of it in the first place.)

> Especially with network caches it might be a good idea to disable
> automatic cleanup and instead perform explicit cleanup periodically on
> one server, preferably the server that hosts the filesystem. That way
> the cleanup won't happen over slow network and several clients won't
> compete to clean up. One way of doing this is to set an unlimited cache
> size and then run something like "CCACHE_MAXSIZE=25G ccache -c"
> periodically on the server.

This again is a great idea. Will cleaning recover from a corrupted cache,
or should I add a script like "when every cache value is zero, clear the
cache"? I think I could set a high "safety" limit for the Jenkins user
(which runs locally) and add a smaller periodic cleanup after the nightly
builds later.
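In case it helps others, a sketch of what such a periodic server-side
cleanup could look like as a crontab entry (the cache path, size limit,
and schedule are made-up example values):

```shell
# Hypothetical crontab entry on the server hosting the cache
# filesystem: trim the cache down to 25 GB every night at 03:00.
# CCACHE_DIR and the schedule are example values, not our setup.
0 3 * * * CCACHE_DIR=/srv/ccache CCACHE_MAXSIZE=25G ccache -c
```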

> > Is sharing via CIFS possibly at all or could it have bad effects?
> Don't know, but I wouldn't be surprised if ccache's locking doesn't work
> properly with SMB/CIFS. Locking is based on creating symlinks atomically
> and I guess that doesn't translate well to Windows filesystems.

Thanks for the clarification. I disabled the Samba share (I just needed
to reconfigure two repositories driving auto-updating Docker containers;
isn't everything simple nowadays, lol) and now it seems to run very well.
I guess CIFS was the root of all our issues.

> > Are cache and/or stats version dependent?
> The cache data and stats files are intended to be backward and forward
> compatible from ccache 3.2.

Ahh, that's good to know; so in case someone accidentally uses a wrong
version, we shouldn't face any issues. Great work!

> > I'm also still facing scmake issues (using "physical" and "logical" in
> > several mixed combinations). Complex topic.
> What is scmake?

A typo! cmake is what I wanted to write :)

Our issue is that there are cases where cmake uses the physical path
instead of the logical one, and then BASEDIR doesn't match.
I haven't fully understood the whole topic yet; I need to read up on it
when I have a bit more time.
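In case it clarifies what I mean by physical vs. logical: with a symlink
component in the path, `pwd -L` keeps the symlink while `pwd -P` resolves
it, and since the BASEDIR check is a plain textual prefix match, a
resolved (physical) path may no longer start with the configured basedir.
A minimal illustration (the directory names are throwaway examples):

```shell
# Logical vs. physical paths with a symlink component.
# The directories below are throwaway examples.
mkdir -p /tmp/real_dir
ln -sfn /tmp/real_dir /tmp/link_dir
cd /tmp/link_dir
echo "logical:  $(pwd -L)"   # keeps the symlink: .../link_dir
echo "physical: $(pwd -P)"   # resolves it:       .../real_dir
```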

Thank you for your great support again!!


ccache mailing list
