(Resend since my original mail didn't reach the mailing list properly.) (Sorry for the delayed reply, I have been on vacation.)
> On cache-hit, there's currently no reason to actually look inside the
> file, right? It just does the copy blind (I forget exactly how). Reading
> the initial data from every binary on every cache-hit (the case we want
> to be most optimal) sounds like a Bad Thing.

No no, an extra read of the initial data is not needed. If something I wrote implied that, I must have been unclear.

A cache hit in the current implementation (simplified, of course):

1. Stat object file in cache. If it exists, we have a hit.
2. Open file.
3. Read a chunk of the file into a buffer.
4. Write buffer content to the destination object file.
5. Repeat 3 and 4 until EOF.
6. Close file.

A cache hit in the suggested solution, where special data (e.g. encoding the exit code) is written to the cached "object file":

1. Stat object file in cache. If it exists, we have a hit.
2. Open file.
3. Read a chunk of the file into a buffer.
4. If the buffer contains special data (e.g. starts with a ccache-specific header), exit with the encoded exit code (and write stderr, etc.). Otherwise:
5. Write buffer content to the destination object file.
6. Repeat 3 and 5 until EOF.
7. Close file.

So: exactly the same system calls in both cases for a normal cache hit. That's what I tried to summarize with "On a cache hit, we need to open and read the file regardless of whether it's a real object file or special data encoding an exit code". (A rough sketch of the check in step 4 is included at the end of this mail.)

> The most common case must always be the "quick path" [...]

Heh, I certainly know that it's important not to slow down common cases with e.g. more system calls. I didn't mean to ask you to explain what you meant by the term "slow path", but why you thought that there *would be* a slow path in the special-data-in-object-file solution. Again, I must have been unclear; sorry about that. (I'm a bit surprised that you felt the need to explain basic stuff about what's important, to be honest.)

And as described above, I see no reason why special-data-in-object-file would be slower. Yes, the non-zero-exit-code case could perhaps be optimized with a stat size hack, but that doesn't feel important.

I hope this is clearer. I'm beginning to lose faith in my English communication skills. :-)

-- 
Joel
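
Here is the sketch referred to above: a minimal illustration of the suggested cache-hit path, assuming a hypothetical special-data marker (MAGIC) and a made-up helper name (copy_or_replay), reading and writing in fixed-size chunks. It only shows the shape of the copy loop with the extra header check; it is not actual ccache code.

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAGIC "cCrEsUlT"   /* hypothetical marker for special data */
#define CHUNK 8192

/* Hypothetical helper: copy the cached file to dest, or, if it holds
   special data, exit with the exit code encoded after the marker. */
int copy_or_replay(const char *cached, const char *dest)
{
    char buf[CHUNK + 1];
    int in = open(cached, O_RDONLY);          /* step 2 */
    if (in == -1)
        return -1;                            /* treat as a cache miss */

    ssize_t n = read(in, buf, CHUNK);         /* step 3 */
    if (n < 0) {
        close(in);
        return -1;
    }

    /* Step 4: does the first chunk start with the special header? */
    if ((size_t)n > strlen(MAGIC) && memcmp(buf, MAGIC, strlen(MAGIC)) == 0) {
        buf[n] = '\0';
        int code = atoi(buf + strlen(MAGIC)); /* exit code stored as text */
        close(in);
        /* ...the saved stderr would be written out here as well... */
        exit(code);
    }

    /* Steps 5-7: a normal object file; copy it through chunk by chunk. */
    int out = open(dest, O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (out == -1) {
        close(in);
        return -1;
    }
    while (n > 0) {
        if (write(out, buf, (size_t)n) != n) {
            close(in);
            close(out);
            return -1;
        }
        n = read(in, buf, CHUNK);
    }
    close(in);
    close(out);
    return n == 0 ? 0 : -1;
}

On a normal hit this performs the same open/read/write/close sequence as the current implementation; the only addition is a memcmp on a buffer that has already been read anyway.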