On Monday, 12.11.2012 at 13:49 +0200, Bogdan Harjoc wrote:
Basically, before writing a new object file, ccache could find a similar
object in the cache (based on object-code or source-code hashes for
example)
The main goal of most hashes is to give very distinct results even for
small changes in the input.
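Jürgen's point is the avalanche property: a one-byte change in the input yields a completely unrelated digest, so hashes of similar objects give no hint that the objects are similar. A small illustrative sketch (SHA-256 is used here only as an example of the property; it is not what ccache itself hashes with):

```python
import hashlib

# Two inputs that differ in a single byte.
a = hashlib.sha256(b"int x = 1;").hexdigest()
b = hashlib.sha256(b"int x = 2;").hexdigest()

# Count how many hex digits of the two digests differ.
differing = sum(1 for ca, cb in zip(a, b) if ca != cb)
print(a)
print(b)
print(f"{differing}/64 hex digits differ")
```

Roughly 15 in 16 digit positions differ on average, which is why similarity detection has to look at the object contents themselves rather than at their hashes.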
On Mon, Nov 12, 2012 at 2:30 PM, Jürgen Buchmüller <pullm...@t-online.de> wrote:
On Mon, Nov 12, 2012 at 3:39 PM, Andrew Stubbs <a...@codesourcery.com> wrote:
On 12/11/12 11:49, Bogdan Harjoc wrote:
Alternatively, a compaction operation could be run periodically that
compresses the cache using the same approach.
Is cache size/capacity a very big issue for you?
On 12/11/12 14:08, Bogdan Harjoc wrote:
No, but there is room for improvement. This could be optional, like a
CCACHE_COMPRESS mode that saves 99% instead of 40% when I routinely recompile
20 kernel branches, for example (v2.6.x, 3.0.x, 3.4.x, -git, -next, -ubuntu,
etc.).
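The saving Bogdan describes comes from compressing each object *against a similar one* rather than on its own. A hedged sketch of the idea using zlib's preset-dictionary support (the actual proposal would presumably use a binary-delta tool; the "object files" below are made-up stand-ins):

```python
import random
import zlib

# Two made-up "object files" that differ in only a few bytes.
rng = random.Random(42)
base = bytes(rng.getrandbits(8) for _ in range(20000))
similar = bytearray(base)
similar[100:104] = b"\x00\x01\x02\x03"   # a tiny local change
similar = bytes(similar)

# On its own, random-looking data barely compresses at all.
plain = zlib.compress(similar, 9)

# With the similar object as a preset dictionary, zlib can refer
# back to matching byte runs instead of storing them again.
co = zlib.compressobj(level=9, zdict=base)
with_dict = co.compress(similar) + co.flush()

print(f"alone: {len(plain)} bytes, against similar object: {len(with_dict)} bytes")
```

The dictionary-compressed form is a small fraction of the standalone one, which is the kind of ratio a delta-aware cache could hope for on near-identical objects (at the cost of needing the base object present to decompress).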
I realise that the more
Initial results from a small .ccache (3.0) dir:
- 6476 objects
- 300MB
- probably about 500-1000 compiles/recompiles of around 100 small to large
projects
The test was:
1. Find the candidates for compression, based on: objdump -t | grep g
(defined symbols). If two objects had at least 4
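The candidate-finding step above could be sketched as follows. In the real test the per-object symbol sets would be parsed from `objdump -t` output; to keep the sketch self-contained the sets are hard-coded (the object and symbol names are made up), and the threshold of 4 shared symbols is taken from the truncated description:

```python
from itertools import combinations

# Defined-symbol sets per object file; in practice these would be
# parsed from `objdump -t` output (names below are hypothetical).
symbols = {
    "sched.o":  {"schedule", "wake_up", "set_priority", "yield_to", "idle"},
    "sched2.o": {"schedule", "wake_up", "set_priority", "yield_to", "stats"},
    "fs.o":     {"open", "close", "read", "write"},
}

MIN_SHARED = 4  # threshold from the test description


def compression_candidates(objs):
    """Yield object pairs sharing at least MIN_SHARED defined symbols."""
    for a, b in combinations(sorted(objs), 2):
        if len(objs[a] & objs[b]) >= MIN_SHARED:
            yield a, b


print(list(compression_candidates(symbols)))
```

This is quadratic in the number of objects, so for a cache with thousands of entries an index from symbol name to objects would likely be needed to make the scan practical.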