On 25-11-2010 11:17, Wilson Snyder wrote:
Why don't you just put the cache on an NFS (/CIFS) mounted volume? With the most recent version this should work well.
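For what it's worth, pointing ccache at such a mount is just a matter of setting CCACHE_DIR. A minimal sketch (the server name and mount point below are made up, and the exact mount command depends on your platform):

```shell
# Mount the shared volume (on Windows clients this would be a CIFS mount instead).
mount -t nfs cacheserver:/export/ccache /mnt/ccache

# Point ccache at the shared directory.
export CCACHE_DIR=/mnt/ccache

# Relax the umask so cache entries written by one user are usable by others.
export CCACHE_UMASK=002
```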
In our case the biggest compilation issues we have are on Windows, not Linux. My assumption until now has been that these protocols carry a lot of overhead. However, it may be worth giving it a try.
If you already are, are you really doing enough writes to swamp an NFS cache server? That would probably take hundreds of compiling clients; we have over a hundred here and don't see a bottleneck with a single well-performing NFS server.
In our case we're talking about 25 shared builds coming from like 15 machines.
Can you disclose what the specs are on that machine?
Memcached would add the nice benefit of tolerating machines going down, and somewhat better latency, but perhaps the above ideas with the existing version can deliver enough performance for you.
What I like most about this idea is the simplicity and lower overhead. We're talking about plain TCP sockets with no configuration needed.
In our environment we're working on many different operating systems. That makes it even more interesting to cache everything in a simple fashion that doesn't require much maintenance.
I see that memcached is limited to 1 MB of data per key. Naturally this causes some trouble, as many files would either not be cached or would need to be split across multiple keys.
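The splitting itself is simple enough. A rough sketch of what I mean, in Python (the key-naming scheme and function names here are just something I made up, not anything ccache or memcached defines):

```python
# Split a blob larger than memcached's 1 MB value limit across
# several sub-keys, plus a count key so it can be reassembled.
CHUNK = 1024 * 1024  # memcached's default maximum item size

def split_for_cache(key, data, chunk=CHUNK):
    """Return a dict of sub-key -> chunk covering `data`."""
    parts = {}
    n = (len(data) + chunk - 1) // chunk  # number of chunks, rounded up
    for i in range(n):
        parts["%s:%d" % (key, i)] = data[i * chunk:(i + 1) * chunk]
    parts["%s:count" % key] = str(n).encode()
    return parts

def join_from_cache(key, parts):
    """Reassemble the original blob from the chunk mapping."""
    n = int(parts["%s:count" % key])
    return b"".join(parts["%s:%d" % (key, i)] for i in range(n))
```

The dict here would of course be a round of set/get calls against the memcached server instead; the downside is that a multi-megabyte object file now costs several round trips, and a partial eviction means the whole entry is lost.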
--
Henrik
_______________________________________________
ccache mailing list
email@example.com
https://lists.samba.org/mailman/listinfo/ccache