Derek,
Derek M Jones wrote:
> Gilles,
>> When working on the ASPLOS paper with our 48-core machine, we tried
>> both. One limiting factor is the file system cache if you have a lot
>> of different source files to process. In our case, it was interesting
>> to process several semantic patches on a single source file. That way
>> the source file and all its includes are reused more than once.
> I'm guessing that your 48-core machine was designed for CPU-intensive
> work?
No, it's a standard Dell server. But clearly the limiting factor is
main memory.
> I would expect that Amazon allows data to be distributed. Isn't that
> how cloud computing is supposed to work, or is data distribution
> going to be an option that costs more?
No idea whether it costs more or less. Probably the main issue here is
packaging the source files.
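For what it's worth, the grouping I described above (apply every
semantic patch to one source file while it and its headers are still
hot in the cache, rather than one patch across all files) can be
sketched as a dry run. All file and patch names below are made up, and
the example only prints the commands:

```shell
# Dry-run sketch: inner loop over patches, outer loop over files, so a
# file's includes are parsed while still in the page cache.
# Remove the `echo` to actually run spatch.  (The option is spelled
# --sp-file in recent releases, -sp_file in older ones.)
mkdir -p /tmp/cocci-demo/patches
touch /tmp/cocci-demo/a.c /tmp/cocci-demo/b.c
touch /tmp/cocci-demo/patches/p1.cocci /tmp/cocci-demo/patches/p2.cocci

for src in /tmp/cocci-demo/*.c; do
  for sp in /tmp/cocci-demo/patches/*.cocci; do
    echo spatch --sp-file "$sp" "$src"
  done
done
```

If I remember correctly, spatch also accepts -max n and -index i to
partition the file list across independent jobs, which would be the
easy way to spread the outer loop over cores or EC2 instances.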
> Coccinelle does have a mechanism for scripts to communicate when they
> reach the start/end of a file:
> http://cocci.ekstranet.diku.dk/wiki/doku.php?id=executing_python_at_start_end_of_every_source_file
> I don't know whether this would be a worthwhile synchronization
> mechanism if Amazon has a file system bottleneck.
I will ask a friend who knows everything about EC2 what the right way
to do such a thing is...
Gilles
--
http://lip6.fr/Gilles.Muller
_______________________________________________
Cocci mailing list
[email protected]
http://lists.diku.dk/mailman/listinfo/cocci
(Web access from inside DIKUs LAN only)