Hello again,
Could anyone confirm the following behaviour?

I run cocci in parallel (old way, as my coccicheck script is not up to date):
$ parallel make coccicheck CHECK=src FLAGS="-timeout 180" -- `ls
tools/coccinelle/tests/*.cocci | sed 's/tools/COCCI=tools/'`
I have some huge files; that's why I use the timeout flag.
Unfortunately, when spatch reaches such a big file, its memory usage
increases and does not go down after the timeout fires. The memory is
only returned once the whole process finishes.
That causes my system to slow to a crawl.

I know I can run the checks sequentially, but I'd really prefer the
possibility of running them in parallel.
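One untested stopgap I've been considering (assuming GNU parallel and
bash; the 1 GiB figure is just an illustrative guess) is to cap each
job's address space with ulimit, so a spatch that balloons gets killed
by the kernel instead of dragging the whole machine into swap:

```shell
#!/bin/bash
# Untested sketch: run one coccicheck job per .cocci file, but cap each
# job's virtual memory at 1 GiB (ulimit -v takes KiB) so a runaway
# spatch is killed rather than swapping the system to a crawl.
run_check() {
    ulimit -v 1048576
    make coccicheck CHECK=src FLAGS="-timeout 180" COCCI="$1"
}
export -f run_check      # bash-specific; makes the function visible to parallel
parallel run_check ::: tools/coccinelle/tests/*.cocci
```

Whether an over-limit job fails cleanly or just aborts mid-file I
haven't verified.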

Sample ps output during analysis (note the first and last processes):
R.Gomulka@rgomulka-u:~/git/$ ps auxwww | grep spatch
10273    18352 73.4 31.8 2014932 1299012 pts/10 D+  10:11  15:28
/usr/bin/ocamlrun /usr/lib/coccinelle/spatch -timeout 180 -sp_file
tools/coccinelle/tests/andand.cocci -dir ./git/src
10273    18353 89.3  0.5  34396 23548 pts/10   R+   10:11  18:50
/usr/bin/ocamlrun /usr/lib/coccinelle/spatch -timeout 180 -sp_file
tools/coccinelle/tests/find_unsigned.cocci -dir ./git/src
10273    18354 91.3  0.4  28888 16388 pts/10   R+   10:11  19:15
/usr/bin/ocamlrun /usr/lib/coccinelle/spatch -timeout 180 -sp_file
tools/coccinelle/tests/badzero.cocci -dir ./git/src
10273    18355 81.2 43.9 2023316 1793824 pts/10 R+  10:11  17:08
/usr/bin/ocamlrun /usr/lib/coccinelle/spatch -timeout 180 -sp_file
tools/coccinelle/tests/doublebitand.cocci -dir ./git/src

Best regards,
Robert
_______________________________________________
Cocci mailing list
[email protected]
http://lists.diku.dk/mailman/listinfo/cocci
(Web access from inside DIKUs LAN only)