Did you actually run spatch with the -profile option and find that most of
the time is spent in the Python code?
I get statistics like the following for one of my small data export
experiments:
elfring@Sonne:~/Projekte/Coccinelle/lokal/demos/return_values> spatch -sp_file
select_non_void_functions2.cocci example1b.c -profile
init_defs_builtins: /usr/share/coccinelle/standard.h
HANDLING: example1b.c
function|"return type"|action|"contained in"|"source file"|line|column
my_safe_log|int|declaration||"example1b.c"|1|1
my_status|int|declaration||"example1b.c"|2|3
my_addition|int|declaration||"example1b.c"|7|5
my_safe_log|int|definition||"example1b.c"|4|1
my_message|const char * const|definition||"example1b.c"|5|1
my_status|int|definition||"example1b.c"|6|1
my_addition|int|definition||"example1b.c"|8|1
main|int|definition||"example1b.c"|9|1
starting: Common.group_assoc_bykey_eff
ending: Common.group_assoc_bykey_eff, 0.000018s
---------------------
profiling result
---------------------
Main total : 0.121 sec 1 count
Main.outfiles computation : 0.121 sec 1 count
pre_engine : 0.104 sec 1 count
parse cocci : 0.094 sec 1 count
HACK : 0.023 sec 3 count
full_engine : 0.017 sec 1 count
C parsing.tokens : 0.014 sec 3 count
python : 0.011 sec 32 count
bigloop : 0.011 sec 1 count
initial_final_bigloop : 0.009 sec 2 count
process_a_ctl_a_env_a_toplevel : 0.008 sec 117 count
mysat : 0.007 sec 117 count
Common.full_charpos_to_pos : 0.004 sec 2 count
C parsing.fix_define : 0.004 sec 3 count
Pattern3.match_re_node : 0.004 sec 796 count
Common.full_charpos_to_pos_large : 0.003 sec 3 count
is_void_defined : 0.003 sec 1 count
function_definition : 0.002 sec 1 count
is_void_declared : 0.002 sec 1 count
is_void_member_function : 0.001 sec 1 count
is_member_function : 0.001 sec 1 count
ctl : 0.001 sec 10 count
C parsing.fix_cpp : 0.001 sec 1 count
function_declaration : 0.001 sec 1 count
Parsing: 1st pass : 0.001 sec 10 count
flow : 0.001 sec 10 count
get_glimpse_constants : 0.001 sec 1 count
YACC : 0.001 sec 10 count
TAC.annotate_program : 0.001 sec 1 count
C parsing.lex_ident : 0.001 sec 783 count
MACRO mgmt prep 2 : 0.000 sec 1 count
fix_flow : 0.000 sec 9 count
LEXING : 0.000 sec 1 count
post_engine : 0.000 sec 1 count
Common.=~ : 0.000 sec 6 count
MACRO mgmt prep 1 : 0.000 sec 1 count
show_xxx : 0.000 sec 210 count
C consistencycheck : 0.000 sec 1 count
C parsing.mk_info_item : 0.000 sec 10 count
C parsing.lookahead : 0.000 sec 113 count
Main.result analysis : 0.000 sec 1 count
Type_c.type_of_s : 0.000 sec 5 count
check_duplicate : 0.000 sec 1 count
Common.group_assoc_bykey_eff : 0.000 sec 1 count
TAC.add_binding : 0.000 sec 13 count
TAC.typedef_fix : 0.000 sec 9 count
TAC.lookup_env : 0.000 sec 9 count
C parsing : 0.000 sec 1 count
Main.infiles computation : 0.000 sec 1 count
asttoctl2 : 0.000 sec 1 count
TAC.unwrap_unfold_env : 0.000 sec 2 count
TOTAL : 0.000 sec 1 count
I guess that other software components (for example, parsing the SmPL file,
which dominates the profile above) have a bigger impact on the run time than
the discussed data exchange with scripts, i.e. the forwarding of position
variables from a source code filter pattern.
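For reference, a rule of the following shape could produce such a report via
a python script rule. This is only a minimal sketch under assumptions; the
rule names and metavariables are hypothetical and the real
select_non_void_functions2.cocci may differ (in particular, the filtering of
void functions and the declaration/definition distinction are omitted here):

// Hedged sketch: bind name, return type and position of a function
// definition, then print one pipe-separated record per match.
@definition@
type T;
identifier f;
position p;
@@
T f@p(...) { ... }

@script:python@
t << definition.T;
fn << definition.f;
p << definition.p;
@@
# position metavariables expose file, line and column attributes
print('%s|%s|definition||"%s"|%s|%s' % (fn, t, p[0].file, p[0].line, p[0].column))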
Regards,
Markus
_______________________________________________
Cocci mailing list
[email protected]
http://lists.diku.dk/mailman/listinfo/cocci
(Web access from inside DIKU's LAN only)