I have an aggregation policy where I am trying to keep counts of the number
of connections each IP made, in a cluster setup.

For now, I am using tables on the workers and the manager, and using
&expire_func to trigger worker2manager and manager2worker events.

All works great until the tables grow beyond ~1 million entries, after which
the expire functions start clogging up the manager and slowing it down.

Example Timers lines from prof.log on the manager:

1523636760.591416 Timers: current=57509 max=68053 mem=4942K lag=0.44s
1523636943.983521 Timers: current=54653 max=68053 mem=4696K lag=168.39s
1523638289.808519 Timers: current=49623 max=68053 mem=4264K lag=1330.82s
1523638364.873338 Timers: current=48441 max=68053 mem=4162K lag=60.06s
1523638380.344700 Timers: current=50841 max=68053 mem=4369K lag=0.47s

So instead of using &expire_func, I can probably try schedule {} ; but I am
not sure how scheduled events are any different internally from scheduled
expire_funcs?
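For reference, here is a rough sketch of the schedule-based alternative I
have in mind. All names here (conn_counts, report_count, flush_interval) are
made up for illustration, not from my actual policy. The idea is that a
single recurring event walks the whole table and flushes counts, so there is
one timer per interval rather than per-entry expiration work:

    global report_count: event(ip: addr, n: count);

    global conn_counts: table[addr] of count &default = 0;

    const flush_interval = 5 min;

    event flush_counts()
        {
        # One timer fires per interval for the entire table, instead
        # of an &expire_func callback firing as each entry ages out.
        for ( ip in conn_counts )
            event report_count(ip, conn_counts[ip]);

        # Reset the table for the next interval.
        conn_counts = table();

        # Re-arm the timer.
        schedule flush_interval { flush_counts() };
        }

    event bro_init()
        {
        schedule flush_interval { flush_counts() };
        }

Whether this actually behaves better at the >1M-entry scale is exactly what
I am unsure about, hence the question below.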

I'd like to think/guess that scheduling events is probably less taxing, but
I wanted to check with the greater group for thoughts - especially any
insights into the internal processing queues for each.

Thanks,
Aashish 


_______________________________________________
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
