Re: [Bro-Dev] error compiling master

2018-07-10 Thread Aashish Sharma
Jon, 

> you have an old version of CAF (AKA actor-framework or libcaf)

Yes, that was the issue. I didn't realize I already had CAF installed on this box
since Jan 2017. Thank you.

root@box:# pkg info |fgrep caf
caf-0.15.3 C++ actor framework

Removing it lets me build Bro now. 

Thanks for the pointer. 

Aashish 

On Tue, Jul 10, 2018 at 06:02:51PM -0500, Jon Siwek wrote:
> On Tue, Jul 10, 2018 at 2:10 PM Aashish Sharma  wrote:
> 
> > [ 96%] Building CXX object 
> > libcaf_openssl/CMakeFiles/libcaf_openssl_shared.dir/src/manager.cpp.o
> > clang: warning: argument unused during compilation: '-L/lib'
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:100:8:
> >  error: use of undeclared identifier 'get_or'
> >   if (!get_or(config(), "middleman.attach-utility-actors", false))
> >^
> 
> This is at least an odd error if it were actually finding the CAF
> headers that are embedded in the Bro source tree, so a guess is that
> you have an old version of CAF (AKA actor-framework or libcaf)
> installed in a system directory and it is finding those instead.  Can
> you check?  A specific header file in question here would be
> "caf/config_value.hpp".
> 
> And for reference, building on a fresh FreeBSD 10.3 system +
> bro/master checkout worked for me just now, which may also suggest a
> problem in the system setup (or else an inconsistent Bro
> checkout/configuration, but it seems like we are both starting from a
> fresh clone).
> 
> - Jon
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] error compiling master

2018-07-10 Thread Aashish Sharma
Hi Daniel, 

> Was master (after the broker merge) previously working on this same machine?

Nah! Just building from scratch for the first time. Previously I was/am running
bro-2.5.3 on this host. 

I get the same error when trying on another 10.3-STABLE release. 

Aashish 


On Tue, Jul 10, 2018 at 03:24:34PM -0500, Daniel Thayer wrote:
> Was master (after the broker merge) previously working on this same machine?
> 
> It works for me on 10.4-RELEASE.   Maybe you could try "make distclean"
> and "git pull" and try again.
> 
> 
> On 7/10/18 2:03 PM, Aashish Sharma wrote:
> > Probably obvious but I am not very sure so asking here.
> > 
> > I see this error trying to build current master - Thoughts what am I 
> > missing ?
> > 
> > (trying the build on: FreeBSD 10.3-STABLE)
> > 
> > I am building as:
> > 
> > ./configure --prefix=/usr/local/bro-master && make
> > 
> > 
> > 
> > ..
> > ..
> > 
> > gmake[6]: Leaving directory 
> > '/usr/local/src/master/build/aux/broker/caf-build'
> > gmake[6]: Entering directory 
> > '/usr/local/src/master/build/aux/broker/caf-build'
> > [ 96%] Building CXX object 
> > libcaf_openssl/CMakeFiles/libcaf_openssl_shared.dir/src/manager.cpp.o
> > clang: warning: argument unused during compilation: '-L/lib'
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:100:8:
> >  error: use of undeclared identifier 'get_or'
> >if (!get_or(config(), "middleman.attach-utility-actors", false))
> > ^
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:102:14:
> >  error: calling a private constructor of class 
> > 'caf::typed_actor,
> >  unsigned short, caf::intrusive_ptr,
> >std::__1::set, 
> > std::__1::less >, 
> > std::__1::allocator > >, 
> > std::__1::basic_string, bool>, caf::output_tuple >, 
> > caf::typed_mpi, 
> > unsigned short,
> >std::__1::basic_string, bool>, caf::output_tuple > short> >, 
> > caf::typed_mpi, 
> > std::__1::basic_string, unsigned short>, 
> > caf::output_tuple > caf::intrusive_ptr,
> >std::__1::set, 
> > std::__1::less >, 
> > std::__1::allocator > > > >, 
> > caf::typed_mpi,
> >  caf::actor_addr, unsigned short>, caf::output_tuple >,
> >
> > caf::typed_mpi, 
> > unsigned short>, caf::output_tuple >, 
> > caf::typed_mpi, 
> > caf::node_id, std::__1::basic_string, caf::message, 
> > std::__1::set,
> >std::__1::less >, 
> > std::__1::allocator > > >, 
> > caf::output_tuple > >, 
> > caf::typed_mpi, 
> > caf::node_id>, caf::output_tuple >std::__1::basic_string, unsigned short> > >'
> >manager_ = nullptr;
> >   ^
> > /usr/local/include/caf/typed_actor.hpp:270:3: note: declared private here
> >typed_actor(actor_control_block* ptr) : ptr_(ptr) {
> >^
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:112:27:
> >  error: no member named 'openssl_certificate' in 'caf::actor_system_config'
> >  if (system().config().openssl_certificate.size() == 0)
> >  ~ ^
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:114:27:
> >  error: no member named 'openssl_key' in 'caf::actor_system_config'
> >  if (system().config().openssl_key.size() == 0)
> >  ~ ^
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:131:10:
> >  error: use of undeclared identifier 'openssl_manager'; did you mean 
> > 'opencl_manager'?
> >return openssl_manager;
> >   ^~~
> >   opencl_manager
> > /usr/local/include/caf/actor_system.hpp:140:7: note: 'opencl_manager' 
> > declared here
> >opencl_manager,
> >^
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:140:14:
> >  error: no member named 'openssl_certificate' in 'caf::actor_system_config'
> >return cfg.openssl_certificate.size() > 0 || cfg.openssl_key.size() > 0
> >   ~~~ ^
> > /usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:140:52:
> >  error: no member named 'openssl_key' in 'caf::actor_system_config'
> >return cfg.openssl_certificate.size() > 0 || cfg.openssl_key.size() > 0
> > ~~~ ^
> > /usr/local/src/master/aux/bro

[Bro-Dev] error compiling master

2018-07-10 Thread Aashish Sharma
Probably obvious, but I am not very sure, so asking here. 

I see this error trying to build the current master - thoughts on what I am missing? 

(trying the build on: FreeBSD 10.3-STABLE) 

I am building as: 

./configure --prefix=/usr/local/bro-master && make 



..
..

gmake[6]: Leaving directory '/usr/local/src/master/build/aux/broker/caf-build'
gmake[6]: Entering directory '/usr/local/src/master/build/aux/broker/caf-build'
[ 96%] Building CXX object 
libcaf_openssl/CMakeFiles/libcaf_openssl_shared.dir/src/manager.cpp.o
clang: warning: argument unused during compilation: '-L/lib'
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:100:8:
 error: use of undeclared identifier 'get_or'
  if (!get_or(config(), "middleman.attach-utility-actors", false))
   ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:102:14:
 error: calling a private constructor of class 
'caf::typed_actor,
 unsigned short, caf::intrusive_ptr,
  std::__1::set, 
std::__1::less >, 
std::__1::allocator > >, 
std::__1::basic_string, bool>, caf::output_tuple >, 
caf::typed_mpi, unsigned 
short,
  std::__1::basic_string, bool>, caf::output_tuple >, 
caf::typed_mpi, 
std::__1::basic_string, unsigned short>, caf::output_tuple,
  std::__1::set, 
std::__1::less >, 
std::__1::allocator > > > >, 
caf::typed_mpi, 
caf::actor_addr, unsigned short>, caf::output_tuple >,
  caf::typed_mpi, 
unsigned short>, caf::output_tuple >, 
caf::typed_mpi, 
caf::node_id, std::__1::basic_string, caf::message, 
std::__1::set,
  std::__1::less >, 
std::__1::allocator > > >, 
caf::output_tuple > >, 
caf::typed_mpi, 
caf::node_id>, caf::output_tuple, unsigned short> > >'
  manager_ = nullptr;
 ^
/usr/local/include/caf/typed_actor.hpp:270:3: note: declared private here
  typed_actor(actor_control_block* ptr) : ptr_(ptr) {
  ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:112:27:
 error: no member named 'openssl_certificate' in 'caf::actor_system_config'
if (system().config().openssl_certificate.size() == 0)
~ ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:114:27:
 error: no member named 'openssl_key' in 'caf::actor_system_config'
if (system().config().openssl_key.size() == 0)
~ ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:131:10:
 error: use of undeclared identifier 'openssl_manager'; did you mean 
'opencl_manager'?
  return openssl_manager;
 ^~~
 opencl_manager
/usr/local/include/caf/actor_system.hpp:140:7: note: 'opencl_manager' declared 
here
  opencl_manager,
  ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:140:14:
 error: no member named 'openssl_certificate' in 'caf::actor_system_config'
  return cfg.openssl_certificate.size() > 0 || cfg.openssl_key.size() > 0
 ~~~ ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:140:52:
 error: no member named 'openssl_key' in 'caf::actor_system_config'
  return cfg.openssl_certificate.size() > 0 || cfg.openssl_key.size() > 0
   ~~~ ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:141:17:
 error: no member named 'openssl_passphrase' in 'caf::actor_system_config'
 || cfg.openssl_passphrase.size() > 0 || cfg.openssl_capath.size() > 0
~~~ ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:141:54:
 error: no member named 'openssl_capath' in 'caf::actor_system_config'
 || cfg.openssl_passphrase.size() > 0 || cfg.openssl_capath.size() > 0
 ~~~ ^
/usr/local/src/master/aux/broker/3rdparty/caf/libcaf_openssl/src/manager.cpp:142:17:
 error: no member named 'openssl_cafile' in 'caf::actor_system_config'
 || cfg.openssl_cafile.size() > 0;
~~~ ^
10 errors generated.
gmake[6]: *** 
[libcaf_openssl/CMakeFiles/libcaf_openssl_shared.dir/build.make:63: 
libcaf_openssl/CMakeFiles/libcaf_openssl_shared.dir/src/manager.cpp.o] Error 1
gmake[6]: Leaving directory '/usr/local/src/master/build/aux/broker/caf-build'
gmake[5]: *** [CMakeFiles/Makefile2:394: 
libcaf_openssl/CMakeFiles/libcaf_openssl_shared.dir/all] Error 2
gmake[5]: Leaving directory '/usr/local/src/master/build/aux/broker/caf-build'
gmake[4]: *** [Makefile:128: all] Error 2
gmake[4]: Leaving directory '/usr/local/src/master/build/aux/broker/caf-build'
*** Error code 2

Stop.
make[3]: stopped in /usr/local/src/master/build
*** Error code 1

Stop.
make[2]: stopped in /usr/local/src/master/build
*** Error code 1

Stop.
make[1]: stopped in /usr/local/src/master/build
*** Error code 1

Stop.
make: stopped in /usr/local/src/master
___
bro-dev mailing list
bro-dev@bro.org

Re: [Bro-Dev] input-framework reporter_error vs reporter_warning events ?

2018-04-20 Thread Aashish Sharma
Hah! Should have checked the change notes more carefully!

No wonder a bunch of pre-2.5.1 scripts never complained :) they were all tapping
into the reporter_error event. I'll forward-port those now.
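
For anyone else forward-porting, this is roughly the shape of the change on my end
(a sketch from memory, not the exact scripts; the InputAscii option names should be
double-checked against NEWS/init-bare):

# New-style handler: file problems now arrive as warnings.
event reporter_warning(t: time, msg: string, location: string)
    {
    if ( /Input::READER_ASCII/ in msg )
        print fmt("input problem: %s (%s)", msg, location);
    }

# Or restore the pre-2.5.1 fail-hard behavior (option name assumed from NEWS):
# redef InputAscii::fail_on_file_problem = T;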

Thanks Johanna! 

Aashish 

On Fri, Apr 20, 2018 at 02:34:13PM -0700, Johanna Amann wrote:
> Hi Aashish,
> 
> This changed with Bro 2.5.1. To quote NEWS:
> 
> - The input framework's Ascii reader is now more resilient. If an input
>   is marked to reread a file when it changes and the file didn't exist
>   during a check Bro would stop watching the file in previous versions.
>   The same could happen with bad data in a line of a file.  These
>   situations do not cause Bro to stop watching input files anymore. The
>   old behavior is available through settings in the Ascii reader.
> 
> Johanna
> 
> On 20 Apr 2018, at 14:09, Aashish Sharma wrote:
> 
> > While testing other stuff, I realized that if input-framework cannot
> > find a file
> > its now generating reporter_warning event instead of reporter_error ?
> > 
> > Did "error" changed to "warning" for some reason ? Wasn't previously
> > this a
> > error condition ?
> > 
> > 
> > 0.00Reporter::WARNING
> > /DATA/feeds/BRO-feeds/WIRED.blocknet.2/Input::READER_ASCII: Init: cannot
> > open /DATA/feeds/BRO-feeds/WIRED.blocknet.2(empty)
> > 
> > 
> > 1) Also in what situations input-framework would generate WARNINGS vs
> > ERRORS ?
> > 
> > 2) Does warnings means READER_ASCII will try to read file again after
> > some time
> > or does it just gives up and waits for me to tap into reporter_warning
> > event to
> > handle the situation ?
> > 
> > Thanks,
> > Aashish
> > ___
> > bro-dev mailing list
> > bro-dev@bro.org
> > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] input-framework reporter_error vs reporter_warning events ?

2018-04-20 Thread Aashish Sharma
While testing other stuff, I realized that if the input framework cannot find a file,
it now generates a reporter_warning event instead of reporter_error? 

Did "error" change to "warning" for some reason? Wasn't this previously an
error condition? 


0.00Reporter::WARNING   
/DATA/feeds/BRO-feeds/WIRED.blocknet.2/Input::READER_ASCII: Init: cannot open 
/DATA/feeds/BRO-feeds/WIRED.blocknet.2(empty)


1) Also, in what situations would the input framework generate WARNINGS vs ERRORS?

2) Does a warning mean READER_ASCII will try to read the file again after some time,
or does it just give up and wait for me to tap into the reporter_warning event to
handle the situation?

Thanks, 
Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] scheduling events vs using _func ?

2018-04-19 Thread Aashish Sharma
> conn.log because the logger aggregated the records from all the workers.  If 
> the manager
> is running out of memory tracking all of the scanners, then the single 
> instance of the python
> script is going to run into the same issue at some point.

Oh, I totally forgot to add an important point. 

The issue isn't memory to begin with. The issue was that 2 million entries in a table
can make the expire_function slow, clogging the event queue, stalling network_time,
and leaving the manager lagging significantly behind. 

Holding potential scanners on the workers lets me break the manager's table down into
much smaller tables across the workers. 

> How are you tracking slow scanners on the workers?  If you have 50 workers 
> and you
> are not distributing the data between them, there's only a 1 in 50 chance 
> that you'll

Exactly, that's what keeps worker table sizes small until there are enough
connections to flag something as a scanner. Note: the worker tables only keep
IP -> start_time. 

They report a potential scanner to the manager, but the manager doesn't need to keep
it in a table. I think I use a combination of a bloomfilter and hyperloglog there to
scale to millions easily. 
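
Roughly, the manager-side dedup looks like this (a hedged sketch, not the actual
script - the Scan::potential_scanner event name is made up here):

module Scan;

export {
    global potential_scanner: event(ip: addr);
}

global seen_scanners: opaque of bloomfilter;
global scanner_card: opaque of cardinality;

event bro_init()
    {
    seen_scanners = bloomfilter_basic_init(0.001, 10000000);
    scanner_card = hll_cardinality_init(0.01, 0.95);
    }

# Workers raise this for candidates; the manager only counts each IP once,
# without keeping a per-IP table entry.
event Scan::potential_scanner(ip: addr)
    {
    if ( bloomfilter_lookup(seen_scanners, ip) > 0 )
        return;

    bloomfilter_add(seen_scanners, ip);
    hll_cardinality_add(scanner_card, ip);
    }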

Note 2: this is to generate the scan_summary, not to flag a scanner. 

this thing:

#fields ts  scanner  state  detection  start_ts  end_ts  detect_ts  detect_latency  total_conn  total_hosts_scanned  duration  scan_rate  country_code  region  city  distance  event_peer
#types  time  addr  enum  string  time  time  time  interval  count  count  interval  double  string  string  string  double  string
1522090899.384438  122.224.112.162  Scan::DETECT  LandMine  1522090219.816163  1522090744.317532  1522090744.317532  524.501369  19   19   524.501369    27.605335  CN  02  Hangzhou  6243.450744  bro
1522094550.487969  122.224.112.162  Scan::UPDATE  LandMine  1522090219.816163  1522094128.634672  1522090744.317532  524.501369  110  109  3908.818509   35.86072   CN  02  Hangzhou  6243.450744  bro
1522098156.871486  122.224.112.162  Scan::UPDATE  LandMine  1522090219.816163  1522097984.861365  1522090744.317532  524.501369  225  227  7765.045202   34.207248  CN  02  Hangzhou  6243.450744  bro
1522101784.996068  122.224.112.162  Scan::UPDATE  LandMine  1522090219.816163  1522101081.946002  1522090744.317532  524.501369  359  365  10862.129839  29.75926   CN  02  Hangzhou  6243.450744  bro
1522354414.224635  122.224.112.162  Scan::FINISH  LandMine  1522090219.816163  1522103520.055414  1522090744.317532  524.501369  488  507  13300.239251  26.233214  CN  02  Hangzhou  6243.450744  bro



Aashish 

On Wed, Apr 18, 2018 at 01:46:08PM +, Azoff, Justin S wrote:
> > On Apr 17, 2018, at 4:04 PM, Aashish Sharma <asha...@lbl.gov> wrote:
> > 
> > For now, I am resorting to _func route only. I think by using some 
> > more
> > heuristics in worker's expire functions for more aggregated stats, I am 
> > able to
> > shed load on manager where manager doesn't need to track ALL potential 
> > scanners. 
> > 
> > Lets see, I am running to see if new code works without exhausting memory 
> > for few days. 
> > 
> > Yes certainly, the following changed did address the manager network_time()
> > stall issues:
> > 
> > redef table_expire_interval = 0.1 secs ;
> > redef table_incremental_step=25 ;
> > 
> > Useful observation: if you want to expire a lot of entries from a table/set,
> > expire few but expire often. 
> > 
> > I Still need to determine limits of both table_incremental_step,
> > table_expire_interval and this works for million or million(s) of entires. 
> 
> That should probably work unless you are adding new table entries at a rate 
> higher than 250/second.
> You may need to tweak that so interval x step is at least the rate of new 
> entries.
> 
> > 
> > Yes, it seems like that. I still don't know at what point. In previous runs it
> > appeared after the table had 1.7-2.3 million entries. But then I don't think it's
> > a function of the counts, but of how much RAM I've got on the system. Somewhere
> > in that range is when the manager ran out of memory. However (as stated above),
> > I was able to come up with a little heuristic which still allows me to keep track
> > of really slow scanners, while not burdening the manager but rather letting the
> > load be on the workers. A simple observation that really slow scanners aren't
> > going to have 
&

Re: [Bro-Dev] scheduling events vs using _func ?

2018-04-19 Thread Aashish Sharma
Justin, 

On Wed, Apr 18, 2018 at 01:46:08PM +, Azoff, Justin S wrote:
> How are you tracking slow scanners on the workers?  If you have 50 workers 
> and you
> are not distributing the data between them, there's only a 1 in 50 chance 
> that you'll
> see the same scanner twice on the same worker, and a one in 2500 that you'd 
> see
> 3 packets in a row on the same worker... and 1:125,000 for 4 in a row.

Yes, that was the observation and the idea. Really slow scanners (there won't be too
many of them too soon, obviously) each get their start time tracked on the workers.
The odds of hitting the same worker are low, so the burden of tracking start times
for thousands of slow scanners is distributed fairly evenly across 10/50 workers
*instead* of the manager having to store all of it. 

So 600K slow scanners means |manager_table| += 600K vs. |worker_table| = 600K/50,
so the memory burden is more distributed. 

I checked some numbers on my end: since midnight we flagged 172K scanners while
tracking 630K potential scanners which will eventually be flagged. 

The issue isn't flagging these. The issue was being accurate about the very first
time we saw an IP connect to us and keeping that in memory - not a required stat,
but good to have. 

> >> I'd suggest a different way to think about
> >> structuring the problem: you could Rendezvous Hash the IP addresses across
> >> proxies, with each one managing expiration in just their own table.  In 
> >> that
> >> way, the storage/computation can be uniformly distributed and you should be
> >> able to simply adjust number of proxies to fit the required scale.
> > 
> That doesn't simplify anything, that just moves the problem.  You can only 
> tail the single
> conn.log because the logger aggregated the records from all the workers.  If 
> the manager
> is running out of memory tracking all of the scanners, then the single 
> instance of the python
> script is going to run into the same issue at some point.

I agree, but we digress from the actual issue here. See below. 

> > So yes we can shed load from manger -> workers -> proxies. I'll try this
> > approach. But I think I am also going to try (with new broker-enabled 
> > cluster)
> > approach of sending all connections to one proxy/data-store and just do
> > aggregation there and see if that works out (the tail -f conn.log |
> > 'python-script' approach). Admittedly, this needs more thinking to get the 
> > right
> > architecture in the new cluster era! 
> 
> No.. this is just moving the problem again.  If your manager is running out 
> of memory and you
> move everything to one proxy, that's just going to have the same problem.


I think we've talked about this on and off for roughly two years. I am probably
misunderstanding or being unclear. The issue is the complexity of aggregation that
clusterization introduces for scan detection. You can use many proxies, many data
nodes, etc., but as long as the data is distributed, real-time aggregation is a
problem. The data needs to be concentrated in one place. A tail -f conn.log is data
concentrated in one place. 

It's a separate issue that a conn.log entry is 5+ seconds late, which can already
miss a significant scan. 

> The fix is to use the distributing message routing features that I've been 
> talking about for a while
> (and that Jon implemented in the actor-system branch!)
> 
> The entire change to switch simple-scan from aggregating all scanners on a 
> single manager to
> aggregating scanners across all proxies (which can be on multiple machines) 
> is swapping

Aggregating across all proxies is still distributing data around, so the way I see
it, you are moving the problem around :) But as I said, I don't know how this works
yet since I haven't tried the new broker stuff. 

>  event Scan::scan_attempt(scanner, attempt);
> with
> Cluster::publish_hrw(Cluster::proxy_pool, scanner, 
> Scan::scan_attempt, scanner, attempt);
> (with some @ifdefs to make it work on both versions of bro)

I am *really* looking forward to trying this stuff out in the new broker model. 
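
For reference, my reading of that swap with the @ifdef guard Justin mentions (a
hedged sketch - Scan::scan_attempt and the Attempt record belong to simple-scan, and
Cluster::publish_hrw/Cluster::proxy_pool to the actor-system branch, so the names
are approximate):

function report_scan_attempt(scanner: addr, attempt: Scan::Attempt)
    {
@ifdef ( Cluster::publish_hrw )
    # actor-system branch: route by key so each scanner lands on one proxy.
    Cluster::publish_hrw(Cluster::proxy_pool, scanner, Scan::scan_attempt, scanner, attempt);
@else
    # classic cluster: raise locally and let worker2manager_events forward it.
    event Scan::scan_attempt(scanner, attempt);
@endif
    }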

Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] scheduling events vs using _func ?

2018-04-13 Thread Aashish Sharma
I have an aggregation policy where I am trying to keep counts of the number of
connections an IP made, in a cluster setup. 

For now, I am using tables on the workers and the manager, and using expire_func to
trigger worker2manager and manager2worker events. 

All works great until the tables grow to > 1 million entries, after which the
expire_functions start clogging the manager and slowing it down. 

Example of Timer from prof.log on manager:

1523636760.591416 Timers: current=57509 max=68053 mem=4942K lag=0.44s
1523636943.983521 Timers: current=54653 max=68053 mem=4696K lag=168.39s
1523638289.808519 Timers: current=49623 max=68053 mem=4264K lag=1330.82s
1523638364.873338 Timers: current=48441 max=68053 mem=4162K lag=60.06s
1523638380.344700 Timers: current=50841 max=68053 mem=4369K lag=0.47s

So instead of using expire_func, I can probably try schedule {}; but I am not sure
how scheduling events is any different internally from scheduling expire_funcs.

I'd like to think/guess that scheduling events is probably less taxing, but I wanted
to check with the greater group for thoughts - especially insights into the internal
processing queues. 
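
To make the comparison concrete, this is the pattern I mean (a minimal sketch, not
the actual aggregation policy):

# Alternative A: let the table expire entries itself.
# global conn_counts: table[addr] of count &create_expire=10min &expire_func=...;

# Alternative B: a plain table, with cleanup driven by scheduled events.
global conn_counts: table[addr] of count;

global flush_entry: event(ip: addr);

event flush_entry(ip: addr)
    {
    if ( ip in conn_counts )
        delete conn_counts[ip];
    }

event new_connection(c: connection)
    {
    local ip = c$id$orig_h;

    if ( ip !in conn_counts )
        {
        conn_counts[ip] = 0;
        schedule 10min { flush_entry(ip) };
        }

    ++conn_counts[ip];
    }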

Thanks,
Aashish 


___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] timer delays between different events for same connection

2018-04-13 Thread Aashish Sharma
Ah! It is obvious now! 

Yep, that was it. It's a relatively slow scan, and I only extracted the activity for
this specific source IP out of the time machine. 

I didn't see this behavior in other test cases because there I pull scanners based
on ports, so the traffic is somewhat more 'fluid'. 

Thanks, 
Aashish 

On Fri, Apr 13, 2018 at 07:46:33AM -0400, Seth Hall wrote:
> 
> 
> On 13 Apr 2018, at 0:30, Aashish Sharma wrote:
> 
> > So I am seeing some weird stuff in my sample pcap of scanners. May be
> > too
> > obvious and I am just not seeing why/how of it.
> 
> It's a straight forward answer but not completely obvious. :)
> 
> > Q. Why would connection_attempt event kick in after 36 minutes and 6
> > seconds ? (
> > 06:13:48 - 05:37:42 ) ?
> 
> I bet that you have a jump in timestamps in your pcap.  Since Bro's internal
> clock is driven forward by seeing timestamps associated with packets it's
> possible that your pcap has a 36 minute jump in timestamps so Bro couldn't
> have possibly expired anything in the time before that because from its
> perspective there was an immediate jump in time.  You don't normally
> experience the effects of this behavior in traffic you're sniffing live
> because you will tend to have many packets every second so you see Bro's
> clock driven forward in very tiny increments as you would expect.  If you go
> a long time without receiving a packet is when stuff gets tricky.
> 
>   .Seth
> 
> --
> Seth Hall * Corelight, Inc * www.corelight.com
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] timer delays between different events for same connection

2018-04-12 Thread Aashish Sharma

So I am seeing some weird stuff in my sample pcap of scanners. Maybe it's too
obvious and I am just not seeing the why/how of it. 

Here is the issue (times are in human-readable format for easier reading): 

So I just picked one session from conn.log, and this is the connection in question
(there are many more like this): 


$ fgrep CspAa42NoEGEaXK4ci conn.log | cf
Apr 12 05:37:42  CspAa42NoEGEaXK4ci  191.254.157.138  45107  128.3.97.204  23  tcp  -  -  -  -  S0  F  T  0  S  1  40  0  0  -

Now, as part of debugging, I have dumped the network_time for the various events
that process this connection: 

Apr 12 05:37:42 new_connection  CspAa42NoEGEaXK4ci
Apr 12 06:13:48 connection_attempt  CspAa42NoEGEaXK4ci
Apr 12 06:13:48 connection_state_remove  CspAa42NoEGEaXK4ci


My understanding is that there are various timers upon whose expiration Bro infers
events such as connection_attempt, connection_reset, etc. - timers such as
tcp_attempt_delay, tcp_SYN_timeout, and tcp_close_delay, among others. But all these
timers are generally 5 seconds. 

Q. Why would the connection_attempt event kick in after 36 minutes and 6 seconds
(06:13:48 - 05:37:42)? 

I have a pcap to share if anyone is interested in replicating this on their end.

Aashish 

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] [Cron <bro@xx> /usr/local/bin/randsleep 59 && broctl cron]

2018-03-19 Thread Aashish Sharma
Nevermind! Please ignore! Issue resolved - it was a mistake on my end!

Aashish

On Mon, Mar 19, 2018 at 4:11 PM, Aashish Sharma <asha...@lbl.gov> wrote:
> So I just moved one of my boxes to bro-2.5.3 and see this report.
>
> Any ideas - ? permission issues or something else going on with broctl cron ?
>
> Aashish
>
> - Forwarded message from Cron Daemon <bro@> -
>
> Date: Mon, 19 Mar 2018 16:05:38 -0700 (PDT)
> From: Cron Daemon
> To: bro
> Subject: Cron <bro@> /usr/local/bin/randsleep 59 && broctl cron
>
> Traceback (most recent call last):
>   File "/usr/local/bro/bin/broctl", line 833, in 
> sys.exit(main())
>   File "/usr/local/bro/bin/broctl", line 809, in main
> cmdsuccess = loop.onecmd(cmdline)
>   File "/usr/local/lib/python2.7/cmd.py", line 221, in onecmd
> return func(arg)
>   File "/usr/local/bro/bin/broctl", line 388, in do_cron
> self.broctl.cron(watch)
>   File "/usr/local/bro-2.5.3/lib/broctl/BroControl/broctl.py", line 44, in 
> wrapper
> return func(self, *args, **kwargs)
>   File "/usr/local/bro-2.5.3/lib/broctl/BroControl/broctl.py", line 353, in 
> cron
> self.controller.cron(watch)
>   File "/usr/local/bro-2.5.3/lib/broctl/BroControl/control.py", line 1451, in 
> cron
> tasks.log_stats(5)
>   File "/usr/local/bro-2.5.3/lib/broctl/BroControl/cron.py", line 81, in 
> log_stats
> if self.config.mailreceivingpackets:
>   File "/usr/local/bro-2.5.3/lib/broctl/BroControl/config.py", line 236, in 
> __getattr__
> raise AttributeError("unknown config attribute %s" % attr)
> AttributeError: unknown config attribute mailreceivingpackets
>
> - End forwarded message -
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [Cron <bro@xx> /usr/local/bin/randsleep 59 && broctl cron]

2018-03-19 Thread Aashish Sharma
So I just moved one of my boxes to bro-2.5.3 and I'm seeing this report. 

Any ideas? Permission issues, or something else going on with broctl cron?

Aashish 

- Forwarded message from Cron Daemon  -

Date: Mon, 19 Mar 2018 16:05:38 -0700 (PDT)
From: Cron Daemon 
To: bro
Subject: Cron  /usr/local/bin/randsleep 59 && broctl cron

Traceback (most recent call last):
  File "/usr/local/bro/bin/broctl", line 833, in 
sys.exit(main())
  File "/usr/local/bro/bin/broctl", line 809, in main
cmdsuccess = loop.onecmd(cmdline)
  File "/usr/local/lib/python2.7/cmd.py", line 221, in onecmd
return func(arg)
  File "/usr/local/bro/bin/broctl", line 388, in do_cron
self.broctl.cron(watch)
  File "/usr/local/bro-2.5.3/lib/broctl/BroControl/broctl.py", line 44, in 
wrapper
return func(self, *args, **kwargs)
  File "/usr/local/bro-2.5.3/lib/broctl/BroControl/broctl.py", line 353, in cron
self.controller.cron(watch)
  File "/usr/local/bro-2.5.3/lib/broctl/BroControl/control.py", line 1451, in 
cron
tasks.log_stats(5)
  File "/usr/local/bro-2.5.3/lib/broctl/BroControl/cron.py", line 81, in 
log_stats
if self.config.mailreceivingpackets:
  File "/usr/local/bro-2.5.3/lib/broctl/BroControl/config.py", line 236, in 
__getattr__
raise AttributeError("unknown config attribute %s" % attr)
AttributeError: unknown config attribute mailreceivingpackets

- End forwarded message -
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] [Bro-Commits] [git/bro] topic/actor-system: First-pass broker-enabled Cluster scripting API + misc. (07ad06b)

2017-11-02 Thread Aashish Sharma
My view:

I have again and again encountered the following types of cases while doing
script/pkg work:

1) manager2worker: the input framework reads external data and all workers need to
   see it.
   example: intel framework
2) worker2manager: workers see something and report to the manager; the manager
   keeps aggregated counts to make decisions.
   example: scan detection
3) worker2manager2all-workers: workers see something, send it to the manager, and
   the manager distributes it to all workers.
   example: tracking clicked URLs extracted from email

Basically, Bro has two kinds of heuristic needs:

a) Cooked data analysis and correlations - cooked data is the data which ends up in
logs, basically the entire 'protocol record', e.g. c$http or c$smtp - these are the
majority. 

Cooked data processing can also be interpreted, for simplicity, as:

tail -f blah.log | ./python-script

but inside Bro. 
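
In practice, the closest thing today is tapping a stream's log event, which is
essentially the tail -f idea inside Bro (a small sketch using the stock HTTP log):

event HTTP::log_http(rec: HTTP::Info)
    {
    # One "cooked" record per http.log line, already assembled by the framework.
    if ( rec?$host && rec?$uri )
        print fmt("cooked record: %s%s from %s", rec$host, rec$uri, rec$id$orig_h);
    }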

b) Raw or derived data - data you need to extract from traffic with a defined policy
of your own (for example, URLs extracted from email by tapping into the
mime_data_all event, or MAC addresses extracted from router
advertisement/solicitation events), or something which is not yet in an ::Info
record, or a new 'thing' - this should be rare, with few use cases over time. 


So, in short, give me reliable events which are simply tail -f log functionality on
a data/processing node. It will reduce the number of synchronization needs by an
order of magnitude (or more). 

For (b) - raw or derived data - we can keep the complexities of broker stores,
syncs, etc., but I have hopes that refined raw data could easily become its own log
and then be processed as cooked data. 

So a lot of the data-centrality issues related to the cluster can go away with a
data node which handles a lot of the cooked-data work for (1), (2), and in some
cases (3). 

Now, while Justin's multiple-data-nodes idea has its merits, I am not much of a fan
of it. The reason is that having multiple data nodes results in the same set of
problems - synchronization, latencies, a mess of data2worker and worker2data events,
etc. 

I'd love to keep things rather simple. Cooked data goes to one (or more) data nodes
(data stores). Just replicate for reliability rather than pick and choose what goes
where. 

Just picking up some things: 

> > In the case of broadcasting from a worker to all other workers, the reason 
> > why you relay via another node is only because workers are not connected to 
> > each other?  Do we know that a fully-connected cluster is a bad idea?  i.e. 
> > why not have a worker able to broadcast directly to all other workers if 
> > that’s what is needed?
> 
> Mostly so that workers don't end up spending all their time sending out 
> messages when they should be analyzing packets.

Yes. Also, I have seen this cause broadcast storms. That's why I have always used
the manager as a central judge of what goes out. Often the same data is seen by all
workers, so if the manager is smart, it can send just the first instance to the
workers, and all the other workers can then stop announcing it. 

Let me explain (a rough script sketch follows the list): 

- I block a scanner on 3 connections.
- 3 workers see a connection each - they each report to the manager.
- The manager says "yep, scanner" and sends a note to all workers saying traffic
  from this IP is now uninteresting, stop reporting.
- Say there are 50 workers.
- Total communication events = 53.
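
In script terms, that judge pattern is roughly this (a hedged sketch with made-up
event names; the per-node @if guards are omitted for brevity):

module Scan;

export {
    global report_attempt: event(scanner: addr);
    global ip_is_boring: event(scanner: addr);
}

redef Cluster::worker2manager_events += /Scan::report_attempt/;
redef Cluster::manager2worker_events += /Scan::ip_is_boring/;

global attempts: table[addr] of count &create_expire=1hr;
global boring: set[addr] &create_expire=1day;

# Manager side: count reports and, on the 3rd, tell everyone to stop reporting.
event Scan::report_attempt(scanner: addr)
    {
    if ( scanner !in attempts )
        attempts[scanner] = 0;

    ++attempts[scanner];

    if ( attempts[scanner] == 3 )
        event Scan::ip_is_boring(scanner);
    }

# Worker side: remember boring IPs and stop raising report_attempt for them.
event Scan::ip_is_boring(scanner: addr)
    {
    add boring[scanner];
    }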

If all workers send data to all workers, a scanner hitting 65,000 hosts will be a
mess inside the cluster, especially when scanners are hitting in milliseconds, not
seconds. 

Similar to this is another case. 

Let's say: 

- I read 1 million blacklisted IPs from a file on the manager.
- The manager sends 1 million x 50 events (to 50 workers).
- Each worker needs to report if a blacklisted IP has touched the network.
- Now imagine we want to keep a count of how many unique local IPs each of these
  blacklisted IPs has touched,
- and at what rate, and when the first and last contacts were.

(BTW, I have a working script for this - so whatever the new broker does, it needs
to be able to give me this functionality.)

Here is a sample log:

#fields ts  ipaddr  ls  days_seen  first_seen  last_seen  active_for  last_active  hosts  total_conns  source
1509606970.541130  185.87.185.45   Blacklist::ONGOING  3  1508782518.636892  1509462618.466469  07-20:55:00  01-16:05:52  20  24   TOR
1509606980.542115  46.166.162.53   Blacklist::ONGOING  3  1508472908.494320  1509165782.304233  08-00:27:54  05-02:33:18  7   9    TOR
1509607040.546524  77.161.34.157   Blacklist::ONGOING  3  1508750181.852639  1509481945.439893  08-11:16:04  01-10:44:55  7   9    TOR
1509607050.546742  45.79.167.181   Blacklist::ONGOING  4  1508440578.524377  1508902636.365934  05-08:20:58  08-03:40:14  66  818  TOR
1509607070.547143  192.36.27.7     Blacklist::ONGOING  6  

Re: [Bro-Dev] bro-pkg dependencies ?

2017-09-08 Thread Aashish Sharma
Ah! Nice. Yes, this is what I was looking for. Thanks for the pointer Seth!
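
For the archives, the relevant bit of bro-pkg.meta looks roughly like this (from
memory - the top-dns meta Seth points at is the authoritative example, and
"some/other-package" below is just a placeholder name):

[package]
script_dir = scripts
depends =
  some/other-package *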


On Fri, Sep 08, 2017 at 02:45:21PM -0400, Seth Hall wrote:
> 
> 
> On 8 Sep 2017, at 13:29, Aashish Sharma wrote:
> 
> > Can we specify dependent packages in bro-pkg and would bro-pkg go and
> > resolve
> > (install) those dependencies by itself ?
> 
> Yep.  Here's one that does..
>   https://github.com/corelight/top-dns/blob/master/bro-pkg.meta
> 
> > Also, can we make the bro-pkg dump some output (notes) before? or after?
> > pkg
> > installation - something like see this file for details etc ?
> 
> bro-pkg tells you that it's going to install dependencies and asks if you
> want to proceed.  If you want to see how it works, install the
> corelight/top-dns package.
> 
>   .Seth
> 
> --
> Seth Hall * Corelight, Inc * www.corelight.com
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] bro-pkg dependencies ?

2017-09-08 Thread Aashish Sharma
Thanks Adam!

I think we might also need to introduce the concept of package conflicts too:
cannot install B if A is installed. 

Aashish 


On Fri, Sep 08, 2017 at 06:05:16PM +, Slagell, Adam J wrote:
> At list recursively finding all of those dependencies is on the roadmap. 
> Terry has that on his queue.
> 
> > On Sep 8, 2017, at 12:29 PM, Aashish Sharma <asha...@lbl.gov> wrote:
> > 
> > Can we specify dependent packages in bro-pkg and would bro-pkg go and 
> > resolve
> > (install) those dependencies by itself ?
> > 
> > Also, can we make the bro-pkg dump some output (notes) before? or after? pkg
> > installation - something like see this file for details etc ?
> > 
> > Aashish 
> > ___
> > bro-dev mailing list
> > bro-dev@bro.org
> > http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
> 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] bro-pkg dependencies ?

2017-09-08 Thread Aashish Sharma
Can we specify dependent packages in bro-pkg, and would bro-pkg then go and resolve
(install) those dependencies by itself?

Also, can we make bro-pkg dump some output (notes) before or after package
installation - something like "see this file for details", etc.?

Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] input-framework file locations

2017-08-25 Thread Aashish Sharma
[ re-igniting an OLD thread ]

OK so @DIR sort of works.

I've used this as:

global smtp_indicator_feed = fmt("%s/feeds/smtp_malicious_indicators.out", @DIR);

The problem is: @DIR gives the path of the directory where the script resides.

So when I do broctl install, all the scripts go into
../spool/installed-scripts-do-not-touch/

So while the file is referenced correctly and the input data is read just fine, no
human or other process can append to the input file anymore.

Does the problem make sense ?

I think I am looking for something which can point back to a 'static' path.
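
One workaround sketch I am considering (BRO_FEED_DIR is a made-up environment
variable and this is untested):

module Scan;

export {
    ## Static, human-writable base directory for feeds; overridable per site.
    const feed_dir = (getenv("BRO_FEED_DIR") == "" ?
        "/usr/local/bro/feeds" : getenv("BRO_FEED_DIR")) &redef;
}

global smtp_indicator_feed = fmt("%s/smtp_malicious_indicators.out", feed_dir);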

On Fri, Jul 8, 2016 at 5:41 PM, Robin Sommer  wrote:
>
>
> On Fri, Jul 08, 2016 at 16:59 -0700, you wrote:
>
>> Something similar to __load__.bro model
>
> @DIR gives you the path to the directory the current script is located
> in. Does that help?
>
> Robin
>

===

Original thread:

I have been thinking and trying different things, but for now it appears that if we
are to share policies around, there is no easy way to distribute input files along
with policy files.

Basically, right now I use

redef Scan::whitelist_ip_file = "/usr/local/bro/feeds/ip-whitelist.scan" ;

and then expect everyone to edit the path as their setup demands and place an
accompanying sample file in that directory, or create one for themselves - this all
introduces errors as well as slowing down deployment.

Is there a way I can use relative paths instead of absolute paths for
input-framework digestion? At present, a new-heuristics dir can have a __load__.bro
with all the policies, but the input framework won't read files relative to that
directory or to where it is placed.

redef Scan::whitelist_ip_file = "../feeds/ip-whitelist.scan" ;

Something similar to __load__.bro model

Also, one question I have: should all input files go into a 'standard' feeds/input
dir in Bro, or be scattered around along with their accompanying Bro policies
(i.e., in individual directories)?

Something to think about: with more and more reliance on the input framework, I
think there is a need for 'standardization' of where to put input files and how to
easily find and read them.
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] Check if table element exists

2017-08-08 Thread Aashish Sharma
(Not sure if I am interpreting your question right, but here is how I read it.) 

Basically, use the "in" operator:


local my_ip_table: table[addr] of bool;

local ip: addr = 127.0.0.1;

if ( ip in my_ip_table )
    print "found";
else
    print "not found";


BTW, you can also use the "!in" operator, which is often handier:


if ( something !in my_table )
    # initialize it here
    my_table[something] = Blah;


Or - most scripts simply return early if membership doesn't exist, which helps
eliminate a lot of unneeded runs through the script:

if ( ip !in Site::local_nets )
    return;

 


Here is a (rather complicated but) useful example which stretches the above problem
with an extended use case:

/usr/local/bro/share/bro/policy/frameworks/software/vulnerable.bro

Hope this helps. Let me know if you need more clarification. 

Aashish


On Tue, Aug 08, 2017 at 11:04:23AM -0700, Reinhard Gentz wrote:
> Hi,
> 
> I would like to check if a certain table element exists and then take
> corresponding action like the following:
> 
> if (exists(mytable["my_dynamic_name"]))
> do something
> else
> do something else
> 
> 
> Can someone give me a hint?
> Reinhard

> ___
> bro-dev mailing list
> bro-dev@bro.org
> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] clusterization issue: logger node vs manager node or both ?

2017-06-01 Thread Aashish Sharma
> I *strongly* recommend not running code on the logger.  The whole

I agree and this makes sense. 

> What's the problem you're trying to solve by running code there?

So I have a working clusterized Bro package, but it stops behaving as expected if I
enable a logger node. 

I am calling a worker2manager event inside "event log_smtp", and that event isn't
firing at all. When I disable the logger, the event runs as expected. This led me to
wonder whether the log_* events are somehow running on the logger and not on the
workers, which I doubted. 

This further raises concerns about how the existence of a LOGGER node can affect the
entire clusterization architecture. 

Aashish 

On Thu, Jun 01, 2017 at 02:10:47PM -0400, Seth Hall wrote:
> On Thu, Jun 1, 2017 at 1:12 PM, Aashish Sharma <asha...@lbl.gov> wrote:
> 
> > I can surely do "Cluster::local_node_type() == Cluster::LOGGER" and then 
> > events logger2manager_events and logger2worker_events etc etc so on so 
> > forth.
> 
> I *strongly* recommend not running code on the logger.  The whole
> point of the logger is that it doesn't have any script execution tasks
> to take care of and it's solely dedicated to logging.  What's the
> problem you're trying to solve by running code there?
> 
>   .Seth
> 
> -- 
> Seth Hall * Corelight, Inc * s...@corelight.com * www.corelight.com
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] clusterization issue: logger node vs manager node or both ?

2017-06-01 Thread Aashish Sharma
So with the emergence of the logging node, I am encountering an issue with
clusterization and am seeking feedback on a better way to do this.

Presently I have been using:

@if (( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER ) || ! Cluster::is_enabled())
@endif

and the worker2manager_events and manager2worker_events events. 

With a logging node: 

I can surely do "Cluster::local_node_type() == Cluster::LOGGER" and then
logger2manager_events and logger2worker_events events, etc., so on and so forth. 

The issue I am facing is that, to begin with, I don't know whether someone is going
to run only a manager, or also a logger node, which makes the following clumsy: 

 - @if (( Cluster::is_enabled() && Cluster::local_node_type() ==
Cluster::MANAGER ) || ! Cluster::is_enabled())
 - if manager, then use worker2manager and manager2worker events

OR 
 - @if (( Cluster::is_enabled() && Cluster::local_node_type() ==
Cluster::LOGGER ) || ! Cluster::is_enabled())
 - if logger, then use the logger events?


Any thoughts on how to handle the existence or non-existence of a logger node in a
clusterization scheme?

Aashish 


___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] can I send an opaque of bloomfilter over Cluster::manager2worker_event ?

2017-05-01 Thread Aashish Sharma

> const global_hash_seed: string = "" 

Yes, with global_hash_seed set, bloomfilter movement across the workers is working
fine and as expected, from what I see in initial tests. 
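
For the archives, the setting is a one-liner loaded on every node (the seed string
itself is arbitrary, just keep it identical cluster-wide):

redef global_hash_seed = "my-cluster-seed";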

While we are on this thread, is the following good, or is there a better way to
copy/merge the bloomfilter once it's sent to the workers?

@if (( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER ) || ! Cluster::is_enabled())
event Blacklist::m_w_add_bloom(val: opaque of bloomfilter)
    {
    blacklistbloom = bloomfilter_merge(val, val);
    }
@endif 

I figured merging the same bloom with itself won't make a difference; I primarily
want to copy it into blacklistbloom. 

Aashish 

On Mon, May 01, 2017 at 08:39:03AM -0700, Robin Sommer wrote:
> 
> 
> On Mon, May 01, 2017 at 08:20 -0700, you wrote:
> 
> > Actually - I am not sure if we ever implemented consistent hashing over the
> > cluster;
> 
> Ah, good point, we did implement that, but it needs to be configured:
> 
> ## Seed for hashes computed internally for probabilistic data structures. 
> Using
> ## the same value here will make the hashes compatible between 
> independent Bro
> ## instances. If left unset, Bro will use a temporary local seed.
> const global_hash_seed: string = "" 
> 
> Robin
> 
> --
> Robin Sommer * ICSI/LBNL * ro...@icir.org * www.icir.org/robin
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] can I send an opaque of bloomfilter over Cluster::manager2worker_event ?

2017-04-28 Thread Aashish Sharma
I tried doing that and then merging it with an existing (initialized) bloomfilter
on the worker. 

I see this error:

1493427133.170419   Reporter::INFO  calling inside the m_w_add_bloom
worker-1-
1493427133.170419   Reporter::ERROR incompatible hashers in 
BasicBloomFilter merge  (empty) -
1493427133.170419   Reporter::ERROR failed to merge Bloom filter(empty) 
-
1493427115.582247   Reporter::INFO  calling inside the m_w_add_bloom
worker-6-
1493427115.582247   Reporter::ERROR incompatible hashers in 
BasicBloomFilter merge  (empty) -
1493427115.582247   Reporter::ERROR failed to merge Bloom filter(empty) 
-
1493427116.358858   Reporter::INFO  calling inside the m_w_add_bloom
worker-20   -
1493427116.358858   Reporter::ERROR incompatible hashers in 
BasicBloomFilter merge  (empty) -
1493427116.358858   Reporter::ERROR failed to merge Bloom filter(empty) 
-
1493427115.935649   Reporter::INFO  calling inside the m_w_add_bloom
worker-7-
1493427115.935649   Reporter::ERROR incompatible hashers in 
BasicBloomFilter merge  (empty) -
1493427115.935649   Reporter::ERROR failed to merge Bloom filter(empty) 
-
1493427115.686241   Reporter::INFO  calling inside the m_w_add_bloom
worker-16   -
1493427115.686241   Reporter::ERROR incompatible hashers in 
BasicBloomFilter merge  (empty) -
1493427115.686241   Reporter::ERROR failed to merge Bloom filter(empty) 
-
14934271


I'm not sure if the error is because an opaque of bloomfilter cannot be sent over
worker2manager_events and manager2worker_events, or if I am doing something not
quite right.
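
For completeness, this is roughly how I am building and sending it on the manager
side (a trimmed-down sketch, not the actual script - module/event names are
approximate, and in practice the send happens after the input data is loaded, not in
bro_init):

module Blacklist;

export {
    global m_w_add_bloom: event(val: opaque of bloomfilter);
}

redef Cluster::manager2worker_events += /Blacklist::m_w_add_bloom/;

global blacklistbloom: opaque of bloomfilter;

@if ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER )
event bro_init()
    {
    blacklistbloom = bloomfilter_basic_init(0.001, 1000000);
    bloomfilter_add(blacklistbloom, 192.0.2.1);    # example entry
    event Blacklist::m_w_add_bloom(blacklistbloom);
    }
@endif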

Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] CMU/SEI C++ secure coding best practices

2017-04-18 Thread Aashish Sharma
Anyone seen this out of CMU:

SEI CERT C++ Coding Standard Rules for Developing Safe, Reliable, and
Secure Systems in C++

http://cert.org/downloads/secure-coding/assets/sei-cert-cpp-coding-standard-2016-v01.pdf

Not sure how good/bad/awesome/relevant this is.

Aashish
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [desired broker api as oppose to whats in known-hosts.bro]

2017-03-03 Thread Aashish Sharma
So I came across a sample of Broker API usage:

when ( local res = Broker::exists(Cluster::cluster_store, Broker::data("known_hosts")) )
    {
    local res_bool = Broker::refine_to_bool(res$result);

    if ( res_bool )
        {
        when ( local res2 = Broker::lookup(Cluster::cluster_store, Broker::data("known_hosts")) )
            {
            local res2_bool = Broker::set_contains(res2$result, Broker::data(host));

            if ( ! res2_bool )
                {
                Broker::add_to_set(Cluster::cluster_store, Broker::data("known_hosts"), Broker::data(host));
                Log::write(Known::HOSTS_LOG, [$ts=network_time(), $host=host]);
                }
            }
        timeout 10sec
            { print "timeout"; }
        }
    }
timeout 20sec
    { print "timeout"; }


Now, this isn't too complicated, but I find it cumbersome, and one needs to
understand the execution flow since "when" is involved, etc. 



Here is how I'd envision broker usage (and I know easier said than done...)

define: 

global known_hosts: table[addr] of blah;

now, when I query the table:

if ( addr in known_hosts )
    {
    ...
    }


1a) Given the directive above, Bro should go and check the store if the value isn't
already in the table, and update as needed in the background - if the value isn't in
the store, the if condition would simply fail anyway. 

Maybe also:

1b) You could probably maintain a bloomfilter which builds itself from what's in the
table and works as an index or membership check. 

and 

1c) We also need another directive, something akin to "_store = 1 hrs", which
expires entries from the in-memory table and puts them into the store. 

On the implementation side, I am pretty sure there are complexities, since the
broker model is different and I don't grasp it yet. 

But from the current Bro scripting perspective: 

(1a) would be: if the value isn't in the bloomfilter, call an Input::Event (which
reads the data from the store) and fire an end-of-data-like event - or, similar to
the input framework, give me the capability to fire events when the data read from
the store is complete. This would eliminate the "when" construct and give a much
clearer event-based code execution path (from the scripter's perspective). 

For (1c), likewise, for _store - I am merely using the &(read|write|create)_expire
functions to write to the database. 

So, in summary, we need a directive which works with sets, tables, and Bro data
types. 

If the value is a member, OK; else broker goes and checks the store in the
background, updates the sets, tables, and data structures, and then fires events
when done. 

Save me from all the above "when" constructs and Broker::lookup routines. Let those
happen in the background. 

I'd be happy to talk in person or by video conference if more clarification is
needed. 

Thanks, 
Aashish 


___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] help Reading the backtrace

2017-01-19 Thread Aashish Sharma
So this doesn't (at the moment) seem to be related to table expiration. My table is
maintained on the manager, and the expire_func only runs on the manager.  

But I see one worker stall at 99-100% CPU for a good while, while all the other
workers drop to 5-6% CPU. conn.log continues to grow, though.

GDB points to Get() in install/bro-2.5/src/threading/Queue.h: 

template<typename T>
inline T Queue<T>::Get()
    {
    safe_lock(&mutex[read_ptr]);

    int old_read_ptr = read_ptr;

    if ( messages[read_ptr].empty() && ! ((reader && reader->Killed()) || (writer && writer->Killed())) )
        {
        struct timespec ts;
        ts.tv_sec = time(0) + 5;
        ts.tv_nsec = 0;

->      pthread_cond_timedwait(&has_data[read_ptr], &mutex[read_ptr], &ts);


On a side note - why is this on bro-dev? Not entirely sure. :) I think eventually
this might come down to what my script is messing up and what a better way to script
the code would be, I suppose. 

Aashish 


On Thu, Jan 19, 2017 at 09:55:40AM -0800, Robin Sommer wrote:
> 
> 
> On Thu, Jan 19, 2017 at 09:44 -0800, you wrote:
> 
> > Still, to clarify, there might be a possibility that because at
> > present table_incremental_step=5000, somehow expiring >> 5000 entries
> > continuously every moment might cause the Queue to deadlock,
> > resulting in Bro stopping packet processing ?
> 
> It shouldn't deadlock. What I can see happening, depending on load and
> these parameters, is Bro spending most of its time going through the
> table to expire entries and only getting to few packets in between (so
> not complete stop of processing, but not getting much done either)
> 
> Robin
> 
> -- 
> Robin Sommer * ICSI/LBNL * ro...@icir.org * www.icir.org/robin
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] help Reading the backtrace

2017-01-18 Thread Aashish Sharma
Yes, I have been making heavy use of tables (think a million entries a day and a
million expirations a day). 

Let me figure out a way to upload the scripts to GitHub, or otherwise send them your
and Justin's way. 

Strangely, this code kept running fine and reasonably stable for the last month. I
am not sure what little thing I added/changed that now causes Bro to run with all
workers in the uwait state at 6% CPU. (I'll be doing svn diffs to figure it out.) 

Seems like bro is stuck in:

0x0004039ccadc in _umtx_op_err () from /lib/libthr.so.3
(gdb) bt
#0  0x0004039ccadc in _umtx_op_err () from /lib/libthr.so.3
#1  0x0004039c750b in _thr_umtx_timedwait_uint () from /lib/libthr.so.3
#2  0x0004039cea06 in ?? () from /lib/libthr.so.3
#3  0x009042ee in threading::Queue::Get 
(this=0x404543038) at /home/bro/install/bro-2.5/src/threading/Queue.h:173
#4  0x00902a31 in threading::MsgThread::RetrieveIn (this=0x404543000) 
at /home/bro/install/bro-2.5/src/threading/MsgThread.cc:349
#5  0x00902ce4 in threading::MsgThread::Run (this=0x404543000) at 
/home/bro/install/bro-2.5/src/threading/MsgThread.cc:366
#6  0x008fb952 in threading::BasicThread::launcher (arg=0x404543000) at 
/home/bro/install/bro-2.5/src/threading/BasicThread.cc:201
#7  0x0004039c6260 in ?? () from /lib/libthr.so.3
#8  0x in ?? ()
Backtrace stopped: Cannot access memory at address 0x7fbfe000
(gdb)



Aashish 

On Wed, Jan 18, 2017 at 06:37:08PM +0100, Jan Grashöfer wrote:
> Hi Aashish,
> 
> > So I am running a new detection package and everything seemed right but 
> > somehow since yesterday each worker is running at 5.7% to 6.3% CPU and not 
> > generating logs.
> 
> my guess would be that the script makes (heavy) use of tables and table
> expiration, right? Can you share the script?
> 
> Jan
> ___
> bro-dev mailing list
> bro-dev@bro.org
> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] help Reading the backtrace

2017-01-18 Thread Aashish Sharma
So I am running a new detection package and everything seemed right, but somehow
since yesterday each worker is running at 5.7% to 6.3% CPU and not generating
logs. 

The backtrace shows the following, along with how much CPU (%) is being spent in
which functions.

Can someone help me understand why Bro might spend 17.5% of its time in 

bro-2.5/src/Dict.cc: void* Dictionary::NextEntry(HashKey*& h, IterCookie*& 
cookie, int return_hash) const

Here are the functions and the time spent in each of them: 

bro`_ZN8iosource4pcap10PcapSource17ExtractNextPacketEP6Packet1   0.1%
bro`_ZNK13PriorityQueue3TopEv   1   0.1%
bro`_ZNK7BroFunc4CallEP8ValPListP5Frame 1   0.1%
bro`_Z15net_update_timed1   0.1%
bro`_ZN16RemoteSerializer6GetFdsEPN8iosource6FD_SetES2_S2_1   0.1%
bro`_ZN8EventMgr5DrainEv1   0.1%
bro`_ZNK15EventHandlerPtrcvbEv  1   0.1%
bro`_ZN8iosource6FD_Set6InsertEi1   0.1%
bro`_ZNK11ChunkedIOFd12ExtraReadFDsEv   1   0.1%
bro`_ZN13PriorityQueue10BubbleDownEi1   0.1%
bro`0x699d602   0.1%
bro`_ZNK8iosource8IOSource6IsOpenEv 2   0.1%
bro`_ZN8iosource6FD_Set6InsertERKS0_2   0.1%
bro`_ZNK8iosource6FD_Set5ReadyEP6fd_set 3   0.2%
bro`_ZNK14DictEntryPListixEi3   0.2%
bro`_ZN8iosource6PktSrc25ExtractNextPacketInternalEv4   0.3%
bro`_ZNSt3__16__treeIiNS_4lessIiEENS_9allocatorIiEEE15__insert_uniqueERKi   
 4   0.3%
bro`_ZNK8iosource6FD_Set3SetEP6fd_set   5   0.3%
bro`0x69a6105   0.3%
bro`_ZNSt3__16__treeIiNS_4lessIiEENS_9allocatorIiEEE7destroyEPNS_11__tree_nodeIiPvEE
5   0.3%
bro`0x699c006   0.4%
bro`_ZNSt3__16__treeIiNS_4lessIiEENS_9allocatorIiEEE16__construct_nodeIJRKiEEENS_10unique_ptrINS_11__tree_nodeIiPvEENS_22__tree_node_destructorINS3_ISC_EEDpOT_
6   0.4%
bro`0x69ad507   0.5%
bro`_ZN7HashKeyD2Ev 7   0.5%
bro`_ZN8iosource7Manager11FindSoonestEPd7   0.5%
bro`_ZN7HashKeyC2EPKvim11   0.7%
bro`_ZNK18TableEntryValPDict9NextEntryERP7HashKeyRP10IterCookie   12   0.8%
bro`_ZN8TableVal8DoExpireEd16   1.1%
bro`_ZNK7HashKey7CopyKeyEPKvi  16   1.1%
bro`_ZNK13TableEntryVal16ExpireAccessTimeEv   164  11.1%
bro`_ZNK8BaseList6lengthEv170  11.5%
bro`_ZNK8BaseListixEi 173  11.7%
bro`_ZNK10Dictionary9NextEntryERP7HashKeyRP10IterCookiei  259  17.5%

Aashish 

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] broctl archive copy vs move

2016-12-13 Thread Aashish Sharma
So if we have compresscmd unset, then the archive-log script does a copy:

archive-log:nice cp $file_name "$dest"

Any reason why it doesn't do a move instead? 

I propose changing cp to mv. 

Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] proxies and tree data structures

2016-12-04 Thread Aashish Sharma
I have noticed that at times my proxies spend way too much CPU (100% for extended
durations) in tree operations, including inserts and tree_balance_after_insert. Does
anyone have pointers to what might be going on with the proxies? 

Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] missing worker2manager and manager2worker events

2016-12-01 Thread Aashish Sharma
I have noticed that sometimes (more often than not), not all workers see a 
manager2worker event, or likewise the manager does not see a worker2manager event 
from all workers - as many as 10% and as few as 1% of such events are 'missing', 
i.e. they don't show up. 

This is puzzling since I am not sure why/where the bottleneck is or what might 
cause the disappearance. 

Is there a way to assure that these events are moving data as expected ? 
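
For reference, the pattern these events rely on looks roughly like this (a sketch 
with hypothetical names; it assumes the cluster framework is loaded, and events 
only propagate if they are registered with it):

module Scan;

export {
    global report_addr: event(ip: addr);
}

# Without this redef the event is raised locally but never forwarded.
redef Cluster::worker2manager_events += /Scan::report_addr/;

@if ( Cluster::local_node_type() == Cluster::WORKER )
event connection_attempt(c: connection)
    {
    event Scan::report_addr(c$id$orig_h);
    }
@endif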

Thanks, 
Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] bro-pkg upgrade and over-writing of files

2016-11-29 Thread Aashish Sharma
On Tue, Nov 29, 2016 at 07:51:21PM +, Siwek, Jon wrote:
> 
> But a new feature could be added to bro-pkg that allows package authors to 
> specify a list of config files in their bro-pkg.meta.  Then on 
> install/upgrade/remove, if a user has made modifications to any of those 
> files, they can be warned/prompted about how to proceed (show a diff, ask to 
> overwrite or keep modified version, etc.).  This seems a common way to handle 
> config files in the package management scene.
> 
I like the idea of being warned/prompted (show a diff, ask to overwrite or keep the 
new version, etc.). That also helps make sure that new variables introduced in a 
config file after the upgrade don't go unnoticed. 

> Would such a feature work for you?

Yes absolutely. This would be rather perfect. 

Thank you, 
Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] bro-pkg upgrade and over-writing of files

2016-11-29 Thread Aashish Sharma
Hello, 

I have a package where I provide a sample configuration file for people to 
redef according to their needs and specifics. 

Now every time they upgrade the package, I risk overwriting their modified 
config file. 

So I decided to call the config file scan-config.bro.orig, but then I am running 
into issues of which one to load and how to determine the presence of an 
already existing scan-config.bro in __load__.bro. 

The idea of asking users to redef outside the package directory might be cumbersome 
for unfamiliar users.
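
For reference, that "redef outside the package" alternative would look roughly like 
this (a sketch with hypothetical names), which is exactly what feels cumbersome:

# scan-config.bro, shipped with the package, never edited by users:
module Scan;

export {
    const whitelist_ip_file = "/usr/local/bro/feeds/ip-whitelist.scan" &redef;
    const scan_threshold = 25 &redef;
}

# site/local.bro, owned by the user and untouched by upgrades:
redef Scan::whitelist_ip_file = "/data/feeds/ip-whitelist.scan";
redef Scan::scan_threshold = 50;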

Any thoughts ? 


Aashish 


___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] Packet Brick question(s)

2016-11-28 Thread Aashish Sharma
Scott, 

I was using the following script when I was playing with the packet-bricks Dec 
last year:

utilObj = dofile("scripts/utils.lua")
utilObj:enable_nmpipes()
pe = PktEngine.new("e0")
lb = Brick.new("LoadBalancer", 2)
lb:connect_input("ix0")
lb:connect_output("ix0{1", "ix0{2", "ix0{3", "ix0{4", "ix0{5", "ix0{6", 
"ix0{7", "ix0{8", "ix0{9", "ix0{10", "ix0{11", "ix0{12", "ix0{13", "ix0{14", 
"ix0{15", "ix0{16", "ix0{17", "ix0{18", "ix0{19", "ix0{20" )
pe:link(lb)
pe:start()
pe:show_stats()

This was on FreeBSD. 

> When I run the "./pkt-gen -i netmap:eth6}0 -f rx" command, I am seeing
> only a tiny fraction of the expected traffic:

Uncharted territory for me. I was actually comparing bro logs generated from 
packet-bricks with the ones on other clusters, and bricks vs. the others seemed 
reasonably comparable when I last left off playing with this.

> What can I do differently to get better performance?

Others can elaborate more since I haven't tested this at all but I hear netmap 
support in linux is quite stable now - that might be something to try. 

Aashish 

On Mon, Nov 28, 2016 at 11:11:08AM -0500, Scott Campbell wrote:
> I have been investigating the use of Packet Bricks/netmap as a
> replacement for pf_ring on linux, but have a few questions.
> 
> (1) Is there any documentation except for the git page and the scripts
> themselves?  The script comments are nice and useful, but at times the
> syntax is rather opaque.
> 
> (2) Following directions and mailing list recommendations, I have a
> working version which reads from a heavily loaded 10G ixgbe interface
> and splits the traffic into 4 netmap interfaces.  The script looks like:
> 
> utilObj:enable_nmpipes()
> pe = PktEngine.new("e0", 1024, 8)
> lb = Brick.new("LoadBalancer", 2)
> lb:connect_input("eth6")
> lb:connect_output("eth6{0", "eth6{1", "eth6{2", "eth6{3")
> pe:link(lb)
> pe:start()
> 
> where eth6 is the data source interface.
> 
> Script output looks like:
> > [root@xdev-w4 PB_INSTALL]# sbin/bricks -f 
> > etc/bricks-scripts/startup-one-thread.lua
> > [ pmain(): line  466] Executing 
> > etc/bricks-scripts/startup-one-thread.lua
> > [print_version(): line  348] BRICKS Version 0.5-beta
> > bricks> utilObj:enable_nmpipes()
> > bricks> pe = PktEngine.new("e0", 1024, 8)
> > bricks> lb = Brick.new("LoadBalancer", 2)
> > bricks> lb:connect_input("eth6")
> > bricks> lb:connect_output("eth6{0", "eth6{1", "eth6{2", "eth6{3")
> > bricks> pe:link(lb)
> > [   lb_init(): line   66] Adding brick eth6{0 to the engine
> > [   promisc(): line   96] Interface eth6 is already set to promiscuous mode
> > 970.328612 nm_open [444] overriding ARG3 0
> > 970.328631 nm_open [457] overriding ifname eth6 ringid 0x0 flags 0x1
> > [netmap_link_iface(): line  183] Wait for 2 secs for phy reset
> > [brick_link(): line  113] Linking e0 with link eth6 with batch size: 512 
> > and qid: -1
> > [netmap_create_channel(): line  746] brick: 0xfac090, local_desc: 0xfac780
> > 972.343050 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line  781] zerocopy for eth6 --> eth6{0 (index: 
> > 0) enabled
> > [netmap_create_channel(): line  786] Created netmap:eth6{0 interface
> > [netmap_create_channel(): line  746] brick: 0xfac090, local_desc: 0xfac780
> > 972.343600 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line  781] zerocopy for eth6 --> eth6{1 (index: 
> > 1) enabled
> > [netmap_create_channel(): line  786] Created netmap:eth6{1 interface
> > [netmap_create_channel(): line  746] brick: 0xfac090, local_desc: 0xfac780
> > 972.344200 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line  781] zerocopy for eth6 --> eth6{2 (index: 
> > 2) enabled
> > [netmap_create_channel(): line  786] Created netmap:eth6{2 interface
> > [netmap_create_channel(): line  746] brick: 0xfac090, local_desc: 0xfac780
> > 972.344696 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line  781] zerocopy for eth6 --> eth6{3 (index: 
> > 3) enabled
> > [netmap_create_channel(): line  786] Created netmap:eth6{3 interface
> > bricks> pe:start()
> 
> and the related dmesg data is:
> 
> > dmesg:
> > ixgbe :81:00.0: eth6: detected SFP+: 5
> > ixgbe :81:00.0: eth6: NIC Link is Up 10 Gbps, Flow Control: RX/TX
> > 494.566450 [ 131] ixgbe_netmap_configure_srrctl bufsz: 2048 srrctl: 2
> > ixgbe :81:00.0: eth6: detected SFP+: 5
> > ixgbe :81:00.0: eth6: NIC Link is Up 10 Gbps, Flow Control: RX/TX
> > 496.743920 [ 320] netmap_pipe_krings_create 880876731a00: case 1, 
> > create both ends
> > 496.744464 [ 320] netmap_pipe_krings_create 880876731000: case 1, 
> > create both ends
> > 496.745026 [ 320] netmap_pipe_krings_create 880878fcb600: case 1, 
> > create both ends
> > 496.745520 [ 320] netmap_pipe_krings_create 880875e06c00: case 1, 
> > create both ends
> > Loading kernel module for a network device with CAP_SYS_MODULE 
> > (deprecated).  Use CAP_NET_ADMIN and alias netdev-netmap instead
> > Loading kernel module for a network 

[Bro-Dev] Adding event/notice at the end of log rotation

2016-10-24 Thread Aashish Sharma
Would it be possible (also, any suggestions on what might be the best way) to add an 
event / execute a script once log rotation/compression is complete? 

Use case: We archive the logs to mass storage while leaving a local copy for 
N days. Right now, it's a guessing game on when to run the nightly archive (i.e. 
once log compression and local rotation are complete). 

thanks, 
Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] [archive log failure]

2016-10-03 Thread Aashish Sharma
> the /share/broctl/scripts/archive-log and
> the /share/broctl/scripts/post-terminate script.

No modifications to either script as far as I can tell:

MD5 (share/broctl/scripts/archive-log) = 0d61804be56f8a61c18c6612bad486c8
MD5 (share/broctl/scripts/post-terminate) = ad4b56dfcfe8c1796a0a755e37256dda

Aashish 

On Mon, Oct 03, 2016 at 03:45:54PM -0500, Daniel Thayer wrote:
> Your make-archive-name script works for me.
> 
> The next thing to check is your copy of
> the /share/broctl/scripts/archive-log and
> the /share/broctl/scripts/post-terminate script.
> Check if you made any changes to those scripts (a bug in
> those scripts could potentially run make-archive-name with
> invalid parameters).
> 
> 
> On 10/3/16 3:18 PM, Aashish Sharma wrote:
> >HI Daniel,
> >
> >>As for the strange directory names, one possible reason could be your
> >>make-archive-name script is producing bad output.
> >
> >make-archive-name script does correctly archive the logs to 2016-10-03 
> >folder.
> >
> >This is the content of the script:
> >
> >$ cat make-archive-name
> >
> >name=$1
> >flavor=$2
> >opened=$3
> >closed=$4
> >host=`hostname -s`
> >
> >day=`echo $opened  | awk -F - '{printf "%s-%s-%s", $1, $2, $3}'`
> >from=`echo $opened | awk -F - '{printf "%s:%s:%s", $4, $5, $6}'`
> >to=`echo $closed | awk -F - '{printf "%s:%s:%s", $4, $5, $6}'`
> >
> >if [ "$closed" != "" ]; then
> >   echo $day/$name.$host.$day-$from-$to
> >else
> >   echo $day/$name.$host.$day-$from-current
> >fi
> >
> >===
> >
> >Here is the output of the 20rk-5-8 directory, for example:
> >
> >~/logs/20rk-5-8]$ ls -altrh
> >total 40
> >-rw-r--r--1 bro  bro20B Sep 28 17:39 
> >drop-debug.log.cluster.20rk-5-8-::-17:39:24.gz
> >-rw-r--r--1 bro  bro20B Oct  3 09:40 
> >drop-debug.log.cluster.20rk-5-8-::-09:40:35.gz
> >drwxr-xr-x  196 bro  bro   6.0k Oct  3 11:54 ..
> >drwxr-xr-x2 bro  bro   512B Oct  3 11:54 .
> >-rw-r--r--1 bro  bro20B Oct  3 11:54 
> >drop-debug.log.cluster.20rk-5-8-::-11:54:38.gz
> >
> >
> >Since make-archive-name does archive logs as expected, I'm not sure how to 
> >address the 20rk-5-8 issue. Secondly, why would these directories be in ~/logs 
> >instead of ../spool/tmp ?
> >
> >
> >Aashish
> >
> >On Mon, Oct 03, 2016 at 03:02:14PM -0500, Daniel Thayer wrote:
> >>Those archive log failure emails are a new feature in version 2.5.
> >>The only purpose of the emails is to make it easier to notice when
> >>such an error occurs (i.e., these emails do not indicate a new type
> >>of error condition).
> >>Previously, if such a failure occurred, the only way you would know
> >>is if you noticed missing logs in one of the subdirectories of
> >>the /logs/ directory, or if you noticed the presence of
> >>a new spool/tmp/post-terminate-* directory.
> >>
> >>As for the strange directory names, one possible reason could be your
> >>make-archive-name script is producing bad output.
> >>
> >>
> >>
> >>On 10/3/16 2:11 PM, Aashish Sharma wrote:
> >>>I see notifications as following:
> >>>
> >>>- Forwarded message from Xxx  -
> >>>
> >>>Date: Mon, 3 Oct 2016 11:54:39 -0700 (PDT)
> >>>From:
> >>>To:
> >>>Subject: [bro-cluster] archive log failure
> >>>
> >>>Unable to archive one or more logs in directory:
> >>>/usr/local/bro/spool/tmp/post-terminate-worker-2016-10-03-09-40-35-36665
> >>>Check the post-terminate.out file in that directory for any error messages.
> >>>
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] [archive log failure]

2016-10-03 Thread Aashish Sharma
Hi Daniel, 

> As for the strange directory names, one possible reason could be your
> make-archive-name script is producing bad output.

make-archive-name script does correctly archive the logs to 2016-10-03 folder. 

This is the content of the script:

$ cat make-archive-name 

name=$1
flavor=$2
opened=$3
closed=$4
host=`hostname -s`

day=`echo $opened  | awk -F - '{printf "%s-%s-%s", $1, $2, $3}'`
from=`echo $opened | awk -F - '{printf "%s:%s:%s", $4, $5, $6}'`
to=`echo $closed | awk -F - '{printf "%s:%s:%s", $4, $5, $6}'`

if [ "$closed" != "" ]; then
   echo $day/$name.$host.$day-$from-$to
else
   echo $day/$name.$host.$day-$from-current
fi

=== 

Here is the output of the 20rk-5-8 directory, for example: 

~/logs/20rk-5-8]$ ls -altrh
total 40
-rw-r--r--1 bro  bro20B Sep 28 17:39 
drop-debug.log.cluster.20rk-5-8-::-17:39:24.gz
-rw-r--r--1 bro  bro20B Oct  3 09:40 
drop-debug.log.cluster.20rk-5-8-::-09:40:35.gz
drwxr-xr-x  196 bro  bro   6.0k Oct  3 11:54 ..
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 .
-rw-r--r--1 bro  bro20B Oct  3 11:54 
drop-debug.log.cluster.20rk-5-8-::-11:54:38.gz


Since make-archive-name does archive logs as expected, I'm not sure how to address 
the 20rk-5-8 issue. Secondly, why would these directories be in ~/logs instead of 
../spool/tmp ? 


Aashish 

On Mon, Oct 03, 2016 at 03:02:14PM -0500, Daniel Thayer wrote:
> Those archive log failure emails are a new feature in version 2.5.
> The only purpose of the emails is to make it easier to notice when
> such an error occurs (i.e., these emails do not indicate a new type
> of error condition).
> Previously, if such a failure occurred, the only way you would know
> is if you noticed missing logs in one of the subdirectories of
> the /logs/ directory, or if you noticed the presence of
> a new spool/tmp/post-terminate-* directory.
> 
> As for the strange directory names, one possible reason could be your
> make-archive-name script is producing bad output.
> 
> 
> 
> On 10/3/16 2:11 PM, Aashish Sharma wrote:
> >I see notifications as following:
> >
> >- Forwarded message from Xxx  -
> >
> >Date: Mon, 3 Oct 2016 11:54:39 -0700 (PDT)
> >From:
> >To:
> >Subject: [bro-cluster] archive log failure
> >
> >Unable to archive one or more logs in directory:
> >/usr/local/bro/spool/tmp/post-terminate-worker-2016-10-03-09-40-35-36665
> >Check the post-terminate.out file in that directory for any error messages.
> >
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [archive log failure]

2016-10-03 Thread Aashish Sharma
I see notifications like the following: 

- Forwarded message from Xxx  -

Date: Mon, 3 Oct 2016 11:54:39 -0700 (PDT)
From: 
To: 
Subject: [bro-cluster] archive log failure

Unable to archive one or more logs in directory:
/usr/local/bro/spool/tmp/post-terminate-worker-2016-10-03-09-40-35-36665
Check the post-terminate.out file in that directory for any error messages.
-- 
[Automatically generated.]

But then there is no spool/tmp/post-terminate-* 

However, in my /usr/local/bro/logs directory I see these folders emerge:

drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-9
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-5
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-8
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-7
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-10
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-4
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-3
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-2
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-1
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20rk-5-6
drwxr-xr-x2 bro  bro   512B Oct  3 11:54 20ox-5-
drwxr-xr-x2 bro  bro   6.5k Oct  3 11:56 2016-10-03
drwxr-xr-x2 bro  bro   512B Oct  3 12:01 20na--

Now, I do use the following setting in broctl.cfg:

# change log naming
MakeArchiveName = /usr/local/bro-cpp/common/scripts/makelocal-archivename-2.1

However, the above has been there forever, and I don't recall these archive failure 
messages or these directories showing up until I moved to bro version 
2.5-beta-debug.

Aashish 
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] Bro IDS request

2016-08-12 Thread Aashish Sharma
Maybe try: ftp://ftp.ee.lbl.gov/cf-1.2.5.tar.gz 

e.g.: cf conn.log  | less 


On Fri, Aug 12, 2016 at 02:03:48PM -0400, Dave Florek wrote:
> Hello,
> 
> Because I lose so much processing power when manually converting Bro output
> logs from Epoch to EST using bro-cut, can I have a feature that
> automatically outputs the Bro logs to EST automatically instead of Epoch
> while Bro is timestamping the logs as it sees the traffic?
> 
> I'm not sure if using the Epoch format makes Bro much faster while it's
> processing, but I would like a more integrated solution aside from using
> the bro-cut utility.
> 
> Thank you for your time,

> ___
> bro-dev mailing list
> bro-dev@bro.org
> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] testing topic/dnthayer/ticket1627

2016-07-29 Thread Aashish Sharma
Hi Daniel, 

Are there any specific node.cfg settings or broctl.cfg settings needed to run the 
logging node ? Could you please point me to the right locations. 
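
(For reference, my current guess - to be confirmed - is that the logger is simply 
another node type in node.cfg, along the lines of:

[logger]
type=logger
host=10.0.0.5

with the logs then being written and rotated on that node instead of the manager. 
The host value above is only an example.)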

Thanks, 
Aashish 

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] input-framework file locations

2016-07-08 Thread Aashish Sharma
I have been thinking and trying different things but for now, it appears that 
if we are to share policies around, there is no easy way to be able to 
distribute input-files along with policy files. 

Basically, right now I use 

redef Scan::whitelist_ip_file = "/usr/local/bro/feeds/ip-whitelist.scan" ;

and then expect everyone to edit the path as their setup demands and place an 
accompanying sample file in the directory, or create one for themselves - this 
all introduces errors as well as slowing down deployment. 

Is there a way I can use relative paths instead of absolute paths for 
input-framework digestion? At present a new-heuristics dir can have a 
__load__.bro with all the policies, but the input framework won't read files 
relative to that directory or wherever it is placed. 

redef Scan::whitelist_ip_file = "../feeds/ip-whitelist.scan" ;

Something similar to __load__.bro model 

Also, one question I have is: should all input files go into a 'standard' 
feeds/input dir in bro, or be scattered around along with their accompanying bro 
policies (i.e. in individual directories)?

Something to think about, as with more and more reliance on the input framework I 
think there is a need for 'standardization' on where to put input files and how 
to easily find and read them. 
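
One workaround I have been toying with (assuming @DIR expands to the directory of 
the currently loading script, as the scripting docs describe) is building the path 
relative to the policy itself:

# inside the package's __load__.bro or scan.bro
redef Scan::whitelist_ip_file = @DIR + "/../feeds/ip-whitelist.scan";

but that still doesn't answer where the feeds should canonically live.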

Aashish 

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] Configurable _expire interval

2016-06-10 Thread Aashish Sharma
Hi Jan, 

> > A solution could be to evaluate the interval expression every time it is
> > used inside the table implementation. The drawback would be that there

For all of my needs, the above has worked fairly well, including using exp_val = 0 
secs as the default. 

Based on the value of the item in the table, I could return a variable time back 
from the expire function, thus either keeping it longer or expiring it.
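
Concretely, what I am doing looks roughly like this (a sketch; keep_entry() is a 
hypothetical helper standing in for the real per-item check):

const exp_val = 5 min &redef;

function do_exp(t: table[addr] of string, idx: addr): interval
    {
    # Return extra time to keep the entry around, or 0 secs to let it expire now.
    return keep_entry(t[idx]) ? exp_val : 0 secs;
    }

global data: table[addr] of string &create_expire = 0 secs &expire_func = do_exp;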

> > used inside the table implementation. The drawback would be that there
> > is no fixed value for serialization (I am not sure about the effects

Sorry, I don't quite seem to understand this drawback. 

Aashish 

On Wed, Jun 08, 2016 at 08:30:14PM +0200, Jan Grashöfer wrote:
> My explanations might be hard to follow without examples. So I am adding
> some pseudo code:
> 
> > I ran into an issue while trying to make the _expire interval
> > configurable: Using a redefable constant does not work here, as the
> > expression only gets evaluated when the table is initialized and thus
> > later redefs do not influence the value.
> 
> # base script:
> const exp_val = 5min 
> data: table[addr] of string _expire=exp_val;
> 
> # user script:
> redef exp_val = 20min; # has no effect
> 
> > I thought about circumventing
> > this by setting the value to 0 and maintain an extra variable to check
> > against in my expire_func and return the right value. Unfortunately this
> > won't work with write/read_expire as a write or read will reset the
> > expiration to the initial value of 0.
> 
> # base script:
> const exp_val = 5min 
> 
> function do_exp(data: table[addr] of string, idx: addr): interval
>   {
>   if ( is_first_call() )
>   return exp_val;
>   # in case of a write, expire timer will be reset to 0
>   else
>   ...
>   }
> 
> data: table[addr] of string _expire=0 expire_func=do_exp;
> 
> > A solution could be to evaluate the interval expression every time it is
> > used inside the table implementation. The drawback would be that there
> > is no fixed value for serialization (I am not sure about the effects
> > here). Another solution would be to provide a bif (or implement a
> > language feature) to change the expire_time value from inside the
> > expire_func.
> 
> # base script:
> function do_exp(data: table[addr] of string, idx: addr): interval
>   {
>   if ( is_first_call() )
>   expire exp_val; # sets expire timer instead of delay
>   else
>   ...
>   }
> 
> > There was a somehow similar discussion about per item expiration (see
> > http://mailman.icsi.berkeley.edu/pipermail/bro-dev/2016-April/011731.html)
> > in which Robin came up with the solution of multiple tables with
> > different expiration values. Again this would be a solution but doesn't
> > feel right (duplicate code, static and somehow counterintuitive for the
> > user).
> 
> # base script:
> type exp_interval enum { ei1m, ei10m, ... };
> const exp_val = ei1m 
> 
> data1m: table[addr] of string _expire=1min;
> data10m: table[addr] of string _expire=10min;
> ...
> data1d: table[addr] of string _expire=1day;
> 
> function insert(...)
>   {
>   switch ( exp_val )
>   {
>   case ei1m:
>   data1m[...] = ...
>   break;
>   case ei10m:
>   data10m[...] = ...
>   break;
>   ...
>   }
>   }
> 
> # user script:
> redef exp_val = ei30m;
> 
> > Maybe I am missing something regarding the loading sequence of scripts
> > and this problem could be solved easier. So I am open for any
> > suggestions or feedback!
> ___
> bro-dev mailing list
> bro-dev@bro.org
> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] CBAN design proposal

2016-05-21 Thread Aashish Sharma
> In other words, my proposal is to put authors into control of their
> code, and make them fully responsible for it too --- not us. We'd just
> connect authors with users, with as little friction as possible.
> 

I support this completely. 

> If we want some kind of quality measure, we could introduce stars for
> modules that user assign if they like something; or even some facility
> to leave comments or other feedback that's visible to everybody. We

I think community vetting and feedback (and experience stories) is the right 
way to go here. 

Bro team vetting something will be very hard. My personal experience says there 
are times when scripts bring surprises weeks and months down the line - so 
testing isn't merely running something for a few days and giving an OK.

Aashish 


On Sat, May 21, 2016 at 03:44:01PM -0700, Robin Sommer wrote:
> 
> 
> On Fri, May 20, 2016 at 20:16 +, Jon wrote:
> 
> > Here’s an updated design doc for CBAN, a plugin manager for Bro, with
> > some new ideas on how it could work:
> 
> Cool, thanks. I'm going to send some feedback but first I wanted to
> bring something up that might be controversial. :-)
> 
> As I read through the design doc, I started questioning our plan of
> curating CBAN content. I know that's what we've been intending to do,
> but is that really the best approach? I don't know of script
> repositories for other languages that enforce quality control on
> submissions beyond checking technical conventions like certain meta
> data being there.
> 
> I'm getting concerned that we're creating an unnecessary bottleneck by
> imposing the Bro Team into the critical path for submissions and
> updates. Why not let people control their stuff themselves? They'd
> register things with CBAN to make it available to everybody, but we'd
> stay out of the loop for anything further. We may still want to keep a
> local copy of the code to protect against stuff going offline, but
> that could be automated. Or we could even move that burden to the
> authors as well and just keep a reference where to find the code,
> without a local copy; if it disappears, so be it.
> 
> In other words, my proposal is to put authors into control of their
> code, and make them fully responsible for it too --- not us. We'd just
> connect authors with users, with as little friction as possible.
> 
> If we want some kind of quality measure, we could introduce stars for
> modules that user assign if they like something; or even some facility
> to leave comments or other feedback that's visible to everybody. We
> could also verify if a plugins builds and loads correctly, or if tests
> pass. But we wouldn't block it if something fails, just mark it (say,
> with a banner saying "tests pass", "tests fail", "no tests").
> 
> Thoughts?
> 
> Robin
> 
> -- 
> Robin Sommer * ICSI/LBNL * ro...@icir.org * www.icir.org/robin
> ___
> bro-dev mailing list
> bro-dev@bro.org
> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] declaration error: function type clash

2016-05-12 Thread Aashish Sharma
Jan, 

> I guess the function for initialization receives the index that should
> be initialized. 

Thank you. This works! 

For future reference:

I also needed to convert the following table to use opaque of cardinality, because 
this table grows reasonably big: 

global distinct_backscatter_peers: table[addr] of table[port] of set[addr] 
_expire=1 day;


Here is how I did this one: 

type bs: table[port] of opaque of cardinality &default=function(p:port): opaque 
of cardinality {return hll_cardinality_init(0.1, 0.95); };
global c_distinct_backscatter_peers: table[addr] of bs  _expire=1 day ;

and to access: 


if ( orig !in c_distinct_backscatter_peers)
c_distinct_backscatter_peers[orig] = table() ;

if (s_port !in c_distinct_backscatter_peers[orig])
{
local cp: opaque of cardinality = hll_cardinality_init(0.1, 
0.95);
c_distinct_backscatter_peers[orig][s_port]=cp ;
}

hll_cardinality_add(c_distinct_backscatter_peers[orig][s_port], resp);

local d_val = 
double_to_count(hll_cardinality_estimate(c_distinct_backscatter_peers[orig][s_port]));

### use d_val here  


Now may be there is a better more elegant way to do this, but above seems to 
work for me.

Again, thanks Jan!! 

Aashish 





On Thu, May 12, 2016 at 04:36:36PM +0200, Jan Grashöfer wrote:
> > how do I declare (3) so that I can avoid the " function type clash" 
> >  error above.
> 
> I guess the function for initialization receives the index that should
> be initialized. In this case the index consists of two values. I tried
> the following and Bro did not complain:
> 
> global c_likely_scanner: table[addr,port] of opaque of cardinality
>  &default=function(a: addr, p: port): opaque of cardinality {
> return hll_cardinality_init(0.1, 0.95); }
> _expire=1 day;
> 
> Hope that works for you!
> 
> Regards,
> Jan
> ___
> bro-dev mailing list
> bro-dev@bro.org
> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] declaration error: function type clash

2016-05-12 Thread Aashish Sharma
So I am trying to convert tables into using opaque of cardinality since thats 
more memory efficient (or counting bloomfilters for that matter):

works: if table (0) converted to (1) 
errors: if table (2) converted to (3) 


Details: I am trying the following, original table (0) converted to (1): 

(0) global likely_scanner: table[addr,port] of set[addr] _expire=1 day 
 ;

(1) global c_likely_scanner: table[addr] of opaque of cardinality
 &default=function(n: any): opaque of cardinality { return 
hll_cardinality_init(0.1, 0.95); }
_expire=1 day  ;


ERRORS: 

(2) global likely_scanner: table[addr,port] of set[addr] _expire=1 day 
 ;

Converted table:

(3) global c_likely_scanner: table[addr,port] of opaque of cardinality
 &default=function(n: any): opaque of cardinality { return 
hll_cardinality_init(0.1, 0.95); }
_expire=1 day  ;

I get this error:

check-knock.bro, line 58:  function type clash 
(=anonymous-function{ return (hll_cardinality_init(0.1, 0.95))})



Question: 

how do I declare (3) so that I can avoid the " function type clash"  
error above.

I am not sure what am I doing wrong in the declaration. Any thoughts/advice how 
to get past this issue ?

Thanks,
Aashish

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] bloomfilter_counting_init parameterization ?

2016-05-09 Thread Aashish Sharma
Nevermind my email!

I found: src/probabilistic/cardinality-counter.bif

Thanks,
Aashish



On Mon, May 9, 2016 at 2:29 AM, Aashish Sharma <asha...@lbl.gov> wrote:
> Matthias,
>
> I am encountering some big tables in my scan-detection heuristics and which 
> grow due to scanners:
>
> So was thinking of this possibility to use counting bloomfilters instead of 
> tables and sets. After all, we are still looking for the cardinality of tables and 
> sets for identifying scanners.
>
> for example:
>
> 1) global distinct_peers: table[addr] of set[addr]
>
> then 
> .
>
> if ( orig !in distinct_peers )
> distinct_peers[orig] = set() 
>
> if ( resp !in distinct_peers[orig] )
> add distinct_peers[orig][resp];
>
> local n = |distinct_peers[orig]|;
>
>
> and if  n > N - its a scanner !!!
>
>
> SO I was wondering can the following be somehow represented as combinations 
> of counting bloomfilters:
>
> 1) global distinct_peers: table[addr] of set[addr]
>
> and/or
>
> 2) global distinct_backscatter_peers: table[addr] of table[port] of 
> set[addr]
>
> Aashish
>
>
> Here is an example proof-of-concept policy of what I am trying to explore:
>
> === bloom-scan.bro ==
>
> module Scan;
>
> global src: opaque of bloomfilter ;
> global dst_port: opaque of bloomfilter ;
>
>
> event bro_init()
> {
>
> src  =  bloomfilter_counting_init(3, 128, 1);
> dst_port =  bloomfilter_counting_init(3, 128, 1);
> }
>
>
> function check_bloom (c: connection)
> {
>
> local orig = c$id$orig_h;
> local resp = c$id$resp_h ;
> local resp_p = c$id$resp_p ;
>
>
> if (resp_p == 40884/tcp || resp_p == 40876/tcp)
> return ;
>
> bloomfilter_add (src, orig);
> bloomfilter_add (dst_port, fmt("%s%s", resp, resp_p));
>
>
> local src_counts =  bloomfilter_lookup(src, orig) ;
> local dst_counts = bloomfilter_lookup(dst_port, fmt("%s%s", resp, 
> resp_p)) ;
>
> ### idea here is that a remote scanner is going to be hitting a lot of local hosts
> ### so footprint (conn counts of the remote scanner) is going to be disproportionate to
> ### footprint of local host+port
>
> if (src_counts > 30 && dst_counts < 5)
> print fmt ("possible_scanner: %s -> %s on %s ( counts: %s, 
> %s)", orig, resp, resp_p, src_counts, dst_counts);
>
>
> }
>
>
> event partial_connection(c: connection)
>{
>Scan::check_bloom(c);
>}
>
> event connection_attempt(c: connection)
>{
>Scan::check_bloom(c);
>}
>
> event connection_half_finished(c: connection)
>{
># Half connections never were "established", so do scan-checking here.
>Scan::check_bloom(c);
>}
>
> event connection_rejected(c: connection)
>{
>Scan::check_bloom(c);
>}
>
> event connection_reset(c: connection)
>{
>Scan::check_bloom(c);
>}
>
> event connection_pending(c: connection)
>{
>if ( c$orig$state == TCP_PARTIAL && c$resp$state == TCP_INACTIVE )
>Scan::check_bloom(c);
>}
>
>
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] bloomfilter_counting_init parameterization ?

2016-05-09 Thread Aashish Sharma
Matthias, 

I am encountering some big tables in my scan-detection heuristics, which 
grow due to scanners:

So I was thinking of the possibility of using counting bloomfilters instead of 
tables and sets. After all, we are still looking for the cardinality of tables and 
sets for identifying scanners. 

for example:

1) global distinct_peers: table[addr] of set[addr] 

then 
.

if ( orig !in distinct_peers )
distinct_peers[orig] = set() 

if ( resp !in distinct_peers[orig] )
add distinct_peers[orig][resp];

local n = |distinct_peers[orig]|;


and if  n > N - its a scanner !!!


SO I was wondering can the following be somehow represented as combinations of 
counting bloomfilters: 

1) global distinct_peers: table[addr] of set[addr] 

and/or 

2) global distinct_backscatter_peers: table[addr] of table[port] of 
set[addr]

Aashish 


Here is an example proof-of-concept policy of what I am trying to explore:  

=== bloom-scan.bro == 

module Scan;

global src: opaque of bloomfilter ;
global dst_port: opaque of bloomfilter ;


event bro_init()
{

src  =  bloomfilter_counting_init(3, 128, 1);
dst_port =  bloomfilter_counting_init(3, 128, 1);
}


function check_bloom (c: connection)
{

local orig = c$id$orig_h;
local resp = c$id$resp_h ;
local resp_p = c$id$resp_p ;


if (resp_p == 40884/tcp || resp_p == 40876/tcp)
return ;

bloomfilter_add (src, orig);
bloomfilter_add (dst_port, fmt("%s%s", resp, resp_p));


local src_counts =  bloomfilter_lookup(src, orig) ;
local dst_counts = bloomfilter_lookup(dst_port, fmt("%s%s", resp, 
resp_p)) ;

### idea here is that a remote scanner is going to be hitting a lot of local hosts 
### so footprint (conn counts of the remote scanner) is going to be disproportionate to 
### footprint of local host+port 

if (src_counts > 30 && dst_counts < 5)
print fmt ("possible_scanner: %s -> %s on %s ( counts: %s, 
%s)", orig, resp, resp_p, src_counts, dst_counts);


}


event partial_connection(c: connection)
   {
   Scan::check_bloom(c);
   }

event connection_attempt(c: connection)
   {
   Scan::check_bloom(c);
   }

event connection_half_finished(c: connection)
   {
   # Half connections never were "established", so do scan-checking here.
   Scan::check_bloom(c);
   }

event connection_rejected(c: connection)
   {
   Scan::check_bloom(c);
   }

event connection_reset(c: connection)
   {
   Scan::check_bloom(c);
   }

event connection_pending(c: connection)
   {
   if ( c$orig$state == TCP_PARTIAL && c$resp$state == TCP_INACTIVE )
   Scan::check_bloom(c);
   }


___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] bloomfilter_counting_init parameterization ?

2016-05-03 Thread Aashish Sharma
So I am trying to use bloomfilter_counting_init for keeping a count of unique IPs 
seen within a subnet, and instead of relying on a table or a set, I was toying 
with the idea of using bloomfilter_counting_init. 

However, I am not clear on the parameterization below:

global bloomfilter_counting_init: function(k: count , cells: count , max: count 
, name: string =""): opaque of bloomfilter ;

What should be the length of the cells for storing 65536 IPs ? 

Is k=3 a good value, or do I need something else ? Could someone elaborate on how 
to decide these parameters. 

I looked at /btest/bifs/bloomfilter.bro but not quite clear.
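
My current back-of-the-envelope reading (assuming the standard Bloom filter 
false-positive formula carries over to the counting variant) is

    FP ~= (1 - e^(-k*n/m))^k

with n = expected elements, m = cells and k = hash functions. For n = 65536 and a 
false-positive rate around 1%, that would suggest m on the order of 10*n cells and 
k around 4-7, with max simply bounding the per-cell counter (e.g. 255 for 8-bit 
cells), i.e. something like

    src = bloomfilter_counting_init(4, 655360, 255, "subnet-ips");

but I may well be off, hence the question.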

thanks, 
Aashish 


On Mon, Apr 11, 2016 at 08:26:37AM -0700, Matthias Vallentin wrote:
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] cluster communication best practice?

2016-04-11 Thread Aashish Sharma
I am in the process of clusterizing a bunch of scripts and using worker2manager and 
manager2worker events for doing so.  This seems to be working *quite fantastically* 
actually, and I see a 1-to-1 mapping on data moving around. 

I still don't quite understand how the communication happens in the background (can 
someone elaborate or point me to where I should be looking ?) 

While I am using local caches and not sending data that has already been sent 
around, I know the number of events has still increased significantly. I am 
wondering if, in the background, the proxies/workers/manager keep a persistent 
connection over which bytes just move (so it doesn't quite matter how many times we 
move the data), or am I in danger of overloading the proxies at some point with 
this communication ? Would an increase in the number of proxies help ? 

For an example test case I am trying to synchronize a bloomfilter (populated with 
IPs based on outgoing SFs seen) across workers using this technique. 

Right now I don't see a significant increase in CPU or memory per se doing this, 
but porting the old scan detection to the cluster is next on the to-do list and I 
want to make sure I don't cause the proxies to explode. 
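
For concreteness, the 'local cache' part on the workers looks roughly like this (a 
sketch with hypothetical names; the cluster event registration is omitted):

module Scan;

export {
    global add_to_bloom: event(ip: addr);   # handled on the manager
}

global already_sent: set[addr] &create_expire = 1 hr;

function maybe_report(ip: addr)
    {
    if ( ip in already_sent )
        return;

    add already_sent[ip];
    # Forwarded to the manager via Cluster::worker2manager_events.
    event Scan::add_to_bloom(ip);
    }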

Thanks, 
Aashish 


___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1472) Bif for a new function to calculates haversine distance between two geoip locations

2016-03-30 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=25205#comment-25205
 ] 

Aashish Sharma commented on BIT-1472:
-

Until you are set to update the libGeoIP2 API, could you add this bif to bro.bif? 

You can later eliminate this from bro.bif or reintegrate it as you see fit. 


> Bif for a new function to calculates haversine distance between two geoip 
> locations
> ---
>
> Key: BIT-1472
> URL: https://bro-tracker.atlassian.net/browse/BIT-1472
> Project: Bro Issue Tracker
>  Issue Type: New Feature
>  Components: Bro
>Affects Versions: 2.4
>    Reporter: Aashish Sharma
>Assignee: Justin Azoff
>Priority: Low
>  Labels: bif, function
> Fix For: 2.5
>
>
> Merge request for:
> topic/aashish/haversine
> ## ## Calculates haversine distance between two geoip locations
> ##
> ##
> ## lat1, long1, lat2, long2
> ##
> ## Returns: distance in miles
> ## function haversine_distance%(lat1:double, long1:double, lat2:double, 
> long2:double %): double
> accompanying bro policy in base/utils/haversine_distance_ip.bro
> module GLOBAL;
> ## Returns the haversine distance between two IP addresses based on GeoIP
> ## database locations
> ##
> ##
> ## orig: the address of orig connection
> ## resp: the address of resp server
> ## Returns: the GeoIP distance between orig and resp in miles
> function haversine_distance_ip(orig: addr, resp: addr): double



--
This message was sent by Atlassian JIRA
(v7.2.0-OD-04-029#72002)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] MOTS and bro ?

2016-03-21 Thread Aashish Sharma
I got a query from ANL about Bro's capability to detect MOTS: 

   "I had a question for you – I was at a talk last week, and someone was
   talking about a Man on the Side attack. The presenter had indicated that
   suricata was currently the only tool doing this detection, but that they
   thought an update to bro was in work – that would add that capability into
   bro as well.
   Was the speaker correct ?
   Do you know if bro currently can detect MOTS ?
   " 



Wondering: is MOTS detection something we worry about in the bro world, and any 
feedback for my reply ? 

Aashish 

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] current_time() vs network_time()

2015-11-19 Thread Aashish Sharma
> I'm not sure what you have available but to generate the unix timestamp
> I would use localtime() or gmtime() (using gmtime() avoids daylight

Here is the function I am now using (sharing - might be useful to improve upon)

Index: ../../all-check.bro
===
--- ../../all-check.bro (revision 819)
+++ ../../all-check.bro (working copy)

+function next_report_time():time
+{
+   local kv_splitter: pattern = /[\ \t]+/;
+local one_space: string = " ";

+   local _report_hours: vector of count = {0, 10, 12, 14, 16, 23};
+
+   local t = current_time();
+   local _now_h = to_count(strftime("%H", t));
+
+   local _next_report_hour : count = 0 ;
+
+   for (h in _report_hours)
+   {
+   print fmt ("now_h is %s, H is %s", _now_h, _report_hours[h]) ;
+   if (_now_h < _report_hours[h])
+   {
+   _next_report_hour = _report_hours[h]  ; break;
+   }
+   }
+
+   local t_year = strftime("%Y",t);
+   local t_zone = strftime("%Z",t);
+   local zone_year_month_day = strftime("%Z %Y %b %d", t);
+
+   local _hour = _next_report_hour ;
+   local _min = "00" ;
+   local _sec = "00" ;
+
+   local _t_string = fmt ("%s %s:%s:%s", zone_year_month_day, 
_hour,_min,_sec );
+
+   local _next_report_time = fmt ("time is :  %s, %s", strftime("%Z %Y %b 
%d %T", t), _t_string) ;
+
+   local parse_string: string = "%Z %Y %b %d %H:%M:%S";
+   local date_mod = fmt("%s", _t_string);
+   local date_mod_p = gsub(date_mod, kv_splitter, one_space);
+   local ret_val = strptime(parse_string, date_mod_p);
+
+   return ret_val ;
+}
+

And then basically: 

event bro_init() &priority=10
{

+   nrt =  next_report_time() ;

} 


event report_allcheck()
 {

 +   #if((report_hour == 0 || report_hour == 10 || report_hour == 12 || 
report_hour == 14
 +   ##|| report_hour == 16 || report_hour == 23)  && 
report_min == 0 && report_sec == 0)
 +
 +if (current_time() > nrt)
  {
  +   nrt = next_report_time();

  } 
 } 



On Wed, Nov 18, 2015 at 11:34:39AM -0800, Craig Leres wrote:
> On 11/18/2015 10:58 AM, Aashish Sharma wrote:
> > So, I am trying to have bro send me report/alerts at specific timeslots. 
> > 
> > Given current_time is the wall-clock time, I am relying on current_time() 
> > function to get time and then, my code is : if (hh:mm:ss == desired time), 
> > run a report. 
> 
> My recommendation for how to implement this would be to calculate a unix
> timestamp (seconds since 1970) that corresponds to the next time you
> want send a report and then poll for when time() is >= this value. After
> sending the report, calculate the next timestamp.
> 
> I'm not sure what you have available but to generate the unix timestamp
> I would use localtime() or gmtime() (using gmtime() avoids daylight
> saving time issues) to break out the fields, set the H, M and S to the
> desired values and then use mktime() (or timegm()) to convert back to a
> unix timestamp.
> 
> Craig
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] current_time() vs network_time()

2015-11-18 Thread Aashish Sharma
So, I am trying to have bro send me reports/alerts at specific timeslots. 

Given current_time is the wall-clock time, I am relying on the current_time() 
function to get the time, and then my code is: if (hh:mm:ss == desired time), run 
a report.  I noticed inconsistencies, so here is a more detailed debug log: 

I notice jumps in the current_time:

Report time is 1447869593.121702, report hour is 9:59:53
Report time is 1447869595.234395, report hour is 9:59:55
Report time is 1447869596.45385, report hour is 9:59:56
Report time is 1447869597.636261, report hour is 9:59:57
Report time is 1447869598.597632, report hour is 9:59:58
Report time is 1447869599.628088, report hour is 9:59:59
Report time is 1447869601.926001, report hour is 10:0:1  <- no 10:0:0 ? 
Report time is 1447869603.182218, report hour is 10:0:3  <--- jump 
Report time is 1447869604.166191, report hour is 10:0:4
Report time is 1447869605.647308, report hour is 10:0:5
Report time is 1447869606.499426, report hour is 10:0:6
Report time is 1447869607.383869, report hour is 10:0:7
Report time is 1447869617.52706, report hour is 10:0:17  <- big jump 
Report time is 1447869618.188414, report hour is 10:0:18
Report time is 1447869619.04252, report hour is 10:0:19  <- stall ? 
Report time is 1447869619.733979, report hour is 10:0:19 <--- stall ? 
Report time is 1447869622.635545, report hour is 10:0:22
Report time is 1447869623.28335, report hour is 10:0:23


I believe network_time would probably be somewhat better and will try to see 
how that fares for my use case. Any idea why I see such jumps on the wall-clock 
times ? I'd think this should be rather more reliable ?

Thanks, 
Aashish 




___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] current_time() vs network_time()

2015-11-18 Thread Aashish Sharma
> My recommendation for how to implement this would be to calculate a unix
> timestamp (seconds since 1970) that corresponds to the next time you
> want send a report and then poll for when time() is >= this value. After
> sending the report, calculate the next timestamp.

ah! Much better way! Thanks Craig!  

Aashish 

On Wed, Nov 18, 2015 at 11:34:39AM -0800, Craig Leres wrote:
> On 11/18/2015 10:58 AM, Aashish Sharma wrote:
> > So, I am trying to have bro send me report/alerts at specific timeslots. 
> > 
> > Given current_time is the wall-clock time, I am relying on current_time() 
> > function to get time and then, my code is : if (hh:mm:ss == desired time), 
> > run a report. 
> 
> My recommendation for how to implement this would be to calculate a unix
> timestamp (seconds since 1970) that corresponds to the next time you
> want send a report and then poll for when time() is >= this value. After
> sending the report, calculate the next timestamp.
> 
> I'm not sure what you have available but to generate the unix timestamp
> I would use localtime() or gmtime() (using gmtime() avoids daylight
> saving time issues) to break out the fields, set the H, M and S to the
> desired values and then use mktime() (or timegm()) to convert back to a
> unix timestamp.
> 
>   Craig
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-835) Porting Drop and Catch-n-release to 2.0

2015-09-04 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=21955#comment-21955
 ] 

Aashish Sharma commented on BIT-835:



I'd take this one! 

On Fri, Sep 04, 2015 at 07:52:00AM -0500, Seth Hall (JIRA) wrote:
We just need someone to take it on once enough of the infrastructure is in 
place.


> Porting Drop and Catch-n-release to 2.0
> ---
>
> Key: BIT-835
> URL: https://bro-tracker.atlassian.net/browse/BIT-835
> Project: Bro Issue Tracker
>  Issue Type: New Feature
>  Components: Bro
>Affects Versions: git/master
>    Reporter: Aashish Sharma
> Fix For: 2.5
>
> Attachments: drop.bro, drop-catch-n-release.patch, scan.bro, 
> test-drop-connectivity, test-restore-connectivity
>
>
> The following patch ports the drop.bro to bro-2.0+ (along with 
> catch-n-release functionality) 
> [originally written for bro-1.5.3 and prior versions by Jim Mellander and 
> Robin Sommer|policies] 
> Also attaching scan.bro (which is ported to 2.0) 
> scan.bro and drop.bro files need to go into policy/protocols/conn/ folder. 
> Also adding test-drop-connectivity and test-restore-connectivity scripts 
> which should go into aux/broctl/bin/ 
> This patch and policies have been operational at LBNL for a few months now 
> with bro-2.0. 
> (sorry haven't created my own branch to commit these) \\- please let me know 
> if this need to be otherwise.



--
This message was sent by Atlassian JIRA
(v7.0.0-OD-02-259#70102)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] [JIRA] (BIT-835) Porting Drop and Catch-n-release to 2.0

2015-09-04 Thread Aashish Sharma
> We just need someone to take it on once enough of the infrastructure is in 
> place.

I'd take this one! 

On Fri, Sep 04, 2015 at 07:52:00AM -0500, Seth Hall (JIRA) wrote:
> 
> [ 
> https://bro-tracker.atlassian.net/browse/BIT-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=21943#comment-21943
>  ] 
> 
> Seth Hall commented on BIT-835:
> ---
> 
> We should be able to get a very nice version of this into 2.5 with the Broker 
> key-value store and the net control framework.  
We just need someone to take it on once enough of the infrastructure is in 
place.
> 
> > Porting Drop and Catch-n-release to 2.0
> > ---
> >
> > Key: BIT-835
> > URL: https://bro-tracker.atlassian.net/browse/BIT-835
> > Project: Bro Issue Tracker
> >  Issue Type: New Feature
> >      Components: Bro
> >Affects Versions: git/master
> >Reporter: Aashish Sharma
> > Fix For: 2.5
> >
> > Attachments: drop.bro, drop-catch-n-release.patch, scan.bro, 
> > test-drop-connectivity, test-restore-connectivity
> >
> >
> > The following patch ports the drop.bro to bro-2.0+ (along with 
> > catch-n-release functionality) 
> > [originally written for bro-1.5.3 and prior versions by Jim Mellander and 
> > Robin Sommer|policies] 
> > Also attaching scan.bro (which is ported to 2.0) 
> > scan.bro and drop.bro files need to go into policy/protocols/conn/ folder. 
> > Also adding test-drop-connectivity and test-restore-connectivity scripts 
> > which should go into aux/broctl/bin/ 
> > This patch and policies have been operational at LBNL for a few months now 
> > with bro-2.0. 
> > (sorry haven't created my own branch to commit these) \\- please let me 
> > know if this need to be otherwise.
> 
> 
> 
> --
> This message was sent by Atlassian JIRA
> (v7.0.0-OD-02-259#70102)
> ___
> bro-dev mailing list
> bro-dev@bro.org
> http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-09-04 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=21954#comment-21954
 ] 

Aashish Sharma commented on BIT-1396:
-

Please close it! 

If I encounter this again, I will request a new ticket !!! 

> Logs disappearing on broctl restart
> ---
>
> Key: BIT-1396
> URL: https://bro-tracker.atlassian.net/browse/BIT-1396
> Project: Bro Issue Tracker
>  Issue Type: Problem
>  Components: BroControl
>Affects Versions: 2.4
>    Reporter: Aashish Sharma
>Assignee: Daniel Thayer
>Priority: High
> Fix For: 2.4
>
>
> Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
> Restarts happen as
> - broctl check; broctl restart
> - broctl check; broctl restart --clean
> - broctl restart
> or some variant - not precisely sure. But all log files for that duration of 
> restarts are missing



--
This message was sent by Atlassian JIRA
(v7.0.0-OD-02-259#70102)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1472) Bif for a new function to calculates haversine distance between two geoip locations

2015-09-03 Thread Aashish Sharma (JIRA)
Aashish Sharma created BIT-1472:
---

 Summary: Bif for a new function to calculates haversine distance 
between two geoip locations
 Key: BIT-1472
 URL: https://bro-tracker.atlassian.net/browse/BIT-1472
 Project: Bro Issue Tracker
  Issue Type: New Feature
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma


Merge request for:

topic/aashish/haversine


## ## Calculates haversine distance between two geoip locations
##
##
## lat1, long1, lat2, long2
##
## Returns: distance in miles
## function haversine_distance%(lat1:double, long1:double, lat2:double, 
long2:double %): double


accompanying bro policy in base/utils/haversine_distance_ip.bro

module GLOBAL;

## Returns the haversine distance between two IP addresses based on GeoIP
## database locations
##
##
## orig: the address of orig connection
## resp: the address of resp server
## Returns: the GeoIP distance between orig and resp in miles
function haversine_distance_ip(orig: addr, resp: addr): double
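
For reference, the haversine distance the bif is meant to compute is the standard 
great-circle formula (coordinates converted to radians, R the Earth's radius, here 
in miles):

    a = sin^2((lat2 - lat1)/2) + cos(lat1) * cos(lat2) * sin^2((long2 - long1)/2)
    d = 2 * R * asin(sqrt(a))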




--
This message was sent by Atlassian JIRA
(v7.0.0-OD-02-259#70102)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-06-14 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20918#comment-20918
 ] 

Aashish Sharma commented on BIT-1396:
-

Issue Remains. 

I am not sure what specific crashes of bro are causing it, but yes, logs are not 
getting archived. 

While I have not been able to reproduce this manually, there are quite a few of 
these events which happened automatically since Jun 1st:

Logs got moved to ~/spool/tmp but never got archived:

 36Gpost-terminate-2015-06-02-13-50-24-6473-crash
9.4Gpost-terminate-2015-06-03-15-05-04-18332-crash
 11Gpost-terminate-2015-06-05-15-05-05-12274-crash
9.4Gpost-terminate-2015-06-08-15-05-45-71408-crash
 11Gpost-terminate-2015-06-11-15-05-45-5191-crash


 Logs disappearing on broctl restart
 ---

 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma
Priority: High
 Fix For: 2.4


 Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
 Restarts happen as
 - broctl check; broctl restart
 - broctl check; broctl restart --clean
 - broctl restart
 or some variant - not precisely sure. But all log files for that duration of 
 restarts are missing



--
This message was sent by Atlassian JIRA
(v6.5-OD-05-041#65001)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] some Broker questions

2015-06-03 Thread Aashish Sharma
I am trying to use BrokerStore with a master and a clone setup, whereby I was 
thinking of using the master on the manager and all the workers as clones. However, 
I am somewhat confused by a few things - attaching the sample policies used: 

1) I see that stores-listener.bro has clone created into it and 
store-connector.bro has master in it.

Does that mean the idea is to have workers run listener and manager run 
connector ? Which fundamentally means manager connects to the workers ?
Or this is open to 'case-by-case' basis ?

2) What exactly does bro/event/ready mean ? Is the idea here to compartmentalize 
various events for various policies ?
 something like bro/event/tor-ban/blah ?  

2b) Is it right to understand that with auto_event the event will be 
automatically called on workers if called on manager ? 

2c) How do I trigger a clone to update the master (how often or can I trigger 
updates on certain conditions ? ) 

3) Since all the action happens in event 
BrokerComm::outgoing_connection_established  I don't see way to pass data to 
it. 

Do I need to create global variables and then use them in this event ? I mean 
whats a good way to pass/use data to this event ?

3b) How is BrokerComm::outgoing_connection_established event triggered ? Does 
using BrokerStore::insert in some other event also trigger the updates to 
master from the clone ? 

4) Somewhat whimsical issue:  Why is peer_address of string type when we have 
peer_port as the port data type? Shouldn't peer_address be of the address data type ? 
I was hoping maybe one can use DNS names, and that's why, but I cannot seem to get 
that working ? 

4b) Shouldn't this event be better off as : event 
BrokerComm::outgoing_connection_established(p: peer)

Oh also, I see that it supports sets, but it seems like it doesn't support tables ?

I am really liking Broker from what my current understanding is so far. Its 
tremendously powerful.

Thanks,
Aashish

const broker_port= /tcp redef ; 
redef exit_only_after_terminate = T ;


global h:  opaque of BrokerStore::Handle ; 

function dv(d: BrokerComm::Data): BrokerComm::DataVector 
{
local rval: BrokerComm::DataVector ;
rval[0] = d; 
return rval ;
}

global ready: event(); 

event BrokerComm::outgoing_connection_broken(peer_address: string, peer_port: 
port)
{
terminate(); 
} 

event BrokerComm::outgoing_connection_established(peer_address: string, 
peer_port: port, peer_name: string)
{

local myset: set[string] = ["a", "b", "c", "d"];
local myvec: vector of string = ["alpha", "beta", "gamma", "theta"] ; 

h = BrokerStore::create_master("mystore"); 
BrokerStore::insert(h, BrokerComm::data("one"), BrokerComm::data(110)); 
BrokerStore::insert(h, BrokerComm::data("two"), BrokerComm::data(223)); 
BrokerStore::insert(h, BrokerComm::data("myset"), BrokerComm::data(myset)); 
BrokerStore::insert(h, BrokerComm::data("myvec"), BrokerComm::data(myvec)); 

BrokerStore::increment(h, BrokerComm::data("one")); 
BrokerStore::increment(h, BrokerComm::data("two")); 

BrokerStore::add_to_set(h, BrokerComm::data("myset"), BrokerComm::data("e")); 
BrokerStore::remove_from_set(h, BrokerComm::data("myset"), BrokerComm::data("b")); 

BrokerStore::push_left(h, BrokerComm::data("myvec"), dv(BrokerComm::data("delta"))); 
BrokerStore::push_right(h, BrokerComm::data("myvec"), dv(BrokerComm::data("omega"))); 

when (local res = BrokerStore::size(h) )
{
print "master size", res; 
event ready(); 
}   
timeout 10 sec 
{ print "timeout" ; } 
} 


event bro_init()
{
BrokerComm::enable(); 
BrokerComm::connect(127.0.0.1, broker_port, 1 secs); 
BrokerComm::auto_event(bro/event/ready, ready); 

}
# stores-listener.bro (clone side)

const broker_port: port = /tcp &redef;
redef exit_only_after_terminate = T;

global h: opaque of BrokerStore::Handle;
global expected_key_count = 4;
global key_count = 0;

function do_lookup(key: string)
    {
    when ( local res = BrokerStore::lookup(h, BrokerComm::data(key)) )
        {
        ++key_count;
        print "lookup", key, res;

        if ( key_count == expected_key_count )
            terminate();
        }
    timeout 10 sec
        {
        print "timeout", key;
        }
    }

event ready()
    {
    h = BrokerStore::create_clone("mystore");

    when ( local res = BrokerStore::keys(h) )
        {
        print "clone keys", res;

        do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 0)));
        do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 1)));
        do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 2)));
        do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 3)));
        }
    timeout 10 sec
        { print "timeout"; }
    }

[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-05-19 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20715#comment-20715
 ] 

Aashish Sharma commented on BIT-1396:
-

I found the 'missing' logs in spool/tmp/crash_dump dir. 

However, as I mentioned before, this crash_dump wasn't anything outstanding 
(likely a script bug), so I never noticed it happening when doing broctl restart 
etc.  

So the logs didn't quite disappear, at least, but they weren't moved to the archive 
folder either. 

I think I am supposed to send Daniel stderr.out. Doing that now. 

Aashish 



-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


 Logs disappearing on broctl restart
 ---

 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma
Priority: High
 Fix For: 2.4


 Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
 Restarts happen as
 - broctl check; broctl restart
 - broctl check; broctl restart --clean
 - broctl restart
 or some variant - not precisely sure. But all log files for that duration of 
 restarts are missing



--
This message was sent by Atlassian JIRA
(v6.5-OD-03-002#65000)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] broctl restart --clean

2015-05-15 Thread Aashish Sharma
Just a thought, 

broctl restart --clean at present does operations in the following sequence: 

1) Stop running bro
2) clean up nodes
3) check configurations 
4) install new config 
5) start bro. 

If scripts are buggy, this fails at step (3), and now I am debugging 
scripts while bro is not running. 

I think restart --clean should first check configurations (step 3) and then, on 
success, move further - otherwise stop. 

Buggy/typo-ridden scripts are preferably debugged while bro is still running. 

Aashish 


-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-05-14 Thread Aashish Sharma (JIRA)
Aashish Sharma created BIT-1396:
---

 Summary: Logs disappearing on broctl restart
 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
 Environment: Noticed that on certain restarts of bro-2.4-beta, logs 
arbitrarily disappear. 

Restarts happen as 
- broctl check; broctl restart 
- broctl check; broctl restart --clean 
- broctl restart 

or some variant - not precisely sure. But all log files for that duration of 
restarts are missing. 


Reporter: Aashish Sharma






--
This message was sent by Atlassian JIRA
(v6.5-OD-03-002#65000)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-05-14 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20702#comment-20702
 ] 

Aashish Sharma commented on BIT-1396:
-

Example:

-rw-r--r--  1 bro  bro81M May 13 13:33 
conn.log.mgr.2015-05-13-13:20:11-13:33:12.gz
-rw-r--r--  1 bro  bro   3.1G May 13 13:36 
conn.log.mgr.2015-05-13-00:00:00-13:19:59.gz
-rw-r--r--  1 bro  bro   420M May 13 14:31 
conn.log.mgr.2015-05-13-13:33:24-14:29:41.gz
??? (bro was running)
-rw-r--r--  1 bro  bro   1.4G May 14 00:06 
conn.log.mgr.2015-05-13-16:48:02-00:00:00.gz


Logs from 14:30 to 16:48 were not archived and disappeared. (This shows conn.log, 
but it's everything.)

Likewise:



 Logs disappearing on broctl restart
 ---

 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma
 Fix For: 2.4


 Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
 Restarts happen as
 - broctl check; broctl restart
 - broctl check; broctl restart --clean
 - broctl restart
 or some variant - not precisely sure. But all log files for that duration of 
 restarts are missing



--
This message was sent by Atlassian JIRA
(v6.5-OD-03-002#65000)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-05-14 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20706#comment-20706
 ] 

Aashish Sharma commented on BIT-1396:
-

Yes, nothing in stderr.log - likely got over-written by one of the later bro 
restarts. 


-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


 Logs disappearing on broctl restart
 ---

 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma
Priority: High
 Fix For: 2.4


 Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
 Restarts happen as
 - broctl check; broctl restart
 - broctl check; broctl restart --clean
 - broctl restart
 or some variant - not precisely sure. But all log files for that duration of 
 restarts are missing



--
This message was sent by Atlassian JIRA
(v6.5-OD-03-002#65000)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-05-14 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20704#comment-20704
 ] 

Aashish Sharma commented on BIT-1396:
-

Ah! Yes, I see logs in spool/tmp/post-terminate--MM-DD-HH-MM-SS-PID. 

Is there a way to know that archival failed, and what caused the failure? Unless 
one explicitly goes and accounts for entire days of logs, this seems to be a silent 
failure. One might end up unknowingly missing chunks of logs. 

If I recall, my workflow was: edit script, introduce a bug, broctl start fails, 
fix the bug, retry. I am still not sure which action caused the archival failure. 

Aashish 


-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


 Logs disappearing on broctl restart
 ---

 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma
Priority: High
 Fix For: 2.4


 Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
 Restarts happen as
 - broctl check; broctl restart
 - broctl check; broctl restart --clean
 - broctl restart
 or some variant - not precisely sure. But all log files for that duration of 
 restarts are missing



--
This message was sent by Atlassian JIRA
(v6.5-OD-03-002#65000)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-05-14 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20707#comment-20707
 ] 

Aashish Sharma commented on BIT-1396:
-

Um! Well, the stderr.log in spool/tmp/post-terminate is there, but nothing stands 
out in it. I can email it to you, if you'd like. 

Aashish 

 Logs disappearing on broctl restart
 ---

 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma
Priority: High
 Fix For: 2.4


 Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
 Restarts happen as
 - broctl check; broctl restart
 - broctl check; broctl restart --clean
 - broctl restart
 or some variant - not precisely sure. But all log files for that duration of 
 restarts are missing



--
This message was sent by Atlassian JIRA
(v6.5-OD-03-002#65000)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1396) Logs disappearing on broctl restart

2015-05-14 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20706#comment-20706
 ] 

Aashish Sharma edited comment on BIT-1396 at 5/14/15 6:19 PM:
--

Yes, nothing in stderr.log - likely got over-written by one of the later bro 
restarts. 


Aashish


was (Author: asha...@lbl.gov):
Yes, nothing in stderr.log - likely got over-written by one of the later bro 
restarts. 


-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


 Logs disappearing on broctl restart
 ---

 Key: BIT-1396
 URL: https://bro-tracker.atlassian.net/browse/BIT-1396
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.4
Reporter: Aashish Sharma
Priority: High
 Fix For: 2.4


 Noticed that on certain restarts of bro-2.4-beta, logs arbitrarily disappear.
 Restarts happen as
 - broctl check; broctl restart
 - broctl check; broctl restart --clean
 - broctl restart
 or some variant - not precisely sure. But all log files for that duration of 
 restarts are missing



--
This message was sent by Atlassian JIRA
(v6.5-OD-03-002#65000)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1306) bro process would get stuck/freeze with myricom drivers

2015-04-12 Thread Aashish Sharma (JIRA)

 [ 
https://bro-tracker.atlassian.net/browse/BIT-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aashish Sharma updated BIT-1306:


Yes - so sorry I couldn't get to it sooner. Yes, the patch fixes the problem. 

Aashish 


-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


 bro process would get stuck/freeze with myricom drivers
 ---

 Key: BIT-1306
 URL: https://bro-tracker.atlassian.net/browse/BIT-1306
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: git/master
 Environment:  OS: FreeBSD 9.3-RELEASE-p5 OS
 bro version 2.3-328
 git log -1 --format=%H
 379593c7fded0f9791ae71a52dd78a4c9d5a2c1f
Reporter: Aashish Sharma
Assignee: Robin Sommer
  Labels: bro-git, myricom
 Fix For: 2.4


 When I stop bro (in cluster mode), one of the bro worker process (random) 
 would get stuck and wouldn't shutdown, stop or even be killed using kill -s 
 9. 
 System has to be ultimately rebooted to remove stuck bro process. 
 On running  myri_start_stop I see:
 # /usr/local/opt/snf/sbin/myri_start_stop stop
 Removing myri_snf.ko
 kldunload: can't unload file: Device busy
 It appears that the myri_snf.ko driver cannot be unloaded because of the 
 stuck bro process.  That process still has an open descriptor on the Sniffer 
 device/driver and bro process freezes 
 More details:
 The bro process is stuck in RNE state
 R   Marks a runnable process.
 N   The process has reduced CPU scheduling priority (see setpriority(2)).
 E   The process is trying to exit.
 Here is an example:
 ### stuck process:
 [bro@01 ~]$ ps auxwww | fgrep 1616
 bro1616  100.0  0.0 758040 60480 ??  RNE   2:57PM   53:50.04 
 /usr/local/bro-git/bin/bro -i myri0 -U .status -p broctl -p broctl-live -p 
 local -p worker-1-1 mgr.bro broctl base/frameworks/cluster local-worker.bro 
 broctl/auto
 when checking for process in proc:
 [bro@c ~]$ ls -l /proc/1616
 ls: /proc/1616: No such file or directory



--
This message was sent by Atlassian JIRA
(v6.4-OD-16-006#64014)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1370) SIP Analyzer

2015-04-03 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20222#comment-20222
 ] 

Aashish Sharma commented on BIT-1370:
-

I've been running Vlad's branch (2443d319112fd345878766618951c56c2fd65fbd) for 
a long while, and for all practical purposes it has been running stably, 
blocking SIP scanners and logging SIP sessions. 

There are a couple of unknown_SIP_method entries (SUBSCRIBE and NOTIFY) in the 
weird log. I will send Vlad pcaps for these specific ones. At present, I don't know 
if these are affecting anything per se. 



 SIP Analyzer
 

 Key: BIT-1370
 URL: https://bro-tracker.atlassian.net/browse/BIT-1370
 Project: Bro Issue Tracker
  Issue Type: New Feature
  Components: Bro
Affects Versions: 2.4
Reporter: Vlad Grigorescu
Assignee: Vlad Grigorescu

 topic/vladg/sip has a SIP analyzer.



--
This message was sent by Atlassian JIRA
(v6.4-OD-16-006#64014)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1326) Broctl installation requires sqlite but does not check for its presence

2015-03-18 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=20022#comment-20022
 ] 

Aashish Sharma commented on BIT-1326:
-

I am trying to test some stuff with the current master on FreeBSD. 

Any idea when this will be fixed, and/or any hints on a workaround? 

Thanks, 

 Broctl installation requires sqlite but does not check for its presence
 ---

 Key: BIT-1326
 URL: https://bro-tracker.atlassian.net/browse/BIT-1326
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: BroControl
Affects Versions: git/master
Reporter: Johanna Amann
 Fix For: 2.4


 Trying to start broctl on a new installation of FreeBSD with a standard 
 python installation results in the following error message upon first start:
 {code}
 [bro@marge ~/master]$ broctl
 Traceback (most recent call last):
   File /xa/bro/master/bin/broctl, line 29, in module
 from BroControl.broctl import BroCtl
   File /xa/bro/master/lib/broctl/BroControl/broctl.py, line 8, in module
 from BroControl import util
   File /xa/bro/master/lib/broctl/BroControl/util.py, line 6, in module
 from BroControl import config
   File /xa/bro/master/lib/broctl/BroControl/config.py, line 10, in module
 from .state import SqliteState
   File /xa/bro/master/lib/broctl/BroControl/state.py, line 2, in module
 import sqlite3
   File /usr/local/lib/python2.7/sqlite3/__init__.py, line 24, in module
 from dbapi2 import *
   File /usr/local/lib/python2.7/sqlite3/dbapi2.py, line 28, in module
 from _sqlite3 import *
 ImportError: No module named _sqlite3
 {code}
 We should probably check for the module in cmake and refuse installation if 
 it is not present.



--
This message was sent by Atlassian JIRA
(v6.4-OD-15-055#64014)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1182) Input-framework thread spawn

2015-03-13 Thread Aashish Sharma (JIRA)

 [ 
https://bro-tracker.atlassian.net/browse/BIT-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aashish Sharma updated BIT-1182:

Resolution: Fixed
Status: Closed  (was: Open)

I tested out 30K+ adds/deletes with the input framework. The problem was that I was 
inadvertently firing the exec framework. I have modified/fixed my script now. 

Closing this ticket - events created with the input framework have near-negligible 
overhead! 


 Input-framework thread spawn
 

 Key: BIT-1182
 URL: https://bro-tracker.atlassian.net/browse/BIT-1182
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.2
Reporter: Aashish Sharma
  Labels: input-framework

 Using the mode REREAD, I noticed that the input framework spawns a thread for 
 every add/change/delete of the elements in the feed file. 
 This is a VERY desirable feature and powerful capability, and it works quite well 
 in general settings. 
 Since every change in the file spawns a thread to process EVENT_NEW, 
 EVENT_CHANGED, or EVENT_REMOVED, if there are, let's say, 5000 changes in the file, 
 5000 threads are spawned at the same time. This is still alright: the 
 system can handle the load and processing is done in a few seconds.
 However, if I include a when statement along with exec framework usage to 
 execute an action in Input::EVENT_NEW, Input::EVENT_CHANGED or 
 Input::EVENT_REMOVED - all the threads spawned together freeze bro and stop it 
 from processing any packets at all. 
 It would be nice if we could serialize this thread creation and spawn only a 
 few at a time. This way we can spread the increased load over the next N minutes 
 instead of freezing bro to a standstill. 
 (As always, please let me know if you want code to be able to reproduce this 
 issue.) 



--
This message was sent by Atlassian JIRA
(v6.4-OD-15-055#64014)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1335) Extract all files policy script

2015-03-13 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=19942#comment-19942
 ] 

Aashish Sharma commented on BIT-1335:
-

I prefer keeping protocol + fid - it is easy to sort extracted files into different 
buckets quickly when going through a big pcap. Generally there isn't a big need 
to tie a file back to its session, since the extractions move forward in the 
workflow; the fid is sufficient to tie back to other logs. 

I am sure you have a better use case for uid+timestamp. I cannot quite think of 
one. 

(I take it the timestamp is for the case where multiple files are part of the same uid?) 





 Extract all files policy script
 ---

 Key: BIT-1335
 URL: https://bro-tracker.atlassian.net/browse/BIT-1335
 Project: Bro Issue Tracker
  Issue Type: New Feature
  Components: Bro
Affects Versions: 2.4
Reporter: grigorescu
Assignee: Jon Siwek
Priority: Trivial
 Fix For: 2.4


 We've mentioned a few times that it'd be nice to have an extract all files 
 policy script that ships with Bro. Can we get this into 2.4?



--
This message was sent by Atlassian JIRA
(v6.4-OD-15-055#64014)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1182) Input-framework thread spawn

2015-03-03 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=19908#comment-19908
 ] 

Aashish Sharma commented on BIT-1182:
-

Ah! That makes sense now. You are correct - I was inadvertently calling the exec 
framework after an entry gets removed. 

In fact, that's what I was working on fixing when I saw this comment. 

I will report back once I eliminate the exec framework spawn for removals and make 
that a batch job (only one exec call for all removals).

Thanks, 
Aashish 


 Input-framework thread spawn
 

 Key: BIT-1182
 URL: https://bro-tracker.atlassian.net/browse/BIT-1182
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.2
Reporter: Aashish Sharma
  Labels: input-framework

 Using the mode REREAD, I noticed that the input framework spawns a thread for 
 every add/change/delete of the elements in the feed file. 
 This is a VERY desirable feature and powerful capability, and it works quite well 
 in general settings. 
 Since every change in the file spawns a thread to process EVENT_NEW, 
 EVENT_CHANGED, or EVENT_REMOVED, if there are, let's say, 5000 changes in the file, 
 5000 threads are spawned at the same time. This is still alright: the 
 system can handle the load and processing is done in a few seconds.
 However, if I include a when statement along with exec framework usage to 
 execute an action in Input::EVENT_NEW, Input::EVENT_CHANGED or 
 Input::EVENT_REMOVED - all the threads spawned together freeze bro and stop it 
 from processing any packets at all. 
 It would be nice if we could serialize this thread creation and spawn only a 
 few at a time. This way we can spread the increased load over the next N minutes 
 instead of freezing bro to a standstill. 
 (As always, please let me know if you want code to be able to reproduce this 
 issue.) 



--
This message was sent by Atlassian JIRA
(v6.4-OD-15-055#64014)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1306) bro process would get stuck/freeze with myricom drivers

2015-01-22 Thread Aashish Sharma (JIRA)
Aashish Sharma created BIT-1306:
---

 Summary: bro process would get stuck/freeze with myricom drivers
 Key: BIT-1306
 URL: https://bro-tracker.atlassian.net/browse/BIT-1306
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: git/master
 Environment:  OS: FreeBSD 9.3-RELEASE-p5 OS

bro version 2.3-328

git log -1 --format=%H
379593c7fded0f9791ae71a52dd78a4c9d5a2c1f

Reporter: Aashish Sharma


When I stop bro (in cluster mode), one of the bro worker process (random) would 
get stuck and wouldn't shutdown, stop or even be killed using kill -s 9. 

System has to be ultimately rebooted to remove stuck bro process. 
On running  myri_start_stop I see:

# /usr/local/opt/snf/sbin/myri_start_stop stop
Removing myri_snf.ko
kldunload: can't unload file: Device busy

It appears that the myri_snf.ko driver cannot be unloaded because of the stuck 
bro process.  That process still has an open descriptor on the Sniffer 
device/driver and bro process freezes 

More details:

The bro process is stuck in RNE state

R   Marks a runnable process.
N   The process has reduced CPU scheduling priority (see setpriority(2)).
E   The process is trying to exit.

Here is an example:

### stuck process:

[bro@01 ~]$ ps auxwww | fgrep 1616
bro1616  100.0  0.0 758040 60480 ??  RNE   2:57PM   53:50.04 
/usr/local/bro-git/bin/bro -i myri0 -U .status -p broctl -p broctl-live -p 
local -p worker-1-1 mgr.bro broctl base/frameworks/cluster local-worker.bro 
broctl/auto

when checking for process in proc:

[bro@c ~]$ ls -l /proc/1616
ls: /proc/1616: No such file or directory




--
This message was sent by Atlassian JIRA
(v6.4-OD-13-026#64011)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] [JIRA] (BIT-1286) Add policy script for Windows version detection via CryptoAPI HTTP Traffic

2014-11-03 Thread Aashish Sharma
This is a very neat policy for sure!!

On Mon, Nov 03, 2014 at 12:56:07PM -0600, grigorescu (JIRA) wrote:
 
 [ 
 https://bro-tracker.atlassian.net/browse/BIT-1286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=18702#comment-18702
  ] 
 
 grigorescu commented on BIT-1286:
 -
 
 Forgot to mention the branch :-). It's in topic/vladg/cryptoapi
 
  Add policy script for Windows version detection via CryptoAPI HTTP Traffic
  --
 
  Key: BIT-1286
  URL: https://bro-tracker.atlassian.net/browse/BIT-1286
  Project: Bro Issue Tracker
   Issue Type: New Feature
   Components: Bro
 Affects Versions: git/master
 Reporter: grigorescu
 
  Windows systems access a Microsoft Certificate Revocation List (CRL) 
  periodically. The user agent for these requests reveals which version of 
  Crypt32.dll is installed on the system, which can uniquely identify the 
  version of Windows that's running.
  This branch adds a Software framework policy script that will log the version of 
  Windows that was identified.
 
 
 
 --
 This message was sent by Atlassian JIRA
 (v6.4-OD-09-005#64005)
 ___
 bro-dev mailing list
 bro-dev@bro.org
 http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


pgpKTIKEDDVf8.pgp
Description: PGP signature
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1286) Add policy script for Windows version detection via CryptoAPI HTTP Traffic

2014-11-03 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=18703#comment-18703
 ] 

Aashish Sharma commented on BIT-1286:
-

This is a very neat policy for sure!!


-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


 Add policy script for Windows version detection via CryptoAPI HTTP Traffic
 --

 Key: BIT-1286
 URL: https://bro-tracker.atlassian.net/browse/BIT-1286
 Project: Bro Issue Tracker
  Issue Type: New Feature
  Components: Bro
Affects Versions: git/master
Reporter: grigorescu

 Windows systems access a Microsoft Certificate Revocation List (CRL) 
 periodically. The user agent for these requests reveals which version of 
 Crypt32.dll is installed on the system, which can uniquely identify the version 
 of Windows that's running.
 This branch adds a Software framework policy script that will log the version 
 of Windows that was identified.



--
This message was sent by Atlassian JIRA
(v6.4-OD-09-005#64005)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] Plugins providing threads?

2014-10-07 Thread Aashish Sharma
Very timely question - I've been mulling over this, and I would like to vote for 
adding a thread component. 

This may allow us to do a lot more processing of data in script land.

Now, my use case is likely not an ideal one.  

I am *experimenting* with a policy to flag very long, sustained, persistent 
connections between two hosts (irrespective of ports). For this, I am storing 
connection information [src, dst] in a table and, based on various 
conditions/heuristics, expiring entries at different dynamic times (so 
read_expire isn't helpful). For 'spike' elimination from scanners, I iterate 
through the table every minute (suboptimal, but stay with me). At present, when 
the table stores about 4 million elements, iterating through it freezes bro 
entirely (no growth of conn.log etc.).  
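
For concreteness, the shape of the experiment is roughly this (a pared-down sketch; 
the record fields and the 12 hr threshold are placeholders, not my actual policy):

type PersistInfo: record {
    first_seen: time;
    last_seen:  time;
};

global persist: table[addr, addr] of PersistInfo;

event connection_established(c: connection)
    {
    if ( [c$id$orig_h, c$id$resp_h] !in persist )
        persist[c$id$orig_h, c$id$resp_h] =
            [$first_seen=network_time(), $last_seen=network_time()];
    else
        persist[c$id$orig_h, c$id$resp_h]$last_seen = network_time();
    }

event check_persistent()
    {
    # This per-minute sweep is the part that freezes bro once the table
    # grows to millions of entries.
    for ( [src, dst] in persist )
        {
        if ( network_time() - persist[src, dst]$first_seen > 12 hrs )
            print fmt("long-lived pair %s <-> %s", src, dst);
        }

    schedule 1 min { check_persistent() };
    }

event bro_init()
    {
    schedule 1 min { check_persistent() };
    }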

A separate thread could allow me to delegate these kinds of tasks outside of Bro's 
main thread/event queue. 

Aashish




On Tue, Oct 07, 2014 at 03:43:01PM -0700, Robin Sommer wrote:
 I'm wondering if we should add another type of plugin component:
 threads. This would be for functionality that's to run in parallel
 with Bro's main thread, and communicate with it via message passing.
 
 We have the structure for that in place already, logging and reading
 are using it as already. But this would formalize the notion a bit
 more directly that a plugin can provide a new thread, with its own
 logic; and also extend the interface that it has available from inside
 that thread (e.g., being able to raise events; have bif functions
 that, when called, get passed through to the thread).
 
 One use case is Christian's OpenFlow plugin: if we went the route of
 integrating an external library for speaking OpenFlow directly, that
 communication needs to be handled somewhere. Traditionally, that would
 be an IOSource hooked into the main loop. The plugin model could
 support that, too, but being able to fully decouple it inside its own
 thread seems appealing.
 
 Jon, how are you planing to integrate Broker into Bro? Would this help
 there as well if you could just follow a similar structure with Broker
 running inside its own thread?
 
 Robin
 
 -- 
 Robin Sommer * Phone +1 (510) 722-6541 * ro...@icir.org
 ICSI/LBNL* Fax   +1 (510) 666-2956 * www.icir.org/robin
 ___
 bro-dev mailing list
 bro-dev@bro.org
 http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


pgp8APRygUKSj.pgp
Description: PGP signature
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] Bro + real-time question

2014-09-26 Thread Aashish Sharma
 * work toward putting hard limits on the number of cycles Bro is allowed 
 to execute per-packet before injecting a hard stop and forcing the 
 engine to move to the next packet, or

I believe it's the value of watchdog_interval. 
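
If so, I think it is tunable with a plain redef (a sketch, assuming the usual &redef 
const in init-bare.bro):

redef watchdog_interval = 30 sec;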

Aashish 


On Fri, Sep 26, 2014 at 03:33:25PM -0400, Gilbert Clark wrote:
 Hi list:
 
 Does anyone know of work that involves placing hard limits on the amount 
 of time bro is able to spend processing individual packets? 
 Specifically, I'm looking for:
 
 * work toward putting hard limits on the number of cycles Bro is allowed 
 to execute per-packet before injecting a hard stop and forcing the 
 engine to move to the next packet, or
 * work toward emulating buffers / drops based on the number of cycles 
 bro spends processing a particular packet in offline / pseudo-realtime mode
 
 Thanks in advance for any references / thoughts!
 
 --Gilbert
 
 ___
 bro-dev mailing list
 bro-dev@bro.org
 http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

-- 
Aashish Sharma  (asha...@lbl.gov)
Cyber Security, 
Lawrence Berkeley National Laboratory  
http://go.lbl.gov/pgp-aashish 
Office: (510)-495-2680  Cell: (510)-612-7971


pgpIgtnoGistR.pgp
Description: PGP signature
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1140) Bloomfilter hashing problem

2014-06-13 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=16825#comment-16825
 ] 

Aashish Sharma commented on BIT-1140:
-

So I have been running the code for the last 5 days on a DMZ box. I don't see any 
false positives. 
The Bloom filter seems to be holding up quite well and storing URL strings properly 
now. 




 Bloomfilter hashing problem
 ---

 Key: BIT-1140
 URL: https://bro-tracker.atlassian.net/browse/BIT-1140
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Reporter: Robin Sommer
Assignee: Robin Sommer
 Fix For: 2.3

 Attachments: bloom-test2.bro, bloom-test-short.bro


 It seems bloomfilter hashing isn't working correctly. Has that been 
 confirmed? Is there a fix?



--
This message was sent by Atlassian JIRA
(v6.3-OD-06-017#6327)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1204) broctl query|print timesout for really large tables

2014-06-12 Thread Aashish Sharma (JIRA)

 [ 
https://bro-tracker.atlassian.net/browse/BIT-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aashish Sharma updated BIT-1204:

Resolution: Fixed
Status: Closed  (was: Open)

Setting CommTimeout = 300 (or a higher number of seconds) fixes the issue. 

 broctl query|print timesout for really large tables
 ---

 Key: BIT-1204
 URL: https://bro-tracker.atlassian.net/browse/BIT-1204
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.2, 2.3
 Environment: Bro-2.x on FreeBSD-9.x 
Reporter: Aashish Sharma
  Labels: broctl

 I've noticed that for really large tables (~10,000+ elements), the broctl print 
 command times out. 
 example:
 $ broctl print Drop::drop_info
manager   error: time-out
 For a few hundred elements, or a small enough table, we get the desired results. 
 Please let me know if you cannot reproduce the problem or need further 
 clarifications. 



--
This message was sent by Atlassian JIRA
(v6.3-OD-06-017#6327)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1204) broctl query|print timesout for really large tables

2014-06-12 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=16823#comment-16823
 ] 

Aashish Sharma commented on BIT-1204:
-

Yes! Increasing CommTimeout works well. 

(Needless ticket - I should have asked on the mailing list, I guess.) 

 broctl query|print timesout for really large tables
 ---

 Key: BIT-1204
 URL: https://bro-tracker.atlassian.net/browse/BIT-1204
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.2, 2.3
 Environment: Bro-2.x on FreeBSD-9.x 
Reporter: Aashish Sharma
  Labels: broctl

 I've noticed that for really large tables (~10,000+ elements), the broctl print 
 command times out. 
 example:
 $ broctl print Drop::drop_info
manager   error: time-out
 For a few hundred elements, or a small enough table, we get the desired results. 
 Please let me know if you cannot reproduce the problem or need further 
 clarifications. 



--
This message was sent by Atlassian JIRA
(v6.3-OD-06-017#6327)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1204) broctl query|print timesout for really large tables

2014-06-11 Thread Aashish Sharma (JIRA)
Aashish Sharma created BIT-1204:
---

 Summary: broctl query|print timesout for really large tables
 Key: BIT-1204
 URL: https://bro-tracker.atlassian.net/browse/BIT-1204
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.2, 2.3
 Environment: Bro-2.x on FreeBSD-9.x 
Reporter: Aashish Sharma


I've noticed that for really large tables (~10,000+ elements), the broctl print 
command times out. 

example:
$ broctl print Drop::drop_info
   manager   error: time-out

For a few hundred elements, or a small enough table, we get the desired results. 

Please let me know if you cannot reproduce the problem or need further 
clarifications. 




--
This message was sent by Atlassian JIRA
(v6.3-OD-06-017#6327)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1140) Bloomfilter hashing problem

2014-06-05 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=16801#comment-16801
 ] 

Aashish Sharma commented on BIT-1140:
-

Thanks for the fix, Matthias. I am testing your topic/matthias/bloomfilter-fix 
branch with my policies. Give me a couple of days - so far it looks alright. 



 Bloomfilter hashing problem
 ---

 Key: BIT-1140
 URL: https://bro-tracker.atlassian.net/browse/BIT-1140
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Reporter: Robin Sommer
Assignee: Robin Sommer
 Fix For: 2.3

 Attachments: bloom-test2.bro, bloom-test-short.bro


 It seems bloomfilter hashing isn't working correctly. Has that been 
 confirmed? Is there a fix?



--
This message was sent by Atlassian JIRA
(v6.3-OD-06-017#6327)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1180) Input framework subsequent REREAD fails after file update

2014-04-07 Thread Aashish Sharma (JIRA)
Aashish Sharma created BIT-1180:
---

 Summary: Input framework subsequent REREAD fails after file 
update 
 Key: BIT-1180
 URL: https://bro-tracker.atlassian.net/browse/BIT-1180
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.2
Reporter: Aashish Sharma
Priority: High


I have a file that gets updated every hour, and I am using it as a feed into bro 
via the input framework. Every hour I write a list of IP addresses into this 
file. For many updates everything works fine, but occasionally I see the 
following error:

Apr  6 05:00:09 Reporter::ERROR 
/feeds/Blacklist/CURRENT.24hrs_BRO/Input::READER_ASCII: could not read first 
line(empty)

After this failure/message, any subsequent updates to the file are ignored by 
the input framework. 

From visual inspection the file looks just fine, and the header/data (one column of 
IP addresses) is there as expected, but somehow the input framework doesn't like 
it. It seems that when the cron script updates the file every hour, on rare 
occasions the file is empty for a minuscule duration, after which this 
error starts. 

On further REREADs, data won't get updated into the tables anymore once the 
above Reporter::ERROR kicks in. 

Please let me know if you need ways to reproduce this error condition or have 
more questions for me. 
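
For reference, the reading side is essentially the following (a simplified sketch of 
the reader config; the record/field names are placeholders, and the source path is 
the one from the error above):

type Idx: record {
    ip: addr;
};

global blacklist: set[addr] = set();

event bro_init()
    {
    Input::add_table([$source="/feeds/Blacklist/CURRENT.24hrs_BRO",
                      $name="blacklist",
                      $idx=Idx,
                      $destination=blacklist,
                      $mode=Input::REREAD]);
    }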



--
This message was sent by Atlassian JIRA
(v6.3-OD-02-026#6318)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1181) Input-framework errors should be fatal (or Notice_Alarm) instead of silent reporter::error failures

2014-04-07 Thread Aashish Sharma (JIRA)
Aashish Sharma created BIT-1181:
---

 Summary: Input-framework errors should be fatal (or Notice_Alarm) 
instead of silent reporter::error failures
 Key: BIT-1181
 URL: https://bro-tracker.atlassian.net/browse/BIT-1181
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Affects Versions: 2.2
Reporter: Aashish Sharma


I have noticed many times that if there is a problem in a feed file (syntax, or some 
other issue) and the input framework is unable to read the file, it generates a 
Reporter::Error. This is a silent failure condition, i.e. bro continues to operate 
as normal and the error is logged to reporter.log. 

Ideally the above is the right thing to do. However, this failure results in no 
data in the tables getting updated any more, while I continue to operate under the 
impression that Bro is working fine (unless I have explicitly been 
looking at the reporter log for this issue, which now I do). 

If the input framework is unable to read/digest data from a feed, I believe that 
should be a (configurable) fatal error, or something which at least triggers an 
alarm/alert/email. 
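
As a stopgap in script land, something along these lines seems possible (a sketch 
only; the notice name is made up, and I am assuming the reporter_error event is the 
right hook):

redef enum Notice::Type += { Input_Read_Failure };

event reporter_error(t: time, msg: string, location: string)
    {
    if ( /Input::READER/ in msg )
        NOTICE([$note=Input_Read_Failure,
                $msg=fmt("input framework error: %s", msg)]);
    }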





--
This message was sent by Atlassian JIRA
(v6.3-OD-02-026#6318)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1140) Bloomfilter hashing problem

2014-04-01 Thread Aashish Sharma (JIRA)

 [ 
https://bro-tracker.atlassian.net/browse/BIT-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aashish Sharma updated BIT-1140:


Attachment: bloom-test2.bro
bloom-test-short.bro

Test files to reproduce the problem. 

 Bloomfilter hashing problem
 ---

 Key: BIT-1140
 URL: https://bro-tracker.atlassian.net/browse/BIT-1140
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Reporter: Robin Sommer
Assignee: Matthias Vallentin
 Fix For: 2.3

 Attachments: bloom-test2.bro, bloom-test-short.bro


 It seems bloomfilter hashing isn't working correctly. Has that been 
 confirmed? Is there a fix?



--
This message was sent by Atlassian JIRA
(v6.3-OD-01-067#6307)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1140) Bloomfilter hashing problem

2014-04-01 Thread Aashish Sharma (JIRA)

[ 
https://bro-tracker.atlassian.net/browse/BIT-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=16010#comment-16010
 ] 

Aashish Sharma commented on BIT-1140:
-

Matthias, 

I have created two simple test files. Both of these files add a bunch of URLs 
to a bloomfilter. 

Then, the scripts do a bloomfilter_lookup on a *different* set of URLs. 

You should notice two problems:
1) URLs which aren't even added to the filter show up as being in the filter 
(bloomfilter_lookup returns 1). 
2) The returned 1 is inconsistent across multiple runs (sometimes it shows 0, 
sometimes 1). 

The URLs added are extracted from SMTP, while the URLs looked up come from the 
HTTP stream. Basically, I am making a bloomfilter of all the URLs extracted 
from emails and then testing against HTTP to see if any of the SMTP URLs have been 
clicked. (Currently I use a table, which gives me correct results but with a 
much bigger memory footprint.)

With the bloomfilter, we see quite a few false positives. 

Here are two examples: 

1) bloom-test-short.bro - only does lookups for 4 URLs. On repeated runs (bro 
./bloom-test-short.bro) you should see different hit outputs (0 = miss, 1 = 
hit), even though the URLs we are looking up were never added to the filter. 
2) bloom-test2.bro - has a much more extensive lookup set. On a run you should 
see the lookup results vary between 0 and 1. Again, all the lookup URLs are 
different from the ones added. 

Please let me know if you have problems reproducing this. I can send you the 
actual smtp-embedded-url.bro scripts as well. 
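
For reference, the core pattern in both test files is essentially this (a simplified 
sketch with placeholder URLs, not the attached files themselves):

global url_filter: opaque of bloomfilter;

event bro_init()
    {
    # Basic Bloom filter: 0.001 false-positive rate, sized for ~100k entries.
    url_filter = bloomfilter_basic_init(0.001, 100000);

    # URLs added (in the real scripts these come from SMTP-extracted links).
    bloomfilter_add(url_filter, "http://example.com/a");
    bloomfilter_add(url_filter, "http://example.com/b");

    # URLs looked up (in the real scripts these come from the HTTP stream).
    # Both should print 0, but with the buggy hashing they intermittently print 1.
    print bloomfilter_lookup(url_filter, "http://example.org/x");
    print bloomfilter_lookup(url_filter, "http://example.org/y");
    }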




 Bloomfilter hashing problem
 ---

 Key: BIT-1140
 URL: https://bro-tracker.atlassian.net/browse/BIT-1140
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: Bro
Reporter: Robin Sommer
Assignee: Matthias Vallentin
 Fix For: 2.3

 Attachments: bloom-test2.bro, bloom-test-short.bro


 It seems bloomfilter hashing isn't working correctly. Has that been 
 confirmed? Is there a fix?



--
This message was sent by Atlassian JIRA
(v6.3-OD-01-067#6307)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


Re: [Bro-Dev] [JIRA] (BIT-1138) UDP scan detection generates a large number of triggers

2014-02-18 Thread Aashish Sharma
I haven't had a chance to measure whether the fix is effective yet. I will start 
measuring the CPU spikes this week, after putting in the fix for scan_udp.bro. I 
should have some results in a couple of days.


Aashish



On Tue, Feb 18, 2014 at 2:19 PM, Jon Siwek (JIRA) 
j...@bro-tracker.atlassian.net wrote:


 [
 https://bro-tracker.atlassian.net/browse/BIT-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=15561#comment-15561]

 Jon Siwek commented on BIT-1138:
 

 This was from a custom script that Aashish was running, not something
 distributed w/ Bro?

 But yeah, I don't recall if we found out if the suggestions helped.

  UDP scan detection generates a large number of triggers
  ---
 
  Key: BIT-1138
  URL: https://bro-tracker.atlassian.net/browse/BIT-1138
  Project: Bro Issue Tracker
   Issue Type: Problem
   Components: Bro
 Reporter: Robin Sommer
  Fix For: 2.3
 
 
  These triggers then cause high CPU load. We had a fix already but I'm
 not sure if it has been confirmed that it solved the problem?



 --
 This message was sent by Atlassian JIRA
 (v6.2-OD-09-036#6252)
 ___
 bro-dev mailing list
 bro-dev@bro.org
 http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev

___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev


[Bro-Dev] [JIRA] (BIT-1126) Logs disappearing after bro termination

2014-01-31 Thread Aashish Sharma (JIRA)
Aashish Sharma created BIT-1126:
---

 Summary: Logs disappearing after bro termination
 Key: BIT-1126
 URL: https://bro-tracker.atlassian.net/browse/BIT-1126
 Project: Bro Issue Tracker
  Issue Type: Problem
  Components: BroControl
Affects Versions: 2.2
 Environment: freebsd 
Reporter: Aashish Sharma
Priority: High


I have noticed several times that in the event of bro termination after 
expiration of StopTimeout, the bro logs disappear. 

This is generally seen when the log sizes are much bigger (for example, after 
running overnight).

This issue was present in bro-2.1 and continues to be present in bro-2.2. 

I often see the kill (from control.py) kick in when stopping or restarting bro, 
because catch-n-release is still trying to flush its tables (which takes a long 
time). Then there are no logs from overnight!

I can provide more information if desired (or even a test case). 

Thanks, 
Aashish 
 



--
This message was sent by Atlassian JIRA
(v6.2-OD-07-028#6211)
___
bro-dev mailing list
bro-dev@bro.org
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev