Hi,
I hope that someone here has come across this before, or can point out what
stupid thing I’m doing (or not doing).
I’ve got an nfacctd instance taking NetFlow feeds from a pair of Juniper MXes;
aggregations to the memory and MySQL backends are working just fine, so I’m
pretty sure the basics are OK.
Now I’m trying to get a print plugin to create files with timestamps in the
names, but regardless of which config options I set, all I get is a single
file whose timestamp is derived from zero Unix time.
The print_output_file config line looks like this:
print_output_file[elsrch]: /opt/pmacct/var/spool/elsrch/%Y%m%d-%H%M.json
and the file it produces looks like this:
$ ls -l
total 1616
-rw------- 1 pmacct pmacct 912050 May 14 20:21 19700101-0100.json
lrwxrwxrwx 1 pmacct pmacct 47 May 14 19:46 latest ->
/opt/pmacct/var/spool/elsrch/19700101-0100.json
I’m inclined to believe that nfacctd is failing to get the system date
correctly: not only is the dynamic file naming unhappy, but the
print_markers config directive is putting a zero at the top of the file.
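For what it’s worth, the filename is exactly what strftime() produces for Unix
time zero (the 0100 rather than 0000 presumably being my UTC+1 local timezone).
A quick Python sketch of what I mean:

```python
import time

# Format Unix time zero with the same template as my print_output_file.
# In UTC this gives 19700101-0000; on a UTC+1 box it becomes 19700101-0100,
# which matches the file nfacctd is writing.
name = time.strftime('%Y%m%d-%H%M.json', time.gmtime(0))
print(name)  # 19700101-0000.json
```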
Below are config, output and log.
I’ve trawled the docs and the mailing-list archives, and my google-fu has run out.
Hopefully someone can point me in the right direction.
Thanks
Dariush
My config (somewhat sanitised) looks like this:
$ cat /opt/pmacct/etc/nfacctd.conf
! Global daemon config
!
daemonize: true
logfile: /opt/pmacct/var/log/nfacctd.log
pidfile: /opt/pmacct/var/run/nfacctd.pid
files_uid: 666
files_gid: 666
! netflow daemon config
nfacctd_port: 2055
! nfacctd_time_new: true is required if sql_dont_try_update: true is in use;
! see CONFIG-KEYS.
nfacctd_time_new: true
nfacctd_disable_checks: true
! global print plugin config
print_markers: true
! plugin instances
plugins: print[elsrch]
! --- [elsrch] ---
aggregate[elsrch]: etype, proto, src_host, src_port, dst_host, dst_port, timestamp_start, timestamp_end
print_output_file[elsrch]: /opt/pmacct/var/spool/elsrch/%Y%m%d-%H%M.json
print_latest_file[elsrch]: /opt/pmacct/var/spool/elsrch/latest
print_output[elsrch]: json
print_refresh_time[elsrch]: 60
The output file, which gets overwritten on every cache purge, starts as
below; note the first line:
$ head -n 5 ./spool/elsrch/19700101-0100.json
--START (0+60)--
{"timestamp_start": "2015-05-14 20:14:44.225000", "ip_proto": "udp", "ip_dst":
"x.x.x.x", "ip_src": "x.x.x.x", "etype": "800", "bytes": 62, "port_dst": 53,
"packets": 1, "port_src": 38242, "timestamp_end": "2015-05-14 20:14:44.225000"}
{"timestamp_start": "2015-05-14 20:14:44.167000", "ip_proto": "udp", "ip_dst":
"x.x.x.x", "ip_src": "x.x.x.x", "etype": "800", "bytes": 567, "port_dst": 53,
"packets": 9, "port_src": 29428, "timestamp_end": "2015-05-14 20:14:44.168000"}
{"timestamp_start": "2015-05-14 20:14:43.892000", "ip_proto": "udp", "ip_dst":
"x.x.x.x", "ip_src": "x.x.x.x", "etype": "800", "bytes": 206, "port_dst":
38655, "packets": 2, "port_src": 53, "timestamp_end": "2015-05-14
20:14:43.914000"}
{"timestamp_start": "2015-05-13 10:18:11.534000", "ip_proto": "udp", "ip_dst":
"x.x.x.x", "ip_src": "x.x.x.x", "etype": "800", "bytes": 268, "port_dst":
12839, "packets": 5, "port_src": 12838, "timestamp_end": "2015-05-14
20:16:12.636000"}
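If I’m reading the "--START (0+60)--" marker as basetime plus the refresh
interval (my assumption, from comparing it against print_refresh_time), then
the daemon thinks the purge window starts at Unix time 0, which would explain
the filename too. Sketching that reading:

```python
import time

# Assumption: the "--START (0+60)--" marker is basetime + refresh interval.
basetime, refresh = 0, 60
window_start = time.strftime('%Y%m%d-%H%M', time.gmtime(basetime))
window_end = time.strftime('%Y%m%d-%H%M', time.gmtime(basetime + refresh))
print(window_start, window_end)  # 19700101-0000 19700101-0001
```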
And a log file excerpt for good measure:
May 14 20:14:01 INFO ( elsrch/print ): *** Purging cache - START (PID: 8105) ***
May 14 20:14:01 INFO ( elsrch/print ): *** Purging cache - END (PID: 8105, QN:
4138/4138, ET: 0) ***
May 14 20:15:01 INFO ( elsrch/print ): *** Purging cache - START (PID: 8118) ***
May 14 20:15:01 INFO ( elsrch/print ): *** Purging cache - END (PID: 8118, QN:
4192/4192, ET: 0) ***
May 14 20:16:01 INFO ( elsrch/print ): *** Purging cache - START (PID: 8128) ***
May 14 20:16:01 INFO ( elsrch/print ): *** Purging cache - END (PID: 8128, QN:
3225/3225, ET: 0) ***
_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists