Re: [pmacct-discussion] pretag map line length limits

2019-01-11 Thread Inge Bjørnvall Arnesen
Hi Paolo!

That worked like a charm!

Regards,

-- Inge

-Original Message-
From: pmacct-discussion  On Behalf Of 
Paolo Lucente
Sent: torsdag 10. januar 2019 22.51
To: pmacct-discussion@pmacct.net
Subject: Re: [pmacct-discussion] pretag map line length limits


Hi Inge,

Always great to read from you. 

You are looking for the maps_row_len knob, 256 chars by default. Along with 
maps_entries, it lets you specify the two key dimensions used to allocate 
memory for the map.
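In config terms that would look something like the fragment below (values are purely illustrative; tune them to the longest row and the number of entries in your map):

```
! nfacctd.conf fragment -- illustrative values
pre_tag_map: /path/to/pretag.map
maps_row_len: 1024
maps_entries: 512
```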

Paolo

On Thu, Jan 10, 2019 at 02:54:09PM +, Inge Bjørnvall Arnesen wrote:
> Hi,
> 
> I have been running nfacct for many years and it has served me well, but as 
> my network gets ever more complex and new transit lines are added, I've come 
> across an issue with how I've been configuring the program. My goal is still 
> to maintain a MySQL DB with  minute Internet traffic entries (both 
> directions) per public IP at my site. My routers report ingress traffic only, 
> so Netflow must be enabled on all edge interfaces, rather than just the 
> designated uplinks and transits.  This means that Netflow reports all traffic 
> that goes via our edge routers and that I have to filter Internet traffic out 
> from other, internal traffic that crosses edge.
> 
> My approach so far has been to use pretag map filters for this. The basic 
> structure for these filters is:
> 
> !  Incoming
> id=1 ip=<router ip> filter='not ( src net <internal prefix 1> or ... or src net <internal prefix n> ) and dst net <public prefix 1>'
> ...
> id=1 ip=<router ip> filter='not ( src net <internal prefix 1> or ... or src net <internal prefix n> ) and dst net <public prefix m>'
> 
> 
> ! Outgoing
> id=2 ip=<router ip> filter='not ( dst net <internal prefix 1> or ... or dst net <internal prefix n> ) and src net <public prefix 1>'
> ...
> id=2 ip=<router ip> filter='not ( dst net <internal prefix 1> or ... or dst net <internal prefix n> ) and src net <public prefix m>'
> 
> 
> With the RFC1918 prefixes taking up some space to begin with, and the number 
> of public prefixes increasing, I'm running into an issue where the pretag 
> map line length is exceeded and nfacct fails to start. Are there ways to 
> increase the maximum line length, or other ways of organizing this filtering 
> process that will keep me within the maximum pretag map line length?
> 
> Regards,
> 
> 
>   *   Inge Arnesen
> 
> 
> 
> 

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] pretag map line length limits

2019-01-10 Thread Inge Bjørnvall Arnesen
Hi,

I have been running nfacct for many years and it has served me well, but as my 
network gets ever more complex and new transit lines are added, I've come 
across an issue with how I've been configuring the program. My goal is still to 
maintain a MySQL DB with  minute Internet traffic entries (both directions) 
per public IP at my site. My routers report ingress traffic only, so Netflow 
must be enabled on all edge interfaces, rather than just the designated uplinks 
and transits.  This means that Netflow reports all traffic that goes via our 
edge routers and that I have to filter Internet traffic out from other, 
internal traffic that crosses edge.

My approach so far has been to use pretag map filters for this. The basic 
structure for these filters is:

!  Incoming
id=1 ip=<router ip> filter='not ( src net <internal prefix 1> or ... or src net <internal prefix n> ) and dst net <public prefix 1>'
...
id=1 ip=<router ip> filter='not ( src net <internal prefix 1> or ... or src net <internal prefix n> ) and dst net <public prefix m>'


! Outgoing
id=2 ip=<router ip> filter='not ( dst net <internal prefix 1> or ... or dst net <internal prefix n> ) and src net <public prefix 1>'
...
id=2 ip=<router ip> filter='not ( dst net <internal prefix 1> or ... or dst net <internal prefix n> ) and src net <public prefix m>'


With the RFC1918 prefixes taking up some space to begin with, and the number of 
public prefixes increasing, I'm running into an issue where the pretag map 
line length is exceeded and nfacct fails to start. Are there ways to increase 
the maximum line length, or other ways of organizing this filtering process 
that will keep me within the maximum pretag map line length?
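Maps like the one sketched above are usually generated from a prefix inventory; a small script can emit the rows and flag any that would exceed the collector's maximum row length (256 chars unless raised). A hedged sketch; the helper name, router IP, and all prefixes below are illustrative placeholders:

```python
# Sketch: emit pretag map rows from prefix lists and flag any row longer
# than the configured maximum row length (256 chars unless raised).
# Router IP and all prefixes are illustrative placeholders.

def pretag_lines(tag_id, router_ip, direction, internal, public, row_len=256):
    """Yield (line, fits) per public prefix; direction is 'in' or 'out'."""
    excl, match = ("src", "dst") if direction == "in" else ("dst", "src")
    exclusion = " or ".join(f"{excl} net {p}" for p in internal)
    for pub in public:
        line = (f"id={tag_id} ip={router_ip} "
                f"filter='not ( {exclusion} ) and {match} net {pub}'")
        yield line, len(line) <= row_len

internal = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]  # RFC1918
public = ["198.51.100.0/24", "203.0.113.0/24"]                # documentation nets
for line, fits in pretag_lines(1, "192.0.2.1", "in", internal, public):
    print(("OK       " if fits else "TOO LONG ") + line)
```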

Regards,


  *   Inge Arnesen




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] Tips on debugging IPFIX/v10 on 1.5.2?

2016-06-03 Thread Inge Bjørnvall Arnesen
Hi all,

We've changed one edge router to a more modern Juniper MX and I'm trying to get 
IPFIX working on my 1.5.2 installation. Since Juniper only allows a single 
destination, we have set up a splitter to duplicate traffic to the various flow 
destinations. The other destination appliances decode the v10 packets without 
problems and doing a tcpdump and Wireshark check on the nfacct host indicates 
that all the IPFIX packets are received correctly. No data is entered into the 
MySQL or memory plugins from this flow source however. With debugging enabled, 
I see (after the initial IPFIX packets before templates are received):

DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [0]
DEBUG ( default/core ): NfV10 agent : a.b.c.d:524288
DEBUG ( default/core ): NfV10 template type : flow
DEBUG ( default/core ): NfV10 template ID   : 256
DEBUG ( default/core ): -
DEBUG ( default/core ): |pen | field type | offset |  size  |
DEBUG ( default/core ): | 0  | IPv4 src addr  |  0 |  4 |
DEBUG ( default/core ): | 0  | IPv4 dst addr  |  4 |  4 |
DEBUG ( default/core ): | 0  | tos|  8 |  1 |
DEBUG ( default/core ): | 0  | L4 protocol|  9 |  1 |
DEBUG ( default/core ): | 0  | L4 src port| 10 |  2 |
DEBUG ( default/core ): | 0  | L4 dst port| 12 |  2 |
DEBUG ( default/core ): | 0  | icmp type  | 14 |  2 |
DEBUG ( default/core ): | 0  | input snmp | 16 |  4 |
DEBUG ( default/core ): | 0  | 58 | 20 |  2 |
DEBUG ( default/core ): | 0  | IPv4 src mask  | 22 |  1 |
DEBUG ( default/core ): | 0  | IPv4 dst mask  | 23 |  1 |
DEBUG ( default/core ): | 0  | src as | 24 |  4 |
DEBUG ( default/core ): | 0  | dst as | 28 |  4 |
DEBUG ( default/core ): | 0  | IPv4 next hop  | 32 |  4 |
DEBUG ( default/core ): | 0  | tcp flags  | 36 |  1 |
DEBUG ( default/core ): | 0  | output snmp| 37 |  4 |
DEBUG ( default/core ): | 0  | in bytes   | 41 |  8 |
DEBUG ( default/core ): | 0  | in packets | 49 |  8 |
DEBUG ( default/core ): | 0  | 52 | 57 |  1 |
DEBUG ( default/core ): | 0  | 53 | 58 |  1 |
DEBUG ( default/core ): | 0  | 152| 59 |  8 |
DEBUG ( default/core ): | 0  | 153| 67 |  8 |
DEBUG ( default/core ): | 0  | 136| 75 |  1 |
DEBUG ( default/core ): | 0  | 243| 76 |  2 |
DEBUG ( default/core ): | 0  | 245| 78 |  2 |
DEBUG ( default/core ): -
DEBUG ( default/core ): Netflow V9/IPFIX record size : 80
DEBUG ( default/core ):
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50103] 
version [10] seqno [434178]
DEBUG ( default/core ): NfV10 agent : a.b.c.d:524288
DEBUG ( default/core ): NfV10 template type : options
DEBUG ( default/core ): NfV10 template ID   : 512
DEBUG ( default/core ): 
DEBUG ( default/core ): | field type | offset |  size  |
DEBUG ( default/core ): | 144|  0 |  4 |
DEBUG ( default/core ): | 160|  4 |  8 |
DEBUG ( default/core ): | 130| 12 |  4 |
DEBUG ( default/core ): | 131| 16 | 16 |
DEBUG ( default/core ): | 214| 32 |  1 |
DEBUG ( default/core ): | 215| 33 |  1 |
DEBUG ( default/core ): -
DEBUG ( default/core ): Netflow V9/IPFIX record size : 34
DEBUG ( default/core ):
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443061]
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443066]
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443071]
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443076]
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443081]
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443086]
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443091]
DEBUG ( default/core ): Received NetFlow/IPFIX packet from [a.b.c.d:50101] 
version [10] seqno [738443096]
DEBUG ( default/core ): 
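When cross-checking captures like the above by hand, the fixed 16-byte IPFIX message header (RFC 7011) decodes in a few lines; the 524288 printed after the agent address is presumably the observation domain ID. A sketch (the field values come from the debug output above, but the packet built here is synthetic):

```python
# Sketch: decode the fixed 16-byte IPFIX (NetFlow v10) message header per
# RFC 7011 -- version(2) length(2) export-time(4) sequence(4) obs-domain-id(4).
import struct

def parse_ipfix_header(data: bytes) -> dict:
    version, length, export_time, seqno, domain_id = struct.unpack("!HHIII", data[:16])
    return {"version": version, "length": length, "export_time": export_time,
            "seqno": seqno, "obs_domain_id": domain_id}

# Synthetic header echoing the figures in the debug output above
hdr = struct.pack("!HHIII", 10, 16, 0, 738443061, 524288)
print(parse_ipfix_header(hdr))
```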

Re: [pmacct-discussion] Issue with Ipfix reporting, VLAN tags and filtering

2015-04-17 Thread Inge Bjørnvall Arnesen
 so the only solution that appears to work for me is:

id=220 ip=a.b.c.d filter='ip'
id=230 ip=a.b.d.e filter='ip'
id=220 ip=a.b.c.d filter='vlan and ip'
id=230 ip=a.b.d.e filter='vlan and ip'

Thank you so much, Paolo - it works like a charm. Guess I was too perplexed by 
the strange symptoms to consider that. Patched the software yesterday to 
disable the NF9_FTYPE_VLAN flow type and that worked as well, but this is much 
cleaner.

:)

-- Inge 


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Problems debugging netflow handling

2008-06-05 Thread Inge Bjørnvall Arnesen
Hi Alex,

Ah - yes - shorter lines - good idea! As for tcpdump'ing, I've done that and I 
see the netflow packets and that they contain reports for the 79.171.80.0/21 
network (intermingled with reports on the other networks), so I know they 
arrive safe and sound to granny Nfacct's house. It's from arrival on port 
2100/UDP and onwards where I'm kind of lost on how to debug.

All the best,

-- I.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of alex
Sent: 5. juni 2008 16:54
To: pmacct-discussion@pmacct.net
Subject: Re: [pmacct-discussion] Problems debugging netflow handling

   Hi Inge,
   Sorry, I can only advise changing:

dst net 81.93.160.0/20 or dst net 79.171.80.0/21 or dst net 195.225.0.0/19

   on:

dst net ( 81.93.160.0/20 or 79.171.80.0/21 or 195.225.0.0/19 )

   and for src net also.
   You can also start tcpdump and listen to what you have on your interfaces
(where sfacctd is working).


   Alex


 Hi all,
 
 
 
 I've been running pmacct with both memory and mysql backend for some time 
and it has worked very well. I use pretag.map for filtering and as the 
number of address ranges has increased, I've added to these rules. When I 
added our third address range, however, none of the flows reported for this 
range ends up in the memory or mysql databases and as far as I can see, 
these are reported by our routers in the same way as all the others (same 
routers, same interfaces, same scaling, same everything). Basically, I 
don't know how to debug this problem. My pretag file is structured like 
this (it is much larger with more interfaces and routers):
 
 
 
 id=1039 ip=81.93.172.80 in=39 filter='dst net 81.93.160.0/20 or 
dst net 79.171.80.0/21 or dst net 195.225.0.0/19' sampling_rate=1000
 
 id=1040 ip=81.93.172.80 in=40 filter='dst net 81.93.160.0/20 or 
dst net 79.171.80.0/21 or dst net 195.225.0.0/19' sampling_rate=1000
 
 
 
 id=2039 ip=81.93.172.80 out=39 filter='src net 81.93.160.0/20 or 
src net 195.225.0.0/19 or src net 79.171.80.0/21' sampling_rate=1000
 
 id=2040 ip=81.93.172.80 out=40 filter='src net 81.93.160.0/20 or 
src net 195.225.0.0/19 or src net 79.171.80.0/21' sampling_rate=1000
 
 
 
 I have verified that the ranges 81.93.160.0/20 and 195.225.0.0/19 are 
working well, but not a single entry has been created associated with the 
79.171.80.0/21 network. As seen from the above snippet I have tried 
variations of the sequence of networks in the filter string, but that does 
not matter. Also, the IDs used for the other nets are the same, so the IDs 
are thus properly set up in the pmacctd.conf file. How can I go about 
debugging this on a live system?  Maybe I'm just blind to the obvious - 
that has happened before... many times.
 
 All the best,
 
 -- Inge





___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Juniper cFlow, sampling and nfacctd handling

2007-03-28 Thread Inge Bjørnvall Arnesen
[snip]
  1: Does Netflow v5 say that sampling mode must be set for 
 sample rate to be valid?
[snip]
 3: Can I get Juniper/cFlow to report sampling mode?
 
 1. Yes, it does. The meaning of the first two bits is: 00 means 
 no sampling; 01 means sampling is enabled, so read the remaining 
 14 bits to get the sampling rate. 

I just got a response from Juniper about this which was similar to mine. They 
have no Cisco document documenting these bits (implying that no changes or 
patches will be made until this is rectified). Do you have a reference document 
for the semantics of these two bits? It is infuriating that Cisco has made 
several revisions to V5 without properly documenting these changes.
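Decoding the 16-bit sampling field as described in the quoted answer (top two bits = mode, low 14 bits = rate) looks like this; the bit positions follow the text above, and the function name is just illustrative:

```python
# Sketch of the NetFlow v5 header sampling field layout described above:
# top 2 bits = sampling mode (00 none, 01 packet sampling),
# low 14 bits = sampling rate.
def decode_sampling(field: int):
    mode = (field >> 14) & 0x3
    rate = field & 0x3FFF
    return mode, rate

# Mode 01 with a 1:1000 rate, as a router exporting sampled flows might set it
mode, rate = decode_sampling(0x4000 | 1000)
print(mode, rate)
```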

all the best,

-- Inge

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd warnings

2007-03-15 Thread Inge Bjørnvall Arnesen
Hi Paolo,

Thanks for your swift response! Here are the answers to your questions and some 
more precise information. A gzip of the syslog messages will be sent you 
privately:

- There were 26328 messages of the mentioned type during the high load period 
with 0.11.3 which was 43 minutes.
- No other errors were reported.
- The sources are two interfaces on one Cisco 6500, three interfaces on another 
6500 and two interfaces on a Juniper router. The former are full flows, while 
the Juniper is sampled 1:1000. There were losses in the reports from each 
interface on every router during this incident.

I will try the nfacctd_disable_checks option, though I am reluctant as I expect 
to lose more flow information (I need to schedule it). What surprises me is 
that there were plenty of CPU-cycles left during this incident (at least on 
other threads/CPUs than the one nfacct used), only the load was slightly high. 
The Nfacctd host is a dual Xeon 3GHz and receives around 700-1500 Netflow/cFlow 
datagrams per second depending on the time of day. As flow data is heavily 
aggregated, there should (and seems to) be little problem dealing with this. 
Anybody got experience with how many datagrams it should be able to tackle? The 
path from the sources to the Netflow host is 10Gb except for the last jump 
(switch to host NIC) which is 1Gb. If we are on the edge of what can be dealt 
with, I have to revert to sampled flows on the Ciscos as well.


all the best,

-- Inge


 -Original Message-
 From: Paolo Lucente [mailto:[EMAIL PROTECTED] 
 Sent: 15. mars 2007 01:07
 To: Inge Bjørnvall Arnesen
 Cc: pmacct-discussion@pmacct.net
 Subject: Re: [pmacct-discussion] nfacctd warnings
 
 Hi Inge,
 25k messages in 45 mins makes some 9-10 messages per second - 
 which is quite a lot. Which network devices are you getting 
 NetFlow datagrams off? A reason I might see is: sequence 
 checks fail (ie. they have been reported to fail with 
 Huawei's implementation of NetFlow), grab a lot of CPU cycles 
 by logging down massively and as a result the box is unable 
 to process incoming datagrams at full rate - please note that 
 datagrams failing sequence checks are not discarded. 
 
 To verify this, you can append nfacctd_disable_checks: true 
 to your config. You should not see any further log messages 
 in this regard and can compare whether graphs show the 
 expected figures.
 
 If you still have such log, can you please send me privately 
 a more consistent fragment? I'm curious to look whether there 
 is any evident pattern. Sequence checks were not implemented 
 in 0.10.3 . Let me know how things work out. And thanks as 
 usual for your cooperation.
 
 Cheers,
 Paolo
 
 On Wed, Mar 14, 2007 at 05:11:50PM +0100, Inge Bjørnvall 
 Arnesen wrote:
  I know this is an oldie, but I'm very conservative when it 
 comes to upgrading. Here is my experience with this problem:
  
  I've been running nfacctd - pmacct version 0.10.3 - for 
 quite some time. I use a memory plugin with interface to 
 Cricket for real time graph presentation and MySQL logging 
 for batch processing of the stored flows. From time to time 
 I've been executing fairly complex MySQL queries (resulting 
 in high load on the Nfacct host - 2 to 4 - but lots of free 
 CPU time) while nfacct is running and this has been no 
 problem. Around 2.5 hours ago I upgraded to 0.11.3 and then 
 had to made some changes to some MySQL tables, resulting in 
 fairly high load (around 2.5, but still with a lot of CPU 
 left). The result was dramatic during the single hour I had 
 0.11.3 running:
  
  During the first 15 minutes (when the load was mostly low 
 as I just created some tables for later use) I received 4 
 messages like the ones below. After starting the MySQL jobs 
 and for the coming 45 minutes I had around 25000 messages, 
 all on the format:
  
  Mar 14 16:16:23 dump02 nfacctd[9651]: WARN: expecting flow 
  '3982342489' but received '3982343156' collector=(null):2100 
  agent=193.156.90.68:1792 Mar 14 16:16:23 dump02 
 nfacctd[9651]: WARN: 
  expecting flow '3982343156' but received '3982343533' 
  collector=(null):2100 agent=193.156.90.68:1792
  
  I have 3 distinct sources of Netflow/cFlow packets and all 
 three had lost reports like this. All plugins had a 
 dramatic decrease in reported flow data for all IPs (my 
 estimate is around 60% lost flow information during these 45 
 minutes). During that time I tried desperately to 
 troubleshoot the possible cause. Finally I gave up and 
 reverted to 0.10.3 (while the MySQL jobs were still running). 
 I received no further warning messages and the Cricket graphs 
 went immediately back to normal while the MySQL jobs 
 continued running with unaltered load (they are still running). 
  
  There are 0 errors on the receiving interfaces. There were 
 no other recorded network related incidents during that 
 period. I also have another installation on a site with much 
 less traffic and more moderate load on the Nfaccd
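The sequence checks behind the warnings quoted above can be sketched roughly as follows. This is an illustration of the idea only, not nfacctd's actual code: v5 sequence numbers count flows, so the next expected value is the last seqno plus the last packet's flow count.

```python
# Illustration of per-agent NetFlow v5 sequence checking, mimicking the
# "expecting flow X but received Y" warnings quoted above. Not nfacctd's
# actual implementation.
expected = {}  # agent -> next expected sequence number

def check_seq(agent, seqno, flow_count):
    """Return a warning string when a gap is seen, else None; always resync."""
    warn = None
    if agent in expected and expected[agent] != seqno:
        warn = (f"WARN: expecting flow '{expected[agent]}' "
                f"but received '{seqno}' agent={agent}")
    expected[agent] = seqno + flow_count
    return warn

# First packet establishes state; a later packet with a jumped seqno
# indicates flows lost in between, triggering a warning.
assert check_seq("193.156.90.68:1792", 3982342489, 30) is None
print(check_seq("193.156.90.68:1792", 3982343156, 30))
```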

[pmacct-discussion] Using tag ID's in aggregate_filter

2006-01-20 Thread Inge Bjørnvall Arnesen
Hi Paolo,

Having found the aggregate_filter field insufficient for my needs, I've made a 
pretag map which should generate the IDs I need for the flows. What I can't 
seem to find in the documentation or example is how to match the IDs in the 
aggregate_filter field. Do you have an example around that could go into the 
distribution as well as my mailbox?

all the best,

-- inge
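A sketch of the kind of setup in question, assuming the per-plugin pre_tag_filter directive (documented in pmacct's CONFIG-KEYS) is the intended mechanism for matching pretag IDs, rather than putting the IDs inside aggregate_filter; plugin names and the tag values are examples:

```
! Illustrative fragment: tag flows via the pretag map, then let each
! plugin select its tag with pre_tag_filter
pre_tag_map: /path/to/pretag.map
plugins: mysql[in], mysql[out]
pre_tag_filter[in]: 1
pre_tag_filter[out]: 2
```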



[pmacct-discussion] protocol types, rFlow and ARM-platform

2006-01-04 Thread Inge Bjørnvall Arnesen
Hi ho Paolo,

 Hmmm. Does your 'aggregate' directive include the 'proto' key ? 

Come to think of it - no. No wonder it didn't show up then. I'll take my 
slightly red face to the kitchen and make another espresso.

 BTW, what is
DD-WRT rFlow ? Is it something based upon NetFlow ? 

Yes - it's a Linux implementation of Netflow, now a part of the DD-WRT 
alternate firmware for Linksys wireless routers and look-alikes - 
http://www.dd-wrt.com/. There is very little info on the implementation, but 
the source code is available for download(and I believe it is GPL, so anyone 
can put it onto their favorite Linux-based router). I've had it running at home 
for little over two weeks and so far it seems to work well.

Of course there is a big difference between these tiny routers and the big 
Ciscos, but for network control freaks or general home computer hobbyists, it 
is a nice scaled-down version that works well with ADSL/cable connections. I 
find the Linksys'es with this FW convenient for prototyping as well. Don't want 
to disrupt production systems and many people can't afford a big Cisco in a 
staging environment.

I also compiled pmacct for a small embedded platform - the ARM-based NSLU2 from 
Linksys - a tiny, fanless Linux-based computer that costs nothing (eh... 
~$100), is reasonably stable and with a USB memory key is just right for such 
applications as online network statistics (not pcap-based - there is not enough 
CPU power for that), honeypots and so on. Haven't gotten sqlite to work yet, 
but that is most probably not pmacct's fault (MySQL runs, but slowly and 
requires a hard drive). If pmacct lives up to expectations during testing, I 
hope to have a binary distribution ready soon (since it is a small device, 
applications are normally cross-compiled). No idea if there is interest in this 
outside of my apartment, but time will tell.

Thanks,

-- Inge

[pmacct-discussion] MySQL update performance and possible bug in 0.9.4

2005-12-08 Thread Inge Bjørnvall Arnesen
Hi Paolo,

Thank you very much for your reply. I've been testing out your suggestions and 
documenting the results, but in the process I found something seriously wrong 
that may make these results irrelevant, so I decided to ask for help with 
regards to this before spending more time on testing.

My main problem was that the process of entering flow data into MySQL using 
nfacctd took a long time - sometimes the process did not complete purging the 
buffer before the next buffer purge process was started, thus filling up memory 
with instances of nfacctd and mysql. Nfacctd/MySQL is running on a dedicated 
(otherwise idle) quad CPU, 3GHz Xeon with 2GB RAM and the number of buffer 
entries purged every 5 minutes is around 1500, so we're talking an overpowered 
beast dealing with peanuts.

That nfacctd spends 3-6 minutes putting these values into MySQL doesn't seem 
right. Looking at the time taken, the UPDATE queries account for 95% of this 
(though 40% of all queries are INSERTs). INSERTs are very fast - several 
hundred queries per second (maybe thousands), while UPDATEs run at ~8-10 
queries per second, decreasing with table size. At first I checked the MyISAM 
indexes (no change) and even converted to InnoDB (which just made it run 
slower).

The UPDATE queries made by nfacctd (without L2 information) look like this (I 
modified the plugin to give a query-dump):


UPDATE acct_out SET packets=packets+4466, bytes=bytes+5404822, stamp_updated=now() 
WHERE FROM_UNIXTIME(1133803500) = stamp_inserted AND ip_src='81.93.162.19' 
AND ip_dst=2116 AND src_port=80 AND dst_port=0 AND ip_proto='ip';
UPDATE acct_out SET packets=packets+1734, bytes=bytes+2289876, stamp_updated=now() 
WHERE FROM_UNIXTIME(1133803500) = stamp_inserted AND ip_src='81.93.161.244' 
AND ip_dst=15659 AND src_port=80 AND dst_port=0 AND ip_proto='ip';
UPDATE acct_out SET packets=packets+92, bytes=bytes+70564, stamp_updated=now() 
WHERE FROM_UNIXTIME(1133803500) = stamp_inserted AND ip_src='81.93.162.132' 
AND ip_dst=15659 AND src_port=80 AND dst_port=0 AND ip_proto='ip';
UPDATE acct_out SET packets=packets+10, bytes=bytes+3682, stamp_updated=now() 
WHERE FROM_UNIXTIME(1133803500) = stamp_inserted AND ip_src='81.93.161.42' 
AND ip_dst=15659 AND src_port=80 AND dst_port=0 AND ip_proto='ip';
..
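As an aside on the INSERT/UPDATE split visible above: the accumulate-by-primary-key pattern these queries implement is what an upsert expresses in one statement. A sketch using sqlite3 purely as a stand-in (MySQL spells it INSERT ... ON DUPLICATE KEY UPDATE); this is not pmacct code, and the schema is trimmed to two key columns:

```python
# Sketch of the accumulate-by-primary-key idea behind the UPDATE queries
# above, using sqlite3's UPSERT as a stand-in for MySQL's
# INSERT ... ON DUPLICATE KEY UPDATE. Schema trimmed; not pmacct code.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE acct_out (
    ip_src   TEXT,
    dst_port INTEGER,
    packets  INTEGER DEFAULT 0,
    bytes    INTEGER DEFAULT 0,
    PRIMARY KEY (ip_src, dst_port))""")

def account(ip_src, dst_port, packets, nbytes):
    # One statement per flow aggregate: insert the row, or add the
    # counters onto the existing row when the key already exists.
    con.execute("""INSERT INTO acct_out (ip_src, dst_port, packets, bytes)
                   VALUES (?, ?, ?, ?)
                   ON CONFLICT (ip_src, dst_port) DO UPDATE SET
                     packets = packets + excluded.packets,
                     bytes   = bytes + excluded.bytes""",
                (ip_src, dst_port, packets, nbytes))

account("81.93.162.19", 80, 4466, 5404822)
account("81.93.162.19", 80, 1734, 2289876)
print(con.execute("SELECT packets, bytes FROM acct_out").fetchone())
```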

I checked the table keys for the two tables to make sure each sub-clause in the 
WHERE-clause correspondes to the primary table keys:

mysql> describe acct_in;
+----------------+---------------------+------+-----+---------------------+-------+
| Field          | Type                | Null | Key | Default             | Extra |
+----------------+---------------------+------+-----+---------------------+-------+
| ip_src         | char(15)            |      | PRI |                     |       |
| ip_dst         | char(15)            |      | PRI |                     |       |
| src_port       | int(2) unsigned     |      | PRI | 0                   |       |
| dst_port       | int(2) unsigned     |      | PRI | 0                   |       |
| ip_proto       | char(6)             |      | PRI |                     |       |
| packets        | int(10) unsigned    |      |     | 0                   |       |
| bytes          | bigint(20) unsigned |      |     | 0                   |       |
| stamp_inserted | datetime            |      | PRI | 0000-00-00 00:00:00 |       |
| stamp_updated  | datetime            | YES  |     | NULL                |       |
+----------------+---------------------+------+-----+---------------------+-------+
9 rows in set (0.00 sec)

mysql> describe acct_out;
+----------------+---------------------+------+-----+---------------------+-------+
| Field          | Type                | Null | Key | Default             | Extra |
+----------------+---------------------+------+-----+---------------------+-------+
| ip_src         | char(15)            |      | PRI |                     |       |
| ip_dst         | char(15)            |      | PRI |                     |       |
| src_port       | int(2) unsigned     |      | PRI | 0                   |       |
| dst_port       | int(2) unsigned     |      | PRI | 0                   |       |
| ip_proto       | char(6)             |      | PRI |                     |       |
| packets        | int(10) unsigned    |      |     | 0                   |       |
| bytes          | bigint(20) unsigned |      |     | 0                   |       |
| stamp_inserted | datetime            |      | PRI | 0000-00-00 00:00:00 |       |
| stamp_updated  | datetime            | YES  |     | NULL                |       |
+----------------+---------------------+------+-----+---------------------+-------+
9 rows in set (0.00 sec)



An EXPLAIN on a SELECT query with the same WHERE clause as the nfacctd UPDATE 
queries gives the following:

mysql> explain SELECT count(*) from acct_in WHERE FROM_UNIXTIME(1133803500) = 
stamp_inserted AND ip_dst='81.93.162.235' AND ip_src=3307 AND