What do I think? I think you don't quite understand NetFlow.
1) The frequency of NetFlow export shouldn't matter. NetFlow is sending you
((src),(dst),(pkt),(bytes)). If you delay the export, all the sender does
is collapse more packets into each flow record - i.e. pkt > 1.
But you still get the flow, and it still gets stored into the normal ntop counters.
See handleV5Flow() in plugins/rrdPlugin.c -- look for the setting of len
(from record->dOctets) and then the incrementTrafficCounter() calls.
Then the normal rrd loop takes over and periodically dumps the counters into
the rrds.
I think you and Bill Richardson need to figure out which value is right and
which is wrong. It sure sounds like it could be the same basic problem. See
his PR QDA9WTB stuff...
Bill: Reading the code, there clearly are two different paths taken between
the domain and hosts stats. More precisely, there are a number of
conditions that would cause the host data not to be stored, while the domain
data is pretty much unconditional. Compare the blocks in
plugins/rrdPlugin.c: if(dumpDomains) {} 1684ff vs. if(dumpHosts) {} at
1801ff
Key differences:
dumpDomains uses the now standard loop:
    // walk through all hosts, getting their domain names and counting stats
    for (el = getFirstHost(devIdx); el != NULL; el = getNextHost(devIdx, el)) {
vs.
for(i=1; i<myGlobals.device[devIdx].actualHashSize; i++) {
HostTraffic *el = myGlobals.device[devIdx].hash_hostTraffic[i];
dumpDomains bails only if the name hasn't yet been resolved (or the host is
a broadcast address):
// if we didn't get a domain name, bail out
if ((el->fullDomainName == NULL)
|| (el->fullDomainName[0] == '\0')
|| (el->dotDomainName == NULL)
|| (el->hostResolvedName[0] == '\0')
|| broadcastHost(el)
) {
continue;
}
vs. a bunch of tests in dumpHosts, including:
if(el->hostNumIpAddress[0] != '\0') {
...
} else {
/* For the time being do not save IP-less hosts */
el = el->next;
...
continue;
}
However, like hostResolvedName, hostNumIpAddress should get filled in during
the ntop run - perhaps not immediately, but certainly within seconds/minutes -
so we might miss a few cycles, but that's not the 10x difference on
every cycle you're seeing.
If anything, I would expect the dumpDomains value to be right and the
dumpHosts wrong.
See, here's what's weird: for me, the data points that are in common match:
# rrdtool fetch eth0/domains/burtonstrauss.com/bytesRcvd.rrd AVERAGE | grep -v nan | grep -v '0\.0000000000e+00'
counter
1077951900: 1.1114381271e+03
1077952200: 4.1864169454e+02
# rrdtool fetch eth0/hosts/217/160/226/66/bytesRcvd.rrd AVERAGE | grep -v nan
counter
1077951900: 1.1114381271e+03
If I'm reading things correctly, Bill is using a single interface and
capturing packets, while Markus is using NetFlow.
Bill, are you sure that the domain stats are wrong and the host stats
right? The different loop is the one thing that would explain BOTH issues
(plus what I'm seeing on my system). But then the 20GB domain value would be
right and the lower host value wrong...
-----Burton
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf
> Of Markus Rehbach
> Sent: Saturday, February 28, 2004 2:32 AM
> To: [EMAIL PROTECTED]
> Subject: [Ntop-dev] values in upToXYZPkts RRD graphs are too low (about
> factor 10)
>
>
> Hi all,
>
> comparing the ethernetPkts RRD graph with the sum of the upToXYZPkts is
> showing a difference by about factor 10.
>
> The graphs for eth0 seem to be correct.
>
> It is clear that differences must exist between 'raw packets' and NetFlow
> because of the quantifying effects of NetFlow (concerning the packet sizes,
> lower values in the biggest and smallest sizes), but I'm exporting the
> NetFlow stuff every 5 seconds, and therefore the differences should
> not be big.
>
> What do you think?
>
> Markus
>
> _______________________________________________
> Ntop-dev mailing list
> [EMAIL PROTECTED]
> http://listgateway.unipi.it/mailman/listinfo/ntop-dev
>