Do you think the error "Skip unknown record type .. " is also due to
memory?

With a smaller data set, I don't get the core dump, but when I try to
read the aggregated binary file it shows errors (one error for each
aggregated flow entry):


$ nfdump -a -A srcip4/24,dstip4/24 -r nfcapd.201010162355 -w /tmp/s

$ nfdump -r /tmp/s
Skip unknown record type 4466
Skip unknown record type 4466
Skip unknown record type 4466
Skip unknown record type 4466
Date flow start          Duration Proto      Src IP Addr:Port          Dst IP Addr:Port   Packets    Bytes Flows
2010-10-16 23:57:01.920     1.590 TCP        63.241.13.0:22    ->    12.120.10.0:49819        4     3184     1
Summary: total flows: 1, total bytes: 3184, total packets: 4, avg bps:
16020, avg pps: 2, avg bpp: 796
Time window: 2010-10-16 23:57:01 - 2010-10-16 23:57:03
Total flows processed: 1, Blocks skipped: 0, Bytes read: 21364
Sys: 0.004s flows/second: 249.2      Wall: 0.001s flows/second: 512.3

The validation also shows it's corrupted:

lznsun02-124# nfdump -v /tmp/s
File    : /tmp/s
Version : 1 - not compressed
Blocks  : 0
 Type 1 : 0
 Type 2 : 0
Records : 0
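Incidentally, 4466 decimal is 0x1172, so the "unknown record type" may simply be arbitrary bytes from the corrupted writer being interpreted as a record-type field. A quick way to check (a sketch; /tmp/s is the output path from the session above) is to hexdump the file and look for that byte pair:

```shell
# 4466 decimal == 0x1172; write those two bytes (little-endian) to a
# scratch file so we know what the bogus type field looks like in hex.
printf '\x72\x11' > /tmp/rectype.bin
hexdump -C /tmp/rectype.bin
# Then scan the suspect aggregate file for the same pair, e.g.:
#   hexdump -C /tmp/s | grep '72 11'
```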

-----Original Message-----
From: Peter Haag [mailto:[email protected]] 
Sent: Wednesday, January 05, 2011 12:09 PM
To: SOLOMON, STEVEN J (ATTSI)
Cc: [email protected]
Subject: Re: [Nfdump-discuss] Failure in Aggregation to a binary file

Looks like memory gets corrupted. Unfortunately I cannot reproduce that.
Large quantities of flows work for me, as well as small ones, on x86 and
x86_64.

If you run out of memory you should get
malloc() error in nflowcache.c line 254: Cannot allocate memory

You can test that by setting limits at the shell level:
% limit vmemoryuse 100M
% nfdump -a -A srcip4/24,dstip4/24 -R 2010/12/01/00/nfcapd.201012010000:nfcapd.201012020025 -w /tmp/s
malloc() error in nflowcache.c line 254: Cannot allocate memory
%
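For anyone on bash or ksh rather than csh, the equivalent of the `limit` line above is the `ulimit` builtin (a sketch; the nfdump invocation is the one from this thread):

```shell
# Cap this shell's virtual memory at 100 MB; ulimit -v takes kilobytes.
ulimit -v 102400
# Re-run the aggregation under the cap; genuine memory exhaustion should
# then surface as the explicit malloc() error rather than a silent failure:
# nfdump -a -A srcip4/24,dstip4/24 -r nfcapd.201010162355 -w /tmp/s
```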

        - Peter


On 1/5/11 15:15, SOLOMON, STEVEN J (ATTSI) wrote:
> When using the aggregation flag (-a) in nfdump on a single file or a
> range of files with a large dataset, for example in the command below,
> it generates a core dump and an empty output file (a 276-byte file):
> 
>  
> 
> $ nfdump -a -A srcip4/24,dstip4/24 -R 2010-11-01/nfcapd.201011010000:2010-11-01/nfcapd.201011010025 -w /tmp/s
> 
> Segmentation Fault(coredump)
> 
> $ ls -l /tmp/s
> 
> -rw-r--r--   ...  Jan  4 22:50 /tmp/s
> 
>  
> 
> When the same aggregation is done on a different server with a smaller
> data set, there is no coredump, and the resulting file is larger, but
> when I read the file it throws errors "Skip unknown record type .."
> (with what looks like an ASN):
> 
>  
> 
> $ nfdump -a -A srcip4/24,dstip4/24,srcport,dstport -r
> nfcapd.201010162355 -w /tmp/s
> 
>  
> 
> $ nfdump -r /tmp/s
> 
> Skip unknown record type 4466
> 
> Skip unknown record type 4466
> 
> Skip unknown record type 4466
> 
> Skip unknown record type 4466
> 
> Date flow start          Duration Proto      Src IP Addr:Port          Dst IP Addr:Port   Packets    Bytes Flows
> 
> 2010-10-16 23:57:01.920     1.590 TCP        63.241.13.0:22    ->    12.120.10.0:49819        4     3184     1
> 
> Summary: total flows: 1, total bytes: 3184, total packets: 4, avg bps:
> 16020, avg pps: 2, avg bpp: 796
> 
> Time window: 2010-10-16 23:57:01 - 2010-10-16 23:57:03
> 
> Total flows processed: 1, Blocks skipped: 0, Bytes read: 21364
> 
> Sys: 0.004s flows/second: 249.2      Wall: 0.001s flows/second: 512.3
> 
>  
> 
> I have no trouble using the aggregation option (-a) writing to a text
> file.
> 
>  
> 
> Can anyone shed light on why I'm getting these errors when aggregating
> to a binary file? Is this supported, or am I doing something wrong?
> 
>  
> 
>  
> 
> Steve Solomon
> 

------------------------------------------------------------------------------
Learn how Oracle Real Application Clusters (RAC) One Node allows customers
to consolidate database storage, standardize their database environment, and, 
should the need arise, upgrade to a full multi-node Oracle RAC database 
without downtime or disruption
http://p.sf.net/sfu/oracle-sfdevnl
_______________________________________________
Nfdump-discuss mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/nfdump-discuss
