I put nfdump on a Linux machine and tried the same thing.

I don't get the errors, but the binary file created isn't aggregated; in
fact, it's larger than the original file.

If I output the aggregation as text (by removing the -w option), then it
is correct. So I'm still not able to write the aggregation to a binary
file that nfdump can read back correctly. (Note in the -v output below
that both files report the same 1505 records, which says the flows were
written out unaggregated; the size difference just reflects the input
being compressed while the output is not.)
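
(For reference, my understanding of what -A srcip4/24,dstip4/24 asks
for, sketched in C below. This is just the concept, not nfdump's
implementation, and every name in it is mine: each flow is keyed on its
masked source and destination networks and the counters are summed,
which is exactly what the text output below shows.)

    #include <stdint.h>
    #include <stddef.h>

    /* One aggregation bucket per (src /24, dst /24) pair. */
    typedef struct {
        uint32_t src_net;           /* src address & 0xFFFFFF00 */
        uint32_t dst_net;           /* dst address & 0xFFFFFF00 */
        uint64_t packets, bytes, flows;
    } agg_bucket_t;

    /* Merge one flow into a (toy, linear-scan) bucket table; a real
     * implementation would use a hash table and grow it on demand. */
    static int aggregate(agg_bucket_t *tab, size_t n,
                         uint32_t src, uint32_t dst,
                         uint64_t pkts, uint64_t bytes) {
        const uint32_t mask = 0xFFFFFF00u;      /* /24 netmask */
        src &= mask;
        dst &= mask;
        for (size_t i = 0; i < n; i++) {
            if (tab[i].src_net == src && tab[i].dst_net == dst) {
                tab[i].packets += pkts;         /* found: sum counters */
                tab[i].bytes   += bytes;
                tab[i].flows   += 1;            /* the Flows column */
                return 1;
            }
        }
        return 0;                               /* caller adds a new bucket */
    }

    int main(void) {
        /* one bucket for 204.127.113.0/24 -> 204.127.118.0/24 */
        agg_bucket_t tab[1] = { { 0xCC7F7100u, 0xCC7F7600u, 0, 0, 0 } };
        /* fold in a flow from .113.1 to .118.1 (arbitrary counters) */
        aggregate(tab, 1, 0xCC7F7101u, 0xCC7F7601u, 726, 527000);
        return 0;
    }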

[...@mtnlsic01 bin]$ ./nfdump -a -A srcip4/24,dstip4/24 -r ./nfcapd.200912161720 -w /tmp/s
[...@mtnlsic01 bin]$ ls -l /tmp/s
-rw-r--r-- 1 sjs sjs 90500 Jan  5 12:51 /tmp/s
[...@mtnlsic01 bin]$ ls -l ./nfcapd.200912161720
-rw-rw-r-- 1 sjs sjs 21050 Jan  5 12:46 ./nfcapd.200912161720
[...@mtnlsic01 bin]$ ./nfdump -v /tmp/s
File    : /tmp/s
Version : 1 - not compressed
Blocks  : 1
 Type 1 : 0
 Type 2 : 1
Records : 1505
[...@mtnlsic01 bin]$ ./nfdump -v ./nfcapd.200912161720
File    : ./nfcapd.200912161720
Version : 1 - compressed
Blocks  : 1
 Type 1 : 0
 Type 2 : 1
Records : 1505

[...@mtnlsic01 bin]$ ./nfdump -a -A srcip4/24,dstip4/24 -r ./nfcapd.200912161720
Date flow start          Duration       Src IP Addr      Dst IP Addr   Packets    Bytes      bps    Bpp Flows
2009-12-16 11:09:22.580  4501.420     204.127.113.1    204.127.118.1    10.3 M    7.5 G   13.4 M    726   500
2009-12-16 11:54:07.910  1801.080     204.127.112.1    204.127.114.1    10.3 M    7.5 G   33.4 M    726   500
2009-12-16 12:20:05.050   206.000     204.127.122.1    204.127.122.2         3      151        5     50     3
2009-12-16 11:54:32.670  1801.330     204.127.114.1    204.127.112.1    10.3 M    7.5 G   33.4 M    726   500
Summary: total flows: 1503, total bytes: 22.5 G, total packets: 31.0 M, avg bps: 40.0 M, avg pps: 6874, avg bpp: 726
Time window: 2009-12-16 11:09:22 - 2009-12-16 12:24:34
Total flows processed: 1503, Blocks skipped: 0, Bytes read: 90224
Sys: 0.001s flows/second: 751875.9   Wall: 0.000s flows/second: 2110955.1

-----Original Message-----
From: SOLOMON, STEVEN J (ATTSI) 
Sent: Wednesday, January 05, 2011 12:23 PM
To: [email protected]
Cc: [email protected]
Subject: Re: [Nfdump-discuss] Failure in Aggregation to a binary file

Do you think the error "Skip unknown record type .. " is also due to
memory?
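
(The way I picture the reader working, sketched in C below, is that
every record starts with a type/size header, so a header picked up at
the wrong offset yields a garbage type that can only be skipped. This is
a generic sketch with made-up names, not the actual nfdump source.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical record layout: every record starts with a 16-bit
     * type and a 16-bit total size. If a header is written (or read)
     * at the wrong offset, the reader lands mid-record and sees a
     * garbage type value, like 4466, that it can only skip over. */
    typedef struct {
        uint16_t type;
        uint16_t size;              /* total record size in bytes */
    } rec_header_t;

    #define REC_TYPE_FLOW 1         /* assumed "known" record type */

    static void process_block(const uint8_t *buf, size_t len) {
        size_t off = 0;
        while (off + sizeof(rec_header_t) <= len) {
            rec_header_t hdr;
            memcpy(&hdr, buf + off, sizeof hdr);
            if (hdr.size < sizeof hdr || off + hdr.size > len)
                break;              /* size field corrupt: give up */
            if (hdr.type == REC_TYPE_FLOW) {
                /* decode and print the flow record here */
            } else {
                fprintf(stderr, "Skip unknown record type %u\n",
                        (unsigned)hdr.type);
            }
            off += hdr.size;        /* jump to the next record header */
        }
    }

    int main(void) {
        uint8_t buf[4];
        rec_header_t bogus = { 4466, sizeof buf }; /* type from my output */
        memcpy(buf, &bogus, sizeof bogus);
        process_block(buf, sizeof buf);   /* prints the skip message */
        return 0;
    }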

With a smaller data set I don't get the core dump, but when I try to
read the aggregated binary file it shows errors (one error for each
aggregated flow entry):


$ nfdump -a -A srcip4/24,dstip4/24 -r nfcapd.201010162355 -w /tmp/s

$ nfdump -r /tmp/s
Skip unknown record type 4466
Skip unknown record type 4466
Skip unknown record type 4466
Skip unknown record type 4466
Date flow start          Duration Proto      Src IP Addr:Port          Dst IP Addr:Port   Packets    Bytes Flows
2010-10-16 23:57:01.920     1.590 TCP        63.241.13.0:22    ->      12.120.10.0:49819        4     3184     1
Summary: total flows: 1, total bytes: 3184, total packets: 4, avg bps: 16020, avg pps: 2, avg bpp: 796
Time window: 2010-10-16 23:57:01 - 2010-10-16 23:57:03
Total flows processed: 1, Blocks skipped: 0, Bytes read: 21364
Sys: 0.004s flows/second: 249.2      Wall: 0.001s flows/second: 512.3

Also, the validation shows it's corrupted:

lznsun02-124# nfdump -v /tmp/s
File    : /tmp/s
Version : 1 - not compressed
Blocks  : 0
 Type 1 : 0
 Type 2 : 0
Records : 0

-----Original Message-----
From: Peter Haag [mailto:[email protected]] 
Sent: Wednesday, January 05, 2011 12:09 PM
To: SOLOMON, STEVEN J (ATTSI)
Cc: [email protected]
Subject: Re: [Nfdump-discuss] Failure in Aggregation to a binary file

Looks like memory gets corrupted. Unfortunately I cannot reproduce that:
large quantities of flows work for me, as well as small ones, on both
x86 and x86_64.

If you run out of memory, you should get:
malloc() error in nflowcache.c line 254: Cannot allocate memory
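
(That message comes from a checked-allocation wrapper, conceptually like
the following. This is a minimal sketch of the pattern, not the actual
code in nflowcache.c, and the names are mine.)

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Allocate or die with a diagnostic naming the call site. A clean
     * out-of-memory condition announces itself like this; silent heap
     * corruption usually ends in a segfault instead. */
    static void *xmalloc(size_t n, const char *file, int line) {
        void *p = malloc(n);
        if (p == NULL) {
            fprintf(stderr, "malloc() error in %s line %d: %s\n",
                    file, line, strerror(errno));
            exit(255);
        }
        return p;
    }
    #define XMALLOC(n) xmalloc((n), __FILE__, __LINE__)

    int main(void) {
        char *buf = XMALLOC(1 << 20);   /* 1 MiB test allocation */
        free(buf);
        return 0;
    }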

You can test that by setting limits on the shell level:
% limit vmemoryuse 100M
% nfdump -a -A srcip4/24,dstip4/24 -R 2010/12/01/00/nfcapd.201012010000:nfcapd.201012020025 -w /tmp/s
malloc() error in nflowcache.c line 254: Cannot allocate memory
%
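
(If your shell is bash rather than csh, the equivalent should be
"ulimit -v 102400"; ulimit takes the value in kilobytes.)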

        - Peter


On 1/5/11 15:15, SOLOMON, STEVEN J (ATTSI) wrote:
> When using the aggregation flag (-a) in nfdump on a single file or a
> range of files with a large dataset (for example, in the command
> below), it generates a core dump and an empty output file (276 bytes):
> 
>  
> 
> $ nfdump -a -A srcip4/24,dstip4/24 -R 2010-11-01/nfcapd.201011010000:2010-11-01/nfcapd.201011010025 -w /tmp/s
> 
> Segmentation Fault(coredump)
> 
> $ ls -l /tmp/s
> 
> -rw-r--r--   ...  Jan  4 22:50 /tmp/s
> 
>  
> 
> When the same aggregation is done on a different server with a smaller
> data set, there is no core dump and the resulting file is larger, but
> when I read the file it throws errors "Skip unknown record type .."
> (with what looks like an ASN):
> 
>  
> 
> $ nfdump -a -A srcip4/24,dstip4/24,srcport,dstport -r nfcapd.201010162355 -w /tmp/s
> 
>  
> 
> $ nfdump -r /tmp/s
> 
> Skip unknown record type 4466
> 
> Skip unknown record type 4466
> 
> Skip unknown record type 4466
> 
> Skip unknown record type 4466
> 
> Date flow start          Duration Proto      Src IP Addr:Port          Dst IP Addr:Port   Packets    Bytes Flows
> 
> 2010-10-16 23:57:01.920     1.590 TCP        63.241.13.0:22    ->      12.120.10.0:49819        4     3184     1
> 
> Summary: total flows: 1, total bytes: 3184, total packets: 4, avg bps: 16020, avg pps: 2, avg bpp: 796
> 
> Time window: 2010-10-16 23:57:01 - 2010-10-16 23:57:03
> 
> Total flows processed: 1, Blocks skipped: 0, Bytes read: 21364
> 
> Sys: 0.004s flows/second: 249.2      Wall: 0.001s flows/second: 512.3
> 
>  
> 
> I have no trouble using the aggregation option (-a) when writing to a
> text file.
> 
>  
> 
> Can anyone shed any light on why I'm getting these errors when
> aggregating to a binary file? Is this supported, or am I doing
> something wrong?
> 
>  
> 
>  
> 
> Steve Solomon


