Hi David,

The other possibility that comes to mind is that you are only
exporting counter samples (disregarded by pmacct) and not flow
samples (used by pmacct). You can confirm this by reading your
sFlow packets with sflowtool. Alternatively, you can follow up
privately and send me a brief capture of your sFlow packets
for some investigation (needless to say, I'm happy to support).
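
For instance, with sfacctd stopped so the port is free, something
along these lines should show which sample types are arriving - flow
samples print as FLOWSAMPLE records, counter samples as
COUNTERSSAMPLE (going from memory, so double-check against your
sflowtool version):

sflowtool -p 6343 | grep -i sampletype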

Cheers,
Paolo

On Thu, Nov 06, 2014 at 04:23:05PM +0100, David Winterstein wrote:
> Hi Paolo,
> 
> I changed the sfacct config as you suggested so it looks like this
> now for the time of testing:
> debug: true
> daemonize:                    false
> aggregate: src_host,dst_host,proto,src_port,dst_port
> logfile:                      /var/log/sfacctd.log
> pidfile:                      /var/run/sfacctd.pid
> interface:                    eth0
> interface_wait:               true
> promisc:                      false
> sfacctd_ip:                   10.41.16.155
> sfacctd_port:                 6343
> sfacctd_net:                  sflow
> sfacctd_disable_checks:       false
> plugins:                      print
> print_markers:                true
> print_output:                 formatted
> print_num_protos:             true
> print_refresh_time:           60
> print_history:                10m
> print_output_file:            /tmp/sfacctd_print.txt
> print_output_file_append:     true
> 
> The logs show that sfacctd is doing *something*, but the file
> indicates that the flow packets are empty, if I'm interpreting it
> right:
> root@domU:~# cat /var/log/sfacctd.log
> Nov 06 11:58:27 INFO ( default/print ): 229376 bytes are available
> to address shared memory segment; buffer size is 200 bytes.
> Nov 06 11:58:27 INFO ( default/print ): Trying to allocate a shared
> memory segment of 5734400 bytes.
> Nov 06 11:58:27 INFO ( default/core ): waiting for sFlow data on
> 10.41.16.155:6343
> Nov 06 11:59:01 INFO ( default/print ): *** Purging cache - START ***
> Nov 06 11:59:01 INFO ( default/print ): *** Purging cache - END (QN:
> 0, ET: 0) ***
> Nov 06 12:00:01 INFO ( default/print ): *** Purging cache - START ***
> Nov 06 12:00:01 INFO ( default/print ): *** Purging cache - END (QN:
> 0, ET: 0) ***
> Nov 06 12:01:01 INFO ( default/print ): *** Purging cache - START ***
> Nov 06 12:01:01 INFO ( default/print ): *** Purging cache - END (QN:
> 0, ET: 0) ***
> root@domU:~# cat /tmp/sfacctd_print.txt
> SRC_IP           DST_IP           SRC_PORT  DST_PORT  PROTOCOL  PACKETS               BYTES
> --START (1415271480+60)--
> --END--
> --START (1415271540+60)--
> --END--
> --START (1415271600+60)--
> --END--
> 
> So what does this mean? The flow packets are empty?
> iptables shows nothing - neither on dom0 nor on domU:
> root@dom0:~# iptables -t nat -L -n -v
> Chain PREROUTING (policy ACCEPT 3725M packets, 6310G bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> Chain INPUT (policy ACCEPT 1610 packets, 251K bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> Chain OUTPUT (policy ACCEPT 453 packets, 35984 bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> Chain POSTROUTING (policy ACCEPT 3363K packets, 214M bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> root@domU:~# iptables -t nat -L -n -v
> Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
>  pkts bytes target     prot opt in     out source               destination
> 
> There is actually nothing else between dom0 and domU, because they
> are host and virtual machine - or am I guessing wrong?
> 
> 
> 
> On 06.11.2014 at 02:14, Paolo Lucente wrote:
> >Hi David,
> >
> >Two things to try: 1) simplify your config by printing to stdout or
> >flat files, so as to rule out issues with the schema; 2) make sure no
> >firewall, i.e. iptables, is blocking packets: the tcpdump socket is
> >served before packet filtering, while sfacctd sits after it.
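> >
> >For instance, a rule along these lines (a generic example, adjust to
> >your chains and policies) would explicitly let the collector traffic
> >through:
> >iptables -I INPUT -p udp --dport 6343 -j ACCEPT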
> >
> >Keep me posted on how these go and let's take it from there.
> >
> >Let me also confirm that the Wireshark dissector for sFlow is not of
> >the best quality - so exceptions like the one you ran into are normal.
> >
> >Cheers,
> >Paolo
> >
> >On Wed, Nov 05, 2014 at 09:20:49AM +0100, David Winterstein wrote:
> >>Hi,
> >>
> >>I'm currently having trouble setting up an sFlow generating and
> >>receiving / interpreting environment and do not really know where to
> >>ask for help; I hope this is the right place.
> >>The current setup consists of a Xen server connected to the
> >>monitoring port of an HP switch - in detail:
> >>*Switch:* HP ProCurve 2848
> >>[Mirror Port]        42
> >>[Monitoring sources] Ports 01 - 41 & 43 - 48
> >>*Server:*
> >>[BOARD] Supermicro X9SCI/X9SCA 1.01
> >>[CPU]   Intel Xeon E31230 4x @3.2 GHz
> >>[RAM]   16GB @1333 MHz
> >>
> >>The server hosts a Xen dom0 with Debian Wheezy, running hsflowd
> >>version 1.26.2 with a configuration as simple as the following:
> >>sflow {
> >>   agent = eth0
> >>   DNSSD = off
> >>   polling = 20
> >>   sampling = 512
> >>   collector {
> >>     ip = 10.41.16.155
> >>     udpport = 6343
> >>   }
> >>}
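> >>
> >>(As I understand it, polling = 20 means counters are exported every
> >>20 seconds, and sampling = 512 means one packet in 512 is sampled -
> >>so with little traffic on the mirrored ports, flow samples can be
> >>rare.)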
> >>
> >>The network on the dom0 is configured as follows:
> >># eth0
> >>allow-hotplug eth0
> >>auto eth0
> >>iface eth0 inet static
> >>   address 10.41.16.152
> >>   netmask 255.255.255.0
> >>   gateway 10.41.16.1
> >>
> >># eth1 bridge
> >>allow-hotplug xenbr1
> >>auto xenbr1
> >>iface xenbr1 inet static
> >>   address 10.41.16.153
> >>   netmask 255.255.255.0
> >>   bridge-ports eth1
> >>
> >>with the domU network part of the config file being
> >>vif = [ 'ip=10.41.1.155,mac=xx:xx:xx:xx:xx:xx,bridge=xenbr1' ]
> >>
> >>which results in the domU network configuration as follows:
> >># eth0
> >>auto eth0
> >>allow-hotplug eth0
> >>iface eth0 inet static
> >>   address 10.41.16.155
> >>   netmask 255.255.255.0
> >>   gateway 10.41.16.1
> >>
> >>With this configuration, hsflowd should generate flows from the
> >>traffic arriving on dom0 eth0 and send them to domU eth0, where
> >>sfacctd is listening.
> >>All interfaces are working the way they should and tcpdump shows a
> >>lot of traffic going through on dom0 eth0. tcpdump with a filter for
> >>UDP packets on port 6343 shows packets like these:
> >>root@dom0:~# tcpdump -n udp port 6343
> >>17:43:44.030582 IP 10.41.16.153.36533 > 10.41.16.155.6343: sFlowv5,
> >>IPv4 agent 10.41.16.152, agent-id 1, length 468
> >>root@domU:~# tcpdump -n udp port 6343
> >>17:43:44.188244 IP 10.41.16.153.36533 > 10.41.16.155.6343: sFlowv5,
> >>IPv4 agent 10.41.16.152, agent-id 1, length 468
> >>
> >>The sfacctd configuration is kept quite simple, too:
> >>debug: true
> >>daemonize:                    true
> >>aggregate: src_host,dst_host,proto,src_port,dst_port
> >>logfile:                      /var/log/sfacctd.log
> >>pidfile:                      /var/run/sfacctd.pid
> >>interface:                    eth0
> >>interface_wait:               true
> >>promisc:                      false
> >>sfacctd_ip:                   10.41.16.155
> >>sfacctd_port:                 6343
> >>sfacctd_net:                  sflow
> >>sfacctd_disable_checks:       false
> >>plugins:                      mysql
> >>sql_max_writers:              11
> >>sql_cache_entries:            49999
> >>sql_dont_try_update:          true
> >>sql_use_copy:                 true
> >>sql_multi_values:             1024000
> >>sql_locking_style:            table
> >>sql_trigger_exec:             true
> >>sql_trigger_time:             300
> >>sql_host:                     localhost
> >>sql_user:                     pmacct
> >>sql_passwd:                   xxxxxxxxxxxx
> >>sql_db:                       pmacct
> >>sql_table:                    v4_%Y%m%d
> >>sql_table_schema: /usr/local/pmacct/conf/mysql_schema_v4.cnf
> >>sql_table_version:            4
> >>sql_optimize_clauses:         true
> >>sql_refresh_time:             60
> >>sql_history:                  10m
> >>sql_history_roundoff:         m
> >>sql_recovery_logfile:         /var/log/pmacct/recovery_log
> >>
> >>with the sql_table_schema defined as follows:
> >>CREATE TABLE v4_%Y%m%d (
> >>   ip_src CHAR(15) NOT NULL,
> >>   ip_dst CHAR(15) NOT NULL,
> >>   src_port INT(2) UNSIGNED NOT NULL,
> >>   dst_port INT(2) UNSIGNED NOT NULL,
> >>   ip_proto CHAR(6) NOT NULL,
> >>   packets INT UNSIGNED NOT NULL,
> >>   bytes BIGINT UNSIGNED NOT NULL,
> >>   stamp_inserted DATETIME NOT NULL,
> >>   stamp_updated DATETIME,
> >>   PRIMARY KEY (ip_src, ip_dst, src_port, dst_port,
> >>   ip_proto, stamp_inserted)
> >>);
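> >>
> >>(If I understand the docs right, the %Y%m%d part is expanded
> >>strftime-style per period, so on Nov 4th, for example, the daily
> >>table should effectively be created as v4_20141104 from this
> >>schema.)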
> >>
> >>sfacctd is definitely up and running, as ps and netstat suggest:
> >>root@domU:~# ps auxf |sed '1p;/sfacct/!d'
> >>USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> >>root     19739  0.0  0.0   8412   552 pts/2    S+   08:38   0:00 |
> >>\_ sed 1p;/sfacct/!d
> >>root      5352  0.0  0.0  75748  7404 ?        Ss   Nov04   0:00
> >>sfacctd: Core Process [default]
> >>root      5353  0.0  0.1  89380 15592 ?        S    Nov04   0:03 \_
> >>sfacctd: MySQL Plugin [default]
> >>root@domU:~# netstat -tulnapee |sed '2p;/sfacct/!d'
> >>Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode       PID/Program name
> >>udp        0      0 10.41.16.155:6343       0.0.0.0:*                           0          10869       5352/sfacctd: Core
> >>
> >>
> >>The problem: no data is written to the database. The table is not
> >>even being created:
> >>root@domU:~# mysql --defaults-file=~/pmacct.my.cnf
> >>mysql> SELECT CURRENT_USER();
> >>+------------------+
> >>| CURRENT_USER()   |
> >>+------------------+
> >>| pmacct@localhost |
> >>+------------------+
> >>1 row in set (0.00 sec)
> >>mysql> SHOW DATABASES;
> >>+--------------------+
> >>| Database           |
> >>+--------------------+
> >>| information_schema |
> >>| pmacct             |
> >>+--------------------+
> >>2 rows in set (0.00 sec)
> >>mysql> USE pmacct;
> >>Database changed
> >>mysql> SHOW TABLES;
> >>Empty set (0.00 sec)
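> >>
> >>(I assume the pmacct user would also need CREATE privileges for the
> >>daily tables; SHOW GRANTS FOR CURRENT_USER(); should show whether
> >>that is the case.)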
> >>
> >>I do not really know where to look for an error, as sfacctd.log
> >>does not provide any useful information even in debug mode:
> >>root@domU:~# cat /var/log/sfacctd.log
> >>Nov 04 13:51:47 INFO ( default/mysql ): 229376 bytes are available
> >>to address shared memory segment; buffer size is 200 bytes.
> >>Nov 04 13:51:47 INFO ( default/mysql ): Trying to allocate a shared
> >>memory segment of 5734400 bytes.
> >>Nov 04 13:51:47 INFO ( default/core ): waiting for sFlow data on
> >>10.41.16.155:6343
> >>
> >>I even sniffed 5 sFlow packets or so with tcpdump and tried to open
> >>them in Wireshark, because I thought they might be corrupted and
> >>therefore not interpreted by sfacctd. I do not really know what the
> >>result means, though the *[Dissector bug, protocol sFlow:
> >>proto.c:3903: failed assertion "DISSECTOR_ASSERT_NOT_REACHED"]*
> >>error looks more like an error in Wireshark's protocol handling to
> >>me:
> >>- InMon sFlow
> >>     Datagram Version: 5
> >>     Agent Address: 10.41.16.152 (10.41.16.152)
> >>     Sub-agent ID: 100000
> >>     Sequence number: 197
> >>     SysUptime: 3921000
> >>     NumSamples: 1
> >>  - Counters sample, seq 197
> >>       0000 0000 0000 0000 0000 .... .... .... = Enterprise: standard
> >>sFlow (0)
> >>       .... .... .... .... .... 0000 0000 0000 = sFlow sample type:
> >>Counters sample (2)
> >>       Sample length (byte): 432
> >>       Sequence number: 197
> >>       0000 0010 .... .... .... .... .... .... = Source ID type: 2
> >>       .... .... 0000 0000 0000 0000 0000 0001 = Source ID index: 1
> >>       Counters records: 6
> >>     - Unknown sample format
> >>         0000 0000 0000 0000 0000 .... .... .... = Enterprise:
> >>standard sFlow (0)
> >>         .... .... .... .... .... 0111 1101 0001 = Format: Unknown (2001)
> >>         Flow data length (byte): 68
> >>     - 100 Base VG interface counters
> >>         0000 0000 0000 0000 0000 .... .... .... = Enterprise:
> >>standard sFlow (0)
> >>         .... .... .... .... .... 0000 0000 0100 = Format: 100 Base
> >>VG interface counters (4)
> >>         Flow data length (byte): 31
> >>         In High Priority Frames: 1
> >>         In High Priority Octets: 18374686479671558144
> >>         In Normal Priority Frames: 3
> >>         In Normal Priority Octets: 4297429031
> >>         In IPM Errors: 1518534656
> >>         In Oversize Frame Errors: 2
> >>         In Data Errors: 1
> >>         In Null Addressed Frames: 2461735
> >>         Out High Priority Frames: 1518469120
> >>         Out High Priority Octets: 128849018881
> >>         Transition Into Trainings: 2461735
> >>         HC In High Priority Octets: 6522056685362612181
> >>         HC In Normal Priority Octets: 223338299394
> >>         HC Out High Priority Octets: 6438106773558657025
> >>       Unknown enterprise format
> >>- [Dissector bug, protocol sFlow: proto.c:3903: failed assertion
> >>"DISSECTOR_ASSERT_NOT_REACHED"]
> >>   - [Expert Info (Error/Malformed): proto.c:3903: failed assertion
> >>"DISSECTOR_ASSERT_NOT_REACHED"]
> >>       [proto.c:3903: failed assertion "DISSECTOR_ASSERT_NOT_REACHED"]
> >>       [Severity level: Error]
> >>       [Group: Malformed]
> >>Can someone confirm that Wireshark has problems inspecting (such)
> >>sFlow packets?
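> >>
> >>(For reference, the capture was done roughly like
> >>root@domU:~# tcpdump -n -c 5 -w sflow.pcap udp port 6343
> >>though the exact flags may have differed.)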
> >>
> >>I really hope someone can help me solve this.
> >>Thanks in advance and kind regards,
> >>   David Winterstein
> >>
> >>-- 
> >>
> >>Compositiv GmbH
> >>Süderstraße 232     
> >>20537 Hamburg
> >>Tel: 040 / 611 673 40
> >>Fax: 040 / 611 673 41
> >>
> >>Managing Director: Matthias Krawen
> >>District Court Hamburg - HRB 122540
> >>
> >>VAT ID: DE282432834
> >>Our general terms and conditions (AGB) apply exclusively.
> >>
> >>
> 

_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
