Hello, Paolo,

> Is it possible that it is traffic hitting your own prefixes,

It hardly could be, for two reasons:
- sflow is enabled only on the "uplink" interfaces of the switch, facing the carriers. Traffic to my own prefixes wouldn't go there.
- The BGP-to-pmacct export policy is set up to export internal routes.

> You may double-check by adding net_dst to your aggregation method and see for
> which prefixes you get an as_dst zero - and evaluate whether that makes sense or not.
I did so; the as_dst=0 entries show up with a 0.0.0.0 net (as I suspected):

MariaDB [pmacct]> select iface_out, as_dst, net_dst, packets, bytes from as_out order by stamp_inserted desc limit 20;
+-----------+--------+---------------+-----------+--------------+
| iface_out | as_dst | net_dst       | packets   | bytes        |
+-----------+--------+---------------+-----------+--------------+
|       567 |  39572 | 46.229.160.0  |   4194304 |   6383730688 |
|       564 |  12876 | 51.15.0.0     |   1048576 |     85983232 |
|       567 |  35004 | 195.74.72.0   |   2097152 |    134217728 |
|       508 |      0 | 0.0.0.0       | 471859200 | 522110107648 |
|       509 |  50113 | 185.180.228.0 |    524288 |     38797312 |
|       569 |   3269 | 79.0.0.0      |   2621440 |   3968860160 |
|       564 |  15169 | 74.125.0.0    |  30408704 |   2760900608 |
|       564 |  60781 | 5.79.64.0     |   3145728 |   2136997888 |
|       567 |  52000 | 185.15.211.0  |   8388608 |  12767461376 |
|       509 |  12764 | 212.112.96.0  |    524288 |    797966336 |
|       564 |  36873 | 105.112.22.0  |   1048576 |   1533018112 |
|       567 |  49476 | 80.75.128.0   |   2097152 |    465567744 |
|       569 |   4837 | 112.192.0.0   |    524288 |    787480576 |
|       508 |  35362 | 95.158.32.0   |   1048576 |    335544320 |
|       569 |   4804 | 49.187.32.0   |    524288 |    793772032 |
|       564 |  37440 | 41.78.57.0    |   1048576 |   1512046592 |
|       564 |  54994 | 8.37.230.0    |   1048576 |     77594624 |
|       509 |      0 | 0.0.0.0       | 244318208 | 221750231040 |
|       567 |  35816 | 78.30.192.0   |   4194304 |    547356672 |
|       567 |   6640 | 65.151.188.0  |   2097152 |   3191865344 |
+-----------+--------+---------------+-----------+--------------+
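To put the unresolved share in perspective, something along these lines (a sketch against the same as_out table; MariaDB syntax) would show how much of each uplink's traffic gets no BGP resolution at all:

```sql
-- Fraction of bytes per outgoing interface that resolved to no prefix
-- (as_dst = 0); assumes the as_out table and columns shown above.
SELECT iface_out,
       ROUND(SUM(IF(as_dst = 0, bytes, 0)) / SUM(bytes), 3) AS zero_share
FROM as_out
GROUP BY iface_out
ORDER BY zero_share DESC;
```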


You could say that the routing table I announce to sfacctd is incomplete, but I announce all the prefixes my routers have: they route traffic to the uplinks according to that very table. My guess is that pmacct's BGP daemon ignores or "loses" some announcements. The routers sending the BGP feed are Juniper MX480 and MX80.
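If it would help narrow this down, I could dump the daemon's view of the BGP table and diff it against the routers' RIBs; something along these lines (a sketch, directive names as I understand them from the CONFIG-KEYS docs, please correct me):

```
! sketch: periodically dump the BGP table learnt from each peer
bgp_table_dump_file: /var/log/pmacct/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 300
```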

I'd be glad to help you track down this bug if you give me instructions on what to check next.



Paolo Lucente wrote on 2017-04-01 16:24:
Hi Stanislaw,

Is it possible that it is traffic hitting your own prefixes, i.e. prefixes
lying in the same ASN you are iBGP peering with pmacct (for which the
AS_PATH would be null and hence the as_dst would be null too)? You may
double-check by adding net_dst to your aggregation method and see for
which prefixes you get an as_dst zero - and evaluate whether that makes
sense or not. Or, if you switch to the GitHub master code (about to be
frozen and rolled out in April as 1.6.2), you have a new bgp_daemon_as
directive to define an ASN and hence allow you to set up an eBGP peering
with your router(s): this way you would see your own prefixes lying in
your ASN instead of zeroes.
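For example (a minimal sketch; 65000 stands in for your real ASN, and the directive is only in master / upcoming 1.6.2):

```
! eBGP peering sketch: bgp_daemon_as makes pmacct present its own ASN
bgp_daemon: true
bgp_daemon_ip: 10.7.10.7
bgp_daemon_as: 65000
```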

Paolo

On Thu, Mar 30, 2017 at 12:10:42PM +0300, Stanislaw wrote:
Hello,
I'm trying to set up sfacctd to account per-ASN traffic statistics
using pmacct 1.6.1 (the latest). It seems to work, but the topmost
position in the report is dst_asn 0.
Is it possible that pmacct can't handle a full BGP view, so that a
portion of the ASN<->prefix data is lost within the daemon?


I use pmacct mysql table version 6 with iface_in, iface_out fields
added.
MariaDB [pmacct]> SELECT iface_out, as_dst, SUM(bytes) AS bytes FROM
as_out GROUP BY iface_out, as_dst ORDER BY SUM(bytes) DESC LIMIT 20;
+-----------+--------+----------------+
| iface_out | as_dst | bytes          |
+-----------+--------+----------------+
|       508 |      0 | 14271156862976 |
|       509 |      0 |  8610211954688 |
|       570 |   6849 |  6350382530560 |
|       570 |  25229 |  6203280506880 |
|       570 |  15169 |  3872144621568 |
|       570 |  13188 |  3619018899456 |
|       569 |   4134 |  3502273396736 |
|       567 |  24940 |  2158440415232 |
|       570 |  21343 |  2101956657152 |
|       570 |  39608 |  2061838761984 |
|       570 |      0 |  2005833433088 |
|       570 |  13238 |  1953913880576 |
|       570 |  21219 |  1912485134336 |
|       569 |   3269 |  1716940570624 |
|       570 |  31148 |  1612703350784 |
|       570 |   6876 |  1525952118784 |
|       564 |  15169 |  1316007411712 |
|       570 |   3255 |  1223870398464 |
|       567 | 133774 |   984798167040 |
|       569 |   4837 |   978273566720 |
+-----------+--------+----------------+

As far as I understand, AS 0 means an "ASN which pmacct doesn't know
about", but that can't be the case here, as I announce the full view
from the routers (which actually route this traffic) to the daemon: if
the routers don't have a prefix in their BGP table, the traffic won't
be delivered at all (they have a 0.0.0.0/0 discard route). sFlow is
sent by a Juniper QFX5100 switch, and sFlow statistics are collected
only from the internet uplink interfaces.
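For completeness, the discard default on the routers is nothing more than (Junos sketch):

```
set routing-options static route 0.0.0.0/0 discard
```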

Here are my configs:

/etc/pmacct/sfacctd.conf:
debug: false
logfile: /var/log/sfacctd.log
pre_tag_map: /etc/pmacct/pre_tag_map
daemonize: false

sfacctd_port:           6345
sfacctd_as_new:         true
sfacctd_ip:             10.7.10.7
sfacctd_renormalize:    true
sfacctd_disable_checks: true

plugin_buffer_size:     20480
plugin_pipe_size:       10240000

sfacctd_net: bgp
sfacctd_as_new: bgp

bgp_daemon: true
bgp_daemon_msglog: false
bgp_peer_src_as_type: bgp
bgp_daemon_ip: 10.7.10.7
bgp_daemon_port: 179
bgp_agent_map: /etc/pmacct/bgp_agent_map


plugins: mysql[as_in], mysql[as_out]
aggregate[as_in]: src_as, in_iface
aggregate[as_out]: dst_as, out_iface
pre_tag_filter[as_in]: 1
pre_tag_filter[as_out]: 2

sql_table[as_in]: as_in
sql_table_version[as_in]: 6
sql_history_since_epoch[as_in]: true

sql_table[as_out]: as_out
sql_table_version[as_out]: 6
sql_history_since_epoch[as_out]: true


sql_locking_style: row
sql_history: 1h
sql_history_roundoff: h
sql_refresh_time: 3600
sql_optimize_clauses: true
sql_multi_values: 140000
sql_cache_entries: 140003

sql_host: 127.0.0.1
sql_db: pmacct
sql_user: pmacct
sql_passwd: passwd



/etc/pmacct/pre_tag_map:
! ingress traffic
! Traffic to one of routers MAC addresses is ingress
set_tag=1 ip=10.7.10.101 filter='ether dst 00:26:88:2a:bf:c0'
set_tag=1 ip=10.7.10.101 filter='ether dst a8:d0:e5:5f:ec:80'

! egress traffic
! Traffic from one of routers MAC addresses is egress
set_tag=2 ip=10.7.10.101 filter='ether src 00:26:88:2a:bf:c0'
set_tag=2 ip=10.7.10.101 filter='ether src a8:d0:e5:5f:ec:80'



/etc/pmacct/bgp_agent_map:
! BGP session with router 1
id=xx.xx.64.1 ip=10.7.10.101

! BGP session with router 2
id=xx.xx.82.1 ip=10.7.10.101

Thanks in advance!

_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
