Re: [pmacct-discussion] pmacct+clickhouse

2023-11-30 Thread Brian Solar
I use nfacctd -> Kafka -> clickhouse.

Sent from ProtonMail mobile

 Original Message 
On Nov 17, 2023, 2:09 PM, Sergey Gorshkov wrote:

> Hi Paolo! Do you have any best practices for installing pmacct+clickhouse?
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
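A minimal sketch of the ClickHouse end of that pipeline, using the Kafka table engine. The column names follow the JSON keys pmacct emits elsewhere in this thread; the broker address, topic, group, and table names are placeholders, not from the thread:

```sql
-- Kafka engine table: consumes pmacct's JSON records from the topic
CREATE TABLE flows_queue
(
    peer_ip_src String,
    ip_src      String,
    ip_dst      String,
    port_src    UInt16,
    port_dst    UInt16,
    ip_proto    String,
    packets     UInt64,
    bytes       UInt64
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'localhost:9092',
         kafka_topic_list  = 'pmacct.flows',
         kafka_group_name  = 'clickhouse_flows',
         kafka_format      = 'JSONEachRow';

-- Durable storage, plus a materialized view that moves rows across
CREATE TABLE flows AS flows_queue
ENGINE = MergeTree ORDER BY (ip_src, ip_dst);

CREATE MATERIALIZED VIEW flows_mv TO flows AS
SELECT * FROM flows_queue;
```

Queries then go against the `flows` table; the Kafka engine table is only a consumer.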


Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-22 Thread Brian Solar
use the named configuration feature:

kafka_topic[config_name]: netflow
kafka_broker_host[config_name]: 192.168.100.105
...
...
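Expanded into two collector configurations (one file per daemon; the names, ports, and broker address below are hypothetical), the named-plugin form looks like:

```
! nfacctd-a.conf
plugins: kafka[siteA]
kafka_topic[siteA]: netflow
kafka_broker_host[siteA]: 192.168.100.105
kafka_broker_port[siteA]: 9092
nfacctd_port: 2100

! nfacctd-b.conf
plugins: kafka[siteB]
kafka_topic[siteB]: netflow
kafka_broker_host[siteB]: 192.168.100.105
kafka_broker_port[siteB]: 9092
nfacctd_port: 2101
```

Both daemons then produce to the same 'netflow' topic, while the distinct plugin names should keep the writers distinguishable in the emitted records.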

‐‐‐ Original Message ‐‐‐
On Sunday, April 19, 2020 5:51 PM, Emanuel dos Reis Rodrigues 
 wrote:

> I see, I actually tried it before and then realized the writer_id was 
> changing based on the PID of nfacctd. Do you know which parameter 
> customizes the writer_id?
>
> Thanks !
>
> Best Regards,
> Emanuel
>
> On Sun, Apr 19, 2020 at 11:42 AM Brian Solar  wrote:
>
>> You already seem to have a solution, but to me the writer_id is what you 
>> want.  Change the name of the process in your configuration file.
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Wednesday, April 15, 2020 7:33 PM, Emanuel dos Reis Rodrigues 
>>  wrote:
>>
>>> Hey, I just realized it worked. I think I was a little behind on the 
>>> messages parked on my Kafka; now I can see the tag.
>>>
>>> Thank you so much for your help.
>>>
>>> On Wed, Apr 15, 2020 at 10:33 AM Emanuel dos Reis Rodrigues 
>>>  wrote:
>>>
>>>> I am using:
>>>>
>>>> NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)
>>>>
>>>> Arguments:
>>>>  '--enable-kafka' '--enable-jansson' 
>>>> 'JANSSON_CFLAGS=-I/usr/local/include/' 'JANSSON_LIBS=-L/usr/local/lib 
>>>> -ljansson' '--enable-l2' '--enable-ipv6' '--enable-64bit' 
>>>> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
>>>> '--enable-st-bins'
>>>>
>>>> Libs:
>>>> libpcap version 1.5.3
>>>> rdkafka 0.11.4
>>>> jansson 2.12
>>>>
>>>> I can upgrade it to a newer version and try again.
>>>>
>>>> On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:
>>>>
>>>>> Hey Emanuel,
>>>>>
>>>>> The config is correct and I did try your same config and that does work
>>>>> for me, ie.:
>>>>>
>>>>> $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 
>>>>> --topic pmacct.flows
>>>>> {"event_type": "purge", "tag": 1, [ .. ]}
>>>>>
>>>>> What version of the software are you using? Is it 1.7.4p1 (latest
>>>>> stable) or master code from GitHub? If so, is it possible an old running
>>>>> nfacctd process is reading the data instead of the newly configured one?
>>>>>
>>>>> Paolo
>>>>>
>>>>> On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues 
>>>>> wrote:
>>>>>> I tried, follow my config:
>>>>>>
>>>>>> kafka_topic: netflow
>>>>>> kafka_broker_host: 192.168100.105
>>>>>> kafka_broker_port: 9092
>>>>>> kafka_refresh_time: 1
>>>>>> #daemonize: true
>>>>>> plugins: kafka
>>>>>> nfacctd_port: 9995
>>>>>> post_tag: 1
>>>>>> aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
>>>>>> timestamp_end, src_port, dst_port, proto
>>>>>>
>>>>>>
>>>>>> I kept the peer_src_ip, but the tag one is not being posted to Kafka.
>>>>>>
>>>>>> {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
>>>>>> '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
>>>>>> 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
>>>>>> 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes': 
>>>>>> 260,
>>>>>> 'writer_id': 'default_kafka/75091'}
>>>>>>
>>>>>> Did I miss anything ?
>>>>>>
>>>>>>
>>>>>> Thanks !
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:
>>>>>>
>>>>>> >
>>>>>> > I may have skipped the important detail you need to add the 'tag' key 
>>>

Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-19 Thread Brian Solar
You already seem to have a solution, but to me the writer_id is what you want.  
Change the name of the process in your configuration file.
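In config terms, this can be sketched as follows (the names are hypothetical; check the directives against your version's CONFIG-KEYS):

```
! give this daemon its own identity; the writer_id field seen in the
! Kafka output (e.g. 'default_kafka/<pid>' earlier in this thread)
! reflects the core process name and the plugin name
core_proc_name: nfacctd_siteA
plugins: kafka[siteA]
```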

‐‐‐ Original Message ‐‐‐
On Wednesday, April 15, 2020 7:33 PM, Emanuel dos Reis Rodrigues 
 wrote:

> Hey, I just realized it worked. I think I was a little behind on the 
> messages parked on my Kafka; now I can see the tag.
>
> Thank you so much for your help.
>
> On Wed, Apr 15, 2020 at 10:33 AM Emanuel dos Reis Rodrigues 
>  wrote:
>
>> I am using:
>>
>> NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)
>>
>> Arguments:
>>  '--enable-kafka' '--enable-jansson' 'JANSSON_CFLAGS=-I/usr/local/include/' 
>> 'JANSSON_LIBS=-L/usr/local/lib -ljansson' '--enable-l2' '--enable-ipv6' 
>> '--enable-64bit' '--enable-traffic-bins' '--enable-bgp-bins' 
>> '--enable-bmp-bins' '--enable-st-bins'
>>
>> Libs:
>> libpcap version 1.5.3
>> rdkafka 0.11.4
>> jansson 2.12
>>
>> I can upgrade it to a newer version and try again.
>>
>> On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:
>>
>>> Hey Emanuel,
>>>
>>> The config is correct and I did try your same config and that does work
>>> for me, ie.:
>>>
>>> $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
>>> pmacct.flows
>>> {"event_type": "purge", "tag": 1, [ .. ]}
>>>
>>> What version of the software are you using? Is it 1.7.4p1 (latest
>>> stable) or master code from GitHub? If so, is it possible an old running
>>> nfacctd process is reading the data instead of the newly configured one?
>>>
>>> Paolo
>>>
>>> On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues wrote:
 I tried, follow my config:

 kafka_topic: netflow
 kafka_broker_host: 192.168100.105
 kafka_broker_port: 9092
 kafka_refresh_time: 1
 #daemonize: true
 plugins: kafka
 nfacctd_port: 9995
 post_tag: 1
 aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
 timestamp_end, src_port, dst_port, proto


 I kept the peer_src_ip, but the tag one is not being posted to Kafka.

 {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
 '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes': 260,
 'writer_id': 'default_kafka/75091'}

 Did I miss anything ?


 Thanks !



 On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:

 >
 > I may have skipped the important detail you need to add the 'tag' key to
 > your 'aggregate' line in the config, my bad. This is in addition to, say,
 > 'post_tag: 1' to identify collector 1. Let me know how it goes.
 >
 > Paolo
 >
 > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues 
 > wrote:
 > > Thank you man, I did this test but I did not see the ID being pushed
 > > along with the NetFlow info to the Kafka topic. Where would the
 > > information show up?
 > >
 > >
 > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:
 > >
 > > >
 > > > Hi Emanuel,
 > > >
 > > > Apologies, I did not get that you wanted an ID for the collector. The
 > > > simplest way of achieving that is 'post_tag', as you just have to
 > > > supply a number as the ID; pre_tag_map expects a map and may be better
 > > > reserved for more complex use-cases.
 > > >
 > > > Paolo
 > > >
 > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues
 > wrote:
 > > > > Thank you for your help. Appreciate it !
 > > > >
 > > > > See, I did use it for testing after I sent this email. However, the
 > > > > IP shown there was the IP of my nfacctd machine, the collector
 > > > > itself, not the exporter.
 > > > >
 > > > > peer_src_ip  : IP address or identificator of
 > > > telemetry
 > > > > exporting device
 > > > >
 > > > > In fact, it may have to do with the fact that I currently have an
 > > > > SSH tunnel with socat to the remote machine in order to collect the
 > > > > data. This may be the reason, which is definitely not an ordinary
 > > > > condition. :)
 > > > >
 > > > > I am wondering if I could use this one to include a different tag
 > > > > per process/collector, but have not yet figured out how. Any
 > > > > thoughts?
 > > > >
 > > > > label: String label, ie. as result of
 > > > > pre_tag_map evaluation
 > > > >
 > > > >
 > > > > Thank you again.
 > > > >
 > > > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente 
 > wrote:
 > > > >
 > > > > >
 > > > > > Hi Emanuel,
 > > > > >
 > > > > > I think

Re: [pmacct-discussion] Making an RPM out of source code

2019-03-08 Thread Brian Solar
mitives.lst.example
/usr/share/pmacct/examples/probe_netflow.conf.example
/usr/share/pmacct/examples/probe_sflow.conf.example
/usr/share/pmacct/examples/sampling.map.example
/usr/share/pmacct/examples/tee_receivers.lst.example
/usr/share/pmacct/sql
/usr/share/pmacct/sql/README.64bit
/usr/share/pmacct/sql/README.GeoIP
/usr/share/pmacct/sql/README.IPv6
/usr/share/pmacct/sql/README.cos
/usr/share/pmacct/sql/README.custom_primitives
/usr/share/pmacct/sql/README.etype
/usr/share/pmacct/sql/README.export_proto
/usr/share/pmacct/sql/README.iface
/usr/share/pmacct/sql/README.label
/usr/share/pmacct/sql/README.mask
/usr/share/pmacct/sql/README.mpls
/usr/share/pmacct/sql/README.mysql
/usr/share/pmacct/sql/README.nat
/usr/share/pmacct/sql/README.pgsql
/usr/share/pmacct/sql/README.sampling
/usr/share/pmacct/sql/README.sqlite3
/usr/share/pmacct/sql/README.tag2
/usr/share/pmacct/sql/README.timestamp
/usr/share/pmacct/sql/README.tunnel
/usr/share/pmacct/sql/pmacct-create-db.pgsql
/usr/share/pmacct/sql/pmacct-create-db_bgp_v1.mysql
/usr/share/pmacct/sql/pmacct-create-db_v1.mysql
/usr/share/pmacct/sql/pmacct-create-db_v2.mysql
/usr/share/pmacct/sql/pmacct-create-db_v3.mysql
/usr/share/pmacct/sql/pmacct-create-db_v4.mysql
/usr/share/pmacct/sql/pmacct-create-db_v5.mysql
/usr/share/pmacct/sql/pmacct-create-db_v6.mysql
/usr/share/pmacct/sql/pmacct-create-db_v7.mysql
/usr/share/pmacct/sql/pmacct-create-db_v8.mysql
/usr/share/pmacct/sql/pmacct-create-db_v9.mysql
/usr/share/pmacct/sql/pmacct-create-table_bgp_v1.pgsql
/usr/share/pmacct/sql/pmacct-create-table_bgp_v1.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v1.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v1.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v2.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v2.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v3.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v3.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v4.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v4.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v5.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v5.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v6.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v6.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v7.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v7.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v8.sqlite3
/usr/share/pmacct/sql/pmacct-create-table_v9.pgsql
/usr/share/pmacct/sql/pmacct-create-table_v9.sqlite3
/usr/share/pmacct/sql/pmacct-grant-db.mysql

‐‐‐ Original Message ‐‐‐
On Thursday, March 7, 2019 7:53 AM, Brian Solar  wrote:

> Here is 1.7.2
>
> https://copr.fedorainfracloud.org/coprs/traivor/pmacct
>
>  Original Message 
> On Mar 7, 2019, 5:12 AM, Edvinas Kairys < edvinas.em...@gmail.com> wrote:
>
>> Hello,
>>
>> I'm trying to make an RPM from the latest version of pmacct. I've now run 
>> into the problem of gathering all the required files to pack into the RPM 
>> (using the FPM software).
>>
>> Looking at 'make install', there are lots of files in scattered locations. 
>> Is there a list of them, to make sure they all get packed into the RPM?
>>
>> Thanks

Re: [pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-03-08 Thread Brian Solar

The culprit was actually the template file. It appears to block while writing, 
and it's really slow. When I removed the configuration option, one process 
could do what I could not accomplish using 80 processes that each used a 
template file.

Any consideration on a different implementation?

Writing it out on a configurable interval would be a simple improvement.

When load-balancing, particularly with SO_REUSEPORT, it would be nice to allow 
them to communicate the template set to each other.  Perhaps another use for 
zeromq?

Brian



‐‐‐ Original Message ‐‐‐
On Sunday, February 24, 2019 5:02 PM, Paolo Lucente  wrote:

>
>
> Hi Brian,
>
> You are most probably looking for this:
>
> https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
>
> Should that not work, ie. too many input flows for the available
> resources, you have a couple load-balancing strategies possible:
> one is to configure a replicator (tee plugin, see in QUICKSTART).
>
> Paolo
>
> On Sun, Feb 24, 2019 at 05:31:55PM +, Brian Solar wrote:
>
> > Is there a way to adjust the UDP buffer receive size ?
> > Are there any other indications of nfacctd not keeping up?
> > cat /proc/net/udp |egrep drops\|0835
> > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid 
> > timeout inode ref pointer drops
> > 52366: :0835 : 07 :00034B80 00: 
> >  0 0 20175528 2 89993febd940 7495601
> > 7495601 drops w/ a buffer of 0x0034B80 or 214528
> > sysctl -a |fgrep mem
> > net.core.optmem_max = 20480
> > net.core.rmem_default = 212992
> > net.core.rmem_max = 2147483647
> > net.core.wmem_default = 212992
> > net.core.wmem_max = 212992
> > net.ipv4.igmp_max_memberships = 20
> > net.ipv4.tcp_mem = 9249771 12333028 18499542
> > net.ipv4.tcp_rmem = 4096 87380 6291456
> > net.ipv4.tcp_wmem = 4096 16384 4194304
> > net.ipv4.udp_mem = 9252429 12336573 18504858
> > net.ipv4.udp_rmem_min = 4096
> > net.ipv4.udp_wmem_min = 4096
> > vm.lowmem_reserve_ratio = 256 256 32
> > vm.memory_failure_early_kill = 0
> > vm.memory_failure_recovery = 1
> > vm.nr_hugepages_mempolicy = 0
> > vm.overcommit_memory = 0
>




Re: [pmacct-discussion] Making an RPM out of source code

2019-03-07 Thread Brian Solar
Here is 1.7.2

https://copr.fedorainfracloud.org/coprs/traivor/pmacct

 Original Message 
On Mar 7, 2019, 5:12 AM, Edvinas Kairys wrote:

> Hello,
>
> I'm trying to make an RPM from the latest version of pmacct. I've now run 
> into the problem of gathering all the required files to pack into the RPM 
> (using the FPM software).
>
> Looking at 'make install', there are lots of files in scattered locations. 
> Is there a list of them, to make sure they all get packed into the RPM?
>
> Thanks

Re: [pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-03-04 Thread Brian Solar
Is there a way to send the device's decoded system time and uptime to Kafka?

Are there protections against flows that stop before they start, or other 
time-format errors?

I have yet to track down actual packets, but I've seen unusual timestamps and 
very old timestamps in Kafka. This makes nfacctd generate millions of entries 
at times.

FortiGate devices seem to report this way at times. I haven't noticed it on 
other devices yet.

 Original Message 
On Feb 25, 2019, 9:28 AM, Paolo Lucente wrote:

Hi Brian,

Thanks very much for the nginx config, definitely something to add to the
docs as a possible option. QN reads 'Queries Number' (inherited from the
SQL plugins, hence the 'queries' wording); the first number is how many
were sent to the backend, the second is how many should have been sent as
part of the purge event.

They should normally be aligned. In the case of NetFlow/IPFIX, among the
different possibilities, a mismatch may reveal time-sync issues between the
exporters and the collector; the easiest thing to try is to use the arrival
time at the collector as the timestamp in pmacct (versus the start time of
the flows) by setting nfacctd_time_new to true.
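As a one-line sketch of that toggle in the collector configuration:

```
! timestamp records with the arrival time at the collector instead of
! the flow start time reported by the exporter
nfacctd_time_new: true
```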

Paolo

On Mon, Feb 25, 2019 at 03:23:42AM +0000, Brian Solar wrote:
>
> Thanks for the response Paolo. I am using nginx to stream load balance (see 
> config below).
>
> Another quick question on the Kafka plugin. What Does the QN portion of the 
> purging cache end line indicate/mean?
>
>
> 2019-02-25T03:05:04Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
> 387033, QN: 12786/13291, ET: 1) ***
>
> 2019-02-25T03:16:22Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
> 150221, QN: 426663/426663, ET: 19) ***
>
> # Load balance UDP-based FLOW traffic across two servers
> stream {
>
> log_format combined '$remote_addr - - [$time_local] $protocol $status 
> $bytes_sent $bytes_received $session_time "$upstream_addr"';
>
> access_log /var/log/nginx/stream-access.log combined;
>
> upstream flow_upstreams {
> #hash $remote_addr consistent;
> server 10.20.25.11:2100;
> #
> server 10.20.25.12:2100;
>
> }
>
> server {
> listen 2201 udp;
> proxy_pass flow_upstreams;
> #proxy_timeout 1s;
> proxy_responses 0;
> # must have user: root in main config
> proxy_bind $remote_addr transparent;
> error_log /var/log/nginx/stream-flow-err.log;
> }
> }
>
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On Sunday, February 24, 2019 5:02 PM, Paolo Lucente  wrote:
>
> >
> >
> > Hi Brian,
> >
> > You are most probably looking for this:
> >
> > https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
> >
> > Should that not work, ie. too many input flows for the available
> > resources, you have a couple load-balancing strategies possible:
> > one is to configure a replicator (tee plugin, see in QUICKSTART).
> >
> > Paolo
> >
> > On Sun, Feb 24, 2019 at 05:31:55PM +, Brian Solar wrote:
> >
> > > Is there a way to adjust the UDP buffer receive size ?
> > > Are there any other indications of nfacctd not keeping up?
> > > cat /proc/net/udp |egrep drops\|0835
> > > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt 
> > > uid timeout inode ref pointer drops
> > > 52366: :0835 : 07 :00034B80 00: 
> > >  0 0 20175528 2 89993febd940 7495601
> > > 7495601 drops w/ a buffer of 0x0034B80 or 214528
> > > sysctl -a |fgrep mem
> > > net.core.optmem_max = 20480
> > > net.core.rmem_default = 212992
> > > net.core.rmem_max = 2147483647
> > > net.core.wmem_default = 212992
> > > net.core.wmem_max = 212992
> > > net.ipv4.igmp_max_memberships = 20
> > > net.ipv4.tcp_mem = 9249771 12333028 18499542
> > > net.ipv4.tcp_rmem = 4096 87380 6291456
> > > net.ipv4.tcp_wmem = 4096 16384 4194304
> > > net.ipv4.udp_mem = 9252429 12336573 18504858
> > > net.ipv4.udp_rmem_min = 4096
> > > net.ipv4.udp_wmem_min = 4096
> > > vm.lowmem_reserve_ratio = 256 256 32
> > > vm.memory_failure_early_kill = 0
> > > vm.memory_failure_recovery = 1
> > > vm.nr_hugepages_mempolicy = 0
> > > vm.overcommit_memory = 0
> >

Re: [pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-02-24 Thread Brian Solar

Thanks for the response Paolo.  I am using nginx to stream load balance (see 
config below).

Another quick question on the Kafka plugin. What Does the QN portion of the 
purging cache end line indicate/mean?


2019-02-25T03:05:04Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
387033, QN: 12786/13291, ET: 1) ***

2019-02-25T03:16:22Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
150221, QN: 426663/426663, ET: 19) ***

# Load balance UDP-based FLOW traffic across two servers
stream {

   log_format combined '$remote_addr - - [$time_local] $protocol $status 
$bytes_sent $bytes_received $session_time "$upstream_addr"';

   access_log /var/log/nginx/stream-access.log combined;

upstream flow_upstreams {
#hash $remote_addr consistent;
server 10.20.25.11:2100;
#
server 10.20.25.12:2100;

}

server {
listen 2201 udp;
proxy_pass flow_upstreams;
#proxy_timeout 1s;
proxy_responses 0;
# must have user: root in main config
proxy_bind $remote_addr transparent;
error_log /var/log/nginx/stream-flow-err.log;
}
}




‐‐‐ Original Message ‐‐‐
On Sunday, February 24, 2019 5:02 PM, Paolo Lucente  wrote:

>
>
> Hi Brian,
>
> You are most probably looking for this:
>
> https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
>
> Should that not work, ie. too many input flows for the available
> resources, you have a couple load-balancing strategies possible:
> one is to configure a replicator (tee plugin, see in QUICKSTART).
>
> Paolo
>
> On Sun, Feb 24, 2019 at 05:31:55PM +, Brian Solar wrote:
>
> > Is there a way to adjust the UDP buffer receive size ?
> > Are there any other indications of nfacctd not keeping up?
> > cat /proc/net/udp |egrep drops\|0835
> > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid 
> > timeout inode ref pointer drops
> > 52366: :0835 : 07 :00034B80 00: 
> >  0 0 20175528 2 89993febd940 7495601
> > 7495601 drops w/ a buffer of 0x0034B80 or 214528
> > sysctl -a |fgrep mem
> > net.core.optmem_max = 20480
> > net.core.rmem_default = 212992
> > net.core.rmem_max = 2147483647
> > net.core.wmem_default = 212992
> > net.core.wmem_max = 212992
> > net.ipv4.igmp_max_memberships = 20
> > net.ipv4.tcp_mem = 9249771 12333028 18499542
> > net.ipv4.tcp_rmem = 4096 87380 6291456
> > net.ipv4.tcp_wmem = 4096 16384 4194304
> > net.ipv4.udp_mem = 9252429 12336573 18504858
> > net.ipv4.udp_rmem_min = 4096
> > net.ipv4.udp_wmem_min = 4096
> > vm.lowmem_reserve_ratio = 256 256 32
> > vm.memory_failure_early_kill = 0
> > vm.memory_failure_recovery = 1
> > vm.nr_hugepages_mempolicy = 0
> > vm.overcommit_memory = 0
>

[pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-02-24 Thread Brian Solar
Is there a way to adjust the UDP buffer receive size ?

Are there any other indications of nfacctd not keeping up?

cat /proc/net/udp | egrep 'drops|0835'

  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode ref pointer drops
 52366: :0835 : 07 :00034B80 00:  0 0 20175528 2 89993febd940 7495601

7495601 drops, with a buffer (rx_queue) of 0x34B80, i.e. 215936 bytes

sysctl -a | fgrep mem

net.core.optmem_max = 20480
net.core.rmem_default = 212992
net.core.rmem_max = 2147483647
net.core.wmem_default = 212992
net.core.wmem_max = 212992
net.ipv4.igmp_max_memberships = 20
net.ipv4.tcp_mem = 9249771 12333028 18499542
net.ipv4.tcp_rmem = 4096 87380 6291456
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.udp_mem = 9252429 12336573 18504858
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096
vm.lowmem_reserve_ratio = 256 256 32
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.nr_hugepages_mempolicy = 0
vm.overcommit_memory = 0
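A sketch of what the QUICKSTART pointer earlier in the thread amounts to: raise the kernel's receive-buffer ceiling, then ask nfacctd for a bigger socket buffer. The 64 MB figure is illustrative, and nfacctd_pipe_size should be verified against your version's CONFIG-KEYS:

```
! kernel side first: raise the ceiling so a bigger socket buffer can
! actually be granted, e.g.  sysctl -w net.core.rmem_max=67108864
!
! then, in the nfacctd configuration, request a larger UDP receive
! buffer (in bytes) for the collector socket
nfacctd_pipe_size: 67108864
```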