Re: [pmacct-discussion] Problem with running pmacct to monitor traffic in two interface

2006-05-23 Thread Jamie Wilkinson
This one time, at band camp, zulkarnain wrote:
  syslog:daemon
  interface: eth0,eth1

Change this to

  interface: eth0

and copy it to pmacct.eth0.conf

and make another copy pmacct.eth1.conf and set

  interface: eth1

Then run two pmacctds:

 pmacctd -f pmacct.eth0.conf
 pmacctd -f pmacct.eth1.conf

Or, don't specify an interface at all in the config file, and specify the
interface on the command line:

 pmacctd -i eth0 -f pmacct.conf
 pmacctd -i eth1 -f pmacct.conf

You'll have to check the manual to make sure the options are correct; I'm
just going from memory.
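
For example, pmacct.eth0.conf would end up as something like this, with the
rest of your existing options left as they are:

  syslog: daemon
  interface: eth0
  ! ...the rest of your existing configuration...

and pmacct.eth1.conf identical apart from the interface line.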



Re: [pmacct-discussion] Problem with running pmacct to monitor traffic in two interface

2006-05-22 Thread Jamie Wilkinson
This one time, at band camp, zulkarnain wrote:
Hi all,

  I searched the pmacct-discussion archives for how to configure pmacct to
  monitor traffic on two interfaces, unfortunately with no success.

  Probably somebody can point me in the right direction on how to set up
  pmacct to monitor traffic on two interfaces and store the data with the
  MySQL plugin.

You need to run a separate pmacct on each interface.

The Debian package is set up to run multiple instances by naming the
configuration files after the interfaces you want to run on, if you choose
to.

If you're not using Debian, then I suggest you do something similar and
modify your startup scripts to start two instances with different config
files.
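
If it helps, something as simple as this in an init script does the trick,
assuming one config file per interface under /etc/pmacct (names and paths
here are only an example):

 # start one pmacctd per interface
 for iface in eth0 eth1; do
     pmacctd -D -f /etc/pmacct/pmacctd-$iface.conf
 done

(-D should background each daemon; check the man page to be sure.)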



Re: [pmacct-discussion] pmacct 0.10.1 released !

2006-04-19 Thread Jamie Wilkinson
This one time, at band camp, Paolo Lucente wrote:
VERSION.
0.10.1

I've just uploaded 0.10.1 to Debian unstable.



Re: [pmacct-discussion] PostgreSQL performance

2006-04-19 Thread Jamie Wilkinson
This one time, at band camp, Sven Anderson wrote:
Hi,

I'm just testing pmacct on OpenBSD 3.7. First I used MySQL as the data
backend. Even after several days of capturing into the same table I had no
performance problems. But since the MySQL backend uses a string for
ip_proto and has no IP address type, I decided to switch to PostgreSQL.

But now I have massive performance problems. After one day, when there are
500 000 to 700 000 rows in the table, it gets so slow that it cannot store
the data fast enough any more, resulting in a lot of processes like

22758 ??  I   0:00.00 pmacctd: PostgreSQL Plugin -- DB Writer
[default] (pmacctd)

and

20427 ??  I   0:00.03 postmaster: pmacct pmacct [local] LOCK TABLE
waiting (postgres)

Is there a tuning problem, or is PostgreSQL known to be not as fast as
MySQL? I thought maybe some kind of rollback journal gets written, which
MySQL doesn't do, or the hash tables are not indexed correctly? Any ideas?

What version of pmacct are you using?

0.10.0 has patches to remove the LOCK TABLE if you're in insert-only mode,
which I recommend.

I also remove the index from the table to speed up the inserts, and
partition the data into a new table per day so that each table never grows
too big to manage.
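
From memory, the relevant bits of configuration look roughly like this --
check CONFIG-KEYS for the exact key names and for the dynamic table name
syntax, which I may be misremembering:

  ! skip UPDATEs entirely (and with 0.10.0, the LOCK TABLE as well)
  sql_dont_try_update: true
  sql_history: 1h
  sql_history_roundoff: h
  ! one table per day, so no single table grows too big to manage
  sql_table: acct_%Y%m%d

You either pre-create each day's table yourself or point sql_table_schema at
a schema file so the plugin can create them as it goes.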



[pmacct-discussion] public cvs?

2006-03-01 Thread Jamie Wilkinson
Paolo,

Is there a publicly accessible CVS repository for pmacct?


[pmacct-discussion] postgresql connection errors in 0.9.1

2005-08-18 Thread Jamie Wilkinson
So I'm testing out 0.9.1, and have a simple config file -- similar to my old
live config but tweaked so as not to interfere with the old daemon:

pidfile: /var/run/pmacctd.test.pid
debug: true
aggregate: src_host,dst_host
networks_file: /etc/pmacct/networks
pcap_filter: vlan and ( net 202.4.224.0/20 or net 203.98.86/24 ) and not ((src net 202.4.224.0/20 or src net 203.98.86/24 ) and ( dst net 202.4.224.0/20 or dst net 203.98.86/24 ) )
interface: eth1
plugins: pgsql
sql_host: localhost
sql_passwd: x
sql_table: acct_test
sql_table_version: 4
sql_refresh_time: 60
sql_history: 1m
sql_recovery_logfile: /var/lib/pmacct/recovery.test
sql_dont_try_update: true
sql_cache_entries: 15485863

I run it as ./pmacctd-0.9.1 -f ./pmacct-test.conf and watch the console
output.

Here's what happened when I ran it for a while, sent some SIGUSR1 for kicks,
and then ^C'd it; notice how the PostgreSQL connection failed:

corsair:~# ./pmacctd-0.9.1 -d -f ./pmacct-test.conf
OK ( default/core ): link type is: 1
WARN ( default/core ): eth1: no IPv4 address assigned
INFO ( default/pgsql ): 111616 bytes are available to address shared memory segment; buffer size is 64 bytes.
INFO ( default/pgsql ): Trying to allocate a shared memory segment of 1785856 bytes.
DEBUG ( /etc/pmacct/networks ): (networks table IPv4) net: ca04e000, mask: fffff000
DEBUG ( /etc/pmacct/networks ): (networks table IPv4) net: cb625600, mask: ffffff00
(1124333023) 368485 packets received by filter
(1124333023) 2239 packets dropped by kernel
(1124333220) 389396 packets received by filter
(1124333220) 0 packets dropped by kernel
( default/pgsql ) *** Purging PGSQL queries queue ***

81581 packets received by filter
0 packets dropped by kernel
( default/pgsql ) *** Purging cache - START ***
ALERT ( default/pgsql ): primary PostgreSQL server failed.
( default/pgsql ) *** Purging cache - END (QN: 0, ET: 0) ***

At this point, there's no recovery.test logfile, which worries me: where did
the packets go?  A count query on acct_test, a version 4 schema table I've
just created from the docs, returns 0 rows.
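
The count itself was nothing fancy, roughly the following; the database and
user name here are just placeholders, use whatever the daemon is configured
with:

  psql -h localhost -U pmacct pmacct -c 'SELECT COUNT(*) FROM acct_test;'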

Browsing the source code tells me this is a generic failure error, and it
could have happened either when locking or during a query; unfortunately
there are no details on what actually happened.

Interestingly, I can get some values in QN if I remove the sql_cache_entries
value from the config; the connection then fails almost immediately:

OK ( default/core ): link type is: 1
WARN ( default/core ): eth1: no IPv4 address assigned
INFO ( default/pgsql ): 111616 bytes are available to address shared memory segment; buffer size is 64 bytes.
INFO ( default/pgsql ): Trying to allocate a shared memory segment of 1785856 bytes.
DEBUG ( /etc/pmacct/networks ): (networks table IPv4) net: ca04e000, mask: fffff000
DEBUG ( /etc/pmacct/networks ): (networks table IPv4) net: cb625600, mask: ffffff00
( default/pgsql ) *** Purging cache - START ***
ALERT ( default/pgsql ): primary PostgreSQL server failed.
( default/pgsql ) *** Purging cache - END (QN: 67, ET: 0) ***

30298 packets received by filter
0 packets dropped by kernel
( default/pgsql ) *** Purging PGSQL queries queue ***
( default/pgsql ) *** Purging cache - START ***
ALERT ( default/pgsql ): primary PostgreSQL server failed.
( default/pgsql ) *** Purging cache - END (QN: 128, ET: 0) ***

... but a second run of that didn't try to write straight away.

Anyway, any ideas what might be going on, or how I could get some more info?


[pmacct-discussion] excessive plugin_pipe_size

2005-08-16 Thread Jamie Wilkinson
The FAQ-0.9.0 on the website mentions operating system limits; specifically,
on Linux the value in /proc/sys/net/core/rmem_max limits the size of
plugin_pipe_size.

What does pmacctd do if plugin_pipe_size is set greater than {r,w}mem_max?

Does it silently round plugin_pipe_size down to the value of rmem_max, or
does it just fail?
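
For anyone else hitting this on Linux, the limits can be inspected and raised
with the usual sysctls; the 8MB value here is just an example:

 # current kernel limits on socket buffer sizes
 cat /proc/sys/net/core/rmem_max /proc/sys/net/core/wmem_max
 # raise both so a larger plugin_pipe_size can actually take effect
 sysctl -w net.core.rmem_max=8388608
 sysctl -w net.core.wmem_max=8388608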


Re: [pmacct-discussion] stamp_inserted and sql_history

2005-08-13 Thread Jamie Wilkinson
This one time, at band camp, Paolo Lucente wrote:
On Sat, Aug 13, 2005 at 02:14:00AM +1000, Jamie Wilkinson wrote:

 Ok. Does this mean that unless the config options 'sql_history' and
 'sql_history_roundoff' exist, pmacctd will not write timestamps to the
 database?

yes.

Thanks for the clarification.

 I've done so, but I've also added these two config options back to my config
 file, and I'm seeing a lot of 'We are missing data' errors in the syslog.

Such an error shouldn't be related in any way to the stamps. It signals that
the shared memory segment between the Core Process (which collects packets
from the network) and the Plugin (which writes flows into the DB) is full. (*)

It's very likely that you have not enabled buffering. Try adding the
following two lines to your configuration (then tune the parameters to fit
your scenario):

===
plugin_pipe_size: 8192000
plugin_buffer_size: 4096
===

OK, I already have a plugin_pipe_size, but no plugin_buffer_size.  Anyway,
there were only a couple of 'We are missing data' messages at startup, so I
suspect they are transient.
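
If I've understood the two knobs correctly, plugin_pipe_size divided by
plugin_buffer_size gives the number of buffers that can sit between the core
process and the plugin, so with the values above:

  8192000 / 4096 = 2000 buffers

which should be plenty of headroom here.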

Thanks.