> On 25 Jun 2018, at 11:54, Alex K wrote:
>
> Hi all,
>
> I have a setup with uacctd monitoring traffic of several interfaces through
> NFLOG.
> With uacctd stopped I see that the server (a relatively small device with 4
> GB of RAM) consumes 450MB of RAM. Once I start uacctd the mem usage
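For readers following along, a setup like the one described above is typically wired up along these lines (the interface names, NFLOG group number, and config path below are illustrative, not taken from the post):

```shell
# Mirror traffic from a monitored interface into NFLOG group 2
# (interface name and group number are examples; requires root)
iptables -A INPUT  -i eth0 -j NFLOG --nflog-group 2
iptables -A OUTPUT -o eth0 -j NFLOG --nflog-group 2

# Minimal matching uacctd configuration sketch, e.g. /etc/pmacct/uacctd.conf:
#   uacctd_group: 2
#   plugins: mysql
```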
Let me change the posting style... :)
On Mon, Jun 25, 2018 at 3:51 PM, Dariush Marsh-Mossadeghi <
dari...@gravitas.co.uk> wrote:
> OK, so we’re moving from bottom-posting to top-posting… that’ll make it
> interesting for other readers ;-)
>
> The output of free doesn’t look desperate, but it is
Thanx for the reply.
The output of free is the following:

    $ free
                  total        used        free      shared  buff/cache   available
    Mem:        4046572     2832576      152012      784240     1061984      204248
    Swap:       3906556     1086080     2820476
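As a quick sanity check on those figures (all in KiB): used + free + buff/cache should account for the Mem total, while "available" is the kernel's estimate of what can still be handed out without swapping.

```shell
# Figures taken from the `free` output above, in KiB
used=2832576; free_kib=152012; cache=1061984
echo $(( used + free_kib + cache ))   # prints 4046572, the Mem total
```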
While it is as below when stopped:
OK, so we’re moving from bottom-posting to top-posting… that’ll make it
interesting for other readers ;-)
The output of free doesn’t look desperate, but it is starting to look a bit tight.
You’ve got about a gig of buffer/cache, which the kernel will evict if it needs it.
You’ve got 200M of available memory.
OK, so it’s effectively an embedded-system scenario, with a fixed hardware
config, or similar.
No silver bullets or one-liner fixes here :-\
You’ve a number of options to slim down your memory footprint:
- Strip Debian of all the packages you don’t need, learn a lot about kernel tuning, and tune
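A starting point for the first option above is surveying what is actually installed; a common way to spot the heaviest packages on a Debian system (the output will of course vary per machine) is:

```shell
# List the 20 largest installed packages by Installed-Size (in KiB)
dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n | tail -20
```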
Hello,
If there is no more RAM available, I would test the sqlite3 plugin, as
sqlite3 is better suited to limited-resource usage. You will possibly
need to change your workflow to export sqlite3 files and load them
somewhere else, but it should be a lot cheaper memory-wise.
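To illustrate the suggestion: a pmacct sqlite3 export can be queried wherever it is shipped using nothing more than the sqlite3 CLI. The table and column names below follow pmacct's default 'acct' schema, but check your own sql_table settings; the data here is made up for the demo.

```shell
# Create a throwaway db mimicking a pmacct sqlite3 export, then aggregate it
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE acct (ip_src TEXT, ip_dst TEXT, bytes INT);
  INSERT INTO acct VALUES ('10.0.0.1','10.0.0.2',100),('10.0.0.1','10.0.0.2',50);
  SELECT ip_src, ip_dst, SUM(bytes) FROM acct GROUP BY ip_src, ip_dst;"
rm -f "$db"
```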
This one is swapping.
Thanks Dariush. Appreciate your feedback.
I was testing several stripped-down kernels, compiling them and removing most
of the unused modules. The gain I had was in the range of a few MB.
It seems I have to find a more elegant approach to the uacctd configuration,
since with the current setup I am loading
On Mon, Jun 25, 2018 at 6:49 PM, Rasto Rickardt wrote:
> Hello,
>
> If there is no more RAM available, I would test the sqlite3 plugin, as
> sqlite3 is better suited to limited-resource usage. You will possibly
> need to change your workflow to export sqlite3 files and load them
> somewhere else, but
Rasto's comments got me thinking…
Not being privy to your application and its architecture, this may not work for
you at all.
I’ve had success in the past using rabbitmq to offload flow logs.
We used it to deliver netflow to ELK stacks running on AWS, and it worked
really well.
It has the
I have no experience with RabbitMQ, but I've seen it around in high-volume
data transfer scenarios.
A message queuing approach sounds more resilient to me but this will need
consideration at a later point. Thanx though for the thoughts.
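For what it's worth, pmacct ships an amqp plugin, so the offload described above can be done without extra glue. A config sketch might look like the following (the host, exchange, and routing-key values are placeholders):

```
! uacctd.conf sketch: ship aggregates to RabbitMQ instead of MySQL
! (amqp_* values below are placeholders)
plugins: amqp
amqp_host: rabbitmq.example.net
amqp_exchange: pmacct
amqp_routing_key: acct
```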
What I currently do is: iptables -> uacctd -> mysql ->