--On 30 October 2012 19:51 +0200 Konstantin Belousov kostik...@gmail.com
wrote:
I suggest to take a look at where the actual memory goes.
Start with procstat -v.
Ok, running that for the milter PID I seem to be able to see smallish
chunks used for things like 'libmilter.so', and
Hi!
This question is about the Inactive queue and the swap layer in FreeBSD's VM
management system. As a test, I ran dd (to push the UFS cache into the Inactive
queue), and I got this:
1132580 wire
896796 act
5583964 inact
281852 cache
112252 free
836960 buf
in swap: 20M
That looks good. Now let's run a program like:
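As an aside (my own illustration, not from the thread): the per-queue figures quoted above are easiest to compare once parsed into a table. A minimal sketch, assuming the numbers are the kilobyte counts top(1) reports for each queue:

```python
# Hypothetical helper (not from the thread): parse the top(1)-style
# "SIZE name" memory figures quoted above into a dict of kilobytes.

def parse_mem(lines):
    """Map queue name -> size in KB from 'SIZE name' pairs."""
    stats = {}
    for line in lines:
        size, name = line.split()
        stats[name] = int(size)
    return stats

top_output = [
    "1132580 wire",
    "896796 act",
    "5583964 inact",
    "281852 cache",
    "112252 free",
    "836960 buf",
]

stats = parse_mem(top_output)
# Note: on FreeBSD, 'buf' is buffer-cache address space whose pages are
# typically already counted in the other queues, so this "total" is an
# overestimate of distinct memory.
total = sum(stats.values())
print(stats["inact"], total)
```

Here the Inactive queue dominates, which is exactly what the dd test was meant to produce.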
On Wed, Oct 31, 2012 at 09:49:21AM +, Karl Pielorz wrote:
--On 30 October 2012 19:51 +0200 Konstantin Belousov kostik...@gmail.com
wrote:
I suggest to take a look at where the actual memory goes.
Start with procstat -v.
Ok, running that for the milter PID I seem to be able
--On 31 October 2012 16:06 +0200 Konstantin Belousov kostik...@gmail.com
wrote:
Since you neglected to provide the verbatim output of procstat, nothing
conclusive can be said. Obviously, you can make an investigation on your
own.
Sorry - when I ran it this morning the output was several
On Wed, Oct 31, 2012 at 02:44:05PM +, Karl Pielorz wrote:
--On 31 October 2012 16:06 +0200 Konstantin Belousov kostik...@gmail.com
wrote:
Since you neglected to provide the verbatim output of procstat, nothing
conclusive can be said. Obviously, you can make an investigation on
In the last episode (Oct 31), Karl Pielorz said:
--On 31 October 2012 16:06 +0200 Konstantin Belousov kostik...@gmail.com
wrote:
Since you neglected to provide the verbatim output of procstat, nothing
conclusive can be said. Obviously, you can make an investigation on
your own.
Sorry
.. isn't the default thread stack size now really quite large?
Like one megabyte large?
adrian
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to
On Wed, 2012-10-31 at 10:55 -0700, Adrian Chadd wrote:
.. isn't the default thread stack size now really quite large?
Like one megabyte large?
That would explain a larger VSZ but the original post mentions that both
virtual and resident sizes have grown by almost an order of magnitude.
I
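Ian's distinction can be made concrete with back-of-the-envelope arithmetic (mine, with illustrative assumed numbers): a large default thread stack reserves address space up front, inflating VSZ immediately, but only the stack pages actually touched become resident and count toward RSS.

```python
# Illustrative sketch (assumed numbers, not measured): each thread
# reserves its whole stack as address space (counted in VSZ), but only
# the pages it touches become resident (counted in RSS).

PAGE_KB = 4
STACK_KB = 1024          # assumed 1 MB default thread stack

def vsz_rss_growth(nthreads, touched_pages_per_thread):
    """Return (VSZ growth, RSS growth) in KB for the thread stacks."""
    vsz = nthreads * STACK_KB
    rss = nthreads * touched_pages_per_thread * PAGE_KB
    return vsz, rss

# 32 threads, each touching only 2 stack pages:
vsz, rss = vsz_rss_growth(32, 2)
print(vsz, rss)   # 32 MB of address space reserved, only 256 KB resident
```

So a bigger default stack alone explains VSZ growth, but not a matching order-of-magnitude RSS growth, which is the point being made here.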
On 31 October 2012 11:20, Ian Lepore free...@damnhippie.dyndns.org wrote:
I think there are some things we should be investigating about the
growth of memory usage. I just noticed this:
FreeBSD 6.2 on an arm processor:
369 root 1 8 -88 1752K 748K nanslp 3:00 0.00% watchdogd
On Wed, Oct 31, 2012 at 11:52:06AM -0700, Adrian Chadd wrote:
On 31 October 2012 11:20, Ian Lepore free...@damnhippie.dyndns.org wrote:
I think there are some things we should be investigating about the
growth of memory usage. I just noticed this:
FreeBSD 6.2 on an arm processor:
It seems like the new compiler likes to get up to ~200+MB resident when
building some basic things in our tree.
Unfortunately this causes smaller machines (VMs) to take days because of
swap thrashing.
Doesn't our make(1) have some stuff to mitigate this? I would expect it
to be a bit
On Wed, Oct 31, 2012 at 12:58 PM, Alfred Perlstein bri...@mu.org wrote:
It seems like the new compiler likes to get up to ~200+MB resident when
building some basic things in our tree.
Unfortunately this causes smaller machines (VMs) to take days because of
swap thrashing.
Doesn't our
On Wed, Oct 31, 2012 at 12:06 PM, Konstantin Belousov
kostik...@gmail.com wrote:
...
If not wired, swapout might cause a delay of the next pat, leading to
panic.
Yes. We need to write microbenchmarks and do more careful analysis to
figure out where and why things have grown. Maybe a mock
On 31 October 2012 12:06, Konstantin Belousov kostik...@gmail.com wrote:
Watchdogd was recently changed to mlock its memory. This is the cause
of the RSS increase.
If not wired, swapout might cause a delay of the next pat, leading to
panic.
Right, but look at the virtual size of the 6.4
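The mlock change Konstantin describes can be sketched as follows (my own sketch, not watchdogd's actual code): wire all current and future pages with mlockall(2) so the pat loop can never be stalled by a swap-in. MCL_CURRENT/MCL_FUTURE are 0x01/0x02 on both FreeBSD and Linux; the call usually needs root or a sufficient RLIMIT_MEMLOCK.

```python
# Hedged sketch via ctypes (not the real watchdogd implementation):
# attempt to wire the process's memory, tolerating failure.
import ctypes
import ctypes.util

MCL_CURRENT = 0x01   # wire pages currently mapped
MCL_FUTURE = 0x02    # wire pages mapped in the future

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def wire_memory():
    """Try to wire all current and future pages; return True on success.
    Fails with EPERM/ENOMEM when unprivileged or over RLIMIT_MEMLOCK."""
    return libc.mlockall(MCL_CURRENT | MCL_FUTURE) == 0

wired = wire_memory()
print(wired)
```

With the pages wired, they count as resident (and wired) for the process's whole lifetime, which is why the RSS figure jumped.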
On 2012-Oct-31 12:58:18 -0700, Alfred Perlstein bri...@mu.org wrote:
It seems like the new compiler likes to get up to ~200+MB resident when
building some basic things in our tree.
The killer I found was the ctfmerge(1) on the kernel - which exceeds
~400MB on i386. Under low RAM, that fails
On 31 October 2012 13:41, Peter Jeremy pe...@rulingia.com wrote:
Another, more involved, approach would be for the scheduler to manage
groups of processes - if a group of processes is causing memory
pressure as a whole then the scheduler just stops scheduling some of
them until the pressure
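Peter's idea can be sketched as a toy admission policy (my illustration, not a real scheduler): given a process group's per-process RSS and a memory budget, admit members until the budget is reached and hold the rest back until pressure eases.

```python
# Toy model of the scheduler idea above (hypothetical names and sizes):
# under memory pressure, only part of a process group is runnable.

def runnable(group, rss_kb, budget_kb):
    """Return the subset of the group allowed to run: admit processes
    (largest first) while cumulative RSS stays within the budget; the
    rest are simply not scheduled until pressure eases."""
    admitted, used = [], 0
    for pid in sorted(group, key=lambda p: rss_kb[p], reverse=True):
        if used + rss_kb[pid] <= budget_kb:
            admitted.append(pid)
            used += rss_kb[pid]
    return admitted

# Illustrative build-job footprints (assumed figures):
rss = {"cc1": 200_000, "ctfmerge": 400_000, "make": 10_000}
print(runnable(rss.keys(), rss, 450_000))
```

Holding back cc1 here keeps the group's working set within the budget, trading parallelism for the elimination of swap thrashing.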
On 10/31/12 1:41 PM, Peter Jeremy wrote:
On 2012-Oct-31 12:58:18 -0700, Alfred Perlstein bri...@mu.org wrote:
It seems like the new compiler likes to get up to ~200+MB resident when
building some basic things in our tree.
The killer I found was the ctfmerge(1) on the kernel - which exceeds
On Wed, Oct 31, 2012 at 1:44 PM, Adrian Chadd adr...@freebsd.org wrote:
On 31 October 2012 13:41, Peter Jeremy pe...@rulingia.com wrote:
Another, more involved, approach would be for the scheduler to manage
groups of processes - if a group of processes is causing memory
pressure as a whole
On 2012-Oct-31 14:21:51 -0700, Alfred Perlstein bri...@mu.org wrote:
Ah, but make(1) can delay spawning any new processes when it knows its
children are paging.
That could work in some cases and may be worth implementing. Where it
won't work is when make(1) initially hits a parallelisable block
On 10/31/12 3:14 PM, Peter Jeremy wrote:
On 2012-Oct-31 14:21:51 -0700, Alfred Perlstein bri...@mu.org wrote:
Ah, but make(1) can delay spawning any new processes when it knows its
children are paging.
That could work in some cases and may be worth implementing. Where it
won't work is when
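The spawn-throttling idea being debated can be sketched as a tiny decision function (hypothetical, not bmake's real job-control code): before forking another job, check whether existing children are paging and, if so, wait instead of spawning.

```python
# Sketch of the make(1) mitigation discussed above (my own model):
# delay spawning new children while existing ones are paging.

def next_action(ready_jobs, children_paging, pending_targets):
    """Decide what a make-like job scheduler should do this tick."""
    if not pending_targets:
        return "done"
    if children_paging:
        return "wait"        # delay spawning: children are thrashing
    if ready_jobs:
        return "spawn"
    return "wait"            # nothing ready yet

print(next_action(ready_jobs=2, children_paging=True, pending_targets=5))
```

Peter's objection still applies to this sketch: if make hits a large parallelisable block first, it spawns everything before any paging is observable, so the check comes too late.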
On 2012/10/31 22:44, Karl Pielorz wrote:
--On 31 October 2012 16:06 +0200 Konstantin Belousov
kostik...@gmail.com wrote:
Since you neglected to provide the verbatim output of procstat, nothing
conclusive can be said. Obviously, you can make an investigation on your
own.
Sorry - when I ran