On Thu, 2 Oct 2008 13:14:35 -0400
Bill Moran <[EMAIL PROTECTED]> wrote:

> I've never been 100% clear on the exact differences, but it basically
> has to do with where the data in RAM came from.  Depending on whether
> it was a VM page, or a disk page will determine what bucket it goes
> into when it moves out of active.

The distinction is between clean pages (Cache) and dirty pages
(Inactive). A dirty page needs to be written to swap or synced to
disk before it can be reused. It's the cache queue that gives the
kernel "liquidity" once free memory drops below about 2%; the kernel
actively balances the cache and inactive queues to maintain this.
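Not real kernel code, but a toy sketch of why the clean/dirty split matters for "liquidity": a clean page can be handed out immediately, while a dirty one must be written back first (the Page class, queue names, and counts here are all made up for illustration):

```python
from collections import deque

class Page:
    def __init__(self, dirty):
        self.dirty = dirty

def reclaim(cache, inactive, writes):
    """Reuse a clean page if one exists; otherwise clean a dirty one first."""
    if cache:                      # clean (cache) pages: free to reuse
        return cache.popleft()
    page = inactive.popleft()      # dirty (inactive) page: sync before reuse
    if page.dirty:
        writes.append(page)        # stands in for a write to swap/disk
        page.dirty = False
    return page

cache = deque(Page(False) for _ in range(2))     # two clean pages
inactive = deque(Page(True) for _ in range(2))   # two dirty pages
writes = []

for _ in range(4):
    reclaim(cache, inactive, writes)

print(len(writes))  # only the two dirty pages needed a writeback
```

The point of the sketch: as long as the cache queue is non-empty, satisfying an allocation costs nothing; once it runs dry, every reclaim pays for I/O first.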

> I'm fairly sure that inactive is memory used by program code.  

The inactive and cache queues are the first step in recycling memory;
they don't differentiate between pages of different origins.

> When
> the program terminates, the memory is marked as inactive, which means
> the next time the program starts the code can simply be moved back to
> active and the program need not be reloaded from disk.

I think such pages can remain active. The level of active memory
seems to be mostly a matter of "stock-control". When I shut down
Xorg/KDE, huge amounts of memory remain active for hours. When demand
for memory increases, the queues get rebalanced to provide more
cached/inactive memory. These figures don't really tell you much on
their own.
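A rough illustration of that rebalancing (again a toy model, not the real pagedaemon; the threshold and page counts are invented): when free memory falls short of a target, pages are demoted from the active queue toward inactive/cache so they become reclaimable.

```python
def rebalance(active, inactive, free, free_target):
    """Demote active pages until free + easily-reclaimable covers the target."""
    while free + len(inactive) < free_target and active:
        inactive.append(active.pop())   # demote a page to the inactive queue
    return active, inactive

active = list(range(10))   # 10 stand-in active pages
inactive = []
active, inactive = rebalance(active, inactive, free=1, free_target=5)
print(len(active), len(inactive))  # 6 4
```

Until that pressure arrives, there is no reason to shrink the active queue, which is consistent with memory staying "active" for hours after the programs using it have exited.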

> Buffer and cache memory are disk data held at different points within
> the kernel.  I've never been 100% clear on the difference, and I
> believe it depends heavily on a thorough understanding of how the
> kernel works.
> The other rule of thumb I've heard is that the closer memory is to the
> left side of top output, the less expensive it is for the kernel to
> move it to active ... inactive being the most efficient and cache
> requiring the most work by the kernel ... I could be wrong, though.

Partly, but it's more the other way around: the further to the right,
the easier it is to reuse (not counting buffer and wired, which are
outside the normal VM/cache system).

> I know that a lot of what I'm saying isn't authoritative, so I hope
> I'm not remembering any of this wrong.  I think to fully understand
> how it works you'll need to read _The_Design_and_Implementation_.

Matt Dillon's vm-design article is a good place to start. 


freebsd-questions@freebsd.org mailing list
To unsubscribe, send any mail to "[EMAIL PROTECTED]"