Irvine Short wrote:

On Wed, 27 Aug 2003, David Landgren wrote:


Irvine Short wrote:

I then found that this:
options         MAXDSIZ="(2048*1024*1024)"
options         MAXSSIZ="(128*1024*1024)"   (also tried 64MB)
options         DFLDSIZ="(512*1024*1024)"

worked fine, but not as expected: limit(1) reports the datasize as unlimited.
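
(An aside, now that I understand it a little better: I believe MAXDSIZ only raises the kernel's hard ceiling, and what limit(1) reports comes from the login class in /etc/login.conf, so "unlimited" there really means "whatever MAXDSIZ allows". A minimal sketch, with illustrative values, would be to set the datasize capabilities on the class:

default:\
        :datasize-cur=512M:\
        :datasize-max=2048M:

and then rebuild the database with cap_mkdb /etc/login.conf.)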

I've managed to crank it up as far as


options MAXDSIZ="(3568*1024*1024)"


Cool! Although I tried 3500 & it blew up too...

Bah!


Just for the record, I read those figures from a discarded kernel configuration file, and the above values (3568MB max data size for a process) don't work.

The highest value I've been able to boot correctly with is (only) 2816MB. There are probably a few more megabytes that can be eked out, but I'm pretty sure that 3072 fails. The error has to do with the kernel being able to map the largest-sized process into memory: it blows up at boot time with some sort of vmalloc or kmalloc error.
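
Thinking it through, those numbers are consistent with the default i386 address-space split: the kernel lives in the top 1GB (KERNBASE at 0xc0000000), leaving 3072MB of virtual address space per process, and that space also has to hold the text, the data segment (MAXDSIZ), the stack (MAXSSIZ) and the shared libraries:

    2816MB data + 128MB stack + text/libs  -> squeezes in under 3072MB
    3072MB data + 128MB stack              -> already over the limit

If that's right, the knob to look at is the split itself. KVA_PAGES (in units of 4MB, default 256 = 1GB of kernel space) is normally used to grow the kernel's share; whether it can be shrunk below the default to give processes more room is something I haven't tested, so take this as a sketch rather than a recipe:

options         KVA_PAGES=128   # 128 * 4MB = 512MB for the kernel, ~3.5GB for userland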

I searched the archives when I was working on this, and came across a kernel developer who had replied to someone with similar difficulties: "why would you want to do something like that?" His reasoning was that the left-over memory would be better used by the OS for caching and buffering anyway.

I don't consider that a good answer, at least not for modern machines with large amounts of RAM. I have a 4GB RAM server running Squid, and nothing else. Squid addresses a very specialised problem domain and has elaborate algorithms to decide what to keep in RAM, and what (and when) to write out to disk; much more so than the OS, which is tuned for the common case.
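
To make that concrete, what I'd like to be able to say in squid.conf is something along these lines (the value is illustrative; cache_mem only bounds the memory given to hot objects, and the process as a whole is larger still because of the cache index and malloc overhead):

cache_mem 3072 MB

With the data segment capped at 2816MB, the process simply can't grow far enough for that to work.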

When it's time for an object to be written out to disk, it should be written quickly, so that the RAM can be freed and handed to something more deserving of being cached. As it happens, the SCSI controller has a large slab of RAM on it too, so there's even less point in the OS holding onto the data in disk buffers for long.

As it is, I never see the Cache and Buf values in top(1) rise above 85M and 199M respectively. I take that to mean that the OS isn't using the extra memory either. I've looked around at the sysctl settings and the source, and the documentation suggests that modifying anything to do with the VM settings is akin to meddling in the affairs of wizards.
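
For the curious, those figures come straight from top(1); on my 4.x box the corresponding buffer-space ceilings can also be read with sysctl (the names may differ on other releases):

sysctl vfs.maxbufspace vfs.hibufspace vfs.bufspace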

Adding in another 20MB for sundry housekeeping processes, it seems to me that just under a gig of RAM is going to waste. I could easily cache another 50,000 web objects *in RAM* if I could make it available to Squid.
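
The back-of-the-envelope sum, using the figures above:

    4096MB total - 2816MB (MAXDSIZ) - 199MB (Buf) - 85MB (Cache) - 20MB (housekeeping) ~= 976MB idle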

So if there's something that can be done about huge maximum process sizes, I'd love to hear about it. I'd *really* like to be able to run a 3.5GB process on a 4GB machine; 512MB ought to be enough for everything else. I've become skilled at not running anything else on that server that could possibly chew up RAM and upset Squid.

David
