Joe Gibbens wrote:
> Thanks for the reply Janne.
> So my only way to run a process over 1GB in size is a custom kernel?

Yes, as of now, on i386.

> Is there an easier way to run a large cache with a process size over 1GB?

You can do other things as well, like bumping cachepct to ~12 with
Joe Gibbens wrote:
> I'm running squid-transparent on 3.9, and the process dies every time
> it reaches 1GB.
>
> FATAL: xcalloc: Unable to allocate 1 blocks of 4108 bytes!
>
> The system has 2GB ram
>
> # ulimit -aH
> time(cpu-seconds)  unlimited
> file(blocks)       unlimited
> coredump(blocks)   unlimited
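For anyone following the thread: the "custom kernel" Janne refers to would be one built with a larger MAXDSIZ, the i386 kernel option that caps a process's data segment. A minimal sketch of the relevant configuration line, assuming a stock OpenBSD 3.9 source tree (the file name and 2GB value are illustrative assumptions, not from the thread):

```
# /usr/src/sys/arch/i386/conf/CUSTOM (a copy of GENERIC)
# Raise the per-process data segment ceiling from the i386 default:
option  MAXDSIZ="0x80000000"    # 2GB; assumed value for illustration
```

After editing, the usual config(8) and make cycle produces the new kernel.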
Thanks for the reply Janne.

So my only way to run a process over 1GB in size is a custom kernel? Is
there an easier way to run a large cache with a process size over 1GB? I
can re-configure the memory usage, but it would be nice to be able to
utilize more of my physical memory without having to

> I'm running squid-transparent on 3.9, and the process dies every time
> it reaches 1GB.
>
> FATAL: xcalloc: Unable to allocate 1 blocks of 4108 bytes!
>
> The system has 2GB ram
>
> # ulimit -aH
> time(cpu-seconds)  unlimited
> file(blocks)       unlimited
> coredump(blocks)   unlimited
> data(kbytes)