[cc trimmed, I'll only get bounces from the other lists]
Carlo Sogono wrote:
Does RHEL or Linux in general limit the amount of memory being used by a
single process? If so, how do I get around this? After an application
performing a lot of small mallocs (160 bytes each) reaches about 1 GB
of allocated memory, RHEL seems to slow its allocation rate right down.
For the first five million or so mallocs it manages about 100,000
mallocs per second, but once it passes 1 GB it drops to just a few
thousand per second. CPU usage also drops from ~12% (one CPU at full
load) to about 1-3%.
Is there anything I can do on Linux or RHEL, or maybe something I
should change in my code? Some stats...
Can't say I'm shocked. Think about the accounting overhead of what
you're trying to do. You're hitting some accounting data structure
that doesn't scale above a few million entries.
I'd expect that code to have performance issues on any operating system.
Since you know more about your application than the operating system
does, you can do a better job yourself by writing a foo_malloc() and
foo_free() which call the real malloc() and free() only occasionally.
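Something along these lines, for example. This is just a sketch of the
idea, assuming every object really is a fixed 160 bytes; the names,
the chunk size, and the fixed-size restriction are all illustrative,
not anything from your code:

```c
#include <stdlib.h>

#define OBJ_SIZE   160   /* the fixed allocation size from your post */
#define CHUNK_OBJS 4096  /* objects obtained per real malloc() call */

/* Each slot doubles as a free-list link while unused. */
typedef union obj {
    union obj *next;         /* link when the slot is free */
    char payload[OBJ_SIZE];  /* user data when allocated */
} obj_t;

static obj_t *free_list = NULL;

void *foo_malloc(void)
{
    if (free_list == NULL) {
        /* One real malloc() now serves the next CHUNK_OBJS requests. */
        obj_t *chunk = malloc(CHUNK_OBJS * sizeof(obj_t));
        if (chunk == NULL)
            return NULL;
        for (size_t i = 0; i < CHUNK_OBJS; i++) {
            chunk[i].next = free_list;
            free_list = &chunk[i];
        }
    }
    obj_t *o = free_list;   /* pop a slot off the free list: O(1) */
    free_list = o->next;
    return o;
}

void foo_free(void *p)
{
    obj_t *o = p;           /* push the slot back: O(1), no real free() */
    o->next = free_list;
    free_list = o;
}
```

Allocation and free become a pointer pop/push instead of a trip through
the allocator's bookkeeping, and the real malloc() is called a few
thousand times instead of millions. The trade-off: chunks are never
returned to the system, which is usually fine for a workload like yours
that holds on to ~1 GB anyway.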
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html