In message <[EMAIL PROTECTED]>, David Greenman writes:
>>The above perl program results in a loop more or less like:
>>      n = 2
>>      for (i = 0; i < 1000000; i++)
>>              realloc(n++);
>>Now, if you read _any_ malloc(3) man page, they will tell you that there
>>is no way it can be guaranteed that this does not result in a lot of
>   Um, except that copying isn't what is causing the problem. The performance
>problem is apparently caused by tens of thousands of page faults per second as
>the memory is freed and immediately reallocated again from the kernel. Doesn't
>phkmalloc keep a small pool of allocations around to avoid problems like

Yes it does, but it doesn't help here.  Basically what happens is
that realloc() is called on to extend a string of one megabyte by
another page, so it allocates 1M+1p and copies the contents over.

Now, in this very particular corner case, we might be able to optimize
for just being able to allocate the next page, but in all real-world
scenarios I've seen, real usage is more like:

        long loop {
                do some other stuff involving malloc/free/realloc
                grow the big string
        }
which negates that optimization.

But if somebody wants to try to code this optimization, I'll be more
than happy to review the result.  I just don't expect it to do much
in "real-life" as opposed to "silly benchmark" situations.

Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED]         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
