> The problem that bothers me is the pool. Since now we have
> $s->server_root_relative, it's almost as bad as using a global pool if
> users will happen to use it in requests, because it'll leak memory
> until the server is killed. And users will use it, just because they can.
will it really leak? I mean, say they do use one of the server pools.
isn't the point of having pools that the memory is re-used? that is, given
100 calls to $s->server_root_relative won't the size of the pool stay the
same if there is server memory available and the length of the resulting
string is less than the available memory?
>
> Moreover we do this silly (but necessary) protection and copy the whole
> string into a perl sv, scared that someone will use a short-lived pool
> object when allocating this variable (was my idea).
>
> I say, let's drop that glue and write it in pure perl:
>
> use File::Spec::Functions qw(catfile);
> sub Apache::Server::server_root_relative {
>     return catfile Apache::server_root, @_;
> }
>
> well, maybe check that the path exists, like apr_filepath_merge
> does. I haven't benchmarked it, but I won't be surprised if the
> performance/memory usage combination is no worse than the C
> equivalent's (remember that at the moment we allocate the string
> twice, because of the copy). And we totally eliminate "the pool
> syndrome".
>
> It'd be a good idea to look at other APIs that require a pool object.
yikes. are you sure about all this?
--Geoff