On Wed, Feb 21, 2007 at 12:20:00PM +0200, Henri Asseily wrote:
> 
> In order to add seamless integration of a caching engine (squid,  
> Redline accelerator, etc...), there are some issues.
> 
> First, POST requests are never cached by Squid, and probably not by  
> most other engines. POST is generally a "write", so no accelerator  
> likes caching them. I suggest that the http transport take that into  
> account, and allow for passing a driver parameter with the statement  
> handle that specifies a POST or a GET (default should probably be  
> POST, it's safer, see below). The related DBI mod_perl transport  
> handler would only need a simple test of the method type to decide  
> how to extract the data.

No problem.
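
As a sketch of what that could look like from the client side (the
go_request_method name, the URL, and the remote dsn below are just
placeholders, not an agreed API):

    # Hypothetical usage sketch, attribute name is a placeholder.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect(
        "dbi:Gofer:transport=http;url=http://gofer.example.com/gofer;dsn=dbi:mysql:app",
        undef, undef, { RaiseError => 1 },
    );

    # Read-only query a front-end cache may serve: ask the transport to use GET.
    my $sth = $dbh->prepare(
        "SELECT name FROM users WHERE id = ?",
        { go_request_method => 'GET' },    # placeholder attribute
    );
    $sth->execute(42);

    # Writes stay on the default POST, which accelerators never cache.
    $dbh->do("UPDATE users SET name = ? WHERE id = ?", undef, "Henri", 42);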

> Second, it would be nice to manage via the statement handle  
> parameters the http transport's headers. Some implementations might  
> allow a client to bypass a caching engine by setting the correct  
> headers.
> 
> So basically, I guess some kind of way to pass in transport  
> parameters via the sth should be codified. I see in the TODO section  
> "Driver-private sth attributes - set via prepare() - change DBI  
> spec", so that's probably what you were thinking.

Nearly. That item relates to making prepare(..., \%attribs) act more
like connect(..., \%attribs). The idea is that "factory methods"
should treat their 'method attributes' as 'handle attributes' to be
applied to the newly created handle. For example:

    $sth = $dbh->prepare($sql, { RaiseError => 0 });

That's needed because with gofer you can't alter sth attributes after
the handle has been created.
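
To make the contrast concrete (illustrative only):

    # Works everywhere, including over gofer: attributes go to the factory method.
    my $sth = $dbh->prepare($sql, { RaiseError => 0 });

    # Not workable over gofer: by the time we try to change it, the real sth
    # already exists on the remote side, so the change can't reach it.
    my $sth2 = $dbh->prepare($sql);
    $sth2->{RaiseError} = 0;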

After that it just needs a little 'plumbing' for the relevant gofer sth
attributes to be noticed by the gofer transport.

Then I'd expect, and hope, we'd end up with something as simple as:

    $sth = $dbh->prepare($sql, { go_cache_ttl => 60 });

or, more generally:
    $dbh = DBI->connect(..., { go_cache_ttl => 60 });
or
    $dbh = DBI->connect("dbi:Gofer:transport=http;cache_ttl=60;...", ...)

The mod_perl transport on the server side would see the client
expressing a desire to have the results cached and, if the request
produces a result set, add the relevant HTTP headers to the response.
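
That could be as little as the following (a minimal mod_perl 2 sketch,
not the actual DBI::Gofer::Transport::mod_perl code; how $cache_ttl is
recovered from the gofer request is assumed here):

    use Apache2::RequestRec ();
    use APR::Table ();
    use Apache2::Const -compile => qw(OK);

    sub add_cache_headers {
        my ($r, $cache_ttl) = @_;
        if ($cache_ttl && $cache_ttl > 0) {
            # Let squid (or any other accelerator) serve this response from
            # cache for the next $cache_ttl seconds.
            $r->headers_out->set('Cache-Control' => "public, max-age=$cache_ttl");
        }
        else {
            # Responses the client didn't ask to have cached stay uncacheable.
            $r->headers_out->set('Cache-Control' => 'no-cache');
        }
        return Apache2::Const::OK;
    }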

I'm also thinking in terms of adding support for client-side caching
via general caching modules like Cache::FastMmap and Cache::Memcached.
Perhaps via a go_cache_using => $foo attribute. Should be trivial.
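
For example, the sort of thing a go_cache_using hook might boil down to
on the client side (a sketch only; the attribute, the key scheme, and
the $send code ref are assumptions):

    use Cache::FastMmap;
    use Digest::MD5 qw(md5_hex);

    my $cache = Cache::FastMmap->new(
        share_file  => '/tmp/gofer-cache',
        expire_time => '60s',
    );

    sub transmit_cached {
        my ($frozen_request, $send) = @_;    # $send does the real round trip
        my $key = md5_hex($frozen_request);  # key on the serialized request

        my $frozen_response = $cache->get($key);
        return $frozen_response if defined $frozen_response;

        $frozen_response = $send->($frozen_request);
        $cache->set($key, $frozen_response);
        return $frozen_response;
    }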

> There are ways around this if we're stuck without sth-based driver  
> parameters, by creating different read dbh's and write dbh's.  
> Technically that's probably the better way to go for a large scale  
> system, but I can see it confusing the smaller user.

Also, in theory, you could have multiple transports, each with a different
configuration, and then specify which transport to use when you call
prepare().
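
For now, the read/write split can be done by hand, along these lines
(illustrative only; DSNs and credentials are made up):

    use strict;
    use warnings;
    use DBI;

    # Reads go through the http gofer transport, which sits behind the accelerator.
    my $read_dbh = DBI->connect(
        "dbi:Gofer:transport=http;url=http://gofer.example.com/gofer;dsn=dbi:mysql:app",
        undef, undef, { RaiseError => 1 },
    );

    # Writes go straight to the database and bypass the cache entirely.
    my $write_dbh = DBI->connect(
        "dbi:mysql:app", "appuser", "secret", { RaiseError => 1 },
    );

    my $rows = $read_dbh->selectall_arrayref("SELECT id, name FROM users");
    $write_dbh->do("INSERT INTO users (name) VALUES (?)", undef, "Henri");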

> Anyway, regarding GET and POST: POST is safer because it's guaranteed  
> to work, but it'll never be cached. GET works at least for a 2,048  
> character url, and potentially up to 8,000 characters, but you never  
> know what middleware device might truncate it. To compound the  
> problem, in order to be safe one would have to uri_encode_utf8() the  
> query string, which will potentially bloat it even more.
> 
> So the question is... how big is a standard Gofer request package?

Small enough, I think. The pipeone and stream transports just use
pack("H*") for simplicity, and they can log it, so let's see:

$ make && DBD_GOFER_TRACE=4 perl -Mblib t/85gofer.t 2>&1 \
    | grep '^Request:' \
    | perl -MList::Util=max,sum -nle 'push @a,length; END{ print max(@a); print sum(@a)/$. }'
825
639.4

So the max is 825 bytes in hex, or ~413 in binary.  That's a good way
short of 2048, but that test uses only trivial statements.
A large select statement would add directly to the length.
The request (and response) would probably zip well, if that was needed.
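
If size did become an issue for GET, it'd be easy to measure what
zipping buys with something like this (a rough sketch; it assumes the
hex dump follows "Request:" on the same trace line):

    # Feed it the same '^Request:' trace lines as the one-liner above.
    use strict;
    use warnings;
    use Compress::Zlib;
    use MIME::Base64;
    use URI::Escape;

    while (<>) {
        next unless s/^Request:\s*//;
        chomp;
        my $binary = pack("H*", $_);                        # trace logs the request hex-encoded
        my $zipped = encode_base64(compress($binary), '');  # what a GET parameter could carry
        printf "hex=%d binary=%d escaped=%d zipped+base64=%d\n",
            length($_), length($binary), length(uri_escape($binary)), length($zipped);
    }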

For intranet use, though, middleware device limitations are less likely
to be a problem, so I'd expect the 8000 char limit to be more relevant.

Tim.
