A quick question: what sort of HTTP transfers is chunking most often
used for? I believe we will get poor results with this method for most types
of binary data, which tend to be the larger files. In the web context these
will generally not have changed at all (in which case traditional caching
works well). There is a risk that if the literal length byte is corrupted
we could try to read a very large amount of data; I am
not sure if this is acceptable.
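One common answer to the corrupted-length-byte risk is a plausibility cap on the decoder side. A minimal sketch, assuming a hypothetical `read_literal` helper and an arbitrary 1 MiB cap (neither is from crcsync itself):

```python
MAX_LITERAL = 1 << 20  # 1 MiB cap; an assumed bound, not from the thread


def read_literal(stream, length: int) -> bytes:
    """Read a literal run, refusing lengths that a corrupted
    length field could inflate to something absurd."""
    if length < 0 or length > MAX_LITERAL:
        raise ValueError(f"implausible literal length: {length}")
    data = stream.read(length)
    if len(data) != length:
        raise EOFError("stream ended before literal was complete")
    return data
```

With a cap like this, a flipped bit in the length field fails fast instead of stalling the decoder waiting for data that will never arrive.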
Toby
2009/3/31 Gervase Markham g...@mozilla.org
On 25/03/09 18:20, Toby Collett wrote:
Not a GSoC project, just a project (crcsync is the name at the moment).
On Tue, Mar 31, 2009, Toby Collett t...@plan9.net.nz wrote:
There is no error checking in the encoding itself; this is assumed to be
taken care of in other layers, and we throw in a strong hash on the whole
file to make sure this is correct.
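The weak-checksums-plus-one-strong-hash idea can be sketched as follows. CRC32 truncated to 30 bits and SHA-256 are stand-ins here; the thread does not say which functions crcsync actually uses:

```python
import hashlib
import zlib


def encode_with_checks(data: bytes, block_size: int = 4096):
    """Sketch: weak 30-bit checksums per block for matching, plus one
    strong hash over the whole file to catch anything the weak
    checksums let through."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    weak = [zlib.crc32(b) & 0x3FFFFFFF for b in blocks]  # keep low 30 bits
    strong = hashlib.sha256(data).hexdigest()
    return weak, strong
```

The receiver reconstructs the file from matched and literal blocks, then recomputes the strong hash; a mismatch means a weak-checksum collision slipped through and the transfer must be retried whole.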
Is that right? I thought what Rusty was saying re crcsync
Martin Langhoff martin.langh...@gmail.com
On Tue, Mar 31, 2009 at 8:32 PM, Toby Collett t...@plan9.net.nz wrote:
We are only using 30-bit hashes, so even if it were a perfect hash it is
possible you could get a collision. Having said that, our collision space is
only the single web request, so
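For a sense of scale, the chance of any two 30-bit hashes colliding within one request can be estimated with the standard birthday-bound approximation (my worked example; the block counts are illustrative, not from the thread):

```python
import math


def collision_probability(n_hashes: int, bits: int = 30) -> float:
    """Birthday-bound approximation of the probability that any two
    of n_hashes values collide in a space of 2**bits."""
    space = 2.0 ** bits
    # P(no collision) ~ exp(-n * (n - 1) / (2 * space))
    return 1.0 - math.exp(-n_hashes * (n_hashes - 1) / (2.0 * space))


# A single web response split into 1000 blocks:
p = collision_probability(1000)
```

For 1000 blocks this comes out well under 0.1%, which is why limiting the collision space to one request (and backing it with the whole-file strong hash) keeps the scheme safe in practice.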
Not a GSoC project, just a project (crcsync is the name at the moment).
Initial target is a double proxy server, one at each end of the slow link,
with dreams of web standards and browser integration following.
Seems to me that both projects need the same upstream server extension to be
able to send
Hi Alex,
I think you are on the right track. There is a third option, which is to add
a few extra configuration options to the cache module to make it more
aggressive about caching: basically, cache everything except pages marked
'private' (and possibly even those, as long as you can ensure the
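The "more aggressive caching" option could be expressed with stock mod_cache directives along these lines. The directives are real Apache mod_cache options, but whether they match what is meant here is my assumption, not something stated in the thread:

```apacheconf
# Cache everything on disk under / (requires mod_cache + mod_cache_disk)
CacheEnable disk /
# Cache responses even when they lack a Last-Modified header
CacheIgnoreNoLastMod On
# Also store responses marked Cache-Control: no-store / private --
# only safe when the proxy serves a single user or a trusted group
CacheStoreNoStore On
CacheStorePrivate On
```

CacheStorePrivate in particular matches the "cache even pages marked 'private'" idea, with the caveat above about who shares the cache.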
Great to hear you got it running. Unfortunately I only have about a two-week
head start on you on the Apache front, so I am sure lots of
things will get neater as we go along.
2009/3/16 Alex Wulms alex.wu...@scarlet.be
Hi Toby,
I managed to get it working on my PC under SUSE 11.1.