On Fri, 17 Dec 2004, Rod Walker wrote:

Hi,
Of course I don't know the inner workings of Squid, but I thought the
problem would be that the file chunks would always be different depending
on how many streams the user chose. So even if Squid cached partial file
objects, it is unlikely the client would ask for exactly those blocks.
For example, a client downloads a file with 2 streams and then another
client with 3:

2 streams: 1-500 & 501-1000 would result in 2 objects cached.
3 streams: 1-333, 334-666, 667-1000 - would these requests give cache hits?

Yes, if the cache is implemented correctly.
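
To make that concrete, here is a minimal sketch in Python (not Squid's
actual implementation) of a range-aware cache that merges contiguous byte
spans per URL; once the 2-stream parts have been coalesced, all three of
the 3-stream requests fall inside already-cached data:

    def merge(spans):
        # Coalesce overlapping or adjacent (start, end) byte spans.
        merged = []
        for start, end in sorted(spans):
            if merged and start <= merged[-1][1] + 1:
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        return merged

    class RangeCache:
        def __init__(self):
            self.spans = {}  # url -> list of merged (start, end) spans

        def store(self, url, start, end):
            self.spans[url] = merge(self.spans.get(url, []) + [(start, end)])

        def hit(self, url, start, end):
            # A request is a hit if one cached span covers it entirely.
            return any(s <= start and end <= e
                       for s, e in self.spans.get(url, []))

    cache = RangeCache()
    cache.store("http://example.org/file", 1, 500)    # first client, 2 streams
    cache.store("http://example.org/file", 501, 1000)
    for rng in [(1, 333), (334, 666), (667, 1000)]:   # second client, 3 streams
        print(rng, cache.hit("http://example.org/file", *rng))  # True each time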


As for squid-to-squid and squid-to-server multi-stream transfers:
If the requested object will not be cached because it's larger than the
maximum cacheable size, then I can see the problems you mention. But if
it is to be cached, then it could be written straight to disk and read
back in a single stream to the client.

True, but how quickly should we fetch the parts that cannot yet be sent
to the client? And how do we guarantee we do not run out of disk space?
Consider the case where several of these requests run in parallel for
different resources.
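
To illustrate the resource problem (a hypothetical sketch, not a proposal
for Squid's design): one bounded answer is a per-request read-ahead
window, so the proxy never fetches more than a fixed budget beyond what
the client has consumed. The WINDOW size and function name here are made
up for illustration:

    WINDOW = 64 * 1024 * 1024   # assumed read-ahead budget per request: 64 MB

    def may_fetch_next_part(next_fetch_offset, client_offset):
        # Only fetch further parts while the buffered-but-undelivered
        # data stays within the budget; otherwise wait for the client
        # to catch up.
        return next_fetch_offset - client_offset < WINDOW

Even with such a window, N parallel requests still need N * WINDOW bytes
of spool space, which is exactly the guarantee problem above.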


I guess you normally keep the objects in memory though. My conceptual
problem is that I'm talking about objects around 1GB, and you about
objects more like 1MB.

No, I am talking about large objects. For small objects there is no
noticeable benefit from multi-streamed transfers, as the majority of the
overhead is the session setup.
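
A back-of-the-envelope calculation (numbers assumed for illustration, not
measurements) shows why: with a fixed setup cost per stream, setup
dominates a 1MB transfer but is negligible for a 1GB one:

    setup = 0.2        # assumed TCP + session setup cost per stream, seconds
    rate = 10e6 / 8    # assumed per-stream throughput: 10 Mbit/s, in bytes/s

    for size in (1e6, 1e9):                 # 1 MB vs 1 GB object
        transfer = size / rate
        share = setup / (setup + transfer)
        print(f"{size / 1e6:8.0f} MB: setup is {share:.2%} of total time")

This prints roughly 20% for the 1MB object and 0.02% for the 1GB one, so
extra streams only buy anything on the large object.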


I think I can prototype the system with single streams, in the knowledge
that multi-stream is at least possible.
Thanks a lot for your help.



Regards Henrik
