On Sat, 2003-08-30 at 10:53, Vik Olliver wrote:
> Assume there is a community with a LAN backbone. Individual members are
> not allowed to give others access to the internet via their own private
> Internet connections but freely share LAN access.
>
> A member accesses the web via a local proxy that first consults a
> central server cluster for pages. If the page is present, it is returned
> to the member's browser.
>
> If the page is not present, the local proxy attempts to acquire the page
> via the member's private connection. The proxy then serves the page to
> the user's browser and also deposits a copy with the central community
> server.
>
> All this happens using cross-platform software.
>
> Now, is there anything vaguely like this anywhere?
That's a mess. The only way I can see to do that is to run one instance of squid per user, with all instances sharing the same disk cache directory. Each squid instance would have to be configured to use a different upstream cache server, namely the member's own cache, which in turn uses their own connection. The potential for security holes is high. AFAIK there are ways to make squid "route" requests based on destination, but not on source at the IP level. Rather, squid wants to know that all .com addresses go via one upstream proxy and all .nz addresses via another.

I think an easier way to go would be a normally configured squid running in each member's local network, with each one set to use all the nearby squids as (checks to get the wording right) sibling caches. That achieves the same result, but without a central machine. However, every member would need a squid process running :-\
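For reference, the sibling arrangement might look something like the following fragment in each member's squid.conf. The addresses and ports here are hypothetical (3128/3130 are just squid's usual HTTP and ICP defaults); `cache_peer` with the `sibling` type is the standard directive for this, so this is a sketch rather than a tested config:

```
# Hypothetical squid.conf fragment for one member's squid.
# Ask the other members' caches (as siblings) for a hit before
# fetching a miss over this member's own Internet connection.
cache_peer 192.168.0.11 sibling 3128 3130
cache_peer 192.168.0.12 sibling 3128 3130
cache_peer 192.168.0.13 sibling 3128 3130
```

Siblings only serve each other cache hits; on a miss each squid still goes direct over its owner's private link, which is what keeps this within the "don't share your Internet connection" rule.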
