I'm a little bit worried about the clustering feature, as I think I want
to be able to have different types of caches in a single location, and
clustering is an ATS "proprietary" feature, correct?

But yes - that is really what I want - a distributed cache within the 
colo where each object is on one single cache.

Thanks!
JvD

On 03/13/2012 02:07 PM, Leif Hedstrom wrote:
> On 3/13/12 1:37 PM, Van Doorn, Jan R wrote:
>> Hello,
>>
>> I apologize if this has been discussed before, or is in the
>> documentation. I am fairly new to ATS, so forgive my ignorance.
>>
>> I am looking at building a multi-tier CDN, and want to have "content
>> affinity" when my edge caches miss and select a parent cache, meaning, I
>> want to have multiple caches in the same parent caching location, and I
>> don't want to have these caches store the same content.
>>
>> I know I could use ICP to share the caches in a location, but the
>> records.config says "ICP Configuration. NOTE! ICP is currently broken
>> NOTE!", and I am hesitant to use ICP, as it seems to bring a lot of
>> complexity.
>
> Hmmm, ICP wouldn't solve this, would it? ICP is generally a multicast 
> protocol, intended to be used within the colocation. It also 
> duplicates the cache across all peers of an ICP network. ICP is 
> similar to the clustering feature that ATS already has, which is one 
> reason why no one has bothered with getting ICP to work.
>
> If you intend to do parent proxying across colos (I'm guessing?), you'd 
> want to use the TSHttpTxnParentProxySet() API in a plugin. Or, if you 
> can partition your parent proxies based on various "rules" on the URL 
> (e.g. a prefix of the path), you can use our parent.config to pick and 
> choose parent(s) for various URLs (but it's a manually maintained 
> config, obviously).
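>
> For the plugin route, something along these lines might work (a rough, 
> untested sketch; the parent host names, the port and the toy hash are 
> just placeholders):
>
>   #include <ts/ts.h>
>
>   /* Placeholder parents; replace with your own. */
>   static const char *parents[]   = { "parent1.example.com",
>                                      "parent2.example.com" };
>   static const int   n_parents   = 2;
>   static const int   parent_port = 8080;
>
>   static int
>   handler(TSCont contp, TSEvent event, void *edata)
>   {
>     TSHttpTxn txnp = (TSHttpTxn)edata;
>     TSMBuffer bufp;
>     TSMLoc    hdr_loc, url_loc;
>
>     if (TSHttpTxnClientReqGet(txnp, &bufp, &hdr_loc) == TS_SUCCESS) {
>       if (TSHttpHdrUrlGet(bufp, hdr_loc, &url_loc) == TS_SUCCESS) {
>         int      len;
>         char    *url  = TSUrlStringGet(bufp, url_loc, &len);
>         unsigned hash = 0;
>         int      i;
>
>         for (i = 0; i < len; i++)   /* toy hash; use something better */
>           hash = hash * 31 + (unsigned char)url[i];
>
>         /* Pin this URL to exactly one parent, so the parents don't
>            all end up caching the same objects. */
>         TSHttpTxnParentProxySet(txnp, parents[hash % n_parents],
>                                 parent_port);
>         TSfree(url);
>         TSHandleMLocRelease(bufp, hdr_loc, url_loc);
>       }
>       TSHandleMLocRelease(bufp, TS_NULL_MLOC, hdr_loc);
>     }
>     TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
>     return 0;
>   }
>
>   void
>   TSPluginInit(int argc, const char *argv[])
>   {
>     TSHttpHookAdd(TS_HTTP_READ_REQUEST_HDR_HOOK,
>                   TSContCreate(handler, NULL));
>   }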
>
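> If you go the parent.config route, rules along these lines would do it 
> (the domain, prefixes and parent hosts are made-up examples):
>
>   dest_domain=example.com prefix=videos parent="p1.example.com:8080"
>   dest_domain=example.com prefix=images parent="p2.example.com:8080"
>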
> If what you are really asking for is a distributed cache within the 
> colo where each object is on one single cache, then take a look at the 
> clustering feature. It uses URL hashes to decide which node owns an 
> object. It wouldn't work well across colocations though.
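>
> Turning that on is basically a records.config change on each node in 
> the cluster, something like this (from memory, so double-check it 
> against the docs; the interface name is whatever your boxes use):
>
>   CONFIG proxy.local.cluster.type INT 1
>   CONFIG proxy.config.cluster.ethernet_interface STRING eth0
>
> Type 1 is full-clustering mode, i.e. the nodes share one distributed 
> cache; 3 (the default) means no clustering.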
>
> -- Leif
>
