Note that even with Vary: Origin, we still have to load the HTTP request 
headers from the disk cache in order to apply the Vary check, which leaks 
timing information. So "Vary: Origin" is not a sufficient security mechanism 
to prevent that sort of cross-site attack.
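
For concreteness, here is a rough sketch of the kind of cross-site timing 
probe being discussed (the URL and the threshold are hypothetical, and a real 
attack would be more careful about noise):

  // Measure how long a cross-site subresource takes to complete; a very fast
  // completion suggests it was already in the HTTP cache, i.e. the user
  // probably visited a site that loaded it.
  async function probablyCached(url: string): Promise<boolean> {
    const start = performance.now();
    try {
      // Opaque no-cors fetch: we never see the body, only how long it took.
      await fetch(url, { mode: "no-cors", cache: "force-cache" });
    } catch {
      // Errors still take measurable time; good enough for a sketch.
    }
    return performance.now() - start < 5; // made-up "served from disk" cutoff (ms)
  }

  probablyCached("https://victim.example/static/app.js")
    .then(hit => console.log(hit ? "likely visited" : "likely not visited"));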

On Wednesday, October 29, 2025 at 5:08:42 PM UTC-4 Erik Anderson wrote:

> My understanding was that there was believed to be a meaningful security 
> benefit to partitioning the cache. That’s because it would limit a party’s 
> ability to infer that you’ve visited some other site by measuring a side 
> effect tied to how quickly a resource loads. That observation could 
> potentially be made even if that specific adversary doesn’t have any of 
> their own content loaded on the other site.
>
>  
>
> Of course, if there is an entity with a resource loaded across both sites 
> with a 3p cookie *and* they’re willing to share that info/collude, 
> there’s not much benefit. And even when partitioned, if 3p cookies are 
> enabled, there are potentially measurable side effects that differ based on 
> whether the resource request had some specific state in a 3p cookie.
>
>  
>
> Does that incremental security benefit of partitioning the cache justify 
> the performance costs when 3p cookies are still enabled? I’m not sure.
>
>  
>
> Even if partitioning were eliminated, a site could protect itself a bit by 
> specifying Vary: Origin, but that probably doesn’t sufficiently cover iframe 
> scenarios (nor would I expect most sites to get it right).
>
>  
>
> *From:* Rick Byers <[email protected]> 
> *Sent:* Wednesday, October 29, 2025 11:56 AM
> *To:* Patrick Meenan <[email protected]>
> *Cc:* Mike Taylor <[email protected]>; blink-dev <[email protected]>
> *Subject:* [EXTERNAL] Re: [blink-dev] Intent to ship: Cache sharing for 
> extremely-pervasive resources
>
>  
>
> If this is enabled only when 3PCs are enabled, then what are the tradeoffs 
> of going through all this complexity and governance vs. just broadly 
> coupling HTTP cache keying behavior to 3PC status in some way? What can a 
> tracker credibly do with a single-keyed HTTP cache that they cannot do with 
> 3PCs? Are there also concerns about accidental cross-site resource sharing 
> which could be mitigated more simply by other means, e.g. by scoping just 
> to ETag-based caching?
>
>  
>
> I remember the controversy and some real evidence of harm to users and 
> businesses in 2020 when we partitioned the HTTP cache, but I was convinced 
> that we had to accept that harm in order to credibly achieve 3PCD. At the 
> time I was personally a fan of a proposal like this (even for users without 
> 3PCs) in order to mitigate the harm. But now it seems to me that if we're 
> going to start talking about poking holes in that decision, perhaps we 
> should be doing a larger review of the options in that space with the 
> knowledge that most Chrome users are likely to continue to have 3PCs 
> enabled. WDYT?
>
>  
>
> Thanks,
>
>    Rick
>
>  
>
> On Mon, Oct 27, 2025 at 10:27 AM Patrick Meenan <[email protected]> 
> wrote:
>
> I don't believe the security/privacy protections actually rely on the 
> assertions (and it's unlikely those would be public). It's more for 
> awareness and to make sure they don't accidentally break something with 
> their app if they were relying on the responses being partitioned by site.
>
>  
>
> As far as query params go, the browser code already only considers requests 
> with no query params, so any that do rely on query params won't get 
> included anyway.
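>
> (As a rough illustration, that eligibility check is of roughly this shape; 
> a sketch only, not the actual Chromium code:)
>
>   // Only URLs without a query string are even considered for the shared cache.
>   function eligibleForSharedCache(url: string): boolean {
>     return new URL(url).search === "";
>   }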
>
>  
>
> The same goes for cookies. Since the feature is only enabled when 
> third-party cookies are enabled, adding cookies to these responses or 
> putting unique content in them won't actually pierce any new boundaries, 
> but it goes against the intent of only using it for public/static 
> resources, and they'd lose the benefit of the shared cache when it gets 
> updated. The same goes for the fingerprinting risks if the pattern were 
> abused.
>
>  
>
> On Mon, Oct 27, 2025 at 9:39 AM Mike Taylor <[email protected]> wrote:
>
> On 10/22/25 5:48 p.m., Patrick Meenan wrote:
>
> The candidate list goes down to 20k occurrences in order to catch 
> resources that were updated mid-crawl and may have multiple entries with 
> different hashes that add up to 100k+ occurrences. In the candidate list, 
> without any filtering, the 100k cutoff is around 600. I'd estimate that 
> well under 25% of the candidates make it through the filtering for stable 
> pattern, correct resource type and reliable pattern. The first release 
> will likely be 100-200 and I don't expect it will ever grow above 500.
>
> Thanks - I see the living document has been updated to mention 500 as a 
> ceiling. 
>
>  
>
> As far as cadence goes, I expect there will be a lot of activity for the 
> next few releases as individual patterns are coordinated with the origin 
> owners, but then it will settle down to a much more bursty pattern of 
> updates every few Chrome releases (likely linked with an origin changing 
> their application and adding more/different resources). And yes, it is 
> manual.
>
> As far as the process goes, resource owners need to actively assert that 
> their resource is appropriate for the single-keyed cache and that they 
> would like it included (usually in response to active outreach from us, 
> but we have the external-facing list for owner-initiated contact as well). 
> The design doc has the documentation for what it means to be appropriate 
> (and the doc will be moved to a readme page in the repository next to the 
> actual list so it's not a hard-to-find Google doc):
>
> Will there be any kind of public record of this assertion? What happens if 
> a site starts using query params or sending cookies? Does the person in 
> charge of manual list curation discover that in the next release? Does that 
> require a new release (I don't know if this lives in component updater, or 
> in the binary itself)? 
>
>  
>
> *5. Require resource owner opt-in*
> For each URL to be included, reach out to the team/company responsible for 
> the resource to validate the URL pattern and get assurances that the 
> pattern will always serve the same content to all sites and will not be 
> abused for tracking (e.g. by using unique URLs within the pattern mask as 
> a bit-mask for fingerprinting). They will also need to validate that the 
> URLs covered by the pattern do not rely on being able to set cookies over 
> HTTP using a Set-Cookie HTTP response header, because cookies will not be 
> re-applied across cache boundaries (the Set-Cookie header is not cached 
> with the resource).
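>
> To make the "bit-mask" concern concrete: if unique URLs under an allowed 
> pattern all landed in the single-keyed cache, a tracker could encode an 
> identifier in which of them have been warmed. A rough sketch, using an 
> entirely hypothetical pattern and URLs (not any listed resource):
>
>   // Hypothetical allowed pattern: https://cdn.example/<hash>-common.js
>   // "Write" a 16-bit ID on site A by warming one URL per set bit; "read" it
>   // back on site B by timing which of the same 16 URLs are cache hits.
>   function warmIdentifier(id: number): void {
>     for (let bit = 0; bit < 16; bit++) {
>       if (id & (1 << bit)) {
>         const fakeHash = bit.toString(16).padStart(8, "0");
>         fetch(`https://cdn.example/${fakeHash}-common.js`, { mode: "no-cors" });
>       }
>     }
>   }
>
> The owner assertion (together with requiring patterns that reliably target 
> a single resource) is what is meant to rule out this shape of abuse.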
>
>  
>
> On Wed, Oct 22, 2025 at 5:31 PM Mike Taylor <[email protected]> wrote:
>
> On 10/18/25 8:34 a.m., Patrick Meenan wrote:
>
> Sorry, I missed a step in making the candidate resource list public. I 
> have moved it to my chromium account and made it public here 
> <https://docs.google.com/spreadsheets/d/1TgWhdeqKbGm6hLM9WqnnXLn-iiO4Y9HTjDXjVO2aBqI/edit?usp=sharing>.
>  
>
>
>  
>
> Not everything in that list meets all of the criteria - it's just the 
> first step in the manual curation (same URL served the same content across 
> 20k+ sites in the HTTP Archive dataset).
>
>  
>
> The manual steps from there for meeting the criteria are basically (a 
> rough sketch of the first few follows the list):
>
>  
>
> - Cull the list for scripts, stylesheets and compression dictionaries.
>
> - Remove any URLs that use query parameters.
>
> - Exclude any responses that set cookies.
>
> - Identify URLs that are not manually versioned by site embedders (i.e. 
> the embedded resource cannot go stale). These are either in-place-updated 
> resources or automatically versioned resources.
>
> - Only include URLs that can reliably target a single resource by pattern 
> (e.g. ..../<hash>-common.js but not ..../<hash>.js)
>
> - Get confirmation from the resource owner that the given URL pattern is, 
> and will continue to be, appropriate for the single-keyed cache.
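>
> A rough sketch of how the first few (purely mechanical) steps above could 
> be pre-filtered ahead of the manual review; the CSV columns are assumed 
> and this is not the actual curation tooling:
>
>   // filter-candidates.ts: hypothetical pre-filter over the candidate list.
>   // Assumed CSV layout: url,type,sets_cookie,occurrences
>   import { readFileSync } from "node:fs";
>
>   const KEEP_TYPES = new Set(["script", "stylesheet", "dictionary"]);
>
>   const kept = readFileSync("candidates.csv", "utf8")
>     .trim()
>     .split("\n")
>     .slice(1) // skip the header row
>     .map(line => {
>       const [url, type, setsCookie] = line.split(",");
>       return { url, type, setsCookie: setsCookie === "true" };
>     })
>     .filter(row =>
>       KEEP_TYPES.has(row.type) &&        // scripts, stylesheets, dictionaries
>       new URL(row.url).search === "" &&  // drop URLs with query parameters
>       !row.setsCookie);                  // drop responses that set cookies
>
>   // Versioning, single-resource patterns, and owner opt-in still need a human.
>   console.log(kept.map(row => row.url).join("\n"));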
>
> A few questions on list curation:
>
> Can you clarify how big the list will be? The privacy review at 
> https://chromestatus.com/feature/5202380930678784?gate=5174931459145728 
> mentions 
> ~500, while the design doc mentions 1000. I see the candidate resource list 
> starts at ~5000, then presumably manual curation begins to get to one of 
> those numbers.
>
> What is the expected list curation/update cadence? Is it actually manual?
>
> Is there any recourse process for owners of resources that don't want to 
> be included? Do we have documentation on what it means to be appropriate 
> for the single-keyed cache?
>
> thanks,
> Mike
>
