bryancall commented on PR #13015: URL: https://github.com/apache/trafficserver/pull/13015#issuecomment-4113939841
## RFC 9110 Heuristic Caching Gap Analysis

While adding 204 and 308, I checked the full list of heuristically cacheable status codes from [RFC 9110 Section 15.1](https://www.rfc-editor.org/rfc/rfc9110#section-15.1). ATS is still missing four codes from the allowlist in `is_response_cacheable()`:

| Status | RFC 9110 | ATS (after this PR) |
|---|---|---|
| 200 OK | Cacheable | In allowlist |
| 203 Non-Authoritative Information | Cacheable | In allowlist |
| 204 No Content | Cacheable | In allowlist (this PR) |
| 206 Partial Content | Cacheable | Intentionally excluded -- handled by range caching logic |
| 300 Multiple Choices | Cacheable | In allowlist |
| 301 Moved Permanently | Cacheable | In allowlist |
| 308 Permanent Redirect | Cacheable | In allowlist (this PR) |
| **404 Not Found** | Cacheable | **Missing** -- negative caching only |
| **405 Method Not Allowed** | Cacheable | **Missing** -- negative caching only |
| 410 Gone | Cacheable | In allowlist |
| **414 URI Too Long** | Cacheable | **Missing** -- negative caching only |
| **501 Not Implemented** | Cacheable | **Missing** -- negative caching only |

The four missing codes (404, 405, 414, 501) are currently cached only when `proxy.config.http.negative_caching_enabled` is set, and even then they use a flat TTL (`negative_caching_lifetime`, default 1800 s) rather than proper LM-factor heuristic freshness. Adding them to the heuristic allowlist would be a behavior change for existing deployments -- particularly for 404, where many operators rely on the current default of *not* caching 404s without explicit negative caching configuration.

Keeping this PR scoped to just 204 and 308; I will open a separate issue to track the remaining gaps.
