[
https://issues.apache.org/jira/browse/TS-2954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14077722#comment-14077722
]
Susan Hinrichs commented on TS-2954:
------------------------------------
I got the basic proxy client address verification support in yesterday. I plan
on doing more tests today and some debug message cleanup and hope to have a
patch for others to try out later today.
One thing I observed in my testing so far: for domains with many addresses
(like google.com, youtube.com, and i.ytimg.com), the servers only seem to
return 10 or so addresses per query. Working against the Google public DNS
server (8.8.8.8), the set of 10 would vary over a broader set of a hundred or
so. So it is quite likely (and I saw this in my testing) that the client picks
a valid address out of one set of addresses for google.com, but the proxy
checks against another set of valid addresses for google.com and so marks the
response as uncacheable.
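To make the failure mode concrete, here is a minimal sketch (illustrative only, not actual Traffic Server code; the function name and addresses are invented) of the verification check and how two valid but disjoint answer sets produce a false mismatch:

```python
# Hypothetical sketch of the proxy-side client address verification
# described above. All names and IPs here are illustrative.

def verify_client_target_addr(client_supplied_ip, proxy_resolved_ips):
    """Return True if the client-chosen origin address appears in the
    proxy's own DNS answer set; a False result means the response is
    flagged non-cacheable."""
    return client_supplied_ip in proxy_resolved_ips

# A large domain publishes far more addresses than one query returns,
# so the client and the proxy can each hold a different, equally valid
# 10-address slice of the full set:
client_answers = {"198.51.100.%d" % i for i in range(1, 11)}   # client's slice
proxy_answers  = {"198.51.100.%d" % i for i in range(11, 21)}  # proxy's slice

client_pick = "198.51.100.3"   # valid for the domain, absent from proxy's slice
cacheable = verify_client_target_addr(client_pick, proxy_answers)
# cacheable is False: a false mismatch, so the response is not cached
```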
Not sure there is much to be done here. Once an item is cached, the mismatch
won't matter. But with false mismatches between the client and proxy DNS
lookups, the number of requests needed to get an item into the cache will
increase.
One could consider tracking both a validation set and a current address set in
hostDB. Old address sets would be moved into the validation set and used only
to validate client-specified origin server addresses.
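A rough sketch of that two-set idea, assuming a per-host record in hostDB (the class and method names are hypothetical, purely to illustrate the aging scheme):

```python
# Illustrative sketch (not ATS code) of keeping both a current address
# set and a validation set per host, as suggested above. Older DNS
# answer sets age into the validation set, which is consulted only when
# checking a client-supplied origin address.

class HostRecord:
    def __init__(self):
        self.current = set()      # most recent DNS answer set
        self.validation = set()   # union of older answer sets

    def update(self, new_answers):
        # Age the previous current set into the validation set before
        # replacing it with the fresh answers.
        self.validation |= self.current
        self.current = set(new_answers)

    def is_valid_origin(self, ip):
        # Accept addresses seen in the current or any prior answer set,
        # reducing false mismatches when the server rotates its answers.
        return ip in self.current or ip in self.validation

rec = HostRecord()
rec.update(["198.51.100.1", "198.51.100.2"])    # first lookup
rec.update(["198.51.100.11", "198.51.100.12"])  # rotated answer set
rec.is_valid_origin("198.51.100.1")  # True: matched via the validation set
```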
Looking at +edns with dig, it doesn't seem that more IPs are returned in that
case either, though I only did some very basic checks. It could well be that
some DNS servers do return more addresses with EDNS support. When we rework
the hostDB DNS caching logic, EDNS support should also be added.
Any other ideas or suggestions?
> cache poisoning due to proxy.config.http.use_client_target_addr = 1
> -------------------------------------------------------------------
>
> Key: TS-2954
> URL: https://issues.apache.org/jira/browse/TS-2954
> Project: Traffic Server
> Issue Type: Bug
> Components: Cache, DNS, Security, TProxy
> Reporter: Nikolai Gorchilov
> Assignee: Alan M. Carroll
> Priority: Critical
>
> The current implementation of proxy.config.http.use_client_target_addr opens
> a very simple attack vector for cache poisoning in transparent forwarding
> mode. An attacker (or malware installed on an innocent end-user's computer)
> puts a fake IP for a popular website like www.google.com or www.facebook.com
> in the hosts file on a PC behind the proxy. Once the infected PC requests the
> webpage in question, a cacheable fake response poisons the cache.
> In order to prevent such scenarios (as well as [some
> others|http://www.kb.cert.org/vuls/id/435052]), Squid has implemented a
> mechanism known as [Host Header Forgery
> Detection|http://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery].
> In short, while requesting a URL from the origin server IP hinted by the
> client, the proxy makes an independent DNS query in parallel in order to
> determine whether the client-supplied IP belongs to the requested domain
> name. In case of a discrepancy between the DNS result and the client IP, the
> transaction shall be flagged as non-cacheable to avoid possible cache
> poisoning, while still serving the origin response to the client.
--
This message was sent by Atlassian JIRA
(v6.2#6252)