On Mon, Apr 27, 2015 at 4:17 PM, Paul Hoffman <[email protected]> wrote:
> On Apr 27, 2015, at 12:50 PM, Christian Huitema <[email protected]> wrote:
>>> Which is why I propose what is in effect a STLS (Stateless TLS) in
>>> which each UDP request packet (optionally) contains the full state
>>> required to decrypt it at the server.
>>
>> Without going into the details, there are two types of solution to the
>> anycast problem: either some form of pinning, so requests from a given
>> context are guaranteed to arrive at the server with that context; or
>> somehow ensuring that the requests carry enough state that they can be
>> understood by any server in the pool.
>>
>> I understand how to do pinning: a first transaction to the anycast
>> address returns the unicast address of the relevant server. Not perfect,
>> because it adds a round trip during the initial setup, but easy to
>> understand.
>>
>> I am not sure about the way to carry "enough state in each request,"
>> especially if we want to do PFS, which means negotiating different
>> session keys over time. I assume that the client could learn a
>> "temporary key" that is understood by all servers in the pool, and use
>> that to encrypt the messages. But then that requires a fair bit of
>> coordination between the servers in the anycast pool.
>
> There is a third solution to the "anycast problem", which is what is done
> today in all systems that use anycast: assume that it happens so rarely
> that a rare reset is just fine.
Yup, a number of content delivery networks do this, including Fastly and
Cachefly
(http://www.cachefly.com/2014/07/11/measuring-throughput-performance-dns-vs-tcp-anycast-routing/ ).
Fastly doesn't seem to have much published about this, but you can easily
test it yourself.

foursquare.com (and fastly.com and ronpaulinstitute.org and aclu.org and
gizmodo.com and about 3,000 other properties) all resolve to something in
the 23.235.33.0/24 subnet. Using RIPE Atlas probes and ping, you can see
that this subnet is ~4ms from Strasbourg (FR), <1ms from Dallas (USA),
<1ms from Ashburn (USA), 4ms from Hong Kong, 4ms from London, and 3ms from
Universitat Heidelberg (DE).

As a rough estimate, signals propagate about 200km per ms, and the above
numbers are RTT (and we'll ignore CPU, routing, etc.). This means that
Fastly is within 400km of Strasbourg, Hong Kong, and London, less than
200km from Dallas, and 300km from Heidelberg. Obviously there is no way
this could be a single location (Strasbourg and Heidelberg are ~130km
apart, and so may share a single node), and so this subnet is (widely)
anycast.

Many (most?) of these properties run HTTPS. From what I hear, Fastly
customers are happy chappies -- TCP anycast works...

W

>
> --Paul Hoffman
> _______________________________________________
> dns-privacy mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/dns-privacy

--
I don't think the execution is relevant when it was obviously a bad idea
in the first place. This is like putting rabid weasels in your pants, and
later expressing regret at having chosen those particular rabid weasels
and that pair of pants.
   ---maf

_______________________________________________
dns-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dns-privacy
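P.S. The back-of-envelope distance bound above can be sketched in a few
lines; this is just the arithmetic from the email (RTT is a round trip, so
the one-way distance is at most half the RTT times ~200km/ms), with the
probe names and rounded RTTs taken from the measurements quoted above --
the function name is mine, not any real tool:

```python
# Rough anycast distance bound: signals cover ~200 km per ms of one-way
# latency, and ping reports round-trip time, so halve the RTT first.
KM_PER_MS = 200  # approximate propagation speed, as assumed in the text


def max_distance_km(rtt_ms: float) -> float:
    """Upper bound, in km, on the distance to the node that answered."""
    return (rtt_ms / 2) * KM_PER_MS


# Rounded RTTs from the RIPE Atlas measurements quoted above.
probes = {
    "Strasbourg (FR)": 4.0,
    "Dallas (USA)": 1.0,  # actually <1 ms, so this bound is generous
    "Hong Kong": 4.0,
    "London": 4.0,
    "Universitat Heidelberg (DE)": 3.0,
}

for place, rtt in probes.items():
    print(f"{place}: node within ~{max_distance_km(rtt):.0f} km")
```

Since no single point on Earth is within a few hundred km of both Dallas
and Hong Kong, the prefix must be announced from multiple locations.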
