My final solution looked like:

backend clientsite_ember
  default-server inter 10s resolve-prefer ipv4
  server cf foobar.cloudfront.net:443 ssl verify required verifyhost foobar.cloudfront.net sni str(foobar.cloudfront.net) ca-file /etc/ssl/certs/ca-certificates.crt check port 80 resolvers dns

(checking port 80 is a workaround for the lack of SNI support in health checks)
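The "resolvers dns" section referenced by that server line isn't shown in the thread; a minimal sketch of what it might look like on HAProxy 1.7 (the nameserver address is a placeholder -- use your own DNS server, e.g. the VPC resolver on EC2):

resolvers dns
  nameserver vpc 10.4.0.2:53
  resolve_retries 3
  timeout retry   1s
  hold valid      10s

With "hold valid 10s", HAProxy re-resolves the CloudFront hostname roughly every ten seconds instead of latching to the address it resolved at startup.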

On May 18, 2017 at 4:33:34 PM, Ryan Schlesinger ([email protected])
wrote:

That’s incredibly insightful of you.  I’ll set up a resolver for all of my
CF uses and report back if I can repro this apart from that config fix.

Thanks!


On May 18, 2017 at 3:42:35 PM, Michael Ezzell ([email protected]) wrote:



On May 18, 2017 3:07 PM, "Ryan Schlesinger" <[email protected]>
wrote:

We have the following backend configuration:

backend clientsite_ember
  server cf foobar.cloudfront.net:443 ssl verify required sni str(foobar.cloudfront.net) ca-file /etc/ssl/certs/ca-certificates.crt

This has been working great with 1.7.2 since February.  I upgraded to 1.7.5
yesterday and today found that all requests through that backend were
returning 503.  Testing the cloudfront url manually loaded the site.

Sample Logs:
May 18 10:13:47 ip-10-4-13-35 haproxy:  <some_ip>:46924
[18/May/2017:17:13:32.237] http-in~ clientsite_ember/cf 0/0/-1/-1/14969 503
212 - - CC--


That second C is significant:

the proxy was waiting for the CONNECTION to establish on the server.
The server might at most have noticed a connection attempt.


You don't have a health check configured.  You don't want option httpchk
with CloudFront, but you do need at least a TCP check.  The endpoint you
were connecting to could have been unavailable.
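A basic TCP check is just the "check" keyword on the server line; a sketch (checking port 80 rather than 443 sidesteps the SNI issue the final solution above mentions):

  server cf foobar.cloudfront.net:443 ssl verify required sni str(foobar.cloudfront.net) ca-file /etc/ssl/certs/ca-certificates.crt check port 80

Without "port 80", the check would connect to port 443, where the TLS handshake performed by the check doesn't send SNI.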

To understand how, take a look at the results of dig
dzzzexample.cloudfront.net.  There will be several responses.  But without
a DNS resolver section configured on the proxy and attached to each backend
server to continually re-resolve the addresses, the proxy will latch on to
just one address and stick to it until restarted.

The DNS responses from CloudFront can vary from day to day or hour to hour,
since the DNS is dynamically derived from their system's current notion of
the "closest" (most optimal) edge relative to where you query DNS from.
From Cincinnati, Ohio, I see DNS responses indicating I'm connecting to
South Bend, IN, one day, Chicago, IL, another, then Ashburn, VA.  As I type
this, I'm actually seeing New York, NY.  (Do a reverse lookup on the IP
addresses currently associated with the CloudFront hostname.  An
alphanumeric code in the hostname gives you the IATA code of the airport
nearest the CloudFront edge in question -- IADx is Ashburn, JFKx is NYC,
etc.)
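As an illustration, a small helper that pulls the airport code out of such a reverse-DNS name (the hostname format shown is an assumption based on the pattern described above, not a documented CloudFront guarantee):

```python
import re
from typing import Optional

def edge_airport_code(ptr_hostname: str) -> Optional[str]:
    """Extract the IATA-style edge code (e.g. 'jfk') from a CloudFront
    reverse-DNS hostname like 'server-203-0-113-10.jfk51.r.cloudfront.net'."""
    m = re.search(r"\.([a-z]{3})\d*\.r\.cloudfront\.net$", ptr_hostname)
    return m.group(1) if m else None

print(edge_airport_code("server-203-0-113-10.jfk51.r.cloudfront.net"))  # jfk
print(edge_airport_code("server-203-0-113-10.iad89.r.cloudfront.net"))  # iad
```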

If CloudFront lost an edge or took one out of DNS rotation and shut it down
for maintenance, the 503s you saw are one behavior HAProxy could be
expected to exhibit, because it wouldn't know.  Unless I missed a memo,
HAProxy only resolves DNS at startup unless configured otherwise.

The browser you tested with would have resolved a different address.

I'm not saying there can't be an issue in 1.7.5 but your configuration
seems vulnerable to service disruptions, since it can't take advantage of
CloudFront's fault tolerance mechanisms.
