Willy,

Such a great idea! We can help with testing.
-- 
Best regards,
Eugene Istomin



> Hi Pawel,
> 
> On Fri, Nov 22, 2013 at 07:54:18PM -0800, Pawel Veselov wrote:
> > Hi.
> > 
> > There was a heated discussion on this about 2 years back, so sorry
> > for reopening any wounds. Also sorry for the long-winded intro.
> > 
> > My understanding is that neither 1.4 nor 1.5 is planned to have any
> > support for resolving server addresses during normal operation;
> > i.e. addresses are always resolved at start-up.
> 
> There have been some improvements in this area. The ongoing connection
> rework being done for server-side keep-alive is designed with this in
> mind, so that we'll be able to perform DNS resolution during health
> checks and change the server's address on the fly without causing
> trouble to pending connections. Right now the session does not need the
> server's address until one exact instant, just prior to connecting, and
> this address is immediately copied into the connection and not reused.
> So that is compatible with the ability to change an address on the fly,
> even possibly from the CLI. I find it reasonable to check the DNS for
> each health check, since a health check defines the update period
> you're interested in.
> 
> This will also help people running in environments like EC2 where
> everything changes each time you sneeze. But that's not done yet :-)
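As an illustration of the mechanism described above (a toy model, not haproxy internals; all names here are made up): the address is re-resolved once per health check and only copied into *new* connections, so connections already in flight keep the address they started with.

```python
import socket

class Server:
    """Toy model: DNS is re-checked on each health check, and the
    current address is copied at connect time and never reused."""

    def __init__(self, hostname, resolver=socket.gethostbyname):
        self.hostname = hostname
        self.resolver = resolver              # injectable for testing
        self.addr = self.resolver(hostname)   # start-up resolution

    def health_check(self):
        # Re-resolve once per check; the check interval thus defines
        # the DNS update period.
        try:
            self.addr = self.resolver(self.hostname)
            return True
        except OSError:
            return False

    def connect(self):
        # The address is copied into the new "connection"; a later
        # change does not disturb connections made earlier.
        return {"peer": self.addr}
```

A forced address change via the injected resolver shows that existing connections are untouched while new ones pick up the new address.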
> 
> > One of the ways I would like to use ha-proxy is as a pure TCP
> > proxy to a database server that provides fail-over through DNS.
> 
> Indeed it could also work for such use cases.
> 
> > The problem with connecting the application directly to such a database
> > is that when the database goes down, the previous IP address effectively
> > goes "dark", and I don't even get TCP resets on previously
> > established connections.
> 
> That's not exactly true, because "on-marked-down shutdown-sessions"
> exists for this exact purpose.
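For example (a sketch; the proxy name and address are illustrative), "on-marked-down shutdown-sessions" makes haproxy actively kill established sessions the moment a health check marks the server down:

```
listen pgsql
    mode tcp
    bind :5432
    # Kill already-established sessions as soon as the health check
    # marks the server down, instead of letting them linger:
    server db1 203.0.113.10:5432 check on-marked-down shutdown-sessions
```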
> 
> (...)
> 
> > I tried using ha-proxy for this. The idea was: if ha-proxy determines
> > that the server is "down", it will quickly snip both previously
> > established and newly established connections, so I won't have to incur
> > the blocking associated with those. So ha-proxy is a perfect tool to
> > hide the unreachable-server problem from the application's perspective.
> > This actually worked great in my test: once I simulated a database
> > failure, there was absolutely no blocking on database operations (sure,
> > there were failures to connect to it, but that's fine).
> 
> That's already present :-)
> 
> > What remains a problem is that because the fail-over changes the IP
> > address behind the server name, ha-proxy is not able to pick up the
> > new address. It would really be perfect if it could; otherwise, that
> > "backend" just never recovers.
> > 
> > Now, I have no control over this fail-over implementation. I have no
> > control over network specifics and application framework either. I can
> > fiddle with the JDBC driver, but it will probably be more tedious and
> > throw-away than the following.
> > 
> > Would anybody be interested in an optional address-modifier parameter,
> > say "@chk:<n:m>" as a suffix to a host name, to enable ha-proxy to
> > re-check the specified name every <n> seconds after the initial
> > resolution? It could also mark the server as "down" if the name fails
> > to resolve after <m> checks. An <n> of 0 would mean no checks past the
> > initial resolution, which is the default and current behaviour; an <m>
> > of 0 would mean never failing on resolution errors.
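To make the proposal concrete, a server line might have looked like this (the "@chk" suffix is purely hypothetical syntax from this proposal, not an actual haproxy keyword):

```
backend pgsql
    mode tcp
    # Proposed: re-resolve db.example.com every 30 seconds, and mark
    # the server down after 5 consecutive resolution failures:
    server db1 db.example.com@chk:30:5 check
```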
> 
> I'd rather have the DNS servers defined in backends and inherited from
> defaults, so that it's possible to specify them once. Also, I think your
> <m> parameter above is really a property of the DNS servers themselves,
> just a cache duration, so I'd put that in their settings. The <n>
> parameter should probably be covered by the server's check interval.
> That also has the benefit of respecting the fastinter and downinter
> values, so that we don't resolve DNS too fast when the server is down.
> I'd also add support for preventing the resolution from being performed
> too fast, enforcing that the cached value be kept for a configurable
> amount of time (eg: min-cache).
> 
> Thus we could even have dedicated "resolver" sections, just like we have
> "peers". It would also help with putting some static information in later.
> It could look like this:
> 
>     resolver local-dns
>         server dns1 192.168.0.1 cache 1m min-cache 10s
>         server dns2 192.168.0.2 cache 1m min-cache 10s
> 
>     backend foo
>         use-resolver local-dns   # (could also be put in defaults)
>         server s1 name1.local:80 resolve check
>         server s2 name2.local:80 resolve check
> 
> Thinking about it a bit more, I'd rather have the ability to specify
> the resolver to use on each "server" line, so that when you LB between
> local and remote servers, you can use different resolvers:
> 
>     resolver private-dns
>         server dns1 192.168.0.1 cache 1m min-cache 10s
>         server dns2 192.168.0.2 cache 1m min-cache 10s
> 
>     resolver public-dns
>         server dns1 4.4.4.4 cache 1m min-cache 10s
>         server dns2 8.8.8.8 cache 1m min-cache 10s
> 
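A rough sketch in Python of the cache/min-cache semantics proposed above (names are illustrative, not haproxy internals): a lookup result stays valid for up to `cache` seconds, and even a requested refresh is refused sooner than `min_cache` seconds after the last resolution.

```python
import time

class CachingResolver:
    """Model of 'cache 1m min-cache 10s': serve the cached address for
    up to `cache` seconds, and never hit the DNS server again sooner
    than `min_cache` seconds after the previous resolution."""

    def __init__(self, resolve, cache=60.0, min_cache=10.0,
                 clock=time.monotonic):
        self.resolve = resolve    # real DNS lookup, injectable for tests
        self.cache = cache
        self.min_cache = min_cache
        self.clock = clock        # injectable clock for tests
        self.addr = None
        self.resolved_at = None

    def lookup(self, name, force=False):
        now = self.clock()
        age = None if self.resolved_at is None else now - self.resolved_at
        expired = age is None or age >= self.cache
        # min-cache: refuse to resolve again too soon after the previous
        # resolution, even when a refresh is explicitly requested.
        allowed = age is None or age >= self.min_cache
        if (expired or force) and allowed:
            self.addr = self.resolve(name)
            self.resolved_at = now
        return self.addr
```

With an injected fake clock, a forced lookup 5 seconds after a resolution returns the cached address (blocked by min-cache), one after 15 seconds resolves again, and a plain lookup resolves only once the cache duration has elapsed.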
