Hi Jerry,
Thanks a lot for your long detailed email!
My answers inline.
> We are building a run-time orchestrator (honcho) that will manage the
> running apps inside a Kubernetes pod. We have large, complex apps and we
> often need to do various things that don't fit the typical "docker run".
>
Hi,
First, thank you, Willie, and everyone else who has contributed to this
tool that we use heavily in production.
I can imagine quite a lot of work under the covers had to happen to
enable this. Here is what I am thinking of and some background about it.
Obviously, we are just one use case.
On 16-12-19 16:01:08, Stephan Müller wrote:
> Different services run on the same host, so each also has different
> health checks, balance policies, and so on.
Alright -- please show this in your code, next time.
TIA and all the best,
Georg
On 16-12-19 08:39:17, Stephan Müller wrote:
> Another point I encounter frequently, I use the same server (IPs) in
> multiple backends, this duplicates configuration.
>
> SRV1_IP=192.168.0.1
> CHECK_INTER=1
>
> backend foo
> server service1 $SRV1_IP check inter $CHECK_INTER
>
> backend
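One way to cut that duplication (a sketch only — it assumes an HAProxy version that expands environment variables in the configuration and supports `setenv` in the global section; the backend names, port, and timeouts are illustrative):

```
# haproxy.cfg sketch: define the shared values once, reuse everywhere.
global
    setenv SRV1_IP 192.168.0.1

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

backend foo
    # default-server sets the check parameters once for this backend
    default-server check inter 1s
    server service1 "$SRV1_IP:80"

backend bar
    default-server check inter 1s
    server service1 "$SRV1_IP:80"
```

`default-server` keeps the per-server lines short, and the environment variable keeps the IP in a single place.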
Hi Jerry,
Thanks a lot for your email and your nice feedback!
When checking SRV records, I first did not really know what to do with the
"priority" field; then I thought we could use it to decide which set of
servers to use first (as with the 'backup' keyword), exactly as you
mentioned!
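For reference, newer HAProxy versions (1.8+) can consume SRV records directly through `server-template` plus a `resolvers` section — a minimal sketch, where the resolver address and the SRV name are illustrative assumptions:

```
resolvers mydns
    nameserver dns1 192.168.0.53:53
    resolve_retries 3
    timeout resolve 1s
    hold valid 10s

backend foo
    # Fills up to 5 server slots from the SRV record's targets and ports.
    server-template srv 5 _http._tcp.service.example.com resolvers mydns check
```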
Before
Hello Patrick,
You are right, with "exec" it works:
# systemctl status haproxy.service -l
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/etc/systemd/system/haproxy.service; enabled)
Active: active (running) since Mon 2016-12-19 12:23:28 CET; 1min 17s ago
Process: 25403
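For anyone hitting the same issue, the point is that when `ExecStart` goes through a shell, prefixing the command with `exec` replaces the shell with haproxy, so the PID systemd watches is the haproxy process itself. A minimal sketch of such a unit (paths are illustrative and will differ per distribution):

```
# /etc/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=network-online.target

[Service]
# "exec" makes haproxy take over the shell's PID, so systemd tracks
# the real process instead of a dead intermediate shell.
ExecStart=/bin/sh -c 'exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db'
Restart=always

[Install]
WantedBy=multi-user.target
```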
I have set up an HAProxy config file as follows and am trying to verify the
redispatch function. However, when I set the balance algorithm to "source",
I get 3 retries (from the stats web page) but finally a 503 error; the
request does not redispatch to s2 when I kill s1. Is anything wrong with my
cfg? Any help is appreciated.
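For comparison, a configuration where redispatch does kick in with `balance source` — a sketch only; the addresses, backend name, and timeouts are illustrative:

```
defaults
    mode http
    retries 3
    # Without "option redispatch", retries stay pinned to the same
    # (hashed) server, and a dead s1 ends in a 503 instead of failover.
    option redispatch
    timeout connect 5s
    timeout client  30s
    timeout server  30s

backend web
    balance source
    server s1 192.168.0.11:80 check
    server s2 192.168.0.12:80 check
```

Note that `check` also matters here: once s1 fails its health check it leaves the source hash entirely, so new connections reach s2 even without a retry.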
I've found that with stick tables, the expiry time is only reset when a
connection attempt hits the same stick table entry: traffic has no
bearing. You can use socat on the stats socket to show the expiry time of a
stick table entry.
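For anyone following along, the stats-socket query looks like this — a sketch; the socket path and table name are assumptions, and a `stats socket` line must already exist in the global section:

```
# haproxy.cfg (global section) must expose the socket, e.g.:
#   stats socket /var/run/haproxy.sock mode 600 level admin
# Then dump the table; each entry line shows its remaining expiry in ms:
echo "show table be_foo" | socat stdio /var/run/haproxy.sock
```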
But traffic does have bearing on the client/server/tunnel