Re: Detection of PROXY protocol version and Citrix CIP

2019-11-07 Thread Hugo Slabbert
Apologies as this is *way* overdue; I didn't get the initial reply 
for whatever reason.


Thanks, Willy, for that initial response.  We ended up getting this 
implemented and things worked properly.



By the way, you can currently do this using "expect-proxy layer4" and
"expect-netscaler-cip layer4" in tcp-request rules. If you already
know your clients' addresses (which I'm sure you do), you could use
such rules instead of hardcoding the entry.
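
For later readers of the archive, a hedged sketch of what such tcp-request
rules might look like (the frontend name, port, and source addresses are
placeholders, not taken from the original thread):

```
frontend fe_main
    bind :443
    # accept a PROXY protocol header only from a known load balancer
    tcp-request connection expect-proxy layer4 if { src 192.0.2.10 }
    # accept a NetScaler CIP header only from the NetScaler's address
    tcp-request connection expect-netscaler-cip layer4 if { src 192.0.2.20 }
```

Connections from any other source are then expected to carry neither header.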


Ah, right.  So, in our setup this was just with Netscalers in the path, 
then on to the client, not having haproxy in the path.  The dynamic 
detection of PROXY protocol vs. Netscaler CIP was on the backend server 
application side, not in haproxy config.  So the question here was purely 
around whether this would be breaking the intention of the PROXY protocol 
*spec*, rather than looking for implementation/configuration information on 
haproxy being in path.



Now I have a question: shouldn't we simply move the netscaler CIP
parser to the proxy protocol parser and use a single option? (i.e.
"accept-proxy" would validate both of them, and possibly future ones if
needed.)

It's important to decide before we release 1.7 :-)


That's somewhat out of my wheelhouse in terms of haproxy implementation 
specifics. It looks from the docs for 1.8 that this is still split, so 
¯\_(ツ)_/¯


--
Hugo Slabbert   | email, xmpp/jabber: h...@slabnet.com
pgp key: B178313E   | also on Signal



Re: [PATCH v3] MINOR: stick-table: allow sc-set-gpt0 to set value from an expression

2019-11-07 Thread Willy Tarreau
Hi Cédric,

On Wed, Nov 06, 2019 at 06:38:53PM +0100, Cédric Dufour wrote:
> [sorry for mis-threading; hoping I got the git send-mail --in-reply-to right]

Don't worry about this. The purpose of the list is precisely to act as
humans and not as pre-configured bots. Whatever you can do right which
saves others' time is fine. What you can't easily do is left to others,
that's how we're the most efficient.

> Actually, reading through your original and last comments, I realize I must
> have misunderstood the sample_expr() part and got carried away.
> 
> Unless I'm mistaken, we can use the existing sample_fetch_as_type() function
> directly (without any further addition to sample.c). Or am I missing 
> something ?

Initially when I quickly had a look at it I had the impression that it
would only process a sample fetch function but not the whole expression.
That disturbed me a little bit because I thought we had something to do
this. Now I had a second look after your comment and I think you're
right :-)

> This leaves the switch(rule->from) <-> smp_opt_dir stuff. Since there is
> nothing about act_rule in sample.c, I felt it has no place there.

Absolutely.

> Wouldn't an extra function call just to deal with this switch() be overkill ?

Yes I think it would be. Let's just place your switch() where you need it
and simply rely on sample_fetch_as_type() to do most of the job. I don't
see what could cause trouble there. Oh I'm seeing that it's apparently
what you did in this new version of the patch. I reviewed it very quickly
but at first glance it looks OK.

> Let me know if you still think this ought to go in a separate function
> (like if anticipating set_gpt1 :-) ).

Not now, it looks OK as-is.

Just let me know if you want me to merge this one now or if you made some
extra changes since you posted.

Thanks,
Willy
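
For context in the archive, a hedged configuration sketch of what this patch
series enables (the frontend name, table size, and the rand() expression are
illustrative only, not from the thread):

```
frontend fe_main
    bind :8080
    stick-table type ip size 100k expire 1h store gpt0
    http-request track-sc0 src
    # previously sc-set-gpt0 only accepted an integer constant;
    # with this change the value may come from a sample expression
    http-request sc-set-gpt0(0) rand(100)
```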



Re: PATCH: DNS: enforce resolve timeout for all cases

2019-11-07 Thread Willy Tarreau
On Thu, Nov 07, 2019 at 11:13:48AM +0100, Baptiste wrote:
> Hi,
> 
> Please find in attachment a new patch related to github issue #345.
> Basically, when the resolution status was VALID, we ignored the "timeout
> resolve", which goes against the documentation...
> And as stated in the github issue, there were some impacts: an entire
> backend could go down when the nameserver is not very reliable...

Thanks, now applied to 2.1 down to 1.8.
Willy



Re: [PATCH] bugfix to make do-resolve use the DNS cache

2019-11-07 Thread Willy Tarreau
On Thu, Nov 07, 2019 at 08:40:26AM +0100, Baptiste wrote:
> Hi Willy,
> 
> Please find the patch updated. I also cleared a '{' '}' pair that I had
> added on an if condition. This would make the code "cleaner" but should not
> be part of this patch at all.
> The new patch is in attachment.

Applied, thank you Baptiste!

> Sorry again for the mess.

No problem, shit happens.

Willy



Fwd: Lua + shared memory segment

2019-11-07 Thread Kalantari
Hi Thierry,

I know this is an old thread, but I'm having a similar issue where HAProxy
doesn't allow me to use Redis (sockets are not allowed in fetches). My
scenario is as below:

one frontend and multiple backends (proxies). (The frontend is just one IP
and always sends requests to the same URL.)
I use a Lua script, SmartRouting, to:
A. Look at the body of the message and decide which backend to
use
B. If the backend servers being used are too busy (active sessions >
threshold), send the requests to the offlineAgent

frontend http-in
    bind *:80
    use_backend %[lua.SmartRouting]
    default_backend OnlineChargers

backend Onlineagent
    server testServer1 ${Server_1_IP}:${Server_1_Port} check
backend ServersForRequestTypeX
    server testServer1 ${Server_2_IP}:${Server_2_Port} check
backend offlineAgents
    server testServer1 ${Server_3_IP}:${Server_3_Port} check
    http-response lua.add_DegradedSession

Straightforward up to this point and no need for Redis; however, if a
message is sent to the offlineAgent proxy, then I want all the rest of the
requests to be sent to the offline agent (each message can have a sessionID
inside the payload). I tried to add the sessionID for the messages inside
my SmartRouting to Redis as explained in your blogpost, but HAProxy
throws an error and doesn't allow use of a socket in a sample fetch.
Below is my Lua script:

-- if the number of active sessions goes higher than this threshold,
-- degraded mode is detected
DEGRADED_THRESHOLD = 10
-- global table used for routing requests based on the request-type XML tag
-- if the request type is not found, HAProxy's default backend will be used
-- (NOTE: the XML tag names were stripped from the archived message)
routingTable = {
    [""] = "Onlineagent",
    [""] = "ServersForRequestTypeX"
}

local function SmartRouting(txn)
    --print_r(txn)
    local payload = txn.sf:req_body()
    -- extract the request tag (final occurrence) from the payload and look
    -- it up in the routing table
    -- (NOTE: the extraction code was mangled in the archive; the lookup and
    -- degraded-mode check below reconstruct the apparent intent, with
    -- extractRequestTag() standing in for the lost extraction code)
    local requestTag = extractRequestTag(payload)
    local selectedBackend = routingTable[requestTag]
    if selectedBackend == "Onlineagent" then
        if getTotalSessions() > DEGRADED_THRESHOLD then
            core.Debug("shit is hitting the fan! Sending requests to degraded agent")
            -- TODO: add the sessionID to Redis and, for future requests, if
            -- it's in Redis then redirect to the Offline Agent
            return "offlineAgents"
        end
    end
    return selectedBackend
end

function getTotalSessions()
    local total_sessions = 0
    for _, backend in pairs(core.backends) do
        if backend.name == "Onlineagent" then
            for _, server in pairs(backend.servers) do
                -- Get server's stats
                local stats = server:get_stats()

                -- Add the server's current sessions to the backend total
                if stats["status"] == "UP" then
                    total_sessions = total_sessions + stats["scur"]
                end
            end
        end
    end
    return total_sessions
end

-- register HAProxy "fetch"
core.register_fetches("SmartRouting", SmartRouting)
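
Since sockets are not allowed in sample fetches, one possible socket-free
alternative to Redis is to keep the session state in a plain Lua table inside
haproxy itself (a minimal sketch with hypothetical names; such a table is
private to each haproxy process, is not persisted, and is not shared across
processes):

```lua
-- Hypothetical sketch: track degraded sessions in a Lua table instead of
-- Redis. This state lives only inside the current haproxy process.
local degradedSessions = {}

-- remember that a session was routed to the offline agents
local function rememberDegraded(sessionID)
    degradedSessions[sessionID] = true
end

-- check whether a session was previously routed to the offline agents
local function isDegraded(sessionID)
    return degradedSessions[sessionID] == true
end
```

SmartRouting could then call rememberDegraded(sessionID) before returning
"offlineAgents", and isDegraded(sessionID) early on to route all follow-up
requests for that session straight to the offline agents.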


PATCH: DNS: enforce resolve timeout for all cases

2019-11-07 Thread Baptiste
Hi,

Please find in attachment a new patch related to github issue #345.
Basically, when the resolution status was VALID, we ignored the "timeout
resolve", which goes against the documentation...
And as stated in the github issue, there were some impacts: an entire
backend could go down when the nameserver is not very reliable...

Baptiste
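
For reference, a minimal resolvers section showing the two directives
discussed above (the section name, server address, and timer values are
illustrative only):

```
resolvers mydns
    nameserver ns1 192.0.2.53:53
    # how often a resolution is (re)triggered, regardless of its last status
    timeout resolve 1s
    # how long the last valid answer is kept once resolutions start failing
    hold valid 10s
```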
From d278cff87aa9037f1d05216ea14e2bc8bab5cd2a Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Thu, 7 Nov 2019 11:02:18 +0100
Subject: [PATCH] BUG: dns: timeout resolve not applied for valid resolutions

Documentation states that the interval between 2 DNS resolutions is
driven by the "timeout resolve" directive.
From a code point of view, this was applied unless the latest status of
the resolution was VALID, in which case "hold valid" was enforced instead.
This is a bug, because "hold" timers are not there to drive how often we
want to trigger a DNS resolution, but rather how long we want to keep the
information once the status of the resolution itself has changed.
This avoids flapping and prevents shutting down an entire backend when a
DNS server is not answering.

This issue was reported by hamshiva in github issue #345.

Backport status: 1.8
---
 src/dns.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index 15d40a1..78349a2 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -150,10 +150,7 @@ static inline uint16_t dns_rnd16(void)
 
 static inline int dns_resolution_timeout(struct dns_resolution *res)
 {
-	switch (res->status) {
-		case RSLV_STATUS_VALID: return res->resolvers->hold.valid;
-		default:return res->resolvers->timeout.resolve;
-	}
+	return res->resolvers->timeout.resolve;
 }
 
 /* Updates a resolvers' task timeout for next wake up and queue it */
-- 
2.7.4