HAProxy and site failover

2015-03-20 Thread Brendan Kearney
hi, first time / long time...

i am wondering if the ability exists in HAProxy to reply to an HTTP proxy
request with a reset (RST) if no backend server is available.

the scenario goes as such:
i have a proxy pac file that assigns multiple proxies to all clients,
and through the logic tree in the pac file, the proxies are assigned in
a specific order.  i have multiple sites with load balanced proxies, and
the intention is to provide site failover, should a larger event occur
like ISP issues that breaks internet access.  with the pac file
assigning all proxy VIPs to the client, should the default VIP not have
an available backend server to fulfill the request, i would want to
configure HAProxy to send a reset to the client, indicating that the
next assigned proxy should be used.

with site failover happening transparently, a user who would normally
browse through the proxy/proxies at site1 would be automatically failed
over and browse through the proxy/proxies at site2.  if no servers were
available in site2, then the next assigned proxy would be used and
failures with RST replies would result in failovers until all assigned
proxies are exhausted.

the intention is not to provide / assign hundreds of proxies in the pac
file, but to provide resiliency with a couple of sites serving as
backups to each other, should an event warrant it.
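the pac logic described above can be sketched like this (hostnames and the helper stand-in are hypothetical; browsers try each PROXY entry in order and fail over to the next when a connection attempt fails, e.g. with a RST):

```javascript
// dnsDomainIs is normally supplied by the browser's PAC runtime;
// a minimal stand-in is defined so the sketch runs outside a browser.
function dnsDomainIs(host, domain) {
  return host.endsWith(domain);
}

function FindProxyForURL(url, host) {
  // internal hosts bypass the proxies entirely
  if (dnsDomainIs(host, ".internal.example.com")) {
    return "DIRECT";
  }
  // site1 VIP first, site2 VIP as backup; the browser moves to the
  // next entry when the previous proxy refuses the connection
  return "PROXY proxy-vip.site1.example.com:3129; " +
         "PROXY proxy-vip.site2.example.com:3129";
}
```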

thank you,

brendan kearney




Re: HAProxy and site failover

2015-03-21 Thread Brendan Kearney
On Sat, 2015-03-21 at 14:03 +0100, Lukas Tribus wrote:
  haproxy is a tcp (layer 3/4) proxy that can perform application (layer
  7) functions. i am already doing service checks against my proxies to
  validate their availability. when no pool member is available, haproxy
  knows it. there are no external helpers needed to make this
  determination. the layer 7 capabilities make this possible.
 
  the injection of a RST is part-and-parcel to the tcp proxy
  functionality. i can understand if the functionality is not in haproxy,
  but it is not outside the realm of capability for a t.
 
 The 3 way TCP handshake happens before the application (haproxy) is even
 aware of the session, therefore this is only possible if the kernel handles
 it (iptables), which is why I said it's only possible with external helpers.
 
 Or is what you are requesting to send a RST in the middle of an already
 established TCP session?
 
 
 Please CC the mailing list.
 
 
 Lukas
 
 

sorry, thought i did cc the list.

i will have to test out the behavior, as this is an implemented solution
where i work, using other products.  i can test a couple of different
scenarios that come to mind.

1, new browser session comes in to the load balancer, and no backend
servers are available.  where / when is the RST sent?

2, a session to the load balancer exists, and the backend servers become
unavailable.  where / when is the RST sent?

i'll run these scenarios and let you know what i find in a packet
capture.




Re: HAProxy and site failover

2015-03-23 Thread brendan kearney
I have confirmed the behavior.  In both cases all new connections receive a
RST when a backend server is not available to service the request.  The
SYN is answered with a RST in both cases.  Any existing connections time out.
On Mar 21, 2015 9:11 AM, Brendan Kearney bpk...@gmail.com wrote:

 On Sat, 2015-03-21 at 14:03 +0100, Lukas Tribus wrote:
   haproxy is a tcp (layer 3/4) proxy that can perform application (layer
   7) functions. i am already doing service checks against my proxies to
   validate their availability. when no pool member is available, haproxy
   knows it. there are no external helpers needed to make this
   determination. the layer 7 capabilities make this possible.
  
   the injection of a RST is part-and-parcel to the tcp proxy
   functionality. i can understand if the functionality is not in haproxy,
   but it is not outside the realm of capability for a t.
 
  The 3 way TCP handshake happens before the application (haproxy) is even
  aware of the session, therefore this is only possible if the kernel handles
  it (iptables), which is why I said it's only possible with external helpers.
 
  Or is what you are requesting to send a RST in the middle of an already
  established TCP session?
 
 
  Please CC the mailing list.
 
 
  Lukas
 
 

 sorry, thought i did cc the list.

 i will have to test out the behavior, as this is an implemented solution
 where i work, using other products.  i can test a couple of different
 scenarios that come to mind.

 1, new browser session comes in to the load balancer, and no backend
 servers are available.  where / when is the RST sent?

 2, a session to the load balancer exists, and the backend servers become
 unavailable.  where / when is the RST sent?

 i'll run these scenarios and let you know what i find in a packet
 capture.




SSL errors with HAProxy

2015-09-08 Thread Brendan Kearney
i am not sure what i am doing wrong, but i keep getting errors in my 
browser when trying to browse to my site.  i just moved from an old OS 
and HAProxy instance to current, and may have issues with config 
directives to work out.  please be patient :)


just about every third request works.  otherwise i get "Error code: 
ssl_error_rx_record_too_long" errors in firefox.  if i try to reload the 
page, it takes a couple of tries, but it does finally load.  i have no idea 
why the error occurs for some requests but not all.  relevant 
info below.  any help is appreciated.  any other info needed is available 
upon request:


version: 1.5.14

haproxy.cfg (edited for relevance, brevity):

global
#debug
daemon
log localhost local1 notice
log-send-hostname router
#uid 996
#gid 995
maxconn 1024
pidfile /var/run/haproxy.pid
stats socket /var/run/haproxy.sock level admin
stats maxconn 2
tune.ssl.default-dh-param 2048

defaults
balance leastconn

log global

mode http

option httplog
option http-server-close
option forwardfor except 127.0.0.0/8

stats enable
stats hide-version
stats refresh 5s
stats scope   .
stats show-legends
stats uri /admin?stats

timeout http-request 10s
timeout queue   1m
timeout connect 10s
timeout client  1m
timeout server  1m
timeout http-keep-alive 10s
timeout check   10s

listen https 192.168.120.2:443
bind 192.168.120.2:443 ssl crt /etc/haproxy/www.pem
server www1 192.168.88.1:80
server www2 192.168.88.2:80
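one thing worth checking, offered as an assumption rather than a confirmed diagnosis: the address appears both on the "listen" line and on the "bind" line, which creates two listening sockets on 192.168.120.2:443, only one of which speaks TLS.  connections that land on the plain socket would produce exactly the ssl_error_rx_record_too_long symptom, intermittently.  a sketch with a single ssl-enabled bind:

```
listen https
    bind 192.168.120.2:443 ssl crt /etc/haproxy/www.pem
    server www1 192.168.88.1:80
    server www2 192.168.88.2:80
```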



Fwd: Re: [squid-users] intercepting traffic

2015-12-03 Thread Brendan Kearney
i am looking to set up a transparent intercepting proxy, where i use 
iptables to DNAT traffic on port 80 and redirect it to HAProxy, and in 
turn load balance to Squid for fulfillment.  the DNAT to HAProxy works 
and the load balancing to Squid works, but Squid sees the request without 
the correct or full request URL.


the lovely and helpful Squid folks have said:

Whatever is receiving the packet from DNAT has to also translate the 
HTTP layer messages from origin relative-URI format to intermediary 
absolute-URI format.


while i understand what is being said, i don't know how to implement 
this in HAProxy.  Where do i go for more info around how to set this up 
in HAProxy?  Any help is greatly appreciated.


TIA,

brendan

 Forwarded Message 
Subject:Re: [squid-users] intercepting traffic
Date:   Fri, 20 Nov 2015 17:12:02 +1300
From:   Amos Jeffries <squ...@treenet.co.nz>
To: squid-us...@lists.squid-cache.org



On 20/11/2015 1:09 p.m., Brendan Kearney wrote:

when i put in just the DNAT that sends the traffic to the proxy VIP and
load balances the requests to the squid instances on port 3128 (not the
intercept port), i issue a curl command:

curl -vvv --noproxy squid-cache.org http://squid-cache.org/

and get an error page saying:

...
The following error was encountered while trying to retrieve the URL:
/


is the DNAT stripping header info, such as the Host header, or am i
still missing something?


HTTP != TCP/IP ... DNAT is only changing the IP:port details.

Whatever is receiving the packet from DNAT has to also translate the
HTTP layer messages from origin relative-URI format to intermediary
absolute-URI format.
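to illustrate the two message formats Amos describes (hostname reused from the curl example above; the "#" lines are annotations, not part of HTTP): a client talking to an origin server sends the origin (relative-URI) form, while a client that knows it is talking to a proxy sends the absolute form:

```
# origin (relative-URI) form, what DNAT delivers unchanged:
GET / HTTP/1.1
Host: squid-cache.org

# absolute-URI form, what an intermediary proxy expects to receive:
GET http://squid-cache.org/ HTTP/1.1
Host: squid-cache.org
```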

That rule-of-thumb "MUST rule" you mentioned earlier is about those two
DNAT and HTTP translation operations being required to be done together
on the same machine. It is not limited to Squid. It could be HAProxy or
some other LB software responsible for doing it.

Squid is just the only software which actually tells you up front about
the issue, instead of leaving other software later on down the transfer
chain (possibly in somebody else's network) to break with errors like you
see above.

Amos






Set the URI

2015-12-05 Thread Brendan Kearney
I am trying to use HAProxy to perform http interception and 
transparently proxy outbound http traffic.  i am having a dog of a time 
trying to get this working.  I need to rewrite the GET line on a request 
so that the request is for the absolute URL, and not the relative URI.


i found this article:
http://www.haproxy.com/doc/aloha/7.0/haproxy/http_rewriting.html#rewriting-http-urls

and the "Set the URI" section of that page is exactly what i want to do, 
but i need to do it in the community version of HAProxy, not Aloha.


i can't seem to work out how to capture or extract the 
request URL and rewrite the GET line to contain that info.  Can someone 
point me in the right direction on what i need to do?


thanks in advance,

brendan



Re: Set the URI

2015-12-20 Thread Brendan Kearney

On 12/05/2015 03:42 PM, Brendan Kearney wrote:
I am trying to use HAProxy to perform http interception and 
transparently proxy outbound http traffic.  i am having a dog of a 
time trying to get this working.  I need to rewrite the GET line on a 
request so that the request is for the absolute URL, and not the 
relative URI.


i found this article:
http://www.haproxy.com/doc/aloha/7.0/haproxy/http_rewriting.html#rewriting-http-urls 



and the "Set the URI" section of that page is exactly what i want to 
do, but i need to do it in the community version of HAProxy, not Aloha.


i can't seem to work out how to capture or extract the 
request URL and rewrite the GET line to contain that info. Can 
someone point me in the right direction on what i need to do?


thanks in advance,

brendan
no feedback on how to do this?  this would be a really helpful function 
to have working.  any help is appreciated.


thanks,

brendan



Re: Set the URI

2015-12-21 Thread Brendan Kearney

On 12/21/2015 11:09 AM, Willy Tarreau wrote:

On Mon, Dec 21, 2015 at 10:54:00AM -0500, Brendan Kearney wrote:

rpm -qi haproxy
Name: haproxy
Version : 1.5.12
Release : 1.fc20
Architecture: x86_64

i did try and it seems the version might be at issue..

This config stanza:

backend tproxy
 acl https ssl_fc

 http-request set-uri http://%[req.hdr(Host)]%[path]?%[query] unless https

 server proxy1 192.168.88.1:3128 check inter 1
 server proxy2 192.168.88.2:3128 check inter 1

results in the below error message:

[ALERT] 354/105146 (26637) : parsing [/etc/haproxy/haproxy.cfg:149]:
'http-request' expects 'allow', 'deny', 'auth', 'redirect', 'tarpit',
'add-header', 'set-header', 'replace-header', 'replace-value',
'set-nice', 'set-tos', 'set-mark', 'set-log-level', 'add-acl',
'del-acl', 'del-map', 'set-map', but got 'set-uri'.
[ALERT] 354/105146 (26637) : Error(s) found in configuration file :
/etc/haproxy/haproxy.cfg

Indeed, set-uri appeared in 1.6.

I don't see how to do it without set-uri. The only action which can
affect the request URI is reqrep, and it doesn't take variables nor
sample fetches on input. Thus you'll have to upgrade your version
I'm afraid.

Regards,
Willy

the feedback is most appreciated.  I am in a holding pattern for 
upgrades right now, but will keep that in mind.  Thanks a bunch for 
confirming.


brendan



Re: Set the URI

2015-12-21 Thread Brendan Kearney

On 12/21/2015 01:20 AM, Willy Tarreau wrote:

On Sun, Dec 20, 2015 at 09:31:45PM -0500, Brendan Kearney wrote:

On 12/05/2015 03:42 PM, Brendan Kearney wrote:

I am trying to use HAProxy to perform http interception and
transparently proxy outbound http traffic.  i am having a dog of a
time trying to get this working.  I need to rewrite the GET line on a
request so that the request is for the absolute URL, and not the
relative URI.

i found this article:
http://www.haproxy.com/doc/aloha/7.0/haproxy/http_rewriting.html#rewriting-http-urls


and the "Set the URI" section of that page is exactly what i want to
do, but i need to do it in the community version of HAProxy, not Aloha.

i can't seem to work out how to capture or extract the
request URL and rewrite the GET line to contain that info. Can
someone point me in the right direction on what i need to do?

thanks in advance,

brendan

no feedback on how to do this?  this would be a really helpful function
to have working.  any help is appreciated.

Did you try what you pointed above ? What you find for ALOHA will most
often work on a recent mainline haproxy since aloha includes haproxy plus
some backports. What version are you using ?

That just makes me realize that if people are linking to ALOHA docs, we
should indicate in the aloha docs what mainline version of haproxy is
included in each aloha version. I'll check this with Baptiste.

Regards,
Willy


rpm -qi haproxy
Name: haproxy
Version : 1.5.12
Release : 1.fc20
Architecture: x86_64

i did try and it seems the version might be at issue..

This config stanza:

backend tproxy
acl https ssl_fc

http-request set-uri http://%[req.hdr(Host)]%[path]?%[query] unless https


server proxy1 192.168.88.1:3128 check inter 1
server proxy2 192.168.88.2:3128 check inter 1

results in the below error message:

[ALERT] 354/105146 (26637) : parsing [/etc/haproxy/haproxy.cfg:149]: 
'http-request' expects 'allow', 'deny', 'auth', 'redirect', 'tarpit', 
'add-header', 'set-header', 'replace-header', 'replace-value', 
'set-nice', 'set-tos', 'set-mark', 'set-log-level', 'add-acl', 
'del-acl', 'del-map', 'set-map', but got 'set-uri'.
[ALERT] 354/105146 (26637) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg




Re: proper https interception

2016-07-17 Thread Brendan Kearney

On 07/17/2016 04:59 PM, Evgeniy Sudyr wrote:

Brendan,

I'm also interested in this topic, as our company is preparing to
switch most traffic to SSL soon.

What I found so far are these quite informative articles:

1) 
http://blog.haproxy.com/2013/09/16/howto-transparent-proxying-and-binding-with-haproxy-and-aloha-load-balancer/
2) 
http://loadbalancer.org/blog/configure-haproxy-with-tproxy-kernel-for-full-transparent-proxy

Also, you did not post your iptables config or the routing rules on your
backend servers (they need to send replies for "spoofed" IPs back to the
HAProxy servers; tcp mode, right?), all of which are very important for a
working tproxy config.

Let me know your results when you get them.

Btw, I will be glad to see working configs from other community
members. Thank you all in advance!

--
Evgeniy

On Sun, Jul 17, 2016 at 10:19 PM, Brendan Kearney <bpk...@gmail.com> wrote:

i have iptables configured to redirect outbound HTTP to HAProxy, and then
load balance to a couple of squid instances.  the below works well:

backend tproxy
 acl https ssl_fc
 http-request set-uri http://%[req.hdr(Host)]%[path]?%[query] unless https
 ...

i have tried to perform HTTPS interception using the below, in addition to
the redirect of HTTPS traffic to the HAProxy VIP:

 http-request set-method CONNECT if https
 http-request set-uri https://%[req.hdr(Host)]%[path]?%[query] if https

this does not seem to work as expected.  where can i find more info on
performing HTTPS interception, for transparent proxying?  any help would be
appreciated.

thanks,

brendan




HAProxy does not need the kernel to have nonlocal binding turned on, as 
i am performing DNAT with IPTables:


# Rule 5 (NAT)
#
echo "Rule 5 (NAT)"
#
$IPTABLES -t nat -N Cid130089X1041.0
$IPTABLES -t nat -A PREROUTING -p tcp -m tcp  -s 192.168.1.4 
--dport 80 -j Cid130089X1041.0
$IPTABLES -t nat -A PREROUTING -p tcp -m tcp  -s 192.168.1.5 
--dport 80 -j Cid130089X1041.0
$IPTABLES -t nat -A PREROUTING -p tcp -m tcp  -s 192.168.1.200 
--dport 80 -j Cid130089X1041.0
$IPTABLES -t nat -A PREROUTING -p tcp -m tcp  -s 192.168.24.1 
--dport 80 -j Cid130089X1041.0
$IPTABLES -t nat -A PREROUTING -p tcp -m tcp  -s 192.168.24.2 
--dport 80 -j Cid130089X1041.0
$IPTABLES -t nat -A PREROUTING -p tcp -m tcp  -s 192.168.24.4 
--dport 80 -j Cid130089X1041.0

$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.1.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.24.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.88.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.100.1  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.120.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.152.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.184.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.185.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.216.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0  -d 192.168.248.0/24  -j RETURN
$IPTABLES -t nat -A Cid130089X1041.0 -p tcp -m tcp   --dport 80 -j 
DNAT --to-destination 192.168.120.1:3129


i use FWBuilder to create my iptables policy, and the above takes 
traffic from some sources and DNATs their outbound traffic on port 80 to 
my proxy VIP on port 3129.  this load balances to squid, which satisfies 
the request.  i am not doing full transparent proxying because my load 
balancer is also my router/firewall, and out-of-state or asymmetrically 
routed traffic will be dropped by the firewall.  this means that i am setting 
and using the X-Forwarded-For header, and in squid i digest that header 
for the client IP.
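on the squid side, digesting the header looks roughly like this (a sketch, assuming squid was built with --enable-follow-x-forwarded-for; the balancer address is taken from the DNAT rule above):

```
# squid.conf: trust X-Forwarded-For only when it arrives from the balancer
acl balancer src 192.168.120.1
follow_x_forwarded_for allow balancer
follow_x_forwarded_for deny all
```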


the squid servers see the routers local interface as the source of the 
connection, and reply back to it.  the DNAT is unraveled/undone on the 
return trip to the client, from the router.  because of this, there is 
no special routing needed on the servers, and only their default route 
is required.


what i need is the know-how to intercept the HTTPS.  this requires a 
change to the METHOD, and the URI.  i am not sure how to go about that, 
and am looking for more reading material on the subject.




proper https interception

2016-07-17 Thread Brendan Kearney
i have iptables configured to redirect outbound HTTP to HAProxy, and 
then load balance to a couple of squid instances.  the below works well:


backend tproxy
acl https ssl_fc
http-request set-uri http://%[req.hdr(Host)]%[path]?%[query] unless https

...

i have tried to perform HTTPS interception using the below, in addition 
to the redirect of HTTPS traffic to the HAProxy VIP:


http-request set-method CONNECT if https
http-request set-uri https://%[req.hdr(Host)]%[path]?%[query] if https


this does not seem to work as expected.  where can i find more info on 
performing HTTPS interception, for transparent proxying?  any help would 
be appreciated.


thanks,

brendan



transparent or intercepting proxy with https

2016-09-20 Thread Brendan Kearney
i am trying to set up a transparent or intercepting proxy that works 
with HTTPS, and have hit a bit of a wall.


i am using IPTables to intercept the port 80 and 443 traffic, and 
DNAT'ing the traffic to a HAProxy VIP.


i have the front end configured as such:

frontend tproxy
bind 192.168.120.1:3129
option httplog
option http-server-close
option forwardfor except 127.0.0.0/8
default_backend tproxy

the backend is where i have problems.

backend tproxy
acl https ssl_fc

http-request set-uri http://%[req.hdr(Host)]%[path]?%[query] unless https


http-request set-method CONNECT if https
http-request set-uri https://%[ssl_fc_sni] if https

server proxy1 192.168.88.1:3129 check inter 1
server proxy2 192.168.88.2:3129 check inter 1

right now, HTTP interception works without issue.  as i understand 
things having read through some docs, the acl will never match HTTPS 
traffic that is to be proxied, because the front end bind statement does 
not have the "ssl" option.  subsequently, the rewrites of the method and 
uri will never happen.  i also believe the rewrite of the uri will not 
work because ssl_fc_sni requires the "ssl" option be present on the bind 
line for the front end.  that leads me to wonder how i differentiate 
between HTTP and HTTPS in a transparent proxy scenario.  would 
req.proto_http be appropriate?  being that the match does not occur 
until the request is complete, i am not sure.


once i am properly differentiating between HTTP and HTTPS traffic, what 
would be the correct way to rewrite the uri?  i think req.ssl_sni is the 
value i need to use, instead of ssl_fc_sni.
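a sketch of the tcp-mode variant (an assumption, not a tested config): req.ssl_sni is a content fetch, so it needs an inspect delay to let the TLS ClientHello arrive before it can match:

```
frontend tproxy-https
    mode tcp
    bind 192.168.120.1:3130
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend tproxy-https

backend tproxy-https
    mode tcp
    # req.ssl_sni is usable here, e.g. for logging or server switching
    server proxy1 192.168.88.1:3129 check inter 1
    server proxy2 192.168.88.2:3129 check inter 1
```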


any insight is appreciated.

thank you,

brendan



Re: http-reuse always, work quite well

2016-10-22 Thread Brendan Kearney

On 10/22/2016 02:08 AM, Willy Tarreau wrote:


You're welcome. Please note that the reuse mechanism is not perfect and
can still be improved. So do not hesitate to report any issue you find,
we definitely need real-world feedback like this. I cannot promise that
every issue will be fixed, but at least we need to consider them and see
what can be done.

Cheers,
Willy

i have http interception in place, using iptables/DNAT to redirect 
traffic to haproxy and load balance to 2 squid instances.  i was using 
aggressive mode http-reuse, and it seemed to provide a better streaming 
experience for roku/sling.  after a period of time, the performance 
degraded and the experience was worse than the original state.  buffering, 
lag and pixelation were the symptoms.  i did not try the always 
mode, and turned http-reuse off for the interception i am doing.  the 
issue has cleared since.


while interception and transparent proxying seem to be problematic, 
explicit proxying and internal http have both seen a marked improvement 
in performance.  no scientific collection of data has been done, but 
page load times have been noticeably improved.  i may move from 
aggressive to always for these backends.


keep up the good work, and thanks for some really great software,

brendan




Re: OneConnect feature in HAProxy

2017-05-25 Thread Brendan Kearney

On 05/25/2017 08:26 AM, James Stroehmann wrote:


Is there a feature in HAProxy similar to OneConnect that the F5 LTM 
has? https://www.f5.com/pdf/deployment-guides/oneconnect-tuning-dg.pdf


I am trying to migrate some frontends from an LTM to an HAProxy load 
balancer, and a few of the existing frontends have the OneConnect 
feature turned on. I spoke to the app owner and he believes that it 
allows us to have less connections (and therefore less backend 
servers) and it enables more seamless rolling bounces on the stateless 
backends.



http-reuse is the directive you are looking for.
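a minimal sketch of the directive in a backend (server names and addresses are hypothetical; the policy levels are never, safe, aggressive and always):

```
backend app
    mode http
    # share idle server-side connections across client sessions,
    # similar in spirit to F5 OneConnect
    http-reuse safe
    server app1 192.0.2.10:8080 check
    server app2 192.0.2.11:8080 check
```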



invalid request

2021-12-28 Thread brendan kearney
list members,

i am running haproxy, and see some errors with requests.  i am trying to
understand why the errors are being thrown.  haproxy version and error
info below.  i am thinking that the host header is being exposed outside
the TLS encryption, but cannot be sure that is what is going on.

of note, the gnome weather extension runs into a similar issue. and the
eclipse IDE, when trying to call out to the download site.

where can i find more about what is going wrong with the requests and
why haproxy is blocking them?  if it matters, the calls are from apps to
a http VIP in haproxy, load balancing to squid backends.

# haproxy -v
HA-Proxy version 2.1.11-9da7aab 2021/01/08 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.11.html
Running on: Linux 5.11.22-100.fc32.x86_64 #1 SMP Wed May 19 18:58:25 UTC
2021 x86_64

[28/Dec/2021:12:17:14.412] frontend proxy (#2): invalid request
   backend  (#-1), server  (#-1), event #154, src
192.168.1.90:44228
   buffer starts at 0 (including 0 out), 16216 free,
   len 168, wraps at 16336, error at position 52
   H1 connection flags 0x, H1 stream flags 0x0012
   H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
   H1 chunk len 0 bytes, H1 body len 0 bytes :

   0  CONNECT admin.fedoraproject.org:443 HTTP/1.1\r\n
   00046  Host: admin.fedoraproject.org\r\n
   00077  Accept-Encoding: gzip, deflate\r\n
   00109  User-Agent: gnome-software/40.4\r\n
   00142  Connection: Keep-Alive\r\n
   00166  \r\n

[28/Dec/2021:12:48:34.023] frontend proxy (#2): invalid request
   backend  (#-1), server  (#-1), event #166, src
192.168.1.90:44350
   buffer starts at 0 (including 0 out), 16258 free,
   len 126, wraps at 16336, error at position 49
   H1 connection flags 0x, H1 stream flags 0x0012
   H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
   H1 chunk len 0 bytes, H1 body len 0 bytes :

   0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
   00043  Host: download.eclipse.org\r\n
   00071  User-Agent: Apache-HttpClient/4.5.10 (Java/11.0.13)\r\n
   00124  \r\n

thanks in advance,

brendan


Re: invalid request

2022-01-13 Thread brendan kearney
i am load balancing against 2 squid instances, and have gone down the
path of using mode tcp, with proxy protocol, and found that i prefer
mode http with http-reuse and x-forwarded-for.

with tcp and proxy protocol, every connection is sent with the client's
ip, so any ip based acls or rules are affected by the decision.  with
http and http-reuse, you use x-forwarded-for and have collapsed tcp
sessions, resulting in better performance while still having the client's
ip for acls, etc.

as your environment grows, the overhead of huge numbers of tcp
connections becomes a factor, and the performance implications grow at a
commensurate rate.  the instant improvement seen with http-reuse
persuaded me to leverage x-f-f instead of proxy protocol.  in other
instances, like mariadb or imap, where you expect a single connection to
be associated with a single ip, i do use proxy protocol.
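the two approaches side by side, as a sketch (the mariadb backend is a hypothetical example; the squid addresses reuse the ones from my config):

```
# mode http: collapsed connections, client ip carried in a header
backend squid-http
    mode http
    http-reuse always
    option forwardfor
    server proxyA 192.168.88.1:3128 check

# mode tcp: one connection per client, client ip via proxy protocol
backend mariadb
    mode tcp
    server db1 192.168.88.10:3306 send-proxy-v2 check
```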

ymmv

On 1/12/22 8:57 PM, Aleksandar Lazic wrote:
>
> On 12.01.22 21:52, Andrew Anderson wrote:
>>
>> On Wed, Jan 12, 2022 at 11:58 AM Aleksandar Lazic wrote:
>>
>> Well, looks like you want a forward proxy like squid not a
>> reverse proxy like haproxy.
>>
>>
>> The application being load balanced is a proxy, so http_proxy is not
>> a good fit (and as you mention on the deprecation list), but haproxy
>> as a load balancer is a much better at front-ending this environment
>> than any other solution available.
>>
>> We upgraded to 2.4 recently, and a Java application that uses these
>> proxy servers is what exposed this issue for us.  Even if we were to
>> use squid, we would still run into this, as I would want to ensure
>> that squid was highly available for the environment, and we would hit
>> the same code path when going through haproxy to connect to squid.
>>
>> The only option currently available in 2.4 that I am aware of is to
>> setup internal-only frontend/backend paths
>> with accept-invalid-http-request configured on those paths
>> exclusively for Java clients to use. This is effectively how we have
>> worked around this for now:
>>
>> listen proxy
>>  bind :8080
>>  mode http
>>  option httplog
>>  server proxy1 192.0.2.1:8080
>>  server proxy2 192.0.2.2:8080
>>
>> listen proxy-internal
>>  bind :8081
>>  mode http
>>  option httplog
>>  option accept-invalid-http-request
>>  server proxy1 192.0.2.1:8080 track proxy/proxy1
>>  server proxy2 192.0.2.2:8080 track proxy/proxy2
>>
>> This is a viable workaround for us in the short term, but this would
>> not be a solution that would work for everyone.  If the uri parser
>> patches I found in the 2.5/2.6 branches are the right ones to make
>> haproxy more permissive on matching the authority with the host in
>> CONNECT requests, that will remove the need for the parallel
>> frontend/backends without validation enabled.  I hope to be able to
>> have time to test a 2.4 build with those patches included over the
>> next few days.
>
> By design, HAProxy is a reverse proxy to an origin server, not a
> forwarding proxy, which is the reason why the
> CONNECT method is an invalid method.
>
> Because of that fact I would not use "mode http" for the squid
> backend/servers because of the issues you
> described.
> Why not "mode tcp" with proxy protocol
> http://www.squid-cache.org/Doc/config/proxy_protocol_access/ if you
> need the client ip.
>
>
> Regards
> Alex


Re: invalid request

2022-01-12 Thread brendan kearney
my haproxy config details are below.  i am using haproxy to load balance
2 squid instances, and the http/layer 7 aware configs in haproxy trap
these requests and fail them.

[root@haproxy]# haproxy -v
HA-Proxy version 2.1.11-9da7aab 2021/01/08 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.11.html
Running on: Linux 5.11.22-100.fc32.x86_64 #1 SMP Wed May 19 18:58:25 UTC
2021 x86_64

global
 daemon
 log localhost local1 notice
 log-send-hostname router
 maxconn 1024
 pidfile /var/run/haproxy.pid
 stats socket /var/run/haproxy.sock level admin
 stats maxconn 2
 tune.ssl.default-dh-param 2048
 #see https://mozilla.github.io/server-side-tls/ssl-config-generator/ for the below
 #the above now redirects to https://ssl-config.mozilla.org/
 ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
 ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

 ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
 ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

defaults
 balance leastconn

 log global

 mode http

 option contstats

 timeout http-request 10s
 timeout queue   1m
 timeout connect 10s
 timeout client  1m
 timeout server  1m
 timeout http-keep-alive 10s
 timeout check   10s

frontend proxy
 bind 192.168.120.1:8080
 option httplog
 option http-keep-alive
 option forwardfor except 127.0.0.0/8
 default_backend proxy

backend proxy
 source 192.168.120.1
 http-reuse always
 option http-keep-alive
 option httpchk GET /squid-internal-periodic/store_digest HTTP/1.1
 server proxyA 192.168.88.1:3128 check inter 1
 server proxyB 192.168.88.2:3128 check inter 1

On 1/12/22 11:58 AM, Aleksandar Lazic wrote:
>
> On 12.01.22 17:06, Andrew Anderson wrote:
>>
>>
>> On Thu, Dec 30, 2021 at 10:15 PM Willy Tarreau wrote:
>>
>> On Wed, Dec 29, 2021 at 12:29:11PM +0100, Aleksandar Lazic wrote:
>>  > > 0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
>>  > > 00043  Host: download.eclipse.org\r\n
>>  > > 00071  User-Agent: Apache-HttpClient/4.5.10
>> (Java/11.0.13)\r\n
>>  > > 00124  \r\n
>>
>> It indeed looks like a recently fixed problem related to the
>> mandatory
>> comparison between the authority part of the request and the Host
>> header
>> field, which do not match above since only one contains a port.
>>
>>
>> I don't know how pervasive this issue is on non-Java clients, but the
>> sendCONNECTRequest() method from
>> Java's HttpURLConnection API is responsible for the authority/host
>> mismatch when using native Java HTTP
>> support, and has been operating this way for a very long time:
>>
>>  /**
>>   * send a CONNECT request for establishing a tunnel to proxy server
>>   */
>>  private void sendCONNECTRequest() throws IOException {
>>  int port = url.getPort();
>>
>>  requests.set(0, HTTP_CONNECT + " " + connectRequestURI(url)
>>   + " " + httpVersion, null);
>>  requests.setIfNotSet("User-Agent", userAgent);
>>
>>  String host = url.getHost();
>>  if (port != -1 && port != url.getDefaultPort()) {
>>  host += ":" + String.valueOf(port);
>>  }
>>  requests.setIfNotSet("Host", host);
>>
>> The Apache-HttpClient library has a similar issue as well (as
>> demonstrated above).
>>
>> More recent versions are applying scheme-based normalization
>> which consists
>> in dropping the port from the comparison when it matches the scheme
>> (which is implicitly https here).
>>
>>
>> Is there an option other than using "accept-invalid-http-request"
>> available to modify this behavior on the
>> haproxy side in 2.4?  I have also run into this with Java 8, 11 and
>> 17 clients.
>>
>> Are these commits what you are referring to about scheme-based
>> normalization available in more recent
>> versions (2.5+):
>>
>> https://github.com/haproxy/haproxy/commit/89c68c8117dc18a2f25999428b4bfcef83f7069e
>> (MINOR: http: implement http uri parser)
>> https://github.com/haproxy/haproxy/commit/8ac8cbfd7219b5c8060ba6d7b5c76f0ec539e978
>> (MINOR: http: use http uri parser for scheme)
>>
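
On 2.4, before the scheme-based normalization referenced above, the only knob appears to be the one Andrew mentions. A minimal sketch follows (the bind address is a placeholder; note that this option relaxes other HTTP conformance checks as well, so it is safest scoped to the frontend that actually serves these Java clients):

```
frontend java_clients
    mode http
    bind 192.168.120.1:8080
    # accept CONNECT requests whose authority and Host header
    # differ only by an explicit default port (Java clients)
    option accept-invalid-http-request
    default_backend proxy
```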

Re: dsr and haproxy

2022-11-04 Thread Brendan Kearney
i've always thought of IPVS and DSR as a poor man's anycast.  for 
stateless protocols (DNS, NTP, RADIUS, Kerberos, Syslog) i have anycast 
setup.  for MariaDB, OpenLDAP and other stateful protocols, i use 
HAProxy.  for HTTP, which is stateless but is being driven towards 
stateful-ness with TLS, session maintenance, etc., i use HAProxy, as 
the intelligence and connection-reuse capabilities make for a lot of 
performance and scalability gains.


best,

brendan

On 11/4/22 12:33 PM, Lukas Tribus wrote:

On Fri, 4 Nov 2022 at 16:50, Szabo, Istvan (Agoda) wrote:

Yeah, that’s why I’m curious whether anybody ever made it work somehow?

Perhaps I should have been clearer.

It's not supported because it's not possible.

Haproxy uses the socket API; it cannot forward IP packets
arbitrarily, which is required for DSR.

This is a hard no, not a "we do not support this configuration because
nobody ever tried it and we can't guarantee it will work".


Lukas





Re: PostgreSQL: How can use slave for some read operations?

2023-03-15 Thread Brendan Kearney
what i have done is create frontends and backends for all of the load 
balanced nodes, and separate f/e and b/e for the individual nodes.  for 
instance:


frontend mariadb
    mode tcp
    bind 192.168.120.3:3306
    default_backend mariadb

frontend mariadb1
    mode tcp
    bind 192.168.120.3:3316
    default_backend mariadb1

frontend mariadb2
    mode tcp
    bind 192.168.120.3:3326
    default_backend mariadb2

frontend mariadb3
    mode tcp
    bind 192.168.120.3:3336
    default_backend mariadb3

...

backend mariadb
    source 192.168.120.3
    mode tcp
    option mysql-check user haproxy

    server mariadb1 192.168.88.1:3306 check inter 1 send-proxy-v2
    server mariadb2 192.168.88.2:3306 check inter 1 send-proxy-v2
    server mariadb3 192.168.88.3:3306 check inter 1 send-proxy-v2

backend mariadb1
    source 192.168.120.3
    mode tcp
    option mysql-check user haproxy
    server mariadb1 192.168.88.1:3306 check inter 1 send-proxy-v2

backend mariadb2
    source 192.168.120.3
    mode tcp
    option mysql-check user haproxy
    server mariadb2 192.168.88.2:3306 check inter 1 send-proxy-v2

backend mariadb3
    source 192.168.120.3
    mode tcp
    option mysql-check user haproxy
    server mariadb3 192.168.88.3:3306 check inter 1 send-proxy-v2

by doing this, i can load balance across all mariadb nodes using port 
3306, but also hit each of the nodes individually using the same VIP 
name, but a different port (3316, 3326, 3336).  i chose to keep the same 
frontend IP, so that kerberos authentication still works, as the krb 
principal is tied to the DNS name of the VIP.


essentially you would wind up with different VIPs for the R/W access and 
R/O access.
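
Following that advice, a minimal sketch of such a split for PostgreSQL might look like the below (the VIP, server IPs, ports and the check user are placeholders; note that haproxy's pgsql-check only verifies that the server accepts a startup packet, it cannot tell master from replica, so the role assignment per backend is manual):

```
frontend pgsql_rw
    mode tcp
    bind 192.168.120.3:5432
    default_backend pgsql_rw

frontend pgsql_ro
    mode tcp
    bind 192.168.120.3:5442
    default_backend pgsql_ro

backend pgsql_rw
    source 192.168.120.3
    mode tcp
    option pgsql-check user haproxy
    # only the master takes writes; the replicas are standbys here
    server pg1 192.168.88.1:5432 check
    server pg2 192.168.88.2:5432 check backup
    server pg3 192.168.88.3:5432 check backup

backend pgsql_ro
    source 192.168.120.3
    mode tcp
    balance leastconn
    option pgsql-check user haproxy
    # reads spread across the replicas
    server pg2 192.168.88.2:5432 check
    server pg3 192.168.88.3:5432 check
```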


HTH,

brendan kearney

On 3/15/23 4:12 AM, Илья Шипицин wrote:

there are several L7 balancing tool like pgPool.

as for haproxy, currently it does not provide such advanced postgresql 
routing


ср, 15 мар. 2023 г. в 06:09, Muhammed Fahid :

Hi,

I have a master and a slave PostgreSQL database. I would like to
know whether major read operations can be processed by the slave to
reduce load on the master.

for example: I have a large number of products. When customers
want to list all products, is it possible to read from the slave
database instead of from the master database? If major read
operations are done on the master, they slow down the other
operations on the master.


Re: haproxy 2.4 and Kafka sink/source connector issues

2023-08-01 Thread Brendan Kearney

hey,

first, use "option mysql-check" for better service checking.  you'll 
have to add a user with access to the database; the howto is in the 
configuration.txt file 
(https://www.haproxy.org/download/2.1/doc/configuration.txt).  the 
"option httpchk" is doing nothing for you, because the backend isn't 
talking HTTP and the mode is tcp, for mysql.
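
As a sketch of that first point (IPs and the user name are placeholders; per the configuration manual the check user needs no privileges, it only has to be able to connect, and its host part should match the address haproxy connects from):

```
# on mariadb/mysql, create the check user first:
#   CREATE USER 'haproxy'@'192.168.120.%';
#   FLUSH PRIVILEGES;

backend mariadb
    mode tcp
    option mysql-check user haproxy
    server mariadb1 192.168.88.1:3306 check
```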


second, look into the proxy protocol: it lets HAProxy send the original 
client IP in a small preamble on the TCP connection, similar to the 
X-Forwarded-For header in HTTP.  you need to add a line like:


proxy-protocol-networks=::1, localhost, 

into the my.cnf or mariadb-server.cnf file.  replace the ip with a 
network cidr, without the brackets, to specify client ranges that should 
be sent using the proxy protocol.  then add the "send-proxy-v2" option 
to the server line in the HAProxy backend.  mine is:


server mariadb1 192.168.88.1:3306 check inter 1 send-proxy-v2

this will help you better identify the client that is losing the connection.

if there is a firewall between the client and HAProxy, look at the logs 
there.  the firewall could be reaping the connections if they are long 
running and the firewall hits a threshold, gets busy or maybe has a 
policy update pushed to it.  something in between could be an issue.
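
one thing worth trying if a middlebox is suspected: TCP keepalives on both legs of the proxied connection, so an idle-timeout firewall keeps seeing traffic (a sketch only; the keepalive interval itself is tuned at the OS level via the kernel's tcp_keepalive_* settings):

```
defaults
    mode tcp
    # emit TCP keepalives toward the client...
    option clitcpka
    # ...and toward the server, so stateful middleboxes
    # do not reap long-idle connections
    option srvtcpka
```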


hope this helps,

brendan

On 8/1/23 8:11 PM, David Greenwald wrote:

Hi all,

Looking for some help with a networking issue we've been debugging for 
several days. We use haproxy to TCP load-balance between Kafka 
Connectors and a Percona MySQL cluster. In this set-up, the connectors 
(i.e., Java JDBC) maintain long-running connections and read the 
database binlogs. This includes regular polling.


We have seen connection issues starting with haproxy 2.4 and 
persisting through 2.8 which result in the following errors in MySQL:


2023-07-31T17:25:45.745607Z 3364649 [Note] Got an error reading 
communication packets


As you can see, this doesn't include a host or user and is happening 
early in the connection. The host cache shows handshake errors here 
regularly accumulating.


We were unable to see errors on the haproxy side with tcplog on and 
have been unable to get useful information from tcpdump, netstat, etc.


We are aware FE/BE connection closure behavior changed in 2.4. The 2.4 
option of idle-close-on-response seemed like a possible solution but 
isn't compatible with mode tcp, so we're not sure what's happening 
here or next steps for debugging. Appreciate any help or guidance here.


We're running haproxy in Kubernetes using the official container, and 
are also not seeing any issues with current haproxy versions with our 
other (Python) applications.


A simplified version of our config:

global
     daemon
     maxconn 25000

defaults
     balance roundrobin
     option dontlognull
     option redispatch
     timeout http-request 5s
     timeout queue 1m
     timeout connect 4s
     timeout client 50s
     timeout server 30s
     timeout http-keep-alive 10s
     timeout check 10s
     retries 3

frontend main_writer
     bind :3306
     mode tcp
     timeout client 30s
     timeout client-fin 30s
     default_backend main_writer

backend main_writer
     mode tcp
     balance leastconn
     option httpchk
     timeout server 30s
     timeout tunnel 3h
     server db1 :3306 check port 9200 on-marked-down 
shutdown-sessions weight 100 inter 3s rise 1 fall 2
     server db2 :3306 check port 9200 on-marked-down 
shutdown-sessions weight 100 backup
     server db3 :3306 check port 9200 on-marked-down 
shutdown-sessions weight 100 backup






David Greenwald

Senior Site Reliability Engineer


david.greenw...@discogsinc.com
