force close connection for HTTP/2

2022-11-08 Thread Benedikt Fraunhofer
Hello Haproxy-List,

I need a way to forcefully close an HTTP/2 connection together with a
haproxy-internally generated response ("http-request redirect" or
"http-request return").

Basically what "Connection: close" ("option httpclose" or "no option
http-keep-alive") did for HTTP/1.1.

I know the HTTP/2 spec provides GOAWAY frames for this,
and haproxy already sends those on shutdown [1].

Is there a way to manually trigger these?

After lots of trying, crying and cursing I was finally able to abuse
"timeout client 100", but this seems ugly, even for me.
Not enabling HTTP/2 and using "option httpclose" or "no option
http-keep-alive" is, of course, another "workaround".

I also found [2], which suggests using a 421 response with an errorfile
for the content (today one should be able to use "http-request return"
instead), but that is about retrying _the same_ request over a new
connection, not about a redirect?
[3] is about another 421 trick for yet another SSL problem, as was [2];
an answer there cites the RFC, which says the client MAY retry, not SHOULD
or MUST, and notes that Chrome had a (now fixed) bug in 2021 which ruined
that.

I know use cases for this are rare. The authors in [2] needed it for
client certificates and [3] for some SNI stuff; I need it for some
nat-conntrack foo I'd rather not solve using raw/mangle iptables.

Hopefully the short "timeout client" workaround at least makes it into
the docs, so others running into this problem can find a low-impact
workaround. Or search engines scrape the mailing list :)

Thx in Advance

  Benedikt

[1] https://github.com/haproxy/haproxy/issues/13

[2] https://haproxy.formilux.narkive.com/fyNOpSGz/force-response-to-send-http-2-goaway

[3] https://serverfault.com/questions/916724/421-misdirected-request



Connect to SNI-only server (haproxy as a client)

2014-08-18 Thread Benedikt Fraunhofer
Hello List,

I'm trying to help a Java 6 app that can't connect to a server which
seems to accept SNI connections only.

I thought I could just add some frontend and backend stanzas
and include the SNI-only server as a server in the backend section, like so:

   server a 1.2.3.4:443 ssl verify none force-tlsv12

(I had verify set; I only removed it here to keep it simple and rule it out.)

But it seems the server in question insists on SNI: whatever force-* I
use, the connection is TCP-reset by the server (a) right after the
Client Hello from haproxy.

Is there a way to specify the "TLS SNI field" haproxy should use for
these outgoing connections?
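(What I'm after would be something like a per-server "sni" keyword taking
the desired hostname; a sketch of how I'd imagine it, with the hostname as
a placeholder:)

   backend be_sni_only
       # send a fixed server name in the TLS Client Hello of the outgoing connection
       server a 1.2.3.4:443 ssl verify none force-tlsv12 sni str(www.example.com)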

Thx in advance

 Benedikt



Re: Low performance when using mode http for Exchange-Outlook-Anywhere-RPC

2012-05-08 Thread Benedikt Fraunhofer
Hello Willy,

2012/5/8 Willy Tarreau :

> For such border-line uses, you need to enable "option http-no-delay". By

Great! That did it.
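For the archives, the change boils down to a single line in the relevant
section (the backend name here is a placeholder):

   backend be_exchange_rpc
       mode http
       # forward data as soon as it arrives instead of waiting to merge segments
       option http-no-delay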

> default, haproxy tries to merge as many TCP segments as possible. But in
> your case, the application is abusing the HTTP protocol by expecting that

Does haproxy even discard the PSH flag on TCP packets? Or is
Microsoft simply not sending it?

> wrong but it's not the first time microsoft does crappy things with HTTP,
> see NTLM).

HTTP is such a versatile protocol, and, as some have already sung,
"some of them want to be abused" :)

> Please note that such protocols will generally not work across
> caches or anti-virus proxies.

Well, in this case all proxies on the client side will only see HTTPS
traffic; they should not be able to inspect that.

> With "option http-no-delay", haproxy refrains from merging consecutive
> segments and forwards data as fast as they enter. This obviously leads
> to higher CPU and network usage due to the increase of small packets,
> but at least it will work as expected.

I'm following the mailing list and saw that you did something
different for WebSockets?
"[...] because haproxy switches to tunnel mode when it sees the WS
handshake and it keeps the connection open for as long as there is
traffic. [...]"
Or is tunnel mode something different, which keeps the inner workings of
assembling and merging packets as in http mode?

I don't know if that's important, but maybe one should do the same for
"Content-Type: application/rpc", too. Anyhow, it's easy to throw in the
option, and I'm more than happy that I can stay with my setup and keep
client stickiness for draining purposes.

And congrats to your new president :)

Thx again and again

 Beni.



Low performance when using mode http for Exchange-Outlook-Anywhere-RPC

2012-05-08 Thread Benedikt Fraunhofer
Hello List,

I placed haproxy in front of our Exchange cluster for Outlook Anywhere
clients (that's just RPC over HTTP, port 443). SSL is terminated by
Pound, which forwards the traffic on loopback to haproxy.

Everything works, but it's awfully slow when I use "mode http";
requests look like this:

RPC_IN_DATA /rpc/rpcproxy.dll?[...] HTTP/1.1

HTTP/1.1 200 Success..Content-Type:application/rpc..Content-Length:1073741824

RPC_OUT_DATA /rpc/rpcproxy.dll?[..] HTTP/1.1

HTTP/1.1 200 Success..Content-Type:application/rpc..Content-Length:1073741824

(This is the nature of Microsoft RPC, I've been told: it uses two
channels to make it "duplex".) The connections are held open in both
cases (mode tcp and mode http) due to long configured timeouts (and
"no option httpclose" for the http mode).

I can't see a big difference in how the packets look; there's an awful
lot of nearly empty packets with SYN and PSH set, but that's the case in
both modes. Packets reach 16k (that's the MTU of the loopback device).

The only difference you can see in the Outlook connection info window
is the response time: with mode tcp it's around 16-200 ms, while in
http mode it's above 800 ms.

Any hint? Or is mode http of no use here because I won't be able to
inject stuff into the session cookie at all?
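(The kind of cookie injection meant here would be the usual insert-mode
stickiness; a rough sketch with made-up names and addresses:)

   backend be_exchange
       mode http
       no option httpclose
       # insert a cookie so each client keeps hitting the same Exchange node
       cookie SRVID insert indirect nocache
       server ex1 127.0.0.1:8001 cookie ex1 check
       server ex2 127.0.0.1:8002 cookie ex2 check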

Thx in advance
  Beni.



Re: Matching URLs at layer 7

2010-04-28 Thread Benedikt Fraunhofer
Hi *,

> (2) Host header is www.example.com
> (3) All is good! Pass request on to server.
> (2) Host header is www.whatever.com
> (3) All is NOT good! Flick request somewhere harmless.

If that's all you want, you should be able to go with

 acl xxx_host hdr(Host)  -i xxx.example.com
 block if !xxx_host

, in your listen (or frontend) section. But everything comes with a
downside: IMHO HTTP/1.0 doesn't require the Host header to be set, so
you'll effectively lock out all HTTP/1.0 users unless you add another
rule checking for a missing Host header (and allowing that), or checking
for HTTP/1.0; there should be a "macro" (predefined ACL) for that.
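A sketch of what such an extra rule might look like (untested, names made
up; if I'm not mistaken 1.4 also ships a predefined HTTP_1.0 ACL one could
use instead):

   acl xxx_host hdr(Host)     -i xxx.example.com
   acl no_host  hdr_cnt(Host) eq 0
   # only block requests that carry a Host header and don't match
   block if !xxx_host !no_host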

Just my 2cent
  Beni.



Re: Matching URLs at layer 7

2010-04-28 Thread Benedikt Fraunhofer
Hi Andrew,

2010/4/28 Andrew Commons :

> url_beg <string>
>  Returns true when the URL begins with one of the strings. This can be used to
>  check whether a URL begins with a slash or with a protocol scheme.
>
> So I'm assuming that "protocol scheme" means http:// or ftp:// or whatever

I would assume that, too...
but :) reading the other matching options, it looks like those only
affect the "anchoring" of the match. Like:

> url_ip <ip_address>
>  Applies to the IP address specified in the absolute URI in an HTTP request.
>  It can be used to prevent access to certain resources such as local network.
>  It is useful with option "http_proxy".

Yep, but watch out for the "http_proxy" part.


> url_port <port>
>  "http_proxy". Note that if the port is not specified in the request, port 80
>  is assumed.

Same here. This enables plain proxy mode, where requests are issued
(from the client) like

 GET http://www.example.com/importantFile.txt HTTP/1.0

> This seems to be reinforced (I think!) by:
>
> url_dom <string>
>  Returns true when one of the strings is found isolated or delimited with dots
>  in the URL. This is used to perform domain name matching without the risk of
>  wrong match due to colliding prefixes. See also "url_sub".

I personally don't think so. I guess this is just another version of
"anchoring", here effectively
"\.$STRING\."

> If I'm suffering from a bit of 'brain fade' here just set me on the right 
> road :-) If the url_ criteria have different interpretations in terms of what 
> the 'url' is then let's find out what these are!

I currently can't give it a try, as I finally managed to lock myself out, but

http://haproxy.1wt.eu/download/1.4/doc/configuration.txt

has an example that looks exactly like what you need:
---
To select a different backend for requests to static contents on the "www" site
and to every request on the "img", "video", "download" and "ftp" hosts :

   acl url_static  path_beg /static /images /img /css
   acl url_static  path_end .gif .png .jpg .css .js
   acl host_www    hdr_beg(host) -i www
   acl host_static hdr_beg(host) -i img. video. download. ftp.

   # now use backend "static" for all static-only hosts, and for static urls
   # of host "www". Use backend "www" for the rest.
   use_backend static if host_static or host_www url_static
   use_backend www    if host_www

---

and as "begin" really means anchoring it with "^" in a regex this
would mean that there's no host in url as this would redefine the
meaning of "begin" which should not be done :)

So you should be fine with

   acl xxx_host hdr(Host)  -i xxx.example.com
   acl xxx_url  url_beg /
   #there's already a predefined acl doing this.
   use_backend xxx if xxx_host xxx_url

if I recall your example correctly. But you should really put
something behind the url_beg for it to be of any use :)
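In other words, something along these lines (paths are made up):

   acl xxx_host hdr(Host) -i xxx.example.com
   acl xxx_url  url_beg   /app /static
   use_backend xxx if xxx_host xxx_url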

Just my 2 cent

 Beni.



Re: Matching URLs at layer 7

2010-04-28 Thread Benedikt Fraunhofer
Hi *,

2010/4/28 Andrew Commons :
>        acl xxx_url      url_beg        -i http://xxx.example.com
>        acl xxx_url      url_sub        -i xxx.example.com
>        acl xxx_url      url_dom        -i xxx.example.com

The url is the part of the URI without the host :)
An HTTP request looks like

 GET /index.html HTTP/1.0
 Host: www.example.com

so you can't use url_beg to match on the host unless you somehow
construct your urls to look like
 http://www.example.com/www.example.com/
but don't do that :)

So what you want is something like chaining:

 acl xxx_host   hdr(Host) -i xxx.example.com
 acl xxx_urlbe1 url_beg   /toBE1/
 use_backend BE1 if xxx_host xxx_urlbe1

?

Cheers

  Beni.



Re: issue with using digest with jetty backends

2010-04-06 Thread Benedikt Fraunhofer
Hi,

2010/4/6 Matt :

> < HTTP/1.1 100 Continue
> < HTTP/1.1 200 OK

Somehow this looks very odd to me :)
Dunno if that helps, but we had problems with curl and digest
authentication some time ago and solved it using

  curl --digest -H "Expect:" [...]

but we might have used a very old (buggy) version of curl.
Please let me know if that helps in your case.
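For completeness, a full invocation would look roughly like this (URL,
credentials and file are made up; the only relevant part is the empty
"Expect:" header, which disables curl's automatic 100-continue handshake):

   curl --digest -u user:secret -H "Expect:" -T payload.bin http://jetty.example.com/upload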

Just my 2 cent
  Beni.



Re: [PATCH] [MINOR] CSS & HTML fun

2009-10-13 Thread Benedikt Fraunhofer
Hello,

2009/10/13 Dmitry Sivachenko :

> End tag for  is optional according to

really? Something new to me :)

> http://www.w3.org/TR/html401/struct/lists.html#edef-UL

hmm. "" is optional (implied by next "" or closing "",
 "" not?






Start tag: required, End tag: required

the line stating "Start tag: required, End tag: optional"
is for the 
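For illustration, this would be valid HTML 4.01 with the </LI> end tags
implied, while the closing </UL> still has to be there:

   <UL>
     <LI>first item  <!-- </LI> implied by the following <LI> -->
     <LI>second item <!-- </LI> implied by the closing </UL> -->
   </UL>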

Just my 2cent
  Beni.



Re: HAproxy not accepting http health check response

2009-05-25 Thread Benedikt Fraunhofer
Ah, forgot to cc the list in my first reply, so sorry for the
following fullquote.


2009/5/25 Benedikt Fraunhofer :
> Hello *,
>
> 2009/5/25 Sanjeev Kumar :
>>  My config file.
> [...]
>>   option httpchk HEAD /check.tst HTPP/1.0
>
> do you really have "HTPP" there?
>
> can you paste the tcpdump or strace output?
>
> just my 2 cent.
>
> Beni.
>

---
2009/5/25 Sanjeev Kumar :
> I have changed the DB-application on server is  to respond to HTTP-HEAD cmd
> with the response:
>
> HTTP/1.0 200 OK\r\n\r\n

3 things...

1) The EAGAIN is not really an error. It's the expected answer for a
recv() call on a non-blocking I/O handle which has no data available to
read; it's more like a "please try again later" hint.
Non-blocking I/O operation is requested with
fcntl64(6, F_SETFL, O_RDONLY|O_NONBLOCK) = 0

2) "option httpchk HEAD /check.tst HTPP/1.0"
is somehow wrong; that was the first question in my first reply.
It's "HTTP", not "HTPP", and you must have some other typo in there as
well, because your strace output says:
  send(6, "HEAD /check.txt; HTTP/1.0\r\n\r\n", 29,
MSG_DONTWAIT|MSG_NOSIGNAL) = 29
Note the ";" after "check.txt", which is incorrect.
Furthermore, the filename you supplied in your config, "check.tst", does
not appear here. A correct HTTP request obeying the config you supplied
would look like
  HEAD /check.tst HTTP/1.0\r\n\r\n
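For reference, the relevant part of the config as it presumably should
read (listen address and server are made up):

   listen db_farm 0.0.0.0:5000
       mode tcp
       option httpchk HEAD /check.tst HTTP/1.0
       server db1 192.168.0.10:8080 check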

3) Surprisingly, your application nevertheless returns a
   HTTP/1.0 200 OK\r\n\r\n
which looks fine, so you've got me stumped.

Could you double-check that you have no typos in your config? It's just
that the requests seen in the strace output do not match the config you
pasted.

Please also note that you have "mode tcp" in there while you are
requesting http checks. This is OK and also noted in the docs; just make
sure this is what you really want :)

Does haproxy say something about the servers going down in the logs?
Which version are you using?
Should you repeat your strace, please use something like "-s 8192" so we
can see the full messages (especially those sent to the log) and they're
not chopped after 32 characters.
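Something along these lines would do (the output file name is just an
example, and it assumes a single haproxy process):

   strace -f -tt -s 8192 -p "$(pidof haproxy)" -o haproxy.strace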

Cheers

Beni.