502 Bad Gateway

2018-05-07 Thread UPPALAPATI, PRAVEEN
Hi Haproxy-Team,

I have the following configuration:

listen http_proxy-1000
    bind *:1000
    mode http
    option httplog
    http-request set-uri https://%[url_param(redirHost)]%[capture.req.uri]
    option http_proxy


If I issue a request to that port:

https://:1000
/test/test.txt?Host=:8093

I get 

If I add ssl termination to the config:

listen http_proxy-1000
    bind *:1000 ssl crt test.pem
    mode http
    option httplog
    http-request set-uri https://%[url_param(redirHost)]%[capture.req.uri]
    option http_proxy


I get:
http-9876~ bk_9876/ 0/0/1/-1/2 502 211 - - PH-- 1/1/0/0/0 0/0 "GET /test/test.txt?idnsredirHost=:5300 HTTP/1.1"

I have also set:

ssl-server-verify none

in the global section, but still no luck.

Let me know if I am missing anything.

Thanks,
Praveen.


-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
Sent: Tuesday, May 01, 2018 7:22 AM
To: UPPALAPATI, PRAVEEN ; Willy Tarreau 
Cc: Olivier Houchard ; haproxy@formilux.org
Subject: Re: Logging Question

Hi.

On 30.04.2018 at 19:05, UPPALAPATI, PRAVEEN wrote:
> 
> Hi Willy/Oliver,
> 
> One small question:
> 
> When I capture the header it returns .com in the log, but when I
> perform a GET on .com:1000 it does not match the following configuration.
> 
> frontend http-1000
>     bind *:1000
>     option httplog
>     capture request header Host len 20
>     acl is_east hdr(host) -i .com

Maybe this helps?

acl is_east hdr_beg(host) -i .com

> use_backend east_bk_1000_read if is_east
> 
> My question is: how can I print the output of hdr(host) and is_east to the log?
> 
> Appreciate your help.
> 
> Thanks,
> Praveen.
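Regarding the question about getting hdr(host) and the is_east result into the log: a hedged sketch of one way to do it (the domain and the capture lengths are placeholders, and `http-request capture` needs haproxy 1.6 or later):

```haproxy
frontend http-1000
    bind *:1000
    option httplog
    # the captured Host header shows up in the {} section of the httplog line
    capture request header Host len 40
    acl is_east hdr_beg(host) -i east.example.com
    # record a marker indicating whether the ACL matched
    http-request capture str(east) len 8 if is_east
    http-request capture str(other) len 8 unless is_east
    use_backend east_bk_1000_read if is_east
```

With this, each httplog line carries both the raw Host value and an east/other marker, so you can see exactly what the ACL evaluated.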

Regards
Aleks


Re: haproxy startup at boot too quick

2018-05-07 Thread Bill Waggoner
On Mon, May 7, 2018 at 8:44 PM Kevin Decherf  wrote:

> Hello,
>
> On 8 May 2018 02:32:01 CEST, Bill Waggoner  wrote:
>
> >Anyway, when the system boots haproxy fails to start. Unfortunately I
> >forgot to save the systemctl status message but the impression I get is
> >that it's starting too soon.
>
> You can find all past logs of your service using `journalctl -u
> haproxy.service`. If journal persistence is off you'll not be able to look
> at logs sent before the last boot.
>
>
> --
> Sent from my mobile. Please excuse my brevity.
>

Thank you, that was very helpful. I am new to systemd so please forgive my
lack of knowledge.

Looking at the messages, it looks like one server was failing to start. That
one happens to have a hostname instead of a static address in its server
definition. My guess is that DNS wasn't available yet when haproxy was
starting, and the retries happen so quickly that it didn't have time to recover.

I'll simply change that to a literal IP address as all the others are.
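For anyone finding this thread later: rather than hard-coding the IP, haproxy (1.7 and later) can also be told not to abort when a server hostname fails to resolve at startup. A hedged sketch; the resolver address and server names are placeholders:

```haproxy
# Allow startup even when DNS is not yet available:
# init-addr tries the last known address, then a libc lookup,
# and finally starts the server with no address at all.
defaults
    mode http
    default-server init-addr last,libc,none

# A runtime resolvers section re-resolves the name once DNS comes up
resolvers boot_dns
    nameserver local 127.0.0.53:53
    resolve_retries 3
    hold valid 10s

backend app
    server app1 app.internal.example:80 check resolvers boot_dns
```

With `none` in the init-addr list the server simply starts in maintenance-like state until the resolvers section obtains an address.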

Thanks!

Bill Waggoner
-- 
Bill Waggoner
ad...@greybeard.org
{Even Old Dogs can learn new tricks!}


Re: haproxy startup at boot too quick

2018-05-07 Thread Kevin Decherf
Hello,

On 8 May 2018 02:32:01 CEST, Bill Waggoner  wrote:
 
>Anyway, when the system boots haproxy fails to start. Unfortunately I
>forgot to save the systemctl status message but the impression I get is
>that it's starting too soon.

You can find all past logs of your service using `journalctl -u 
haproxy.service`. If journal persistence is off you'll not be able to look at 
logs sent before the last boot.


-- 
Sent from my mobile. Please excuse my brevity.



haproxy startup at boot too quick

2018-05-07 Thread Bill Waggoner
I feel I should tap the screen and say "Is this on?"  I just signed up for
the list but the haproxy+help address didn't supply much info at all. So
I'll try this ...

I just moved my running haproxy instance over to a new Intel NUC box
running Ubuntu Server 18.04. So much faster than the Raspberry Pi I
was running on before. It's obviously not a "critical" system but it's
important to me ...

Anyway, when the system boots haproxy fails to start. Unfortunately I
forgot to save the systemctl status message but the impression I get is
that it's starting too soon.

What I think I've narrowed it down to: in the systemd haproxy.service
config, it says After=network.service rsyslog.service

I think that should be:

Requires=network-online.target
After=network-online.target rsyslog.service

I've made that change but haven't tested it yet. Seems like a logical
change to me.
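One way to apply that without touching the packaged unit file is a drop-in override (a sketch; run `sudo systemctl edit haproxy.service` and the snippet lands in the path shown in the comment):

```ini
# /etc/systemd/system/haproxy.service.d/override.conf
[Unit]
Requires=network-online.target
After=network-online.target rsyslog.service
```

A drop-in survives package upgrades, which would otherwise overwrite an edited haproxy.service.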

I'd appreciate any comments ...

Bill Waggoner
-- 
Bill Waggoner
ad...@greybeard.org
{Even Old Dogs can learn new tricks!}


Re: 1.8.8 & 1.9dev, lua, xref_get_peer_and_lock hang / 100% cpu usage after restarting haproxy a few times

2018-05-07 Thread PiBa-NL

Hi List, Thierry,

Actually this is not limited to restarts, and it also happens with 1.9dev.
It now happened after haproxy had been running for a while, with no restart
attempted, while running/debugging in my NetBeans IDE.


The root cause, IMO, is that hlua_socket_receive_yield and hlua_socket_release
both try to acquire the same lock.



For debugging purposes I've added some code in
hlua_socket_receive_yield(..) before the stream_int_notify call:


    struct channel *ic2 = si_ic(si);
    struct channel *oc2 = si_oc(si);
    ha_warning("hlua_socket_receive_yield calling notify peer:%9x "
               "si[0].state:%d oc2.flag:%09x ic2.flag:%09x\n",
               peer, s->si[0].state, oc2->flags, ic2->flags);

    stream_int_notify(&s->si[0]);

And:
static void hlua_socket_release(struct appctx *appctx)
{
    struct xref *peer;

    if (appctx->ctx.hlua_cosocket.xref.peer > 1)
        ha_warning("hlua_socket_release peer: %9x %9x\n",
                   appctx->ctx.hlua_cosocket.xref,
                   appctx->ctx.hlua_cosocket.xref.peer->peer);
    else
        ha_warning("hlua_socket_release peer: %9x 0\n",
                   appctx->ctx.hlua_cosocket.xref);



And also added code in xref_get_peer_and_lock(..):
static inline struct xref *xref_get_peer_and_lock(struct xref *xref)
{
    if (xref->peer == 1) {
        printf("  xref_get_peer_and_lock xref->peer == 1 \n");
    }


This produces the logging:

[WARNING] 127/001127 (36579) : hlua_socket_receive_yield calling notify peer:  2355590  si[0].state:7 oc2.flag:0c000c220 ic2.flag:00084a024
[WARNING] 127/001127 (36579) : hlua_socket_release peer: 1 0
  xref_get_peer_and_lock xref->peer == 1

When xref_get_peer_and_lock() is called while xref->peer has the value 1,
it looks like it keeps swapping 1 for 1 until the value is no longer 1,
which never happens..


As for oc2.flags, it contains CF_SHUTW_NOW. I'm still not 100% sure when
exactly that flag gets set, so I don't have a foolproof reproduction, but it
happens on pretty much a daily basis for me in production. In testing, I can
now usually trigger it after a few test runs with no actual traffic passing,
within the first minute of running (health checks are performed on several
backends, and a mail or two is sent by the Lua code during this startup
period), using the full production config.


Below is the stack trace that comes with it:

xref_get_peer_and_lock (xref=0x802355590) at P:\Git\haproxy\include\common\xref.h:37
hlua_socket_release (appctx=0x802355500) at P:\Git\haproxy\src\hlua.c:1595
si_applet_release (si=0x8023514c8) at P:\Git\haproxy\include\proto\stream_interface.h:233
stream_int_shutw_applet (si=0x8023514c8) at P:\Git\haproxy\src\stream_interface.c:1504
si_shutw (si=0x8023514c8) at P:\Git\haproxy\include\proto\stream_interface.h:320
stream_int_notify (si=0x8023514c8) at P:\Git\haproxy\src\stream_interface.c:465
hlua_socket_receive_yield (L=0x80223b388, status=1, ctx=0) at P:\Git\haproxy\src\hlua.c:1789
?? () at null:
?? () at null:
lua_resume () at null:
hlua_ctx_resume (lua=0x8022cb800, yield_allowed=1) at P:\Git\haproxy\src\hlua.c:1022
hlua_process_task (task=0x80222a500) at P:\Git\haproxy\src\hlua.c:5556
process_runnable_tasks () at P:\Git\haproxy\src\task.c:232
run_poll_loop () at P:\Git\haproxy\src\haproxy.c:2401
run_thread_poll_loop (data=0x802242080) at P:\Git\haproxy\src\haproxy.c:2463
main (argc=4, argv=0x7fffea80) at P:\Git\haproxy\src\haproxy.c:3053

I don't yet have any idea about the direction of a possible fix :(.
The issue is that hlua_socket_release probably should happen, but it
doesn't know which socket/peer it should release at that point; that is
held in the local peer variable of the hlua_socket_receive_yield function.
Should it be 'unlocked' before calling stream_int_notify?
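To make the spin visible in isolation, here is a standalone C sketch of the locking pattern (simplified from include/common/xref.h; the XREF_BUSY value and the one-sided locking are assumptions of this sketch, not the exact haproxy code):

```c
#include <stdatomic.h>

struct xref {
    _Atomic(struct xref *) peer;
};

/* marker value meaning "the peer pointer is currently locked" */
#define XREF_BUSY ((struct xref *)1)

/* Swap the BUSY marker into the peer field and retry while the
 * previous value was already BUSY.  If peer is *permanently* 1,
 * e.g. because a function further down the same call stack already
 * holds the lock, this loop spins forever at 100% CPU. */
static struct xref *get_peer_and_lock(struct xref *x)
{
    struct xref *local;
    do {
        local = atomic_exchange(&x->peer, XREF_BUSY);
    } while (local == XREF_BUSY);
    return local; /* caller now holds the lock */
}

static void xref_unlock(struct xref *x, struct xref *peer)
{
    atomic_store(&x->peer, peer);
}
```

This mirrors the trace above: hlua_socket_receive_yield takes the lock, stream_int_notify ends up in hlua_socket_release, and the nested xref_get_peer_and_lock call can then only ever read back the BUSY value.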


Does anyone dare to take a stab at creating a patch? If so, thanks in
advance ;)


Regards,
PiBa-NL (Pieter)


On 3-5-2018 at 1:30, PiBa-NL wrote:

Hi List,

Sometimes after a few 'restarts' of haproxy 1.8.8 (using the -sf
parameter), one of the processes seems to get into a 'hanging' state
consuming 100% CPU.


In this configuration I'm using 'nbthread 1'; not sure if this is
related to the corrupted task-tree from my other Lua issue:
https://www.mail-archive.com/haproxy@formilux.org/msg29801.html


Also, I'm using my new smtpmailqueue and serverhealthchecker Lua
scripts (which can be found on GitHub); these probably 'contribute' to
triggering the condition.


Anything I can check / provide?

(Can't really share the config itself at the moment as it's from our
production env, but it has around 15 backends with 1 server each, and a
little header rewriting/insertion, nothing big.)


GNU gdb (GDB) 8.0.1 [GDB v8.0.1 for FreeBSD]

Re: http-response set-header is unreliable

2018-05-07 Thread Tim Düsterhus
Willy,

On 03.05.2018 at 18:18, Willy Tarreau wrote:
>> Personally I'd prefer the rate limited warning over the counter. As
>> outlined before: A warning counter probably will be incremented for
>> multiple unrelated reasons in the longer term and thus loses its
>> usefulness. Having a warning_headers_too_big counter and a
>> warning_whatever_there_may_be is stupid, no?
> 
> For now we don't have such a warning, so the only reason for logging
> it would be this header issue. It's never supposed to happen in theory
> as it normally needs to be addressed immediately and ultimately we
> should block by default on this. And if later we find another reason
> to add a warning, we'll figure if it makes sense to use a different
> counter or not.
> 
> Also you said yourself that you wouldn't look at the logs first but at
> munin first. And munin monitors your stats socket, so logically munin
> should report you increases of this counter found on the stats socket.

So, definitely log + counter then, because counter alone IMO is a
debuggability nightmare. At least if you are not the person implementing
the counter.

>> I feel that the error counter could / should be re-used for this and
>> just the log message should be added.
> 
> Except that it's not an error until we block. We detected an error and
> decided to let it pass through, which is a warning. It would be an error
> if we'd block on it though.

Understood.

>> My munin already monitors the
>> error counts. The `eresp` counter seems to fit: "- failure applying
>> filters to the response.".
> 
> If you see an error, you have the guarantee that the request or response
> was blocked, so definitely here it doesn't fit for the case where you
> don't block. And it's very important not to violate such guarantees as
> some people really rely on them. For example during forensics after an
> intrusion attempt on your systems, you really want to know if the attacker
> managed to retrieve something or not.
> 

Understood.

I'll see whether I manage to prepare a first stab at a patch this week.

Best regards
Tim Düsterhus



Re: Domain fronting

2018-05-07 Thread Tim Düsterhus
Holger,
Mildis,

On 07.05.2018 at 22:54, Holger Just wrote:
> This approach is a bit special since regular expressions (or generally
> any compared value) need to be static in HAProxy and can't contain
> dynamically generated values.
> 

FWIW on April, 27th 2018 I shipped a patch adding a strcmp converter to
haproxy master (i.e. 1.9):
https://www.mail-archive.com/haproxy@formilux.org/msg29786.html

@Holger I acknowledged your solution to my question in my initial mail
to that subthread, it's still working fine. Thank you.

@Mildis Make sure to read the sibling mails in the thread also.
Depending on your exact set-up of certificates, you might or might not
break legitimate requests when preventing domain fronting.

Best regards
Tim Düsterhus



Re: Domain fronting

2018-05-07 Thread Holger Just
Hi Mildis (and this time the list too),

Mildis wrote:
> Is there a simple way to limit TLS domain fronting by forcing SNI and Host 
> header to be the same ?
> Maybe add an optional parameter like "strict_sni_host" ?

You can do a little trick here to enforce this without having to rely on
additional code in HAProxy.

What you can do is to build a new temporary HTTP header which contains
the concatenated values of the HTTP host header and the SNI server name
value. Using a regular expression, you can then check that the two
values are the same.

This approach is a bit special since regular expressions (or generally
any compared value) need to be static in HAProxy and can't contain
dynamically generated values.

I often use the following configuration snippet in my frontends (you'll
probably need to remove the newlines added in this mail):

# Enforce that the TLS SNI field (if provided) matches the HTTP hostname
# This is a bit "hacky" as HAProxy neither allows to compare two
# headers directly nor allows dynamic patterns in general. Thus, we
# concatenate the HTTP Header and the SNI field in an internal header
# and check if the same value is repeated in that header.
http-request set-header X-CHECKSNI %[req.hdr(host)]==%[ssl_fc_sni] if { ssl_fc_has_sni }

# This needs to be a named capture because of "reasons". Backreferences
# to normal captures are rejected by (my version of) HAProxy
http-request deny if { ssl_fc_has_sni } ! { hdr(X-CHECKSNI) -m reg -i ^(?<h>.+)==\1$ }

# Cleanup after us
http-request del-header X-CHECKSNI

Cheers, Holger



Re: Question on Caching.

2018-05-07 Thread Aaron West
Hi Willy,

I think what we are looking for is some kind of small cache to
accelerate the load times of a single page; this is particularly for
things such as WordPress where page load times can be slow. I imagine
it being set to cache the homepage only, fairly small (just a few K)
and I guess it would need to only cache the HTML body rather than
headers... Does that make any sense at all?

It may be that the small object cache would help? Or the idea itself
may be a waste of time... Currently, I've been looking at the Apache
module mod_cache.
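Since the thread mentions the small object cache: haproxy 1.8 does ship one, though it caches the whole response (headers included), not just the body, and objects must fit in a buffer. A hedged sketch with placeholder names, addresses, and sizes:

```haproxy
cache homepage_cache
    total-max-size 4   # total cache size in megabytes
    max-age 60         # seconds an object stays fresh

frontend fe_web
    bind *:80
    default_backend be_wordpress

backend be_wordpress
    # serve from cache when possible, store cacheable responses
    http-request cache-use homepage_cache
    http-response cache-store homepage_cache
    server web1 192.0.2.10:80
```

Whether this helps a WordPress homepage depends on the Cache-Control headers the origin sends, since haproxy 1.8 only stores GET responses it considers cacheable.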

I'd value your opinion either way.

Aaron West

Loadbalancer.org Ltd.

www.loadbalancer.org

+1 888 867 9504 / +44 (0)330 380 1064
aa...@loadbalancer.org



stable-bot: WARNING: 13 bug fixes in queue for next release

2018-05-07 Thread stable-bot
Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent every week once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.

Last release 1.8.8 was issued on 2018/04/19.  There are currently 13 patches in 
the queue cut down this way:
- 1 MAJOR, first one merged on 2018/04/26
- 4 MEDIUM, first one merged on 2018/04/26
- 8 MINOR, first one merged on 2018/04/26

Thus the computed ideal release date for 1.8.9 would be 2018/05/10, which is in 
one week or less.

The current list of patches in the queue is:
- MAJOR   : channel: Fix crash when trying to read from a closed socket
- MEDIUM  : threads: Fix the sync point for more than 32 threads
- MEDIUM  : h2: implement missing support for chunked encoded uploads
- MEDIUM  : lua: Fix segmentation fault if a Lua task exits
- MEDIUM  : task: Don't free a task that is about to be run.
- MINOR   : lua: schedule socket task upon lua connect()
- MINOR   : lua: Put tasks to sleep when waiting for data
- MINOR   : pattern: Add a missing HA_SPIN_INIT() in pat_ref_newid()
- MINOR   : map: correctly track reference to the last ref_elt being dumped
- MINOR   : lua/threads: Make lua's tasks sticky to the current thread
- MINOR   : checks: Fix check->health computation for flapping servers
- MINOR   : config: disable http-reuse on TCP proxies
- MINOR   : log: t_idle (%Ti) is not set for some requests

---
The haproxy stable-bot is freely provided by HAProxy Technologies to help 
improve the quality of each HAProxy release.  If you have any issue with these 
emails or if you want to suggest some improvements, please post them on the 
list so that the solutions suiting the most users can be found.