Re: Can you block this?
On Thu, Feb 23, 2023 at 06:48:14PM -0700, Bryan Arenal wrote:
> Hi there,
>
> I'm seeing some traffic from what appears to be bad actors and am
> wanting to block them. I see this in the existing config but, being
> new to haproxy, it doesn't seem like it's configured correctly, but I'm
> not sure:
>
> frontend main
>     bind :80
>     acl bad_ip src
>     acl bad_ip_proxy hdr_ip(X-Forwarded-For)

Off the top of my head, this should probably be:

    acl bad_ip src -f /etc/haproxy/blocklist.lst
    acl bad_ip_proxy hdr_ip(X-Forwarded-For) -f /etc/haproxy/blocklist.lst

> tcp-request connection reject if bad_ip || bad_ip_proxy

I'm not sure offhand about the processing order for the header case: "tcp-request connection" rules run before any HTTP parsing, so the X-Forwarded-For ACL can only be evaluated at the HTTP layer, where the action is "http-request deny" rather than "reject". You might need BOTH:

    tcp-request connection reject if bad_ip
    http-request deny if bad_ip || bad_ip_proxy

Depending on the scale of the traffic, the one problem you'll have here is that HAProxy still has to process the problematic requests. In that case I suggest writing a feedback loop that adds the bad IP to an ipset set, to block the traffic before it gets to HAProxy, for some period of time. The trigger for the loop can either be a tail on the logfile, or using some variation of the set* functionality (set-acl, set-map, set-mark) and exporting the data to ipset.

-- 
Robin Hugh Johnson
Pronouns : They/he
E-Mail : robb...@orbis-terrarum.net
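The log-tail variant of that feedback loop could be sketched roughly like this. This is only a sketch under assumptions not in the original mail: it assumes the default syslog-style HTTP log format (client address as `ip:port` in the sixth field), a pre-created ipset set named `haproxy_block`, and a matching iptables rule; the threshold and timeout values are made up.

```shell
#!/bin/sh
# Hypothetical log-driven ipset feedback loop; the set name, threshold,
# and log path are illustrative, not from the original mail.
LOG=${LOG:-/var/log/haproxy.log}
SET=${SET:-haproxy_block}
THRESHOLD=${THRESHOLD:-100}

# Print each client IP the first time it crosses THRESHOLD blocked (403)
# responses. Assumes the default syslog HTTP log format, where field 6
# is the client address as ip:port.
offenders() {
    awk -v thr="$THRESHOLD" '
        / 403 / {
            ip = $6; sub(/:[0-9]+$/, "", ip)
            if (++count[ip] == thr) { print ip; fflush() }
        }' "$@"
}

# Kernel-level block: requires root, an existing set
# ("ipset create haproxy_block hash:ip timeout 3600") and a rule such as
# "iptables -I INPUT -m set --match-set haproxy_block src -j DROP".
block_loop() {
    tail -F "$LOG" | offenders | while read -r ip; do
        ipset add "$SET" "$ip" timeout 3600 -exist
    done
}
```

Run block_loop under a supervisor; entries then age out of the set on their own via the ipset timeout, so blocks are temporary as described above.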
[2.4.2] Thread .. is about to kill the process - Lua-involved
=== Resending this, with the threading broken, so that other readers hopefully see it. It was previously buried in the thread. ===

Hi,

This is a followup to the prior threads about 100% CPU usage in 2.2.x & 2.3.x, where I referenced heavy workloads causing HAProxy to initially hit 100% CPU; then, after the watchdog detection was added, the watchdog just killed the process instead.

After months of searching, at work we stumbled onto an internal-only reproduction case using a tool we wrote that makes millions of requests: turning it up to around ~6K RPS, with lots of the headers being processed by our Lua code, triggered the issue, running on a single-socket EPYC 7702P system.

We also found a surprising mitigation: enabling multithreaded Lua with "lua-load-per-thread" made the problem go away entirely (and gave a modest 10% performance boost; we are mostly limited by backend servers, not HAProxy or Lua).

The Lua script was described in the previous thread, and only does complex string parsing, used for variables, and driving some applets. It doesn't do any blocking operations, sockets, or files, and doesn't rely on globals. It got a few cleanups for multi-threaded usage (forcing more variables to be explicitly local), but has no other significant changes relevant to this discussion (it had some business-logic changes to the string handling used to compute stick-table keys, but not really functionality changes).

The full errors are attached along with the decoded core dumps, with some details redacted per $work security team requirements. I reproduced the error twice and both attempts are attached, 4 files in total. I'll repeat the short form here, from just one of the occurrences:

Thread 23 is about to kill the process.
...
 *>Thread 23: id=0x7f78d4ff9700 act=1 glob=1 wq=1 rq=1 tl=1 tlsz=8 rqsz=30
      stuck=1 prof=0 harmless=0 wantrdv=0
      cpu_ns: poll=63597658036 now=66142046101 diff=2544388065
      curr_task=0x7f7888c9db60 (task) calls=1 last=0
        fct=0x55e4b662ca30(process_stream) ctx=0x7f78890147e0
      strm=0x7f78890147e0 src=REDACTED-CLIENT-IP fe=tls-ipv4 be=tls-ipv4 dst=unknown
      rqf=40d08002 rqa=30 rpf=8000 rpa=0 sif=EST,200020 sib=INI,30
      af=(nil),0 csf=0x7f7888d02bb0,104000
      ab=(nil),0 csb=(nil),0
      cof=0x7f78b0685500,80003300:H1(0x7f78886e1fa0)/SSL(0x7f78886c3270)/tcpv4(3900)
      cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0)
      Current executing Lua from a stream analyser --
      stack traceback:
      call trace(20):
      | 0x55e4b6759008 [eb c5 66 0f 1f 44 00 00]: wdt_handler+0x98/0x129
      | 0x7f78ef9f4980 [48 c7 c0 0f 00 00 00 0f]: libpthread:+0x12980
      | 0x55e4b65d650a [48 8b 05 2f b9 45 00 48]: main+0x346da
      | 0x55e4b667778d [85 c0 0f 84 8b 00 00 00]: sample_process+0x4d/0x127
      | 0x55e4b670ba99 [48 85 c0 0f 84 c1 00 00]: main+0x169c69
      | 0x55e4b660a421 [83 f8 07 4c 8b 0c 24 0f]: main+0x685f1
      | 0x55e4b660d436 [83 f8 07 48 8b 4c 24 20]: http_process_req_common+0xf6/0x1659
      | 0x55e4b662e508 [85 c0 0f 85 0a f4 ff ff]:

[NOTICE] (35377) : haproxy version is 2.4.2-1ppa1~bionic
[NOTICE] (35377) : path to executable is /usr/sbin/haproxy
[ALERT] (35377) : Current worker #1 (35404) exited with code 134 (Aborted)
[ALERT] (35377) : exit-on-failure: killing every processes with SIGTERM
[WARNING] (35377) : All workers exited. Exiting... (134)

-- 
Robin Hugh Johnson
E-Mail : robb...@orbis-terrarum.net
Home Page : http://www.orbis-terrarum.net/?l=people.robbat2
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85

Attachments: syslog.35404.gz, syslog.34872.gz,
core.haproxy.35404.HOSTNAME.1627083123.txt.gz,
core.haproxy.34872.HOSTNAME.1627082514.txt.gz
Re: [2.2.11] 100% CPU again
On Wed, Apr 21, 2021 at 01:53:32PM +0200, Christopher Faulet wrote:
> On 21/04/2021 08:48, Maciej Zdeb wrote:
> > I'm very happy you managed to reproduce a similar issue! :)
>
> The fix was merged in upstream:
>
> * BUG/MAJOR: mux-h2: Properly detect too large frames when decoding headers
>   (http://git.haproxy.org/?p=haproxy.git;a=commit;h=07f88d75)
>
> It has not been backported yet. But it may be manually applied on the 2.3 and
> 2.2 if required. If so, optionally, this other one may be applied too:
>
> * BUG/MEDIUM: mux-h2: Fix dfl calculation when merging CONTINUATION frames
>   (http://git.haproxy.org/?p=haproxy.git;a=commit;h=cb1847c7)

Thanks for these patches and the other nghttp patch you sent. I'll try to get a local build out on our systems (2.2), but based on our backtraces I'm worried the problem won't be fixed by these patches, because HTTP/2 isn't used in our systems yet (it's on the roadmap).

-- 
Robin Hugh Johnson
E-Mail : robb...@orbis-terrarum.net
Home Page : http://www.orbis-terrarum.net/?l=people.robbat2
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: [2.2.11] 100% CPU again
On Tue, Apr 20, 2021 at 06:38:48PM +0200, Christopher Faulet wrote:
> I'm able to reproduce a similar bug hacking the nghttp2 client to send at most
> 16383 bytes per frame (instead of 16384). By sending too large headers, we are
> falling into a wakeup loop, waiting for more data while the buffer is full.
>
> I have a fix but I must check with Willy how to proceed because I'm not 100%
> sure for now.

Can you share the hack to nghttp2 so I can compare against our crash behaviors?

-- 
Robin Hugh Johnson
E-Mail : robb...@orbis-terrarum.net
Home Page : http://www.orbis-terrarum.net/?l=people.robbat2
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: Still 100% CPU usage in 2.3.9 & 2.2.13 (Was: Re: [2.2.9] 100% CPU usage)
On Thu, Apr 15, 2021 at 07:53:15PM +0000, Robin H. Johnson wrote:
> But your thought of CPU pinning was good.
> I went to confirm it in the host, and I'm not certain if the cpu-map is
> working right.

Ignore me; long day, and I didn't think to check each thread PID:

# ps -e -T | grep haproxy -w | awk '{print $2}' | xargs -n1 taskset -pc
pid 120689's current affinity list: 0-127
pid 120702's current affinity list: 0
pid 120706's current affinity list: 1
..
pid 120776's current affinity list: 63

-- 
Robin Hugh Johnson
E-Mail : robb...@orbis-terrarum.net
Home Page : http://www.orbis-terrarum.net/?l=people.robbat2
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: Still 100% CPU usage in 2.3.9 & 2.2.13 (Was: Re: [2.2.9] 100% CPU usage)
On Thu, Apr 15, 2021 at 09:23:07AM +0200, Willy Tarreau wrote:
> On Thu, Apr 15, 2021 at 07:13:53AM +0000, Robin H. Johnson wrote:
> > Thanks; I will need to catch it faster or automate this, because the
> > watchdog does a MUCH better job restarting it than before: less than 30
> > seconds of 100% CPU before the watchdog reliably kills it.
>
> I see. Then collecting the watchdog outputs can be instructive to see
> if it always happens at the same place or not. And the core dumps will
> indicate what all threads were doing (and if some were competing on a
> lock for example).

The truncation in the log output for the crash was interesting; I couldn't see why it was being cut off. I wish we could get a clean reproduction in our testing environment, because the production core dumps absolutely have customer data in them.

> > Varnish runs on the same host and is used to cache some of the backends.
> > Plenty of free memory at the moment.
>
> I'm now thinking about something. Do you have at least as many CPUs as the
> total number of threads used by haproxy and varnish? Otherwise there will
> be some competition and migrations will happen. If neither is bounded, you
> can even end up with two haproxy threads forced to run on the same CPU,
> which is the worst situation as one could be scheduled out with a lock
> held and the other one spinning waiting for this lock.

Single socket, AMD EPYC 7702P 64-Core Processor, 128 threads! It shows as a single NUMA node in our present configuration. Hopefully the kernel is mostly doing the right thing, but read on.

HAProxy is already pinned to the first 64 threads with:

    cpu-map 1/1 0
    ...
    cpu-map 1/64 63

Varnish isn't explicitly pinned right now, but uses less than 200% CPU overall (we know most requests aren't cachable, so they don't get routed to Varnish at all).

But your thought of CPU pinning was good. I went to confirm it in the host, and I'm not certain if the cpu-map is working right.
# pid_haproxy_leader=68839 ; pid_haproxy_follower=68848
# taskset -pc $pid_haproxy_leader
pid 68839's current affinity list: 0-127
# taskset -pc $pid_haproxy_follower
pid 68848's current affinity list: 0

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
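As an editorial aside on the pinning itself: the 64 explicit cpu-map lines quoted in this thread can be collapsed into a single directive on the HAProxy versions discussed here. A sketch, assuming the "auto:" cpu-map syntax (available since 1.9) and a single process/thread group:

```
global
    nbthread 64
    # Bind threads 1..64 of process 1 to CPUs 0..63, one-to-one;
    # equivalent to writing "cpu-map 1/1 0" through "cpu-map 1/64 63".
    cpu-map auto:1/1-64 0-63
```

The per-thread affinities can then be verified with the ps/taskset pipeline shown above.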
Re: Still 100% CPU usage in 2.3.9 & 2.2.13 (Was: Re: [2.2.9] 100% CPU usage)
On Thu, Apr 15, 2021 at 08:59:35AM +0200, Willy Tarreau wrote:
> On Wed, Apr 14, 2021 at 01:53:06PM +0200, Christopher Faulet wrote:
> > > nbthread=64, nbproc=1 on both 1.8/2.x
> >
> > It is thus surprising, if it is really a contention issue, that you never
> > observed slow down on the 1.8. There is no watchdog, but the thread
> > implementation is a bit awkward on the 1.8. 2.X are better on this point,
> > the best being the 2.4.
>
> Agreed, I'd even say that 64 threads in 1.8 should be way slower than
> a single thread.
>
> A few things are worth having a look at, Robin:
> - please run "perf top" when this happens, and when you see a function
>   reporting a high usage, do not hesitate to navigate through it, pressing
>   enter, and "annotate function". Then scrolling through it will reveal
>   some percentage of time certain points were met. It's very possible
>   you'll find that 100% of the CPU is used in 1, 2 or 3 functions and
>   that there is a logic error somewhere. Usually if you find a single one
>   you'll end up spotting a lock.

Thanks; I will need to catch it faster or automate this, because the watchdog does a MUCH better job restarting it than before: less than 30 seconds of 100% CPU before the watchdog reliably kills it.

> - please also check if your machine is not swapping, as this can have
>   terrible consequences and could explain why it only happens on certain
>   machines dealing with certain workloads. I remember having spent several
>   weeks many years ago chasing a response-time issue happening only in the
>   morning, which was in fact caused by the log upload batch having swapped
>   and left a good part of the unused memory in the swap, making it very
>   difficult for the network stack to allocate buffers during send() and
>   recv(), thus causing losses and retransmits as the load grew. This was
>   never reproducible in the lab because we were not syncing logs :-)

512GiB RAM and no swap configured on the system at all.
Varnish runs on the same host and is used to cache some of the backends. Plenty of free memory at the moment.

-- 
Robin Hugh Johnson
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: Still 100% CPU usage in 2.3.9 & 2.2.13 (Was: Re: [2.2.9] 100% CPU usage)
On Fri, Apr 09, 2021 at 10:14:26PM +0200, Christopher Faulet wrote:
> It seems you have a blocking call in one of your lua scripts. The threads dump
> shows many threads blocked in hlua_ctx_init. Many others are executing lua.
> Unfortunately, for an unknown reason, there is no stack traceback.

All of our Lua is string handling: parsing headers, then building TXN variables, as well as a set of applets that return responses in cases where we don't want to go to a backend server because the response is simple enough to generate inside the LB.

> For the 2.3 and prior, the lua scripts are executed under a global lock. Thus
> blocking calls in a lua script are awful, because they block not only one
> thread but all the others too. I guess the same issue exists on the 1.8, but
> there is no watchdog on this version. Thus, from time to time HAProxy hangs
> and may report huge latencies but, at the end, it recovers and continues to
> process data. It is exactly the purpose of the watchdog, reporting hidden
> bugs related to spinning loops and deadlocks.

Nothing in this Lua code should have blocking calls at all. The Lua code has zero calls to external services, no sockets, no sleeps, no print, no Map.new (a single call in the Lua startup, not inside any applet or fetch), no usage of other packages, no file handling, no other IO. I'm hoping I can get $work to agree to fully open-source the Lua, so you can see this for yourself and review the code to confirm that it SHOULD be non-blocking.

> However, I may be wrong. It may be just a contention problem, because you are
> executing lua with 64 threads and a huge workload. In this case, you may give
> a try to the 2.4 (under development). There is a way to have a separate lua
> context for each thread, loading the scripts with the "lua-load-per-thread"
> directive. Out of curiosity, on the 1.8, are you running HAProxy with several
> threads or are you spawning several processes?
nbthread=64, nbproc=1 on both 1.8/2.x.

Yes, we're hoping to try 2.4.x; just working on some parts to get there.

-- 
Robin Hugh Johnson
E-Mail : robb...@orbis-terrarum.net
Home Page : http://www.orbis-terrarum.net/?l=people.robbat2
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Still 100% CPU usage in 2.3.9 & 2.2.13 (Was: Re: [2.2.9] 100% CPU usage)
Hi,

Maciej had said they were going to create a new thread, but I didn't see one yet.

I want to start by noting the problem was much worse on 2.2.8 & 2.2.9, and that 2.2.13 & 2.3.9 don't get entirely hung at 100% anymore: a big thanks for that initial work in fixing the issue.

As I mentioned in my other mail asking for a 1.8.30 release, we're experiencing this problem in DigitalOcean's HAProxy instances used to run the Spaces product.

I've been trying to dig out deeper detail as well with a debug-threads build, but I have the baseline error output from 2.3.9 here to share, after passing redaction review. This dump was generated with vbernat's PPA version of 2.3.9, not any internal builds.

We have struggled to reproduce the problem in testing environments; it only turns up in the biggest regions, and plotting occurrences of the issue over the time dimension suggests that it might have some partial correlation with a weird workload input.

The dumps do suggest Lua is implicated as well, and we've got some extensive Lua code, so it's impossible to rule it out as contributing to the problem (we have been discussing plans to move it to SPOA instead). The Lua code in question hasn't changed significantly in nearly 6 months, and it was problem-free on the 1.8 series (having a test suite for the Lua code has been invaluable).

-- 
Robin Hugh Johnson
E-Mail : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85

Attachment: 20210409_haproxy-crashlogs.REDACTED.txt.gz
Request for new 1.8.x release due to freq counter bug
Hi,

Wondering if you could make a new 1.8.x release to get the fix for the freq counter bug into more systems?

dde80e111 BUG/MEDIUM: time: make sure to always initialize the global tick
491b86ed0 BUG/MEDIUM: freq_ctr/threads: use the global_now_ms variable

It's a problem for anybody using counters for rate-limiting purposes who can't yet upgrade to the 2.x series due to other issues encountered there (the 100% CPU bug, even on the latest 2.3.9, for which I have logs I'm hoping to post when they are approved by $work).

-- 
Robin Hugh Johnson
E-Mail : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: 'show errors' - logging & reasons
On Fri, Apr 02, 2021 at 06:38:35PM +0200, Willy Tarreau wrote:
> > This has come out of cases where we upgraded HAProxy 1.8 -> 2.2, and
> > $work customers started reporting requests that previously worked fine
> > now return 400 Invalid Request errors.
>
> That's never good. Often it indicates that they've long been doing
> something wrong without ever noticing and that for various reasons
> it's not possible anymore.

Yes; see below for details of what a tool used by customers has been doing wrong.

> > Two things stand out:
> > - way to reliably capture that output, not being limited to the last error
>
> Ideally I'd like to have a global ring buffer to replace all per-proxy
> ones (which also consume a lot of RAM). We could imagine keeping the
> ability to have a per-proxy copy based on a config option.

I'd keep per-proxy support, because I can see cases where different retention might be wanted.

> > - messaging about WHY a given position is an error
> > - partial list of reasons I've seen so far included below
>
> In a capture, the indicated position is *always* an invalid character.
> It may require to have a look at the standards to know why, but it
> seems particularly difficult to me to start to emit the list of all
> permitted characters whenever a forbidden one is met. I remember that
> we emit the parser's state; maybe this is what should be turned into a
> more human-readable form to indicate "in header field name" or "in
> header field value" or stuff like this, which can help the reader
> figure out why the char is not welcome (maybe because they expect that
> a previous one had switched the state).

I wouldn't emit an entire list of permitted characters, but certainly via the parser's state we can point to it by reference. E.g. in the target URI, non-encoded spaces or unicode.
> > Partial list of low-level invalid request reasons
> > - path/queryparams has character that was supposed to be encoded
> > - header key invalid character for given position
> > - header line malformed (no colon!)
> > - header value invalid relative to prior pieces of request**
>
> For the last one we will not have a position because the request is
> *semantically* invalid; it's not a parsing issue.

Hmm, I think it does have a position value, but one that didn't seem to make sense.

> > ** This one is bugging me: user requests with an absolute URI as the
> > urlpath, but the hostname in that URI does not match the hostname in the
> > Host header.
>
> This is mandated by the standard:
>
>   https://tools.ietf.org/html/rfc7230#section-5.4
>
>   If the target URI includes an authority component, then a
>   client MUST send a field-value for Host that is identical to that
>   authority component, excluding any userinfo subcomponent and its "@"
>   delimiter (Section 2.7.1).
>   ...
>   A server MUST respond with a 400 (Bad Request) status code to any
>   HTTP/1.1 request message that lacks a Host header field and to any
>   request message that contains more than one Host header field or a
>   Host header field with an invalid field-value.
>
> Do you regularly encounter this case? If so maybe we could have an
> option to relax the rule in certain cases. The standard allows proxies
> to ignore the provided Host field and rewrite it from the authority.
> Note that we're not a proxy but a gateway, and it's often the case that
> a number of other gateways (and possibly proxies) have been met from
> the client, so we wouldn't do that by default, but with a compelling
> case I wouldn't find this problematic.

If this changed in 1.8 -> 2.2, then yes, it's absolutely the case. I have already asserted to $work customers that the third-party tool being used is doing the wrong thing, but I haven't had much traction in getting them to change what they are doing, since it needs.
I can't disclose the name of the tool in question on the mailing list, but it chooses to implement HTTPS by having users of the tool run a local stunnel, pointing to the $work service, and point the tool at stunnel as an http proxy.

As for 'regularly', what is the metric to compare that to? It's a tiny fraction of overall requests, but has been loud at the support level.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
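For readers following the RFC 7230 section 5.4 discussion above, the rule can be stated as a tiny predicate. This is a hypothetical illustration of the standard's requirement, not HAProxy's actual parser; the helper name is made up, and the userinfo-stripping and port-normalization details of the RFC are glossed over.

```shell
# Hypothetical checker for the RFC 7230 section 5.4 rule: if the
# request-target is in absolute-form, its authority must equal the Host
# header value; origin-form targets ("/path") carry no authority and pass.
authority_matches_host() {
    target=$1 host=$2
    # Pull "example.com" out of "http://example.com/path", if present.
    authority=$(printf '%s\n' "$target" \
        | sed -n 's|^[a-z][a-z0-9+.-]*://\([^/?#]*\).*|\1|p')
    [ -z "$authority" ] || [ "$authority" = "$host" ]
}
```

A request line like `GET http://a.example/ HTTP/1.1` paired with `Host: b.example` fails this predicate, which is exactly the mismatch producing the 400s described above.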
'show errors' - logging & reasons
Hi,

I'm wondering if there are any ongoing development or improvement plans around the 'show errors' functionality?

This has come out of cases where we upgraded HAProxy 1.8 -> 2.2, and $work customers started reporting requests that previously worked fine now return 400 Invalid Request errors.

Two things stand out:
- way to reliably capture that output, not being limited to the last error
- messaging about WHY a given position is an error
  - partial list of reasons I've seen so far included below

Partial list of low-level invalid request reasons:
- path/queryparams has a character that was supposed to be encoded
- header key has an invalid character for the given position
- header line malformed (no colon!)
- header value invalid relative to prior pieces of the request**

** This one is bugging me: user requests with an absolute URI as the urlpath, but the hostname in that URI does not match the hostname in the Host header.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
Re: Removal / obsolescence of keywords in 2.3 and future - replacing 'monitor-uri' w/ Lua
On Wed, Oct 14, 2020 at 03:35:30PM +0200, Tim Düsterhus wrote:
> I believe I already said it somewhere: The most valuable thing about
> monitor-uri is that it does not create entries within the access log. I
> don't think that can be replicated with http-request return as of now,
> but I am happy to learn otherwise.

I recently had to implement some health-of-HAProxy work, and went with lua-services since HAProxy 1.8 didn't have 'http-request return', but the other parts of the solution are still relevant. 'http-request set-log-level silent if ...' was the key to NOT logging the health checks of HAProxy.

The Lua service used below is fairly trivial, but it's in Lua rather than errorfiles or something else because we had some dynamic data built into it, and that worked well with existing code.

Rest of what I had built:

# Fill these in with something meaningful for your use-case:
acl haproxy_maint ... # map or dynamic-acl
acl haproxy_drain ... # map or dynamic-acl
acl haproxy_drain stopping eq 1
acl health_permitted_src src ... # what IPs are allowed to health-check

# All of the paths:
acl health_ANY path -m beg /haproxy-health/
# Usual check point:
acl health_check path -m str /haproxy-health/check
# Special cases if you want to force an output:
acl health_force_up path -m str /haproxy-health/force-up
acl health_force_ready path -m str /haproxy-health/force-ready
acl health_force_down path -m str /haproxy-health/force-down
acl health_force_drain path -m str /haproxy-health/force-drain
acl health_force_maint path -m str /haproxy-health/force-maint

# Deny unauthorized access to healthcheck
# This will be logged!
http-request set-log-level warning if health_ANY !health_permitted_src
http-request deny if health_ANY !health_permitted_src

# Make normal access to healthcheck silent, so it does not spam logs
http-request set-log-level silent if health_ANY health_permitted_src

# Forced states
# all implicit health_permitted_src in these:
http-request use-service lua.health_up if health_force_up
http-request use-service lua.health_ready if health_force_ready
http-request use-service lua.health_down if health_force_down
http-request use-service lua.health_drain if health_force_drain
http-request use-service lua.health_maint if health_force_maint

# Actual check:
# all implicit health_permitted_src in these:
# 1. If the sysadmin created a maint flag, return that
http-request use-service lua.health_maint if health_check haproxy_maint
# 2. If HAProxy is stopping, return a DRAIN state
http-request use-service lua.health_drain if health_check haproxy_drain
# 3. If the backend is up, return up
http-request use-service lua.health_up if health_check
# 4. If the backend is down, return down
http-request use-service lua.health_down if !health_check

-- 
Robin Hugh Johnson
E-Mail : robb...@orbis-terrarum.net
Home Page : http://www.orbis-terrarum.net/?l=people.robbat2
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: Loading multiple TLS certificates
On Mon, May 13, 2019 at 09:10:15PM +, Gibson, Brian (IMS) wrote:
> For the first time, I have a client that refused to let me use a wildcard
> certificate. So I submitted 6 separate CSRs and now have 6 separate
> certificates and 6 separate keys. The intermediate certificates all appear
> to be the same. So should I create 6 separate PEM files containing the
> certificate, the intermediates, and the key, or should I create a single
> PEM file containing all 6 certificates, 6 keys, and 1 intermediate file?

6 distinct files, and map them all via crt-list.

-- 
Robin Hugh Johnson
E-Mail : robb...@orbis-terrarum.net
Home Page : http://www.orbis-terrarum.net/?l=people.robbat2
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
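For the archives, that answer could be mechanized roughly like this; all filenames and the directory layout here are hypothetical:

```shell
#!/bin/sh
# Build one bundled PEM per site (leaf certificate + shared intermediate +
# private key concatenated into a single file, a layout HAProxy accepts
# for a "crt" entry) and a crt-list file referencing all of them.
build_crt_list() {
    : > crt-list.txt
    for site in "$@"; do
        cat "$site.crt" intermediate.crt "$site.key" > "$site.pem"
        echo "$PWD/$site.pem" >> crt-list.txt
    done
}
```

After running it for the six sites, the frontend would just need something like `bind :443 ssl crt-list /etc/haproxy/crt-list.txt`; SNI then selects the right certificate per hostname.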
[PATCH] MINOR: skip get_gmtime where tm is unused
For LOG_FMT_TS (%Ts), the tm variable is not used, so save some cycles on the call to get_gmtime.

Backport: 1.9 1.8

Signed-off-by: Robin H. Johnson 
---
 src/log.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/log.c b/src/log.c
index f8d3414e2..39e472b33 100644
--- a/src/log.c
+++ b/src/log.c
@@ -2019,7 +2019,6 @@ int sess_build_logline(struct session *sess, struct stream *s, char *dst, size_t
 				break;
 
 			case LOG_FMT_TS: // %Ts
-				get_gmtime(logs->accept_date.tv_sec, &tm);
 				if (tmp->options & LOG_OPT_HEXA) {
 					iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", (unsigned int)logs->accept_date.tv_sec);
 					if (iret < 0 || iret > dst + maxsize - tmplog)
-- 
2.21.0
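Editorial note on why the dropped call is dead code: both output branches of the %Ts case format the raw epoch seconds directly (decimal, or "%04X" hex under the +X log option) and never read the struct tm that get_gmtime() fills in. The formatting can be mimicked in shell for illustration (the sample epoch value is just the timestamp from a core filename in this archive):

```shell
# %Ts emits logs->accept_date.tv_sec itself; no broken-down time needed.
ts=1627083123   # arbitrary sample epoch, taken from a core filename above
printf 'decimal %%Ts:  %d\n' "$ts"
printf 'hex (+X) %%Ts: %04X\n' "$ts"
```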
[PATCH 3/3] MEDIUM: lua: expose safe fetch/conv via val_args_flags
Any val_args function with side-effects is unsafe for use in the Lua context, and previously this stopped sample fetches/converters from having object methods created except where explicitly whitelisted (val_payload_lv, val_hdr).

Using the new val_args_flags field, we can track which sample fetches/converters are safe, without having to explicitly whitelist them in the Lua codebase. Before any val_args function is used, the val_args_flags field is checked as an additional safety measure.

The following fetches/converters are now available from Lua:
- 51d.all (51d_all)
- 51d.single (51d_all)
- distcc_body
- distcc_param
- bool
- meth
- json
- field
- word
- regsub

Initial-Discovery: Yue Zhu 
Signed-off-by: Robin H. Johnson 
Signed-off-by: Robin H. Johnson 
---
 src/hlua.c | 28 +++-
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/src/hlua.c b/src/hlua.c
index a9d126b53..6ddfed4b3 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -3299,9 +3299,14 @@ __LJMP static int hlua_run_sample_fetch(lua_State *L)
 	MAY_LJMP(hlua_lua2arg_check(L, 2, args, f->arg_mask, hsmp->p));
 
 	/* Run the special args checker. */
-	if (f->val_args && !f->val_args(args, NULL)) {
-		lua_pushfstring(L, "error in arguments");
-		WILL_LJMP(lua_error(L));
+	if (f->val_args) {
+		if (unlikely(f->val_args_flags != SMP_VAL_ARGS_F_SAFE)) {
+			lua_pushfstring(L, "argument validation is unsafe");
+			WILL_LJMP(lua_error(L));
+		} else if (!f->val_args(args, NULL)) {
+			lua_pushfstring(L, "error in arguments");
+			WILL_LJMP(lua_error(L));
+		}
 	}
 
 	/* Initialise the sample. */
@@ -3405,9 +3410,14 @@ __LJMP static int hlua_run_sample_conv(lua_State *L)
 	MAY_LJMP(hlua_lua2arg_check(L, 3, args, conv->arg_mask, hsmp->p));
 
 	/* Run the special args checker. */
-	if (conv->val_args && !conv->val_args(args, conv, "", 0, NULL)) {
-		hlua_pusherror(L, "error in arguments");
-		WILL_LJMP(lua_error(L));
+	if (conv->val_args) {
+		if (unlikely(conv->val_args_flags != SMP_VAL_ARGS_F_SAFE)) {
+			lua_pushfstring(L, "argument validation is unsafe");
+			WILL_LJMP(lua_error(L));
+		} else if (!conv->val_args(args, conv, "", 0, NULL)) {
+			hlua_pusherror(L, "error in arguments");
+			WILL_LJMP(lua_error(L));
+		}
 	}
 
 	/* Initialise the sample. */
@@ -7668,8 +7678,7 @@ void hlua_init(void)
 		 * not safe during the runtime.
 		 */
 		if ((sf->val_args != NULL) &&
-		    (sf->val_args != val_payload_lv) &&
-		    (sf->val_args != val_hdr))
+		    (sf->val_args_flags != SMP_VAL_ARGS_F_SAFE))
 			continue;
 
 		/* gL.Tua doesn't support '.' and '-' in the function names, replace it
@@ -7714,7 +7723,8 @@ void hlua_init(void)
 		/* Dont register the keywork if the arguments check function are
 		 * not safe during the runtime.
 		 */
-		if (sc->val_args != NULL)
+		if ((sc->val_args != NULL) &&
+		    (sc->val_args_flags != SMP_VAL_ARGS_F_SAFE))
 			continue;
 
 		/* gL.Tua doesn't support '.' and '-' in the function names, replace it
-- 
2.18.0
[PATCH 1/3] MINOR: samples: Prep for val_args_flags
In preparation for val_args_flags, introduce a macro that is used to populate the val_args field in sample_conv_kw_list and sample_fetch_kw_list. This commit produces code that has zero changes after preprocessor expansion. All sample registration can be found with: $ git grep -n -e 'sample.*kw_list.*ILH' -A5000 \ | sed -r -n '/struct sample_(conv|fetch)_kw_list\>/,/};/p' Initial-Discovery: Yue Zhu Signed-off-by: Robin H. Johnson Signed-off-by: Robin H. Johnson --- include/types/sample.h | 6 + src/51d.c | 4 +- src/backend.c | 28 ++--- src/connection.c | 4 +- src/da.c | 4 +- src/flt_http_comp.c| 4 +- src/frontend.c | 10 +- src/listener.c | 4 +- src/map.c | 62 +-- src/payload.c | 54 - src/proto_http.c | 164 +-- src/proto_tcp.c| 28 ++--- src/sample.c | 110 +-- src/ssl_sock.c | 102 - src/stick_table.c | 244 - src/vars.c | 7 +- src/wurfl.c| 4 +- 17 files changed, 422 insertions(+), 417 deletions(-) diff --git a/include/types/sample.h b/include/types/sample.h index cf0528edc..a0a440512 100644 --- a/include/types/sample.h +++ b/include/types/sample.h @@ -342,4 +342,10 @@ struct sample_conv_kw_list { typedef int (*sample_cast_fct)(struct sample *smp); extern sample_cast_fct sample_casts[SMP_TYPES][SMP_TYPES]; +/* Macros for easier to use val_args list usage + * All sample_conv_kw_list/sample_fetch_kw_list should use the SMP_VAL_ARGS macro to populate the + * val_args field. 
+ */ +#define SMP_VAL_ARGS(func)func + #endif /* _TYPES_SAMPLE_H */ diff --git a/src/51d.c b/src/51d.c index a36333ef5..f415fac26 100644 --- a/src/51d.c +++ b/src/51d.c @@ -668,13 +668,13 @@ static struct cfg_kw_list _51dcfg_kws = {{ }, { /* Note: must not be declared as its list will be overwritten */ static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, { - { "51d.all", _51d_fetch, ARG5(1,STR,STR,STR,STR,STR), _51d_fetch_check, SMP_T_STR, SMP_USE_HRQHV }, + { "51d.all", _51d_fetch, ARG5(1,STR,STR,STR,STR,STR), SMP_VAL_ARGS(_51d_fetch_check), SMP_T_STR, SMP_USE_HRQHV }, { NULL, NULL, 0, 0, 0 }, }}; /* Note: must not be declared as its list will be overwritten */ static struct sample_conv_kw_list conv_kws = {ILH, { - { "51d.single", _51d_conv, ARG5(1,STR,STR,STR,STR,STR), _51d_conv_check, SMP_T_STR, SMP_T_STR }, + { "51d.single", _51d_conv, ARG5(1,STR,STR,STR,STR,STR), SMP_VAL_ARGS(_51d_conv_check), SMP_T_STR, SMP_T_STR }, { NULL, NULL, 0, 0, 0 }, }}; diff --git a/src/backend.c b/src/backend.c index b3fd6c677..a2bc13d68 100644 --- a/src/backend.c +++ b/src/backend.c @@ -1877,25 +1877,25 @@ static int sample_conv_nbsrv(const struct arg *args, struct sample *smp, void *p * Please take care of keeping this list alphabetically sorted. 
*/ static struct sample_fetch_kw_list smp_kws = {ILH, { - { "avg_queue", smp_fetch_avg_queue_size, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "be_conn", smp_fetch_be_conn,ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "be_id", smp_fetch_be_id, 0, NULL, SMP_T_SINT, SMP_USE_BKEND, }, - { "be_name", smp_fetch_be_name,0, NULL, SMP_T_STR, SMP_USE_BKEND, }, - { "be_sess_rate", smp_fetch_be_sess_rate, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "connslots", smp_fetch_connslots, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "nbsrv", smp_fetch_nbsrv, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "queue", smp_fetch_queue_size, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "srv_conn", smp_fetch_srv_conn, ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "srv_id",smp_fetch_srv_id, 0, NULL, SMP_T_SINT, SMP_USE_SERVR, }, - { "srv_is_up", smp_fetch_srv_is_up, ARG1(1,SRV), NULL, SMP_T_BOOL, SMP_USE_INTRN, }, - { "srv_queue", smp_fetch_srv_queue, ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, }, - { "srv_sess_rate", smp_fetch_srv_sess_rate, ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, }, + { "avg_queue", smp_fetch_avg_queue_size, ARG1(1,BE), SMP_VAL_ARGS(NULL), SMP_T_SINT, SMP_USE_INTRN, }, + { "be_conn", smp_fetch_be_conn,ARG1(1,BE), SMP_VAL_ARGS(NULL), SMP_T_SINT, SMP_USE_INTRN, }, + { "be_id", smp_fetch_be_id, 0, SMP_VAL_ARGS(NULL), SMP_T_SINT, SMP_USE_BKEND, }, + { &
[PATCH 2/3] MEDIUM: samples: add val_args_flags
Some argument validation (val_args) functions have unsafe side-effects, introduce val_args_flags field to track which functions have side-effects. Keep the actual values in preprocessor symbols so that they can be kept with the functions, and consumed via macros in the static lists. Initial-Discovery: Yue Zhu Signed-off-by: Robin H. Johnson Signed-off-by: Robin H. Johnson --- include/types/sample.h | 22 -- src/51d.c | 2 ++ src/hlua.c | 3 +++ src/map.c | 1 + src/payload.c | 2 ++ src/proto_http.c | 1 + src/sample.c | 7 +++ src/vars.c | 2 ++ 8 files changed, 38 insertions(+), 2 deletions(-) diff --git a/include/types/sample.h b/include/types/sample.h index a0a440512..0f4e93c30 100644 --- a/include/types/sample.h +++ b/include/types/sample.h @@ -206,6 +206,18 @@ enum { SMP_F_CONST = 1 << 7, /* This sample use constant memory. May diplicate it before changes */ }; +/* Flags used to describe val_args functions. + * In future, some of these conditions might be permitted, but for now they are + * only markers about types of unsafe functions. + */ +enum { + SMP_VAL_ARGS_F_SAFE = 0,/* no side-effects! */ + SMP_VAL_ARGS_F_UNSAFE_VAR = 1 << 0, /* unsafe side-effect: creates variable */ + SMP_VAL_ARGS_F_UNSAFE_ALLOC = 1 << 1, /* unsafe side-effect: allocates */ + SMP_VAL_ARGS_F_UNSAFE_FILE = 1 << 2, /* unsafe side-effect: file access */ + SMP_VAL_ARGS_F_UNSAFE_ANY = (1<<0)|(1<<1)|(1<<2), /* any unsafe side-effect */ +}; + /* needed below */ struct session; struct stream; @@ -291,6 +303,7 @@ struct sample_conv { struct sample_conv *smp_conv, const char *file, int line, char **err_msg); /* argument validation function */ + unsigned int val_args_flags; /* argument validation flags */ unsigned int in_type; /* expected input sample type */ unsigned int out_type;/* output sample type */ void *private;/* private values. 
only used by maps and Lua */ @@ -313,6 +326,7 @@ struct sample_fetch { uint64_t arg_mask;/* arguments (ARG*()) */ int (*val_args)(struct arg *arg_p, char **err_msg); /* argument validation function */ + unsigned int val_args_flags; /* argument validation flags */ unsigned long out_type; /* output sample type */ unsigned int use; /* fetch source (SMP_USE_*) */ unsigned int val; /* fetch validity (SMP_VAL_*) */ @@ -344,8 +358,12 @@ extern sample_cast_fct sample_casts[SMP_TYPES][SMP_TYPES]; /* Macros for easier to use val_args list usage * All sample_conv_kw_list/sample_fetch_kw_list should use the SMP_VAL_ARGS macro to populate the - * val_args field. + * val_args and val_args_flags fields together. + * They should define SMP_VAL_ARGS_SAFEVAL_funcname as the bitfield of SMP_VAL_ARGS_F_* enum. + * SMP_VAL_ARGS(NULL) is common for fetches with no arguments or no validation */ -#define SMP_VAL_ARGS(func)func +#define SMP_VAL_ARGS(func)func, SMP_VAL_ARGS_SAFEVAL_##func +#define SMP_VAL_ARGS_SAFEVAL(x) SMP_VAL_ARGS_SAFEVAL_##x +#define SMP_VAL_ARGS_SAFEVAL_NULL SMP_VAL_ARGS_F_SAFE #endif /* _TYPES_SAMPLE_H */ diff --git a/src/51d.c b/src/51d.c index f415fac26..d6e130e82 100644 --- a/src/51d.c +++ b/src/51d.c @@ -133,6 +133,7 @@ static int _51d_cache_size(char **args, int section_type, struct proxy *curpx, return 0; } +#define SMP_VAL_ARGS_SAFEVAL__51d_fetch_check SMP_VAL_ARGS_F_SAFE static int _51d_fetch_check(struct arg *arg, char **err_msg) { if (global_51degrees.data_file_path) @@ -142,6 +143,7 @@ static int _51d_fetch_check(struct arg *arg, char **err_msg) return 0; } +#define SMP_VAL_ARGS_SAFEVAL__51d_conv_check SMP_VAL_ARGS_F_SAFE static int _51d_conv_check(struct arg *arg, struct sample_conv *conv, const char *file, int line, char **err_msg) { diff --git a/src/hlua.c b/src/hlua.c index 085544dc6..a9d126b53 100644 --- a/src/hlua.c +++ b/src/hlua.c @@ -1471,6 +1471,7 @@ __LJMP static int hlua_map_new(struct lua_State *L) conv.process = NULL; /* unused. 
*/ conv.arg_mask = 0; /* unused. */ conv.val_args = NULL; /* unused. */ + conv.val_args_flags = SMP_VAL_ARGS_SAFEVAL(NULL); /* unused */ conv.out_type = SMP_T_STR; conv.private = (void *)(long)match; switch (match) { @@ -6011,6 +6012,7 @@ __LJMP static int hlua_register_converters(lua_State *L) sck->kw[0].process = hlua_sample_conv_wrapper; sck->kw[0].arg_mask = ARG12(0,STR,STR,STR,S
Re: [PATCH] BUG/MEDIUM: Expose all converters & fetches
On Fri, Dec 07, 2018 at 01:14:47PM +0100, Willy Tarreau wrote: > I had a quick look, some converters use check_operator() which creates > a variable upon each invocation of the parsing function. Some people > might inadvertently get caught by using these ones to look up cookie > values or session identifiers for example and not realize that zoombie > variables are lying behind. Another one is smp_check_const_bin() which > will call parse_binary(), returning an allocated memory block representing > the binary string, that nobody will free either. And for converters you > can see that all map-related functions use sample_load_map(), which will > attempt to load a file from the file system. Not only this must not be > attempted at run time for obvious performance reasons (will not work if > the config properly uses chroot anyway), but it may also use huge amounts > of memory on each call. That's a horrible side-effect for functions that are only intended to validate arguments. - Should we introduce a field into sample_conv/sample_fetch that describes if the argument validation has side-effects? - Can we improve the fetch/conv to not have side-effects, or at least clean up afterwards? Here's my validation of each of the checks so far. safe: sample_conv_json_check sample_conv_regsub_check sample_conv_field_check _51d_conv_check val_distcc val_payload_lv val_hdr smp_check_const_bool smp_check_const_meth unsafe: check_operator -> vars_check_arg -> register_name smp_check_concat -> vars_check_arg -> register_name smp_check_strcmp -> vars_check_arg -> register_name sample_load_map -> (it's a mess) conv_check_var -> vars_check_arg -> register_name smp_check_const_bin -> parse_binary -> ?? -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
rsync deny & healthcheck
Seeing the MQTT CONNECT parsing recently, I thought to share my draft work in rsync balancing: - Lua to generate deny messages for rate-limited clients - tcp-check to check rsync health https://gist.github.com/robbat2/2c8414bd617c013be12cb9b41830e010 I want to try and finish my lua-check code, as a way of making this much cleaner. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
[PATCH] BUG/MEDIUM: Expose all converters & fetches
One of my coworkers was having some trouble trying to escape data for JSON in Lua, using the 'json' converter, based on the documentation, and this led to a deep bug discovery. The Lua documentation [1] states that the JSON escaping converter is exposed in Lua, but it turns out that's not quite true. Creatively dumping the function metatable [2] (see code at the end) shows only a subset of converters exposed, and notably missing are at least the following as of v1.8.12: - add * - and * - div * - field - json - mod * - mul * - or * - regsub - sub * - word - xor * Some fetches are also omitted: - bool - bin - meth - map* - var - set-var - unset-var (there might be more, I didn't have a comprehensive list of converters & fetches). All of these have in common that they validate their arguments. Those with a * gained validation of arguments in commit 5d86fae2344dbfacce5479ba86bd2d2866bf5474 (v1.6-dev2-52-g5d86fae23) This bug has been around since the start of Lua fetchers & converters. hlua_run_sample_conv is capable of running the args checker [3]: ```c /* Run the special args checker. */ if (conv->val_args && !conv->val_args(args, conv, "", 0, NULL)) { hlua_pusherror(L, "error in arguments"); WILL_LJMP(lua_error(L)); } ``` But any converter with an argument-checking function is not registered [4]: ```c /* Dont register the keywork if the arguments check function are * not safe during the runtime. */ if (sc->val_args != NULL) continue; ``` Fetchers have a similar issue, but some checkers were explicitly permitted: ```c /* Dont register the keywork if the arguments check function are * not safe during the runtime.
*/ if ((sf->val_args != NULL) && (sf->val_args != val_payload_lv) && (sf->val_args != val_hdr)) continue; ``` Here's a Lua AppletHTTP [5] that lets you dump out which fetchers & converters are exposed: ```lua core.register_service("dump", "http", function(applet) local response = core.concat() response:add('---\n') response:add('applet:\n') members = { 'sf', 'f', 'sc', 'c' } for i,m in ipairs(members) do response:add('applet.' .. m .. ':\n') for k, v in pairs(getmetatable(applet[m])) do if k == "__index" then local funcs = {} for i, j in pairs(v) do table.insert(funcs, i) end table.sort(funcs) for i,j in ipairs(funcs) do response:add('- "' .. tostring(j) .. '"\n') end break end end response:add('\n') end response = response:dump() applet:set_status(200) applet:add_header("content-length", string.len(response)) applet:add_header("content-type", "text/yaml") applet:start_response() applet:send(response) end) ``` [1] https://www.arpalert.org/src/haproxy-lua-api/1.8/index.html#Converters [2] https://github.com/haproxy/haproxy/blob/master/doc/lua.txt#L297-L307 [3] https://github.com/haproxy/haproxy/commit/594afe76e4694d9faf281ae87f2d026506f7a9d9#diff-fc1678dd7de891cf951a19f59a9a7375R2643 [4] https://github.com/haproxy/haproxy/commit/594afe76e4694d9faf281ae87f2d026506f7a9d9#diff-fc1678dd7de891cf951a19f59a9a7375R4003 [5] https://gist.github.com/robbat2/6c75f78e0d857b6d8649d591bc44c452 Initial-Discovery: Yue Zhu Tracing: Robin H. Johnson Signed-off-by: Robin H. Johnson Signed-off-by: Robin H. Johnson --- src/hlua.c | 15 --- 1 file changed, 15 deletions(-) diff --git a/src/hlua.c b/src/hlua.c index 4715639a1..d575f31ea 100644 --- a/src/hlua.c +++ b/src/hlua.c @@ -7737,15 +7737,6 @@ void hlua_init(void) */ sf = NULL; while ((sf = sample_fetch_getnext(sf, )) != NULL) { - - /* Dont register the keywork if the arguments check function are -* not safe during the runtime. 
-*/ - if ((sf->val_args != NULL) && - (sf->val_args != val_payload_lv) && -(sf->val_args != val_hdr)) - continue; - /* gL.Tua doesn't support '.' and '-' in the function names, replace it * by an underscore. */ @@ -7785,12 +7776,6 @@ void hlua_init(void) */ sc = NULL; while ((sc = sample_conv_getnext(sc, )) != NULL) { - /* Dont register the keywork if the arguments check function are -* not safe during the runtime. -*/ - if (sc->val_args != NULL) - continue; - /* gL.Tua doesn't support '.' and '-' in the function names, replace it * by an underscore. */ -- 2.18.0
Re: Design Proposal: http-agent-check, explicit health checks & inline-mode
On Sat, Oct 27, 2018 at 01:52:29PM +0200, Aleksandar Lazic wrote: > > Right now, if you want to use load feedback for weights, you either need > > something entirely out-of-band from the servers back to HAProxy, or you > > have to use the agent-check option and run a separate health agent. > > With that you mean "external-check command" ? > > https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#external-check%20command No, I mean 'agent-check' per https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-agent-check This is an agent that runs on the realserver, not the load balancer. ... > > I would like to propose a new http-agent-check option, with two usage > > modes. > > 1. health-check mode: this connects like the existing agent-check, but > >uses an HTTP request & response rather than pure TCP. > > > > 2. inline mode: if the server has best-case knowledge about its status, > >and HTTP headers are used for the feedback information, then it > >should be possible to include the feedback in an HTTP response header > >as part of normal queries. The header processing would detect & feed > >the data into the health system during normal traffic. > Interesting Ideas. > Are there any LB's out there which already uses this concept? I haven't looked specifically, but I am aware of a lot of other dynamic-realserver weight work (mostly in the keepalived/ipvs world, like feedbackd and lvs-kiss from the early 2000's). The inline mode is probably deserving of separate work; I think it might be possible to implement it with the existing Lua codebase. > > Question: where & how should the feedback information be encoded in the > > response? > > 1. HTTP payload > > 2. Single HTTP header > > 3. Multiple HTTP headers > I would like to have it in the one header per value 'Server-State-*' as the X- Prefix is Deprecated.
> > https://tools.ietf.org/html/rfc6648 > Deprecating the "X-" Prefix and Similar Constructs in Application Protocols > > for example: > Server-State-Load > Server-State-Users > Server-State-Health > Server-State-... Multiple headers are easier to write new parsers for, I agree, but supporting the other variants might be worthwhile. I'm thinking of implementing my lua-check first, then writing a simple HTTP checker within Lua. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
design proposal: lua-agent-check
As a followup to the http-agent-check design idea, I wondered if implementing a general-case lua-agent-check mode would be beneficial. The lua-agent-check keyword would take one parameter: the name of a function that can be called to determine the health of a server. I'm not yet sure about the finer details of the design model. Option 1: the function is called for every (backend, server) tuple that specifies the lua-agent-check keyword. The Lua function would get two parameters: - instance of the Backend class that is being checked - instance of the Server class that is being checked Option 2: the function is called once for every backend. This would be useful if you have an external system that knows the health of multiple servers at once. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
Design Proposal: http-agent-check, explicit health checks & inline-mode
Hi, This is something I have a vague recollection of existing somewhere, but I didn't find any leads in documentation or source. Right now, if you want to use load feedback for weights, you either need something entirely out-of-band from the servers back to HAProxy, or you have to use the agent-check option and run a separate health agent. The agent-check protocol is described only in the configuration.txt 'agent-check' section, and is conveyed entirely over pure TCP, no HTTP. It supports conveying useful health data including weight and DRAIN/MAINT states. The http-check behavior only supports matching strings or status codes, and does not convey any load feedback. I would like to propose a new http-agent-check option, with two usage modes. 1. health-check mode: this connects like the existing agent-check, but uses an HTTP request & response rather than pure TCP. 2. inline mode: if the server has best-case knowledge about its status, and HTTP headers are used for the feedback information, then it should be possible to include the feedback in an HTTP response header as part of normal queries. The header processing would detect & feed the data into the health system during normal traffic. Question: where & how should the feedback information be encoded in the response? 1. HTTP payload 2. Single HTTP header 3. Multiple HTTP headers -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
url_param not matching key-only params (also testcases for fetchers)
I looked in tests & reg-tests, but didn't see any clear way to add tests for verifying that fetchers work correctly. I think my co-worker found an edge-case in smp_fetch_url_param/smp_fetch_param: trying to identify URLs that have a URL parameter set that MIGHT not have a value. This is commonly found in AWS S3 sub-resource URLs. Sample inputs, all expected to match, are below. # Desired code: acl bucket_website_crud urlp(website) -m found # Should_match, Input 1,/?website 1,/?website=a 1,/?website& 1,/?website 1,/?website=a 1,/?website 1,/?website=1 1,/?website=a=1 1,/?website=1 1,/?foo 1,/?foo=a 1,/?foo& 1,/?foo= 1,/?foo==a 1,/?foo=& # Workaround: acl bucket_website_crud url ?website acl bucket_website_crud url -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
Limiting bandwidth of connections
Hi, I'm wondering about the status of bandwidth limiting that was originally planned for 1.6. In the archives I see discussions in 2012 & 2013; Willy's responses: 2012-04-17 planned for 1.6: https://www.mail-archive.com/haproxy@formilux.org/msg07096.html 2013-05-01 planned for 1.6: https://www.mail-archive.com/haproxy@formilux.org/msg09812.html I can't do it in other layers in my use case, as I'd like to apply different bandwidth rates depending on an API key in the HTTP headers (behind SSL). It's mostly outgoing GET requests that I want to limit, but I can see limiting incoming PUT/POST being useful for other people. I do already have Lua code working to extract & validate signed API tokens and rate limit the number of connections for each API key, but there's a subset of consumers that I want to limit to certain speeds. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 ICQ# : 30269588 or 41961639 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: Introduction and small changes to HAProxy for adding custom errorfiles for 401 and 407 http status page
On Sat, Feb 11, 2017 at 07:17:20PM +0100, Michael Hamburger wrote: > If you nonetheless like a git patch I will try to send one. Please do send a patch, it's a LOT easier to review, and if it's good, it can be applied with your name on it :-). If you have all of your changes in a single commit, and that commit is the head of your branch, then you can do 'git format-patch HEAD^', and then either use git send-email, or just attach the patch to an email. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 ICQ# : 30269588 or 41961639 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
Re: [PATCH] MEDIUM: ssl: Add TLS-PSK client and server side support
On Fri, Feb 03, 2017 at 02:19:29AM +0100, Nenad Merdanovic wrote: > +psk-file > + Enables use of PSK cipher suites with PSKs stored in the specified file. > + The entries should be in form "identity:key", one per line. > + Rather than a new file-handling routine, could you instead hook this into using a map? As a side bonus, it becomes trivially possible to update the map at runtime. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 ICQ# : 30269588 or 41961639 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: HAProxy Lua Map.end & reserved keywords
On Wed, Jan 11, 2017 at 12:17:26PM +0100, Willy Tarreau wrote: > On Mon, Jan 09, 2017 at 08:47:17PM +0000, Robin H. Johnson wrote: > > Maybe Willy would considering changing the name of the matches to 'prefix' > > & 'suffix' instead of 'beg' & 'end', and just keep beg/end as legacy. > Another approach would be to have simple rules when mapping a namespace > to another one. For example we could easily imagine that all "map_something" > in haproxy are mapped to their exact equivalent name in Lua (thus we would > have "map_end" instead of "map.end"), or that we could prefix all these > words by "_" which would give "map._end" which is not much different from > the haproxy keyword "map_end" and easy enough to remember. This is actually why I thought of Map.match_str, with a potential idea that at some point it MIGHT become a bitfield for additional behaviors (eg the output types per https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.1-map). The underscore is not sufficiently unique I think, but my knowledge of Lua is nowhere near as extensive as I'd like. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 ICQ# : 30269588 or 41961639 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
Re: HAProxy Lua Map.end & reserved keywords
On Mon, Jan 09, 2017 at 07:49:40PM +0100, thierry.fourn...@arpalert.org wrote: > > I see two potential ways forward: > > a) Map['end'] # works right now, but ugly > > b) Map.match_end # intent is much clearer > Hi, thank for you comment ! You're absolutely right. This keyword > doesn't run because it is reserved. The good news is that I can change > the API because, obviously, nobody use this keyword :) Nobody used it until me it seems :-). > I agree, the solution "a" is ugly, but it is working now. I dont like > the solution "b", because the form of the word is different than other. > > Maybe "suffix" in place of "end", but I prefer the same word that the > word used in haproxy configs. > > I think that I will rename all the match keywords with the prefix > "match_", and keep the old names available, but undocumented for the > backward compatibility. Maybe Willy would considering changing the name of the matches to 'prefix' & 'suffix' instead of 'beg' & 'end', and just keep beg/end as legacy. -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 ICQ# : 30269588 or 41961639 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85 signature.asc Description: Digital signature
HAProxy Lua Map.end & reserved keywords
TL;DR: 'end' is a reserved Lua keyword, and cannot be used as a structure member as in Map.end. Need to change the naming of constants maybe? http://www.arpalert.org/src/haproxy-lua-api/1.7/index.html#map-class > -- Create and load map > geo = Map.new("geo.map", Map.ip); Now if you want to use the match method 'end', the docs say to use 'Map.end'. This is not legal Lua, because 'end' is a reserved keyword. I see two potential ways forward: a) Map['end'] # works right now, but ugly b) Map.match_end # intent is much clearer -- Robin Hugh Johnson E-Mail : robb...@orbis-terrarum.net Home Page : http://www.orbis-terrarum.net/?l=people.robbat2 ICQ# : 30269588 or 41961639 GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
[PATCH v2] MINOR: cfgparse: Allow disable of stats
Add a 'stats disable' option that can be used to explicitly disable the stats, without issuing the warning message as seen on TCP proxies. If any stats options are present in a default block, there is presently no way to explicitly disable them for a single proxy, other than defining a new default block with all of the options repeated EXCEPT the stats options. This normally generates a warning: [WARNING] ... 'stats' statement ignored for proxy 'my-proxy' as it requires HTTP mode. After the warning is issued, the stats for that proxy are disabled anyway. The new 'stats disable' option just disables the stats without generating the warning message; it uses the exact same means to disable the stats as used by the warning path. Changes since v1: Free uri_auth structure as suggested by Willy Tarreau <w...@1wt.eu>. X-Backport: 1.7 Signed-off-by: Robin H. Johnson <robb...@gentoo.org> --- doc/configuration.txt | 12 +++- include/common/uri_auth.h | 1 + src/cfgparse.c| 8 +++- src/uri_auth.c| 50 +++ 4 files changed, 69 insertions(+), 2 deletions(-) diff --git a/doc/configuration.txt b/doc/configuration.txt index 2a5f7dc06..4d00f2f5a 100644 --- a/doc/configuration.txt +++ b/doc/configuration.txt @@ -1944,6 +1944,7 @@ srvtimeout (deprecated) X - X X stats admin - X X X stats authX X X X stats enable X X X X +stats disable X X X X stats hide-versionX X X X stats http-request- X X X stats realm X X X X @@ -7769,8 +7770,17 @@ stats enable stats uri /admin?stats stats refresh 5s - See also : "stats auth", "stats realm", "stats uri" + See also : "stats auth", "stats realm", "stats uri", "stats disable" +stats disable + Disable statistics reporting. + May be used in sections : defaults | frontend | listen | backend + yes |yes | yes | yes + Arguments : none + + This statement explicitly disables statistics reporting. 
+ + See also : "stats enable" stats hide-version Enable statistics and hide HAProxy version reporting diff --git a/include/common/uri_auth.h b/include/common/uri_auth.h index e80722d4e..9fb8411e2 100644 --- a/include/common/uri_auth.h +++ b/include/common/uri_auth.h @@ -85,6 +85,7 @@ struct uri_auth *stats_add_auth(struct uri_auth **root, char *user); struct uri_auth *stats_add_scope(struct uri_auth **root, char *scope); struct uri_auth *stats_set_node(struct uri_auth **root, char *name); struct uri_auth *stats_set_desc(struct uri_auth **root, char *desc); +void stats_free_uri_auth(struct uri_auth *ua); #endif /* _COMMON_URI_AUTH_H */ diff --git a/src/cfgparse.c b/src/cfgparse.c index 2f75c1660..b868d6cad 100644 --- a/src/cfgparse.c +++ b/src/cfgparse.c @@ -4290,6 +4290,10 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm) err_code |= ERR_ALERT | ERR_ABORT; goto out; } + } else if (!strcmp(args[1], "disable")) { + if (curproxy == &defproxy || curproxy->uri_auth != defproxy.uri_auth) + stats_free_uri_auth(curproxy->uri_auth); + curproxy->uri_auth = NULL; } else if (!strcmp(args[1], "enable")) { if (!stats_check_init_uri_auth(&curproxy->uri_auth)) { Alert("parsing [%s:%d] : out of memory.\n", file, linenum); @@ -4366,7 +4370,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm) } } else { stats_error_parsing: - Alert("parsing [%s:%d]: %s '%s', expects 'admin', 'uri', 'realm', 'auth', 'scope', 'enable', 'hide-version', 'show-node', 'show-desc' or 'show-legends'.\n", + Alert("parsing [%s:%d]: %s '%s', expects 'admin', 'uri', 'realm', 'auth', 'scope', 'enable', 'disable', 'hide-version', 'show-node', 'show-desc' or 'show-legends'.\n", file, linenum, *args[1]?"unknown stats parameter":"missing keyword in", args[*args[1]?1:0]); err_code |= ERR_ALERT | ERR_FATAL; goto out; @@ -8465,6 +8469,8 @@ out_uri_auth_compat: Warning("config : 'stats' statement ignor
Re: [PATCH] MINOR: http: custom status reason.
On Mon, Jan 02, 2017 at 11:47:36AM +0100, Willy Tarreau wrote:
> On Sun, Jan 01, 2017 at 01:10:52PM -0800, Robin H. Johnson wrote:
> > The older 'rsprep' directive allows modification of the status reason.
> >
> > Extend 'http-response set-status' to take an optional string of the new
> > status reason.
> >
> >   http-response set-status 418 reason "I'm a coffeepot"
> >
> > Matching updates in Lua code:
> > - AppletHTTP.set_status
> > - HTTP.res_set_status
> >
> > Signed-off-by: Robin H. Johnson <robb...@gentoo.org>
> Looks good, thanks. I'm CCing Thierry so that he can check if the Lua
> part is OK as well. I think it's reasonable to backport this to 1.7.
> However for 1.6 I'd rather avoid adding new features, unless participants
> here are fine with it being backported. Do you really need it there ?
1.7 is a little too new, and I was aiming at configuration compatibility
between the releases, so I ported it back to the actively maintained
series.

It wasn't clear how to use those tests fully, but Lua set_status does
work for me.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Trustee & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
[PATCH-1.6] MINOR: http: custom status reason.
The older 'rsprep' directive allows modification of the status reason.

Extend 'http-response set-status' to take an optional string of the new
status reason.

  http-response set-status 418 reason "I'm a coffeepot"

Matching updates in Lua code:
- AppletHTTP.set_status
- HTTP.res_set_status

Signed-off-by: Robin H. Johnson <robb...@gentoo.org>
(cherry picked from commit 4ce5080b32cfc8591f5639e740a1a83079e9a308)
---
 doc/configuration.txt              |  9 ++---
 doc/lua-api/index.rst              | 11 +++
 include/proto/proto_http.h         |  2 +-
 include/types/action.h             |  1 +
 include/types/applet.h             |  1 +
 src/hlua.c                         | 14 +++---
 src/proto_http.c                   | 23 +--
 tests/setstatus.lua                | 26 ++
 tests/test-http-set-status-lua.cfg | 31 +++
 tests/test-http-set-status.cfg     | 32
 10 files changed, 133 insertions(+), 17 deletions(-)
 create mode 100644 tests/setstatus.lua
 create mode 100644 tests/test-http-set-status-lua.cfg
 create mode 100644 tests/test-http-set-status.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9266bcfae..430b42f5f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -3844,7 +3844,7 @@
 http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
                 set-header <name> <fmt> | del-header <name> |
                 replace-header <name> <regex-match> <replace-fmt> |
                 replace-value <name> <regex-match> <replace-fmt> |
-                set-status <status> |
+                set-status <status> [reason <str>] |
                 set-log-level <level> | set-mark <mark> | set-tos <tos> |
                 add-acl(<file name>) <key fmt> | del-acl(<file name>) <key fmt> |
@@ -3935,13 +3935,16 @@
         Cache-Control: max-age=3600, private

   - "set-status" replaces the response status code with <status> which must
-    be an integer between 100 and 999. Note that the reason is automatically
-    adapted to the new code.
+    be an integer between 100 and 999. Optionally, a custom reason text can be
+    provided defined by <str>, or the default reason for the specified code
+    will be used as a fallback.

     Example:
       # return "431 Request Header Fields Too Large"
       http-response set-status 431
+      # return "503 Slow Down", custom reason
+      http-response set-status 503 reason "Slow Down"

   - "set-nice" sets the "nice" factor of the current request being processed.
     It only has effect against the other requests being processed at the same

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index 682c2149f..06e8cdc4c 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -885,13 +885,15 @@ HTTP class
   :param class_http http: The related http object.
   :param string uri: The new uri.

-.. js:function:: HTTP.res_set_status(http, status)
+.. js:function:: HTTP.res_set_status(http, status [, reason])

-  Rewrites the response status code with the parameter "code". Note that the
-  reason is automatically adapted to the new code.
+  Rewrites the response status code with the parameter "code".
+
+  If no custom reason is provided, it will be generated from the status.

   :param class_http http: The related http object.
   :param integer status: The new response status code.
+  :param string reason: The new response reason (optional).

 .. _txn_class:

@@ -1512,13 +1514,14 @@ AppletHTTP class
   Contains an array containing all the request headers.

-.. js:function:: AppletHTTP.set_status(applet, code)
+.. js:function:: AppletHTTP.set_status(applet, code [, reason])

   This function sets the HTTP status code for the response. The allowed code
   are from 100 to 599.

   :param class_AppletHTTP applet: An :ref:`applethttp_class`
   :param integer code: the status code returned to the client.
+  :param string reason: the status reason returned to the client (optional).

 .. js:function:: AppletHTTP.add_header(applet, name, value)

diff --git a/include/proto/proto_http.h b/include/proto/proto_http.h
index 4ed96e305..65891d333 100644
--- a/include/proto/proto_http.h
+++ b/include/proto/proto_http.h
@@ -106,7 +106,7 @@
 int http_header_match2(const char *hdr, const char *end, const char *name, int len);
 int http_remove_header2(struct http_msg *msg, struct hdr_idx *idx, struct hdr_ctx *ctx);
 int http_header_add_tail2(struct http_msg *msg, struct hdr_idx *hdr_idx, const char *text, int len);
 int http_replace_req_line(int action, const char *replace, int len, struct proxy *px, struct stream *s);
-void http_set_status(unsigned int status, struct stream *s);
+void http_set_status(unsigned int status, const char *reason, struct stream *s);
 int http_transform_header_str(struct stream* s, struct http_msg *msg, const char* n
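As a usage illustration (not part of the patch itself; the backend name, server address, and the `status`-based ACL are my own sketch), the new optional parameter lets a proxy mimic S3-style "503 Slow Down" answers:

```
backend app
    server app1 192.0.2.20:8080
    # Rewrite the reason phrase of any 503 coming back from the server,
    # so clients see "HTTP/1.1 503 Slow Down" instead of
    # "503 Service Unavailable".
    acl resp_503 status 503
    http-response set-status 503 reason "Slow Down" if resp_503
```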
[PATCH-1.7] MINOR: http: custom status reason.
The older 'rsprep' directive allows modification of the status reason.

Extend 'http-response set-status' to take an optional string of the new
status reason.

  http-response set-status 418 reason "I'm a coffeepot"

Matching updates in Lua code:
- AppletHTTP.set_status
- HTTP.res_set_status

Signed-off-by: Robin H. Johnson <robb...@gentoo.org>
(cherry picked from commit 4ce5080b32cfc8591f5639e740a1a83079e9a308)
---
 doc/configuration.txt              |  9 ++---
 doc/lua-api/index.rst              | 11 +++
 include/proto/proto_http.h         |  2 +-
 include/types/action.h             |  1 +
 include/types/applet.h             |  1 +
 src/hlua.c                         | 14 +++---
 src/proto_http.c                   | 23 +--
 tests/setstatus.lua                | 26 ++
 tests/test-http-set-status-lua.cfg | 31 +++
 tests/test-http-set-status.cfg     | 32
 10 files changed, 133 insertions(+), 17 deletions(-)
 create mode 100644 tests/setstatus.lua
 create mode 100644 tests/test-http-set-status-lua.cfg
 create mode 100644 tests/test-http-set-status.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6795166f5..ef1f6832d 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4122,7 +4122,7 @@
 http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
                 set-header <name> <fmt> | del-header <name> |
                 replace-header <name> <regex-match> <replace-fmt> |
                 replace-value <name> <regex-match> <replace-fmt> |
-                set-status <status> |
+                set-status <status> [reason <str>] |
                 set-log-level <level> | set-mark <mark> | set-tos <tos> |
                 add-acl(<file name>) <key fmt> | del-acl(<file name>) <key fmt> |
@@ -4215,13 +4215,16 @@
         Cache-Control: max-age=3600, private

   - "set-status" replaces the response status code with <status> which must
-    be an integer between 100 and 999. Note that the reason is automatically
-    adapted to the new code.
+    be an integer between 100 and 999. Optionally, a custom reason text can be
+    provided defined by <str>, or the default reason for the specified code
+    will be used as a fallback.

     Example:
       # return "431 Request Header Fields Too Large"
       http-response set-status 431
+      # return "503 Slow Down", custom reason
+      http-response set-status 503 reason "Slow Down"

   - "set-nice" sets the "nice" factor of the current request being processed.
     It only has effect against the other requests being processed at the same

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index 22c053884..e4174cf6f 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -1347,13 +1347,15 @@ HTTP class
   :param class_http http: The related http object.
   :param string uri: The new uri.

-.. js:function:: HTTP.res_set_status(http, status)
+.. js:function:: HTTP.res_set_status(http, status [, reason])

-  Rewrites the response status code with the parameter "code". Note that the
-  reason is automatically adapted to the new code.
+  Rewrites the response status code with the parameter "code".
+
+  If no custom reason is provided, it will be generated from the status.

   :param class_http http: The related http object.
   :param integer status: The new response status code.
+  :param string reason: The new response reason (optional).

 .. _txn_class:

@@ -1982,13 +1984,14 @@ AppletHTTP class
   Contains an array containing all the request headers.

-.. js:function:: AppletHTTP.set_status(applet, code)
+.. js:function:: AppletHTTP.set_status(applet, code [, reason])

   This function sets the HTTP status code for the response. The allowed code
   are from 100 to 599.

   :param class_AppletHTTP applet: An :ref:`applethttp_class`
   :param integer code: the status code returned to the client.
+  :param string reason: the status reason returned to the client (optional).

 .. js:function:: AppletHTTP.add_header(applet, name, value)

diff --git a/include/proto/proto_http.h b/include/proto/proto_http.h
index 5ee2e2fa4..6c81766aa 100644
--- a/include/proto/proto_http.h
+++ b/include/proto/proto_http.h
@@ -108,7 +108,7 @@
 int http_header_match2(const char *hdr, const char *end, const char *name, int len);
 int http_remove_header2(struct http_msg *msg, struct hdr_idx *idx, struct hdr_ctx *ctx);
 int http_header_add_tail2(struct http_msg *msg, struct hdr_idx *hdr_idx, const char *text, int len);
 int http_replace_req_line(int action, const char *replace, int len, struct proxy *px, struct stream *s);
-void http_set_status(unsigned int status, struct stream *s);
+void http_set_status(unsigned int status, const char *reason, struct stream *s);
 int http_transform_header_str(struct stream* s, struct http_msg *msg, const char* n
git.haproxy.org down?
fatal: unable to access 'http://git.haproxy.org/git/haproxy.git/': Failed to
connect to git.haproxy.org port 80: Connection refused

-- 
Robin Hugh Johnson
E-Mail     : robb...@orbis-terrarum.net
Home Page  : http://www.orbis-terrarum.net/?l=people.robbat2
ICQ#       : 30269588 or 41961639
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
[PATCH] MINOR: http: custom status reason.
The older 'rsprep' directive allows modification of the status reason.

Extend 'http-response set-status' to take an optional string of the new
status reason.

  http-response set-status 418 reason "I'm a coffeepot"

Matching updates in Lua code:
- AppletHTTP.set_status
- HTTP.res_set_status

Signed-off-by: Robin H. Johnson <robb...@gentoo.org>
---
 doc/configuration.txt              |  9 ++---
 doc/lua-api/index.rst              | 11 +++
 include/proto/proto_http.h         |  2 +-
 include/types/action.h             |  1 +
 include/types/applet.h             |  1 +
 src/hlua.c                         | 14 +++---
 src/proto_http.c                   | 23 +--
 tests/setstatus.lua                | 26 ++
 tests/test-http-set-status-lua.cfg | 31 +++
 tests/test-http-set-status.cfg     | 32
 10 files changed, 133 insertions(+), 17 deletions(-)
 create mode 100644 tests/setstatus.lua
 create mode 100644 tests/test-http-set-status-lua.cfg
 create mode 100644 tests/test-http-set-status.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index f24c39623..c0d6ee5ff 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4122,7 +4122,7 @@
 http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
                 set-header <name> <fmt> | del-header <name> |
                 replace-header <name> <regex-match> <replace-fmt> |
                 replace-value <name> <regex-match> <replace-fmt> |
-                set-status <status> |
+                set-status <status> [reason <str>] |
                 set-log-level <level> | set-mark <mark> | set-tos <tos> |
                 add-acl(<file name>) <key fmt> | del-acl(<file name>) <key fmt> |
@@ -4215,13 +4215,16 @@
         Cache-Control: max-age=3600, private

   - "set-status" replaces the response status code with <status> which must
-    be an integer between 100 and 999. Note that the reason is automatically
-    adapted to the new code.
+    be an integer between 100 and 999. Optionally, a custom reason text can be
+    provided defined by <str>, or the default reason for the specified code
+    will be used as a fallback.

     Example:
       # return "431 Request Header Fields Too Large"
       http-response set-status 431
+      # return "503 Slow Down", custom reason
+      http-response set-status 503 reason "Slow Down"

   - "set-nice" sets the "nice" factor of the current request being processed.
     It only has effect against the other requests being processed at the same

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index f930277df..e04c5b5a7 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -1443,13 +1443,15 @@ HTTP class
   :param class_http http: The related http object.
   :param string uri: The new uri.

-.. js:function:: HTTP.res_set_status(http, status)
+.. js:function:: HTTP.res_set_status(http, status [, reason])

-  Rewrites the response status code with the parameter "code". Note that the
-  reason is automatically adapted to the new code.
+  Rewrites the response status code with the parameter "code".
+
+  If no custom reason is provided, it will be generated from the status.

   :param class_http http: The related http object.
   :param integer status: The new response status code.
+  :param string reason: The new response reason (optional).

 .. _txn_class:

@@ -2080,13 +2082,14 @@ AppletHTTP class
      AppletHTTP.headers["accept"][2] = "*/*, q=0.1"
   ..

-.. js:function:: AppletHTTP.set_status(applet, code)
+.. js:function:: AppletHTTP.set_status(applet, code [, reason])

   This function sets the HTTP status code for the response. The allowed code
   are from 100 to 599.

   :param class_AppletHTTP applet: An :ref:`applethttp_class`
   :param integer code: the status code returned to the client.
+  :param string reason: the status reason returned to the client (optional).

 .. js:function:: AppletHTTP.add_header(applet, name, value)

diff --git a/include/proto/proto_http.h b/include/proto/proto_http.h
index 5ee2e2fa4..6c81766aa 100644
--- a/include/proto/proto_http.h
+++ b/include/proto/proto_http.h
@@ -108,7 +108,7 @@
 int http_header_match2(const char *hdr, const char *end, const char *name, int len);
 int http_remove_header2(struct http_msg *msg, struct hdr_idx *idx, struct hdr_ctx *ctx);
 int http_header_add_tail2(struct http_msg *msg, struct hdr_idx *hdr_idx, const char *text, int len);
 int http_replace_req_line(int action, const char *replace, int len, struct proxy *px, struct stream *s);
-void http_set_status(unsigned int status, struct stream *s);
+void http_set_status(unsigned int status, const char *reason, struct stream *s);
 int http_transform_header_str(struct stream* s, struct http_msg *msg, const char* name, unsigned int name_len, const char *str, struct my
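The tests/setstatus.lua file from the patch is not reproduced above; as a rough sketch of how the extended Lua API is meant to be called from an applet (the service name and body are invented, only `set_status` with a reason string is the new part):

```lua
core.register_service("teapot", "http", function(applet)
    local body = "I refuse to brew coffee.\n"
    -- With this patch, set_status accepts an optional reason string;
    -- the client sees "HTTP/1.1 418 I'm a coffeepot" on the status line.
    applet:set_status(418, "I'm a coffeepot")
    applet:add_header("content-type", "text/plain")
    applet:add_header("content-length", string.format("%d", #body))
    applet:start_response()
    applet:send(body)
end)
```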
[RFC] Setting custom reasons with http-response: optional param vs new directive
'rsprep' allows modification of the reason text, for custom status
reasons. 'http-response set-status' however just uses the hard-coded
reason for each status code.

Should set-status get an additional optional second parameter of a
string, or should we add a set-reason directive instead?

The response I intend to send is '503 Slow Down', as per:
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html

It also needs to be wired up to one of these Lua functions:
  AppletHTTP.set_status(applet, code[, reason])
  AppletHTTP.set_reason(applet, reason)

I'm willing to write up the code, just wondering if it should be a new
directive or an optional parameter.

-- 
Robin Hugh Johnson
E-Mail     : robb...@orbis-terrarum.net
Home Page  : http://www.orbis-terrarum.net/?l=people.robbat2
ICQ#       : 30269588 or 41961639
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
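To make the two alternatives concrete, the applet-side calls would look roughly like this (neither form exists yet; this is only the shape of the proposal, not working API):

```lua
-- Option A: optional parameter on the existing function
applet:set_status(503, "Slow Down")

-- Option B: a separate set_reason function (and matching directive)
applet:set_status(503)
applet:set_reason("Slow Down")
```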
[PATCH] MINOR: cfgparse: Allow disable of stats
Add a 'stats disable' option that can be used to explicitly disable the
stats, without issuing the warning message as seen on TCP proxies.

If any stats options are present in a default block, there is presently
no way to explicitly disable them for a single proxy, other than defining
a new default block with all of the options repeated EXCEPT the stats
options.

This normally generates a warning:
[WARNING] ... 'stats' statement ignored for proxy 'my-proxy' as it requires HTTP mode.
After the warning is issued, the stats for that proxy are disabled anyway.

The new 'stats disable' option just disables the stats without generating
the warning message; it uses the exact same means to disable the stats as
used by the warning path.

This patch should be back-ported to 1.7.

Signed-off-by: Robin H. Johnson <robb...@gentoo.org>
---
 doc/configuration.txt | 12 +++-
 src/cfgparse.c        |  4 +++-
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index f24c39623..18992d37e 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1944,6 +1944,7 @@
 srvtimeout (deprecated)                   X          -         X         X
 stats admin                               -          X         X         X
 stats auth                                X          X         X         X
 stats enable                              X          X         X         X
+stats disable                             X          X         X         X
 stats hide-version                        X          X         X         X
 stats http-request                        -          X         X         X
 stats realm                               X          X         X         X
@@ -7769,8 +7770,17 @@ stats enable
     stats uri   /admin?stats
     stats refresh 5s

-  See also : "stats auth", "stats realm", "stats uri"
+  See also : "stats auth", "stats realm", "stats uri", "stats disable"

+stats disable
+  Disable statistics reporting.
+  May be used in sections :   defaults | frontend | listen | backend
+                                 yes   |    yes   |   yes  |   yes
+  Arguments : none
+
+  This statement explicitly disables statistics reporting.
+
+  See also : "stats enable"

 stats hide-version
   Enable statistics and hide HAProxy version reporting

diff --git a/src/cfgparse.c b/src/cfgparse.c
index ec8f6a1f0..1c65bf398 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -4498,6 +4498,8 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 			err_code |= ERR_ALERT | ERR_ABORT;
 			goto out;
 		}
+	} else if (!strcmp(args[1], "disable")) {
+		curproxy->uri_auth = NULL;
 	} else if (!strcmp(args[1], "enable")) {
 		if (!stats_check_init_uri_auth(&curproxy->uri_auth)) {
 			Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
@@ -4574,7 +4576,7 @@
 	} else {
 stats_error_parsing:
-		Alert("parsing [%s:%d]: %s '%s', expects 'admin', 'uri', 'realm', 'auth', 'scope', 'enable', 'hide-version', 'show-node', 'show-desc' or 'show-legends'.\n",
+		Alert("parsing [%s:%d]: %s '%s', expects 'admin', 'uri', 'realm', 'auth', 'scope', 'enable', 'disable', 'hide-version', 'show-node', 'show-desc' or 'show-legends'.\n",
 		      file, linenum, *args[1]?"unknown stats parameter":"missing keyword in", args[*args[1]?1:0]);
 		err_code |= ERR_ALERT | ERR_FATAL;
 		goto out;
--
2.11.0.rc2