Re: Using haproxy together with NFS

2018-08-01 Thread Michael Ezzell
On Wed, Aug 1, 2018, 16:00 Lucas Rolff  wrote:

>
> I use the “send-proxy” to let the NFS Server see the actual source IP,
> instead of the haproxy machine IP.
>
You'll probably need to remove that.  Unless the destination service
explicitly supports the Proxy Protocol (in which case, by definition, it
must not process connections where the protocol's preamble is *absent*
from the stream), the preamble would just look like corrupt data.  This
option doesn't actually change the source address.

HAProxy in TCP mode should work fine with NFS -- at least, it does with
NFS4.1 as implemented in Amazon Elastic File System -- which is the only
version I've tested against.
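
A minimal sketch of that kind of setup with send-proxy omitted (addresses
and names here are hypothetical):

listen nfs
    mode tcp
    bind *:2049
    # no send-proxy -- the NFS server doesn't speak the Proxy Protocol
    server nfs1 192.0.2.10:2049 check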

https://serverfault.com/a/799213/153161


possible bug: no warnings are emitted if server has health-check options but "check" keyword is absent

2017-12-28 Thread Michael Ezzell
I just recovered from an outage that was ultimately triggered by HAProxy
failing to keep track of the correct IP address for the back-end via
periodic DNS queries, despite what appears to have been a correct
configuration.

Using HAProxy 1.6.13, the backend server configuration entry looks like
this:

server backend backend.example.com:443 ssl verify required ca-file /etc/haproxy/example.pem resolvers vpc resolve-prefer ipv4 inter 15000 fastinter 5000 downinter 2500 rise 1 fall 2

That seems pretty standard, right?

Oops, where's the check keyword?  It should be there, but for some reason
it isn't, which means no health checks were occurring on the back-end, and
no DNS queries were happening, despite the appearance of inter, downinter,
fastinter, rise, fall, and even resolvers... all of which imply health
checking.
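
For reference, the presumably intended line differs only by that one
keyword:

server backend backend.example.com:443 check ssl verify required ca-file /etc/haproxy/example.pem resolvers vpc resolve-prefer ipv4 inter 15000 fastinter 5000 downinter 2500 rise 1 fall 2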

It seems to me that HAProxy should at a minimum emit a warning on startup,
because this is almost certainly an unintentional misconfiguration, as it
was in this case... and the lack of the check directive causes all of these
other options (and presumably some others) to be silently ignored.


Re: Certificate bundles seem to be non-functional

2017-12-19 Thread Michael Ezzell
On Dec 20, 2017 01:19, "Andrew Heberle"  wrote:

just wanting to know where the failing is...


With me, in this case.  Apologies for the complete misunderstanding of your
question.  I have not used the feature you're referring to and mistakenly
assumed "bundle" was a reference to cert + intermediate + key.


Re: Certificate bundles seem to be non-functional

2017-12-19 Thread Michael Ezzell
On Dec 19, 2017 20:46, "Andrew Heberle"  wrote:

I am attempting to utilise certificate bundles so we can have multi-type
certs in haproxy however this seems non-functional.

I have two cert bundles as follows (only testing with RSA certs at the
moment):

/etc/haproxy/ssl # ls -l /etc/haproxy/ssl/
total 16
-rw-r--r-- 1 root root 1184 Dec 20 01:39 test1.pem.issuer.rsa
-rw-r--r-- 1 root root 2888 Dec 20 01:26 test1.pem.rsa
-rw-r--r-- 1 root root 1184 Dec 20 01:40 test2.pem.issuer.rsa
-rw-r--r-- 1 root root 2888 Dec 20 01:30 test2.pem.rsa

With the following config of my two front-ends:

frontend test1
bind *:5000 ssl crt test1.pem
default_backend app1

frontend test2
bind *:5001 ssl crt test2.pem
default_backend app2

But this then fails:

/etc/haproxy/ssl # haproxy -f /etc/haproxy/haproxy.cfg -c
[ALERT] 353/014339 (59) : parsing [/etc/haproxy/haproxy.cfg:34] : 'bind *:5000' : unable to stat SSL certificate from file '/etc/haproxy/ssl/test1.pem' : No such file or directory.


Refer to the documentation.

There is no implied extension for the specified filename, such as ".rsa".
The "crt" directive expects the exact path to a single file containing the
certificate AND chain AND private key.

crt <cert>


This setting is only available when support for OpenSSL was built in. It
designates a PEM file containing both the required certificates and any
associated private keys. This file can be built by concatenating multiple
PEM files into one (e.g. cat cert.pem key.pem > combined.pem). If your CA
requires an intermediate certificate, this can also be concatenated into this
file.


http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.1-crt
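
For the single-file case the documentation describes, a minimal sketch
(file names are hypothetical):

# cat test1.crt intermediate.crt test1.key > /etc/haproxy/ssl/test1.pem
frontend test1
    bind *:5000 ssl crt /etc/haproxy/ssl/test1.pem
    default_backend app1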


Re: AWS ELB with HA proxy showing 5XX errors

2017-09-07 Thread Michael Ezzell
On Sep 6, 2017 5:18 AM, "DHAVAL JAISWAL"  wrote:

I have some queries as well. Will above configuration slow down request -
response or site performance ?


The configuration you have shown seems valid.

If this system is running in Amazon VPC, you can replace the nameserver IP
address with 169.254.169.253.  This is a resolver provided by the VPC
infrastructure that is always available regardless of the IPv4 CIDR block
of the VPC.  There should be no need for additional resolvers, since if
this isn't working, your instance's hypervisor has almost certainly failed
and the instance will have failed along with it.
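
A minimal sketch of a resolvers section using that address (the section
and nameserver labels are arbitrary):

resolvers vpc
    nameserver awsdns 169.254.169.253:53
    resolve_retries 3
    timeout retry   1s
    hold valid      10s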

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html#AmazonDNS


Re: Use server_name as variable or header value

2017-07-28 Thread Michael Ezzell
On Thu, Jul 27, 2017 at 11:54 PM, Mark Staudinger 
wrote:

> Hi Folks,
>
> I'm trying to export the server_name as a response header or variable for
> logging purposes ( %s in the log format ).
>
> Is there a way to access this string in a manner similar to one of these
> methods?  I've tried the following without success (usually with an "unknown
> fetch method %s" error).
>
> http-response set-var(txn.servername) %s
> or
> http-response set-var(txn.servername) %[server_name]
>

You can use this:

http-response set-header X-Backend-Server-Name %s

When using http-response set-header or http-response add-header, the value
argument is a "log format expression," which understands log variables.

By contrast, http-response set-var expects a "standard HAProxy
expression formed by a sample fetch followed by some converters," so it
doesn't have access to log variables, only fetches.  There's a srv_id layer
4 fetch that can get the server's ID, but there's no sample fetch for
the server name.
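
If you need the value in a variable anyway, one possible (untested)
workaround is to set a header from the log variable and then read it back
with a fetch, since http-response rules are evaluated in order:

http-response set-header X-Backend-Server-Name %s
http-response set-var(txn.servername) res.hdr(X-Backend-Server-Name)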


Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-07-02 Thread Michael Ezzell
On Jul 2, 2017 8:41 PM, "Daren Sefcik"  wrote:

yep, pretty much..I just need some help to figure out how to make it
work

example log entries for https and http: you can see how the "443" goes to
one backend and the regular http "GET" request goes to another..but this
is not consistent and I know there has to be a better way..


use_backend HTPL_WEB_PROXY_http_ipvANY   if { meth_connect }

Or maybe...

use_backend HTPL_WEB_PROXY_http_ipvANY  if { meth_connect } !{ path_end :80
}

That should be all you need.

HTTPS through an HTTP proxy via HAProxy isn't an SSL session that HAProxy
can see.  It's an opaque tunnel, requested over HTTP, using CONNECT.

If the browser asks for a tunnel, it should be because it's wanting to
speak HTTPS once the target is connected.


Re: Looking for a way to limit simultaneous connections per IP

2017-06-28 Thread Michael Ezzell
On Jun 28, 2017 16:58, "Patrick Hemmer"  wrote:


We instead need a way to differentiate (count) connections held open and
sitting in the Lua delay function, and connections being processed by a
server.


My wishlist would be per-client queueing, sort of a source IP or XFF-based
maxconn.  How sweet would that be?

Setting aside some potential issues with your solution, such as the
potential for requests being handled out of order (#7 could be processed
before #6 if it arrives just as two of #1-#5 finish while #6 is still
waiting), it seems like maybe the http_first_req fetch would be useful?
It's not a given that the redirect would reuse the same connection, but it
might be worth a shot.

http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-http_first_req

Or, modify the query string for the redirect, to append another parameter
(or create a query string if there isn't one).  I have a setup that
rewrites a 404 over to a 302 and adds a query string parameter to tell
HAProxy to use a different backend for the subsequent request... I use this
as a hackaround for the fact that we don't have a way for HAProxy to retain
and resend idempotent [ Willy :) ] requests to a different backend (which
would be another wishlist item for me) on certain errors. Same idea --
behave differently after a redirect than before it, even though the
redirect sends the browser back to (essentially) the same URL.  I redirect
to the same base with ?action=refresh appended, which the proxy interprets
to mean the request should route to the alternate backend.
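
The routing half of that hackaround is just a couple of lines (names are
hypothetical):

# after the 302, the browser comes back with ?action=refresh appended
acl is_refresh urlp(action) refresh
use_backend alternate_backend if is_refresh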


Re: Issues with question mark in http-request deny

2017-06-27 Thread Michael Ezzell
On Tue, Jun 27, 2017 at 3:56 PM, Moomjian, Chad 
wrote:

> Hi,
>
>
>
> I am running haproxy v1.6.4, and I am attempting to block a specific
> request regex pattern. I am encountering issues with matching the question
> mark in the request. What I would like to block is requests that match this
> pattern:
>
> /api/…/…/sql?
>


The ? is the delimiter between the path and the query string (collectively,
the "request URI").  It isn't valid for ? to appear in the path, so your
regexes testing for this against the path fetch will never match.

You're looking for something more like this:

acl uri_sql capture.req.uri -m reg -i ^/api/(.*)?/sql\?.*$

http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-capture.req.uri


Re: in-house vulnerability scan vs. stats socket

2017-06-19 Thread Michael Ezzell
On Mon, Jun 19, 2017 at 3:34 PM, Jim Freeman  wrote:

> FWIW / FYI -
>
> # haproxy -v
> HA-Proxy version 1.5.18 2016/05/10
>
> An in-house vulnerability scanner found our haproxy stats sockets and
> started probing, sending bogus requests, HTTP_* methods, etc.
>
> The many requests, even though the request paths were not valid at the
> stats socket, made for a DoS attack (with haproxy's CPU consumption
> often pegging at 100% generating stats pages).
>
> Since it looks like the only valid stats socket requests are GETs
>

Possible point of clarification: it sounds like you're referring to a
listener/frontend with stats enabled.

The stats socket doesn't speak http.
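
To illustrate the distinction, a minimal sketch (path and port are
hypothetical):

global
    # the stats socket proper: a Unix-socket CLI, not HTTP
    stats socket /var/run/haproxy.sock mode 600 level admin

frontend stats_http
    # an HTTP listener with stats enabled -- this is what a scanner can probe
    mode http
    bind *:8404
    stats enable
    stats uri /stats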


Re: 1.7.5 503 Timeouts with SNI backend

2017-05-18 Thread Michael Ezzell
On May 18, 2017 3:07 PM, "Ryan Schlesinger" 
wrote:

We have the following backend configuration:

backend clientsite_ember
  server cf foobar.cloudfront.net:443 ssl verify required sni str(foobar.cloudfront.net) ca-file /etc/ssl/certs/ca-certificates.crt

This has been working great with 1.7.2 since February.  I upgraded to 1.7.5
yesterday and today found that all requests through that backend were
returning 503.  Testing the cloudfront url manually loaded the site.

Sample Logs:
May 18 10:13:47 ip-10-4-13-35 haproxy:  :46924 [18/May/2017:17:13:32.237] http-in~ clientsite_ember/cf 0/0/-1/-1/14969 503 212 - - CC--


That second C is significant:

the proxy was waiting for the CONNECTION to establish on the server.
The server might at most have noticed a connection attempt.

You don't have a health check configured.  You don't want option httpchk
with CloudFront, but you do need at least a TCP check.  The address you
were connecting to could have been unavailable.

To understand how, take a look at the results of dig
dzzzexample.cloudfront.net.  There will be several responses.  But, without
a DNS resolver section configured on the proxy and attached to each backend
server to continually re-resolve the addresses, the proxy will latch to
just one, and stick to it until restarted.
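
A sketch of what that attachment might look like (the resolver address
and timings are hypothetical, untested):

resolvers mydns
    nameserver dns1 10.4.0.2:53
    hold valid 10s

backend clientsite_ember
  server cf foobar.cloudfront.net:443 resolvers mydns resolve-prefer ipv4 check ssl verify required sni str(foobar.cloudfront.net) ca-file /etc/ssl/certs/ca-certificates.crt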

The DNS responses from CloudFront can vary from day to day or hour to hour,
since the DNS is dynamically derived from their system's current notion of
the "closest" (most optimal) location from where you query DNS from.  From
Cincinnati, Ohio, I see DNS responses indicating I'm connecting to South
Bend, IN, one day,  Chicago, IL, another,  then Ashburn, VA.  As I type
this, I'm actually seeing New York, NY.  (Do a reverse lookup on the IP
addresses currently associated with the CloudFront hostname.  An
alphanumeric code in the hostname gives you the IATA code of the nearest
airport to the CloudFront edge in question -- IADx is Ashburn, JFKx is NYC,
etc.)

If CloudFront lost an edge, or took one out of DNS rotation and shut it
down for maintenance, what you saw is one behavior HAProxy could be
expected to exhibit, because it wouldn't know.  Unless I missed a memo,
HAProxy only resolves DNS at startup unless configured otherwise.

The browser you tested with would have resolved a different address.

I'm not saying there can't be an issue in 1.7.5 but your configuration
seems vulnerable to service disruptions, since it can't take advantage of
CloudFront's fault tolerance mechanisms.


Re: haproxy deleting domain socket on graceful reload if backlog overflows

2017-04-12 Thread Michael Ezzell
On Apr 12, 2017 6:49 PM, "Andrew Smalley"  wrote:

HI James

Thank you for your reply.

I do not see how the old haproxy being on a separate PID could do anything
with a socket created by a new PID.


How?  Easily.  Unix domain sockets are presented as files.  *Any* process
can unlink a domain socket right out from under you, just like any file,
even if you have an open handle... domain sockets aren't owned by a pid.

This report, prima facie, seems entirely plausible.


Re: Does reqrep work?

2017-03-10 Thread Michael Ezzell
On Mar 10, 2017 4:16 AM, "Ульянка Да"  wrote:

Update:
reqrep changes requests, but harmfully, that results in error 400 (bad
request).
How to debug the harm keeping in mind that traffic is SSLed?


SSL isn't really relevant, because your incorrect rewrite is corrupting the
request *inside* HAProxy.  The request never leaves the proxy.  What you
need to remember is that you are rewriting a line in a buffer that contains
a raw HTTP request.

reqrep ^([^\ :]*)\ /ABC/collection/(.*)/  \1\ /collection/collection_\2/

You're overlooking the "[space]HTTP/1.x" at the end of the start line.

reqrep ^([^\ :]*)\ /ABC/collection/(.*)/(\ HTTP.+)$ \1\ /collection/collection_\2/\3

Note that your expression as written also only works at all when the path
ends with a trailing slash, which may or may not be what you really want.
That behavior is preserved in my example.

If you're using HAProxy 1.6 or later, there is a simpler solution to use,
which looks something like this:

http-request set-path %[path,regsub(^/ABC/collection/,/collection/collection_)] if { path_beg /ABC/collection/ }

This also differs from your example, since it does not require a trailing
slash in the path but would match and rewrite any path beginning with the
pattern shown.


Re: add header into http-request redirect

2017-02-26 Thread Michael Ezzell
On Feb 26, 2017 12:14, "Andrew Smalley"  wrote:

Hello Bartek

I think the portion of my example you wanted is below

In my example I have a redirect from http to https and as such there is a
acl force src if my local ip address

Here I add the HSTS and then redirect 301 as you wanted.

http-response set-header Strict-Transport-Security "max-age=15552000; includeSubDomains; preload;"


Andrew, I don't think http-response rules are going to be processed when
the request results in a redirect generated internally by HAProxy... are
they?  The response isn't really from a back-end, so I wouldn't expect
those rules to fire.


Re: Status code "-1" in logs

2017-01-18 Thread Michael Ezzell
On Jan 18, 2017 4:06 PM, "Skarbek, John"  wrote:

Good Morning,

I was spying on my logs and something out of the ordinary popped out at me.
We are getting a status code of -1.


This (-1) is the marker for a null/nil/NaN/non-existent/inapplicable
value.  The associated timer is not applicable to this request because of
the session state at disconnect (CD).  It would be improper to log these as
0, since "0 milliseconds elapsed" is not actually true.

If you were loading the logs into a database for analysis, you'd actually
store a -1 from the log as NULL in the database so that aggregate functions
like AVG() and COUNT() will correctly exclude them.  In SQL, the AVG() of
(5, NULL, 10) is actually 7.5... while the average of (5, 0, 10) is of
course 5... so you can perhaps see the importance of having a distinct
marker for absent values.


Re: using comma in argument to sample fetch & converter

2016-12-07 Thread Michael Ezzell
On Dec 7, 2016 19:17, "Cyril Bonté"  wrote:


For example, you can provide an "urlencoded" string and use url_dec as a
converter :
  http-response add-header X-test %[str("foo%2Cbar"),url_dec]


Nice one.   Also, with the regsub() converter, in the first parameter,
\\x2c does the trick for matching a literal comma.
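
For instance, something like this (hypothetical header names, untested)
replaces literal commas without the comma being taken as an argument
separator:

http-request set-header X-List-Semi %[req.hdr(x-list),regsub(\\x2c,;,g)]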


Re: capture header VS set-var

2016-12-05 Thread Michael Ezzell
On Dec 4, 2016 9:18 AM, "Patrick Hemmer"  wrote:


So what's the benefit of captures? Are they more performant or something?


The major "benefit" of captures is that they were formerly the only option.


Captures existed long before set-var, which is a much newer feature than
capture.

So that's not much of a "benefit" any more, but it may explain one reason
you may find examples using capture that could have used variables.

As you point out, variables are often much easier to work with.

But also, captured headers are easily logged *en bloc* using the %hr and
%hs log variables.
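
Side by side, a sketch of the two approaches (header and variable names
are arbitrary):

# capture: shows up in the log automatically via %hr
capture request header User-Agent len 64

# variable: must be referenced explicitly wherever it is needed
http-request set-var(txn.ua) req.hdr(user-agent)
http-response set-header X-Seen-UA %[var(txn.ua)]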


Re: how to match the URL exactly to avoid url hijacking?

2016-12-03 Thread Michael Ezzell
On Dec 2, 2016 02:29, "Qingshan Xie"  wrote:

redirect prefix https://  is_qx963


Shouldn't there be an "if" here, before is_qx963?


Re: rewrite and redirect with haproxy

2016-11-23 Thread Michael Ezzell
On Nov 23, 2016 22:21, "Jonathan Opperman"  wrote:

> https://www.test.1.example.com/ --> https://www-test-1.example.com/
>
> doesn't work in the browser, is http-request only applicable for an http
> request and not https?

No.  The http-request directives are the same for either.  The problem is
that wildcard certificates simply don't work that way.  The * cannot match
a dot in the hostname, for wildcard certs.  The browser validates the cert
*before* HAProxy becomes aware of the address.

> In curl it works, but in Chrome/Chromium it comes up with a warning:
> Your connection is not private
> as the wildcard cert *.example.com doesn't match
> https://www.test.1.example.com/, since the redirect to
> https://www-test-1.example.com/ (which would match the wildcard cert)
> is not happening in the browser.

You'd have to bypass the browser's security warning, and after that, the
redirect will work as expected.  Sorry if I gave you the impression that
you would magically be able to avoid the security warning, in the previous
message, with a direct https request with the extra dots.  I assumed you
were aware of the limitations of wildcard certs yet wanted https requests
to redirect anyway, if they did come through because the user bypassed the
validation.

The browser behavior is correct, curl is incorrect if it allows these
requests.

Not helpful, perhaps, but hopefully informative.


Re: rewrite and redirect with haproxy

2016-11-23 Thread Michael Ezzell
On Nov 23, 2016 20:16, "Jonathan Opperman"  wrote:

>> my.site.example.net/example.com -> my-site-example-net.example.com
>
>
> This, is this do-able? It will be different domains, and different level
> sub domains, but they will ultimately end up using *.example.com and *.example2.com
> certificates that terminate on the haproxy server.
>
> http://my.site.example.com/example.com --> http://my-site.example.com
> http://my.other.site.example.com/example.com -->
http://my-other-site.example.com

This can also be done, though it's a little trickier, because you'd need to
match with path_beg or path_reg and then munge the uri with regsub to
remove that and potentially the initial leading slash along with the host
header parts.

> Thanks for this, i've tested and mine for some reason looks like the one
you suggest
> on the other hand:
>
> * Rebuilt URL to: www.test.1.example.com/

> < Location: https://www-test-1-example.com.example.com/

Take a look at my setup again.

http-request redirect location https://%[hdr(host),regsub(\.example\.com$,),regsub(\.,-,g)].example.com%[capture.req.uri] if { hdr_reg(host) -i .+\..+\.example\.com$ }

I believe your problem is here:

hdr(host),regsub(\.example\.com$,)

This first regsub needs to match .example.com at the end of the original
host header, and strip it out completely by replacing it with the empty
string that is hiding between , and ) at the end.

If it doesn't match correctly, it would leave the .example.com in place and
fail in much the way your output illustrates.


Re: rewrite and redirect with haproxy

2016-11-23 Thread Michael Ezzell
On Nov 22, 2016 5:37 AM, "Jonathan Opperman"  wrote:

> I want http://foo.bar.bin/blah.com to redirect to http://foo-bar-bin.blah.com
>
> I want that last dash-domain to also redirect to SSL.

The context of the rest of the message suggests that your first example
should have been a dot where you showed a slash, but perhaps not.  Please
clarify, which are we talking about?

This?

my.site.example.net/example.com -> my-site-example-net.example.com

Or this?

my.site.example.net.example.com -> my-site-example-net.example.com

> The order is important. Browsers recently started doing their SSL check
BEFORE the redirects, so we are getting security warnings.

Um.  I don't think that's a new thing.  It isn't possible to send a request
and get a redirect response before validating the SSL cert, and it hasn't
been... so unless I misunderstand, it's not exactly clear what you are
saying has changed.

Obviously, though, you seem to be saying "don't send to https in one
redirect and expect to rewrite the hostname in the next."  Sensible enough.

If you're talking about just redirecting to a rewritten host with some
character replacement, that's accomplished easily enough in 1.6.

http-request redirect location https://%[hdr(host),regsub(\.example\.com$,),regsub(\.,-,g)].example.com%[capture.req.uri] if { hdr_reg(host) -i .+\..+\.example\.com$ }

If the Host header matches the regex -- that is, if it ends with
.example.com and contains at least one literal "." previous to that --
then redirect to "https://" + the original host header with .example.com
removed from the end, then the rest of the "." replaced with "-" +
".example.com" + the captured request uri, which is path + query string.

$ curl -v 'http://my.test.here.example.com/some/path?query=1=1'

< HTTP/1.1 302 Found
< Location: https://my-test-here.example.com/some/path?query=1=1

This also has the desired behavior if the request is already https.

On the other hand, if you actually needed something like this...

my.site.example.net/example.com -> my-site-example-net.example.com

...that is an odd use case, but it can be done... though more information
is needed about what should happen to the rest of the path and whether
there's more than one domain expected after the "/".


Re: S FTP + HA PROXY confutation facing one serious issue.

2016-10-29 Thread Michael Ezzell
On Wed, Oct 26, 2016 at 8:04 AM, mal reddy  wrote:

> Hi Ha proxy Team,
>
> Any updates.
>

You appear to be attempting to do something that isn't possible, for
reasons that are related to the protocol design of SSH/SFTP.



>I checked the HAProxy documentation and other websites, but most of the
> solutions I found were for HTTP.
>
> Using the HAProxy configuration documentation, I successfully uploaded
> files for one client.
>
> But the problem is that when I have to configure more than one client,
> it redirects all clients' requests to one sftp server instead of
> different sftp servers.
>

Earlier you said:

> Kindly suggest how to fetch the servername from which the request is
> coming so that I can map that particular client to the associated sftp
> servers.

If this point has already been made, I apologize for the redundancy, but
otherwise it seems worth clarifying: that's not possible to do with SFTP.

SFTP is not HTTP, of course, but this is important because unlike HTTP,
there is no Host: header for HAProxy to interpret.

SFTP does not use TLS, and this is important because it means there is no
SNI available to interpret.

Those are the two mechanisms by which HAProxy typically makes name-based
routing decisions.  SFTP allows neither.

In both HTTP and TLS, the client talks first.  But SFTP uses SSH as its
transport, and in SSH, the server talks first.  The server begins the
negotiation, so HAProxy has no mechanism to know anything about what's
going on at layer 7 until it is too late to make any routing decisions;
and even then, HAProxy is a man-in-the-middle of what is almost always
going to be an encrypted session, and is thus unable to learn anything
from the connection's payload.

SSH does not readily support name-based virtual hosting, which is
essentially what you are trying to accomplish.  See also
http://serverfault.com/q/34552/153161.

Potential workaround: if it is possible to constrain the clients to access
your endpoint from known/fixed IP addresses, you could use the source IP
address to select the back-end.


Re: Can I specify a wildcard redirect

2016-10-27 Thread Michael Ezzell
On Oct 27, 2016 6:41 AM, "Jürgen Haas"  wrote:
>
> Thanks Andrew,
>
> I still believe that your example is not redirecting, it is forwarding
> to the Apache server which responds with a 200 and the same content as
> before.
>
> But what we're looking for is a redirect, which isn't the case here.

It seems like you are looking for something like this:

http-request redirect code 301 location %[capture.req.uri,regsub(^/de,)] if { path_beg /de }

Requires 1.6 or later.


Re: Clarification needed: variable scopes

2016-10-04 Thread Michael Ezzell
On Sep 20, 2016 7:58 AM, "John Dison"  wrote:
>
> Hello,
>
> I am reading about set-var():
>  The name of the variable starts with an indication about its scope.
> Can you please explain what these scopes mean?  Do they affect when
> variables are evaluated?  Or something else?

Variables are evaluated in the order of processing of the rules that
reference them. This isn't affected by variable scope.

This refers to the scope of validity.  When a variable is out of scope, its
value is no longer available for use and the stored value is no longer
consuming buffer space, which is of course a valuable and finite resource,
so you typically want to use the narrowest scope that is consistent with
the purpose of the variable.

sess - the value remains available during request and response processing
across multiple requests by the same client on the same connection, if
multiple requests are allowed and occur, so this is the broadest scope.  If
you force client connections closed, of course, this one behaves like txn.

txn - the value is available only during processing of the request and its
response.  This is an intermediate scope, and is the one you want if you
want to set a value with http-request set-var and do something with that
value when processing its response, such as with http-response.

The last two are the most narrow scopes.

req - available only during request evaluation, the variable is out of
scope once the request is handed off to a back-end server, so is not
available during response processing for the same request.

res - available only during response processing, once something comes back
from the back-end server.
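
A short sketch of the txn/req distinction (names arbitrary):

# txn: set while processing the request, still visible on the response
http-request  set-var(txn.orig_path) path
http-response set-header X-Orig-Path %[var(txn.orig_path)]

# req: valid only during request evaluation; var(req.tmp) would be
# empty if referenced in an http-response rule
http-request set-var(req.tmp) path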


Re: Rate limiting w/o 429s

2016-08-06 Thread Michael Ezzell
On Aug 6, 2016 00:54, "CJ Ess"  wrote:
>
> Not the tarpit feature, that will deny access to the content with 500
status. I don't want to kill the request, just delay it,

A hack that I have done in various permutations is to create an extra
back-end pointing to the same servers but with maxconn configured
artificially low and maxqueue configured high ... and then use that
backend instead of the "real" one for requests I want to throttle without
actually denying service.

For example, I have a client where I have determined that the
infrastructure they use for sending "batch" updates to a certain API
endpoint on my side can actually send me up to 64 simultaneous (!?)
requests... not very useful since those requests briefly contend for a
global mutex on my side at one point in processing.  So those "batch"
requests (i.e. not speed-sensitive, not related to a real-time web user,
they're the functional equivalent of uploading a spreadsheet one line at a
time x 64) are handled with a second backend declaration in HAProxy with
maxconn 10 maxqueue 200 that points to the same servers as the real/normal
backend, but naturally "shapes" the requests to an arrival rate the target
system can easily handle.
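
In configuration terms, the hack looks roughly like this (addresses and
numbers are hypothetical):

backend api_normal
    server s1 10.0.0.10:8080 check

backend api_batch
    # same server, artificially constrained; excess requests wait in the queue
    server s1 10.0.0.10:8080 check maxconn 10 maxqueue 200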

This is hacky because I am effectively "pretending" that the requests are
going to a different backend, so it skews my stats... but it's very
effective.

HAProxy's concept of requests being in a queue is excellent for this, but
it would be great if it were somehow possible to leverage it with something
a little bit more "native" and granular, similar to the way requests can be
natively throttled by denying them outright, based on counters... but I
don't know the internals well enough yet to know whether such a thing is
even practical.


Re: "errorfile 503" doesn't appear to be working

2016-06-22 Thread Michael Ezzell
My previous post included a couple of spurious spaces after a couple of the
header values.  Corrected here:

HTTP/1.0 503 Service Unavailable[0d][0a]Content-Type: text/html[0d][0a]Cache-Control: no-cache[0d][0a]Connection: close[0d][0a][0d][0a]...

Side note: be sure your body is at least 512 bytes, padding it with an
HTML comment (or equivalent wording consistent with local policy :) to
disable the ridiculous friendly messages in at least some versions of IE:
http://stackoverflow.com/a/11544049/1695906


On Jun 22, 2016 9:41 PM, "Michael Ezzell" <mich...@ezzell.net> wrote:
>
>
> On Jun 22, 2016 7:06 PM, "Shawn Heisey" <hapr...@elyograg.org> wrote:
> >
> > I have verified that there is nothing on the line after the headers.  On
> > the recommendation I saw elsewhere, the file is in DOS text format, so
> > each line ends in CRLF, not just LF.  Could the line endings be the
problem?
>
> Most definitely.
>
> Review the file's content with a hex editor or hexdump.
>
> Each line of headers *must* end with \r\n which is 0x0d 0x0a (CR, LF).
This file is used as a raw HTTP response, and the Chrome error suggests
strongly that this is your problem, or this:
>
> After the last header, you *must* have two sets of those, e.g.:
>
> HTTP/1.0 503 Service Unavailable [0d][0a]Content-Type: text/html[0d][0a]Cache-Control: no-cache [0d][0a]Connection: close[0d][0a][0d][0a]...
>
> After that point, you're in the body, so pretty much anything goes, just
keep the whole thing under 16K.
>
> Definitely don't count on an indicator of "file format" to prove that
this is correct.
>
> Copy one of the other files and edit with vim.  You'll see the ^M in the
headers, which of course is the same as \r.  The \n doesn't show in vim
since that's the normal newline.


Re: "errorfile 503" doesn't appear to be working

2016-06-22 Thread Michael Ezzell
On Jun 22, 2016 7:06 PM, "Shawn Heisey"  wrote:
>
> I have verified that there is nothing on the line after the headers.  On
> the recommendation I saw elsewhere, the file is in DOS text format, so
> each line ends in CRLF, not just LF.  Could the line endings be the
problem?

Most definitely.

Review the file's content with a hex editor or hexdump.

Each line of headers *must* end with \r\n which is 0x0d 0x0a (CR, LF).
This file is used as a raw HTTP response, and the Chrome error suggests
strongly that this is your problem, or this:

After the last header, you *must* have two sets of those, e.g.:

HTTP/1.0 503 Service Unavailable [0d][0a]Content-Type: text/html[0d][0a]Cache-Control: no-cache [0d][0a]Connection: close[0d][0a][0d][0a]...

After that point, you're in the body, so pretty much anything goes, just
keep the whole thing under 16K.

Definitely don't count on an indicator of "file format" to prove that this
is correct.

Copy one of the other files and edit with vim.  You'll see the ^M in the
headers, which of course is the same as \r.  The \n doesn't show in vim
since that's the normal newline.
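
For context, the file under discussion is referenced like this (path
hypothetical) and is sent verbatim as a raw HTTP response, which is why
the CRLFs matter:

defaults
    errorfile 503 /etc/haproxy/errors/503.http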


Re: Lua converter not working in 1.6.5 with Lua 5.3.2

2016-06-07 Thread Michael Ezzell
On Tue, Jun 7, 2016 at 2:27 PM, Willy Tarreau  wrote:

> Hi Michael,
>
> Please do not forget to test Thierry's patches to confirm or invalidate
> that they fix your issue. If you don't have the time, that's fine, just
> say so and I'll merge them.
>

Confirmed, tested; the issue is no longer reproducible in 1.6 with these
patches.

My apologies for the delay, and thank you for the heads-up.  I was waiting
for the files and yet somehow managed to overlook them when they arrived.


Re: Lua converter not working in 1.6.5 with Lua 5.3.2

2016-05-27 Thread Michael Ezzell
On Fri, May 27, 2016 at 10:41 AM, Thierry FOURNIER 
wrote:

> Hi,
>
> Thank you for the bug repport. It was perfect. I found and fixed the
> bug. You can try the patch in attachement.
>
>
Thanks, Thierry.

The converter is now able to be called, and returns a value as expected...
however, there does appear to be another bug that this fix has uncovered,
and perhaps this is something more serious.

Somehow, I am able to corrupt the *name* of the header I was trying to add,
if I call core.Alert() within the converter that is processing the header's
value.

First, a working example of a very simple converter:

Lua:

testfn = function(str)
    return ("original value was '" .. str .. "'");
end

core.register_converters('testconv',testfn);

Configuration uses this to process a value for a header:

http-request set-header X-Empty-Header %[str("test"),lua.testconv]

This now works exactly as expected, and I see this header in the request:

X-Empty-Header: original value was 'test'

However, if I call core.Alert() within the converter, the alert string
actually overwrites part of the header *name* that I was trying to set. (!?)

testfn = function(str)
    core.Alert("Stomp");
    return ("original value was '" .. str .. "'");
end

core.register_converters('testconv',testfn);

Now, after adding the alert message to the Lua converter, with everything
else identical, here is the problem: in the HTTP request, the 5 byte
"Alert" message I sent ("Stomp") has actually overwritten 6 bytes
("X-Empt") in the header name I was trying to add.

Config:

http-request set-header X-Empty-Header %[str("test"),lua.testconv]

Request:

Stompy-Header: original value was 'test'

Clearly, that shouldn't be possible.

HAProxy also kills the request with "PH--" (doc: "One possible cause for
this error is an invalid syntax in an HTTP header name containing
unauthorized characters." ... such as the \0 that I assume may be hiding at
the end of "Stomp," which would explain why 5 characters appear to be
replacing 6.)

Please advise.


Lua converter not working in 1.6.5 with Lua 5.3.2

2016-05-24 Thread Michael Ezzell
I'm trying to create a Lua converter, but every time I try to call the
converter, I get this:

May 24 17:59:34 localhost haproxy[31077]: Lua converter 'testconv': runtime
error: attempt to call a nil value.

I've simplified this down to a minimal test case:

--

testfn = function(str)
    core.Alert("function was called")
    core.Alert("arg was " .. str)
    return str
end

core.register_converters('testconv',testfn)

core.Alert("does this function work? " .. testfn('yes'))

--

When the proxy starts, the function itself is valid and behaves as expected.

[alert] 144/180532 (31345) : function was called
[alert] 144/180532 (31345) : arg was yes
[alert] 144/180532 (31345) : does this function work? yes

But when I try to use it, like this (in a frontend)...

   http-request set-header X-Test %[str("test"),lua.testconv]

...it is as if the reference to the function has been... misplaced.

May 24 18:05:59 localhost haproxy[31346]: Lua converter 'testconv': runtime
error: attempt to call a nil value.

...and, of course, the X-Test header is added but has no value.

Am I doing it wrong, or is there something not right, here?  Verified with
a clean build in a new directory.

--

HA-Proxy version 1.6.5 2016/05/10
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=yes USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


RE: redirect rule with multiple conditions

2016-05-05 Thread Michael Ezzell
On May 5, 2016 3:38 PM, "Haim Ari"  wrote:
>
> This does not work either:
>
> redirect location http://example.com/going-wild if !do_not_redirect or !cloud_a !cloud_b

Your logic is flawed in both cases.

Logically, if (a or b or c) is true, then (!a and !b and !c) is false... so
you want to AND *all* the negations, like this:

redirect location http://example.com/going-wild if !do_not_redirect !cloud_a !cloud_b

In other words, redirect only when all of the acls do not match.  If one or
more of them does match, the redirect does not occur, which is what you
appear to be trying to do.


Re: CIDR Notation in ACL -- silent failure

2016-04-12 Thread Michael Ezzell
On Apr 12, 2016 8:09 AM, "Willy Tarreau"  wrote:
>

> I learned it 18 years ago when QNX was shipping a fully working OS and
> browser on a single diskette. The browser used to connect to http://127.1/
> and since then I don't think I have ever typed 127.0.0.1 anymore. Same for
> most IP addresses, on test platforms I arrange for setting the networks
> with zeroes in the middle so that I can have 10.1, 11.1, etc... Very
> convenient.

Willy, isn't it true, though, that this notation is a holdover from
pre-CIDR days, and only makes sense (to the extent that it makes sense)
without a CIDR mask?

Isn't 127.1 interpreted as 127.0.0.1 because 127.* was a Class-A network?
By extension, the bizarre-looking 127.65535 would actually be 127.0.255.255
...

But it seems like 127.1/32 should be unambiguously interpreted as 127.1.0.0
because of the explicit mask.

Shouldn't it?

Otherwise this seems like we're interpreting addresses using sort of a
hybrid of classful and classless notation.


Re: Using Haproxy as a outgoing traffic routing server

2016-01-29 Thread Michael Ezzell
On Jan 29, 2016 8:01 PM, "Amol"  wrote:

> Here is what does not work
>
> $ curl -vL https://:443/matest.php
> *   Trying ...
> * Connected to  (127.0.0.1) port 443 (#0)
> * WARNING: using IP address, SNI is being disabled by the OS.
> * Server aborted the SSL handshake
> * Closing connection 0
> curl: (35) Server aborted the SSL handshake

The far end server may require that you try to negotiate with SNI -- which
the output here shows that you are not doing, since there's no proper
hostname to send.  The simple workaround, if that is the case, is to place
your HAProxy IP address and the far-end's hostname in your /etc/hosts file.
(Not the HAProxy machine, but the machine where you're running curl).

Then use curl https://that-hostname.example.com.
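
Concretely, something like this on the machine running curl (hypothetical
values):

# /etc/hosts
203.0.113.10    far-end.example.com

$ curl -v https://far-end.example.com/matest.php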

That way, curl will attempt the SSL negotiation in a way that the far-end
expects.  Since the hostname you're trying to connect to should match the
certificate that will then be offered, this configuration should work if
the lack of SNI on your side is indeed the issue.

You need to be sending the correct hostname in the request headers,
anyway... because the far-end may need it.


Re: acl help: path_beg unexpected matching

2015-12-12 Thread Michael Ezzell
On Dec 12, 2015 8:39 PM, "David Birdsong"  wrote:
>
>> You can try something like :
>>   acl API_HOST hdr_dom(host) -i api.company.com
>>   acl API_V3   path_beg -i /v3
>>
>>   use_backend new_backend if API_HOST API_V3
>
>
> this might be where i picked up the presence of an implicit AND.
>
> so acl's are implicit OR
> use_backend is implicit AND
> ?
>

ACLs are lists of tests.  The list, when tested in a condition, is true
when any rule matches, so... you could call that implicit OR.

Conditions, which usually follow the keyword "if" or "unless," are implicit
AND.

ACLs can also be anonymous (unnamed), so the following would also be
equivalent.  To me, this form is often more intuitive, but it can lead to
configuration inconsistencies unless the ACLs in question are only used in
one condition (otherwise, you run the risk of changing it in one place and
not another).
use_backend new_backend if { hdr_dom(host) -i api.company.com } { path_beg -i /v3 }

This is a condition with two anonymous ACLs, implicitly ANDed.  Note that
at least one space around the braces is mandatory for the parser to
recognize them correctly.


Re: segfault writing to log from Lua

2015-10-01 Thread Michael Ezzell
Thank you for the patch.

This does appear to resolve the issue.

On Thu, Oct 1, 2015 at 8:11 AM, Dragan Dosen  wrote:

> Hi all,
>
> On 29.09.2015 19:31, Willy Tarreau wrote:
> >>
> >> Maybe we should build default strings (one for each RFC) at the
> >> start of haproxy, and use these default strings if the proxy "*p" is
> >> NULL in the function "__send_log()"?
> >
> > Yes indeed we should have one of each, statically defined in log.c with
> > the global log tag I guess. In fact this would even allow not to allocate
> > one tag per proxy since we'd be able to reuse the global one when a proxy
> > defines no log-tag.
> >
>
> I've managed to fix this problem by using individual vectors instead of
> the pre-generated strings, log_htp and log_htp_rfc5424.
>
> This should also resolve another problem related to "pid" in
> log_htp(_rfc5424), when haproxy is started in daemon/systemd mode.
>
> The patch is in attachment.
>
>
> Best regards,
> Dragan Dosen
>
>


Lua usage question - log format variables

2015-09-29 Thread Michael Ezzell
We have "fetches" accessible from Lua, but I have not found a mechanism
that allows me to access the value of log format variables.

Examples of useful variables:

%b   | backend_name
%f   | frontend_name
%rt  | request_counter

Is there a way to access values like these from Lua, since they aren't (as
far as I can tell) available as fetches?

I realize not all variables would be available in every context, and I
don't want to modify them, of course -- I only want to read them -- but I
would like to be able to read them and use them in content generated in Lua.

Are they accessible?


Re: segfault writing to log from Lua

2015-09-29 Thread Michael Ezzell
This a clean build, on both systems, using a freshly-extracted tarball of
1.6-dev6 downloaded from
http://www.haproxy.org/download/1.6/src/devel/haproxy-1.6-dev6.tar.gz.

I'll recheck and send files to replicate.
On Sep 29, 2015 4:47 AM, "Thierry FOURNIER" <thierry.fourn...@arpalert.org>
wrote:

> On Mon, 28 Sep 2015 20:50:44 -0400
> Michael Ezzell <mich...@ezzell.net> wrote:
>
> > I fired up HA-Proxy version 1.6-dev6-e7ae656 2015/09/28 for testing, and
> > was greeted with...
> >
> > Segmentation fault (core dumped)
> >
> > Since I've been primarily testing Lua, I started by commenting out my Lua
> > config lines.  Startup succeeds.  Re-enabling the scripts, I find this to
> > be the offending line:
> >
> > core.Alert("hello.lua");
> >
> > I can't seem to write to syslog in any Lua context without a segfault.
> The
> > failure occurs even if this is the only line, in the only Lua script
> loaded
> > in global config.
> >
> > Behavior is observed on both 64-bit and 32-bit builds on Ubuntu 14.04.
> >
> > HA-Proxy version 1.6-dev6-e7ae656
>
>
> Hello,
>
> Thank you for the bug repport. I can't reproduce it.
>
> Are you sure, that you did a "make clean" before compiling your
> HAProxy ? Some structs are changed, and doesn't running a "make clean"
> is a common way for segfaults :)
>
> If it is the case, can you send me your test files ?
>
> Thierry
>


Re: segfault writing to log from Lua

2015-09-29 Thread Michael Ezzell
Although I am seeing this in Lua, it does not appear (to me) to be a fault
in the Lua code... recent changes to __send_log() in log.c appear to assume
p is never null (there's a test for this condition earlier, but p is
dereferenced later regardless), and that is not valid in this case, because
there is no proxy associated:

(gdb) run
Starting program: /home/ubuntu/haproxy-1.6-dev6/haproxy -f ./haproxy.cfg

Program received signal SIGSEGV, Segmentation fault.
__send_log (p=p@entry=0x0, level=level@entry=1, message=,
size=10,
sd=sd@entry=0x711160  "- ",
sd_size=sd_size@entry=2) at src/log.c:1025
1025        if (unlikely(htp->len >= maxlen)) {
(gdb) print htp->len
Cannot access memory at address 0xda4
(gdb) print htp
$1 = (struct chunk *) 0xd98
(gdb) print &p->log_htp
$2 = (struct chunk *) 0xd98
(gdb) print p
$3 = (struct proxy *) 0x0
(gdb)

This minimal test case can be replicated with the following two files:

$ cat haproxy.cfg

global
log 127.0.0.1 local0
lua-load crash.lua
user haproxy
group nogroup
daemon

defaults
log global

$ cat crash.lua

core.Alert("hello.lua");


segfault writing to log from Lua

2015-09-28 Thread Michael Ezzell
I fired up HA-Proxy version 1.6-dev6-e7ae656 2015/09/28 for testing, and
was greeted with...

Segmentation fault (core dumped)

Since I've been primarily testing Lua, I started by commenting out my Lua
config lines.  Startup succeeds.  Re-enabling the scripts, I find this to
be the offending line:

core.Alert("hello.lua");

I can't seem to write to syslog in any Lua context without a segfault.  The
failure occurs even if this is the only line, in the only Lua script loaded
in global config.

Behavior is observed on both 64-bit and 32-bit builds on Ubuntu 14.04.

HA-Proxy version 1.6-dev6-e7ae656


Re: haproxy bug?

2015-09-26 Thread Michael Ezzell
The errorfile directive configures static responses, including all of their
headers (also static), to be returned instead of the built-in response...
but the 401 response isn't static; it includes the WWW-Authenticate header,
which varies, at minimum, with the realm.

Docs indicate that "errorfile" is supported for codes 200, 400, 403, 408,
500, 502, 503, and 504.

Status code not handled... by the errorfile directive.

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-errorfile
On Sep 26, 2015 9:41 AM, "Guido Dolci"  wrote:

>  Hello,
>
> using this directive
>
> errorfile 401 /bla/blabla
>
> along with
>
>  http-request auth realm xxx if yyy
>
> unexpectedly gives:
>
> [WARNING] 267/183210 (15026) : parsing [/etc/haproxy/haproxy.cfg:nn] :
> status code 401 not handled, error customization will be ignored.
>
> which actually is incorrect since the 401 IS actually handled by haproxy...
>
> cheers
>
> Guido
>


Possible bug: Lua socket loses incoming payload when server closes the connection aggressively in 1.6-dev5-a02e8a6

2015-09-25 Thread Michael Ezzell
When a Lua socket is used for a web request, and the remote server closes
the connection quickly at the end of the http response, attempting to read
from the Lua socket fails, and the response data that should have been
readable from the buffer is lost.

This behavior occurs in other circumstances, but can be easily replicated
by sending a request to a remote HAProxy -- a request that is designed to
cause the remote server to respond with an errorfile payload, such as by
triggering the internal 503 Service Unavailable response by sending a Host:
header that won't match any back-end.

function test_fetch()
    core.Alert("attempting fetch");
    local rsp;
    while rsp == nil do
        local sock = core.tcp();
        core.Alert("attempting connect");
        con_ok, con_err = sock:connect("x.x.x.x",80);
        if(con_ok) then
            wrote = sock:send("GET / HTTP/1.0\r\nHost: fake\r\n\r\n");
            rsp, read_err = sock:receive("*a");
            if rsp == nil then
                core.Alert("read failed: " .. read_err);
            end
        else
            core.Alert("connect failed: " .. con_err);
        end
        if rsp == nil then
            core.sleep(1);
        end
    end;
    core.Alert("response: " .. rsp);
end;

core.register_task(test_fetch);

Sep 25 07:55:23 localhost haproxy[20278]: attempting fetch
Sep 25 07:55:23 localhost haproxy[20278]: attempting connect
Sep 25 07:55:23 localhost haproxy[20278]: read failed: connection closed.
Sep 25 07:55:24 localhost haproxy[20278]: attempting connect
Sep 25 07:55:24 localhost haproxy[20278]: read failed: connection closed.
Sep 25 07:55:25 localhost haproxy[20278]: attempting connect
Sep 25 07:55:26 localhost haproxy[20278]: read failed: connection closed.

Packet capture confirms that the remote HAProxy sets FIN in the same packet
that contains the response payload, so we apparently have a half-closed
connection on the local side.  We should still be able to read from it, but
not write to it.

In src/hlua.c, near line 1457, hlua_socket_handler() may not be handling
this case correctly... it looks like the socket is being destroyed and
other actions taken in both directions, if either side is closed.

if (!c || channel_output_closed(si_ic(si)) ||
    channel_input_closed(si_oc(si))) {
        if (appctx->ctx.hlua.socket) {
                appctx->ctx.hlua.socket->s = NULL;
                appctx->ctx.hlua.socket = NULL;
        }
        si_shutw(si);
        si_shutr(si);
        si_ic(si)->flags |= CF_READ_NULL;
        hlua_com_wake(&appctx->ctx.hlua.wake_on_read);
        hlua_com_wake(&appctx->ctx.hlua.wake_on_write);
        return;
}

It seems like the socket shouldn't be destroyed and everything shut down
until both sides are closed, since the two may not occur at the same time.


Re: HAProxy/Lua survey about lua configuration form

2015-09-22 Thread Michael Ezzell
I don't see a significant problem, if there are benefits to be gained.

I assume part of the motivation is to prevent inadvertently calling an
inappropriate function as an action, to perhaps to allow earlier validation
of the configuration of actions referencing Lua functions, and to separate
the namespaces of Lua functions and HAProxy Lua actions.

As long as no capabilities are lost or performance penalties are created, I
don't see the change as being extreme.

One point that is not clear is how this "facilitates the distribution of
Lua scripts for HAProxy."  You would need to look at the script to find the
registered action name, the same as if you had to look at the script to
find the function name... though perhaps the intention is to express the
idea that "not every Lua function is appropriate to directly reference in
the configuration" since there may be "private" functions in a script that
are only intended to support other (exposed) functions, rather than being
called directly.

Perhaps you could provide more insight, particularly if I am incorrect on
some of these assumptions.  Otherwise, I don't see a reason to object (but
I am not yet using Lua in production, so for me the change has minimal
impact.)
On Sep 22, 2015 9:27 PM,  wrote:

> Hi list,
>
> Actually we have two Lua registration forms:
>
> The style of fetches and converters (and coming soon the "services").
> With this registration style, the Lua developer registers the function in
> the haproxy core like this:
>
>core.register_fetch("fetch-name", function(...)
>   ... code ...
>end)
>
> In the HAProxy configuration file, the registered Lua fetch can be used
> like any other fetch.  It is automatically prefixed by "lua.". For example:
>
>use-backend service if { lua.fetch-name }
>
> The second mode is used with actions.  Actions don't require any
> registration.  The user just gives the Lua function name in the HAProxy
> configuration.  For example, in the Lua file:
>
>function my_action(...)
>   ... code ...
>end
>
> And the associated HAProxy file:
>
>http-request lua my_action
>
> I want to remove the second declaration mode. After this the actions
> will be registered like fetches and converters. The previous example
> should become:
>
>core.register_action("action-name", function(...)
>   ... code ...
>end)
>
> And
>
>http-request lua.action-name
>
> I think that this mode of registration facilitates the distribution of
> Lua scripts for HAProxy.  The administrator doesn't have to look into the
> file for the function name, and the administrator cannot call a function
> which is not designed for running as an HAProxy action.
>
> My problem is that the second configuration mode has existed in HAProxy
> for a long time.  I don't want to remove it without consulting the people
> who already use this feature.  My other problem is that if version 1.6 is
> released with the second mode, we will keep (and maintain) it for a long
> time :)
>
> the discussion is open, please give me your opinions.
>
> Thierry
>
>


Lua syslog messages are truncated in 1.6-dev4

2015-09-07 Thread Michael Ezzell
Minor bug: when writing to syslog from Lua scripts, the last character from
each log entry is truncated.

core.Alert("this is truncated");

Sep  7 15:07:56 localhost haproxy[7055]: this is truncate

This issue appears to be related to the fact that send_log() (in src/log.c)
is expecting a newline at the end of the message's format string:

/*
 * This function adds a header to the message and sends the syslog message
 * using a printf format string. It expects an LF-terminated message.
 */
void send_log(struct proxy *p, int level, const char *format, ...)

I believe the fix would be in src/hlua.c at line 760, where this...

   send_log(px, level, "%s", trash.str);

...should be adding a newline into the format string to accommodate what
the code expects.

send_log(px, level, "%s\n", trash.str);

This change provides what seems to be the correct behavior:

Sep  7 15:08:30 localhost haproxy[7150]: this is truncated

All other uses of send_log() in hlua.c have a trailing dot "." in the
message that is masking the truncation issue because the output message
stops on a clean word boundary.  I suspect these would also benefit from
"\n" appended to their format strings as well, since this appears to be the
pattern seen throughout the rest of the code base.


Re: Lua outbound Sockets in 1.6-dev4

2015-09-05 Thread Michael Ezzell
I can confirm, the patch appears to work correctly.  Thank you for the fix.

On Fri, Sep 4, 2015 at 1:08 PM, Thierry FOURNIER <
thierry.fourn...@arpalert.org> wrote:

> Hi, now I reproduce the bug, and I fixed it :)
> Can you test the attached patch ?
>
>


Re: Lua outbound Sockets in 1.6-dev4

2015-09-02 Thread Michael Ezzell
You are NOT able to reproduce?  I misunderstood your previous comment.

Further testing suggests (to me) that this is a timing issue, where HAProxy
does not discover that the connection is established, if connection
establishment doesn't happen within a very, very short window after the
connection is attempted.

Previously, I only tested "client talks first" (http) using a different
machine as the server.

Consider the following new results:

server talks first (ssh) - connection to local machine - *works*
server talks first (ssh) - connection to a different machine on same LAN -
*works*
server talks first (ssh) - connection to a different machine across
Internet - *works*
client talks first (http) - connection to local machine - *works*
client talks first (http) - connection to a different machine on same
LAN - *does
not work*
client talks first (http) - connection to a different machine across
Internet - *does not work*

The difference here seems to be the timing of the connection establishment,
and the presence or absence of additional events.  (Note that when I say
"local machine" I do not mean 127.0.0.1; I am still using the local
machine's Ethernet IP when talking to services on the local machine.)

When you are testing, are you using a remote machine, so that there is a
brief delay in connection establishment?  If not, this may explain why you
do not see the same behavior, since local connections do not appear to have
the same problem.

Most interesting, based on my "timing" theory, I found a workaround, which
seems very wrong in principle; so wrong, in fact, that I can't believe I
tried it; however, using the following tactic, I am able to make an
outgoing socket connection to a different machine, when client talks first.

local sock = core.tcp();
sock:settimeout(3);
local written = sock:send("GET /latest/meta-data/placement/availability-zone HTTP/1.0\r\nHost: 169.254.169.254\r\n\r\n");
local connected, con_err = sock:connect("169.254.169.254",80);
...

This strange code works.  I hope you will agree that writing to the socket
before connecting seems very wrong, and I was surprised to find that this
code works successfully when connecting to a different machine --
 presumably because I'm pre-loading the outbound buffer, so the server's
response to my request actually triggers an event that does not occur in a
condition where the client talks first and when there is a delay in
connection establishment, even a very brief delay.


Re: Lua outbound Sockets in 1.6-dev4

2015-09-01 Thread Michael Ezzell
You *can* reproduce the error?  I feel better already.


> Can you run a tcpdump for validating the TCP connection establishment ?
>

It looks pretty much as expected. Is this what you wanted?

 73  69.516276 10.10.10.10 -> 10.20.20.20 TCP 74 44748 > http [SYN] Seq=0
Win=26883 Len=0 MSS=8961 SACK_PERM=1 TSval=833894013 TSecr=0 WS=128
 74  69.516893 10.20.20.20 -> 10.10.10.10 TCP 74 http > 44748 [SYN, ACK]
Seq=0 Ack=1 Win=26847 Len=0 MSS=8961 SACK_PERM=1 TSval=20615574
TSecr=833894013 WS=128
 75  69.516909 10.10.10.10 -> 10.20.20.20 TCP 66 44748 > http [ACK] Seq=1
Ack=1 Win=27008 Len=0 TSval=833894013 TSecr=20615574
 93  72.517981 10.10.10.10 -> 10.20.20.20 TCP 66 44748 > http [FIN, ACK]
Seq=1 Ack=1 Win=27008 Len=0 TSval=833894764 TSecr=20615574
 94  72.518672 10.20.20.20 -> 10.10.10.10 TCP 254 [TCP segment of a
reassembled PDU]
 95  72.518689 10.10.10.10 -> 10.20.20.20 TCP 66 44748 > http [ACK] Seq=2
Ack=190 Win=28032 Len=0 TSval=833894764 TSecr=20616324


Also a quick hack of src/hlua.c to discover which of the three
possibilities is causing the error reveals that in
hlua_socket_connect_yield()...

if (!hlua || !socket->s || channel_output_closed(&socket->s->req)) {
...the condition being matched and prompting the "Can't connect" error
appears to be !socket->s.


Lua outbound Sockets in 1.6-dev4

2015-09-01 Thread Michael Ezzell
I have been thoroughly enjoying teaching my haproxy new tricks with Lua,
but trying sockets for the first time, using 1.6-dev4, I'm confused by what
I see.  The behavior is the same whether I try this in "task" or "action"
context.

I see this in the release notes:

- BUG/MEDIUM: lua: outgoing connection was broken since 1.6-dev2

...which seems applicable, but in 1.6-dev4-61d301f I'm not having luck with
outgoing sockets.

I wouldn't be surprised if I'm doin' it wrong.  Another perspective would
be appreciated.

function tricky_socket()
    local sock = core.tcp();
    sock:settimeout(3);
    core.log(core.alert,"calling connect()\n");
    local connected, con_err = sock:connect("x.x.x.x",80);
    core.log(core.alert,"returned from connect()\n");
    if con_err ~= nil then
        core.log(core.alert,"connect() failed with error: '" .. con_err .. "'\n");
    end
    ...

I removed the rest, since the above captures the essence of the problem.
This apparently fails:

Sep  1 14:15:41 localhost haproxy[25480]: calling connect()
Sep  1 14:15:44 localhost haproxy[25480]: returned from connect()
Sep  1 14:15:44 localhost haproxy[25480]: connect() failed with error:
'Can't connect'

However, it actually did make a connection, because now I have this:

tcp0  0 10.10.10.10:43368 x.x.x.x:80   TIME_WAIT

In fact, during the 3 seconds, it was ESTABLISHED.  The other side saw the
connection, too:

Sep  1 14:15:44 localhost haproxy[1033]: 10.10.10.10:43368
[01/Sep/2015:14:15:41.680] ... -1/-1/-1/-1/3001 400 ... CR-- ""

Network issues can be reasonably ruled out, since I can talk to the far end
with curl and http-over-telnet, no problem... and, in fact, the original
destination wasn't another haproxy -- I switched the destination so I could
confirm what I suspected... which was that the socket is connecting, almost
immediately, yet for reasons that aren't clear, it doesn't seem to realize
it.

Ubuntu 14.04.2 LTS 3.13.0-58-generic #97-Ubuntu SMP
HA-Proxy version 1.6-dev4-61d301f 2015/08/30
Lua 5.3.1  Copyright (C) 1994-2015 Lua.org, PUC-Rio

make TARGET=linux2628 CPU=native USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 \
 USE_LUA=yes LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/
LDFLAGS=-ldl


Re: Lua outbound Sockets in 1.6-dev4

2015-09-01 Thread Michael Ezzell
Thank you for the suggestion, Thierry.

I assumed the ~= nil test would be valid (and evaluate as false) if the
value were uninitialized, but I've taken your advice.

The result is the same.

if connected == nil then
    core.log(core.alert,"connect() failed with error: '" .. con_err .. "' (connected == nil)\n");
end

Sep  1 15:51:38 localhost haproxy[25652]: connect() failed with error:
'Can't connect' (connected == nil)