Say Bye to Adwords – Grow with Our Organic Listing Plan

2018-11-26 Thread Darin Booth
Dear haproxy.com Team,

Hope all is well at your end!

After a thorough check of your website, we noticed that you are trying to
attract more customers and to develop your standing both internationally and
locally. You run a sponsored listing on Google to build your image globally,
but “SORRY”, it won’t help you in the long run.

At present, the Internet has become crucial for business and marketing. Hence,
make your website active enough to gain more visibility on the internet and
hold a good position on the different search engines. Our enthusiasts, with
their impeccable online marketing techniques, will substantially improve your
Google organic search results and rankings. Organic (non-paid) results are
superior to paid results from Google Adwords in both traffic and conversion,
because people trust organic results more. Hence, our team will help generate
a large amount of traffic at a more affordable price than Adwords.

Join hands with us as soon as possible for a better market strategy and web
services. On your response, we will start with a detailed analysis report
about your current situation.

For a business relationship with us, email us or provide us with your contact
details and let us know the best time to reach you!

I look forward to hearing from you.

Best Regards,

Darin Booth

*Digital Marketing Executive*

PS1: You may ask us to “REMOVE” you if this does not seem interesting;
otherwise, please reply for more info on our price list, “How we are
different from others?”, and “Why should you choose us?”


Re: SSL certs

2018-11-26 Thread Alberto Oliveira
Hello Azim,

HAProxy itself doesn't manage SSL certs, so you should already have one, buy
one, or generate one for free using Let's Encrypt (https://letsencrypt.org/).

You can find multiple sources to guide you on how to use SSL certs with
haproxy:
https://serversforhackers.com/c/using-ssl-certificates-with-haproxy
https://serverfault.com/q/560978/241849
https://gist.github.com/sethwebster/b48d7c872fe397c1db11

Basically you have to concatenate your certs and key to generate a pem file
that's valid for haproxy. They don't really need to be converted for this,
just concatenated.

For example, if you've bought your wildcard cert from comodo, it would go
like this:
cat STAR_your_domain.crt COMODORSADomainValidationSecureServerCA.crt
COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt STAR_your_domain.key >
STAR_your_domain.pem

Or if you generated the certs using Let's Encrypt, you would only have to
concatenate two files:
cat fullchain.pem privkey.pem > your_domain.pem
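
Once you have the combined .pem, you point haproxy at it from the bind line.
A minimal sketch (the paths and backend name are just placeholders, adjust
them to your setup):

frontend https-in
    # "crt" points at the concatenated cert+key PEM built above
    bind *:443 ssl crt /etc/haproxy/ssl.d/your_domain.pem
    default_backend your_backend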

Although it seems complicated at first, it's simple once you go through
with it.
Does that make sense to you? Please feel free to reply with any problem you
encounter or to tell us if this solves your issue.

Best regards,
Alberto

On Mon, 26 Nov 2018 at 23:54, Azim Siddiqui 
wrote:

> Hello,
>
> Hope you are doing well. We are using HAProxy in our company, but the SSL
> certs have expired and I want to renew them. As I can see, HAProxy only
> takes the .pem format for certs. So what files should be included in that
> .pem file? And can you please tell me how to convert the certs to .pem?
>
> Thanks & Regards,
> Azeem
>
>
>


Re: [PATCH] MINOR: ssl: free ctx when libssl doesn't support NPN

2018-11-26 Thread Willy Tarreau
On Mon, Nov 26, 2018 at 10:57:17PM +0100, Lukas Tribus wrote:
> The previous fix da95fd90 ("BUILD/MINOR: ssl: fix build with non-alpn/
> non-npn libssl") does fix the build in old OpenSSL release, but I
> overlooked that the ctx is only freed when NPN is supported.
> 
> Fix this by moving the #endif to the proper place (this was broken in
> c7566001 ("MINOR: server: Add "alpn" and "npn" keywords")).

Applied, thank you Lukas!

Willy



RE: Enquiry

2018-11-26 Thread BRUNETTI ALBERTO FAKHRI
Kindly send us your price list or product catalog asap for a trial order. Thanks 
Sales Manager.
Privacy Information - Pursuant to Regulation (EU) 2016/679, this message may 
contain confidential and/or privileged information intended exclusively for the 
addressee. If you are not the addressee, or are not authorized to receive this 
for the addressee, you must not use, copy, disclose or take any action based on 
this message or any information herein. If you have received this message in 
error, please advise the sender immediately by reply e-mail and delete this 
message. Thank you for your cooperation.



SSL certs

2018-11-26 Thread Azim Siddiqui
Hello,

Hope you are doing well. We are using HAProxy in our company, but the SSL certs 
have expired and I want to renew them. As I can see, HAProxy only takes the .pem 
format for certs. So what files should be included in that .pem file? And can 
you please tell me how to convert the certs to .pem?

Thanks & Regards,
Azeem




Re: OCSP stapling with multiple domains

2018-11-26 Thread Igor Cicimov
Hi Moemen,

On Tue, Nov 27, 2018 at 1:24 AM Moemen MHEDHBI  wrote:
>
>
> On 11/14/18 1:34 AM, Igor Cicimov wrote:
>
> On Sun, Nov 11, 2018 at 2:48 PM Igor Cicimov  
> wrote:
>>
>> Hi,
>>
>> # haproxy -v
>> HA-Proxy version 1.8.14-1ppa1~xenial 2018/09/23
>> Copyright 2000-2018 Willy Tarreau 
>>
>> I noticed that in case of multiple domains and OCSP setup:
>>
>> # ls -1 /etc/haproxy/ssl.d/*.ocsp
>> /etc/haproxy/ssl.d/star_domain2_com.crt.ocsp
>> /etc/haproxy/ssl.d/star_domain_com.crt.ocsp
>> /etc/haproxy/ssl.d/star_domain3_com.crt.ocsp
>> /etc/haproxy/ssl.d/star_domain4_com.crt.ocsp
>>
>> I get OCSP response from haproxy only for one of the domains
>> domain.com. Tested via:
>>
>> $ echo | openssl s_client -connect domain[234].com:443 -tlsextdebug
>> -status -servername domain[234].com
>>
>> Is this expected?
>
>
> Any comments/ideas regarding this? I further noticed that the OCSP code 
> probably does not check the certificates' SANs and matches only on the CN 
> in the subject, since calls to whatever.domain.tld get stapled but calls to 
> domain.tld do not.
>
> Hi Igor,
>
> Testing OCSP on multiple certificates with different domains (based on the 
> CN) works correctly for me. (a.domain.com, b.domain.com, c.domain.com)
>
> Are you using multiple certs with same CN but different SANs ?

The certificates belong to completely separate domains, so not
subdomains of the same domain like in your case. They are also
wildcard certs so here is the layout:

# ls -1 /etc/haproxy/ssl.d/
star_domain1_com.crt
star_domain1_com.crt.ocsp
star_domain2_com.crt
star_domain2_com.crt.ocsp
star_domain3_com.crt
star_domain3_com.crt.ocsp

# for i in `ls -1 /etc/haproxy/ssl.d/*.crt`; do openssl x509 -noout
-subject -in $i; done
subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain2.com
subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain3.com

The SAN only contains the certificate's domain and nothing else, for
example for domain3.com:

X509v3 Subject Alternative Name:
DNS:*.domain3.com, DNS:domain3.com

The haproxy bind line in the frontend looks like:

 bind *:443 ssl crt /etc/haproxy/ssl.d/ ...

And here is the output of the daily cronjob that updates the OCSP for haproxy:

Date: Mon, 26 Nov 2018 05:00:01 + (GMT)

/etc/haproxy/ssl.d/star_domain1_com.crt: good
This Update: Nov 25 17:39:11 2018 GMT
Next Update: Dec  2 16:54:11 2018 GMT
OCSP Response updated!
/etc/haproxy/ssl.d/star_domain2_com.crt: good
This Update: Nov 24 20:49:57 2018 GMT
Next Update: Dec  1 20:04:57 2018 GMT
OCSP Response updated!
/etc/haproxy/ssl.d/star_domain3_com.crt: good
This Update: Nov 25 14:09:00 2018 GMT
Next Update: Dec  2 13:24:00 2018 GMT
OCSP Response updated!
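
For reference, a refresh job of this kind is essentially the standard
"openssl ocsp" pattern. A simplified sketch (placeholder paths, and it
assumes a .issuer file with the issuer certificate next to each cert):

for crt in /etc/haproxy/ssl.d/*.crt; do
    # ask the issuer's OCSP responder for a fresh response and store it
    # where haproxy looks for it at startup: <certfile>.ocsp
    url=$(openssl x509 -noout -ocsp_uri -in "$crt")
    openssl ocsp -issuer "$crt.issuer" -cert "$crt" \
        -url "$url" -no_nonce -respout "$crt.ocsp"
done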

I can confirm this is working as intended on other servers I have with
1.7.11 and 1.8.14, so it must be something specific to this one that I
struggle to understand (to make it even more confusing, it is all being
set up by Ansible in the same way as everywhere else).

Under what circumstances would a setup like this not work in terms of
OCSP? Example:

$ echo | openssl s_client -connect server:443 -tlsextdebug -status
-servername domain1.com | grep -E 'OCSP|domain1'
depth=0 C = AU, ST = New South Wales, L = Sydney, O = My Company, CN =
*.domain1.com
verify return:1
DONE
OCSP response: no response sent
 0 s:/C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
subject=/C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
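
For what it's worth, a fresh response can also be pushed in at runtime over
the admin socket (this assumes a stats socket with admin level is configured,
e.g. at /run/haproxy/admin.sock), which at least shows whether haproxy accepts
the response at all:

echo "set ssl ocsp-response $(base64 -w 0 star_domain1_com.crt.ocsp)" | \
    socat stdio /run/haproxy/admin.sock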

Thanks for your input by the way, very much appreciated.



[PATCH] MINOR: ssl: free ctx when libssl doesn't support NPN

2018-11-26 Thread Lukas Tribus
The previous fix da95fd90 ("BUILD/MINOR: ssl: fix build with non-alpn/
non-npn libssl") does fix the build in old OpenSSL release, but I
overlooked that the ctx is only freed when NPN is supported.

Fix this by moving the #endif to the proper place (this was broken in
c7566001 ("MINOR: server: Add "alpn" and "npn" keywords")).
---
>> Move the #ifdef's around so that we build again with older OpenSSL
>> releases (0.9.8 was tested).
>
> Applied, thank you Lukas!

I didn't see the real issue, the entire #ifdef was in the wrong place
and we have to move the #endif as well, otherwise we don't free the ctx
when NPN is not supported.

---
 src/ssl_sock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 86d4f22..a73fb2d 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -4846,9 +4846,9 @@ void ssl_sock_free_srv_ctx(struct server *srv)
 #ifdef OPENSSL_NPN_NEGOTIATED
if (srv->ssl_ctx.npn_str)
free(srv->ssl_ctx.npn_str);
+#endif
if (srv->ssl_ctx.ctx)
SSL_CTX_free(srv->ssl_ctx.ctx);
-#endif
 }
 
 /* Walks down the two trees in bind_conf and frees all the certs. The pointer 
may
-- 
2.7.4



1.7.11 with gzip compression serves incomplete files

2018-11-26 Thread Veiko Kukk

Hi!

There is not much to add, just that it was already broken in 1.7.9, works 
with 1.7.10, and is broken again in 1.7.11.
When applying the patch provided here 
https://www.mail-archive.com/haproxy@formilux.org/msg27155.html
1.7.11 also works.
Testing is really simple: just configure haproxy gzip compression and 
download with curl --compressed or with a web browser. A sample .js file I 
downloaded has a real size of 42202 bytes, but when downloaded with gzip 
compression its size is 37648 bytes; part of the end is missing.
A very similar issue is discussed here too: 
https://discourse.haproxy.org/t/1-7-11-compression-issue-parsing-errors-on-response/2542
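
A minimal configuration sketch to reproduce (the backend address and the
compression types are placeholders, adjust them to your environment):

listen web
    mode http
    bind *:80
    # enable gzip for typical text content; any static .js file served by
    # the backend is enough to compare plain vs. compressed downloads
    compression algo gzip
    compression type text/html text/plain application/javascript
    server app1 127.0.0.1:8080

# curl transparently decompresses --compressed downloads, so when haproxy
# behaves correctly both files below end up identical:
#   curl -so plain.js http://haproxy.example/app.js
#   curl -so gzipped.js --compressed http://haproxy.example/app.js
#   cmp plain.js gzipped.js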


Best regards,
Veiko



Re: haproxy segfaults when clearing the input buffer via LUA

2018-11-26 Thread Moemen MHEDHBI


On 11/20/18 2:25 PM, Christopher Faulet wrote:
> Le 17/11/2018 à 20:42, Willy Tarreau a écrit :
>> Hi Moemen,
>>
>> On Wed, Nov 14, 2018 at 04:07:42PM +0100, Moemen MHEDHBI wrote:
>>> Hi,
>>>
>>> I was playing with LUA, to configure a traffic mirroring behavior.
>>> Basically I wanted HAProxy to send the http response of a request to a
>>> 3rd party before sending the response to the client.
>>>
>>> So this is the stripped down version of the script to reproduce the
>>> segfault with haproxy from the master branch:
>>>
>>> function mirror(txn)
>>>  local in_len = txn.res:get_in_len()
>>>  while in_len > 0 do
>>>  response = txn.res:dup()
>>>  -- sending response to 3rd party.
>>>  txn.res:forward(in_len)
>>>  core.yield()
>>>  in_len = txn.res:get_in_len()
>>>  end
>>> end
>>> core.register_action("mirror", { "http-res" }, mirror)
>>>
>>> Then I use this script via "http-response lua.mirror"
>>>
>>>
>>> I think the problem here is that when I forward the response from the input
>>> buffer to the output buffer and hand processing back to HAProxy, the
>>> latter will try to send an invalid http request.
>>>
>>> The request is invalid because HAProxy did not have the opportunity to
>>> check the response and make sure there are valid headers because the
>>> input buffer is empty after the core.yield().
>>>
>>> So I was expecting an error and HAProxy telling me that this is an
>>> invalid request but not a segfault.
>>
>> I can't tell for sure, but I totally agree it should never segfault,
>> so at the very least we're missing a test. However I suspect there
>> is a problem with the presence of the forward() call in your script,
>> because by doing this you're totally bypassing the HTTP engine, so
>> your script was called in an http context, it discreetly stole the
>> contents under the blanket, and went back to the http engine saying
>> "I did nothing, it's not me!". The rest of the code continues to
>> process the HTTP contents from the buffer where they are, resulting
>> in quite a big mess. Ideally we should have a way to detect that
>> parts of the buffer were moved on return and immediately send an
>> error there. But there are some cases where it's valid if called
>> using the HTTP API. So I don't know for sure how to detect such
>> anomalies. Maybe buffer contents being smaller than the size of
>> headers known by the parser would already be a good step forward.
>>
>> I remember Thierry recently had to try to strengthen a little bit
>> such use cases where tcp was used from within HTTP. We'll definitely
>> have to figure what the use cases are for this and to find a reliable
>> solution to this because by definition it will not work anymore with
>> HTX.
>>
>>> There are two ways to avoid this by changing the script:
>>>
>>> 1/ Use mode tcp
>>>
>>> 2/ Use "get" and "send" instead of "forward", this way the LUA script
>>> will send the response directly to the client, instead of HAProxy doing
>>> that.
>>
>> It should still cause the same problem which is that the HTTP parser
>> is totally bypassed and what you forward is not HTTP anymore, but bytes
>> from the wire, and that you may even expect that the HTTP parser appends
>> an error at some places and aborts if it discovers the stream is
>> mangled.
>>
>> I don't know if we can register filters from the Lua, but ideally that's
>> what should be the best option in your case : having a Lua-based filter
>> running on the data part would allow you to intercept the data stream
>> for each chunk decoded by the HTTP parser.
>>
>
> For the record, here is my old reply on a similar issue:
> https://www.mail-archive.com/haproxy@formilux.org/msg29571.html
>
> So, to be safe, don't use get/set/forward/send in HTTP without
> terminating the transaction with txn.done().
>
> The Lua API must definitely be changed to be more restrictive in HTTP.
> When the Lua code is updated to support the HTX representation, I'll
> see with Thierry how to clarify this point.
>

Thank you Willy and Christopher for the clarifications, this is pretty
clear.
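
For anyone finding this thread later, a minimal sketch of the safe pattern
Christopher describes (this assumes the current Lua Channel/TXN API, and the
actual delivery to the third party is left out):

function mirror(txn)
    -- read the whole buffered response ourselves ...
    local data = txn.res:get()
    -- ... send a copy to the third party here, then hand the data to the
    -- client directly ...
    txn.res:send(data)
    -- ... and terminate the transaction so the HTTP engine never tries to
    -- re-parse a buffer we already consumed
    txn.done()
end
core.register_action("mirror", { "http-res" }, mirror)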

-- 
Moemen MHEDHBI




Re: OCSP stapling with multiple domains

2018-11-26 Thread Moemen MHEDHBI

On 11/14/18 1:34 AM, Igor Cicimov wrote:
> On Sun, Nov 11, 2018 at 2:48 PM Igor Cicimov
>  > wrote:
>
> Hi,
>
> # haproxy -v
> HA-Proxy version 1.8.14-1ppa1~xenial 2018/09/23
> Copyright 2000-2018 Willy Tarreau  >
>
> I noticed that in case of multiple domains and OCSP setup:
>
> # ls -1 /etc/haproxy/ssl.d/*.ocsp
> /etc/haproxy/ssl.d/star_domain2_com.crt.ocsp
> /etc/haproxy/ssl.d/star_domain_com.crt.ocsp
> /etc/haproxy/ssl.d/star_domain3_com.crt.ocsp
> /etc/haproxy/ssl.d/star_domain4_com.crt.ocsp
>
> I get OCSP response from haproxy only for one of the domains
> domain.com . Tested via:
>
> $ echo | openssl s_client -connect domain[234].com:443 -tlsextdebug
> -status -servername domain[234].com
>
> Is this expected?
>
>
> Any comments/ideas regarding this? I further noticed that the OCSP code
> probably does not check the certificates' SANs and matches only on the
> CN in the subject, since calls to whatever.domain.tld get stapled but
> calls to domain.tld do not.
>
Hi Igor,

Testing OCSP on multiple certificates with different domains (based on
the CN) works correctly for me. (a.domain.com, b.domain.com, c.domain.com)

Are you using multiple certs with same CN but different SANs ?

-- 
Moemen MHEDHBI



Re: [PATCH] CLEANUP: http: Fix typo in init_http's comment

2018-11-26 Thread Tim Düsterhus
Willy,

Am 16.09.18 um 00:42 schrieb Tim Duesterhus:
> It read "non-zero" where it should read zero.
> ---
>  src/http.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/http.c b/src/http.c
> index 932f3cf7..1ca1805b 100644
> --- a/src/http.c
> +++ b/src/http.c
> @@ -905,7 +905,7 @@ int http_find_next_url_param(const char **chunks,
>  }
>  
>  
> -/* post-initializes the HTTP parts. Returns non-zero on error, with 
> +/* post-initializes the HTTP parts. Returns zero on error, with 
>   * pointing to the error message.
>   */
>  int init_http(char **err)
> 

While cleaning up my local branches I noticed that this patch is not yet
merged and probably slipped through.

Message-ID: 20180915224230.12922-1-...@bastelstu.be
Archive   : https://www.mail-archive.com/haproxy@formilux.org/msg31231.html

Best regards
Tim Düsterhus



Re: BUG: Warning: invalid file descriptor -1 in syscall close()

2018-11-26 Thread William Lallemand
On Sun, Nov 25, 2018 at 08:04:14PM +0100, Tim Düsterhus wrote:
> I've taken the usage from your commit message in commit
> e736115d3aaa38d2cfc89fe74174d7e90f4a6976 :-)
> 

Oh right, I edited the usage message in this patch but I forgot to edit the
commit message when I rebased my patches :(

-- 
William Lallemand