Load Balancing Software Users

2019-11-19 Thread Craig Wilson
Good Day,

I would like to know if you are interested in reaching out to “Load Balancing
Software Users”.

If you would like to see a few examples I can send you the names of a few firms 
that use the specific technologies for review.

Looking forward to helping you build new revenue streams for your business.

Thanks and Regards,
Craig Wilson
Marketing Manager

If you don’t want to receive any more emails from us REPLY “Unsubscribe”.



Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Craig McLure
Hi,

On Mon, Apr 25, 2016 at 3:39 PM, Lukas Tribus <lu...@gmx.net> wrote:
> Hi,
>
>
> Am 25.04.2016 um 15:51 schrieb Craig McLure:
>>
>> From a firewall perspective all sockets are configured to forcefully
>> stop after about 20 minutes after which time a connection will go
>> 'stale' and no longer function, any additional packets on that socket
>> will be ignored.
>
>
> And why would you configure the firewall to do this? I don't see how this
> makes sense.

Resource limitations, physical restrictions, upstream limitations,
security requirements: it could be anything, and it's not really
relevant to the discussion. There could be many reasons why someone
needs a specific cut-off after a certain amount of time.

>
>
>>   This is fine for our purposes, but when keep-alive
>> comes into play this raises some problems. Theoretically using all the
>> timeouts available in haproxy it's tentatively possible to maintain a
>> connection for *LONGER* than that period, at which point the
>> connection gets silently dropped, and in haproxy the connection fails
>> in a non-graceful way.
>
>
> Even if haproxy would *try* to close the session after time X, there is no
> guarantee that the current in-flight request/response would be finished in
> time to not get dropped at firewall level. What about slow downloads? They
> could go on for hours ...

This is true if you make assumptions about what's happening on the
backend. 10 minutes was (as noted) an example; it could be 3 hours, it
could be 200 years. The relevance here was simply the existence of the
functionality. As for connections dropped during an in-flight
request/response cycle, they should follow the HTTP spec on how to
behave in that scenario, and obviously the 'force close' would occur
prior to the firewall dropping the connection.

>
>
>> Ideally, obviously, I'd like for haproxy to have a way to close the
>> connection as gracefully as possible after X minutes, rather than the
>> current scenario where it may get killed ungracefully.
>
>
> This is not supported.

This is the answer I needed.

> You can simulate this behavior by soft reloading haproxy
> every X minutes or by shutting down those "offensive" session via the admin
> socket:
>
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-shutdown%20session
>
>
> However I would strongly suggest you go back to the drawing board and work
> out why you need this behavior in the first place.
>

With that in mind, it's not overly uncommon behaviour. nginx, for
example, has keepalive_timeout to facilitate the behaviour I'm looking
for here. I simply needed to know whether I had missed something in
the manual with regards to haproxy's support for this functionality;
obviously I hadn't, and as you say it's not supported.
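
For reference, the nginx directives in question look roughly like this
(values are illustrative; newer nginx, 1.19.10+, additionally has
keepalive_time to put a hard cap on a connection's total lifetime):

```
http {
    # Close keep-alive connections idle for more than 75s; the optional
    # second value is advertised to clients in the Keep-Alive header.
    keepalive_timeout  75s 20s;

    # Cap the number of requests served over a single connection.
    keepalive_requests 100;
}
```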

>
> If you are concerned about the number of open connections on the proxy, just
> lower timeout http-keep-alive to something like 30 - 300 ms. That is way more
> effective.
>

Using a low timeout is the general solution I'm using. Again, the
email was sent to see whether the drop could be forced, because even
with strict per-request timeouts it's possible for a connection to
stay open for a long time depending on how it's interacting with the
backend.

>
> cheers,
>
> Lukas
>

Thanks,
Craig



Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Craig McLure
Hi Aleks,

Sorry, I was a bit unclear about the initial request, it was more
about the timeout on keep alive connections than the actual support of
the Keep-Alive header!

I did review the manual for it, the http-keep-alive option isn't the
option I'm looking for, as "It will define how long to wait for a new
HTTP request to start coming after a response was sent.". This makes
it more of an 'idle timeout'.

I'd like the ability to, after exactly (for example) 10 minutes,
forcibly close the client socket regardless of its current state or
what it's doing. All the timeouts I've found in haproxy seem to
relate more to 'idle timeouts':

* timeout http-keep-alive - Amount of time until a socket is closed
because it's been idle and hasn't sent a request
* timeout client - Amount of time until a socket is closed because
it's expected to send data but has been idle for this period.
* timeout http-request - Amount of time until a socket is closed
because it hasn't sent a complete HTTP request in this time.
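
For illustration, the three idle timeouts above sit in a config like this
(values are arbitrary examples, not recommendations):

```
defaults
    mode http
    timeout client          30s  # idle limit while expecting data from the client
    timeout http-request    10s  # limit for receiving a complete HTTP request
    timeout http-keep-alive 5s   # idle limit between two requests on one connection
```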


None of these really provide the type of behaviour I'm expecting, for
a small amount of context:

From a firewall perspective, all sockets are configured to forcefully
stop after about 20 minutes, after which time a connection will go
'stale' and no longer function; any additional packets on that socket
will be ignored. This is fine for our purposes, but when keep-alive
comes into play this raises some problems. Theoretically using all the
timeouts available in haproxy it's tentatively possible to maintain a
connection for *LONGER* than that period, at which point the
connection gets silently dropped, and in haproxy the connection fails
in a non-graceful way.

Ideally, obviously, I'd like for haproxy to have a way to close the
connection as gracefully as possible after X minutes, rather than the
current scenario where it may get killed ungracefully.

Running v1.6.4

Cheers.



On Mon, Apr 25, 2016 at 2:20 PM, Aleksandar Lazic <al-hapr...@none.at> wrote:
> Hi.
>
> Am 25-04-2016 14:01, schrieb Craig McLure:
>>
>> Hi,
>>
>> Does HAProxy support the Keep-Alive header, and a 'max connection
>> duration' for Keep-Alive connections?
>>
>> I've pored through the manual and can't see anything obvious, but it
>> would be useful for better control over Keep-Alive connections.
>
>
> please can you show us haproxy -vv
>
> and maybe this could help
>
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-timeout%20http-keep-alive
> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4-timeout%20http-keep-alive
>
> found it with search in the page.
>
> There are more keep alive settings on this page ;-)
>
> Best regards
> Aleks



Support for Keep-Alive header and timeouts

2016-04-25 Thread Craig McLure
Hi,

Does HAProxy support the Keep-Alive header, and a 'max connection duration'
for Keep-Alive connections?

I've pored through the manual and can't see anything obvious, but it
would be useful for better control over Keep-Alive connections.

Thanks.


Re: Q: about HTTP/2

2016-04-01 Thread Craig Craig
Hi,

> Do you guys, on the ML, really need HTTP/2?
> If so what's your deadline??

Yea, we will definitely need it; our customers started asking about it two
months ago. Management will probably start to worry about pissing off premium
managed hosting customers if they keep asking and we can't add HTTP/2 for them
at some point.
It's not really urgent for us; the deadline might be the end of the year. It
could become a problem if we want to take part in a public bidding and someone
has snuck "HTTP/2 support" into the requirements for bidders, and we just
can't do that with haproxy.
I'm pretty sure my boss would be willing to invest some €€€ if that helps.

- Craig



Re: Question about Keep-Alive behaviour

2016-03-31 Thread Craig McLure
Hi Baptiste,

Thanks for the answer, it does help!

There have been discussions on the list about maintaining a connection pool
with backend servers for the purposes of keep-alive, are there any plans
for this in the near future? If not, can you recommend a way to handle such
behaviour outside of haproxy?

Thanks.

On 22 March 2016 at 20:44, Baptiste <bed...@gmail.com> wrote:

> On Tue, Mar 22, 2016 at 2:17 PM, Craig McLure <cr...@mclure.eu> wrote:
> > Hi,
> >
> > I'm hoping to experiment with enabling keep-alive on my service, but the
> > documentation isn't entirely clear for my use case, the general
> > implementation is as follows:
> >
> > 1) A HTTP request comes in
> > 2) A LUA script grabs the request body, does some analysis on it, and
> > injects a Cookie: header into the request
> > 3) The request goes to a backend, where the cookie is used to determine
> > the server the request should be dispatched to.
> >
> > This behaviour seems to work fine with the http-server-close or httpclose
> > options, but I'm not entirely sure what would happen in a keep-alive
> > session when the backend server switches. I've set http-reuse to 'safe'
> > but when the second request goes to a different backend server to the
> > first, what happens to the original socket on the first server? Will it
> > be reused by other connections or does it just get dropped in a 1:1
> > mapping style? Given that it's rare that two subsequent requests on a
> > single connection will arrive at the same server, is it even worth having
> > keep-alive support on the backends?
> >
> > Hopefully you guys can help.
> >
> > Thanks!
>
> Hi Craig,
>
> We miss the backend configuration and how you perform this persistence
> to be able to deliver you the best support.
> As far as I can tell, the persistence will have precedence over
> keep-alive connections, if that helps. So imagine a client which did a
> first request which has been routed to server 1 where the connection
> is now established, a second request comes from this same client and
> your lua script sets a cookie to point it to server 2, then HAProxy
> will close the first connection and establish a new one on the new
> server.
>
> Baptiste
>


Question about Keep-Alive behaviour

2016-03-22 Thread Craig McLure
Hi,

I'm hoping to experiment with enabling keep-alive on my service, but the
documentation isn't entirely clear for my use case, the general
implementation is as follows:

1) A HTTP request comes in
2) A LUA script grabs the request body, does some analysis on it, and
injects a Cookie: header into the request
3) The request goes to a backend, where the cookie is used to determine the
server the request should be dispatched to.

This behaviour seems to work fine with the http-server-close or httpclose
options, but I'm not entirely sure what would happen in a keep-alive
session when the backend server switches. I've set http-reuse to 'safe'
 but when the second request goes to a different backend server to the
first, what happens to the original socket on the first server? Will it be
reused by other connections or does it just get dropped in a 1:1 mapping
style? Given that it's rare that two subsequent requests on a single
connection will arrive at the same server, is it even worth having
keep-alive support on the backends?
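
For context, a rough config sketch of the setup being described (the Lua
action name, cookie name and addresses are all hypothetical):

```
frontend fe_main
    bind :8080
    # hypothetical Lua action that inspects the body and injects a Cookie header
    http-request lua.inject_cookie
    default_backend be_app

backend be_app
    http-reuse safe
    # route on the injected cookie; 'SRV' is a made-up cookie name
    cookie SRV
    server app1 10.0.0.1:80 cookie app1
    server app2 10.0.0.2:80 cookie app2
```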

Hopefully you guys can help.

Thanks!

--
Craig McLure


haproxy as a login portal

2016-02-05 Thread Craig Craig
Hi,

I'd like to use haproxy as login portal, has anyone done a configuration like
that?

I've got some users connecting from dynamic IPs to access a 3rd-party content
management system which I don't want to expose globally; I would like to
authenticate them not by IP, but by session/actual user, before they can even
try to log in to the real system.

My idea is that haproxy forwards all unauthenticated requests to a portal
server; after a successful login, that system sets a specific cookie which I
can match in haproxy to forward authenticated users to the real server. It's
not possible to access stick-tables from an external source, e.g. via the
admin socket, for this, correct? Maybe I could code the login portal in LUA
and write to a data structure?
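
A rough config sketch of that cookie-gate idea (cookie name, ports and
addresses are hypothetical):

```
frontend fe_portal
    bind :80
    # users without the auth cookie go to the login portal
    acl has_auth_cookie hdr_sub(Cookie) portal_auth=
    use_backend be_cms if has_auth_cookie
    default_backend be_login

backend be_login
    server portal 10.0.0.10:8080

backend be_cms
    server cms 10.0.0.20:8080
```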

This is just a quick idea, I didn't look deeply into this yet, and was wondering
if anyone had done it before or has some ideas. :)

Best regards,

craig



Sponsorships

2015-06-17 Thread Joseph Craig
Hi, my name is Joseph Craig and I'm the Marketing Director here at Private
Internet Access. I came across haproxy.org and considering our
organizations' similar goals in Internet security, I wanted to speak with
someone from your organization about donations. Can we perhaps schedule a
time to jump on a call to discuss further?

Thank you!

Joseph Craig
Marketing Director
Private Internet Access | London Trust Media, Inc.
Cell: (213) 800-1368

https://www.privateinternetaccess.com/ | http://londontrustmedia.com/

Toll Free: (855) ANON-VPN
Local: (347) LTM-WINS
Fax: (347) 803-1582

This e-mail message is intended for the named recipient(s) above, and may
contain information that is privileged, confidential and/or exempt from
disclosure under applicable law. If you have received this message in
error, or are not the named recipient(s), please do not read the content.
Instead, immediately notify the sender and delete this e-mail message. Any
unauthorized use, disclosure or distribution is prohibited.

London Trust Media, Inc. assumes no responsibility for any errors or
omissions in the content or transmission of this email.


[SPAM] Your commerce outlet and channel will get wider and wider

2015-06-01 Thread Craig

Dear friend:

This is Craig, from 1style In A Million Industrial Co.,Limited, from China.

We can provide you with a lot of fashions.

Currently, we have a bulk quantity of nice new stock T shirts/polo shirts for sale; we might cooperate in two ways:

1) If you can INTRODUCE our T shirts/polo shirts to any other importer (who'll buy from us directly), you can get 10% of the current EXW price as commission.

And you can get commission CONTINUOUSLY from your customers' continuous orders they place with us on those T shirts, by sitting at home and only introducing our products to them or only coordinating between all the parties, without investing your own money.

We can additionally give your customers 5% of the current EXW price as a discount. If your customers want more than 5%, the difference must come out of your commission; if they accept less than 5%, you keep the gap as extra commission. For example, if your customers accept a 3% discount, the remaining 2% goes to you, making your total commission (10% + 2%) = 12%. You can arrange how to split the 15% between your customers and yourself, but you had better allow them a larger discount, so that they sell more quickly and place their next order with us sooner, and you continuously earn more commission faster.

We will not tell your customers about your own commission, but you are required to confirm which customers will be introduced to us by you before we discuss discounts with them; otherwise, we will give them the full 15%, since we will not know which customers you introduced.

2) If you yourself buy from us, you can get a 15% discount directly off the current EXW Shenzhen price (in that case, there is no commission any more).

Payment: PAYPAL, T/T, WESTERN UNION

If you can find the market for those T shirts, I am sure that this business will be very prosperous and lucrative. My confidence originates from our fashionable designs, very nice quality and very good feedback in both China's domestic market and overseas markets. Trust me, you will know what I am saying is true.

For more details such as our catalogues or website, please feel free to contact us.

SINCERELY

Craig
1style In A Million
+86-186-88832546
+86-150-12875302
+86-755-84867606
+86-755-84860929
Skype Id: one-style-in-a-million


RE: No TCP RST on tcp-request connection reject

2015-01-16 Thread Craig Craig
Hi,

> I don't see how. The socket is immediately close()'ed when it hits
> tcp-request connection reject, this is as cheap as it gets.

If you're getting attacked, you try to send as few unnecessary packets as
possible, so I guess a silent drop could be nice.

> > a) HAProxy (configured with rate limiting etc.) does a tcp-request
> > connection reject which ends up as a TCP RST. The attacker gets the
> > RST and immediately tries again
>
> Are you saying that an attacker retransmits faster because of the RST?
> That's nonsense, an attacker doesn't care about the RST at all.

His tools might care about it, for example if it's an automated SQLi test?

> > b) the same as a) but the socket will be closed on the server side but no
> > RST, nothing will be sent back to the remote side. The connections on the
> > remote side will be kept open until timeout.
>
> An attacker doesn't keep state on his local machine if his intention is to
> SYN flood you.

I think he's talking about established connections.
 

- Craig

consistent server status between config reloads

2014-02-24 Thread Craig Craig
Hi,

I'm running some scripts that can disable a server for maintenance/application
deployments. However a config reload enables the server again, and we have
frequent changes to our haproxy config. Would it be possible to leave disabled
servers in that state between reloads? Maybe with an additional config option
like disable_permanent or the like?
Any opinions on this?
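
As a side note for later readers: haproxy 1.6 eventually addressed exactly
this with server state files, which preserve administrative state (including
maintenance mode) across reloads. A sketch of that mechanism:

```
global
    stats socket /var/run/haproxy.sock level admin
    server-state-file /var/lib/haproxy/server-state

defaults
    load-server-state-from-file global
```

Before each reload, the state is dumped with something like
echo "show servers state" | socat stdio /var/run/haproxy.sock > /var/lib/haproxy/server-state.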

Best regards,

Craig

Re: Loading configuration from multiple files

2014-01-15 Thread Craig
On 14.01.2014 15:01, Timh Bergström wrote:
 I would really love a clean/native way to basically do includes in
 the configuration file;

+1 on what Timh said.
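
As a side note, haproxy already accepts -f several times, and 1.7 later added
directory support, which comes close to includes (paths below are
illustrative):

```
# concatenate several files, in the order given
haproxy -c -f /etc/haproxy/00-global.cfg -f /etc/haproxy/10-web.cfg

# haproxy 1.7+: load every non-hidden file in the directory, alphabetically
haproxy -c -f /etc/haproxy/conf.d/
```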



Unix socket question

2014-01-09 Thread Craig Smith
Hello.

I'm attempting to use HAProxy with some custom scripts with auto scaling
groups on EC2. If I run the 'disable server' command from a Unix socket,
what will happen to the active connections to that server? Will HAProxy wait
until those connections are closed to mark the server down?

Thanks.

Craig
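
For reference, assuming a stats socket configured with
stats socket /var/run/haproxy.sock level admin and socat installed (backend
and server names below are hypothetical), the command is issued like:

```
echo "disable server be_app/web1" | socat stdio /var/run/haproxy.sock
```

Disabling a server puts it into maintenance mode: no new connections are sent
to it, while connections that are already established are left to finish on
their own.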


Re: External Monitoring of https on LB's

2012-08-27 Thread Craig Craig
Hi,

a patch is already upstream. I put some effort into getting patches upstream:

http://groups.google.com/group/mailing.unix.stunnel-users/tree/browse_frm/month/2011-02/a1956cc49beaf689?rnum=11_done=%2Fgroup%2Fmailing.unix.stunnel-users%2Fbrowse_frm%2Fmonth%2F2011-02%3Ffwc%3D1%26#doc_2d06864707c888ef

Changelog:

Version 4.36, 2011.05.03, urgency: LOW:
New features
* Backlog parameter of listen(2) changed from 5 to SOMAXCONN: improved
behavior on heavy load.

Also, notice the last posting. I really don't like his attitude on integrating
patches, but I don't care anymore - we moved away to nginx.


Best regards,

Stefan Behte



On August 27, 2012 at 7:14 AM Willy Tarreau w...@1wt.eu wrote:

> Hi,

> On Mon, Aug 27, 2012 at 09:11:43AM +1000, s...@summerwinter.com wrote:
> > Hi there,
> >
> > Forgive me if this is the wrong place for advice, but I figure a lot
> > of people here must use a similar setup.
> >
> > I've got 2 LB's setup with haproxy, heartbeat & stunnel. Http & https
> > is working correctly.
> >
> > I am using HyperSpin.com for external monitoring to receive alerts
> > based on ping, http & https on the float IP.
> >
> > Ping & http work without issue. However, 75% of their 20 or so global
> > monitoring servers appear to return errors 'couldn't connect to port
> > 443', so every 10-15 minutes a server that can't connect on 443 tests
> > it, fails, and my inbox fills.
> >
> > There is no firewall on the LB, nothing I can tell that would be
> > blocking access to 443.
> >
> > I've received the following logs from HyperSpin on a server that is
> > unable to connect:
> >
> > -
> >
> > We do not know the cause of the problem, but we can confirm it is a SSL
> > issue.
> >
> > We logged in to our Singapore server and tried using curl and wget to
> > access your website. Both returned errors.
> >
> > ===
> > [admin@sg ~]$ curl https://floatip
> > curl: (35) Unknown SSL protocol error in connection to floatip:443
> >
> > [admin@sg ~]$ wget -O - https://floatip
> > --21:44:16-- https://floatip
> > => `-'
> > Connecting to floatip:443... connected.
> > Unable to establish SSL connection.
> >
> > ===
> >
> > I thought it may be an issue with the intermediate certificate, but I
> > have tacked that on at the end of the ssl.crt file I'm using.
> >
> > Any ideas?
>
> I could suspect something else. Did you patch your stunnel? By default
> it has a very tiny listen queue of only 5 entries which can cause exactly
> this issue if there is even a moderate load on it. A patch to change this
> is available here if you want:
>
> http://www.exceliance.fr/download/free/patches/stunnel/
>
> It adds a listenqueue parameter allowing you to increase the backlog.
> I would really not be surprised if this was the issue.
>
> Regards,
> Willy





Re: Haproxy cluster?

2011-09-16 Thread Craig
Hi,

> I was wondering if there's a way to load balance traffic over a number of
> lower-bandwidth haproxy machines? I've used heartbeat to handle
> active-passive failover with haproxy before. I guess what I'm really asking
> is if there's a way to do active-active load balancing over a bunch of
> haproxy machines, with the capability to add more machines to the pool
> easily, if required?

You might want to have a look at the iptables CLUSTERIP target. I
haven't tested it much, though.
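
A hypothetical two-node CLUSTERIP sketch (addresses and interface are
illustrative; the target has since been deprecated in newer kernels in
favour of the cluster match):

```
# node 1 (run the same rule with --local-node 2 on the second machine)
iptables -A INPUT -d 192.0.2.10 -i eth0 -p tcp --dport 80 \
    -j CLUSTERIP --new --hashmode sourceip \
    --clustermac 01:00:5e:00:00:20 --total-nodes 2 --local-node 1
```

Both nodes share the multicast MAC, see every packet, and each accepts only
its own hash bucket, giving active-active balancing without a balancer in
front.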


Best regards,

- craig



Re: Best way to find the version

2011-08-01 Thread Craig
Hi,


> Just wondering what is the best way to find the haproxy version.

haproxy -v


- craig



Re: nbproc1, ksoftirqd and response time - just can't get it

2011-07-26 Thread Craig
Hi,

> We've tested from the outside. In fact that was a real attack. Botnet
> consisted of ~10-12k bots each opening 1000 connections/second.

This kind of DDoS seems popular lately. Did it originate from a specific
AS, did you try to nullroute? I'm curious because mostly when I see
botnet attacks, they are not widely spread throughout the internet but
mostly come from 10 AS.

Nice to see that AWS performed here, thanks for sharing. :)

- Craig



Re: https from source to destination

2011-07-13 Thread Craig
Hi,

> No. You terminate the ssl at the load-balancer, and send the http to
> the backend. You need to configure the backend servers to accept and
> trust the http traffic from the LB.

I hereby request the feature to do https to backends.
Sometimes it's really troublesome not being able to do that, even more
so if a different party administers the servers.
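
(As it happens, this feature did land later: haproxy 1.5 can both terminate
TLS and re-encrypt towards the servers. A sketch of the backend side, with
hypothetical names and paths:)

```
backend be_secure
    # re-encrypt traffic to the backend and verify its certificate
    server app1 10.0.0.1:443 ssl verify required ca-file /etc/ssl/certs/ca.pem
```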

Best regards,

Craig



Re: https from source to destination

2011-07-13 Thread Craig
Hi,

> On Wed, Jul 13, 2011 at 5:57 PM, Craig cr...@haquarter.de wrote:
> >
> > I hereby request the feature to do https to backends
> > Sometimes it's really troublesome not being able to do that, even more
> > so if a different party administers the servers.
>
> I'm not sure if you're serious or not, but if another party is
> administrating the backend servers, it seems likely that you won't
> have the private key for the ssl certificate.

Yea I am, I wouldn't dare to write shitty semi-joke mails on Willy's list.

In a big company, the loadbalancer could be managed by the network team
and the servers by the application team; that's what I meant, and in
that case you will have the keys. Making HTTPS connections to backends
would be really nice, because quite often you have rules on your
webservers that redirect HTTP traffic to HTTPS, which causes an endless
loop if you terminate that traffic on the loadbalancer and send it via
HTTP to your backend.

Surely, you can add headers with the loadbalancer so that the backend
knows whether the connection is already secure or needs to get
redirected, but then there are sometimes also funny application servers
that still go nuts at you. Or your apache config is sent to you by a
managed hosting customer and you have to patch it all the time for the
header check. It's much nicer to just tell haproxy that those backend
servers are HTTPS instead of HTTP. Sure, it takes more resources and
might slow things down a bit, but if it's a system that runs at 5% of
available resources anyway, you won't care much. Even so, you might
rather invest 5000$ in hardware to keep the performance as-is than
create a sucky workflow and/or piss off your customers, who will
complain that your sucky loadbalancer cannot loadbalance https properly
and made them change their apache config, which took them three days
that no one pays for.

Surely, you could just layer-3 balance, but that takes a lot of
features away and you might have to run a caching instance like varnish
or squid, too. Some IT contracts suck. ;)

So I'd like to see HTTPS backend support, and even much more so code
for HTTPS frontends, but that's a whole other story.

AFAIK, Willy has both on his TODO list. :)

Best regards,

Craig





Re: balance (hdr) problem (maybe bug?)

2011-02-14 Thread Craig
Hi,

> Thanks to all your tests and observations, I managed to spot the bug and to
> fix it. The headers are linked in a list whose tail is known as
> hdr_idx->tail. This pointer is used when adding new headers. Unfortunately
> the header removal function did not update it when it removed the last
> header. The effect is that when adding the x-forwarded-for header, it's
> added just after the previously removed Connection header, so it does not
> appear in the list when walked from the beginning. It's really when you said
> that it broke when Connection was the last header that I understood what to
> look for. Thanks!
I've confirmed it's working fine now, and successfully deployed the new
version yesterday in our live environment. Many thanks for fixing it! :)

Sidenote to list-readers: Do not use stunnel with the out-of-tree
x-forwarded-for patch for SSL termination if you want to balance based
on that header with two haproxy frontends and one backend. Stunnel will
add an additional X-Forwarded-For header even if your clients send one.

> > Expected behaviour:
> > Case a) should not jump between servers. An empty x-forwarded-for header
> > means that always the same header (an empty one) should be hashed, you
> > should always end up on the same server.
>
> This is not true in haproxy, and there is an explicit test for this. If you
> try to hash on an empty or non-existing header or parameter, then it falls
> back to round robin. This is very important because there are many conditions
> where you want to optimize stickiness when the information is present, but
> not send all visitors to the same random server if they don't have the info.
 
Oh right, it really has to be like that! That was just a (too) quick
thought and I was misled...

Well thanks again for this great piece of software! :)


Best regards,

Craig



Re: Re: balance (hdr) problem (maybe bug?)

2011-02-11 Thread Craig Craig
Hi,

I decided to narrow the bug down a bit and deleted all other
backends/frontends we have; I've defined three servers in the backend which
all query www.google.de. Thus you do not have to set up your own server if
you want to test this config, google can take the load. ;)

The problem is reproducible with this config; I used netcat here.

Running with #1 config (reqidel ^X-Forwarded-For:.* in frontend_btg not set).

case a) jumps between backends:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
Connection: keep-alive

EOF

case b) stays on same backend:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
Connection: close

EOF

case c) stays on same backend:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
X-Forwarded-For: 127.0.0.1
Connection: close

EOF

case d) stays on same backend:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
X-Forwarded-For: 127.0.0.1
Connection: keep-alive

EOF

Expected behaviour:
Case a) should not jump between servers. An empty x-forwarded-for header means
that always the same header (an empty one) should be hashed; you should
always end up on the same server.
Case a) and b) should behave the same. What does it matter if the connection
is set to keep-alive or close? I've set option httpclose anyways.


Running with #2 config (reqidel ^X-Forwarded-For:.* in frontend_btg is set).

case e) jumps between backends:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
Connection: keep-alive

EOF

case f) stays on same backend:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
Connection: close

EOF

case g) stays on same backend:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
X-Forwarded-For: 127.0.0.1
Connection: close

EOF

case h) jumps between backends:
nc 127.0.0.1 8085 <<EOF
GET / HTTP/1.1
Host: www.google.de
X-Forwarded-For: 127.0.0.1
Connection: keep-alive

EOF

Expected behaviour:
Case e) same expectations as with case a) and config-1.
Case h) should really stay on one backend. I want haproxy to delete
X-Forwarded-For on the frontend, add a new X-Forwarded-For: SRC-IP, and
balance based on that header in the backend.

With this behaviour you will get some problems with http/https and sessions, 
stunnel will add an X-Forwarded-For header which contains the actual IP, but 
the user might have sent a different one (or none) resulting in the client to 
access different backends with http than with https.
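
As an aside, the balancing rule haproxy actually applies here (hash the
header when present, fall back to round robin when it is empty or missing,
as explained later in the thread) can be sketched in a few lines of Python.
The hash function below is purely illustrative, not haproxy's real one, and
the server names are made up:

```python
from itertools import count

SERVERS = ["srv1", "srv2", "srv3"]
_rr = count()  # round-robin counter for the fallback path

def hash_hdr(value):
    # Illustrative additive hash; haproxy's real header hash differs.
    return sum(value.encode())

def pick_server(xff):
    """Pick a server from an X-Forwarded-For value, haproxy-style:
    a present header is hashed (sticky), an empty/missing one falls
    back to round robin."""
    if not xff:
        return SERVERS[next(_rr) % len(SERVERS)]
    return SERVERS[hash_hdr(xff) % len(SERVERS)]
```

The same header value always maps to the same server, while requests
without the header rotate across all servers.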


Best regards,

Craig



- original message -

Subject: Re: balance (hdr) problem (maybe bug?)
Sent: Thu, 10 Feb 2011
From: Willy Tarreau w...@1wt.eu

> Hi Craig,
>
> On Mon, Feb 07, 2011 at 09:24:24PM +0100, Craig wrote:
> > Hi,
> >
> > > The X-Forwarded-For header is only added once at the end of all
> > > processing.
> > > Otherwise, having it in the defaults section would result in both your
> > > frontend and your backend adding it.
> > Then the possibility to add it only to a frontend or a backend in the
> > defaults section would be nice?
>
> It is already the case. The fact is that we're telling haproxy that we
> want an outgoing request to have the header. If you set the option in
> the frontend, it will have it. If you set it in the backend, it will
> have it. If you set it in both, it will only be added once. It's really
> a flag: when the request passes through a frontend or backend which
> has the option, then it will have the header appended.
>
> > > So in your case, what happens is that you delete it in the frontend
> > > (using reqidel) then you tag the session for adding a new one after
> > > all processing is done.
> > >
> > > When at the last point we have to establish a connection to the
> > > server, we check the header and balance based on it. I agree we should
> > > always have it filled with the same value, so there's a bug.
> > So if I got it right, I cannot balance based on the new header because
> > it was not added yet. That behaviour comes really unexpected because one
> > usually would believe it was already added in the frontend.
>
> It can come unexpected when you reason with header addition, but it's sort
> of an implicit header addition. The opposite would be much more unexpected:
> you'd really not want the header to be added twice because it was enabled
> in both sections. It's possible that the doc is not clear enough:
>
>   This option may be specified either in the frontend or in the backend. If
>   at least one of them uses it, the header will be added. Note that the
>   backend's setting of the header subargument takes precedence over the
>   frontend's if both are defined.
>
> Maybe we should insist on the fact that it's done only at the end.
>
> We could try to add it in the frontend and tag the session to know it was
> already performed. But this would slightly change the semantics to a new
> one which might not necessarily be desirable. For instance, it's possible
> in a backend to delete the header and set the option. That way you know
> that your servers will receive exactly one occurrence of it. Many people
> are doing

Re: HAProxy Stunnel SSL Setup question

2011-02-07 Thread Craig
Hello,

> The last stunnel patch I saw on this mailing list was for stunnel 4.34
> available from this thread:
> http://www.mail-archive.com/haproxy@formilux.org/msg04024.html

Someone worked on this today, too:
https://bugs.gentoo.org/show_bug.cgi?id=353955

It fixes the manpage, too. No one reviewed the patch yet, so if someone
from the list is willing to do, I'd be glad.

- Craig



Re: balance (hdr) problem (maybe bug?)

2011-02-07 Thread Craig
Hi,

 The X-Forwarded-For header is only added once at the end of all
processing.
 Otherwise, having it in the defaults section would result in both your
 frontend and your backend adding it.
Then it would be nice to have the possibility, in the defaults section, of
adding it only to a frontend or only to a backend?

 So in your case, what happens is that you delete it in the frontend
(using
 reqidel) then you tag the session for adding a new one after all
processing
 is done.

 When at the last point we have to establish a connection to the
server, we
 check the header and balance based on it. I agree we should always
have it
 filled with the same value, so there's a bug.
So if I got it right, I cannot balance based on the new header because
it has not been added yet. That behaviour is really unexpected, because one
would usually assume it had already been added in the frontend.

 My guess is that you're running a version prior to 1.4.10 which has the
 header deletion bug : the header list can become corrupted when exactly
 two consecutive headers are removed from the request (eg: connection and
 x-forwarded-for). Then the newly added X-Forwarded-For could not be seen
 by the code responsible for hashing it.

 If so, please try to upgrade to the last bug fix (1.4.10) and see if the
 problem persists.
I am already using 1.4.10 - sorry, it seems I somehow forgot to mention
it! :/

 Also, I'd like to add that what you're doing is simply equivalent (though
 more complex) to hashing the source address. You'd better use
balance src
 for this :-)
That is a good hint, but I also have a frontend for SSL (with stunnel,
which adds the X-Forwarded-For header) that I'd like to use with the same
backend. I did not like defining backends twice, as it introduces
redundancy and might lead to inconsistency; it is a good workaround
though. Note: my testing and the bug happened with the normal frontend.

Also, I could leave out the reqidel of the header, but then a malicious
party could theoretically choose the server it accesses (by forging
X-Forwarded-For) and overload one after another; I prefer to take away
this possibility (yes, I am overdoing it, maybe). ;)
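
For comparison, the `balance src` alternative suggested in the quoted reply could be sketched like this (placeholder names and addresses; note that behind stunnel the source address seen by haproxy is the SSL proxy itself, which is exactly the limitation discussed above):

```
backend be
    mode http
    # hash on the client's source address; no header deletion or
    # insertion is needed for the hash itself
    balance src
    server s1 10.0.0.1:80 check
    server s2 10.0.0.2:80 check
```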



Best regards,

Craig






balance (hdr) problem (maybe bug?)

2011-02-03 Thread Craig Craig
Hi,

I've stumbled upon a problem with balance hdr(), specifically with
X-Forwarded-For.
When you use the config that I've attached, you get different results depending
on whether you send an X-Forwarded-For header or not.

The source IP does not change when I perform those queries, hosts did not 
change state:

curl http://www.foo.de/host.jsp -s
Stays always on the same server.

curl http://www.foo.de/host.jsp -s -H "X-Forwarded-For: x.x.x.x"
Jumps between the three hosts.

This is strange: I delete the header that is sent by the client on the frontend 
with reqidel and set a new one with option forwardfor - I expected the 
backend to balance based on that new header.

If my assumption was wrong, and the original header is used, then I should not 
jump between hosts when I am always sending the same header.

Something smells fishy here...is this a bug? A Feature? ;) Or misunderstanding 
on my part?


Thanks,

Craig


haproxy.cfg:
---
global
user haproxy
group haproxy
maxconn 75000
log 127.0.0.1 local0
stats socket /var/run/haproxy.stat mode 600

defaults
timeout client 300s
timeout server 300s
timeout queue 60s
timeout connect 7s
timeout http-request 10s

backend backend_btg
mode http
balance hdr(X-Forwarded-For)
option redispatch
option httpchk HEAD / HTTP/1.1\r\nHost:\ www.foo.de
server S43 192.168.x.43:80 weight 100 maxconn 16384 check inter 1 fall 2 rise 2
server S56 192.168.x.56:80 weight 100 maxconn 16384 check inter 1 fall 2 rise 2
server S76 192.168.x.76:80 weight 100 maxconn 16384 check inter 1 fall 2 rise 2

frontend frontent_btg
bind 0.0.0.0:8085
maxconn 3
mode http
option httplog
reqidel ^X-Forwarded-For:.*
option forwardfor except 192.168.X.Y
option httpclose
log 127.0.0.1 local0
capture request header Host len 192

default_backend backend_btg




ACL to use a single Server

2010-12-16 Thread Craig
Hi,

I think it's currently not possible to direct traffic to just one server
from a backend, example:

default_backend foo
acl myacl1 [..]
acl myacl2 [..]
acl myacl3 [..]
use_backend foo if myacl1
# does not exist:
use_server my_backend/serverX if myacl2

serverX would then inherit the settings from my_backend; you would not
have to define another backend and use it. Traffic status should/could
be updated in the my_backend definition only. I think it might make the
config shorter and one would not have to take care of settings in two
different backends.

A typical use-case is a special server from your cluster that fulfills a
special maintenance task; I guess it's a common use-case. Any opinions
on this?
This is just a discussion/feature request, unfortunately my C is weak. ;(
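
For reference, the workaround available today is the duplicated-backend approach that this proposal tries to avoid; a sketch with hypothetical names, addresses, and ACL condition:

```
backend foo
    server s1 10.0.0.1:80 check
    server serverX 10.0.0.2:80 check

# one-server backend duplicating serverX's settings by hand
backend foo_only_serverX
    server serverX 10.0.0.2:80 check

frontend fe
    bind :80
    acl myacl2 path_beg /maintenance   # placeholder condition
    use_backend foo_only_serverX if myacl2
    default_backend foo
```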

Best wishes,

Craig



haproxy bug or wrong kernel settings?

2010-06-30 Thread Craig Craig
Hi list,

I'm having a strange problem with haproxy 1.3.24 when the server gets more
connections. Load is still ok by then (about 0.5), throughput is about
50-100 MBit.

Right now, everything is fine, I'm seeing:

Server connection states (it also runs a squid, which is not used for this 
domain):

     92 CLOSE_WAIT
     21 CLOSING
   3315 ESTABLISHED
     86 FIN_WAIT1
    171 FIN_WAIT2
     60 LAST_ACK
     34 LISTEN
     99 SYN_RECV
      1 SYN_SENT
   9532 TIME_WAIT

Port 8085 (haproxy-frontend) only:

ESTABLISHED  1544
FIN_WAIT1    16
FIN_WAIT2    141
LAST_ACK     59
SYN_RECV     44
SYN_SENT     0
TIME_WAIT    1101
CLOSE_WAIT   1
CLOSING      0


It seems that somewhere between 2000 and 4500 established connections, the
problems start; I've not been able to determine the exact number, as I've
changed the NAT to point to the server directly, which could handle the ~6600
connections without problems.

When I was querying a server through haproxy (on the haproxy itself), I saw 
this huge lag:

1.) time printf 'GET / HTTP/1.1\r\nhost: www.foo.de\r\nConnection: close\r\nCookie: -\r\n\r\n' | nc -v 192.168.92.11 8085 > /dev/null
real    0m19.976s
user    0m0.000s
sys     0m0.008s

And at the same time, I queried the server directly, from the server running 
haproxy again:
2.) time printf 'GET / HTTP/1.1\r\nhost: www.foo.de\r\nConnection: close\r\nCookie: -\r\n\r\n' | nc -v 192.168.70.43 80 > /dev/null
real    0m0.049s
user    0m0.000s
sys     0m0.004s

Nr. 1.) always had the lag; Nr. 2.) was always fast, though it seemed to get
slower the more connections were open. After switching the NAT from haproxy
to the host directly, the query times are in the range of #2 again. It seems
that after a specific limit is reached by haproxy, the connections get slower
and slower.

It might also be a Linux kernel setting, but any hint would be much
appreciated...

Best regards,
Craig



My config:

# haproxy.cfg
global
user haproxy
group haproxy
maxconn 75000
ulimit-n 192000

log 127.0.0.1 local0

defaults
timeout client 300s
timeout server 300s
timeout queue 60s
timeout connect 7s
timeout http-request 10s

backend backend_btg
mode http
balance hdr(X-Forwarded-For)
option redispatch
option httpchk HEAD / HTTP/1.1\r\nHost:\ www.foo.de
server Sxxx 192.168.71.43:80 weight 100 maxconn 16384 check inter 1 fall 2 
rise 2

frontend frontend_btg
bind 0.0.0.0:8085
mode http
option httplog
reqidel ^X-Forwarded-For:.*
option forwardfor except 192.168.97.11
log 127.0.0.1 local0
capture request header Host len 192
timeout client 1m

acl request_btgdomain hdr_reg(host) -i (^|\.)foo\.de

acl redirect1   url_beg /1
acl redirect2   url_beg /2
acl redirect3   url_beg /3
acl redirect4   url_beg /4
acl redirect5   url_beg /5
acl forum_request   hdr_dom(host)   -i forum.foo.de

acl forum_allow_bt1 src 193.17.232.0/24
acl forum_allow_bt2 src 193.17.236.0/24
acl forum_allow_bt3 src 193.17.243.0/24
acl forum_allow_bt4 src 193.17.244.0/24

redirect location https://www.foo.de/1 if redirect1 request_btgdomain
redirect location https://www.foo.de/2 if redirect2 request_btgdomain
redirect location https://www.foo.de/3 if redirect3 request_btgdomain
redirect location https://www.foo.de/4 if redirect4 request_btgdomain
redirect location https://www.foo.de/5 if redirect5 request_btgdomain

default_backend backend_btg



## sysctl -a output:

kernel.sched_rt_period_us = 100
kernel.sched_rt_runtime_us = 95
kernel.sched_compat_yield = 0
kernel.panic = 0
kernel.core_uses_pid = 0
kernel.core_pattern = core
kernel.tainted = 0
kernel.print-fatal-signals = 0
kernel.ctrl-alt-del = 0
kernel.modprobe = /sbin/modprobe
kernel.hotplug = 
kernel.sg-big-buff = 32768
kernel.cad_pid = 1
kernel.threads-max = 274432
kernel.random.poolsize = 4096
kernel.random.entropy_avail = 130
kernel.random.read_wakeup_threshold = 64
kernel.random.write_wakeup_threshold = 128
kernel.overflowuid = 65534
kernel.overflowgid = 65534
kernel.pid_max = 32768
kernel.panic_on_oops = 0
kernel.printk = 1   4   1   7
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10
kernel.ngroups_max = 65536
kernel.unknown_nmi_panic = 0
kernel.nmi_watchdog = 0
kernel.panic_on_unrecovered_nmi = 0
kernel.bootloader_type = 113
kernel.kstack_depth_to_print = 12
kernel.io_delay_type = 0
kernel.randomize_va_space = 1
kernel.acpi_video_flags = 0
kernel.compat-log = 1
kernel.max_lock_depth = 1024
kernel.poweroff_cmd = /sbin/poweroff
kernel.scan_unevictable_pages = 0
kernel.vsyscall64 = 1
kernel.ostype = Linux
kernel.osrelease = 2.6.29-gentoo-r3
kernel.version = #2 SMP Tue May 11 19:55:13 CEST 2010
kernel.hostname = N111
kernel.domainname = (none)
kernel.shmmax

Re: haproxy bug or wrong kernel settings?

2010-06-30 Thread Craig
Hi,

At 30.06.2010 23:08, Willy Tarreau wrote:
 I'm seeing that you have nf_conntrack loaded on the server, are
 you absolutely sure that the session table never fills up ? You can
 check that with dmesg. I'm asking because this is an extremely common
 issue. Just in doubt, you should check if you can disable it.
I had already ruled conntrack out - sorry, I forgot to mention it. It's
a really common problem I had on my radar. ;)

 maxconn 75000
 ulimit-n 192000

 you can safely remove ulimit-n above, it's correctly computed from
 maxconn.
Thanks for the hint.

 OK I see. You have no maxconn setting in your frontend. So it's
 limited to the default value (2000). You should set it slightly
 below the global maxconn setting (which is for the whole process).
I was always of the opinion (not sure from where I got that, though),
that setting the maxconn at the beginning would set it as default for
every frontend and backend. Oh was I wrong. Next time it's probably a
good idea to read documentation on every configuration option again.
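
In other words, the global maxconn does not propagate to frontends; each frontend needs its own setting, slightly below the global one. A sketch (values mirror the config posted earlier in the thread):

```
global
    maxconn 75000        # limit for the whole process

frontend frontend_btg
    bind 0.0.0.0:8085
    maxconn 70000        # without this, the frontend default of 2000 applies
    default_backend backend_btg
```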

 your sysctls look correct overall.
Thanks!

I'm relatively certain that maxconn was the issue; some tests will
verify it... thank you very much, Willy!

Bests,

Craig



Re: http-server-close problem (also with Tomcat backend)

2010-03-30 Thread Craig
Hi,

I've got the same problem here with forwardfor, and am looking forward
to full HTTP/1.1 support (or a dirty fix ;) that won't force option
httpclose on me...
I'm also thinking about implementing TPROXY so that I don't have to add
the headers and modify the webserver logging. Maybe that's an option for
you, too?

regards,

Craig


On 30.03.2010 23:55, Óscar Frías Barranco wrote:
 I am forced to use http-server-close because in our application we need to
 know the remote IP addresses of the users which are connecting to our
 service.
 And for this we need to use option forwardfor.  And the problem when using
 forwardfor and keepalive is that only some of the requests include the
 X-Forwarded-For header.  Then we are using http-server-close to use only
 keep alive in the connections from browser to haproxy and not in the
 connections from haproxy to the backend servers.  This way we force all the
 requests to include the X-Forwarded-For header.
 
 But then we found the problem that I explained in my first email which,
 summing up, is that http-server-close breaks keepalive support in the
 browser-haproxy connections when using some backend servers (Tomcat and
 Jetty at least).
 
 Oscar
 
 
 On Tue, Mar 30, 2010 at 20:29, Nicolas Maupu nma...@gmail.com wrote:
 
 From the documentation, it seems to be an option to provide keep-alive for a
 backend which cannot support it ...
 The keep-alive connection is only on the frontend side, not on the backend side!

 So the chunked problem is easy to understand:
 RFC 2616 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html) says:

- Whenever a transfer-coding is applied to a message-body, the set of
transfer-codings MUST include chunked, unless the message is terminated by
closing the connection
- The chunked encoding modifies the body of a message in order to
transfer it as a series of chunks

 So chunked transfers need a keep-alive connection. If a Connection: close
 header is present, the server should not proceed with a chunked transfer ...

 Why do you want to use the http-server-close option?
 Why not use the Keep-Alive ability of the Tomcat Connector directly, and specify
 'no option httpclose' in haproxy? This way, haproxy should act
 transparently, using the keep-alive ability of the Tomcat connector.

 nm.



 On Tue, Mar 30, 2010 at 20:07, Nicolas Maupu nma...@gmail.com wrote:

 Hi,

 I just wanted to confirm that HAproxy replaces Connection: Keep-Alive with
 Connection: close.
 I configured haproxy with a listen section with a backend server on port
 80 and put option http-connection-close
 I opened a port with netcat on the backend machine :
 sudo nc -l -p 80

 And finally, I tried to telnet to haproxy and get a page:
 telnet 192.168.10.148 80
 Trying 192.168.10.148...
 Connected to 192.168.10.148.
 Escape character is '^]'.
 GET / HTTP/1.1
 Host: toto
 Connection: Keep-Alive

 Netcat received in fact :
 GET / HTTP/1.1
 Host: toto
 Connection: close

 I don't get it ... HTTP/1.1 says that by default keep-alive is activated
 on every request unless Connection: close is specified.
 If HAProxy sends a Connection: close to the HTTP backend, that backend will
 close the connection ...
 Is somebody able to explain why HAProxy sends a Connection: close to the
 backend?

 nm.

 2010/3/30 Óscar Frías Barranco ofr...@gmail.com

 Hello.

 I have just read Patrik Nilsson email about http-server-close and Jetty.
 Unfortunately I cannot reply to that email because I read it in the 
 archives
 and I was not subscribed to the list.

 We are facing a very similar problem using Tomcat 6.0.20 in the backend
 (and haproxy 1.4.2).

 When we go direct (without haproxy) to the backend servers, these are the
 headers:

 REQUEST:
 GET http://ned.trabber.com/es/ HTTP/1.1
 Accept: image/gif, image/jpeg, image/pjpeg, application/x-ms-application,
 application/vnd.ms-xpsdocument, application/xaml+xml, 
 application/x-ms-xbap,
 application/x-shockwave-flash, */*
 Accept-Language: es
 User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0;
 Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.30618; .NET CLR
 3.5.30729)
 Accept-Encoding: gzip, deflate
 Connection: Keep-Alive
 Host: ned.trabber.com

 RESPONSE:
 HTTP/1.1 200 OK
 Server: Apache-Coyote/1.1
 Expires: Thu, 01 Jan 1970 00:00:00 GMT
 Cache-Control: no-cache
 Content-Type: text/html;charset=UTF-8
 Content-Language: es
 Transfer-Encoding: chunked
 Content-Encoding: gzip
 Vary: Accept-Encoding
 Date: Tue, 30 Mar 2010 15:49:05 GMT


 This means that the keep alive is working OK.

 However, when we go through haproxy, this is the response:

 HTTP/1.1 200 OK
 Server: Apache-Coyote/1.1
 Expires: Thu, 01 Jan 1970 00:00:00 GMT
 Cache-Control: no-cache
 Content-Type: text/html;charset=UTF-8
 Content-Language: es
 Content-Encoding: gzip
 Vary: Accept-Encoding
 Date: Tue, 30 Mar 2010 16:11:56 GMT
 Connection: close


 So the keepalive is not working in this case.

 It seems to me that when http-server-close option is enabled, haproxy
 replaces

Re: setup with Oracle and SSL

2010-03-13 Thread Craig Carl
Anne -
   You would need an application to handle SSL and forward HTTP. I use
stunnel for that with no problem. This is the guide I used; the basics are
the same on any distro -

http://www.buro9.com/blog/2009/12/07/installing-haproxy-load-balance-http-and-https/

Craig


On Sat, Mar 13, 2010 at 2:27 PM, Anne Moore diabeticith...@yahoo.comwrote:

  Very interesting. Thank you for the reply. It's very disappointing that
 haproxy doesn't support SSL.

 However, what if my haproxy was HTTP, and it forwarded requests to my two
 backend HTTPS (SSL) URL servers?

 Would this scenario work fine with haproxy?

 Thank you

 Anne

  --
 *From:* XANi [mailto:xani...@gmail.com]
 *Sent:* Saturday, March 13, 2010 4:25 PM
 *To:* Anne Moore
 *Cc:* haproxy@formilux.org
 *Subject:* Re: setup with Oracle and SSL

 Hi
 On Sat, 2010-03-13 at 13:34 -0500, Anne Moore wrote:

 Greetings to all,

  I'm new to this group, but have really been working hard on getting
 haproxy working for Oracle Application HTTP server over SSL.

  I've looked through the website, but can't seem to find anything that
 shows how to setup SSL on the haproxy. I also can't find anything on how to
 setup haproxy with Oracle Application HTTP server.

  Would someone on this list have that knowledge, and be willing to share?

  Thank you!

  Anne

 That's because haproxy doesn't support SSL in http mode; if you want HTTPS you
 need to set up an SSL proxy, for example Lighttpd.
 So it works like that:
 Lighttpd (https:443) -> Haproxy (http:80) -> your_backend_servers.

 The only thing to watch out for is logging the client IP; basically you have
 to add to the config
 option forwardfor except 127.0.0.1
 where 127.0.0.1 is your SSL proxy address.
 The proxy will then pass the original client IP through the X-Forwarded-For header.

 The "except 127.0.0.1" is there because lighttpd adds X-Forwarded-For when used
 as a proxy, so haproxy doesn't have to (obviously, replace it with another IP if
 your SSL proxy is on a different host).

 Regards
 XANi

   --
 Mariusz Gronczewski (XANi) xani...@gmail.com
 GnuPG: 0xEA8ACE64http://devrandom.pl




Re: setup with Oracle and SSL

2010-03-13 Thread Craig Carl
Anne -
You really need to read the documentation at
http://haproxy.1wt.eu/download/1.3/doc/architecture.txt. Check section 3.1
for a stunnel example.

C

On Sat, Mar 13, 2010 at 5:22 PM, Anne Moore diabeticith...@yahoo.comwrote:

  This is wonderful. Thank you.

 Would I have to set up stunnel on a different server, and then forward those
 SSL requests to the haproxy server, and from there, forward those
 requests to the web servers? Or can stunnel be installed and used on the
 same server as haproxy? If I used stunnel and haproxy, would each of my
 web servers' websites also need an SSL certificate installed? (Or is the SSL
 certificate only installed on the stunnel box?)

 Also, a quick question regarding how haproxy works (I'm a newbie, as you can
 tell). Do my users put the haproxy server name in their URL, like so:
 http://haproxyservername.domain.com ? And then that forwards requests to the
 webservers and load balances them?

 Sorry for so many questions! I'm totally new at this.

 Thank you again for taking the time to help.

 Anne

  --
 *From:* Craig Carl [mailto:cr...@gestas.net]
 *Sent:* Saturday, March 13, 2010 5:52 PM
 *To:* Anne Moore
 *Cc:* XANi; haproxy@formilux.org

 *Subject:* Re: setup with Oracle and SSL

 Anne -
    You would need an application to handle SSL and forward HTTP. I use
 stunnel for that with no problem. This is the guide I used; the basics are
 the same on any distro -


 http://www.buro9.com/blog/2009/12/07/installing-haproxy-load-balance-http-and-https/

 Craig


 On Sat, Mar 13, 2010 at 2:27 PM, Anne Moore diabeticith...@yahoo.comwrote:

  Very interesting. Thank you for the reply. It's very disappointing that
 haproxy doesn't support SSL.

 However, what if my haproxy was HTTP, and it forwarded requests to my
 two backend HTTPS (SSL) URL servers?

 Would this scenario work fine with haproxy?

 Thank you

 Anne

  --
 *From:* XANi [mailto:xani...@gmail.com]
 *Sent:* Saturday, March 13, 2010 4:25 PM
 *To:* Anne Moore
 *Cc:* haproxy@formilux.org
 *Subject:* Re: setup with Oracle and SSL

   Hi
 On Sat, 2010-03-13 at 13:34 -0500, Anne Moore wrote:

 Greetings to all,

  I'm new to this group, but have really been working hard on getting
 haproxy working for Oracle Application HTTP server over SSL.

  I've looked through the website, but can't seem to find anything that
 shows how to setup SSL on the haproxy. I also can't find anything on how to
 setup haproxy with Oracle Application HTTP server.

  Would someone on this list have that knowledge, and be willing to share?

  Thank you!

  Anne

 That's because haproxy doesn't support SSL in http mode; if you want HTTPS you
 need to set up an SSL proxy, for example Lighttpd.
 So it works like that:
 Lighttpd (https:443) -> Haproxy (http:80) -> your_backend_servers.

 The only thing to watch out for is logging the client IP; basically you have
 to add to the config
 option forwardfor except 127.0.0.1
 where 127.0.0.1 is your SSL proxy address.
 The proxy will then pass the original client IP through the X-Forwarded-For
 header.

 The "except 127.0.0.1" is there because lighttpd adds X-Forwarded-For when used
 as a proxy, so haproxy doesn't have to (obviously, replace it with another IP if
 your SSL proxy is on a different host).

 Regards
 XANi

   --
 Mariusz Gronczewski (XANi) xani...@gmail.com
 GnuPG: 0xEA8ACE64http://devrandom.pl





Re: Broken link HAproxy1.3 in Solaris

2010-01-29 Thread Craig Carl

Gustavo -   
	I'm getting different errors on the .gz file depending on the browser 
I'm using, so there is certainly something wrong. (wget = 404, FF= 
content encoding error, IE 7 = md5 doesn't match)


The non-gzip version seems to be fine -

http://haproxy.1wt.eu/download/1.3/bin/haproxy-1.3.18-pcre-solaris-sparc.notstripped

Be sure to check the hash -

http://haproxy.1wt.eu/download/1.3/bin/haproxy-1.3.18-pcre-solaris-sparc.notstripped.md5

--
Craig Carl
408 829 9953

Gustavo JIménez wrote:

Hi

I need to get HAProxy 1.3 working on Solaris 10, but when I try to download
haproxy-pcre-1.3.23-solaris-sparc.notstripped.gz (MD5) Solaris8/Sparc
executable, the link is broken. Can you help me?






Re: Broken link HAproxy1.3 in Solaris

2010-01-29 Thread Craig Carl

Cyril -
   The file - haproxy-1.3.18-pcre-solaris-sparc.notstripped.gz - does 
exist, it just appears to be corrupted somehow.


Craig


Cyril Bonté wrote:

On Friday, 29 January 2010 at 18:26:20, Craig Carl wrote:

Gustavo -   
	I'm getting different errors on the .gz file depending on the browser 
I'm using, so there is certainly something wrong. (wget = 404, FF= 
content encoding error, IE 7 = md5 doesn't match)


The non-gzip version seems to be fine -

http://haproxy.1wt.eu/download/1.3/bin/haproxy-1.3.18-pcre-solaris-sparc.notstripped

Be sure to check the hash -

http://haproxy.1wt.eu/download/1.3/bin/haproxy-1.3.18-pcre-solaris-sparc.notstripped.md5


I think that what Gustavo wanted to say is that the Download section of the 
haproxy home page ( http://haproxy.1wt.eu/#down )  points to files that don't exist in 
the target directory :
http://haproxy.1wt.eu/download/1.3/bin/





Re: Broken link HAproxy1.3 in Solaris

2010-01-29 Thread Craig Carl

I totally missed the *.23. Sorry about my confusion.

C

Cyril Bonté wrote:

On Friday, 29 January 2010 at 19:28:15, Craig Carl wrote:

Cyril -
The file - haproxy-1.3.18-pcre-solaris-sparc.notstripped.gz - does 
exist, it just appears to be corrupted somehow.


Yes of course :) This one exists (and is not corrupted for me) but he talked 
about the 1.3.23 version (which appears in the download section).





Re: Config error -

2010-01-13 Thread Craig Carl

That cleared up the error, thanks Ryan.

FYI I took that config straight out of the documentation - 
http://haproxy.1wt.eu/download/1.3/doc/architecture.txt



3.1. Alternate solution using Stunnel
===

Is the documentation incorrect?

Craig


Ryan Schlesinger wrote:

Craig,

It looks like you're trying to specify the listen section's bind as its 
name.  I think this is what you want:


listen www 64.164.194.16:80

(Any name will work in place of www).

Ryan

On 01/13/2010 08:09 PM, Craig Carl wrote:

All -
I have a simple config using stunnel. I am trying to load balance 
between 2 Apache servers, and place cookies for session mgmt. Stunnel 
should be handling :443 traffic and forwarding it to haproxy which 
should be accepting :80 traffic from stunnel and :80 from the public.


When I start haproxy I get this error -

 * Restarting haproxy haproxy
[ALERT] 012/195104 (11195) : parsing /etc/haproxy/haproxy.cfg : proxy 
'69.164.194.163:80' has no listen address. Please either specify a 
valid address on the listen line, or use the bind keyword.

[ALERT] 012/195104 (11195) : Errors found in configuration file,
[ALERT] 012/195104 (11195) : Error reading configuration file : 
/etc/haproxy/haproxy.cfg

...fail!
###

Stunnel.conf is -

cert = /etc/ssl/certs/mycert.crt
key = /etc/ssl/certs/mykey.info.key
;setuid = nobody
;setgid = nogroup

pid = /etc/stunnel/stunnel.pid
debug = 3
output = /etc/stunnel/stunnel.log

socket=l:TCP_NODELAY=1
socket=r:TCP_NODELAY=1

[https]
accept=64.164.194.16:443
connect=64.164.194.16:80
TIMEOUTclose=0
xforwardedfor=yes
###

haproxy.cfg is -
###
maxconn 1 # Total Max Connections.
ulimit-n    65536
log 127.0.0.1   local0
log 127.0.0.1   local1 notice
daemon
nbproc  4 # Number of processes
user    haproxy
group   haproxy
daemon
defaults
log global
option  httplog
mode    http
clitimeout  6
srvtimeout  3
contimeout  4000
retries 3
option  redispatch
option  httpclose

listen 64.164.194.16:80
   mode http
   balance roundrobin
   option forwardfor except 64.164.194.16
   cookie SERVERID insert indirect nocache
   option httpchk HEAD /index.html HTTP/1.0
   server fe1-dal 192.168.146.169:80 cookie A check
   server fe2-dal 192.168.146.17:80 cookie B check
###

Thanks for your help.

Craig








Does anyone have an init.d script for Debian?

2010-01-10 Thread Craig Carl

All -
	I installed from source so I could set USE_PCRE=1.  It doesn't look 
like the make process included a startup script for /etc/init.d/. The 
two included files (init.haproxy, haproxy.init) both look to be for RedHat.


Does anyone know where I can find a /etc/init.d/haproxy script for 
Debian?

Thanks,

C




Re: Session stickiness over HTTP and HTTPS

2009-12-07 Thread Craig
 Is this a common use case?
Yes.

 I see that section 3.1 in the configuration guide discusses using
 stunnel for this, but it's not clear whether haproxy will choose the
 sticky server based on stunnel's X-Forwarded-For header or it will
 choose the destination by the stunnel machine's address?
You can balance on X-Forwarded-For or source IP (you want X-Forwarded-For).
You could also inject cookies to achieve stickiness. Just read the
documentation. ;)
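
A cookie-based stickiness setup, which works identically behind both a plain HTTP frontend and a stunnel HTTPS frontend, might look like this (a sketch; server names and addresses are placeholders):

```
backend app
    mode http
    # haproxy inserts a SERVERID cookie so the same browser keeps
    # returning to the same server, regardless of its source IP
    cookie SERVERID insert indirect nocache
    server s1 192.168.0.11:80 cookie s1 check
    server s2 192.168.0.12:80 cookie s2 check
```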

Best regards,

Craig