Re: option httpchk is reporting servers as down when they're not

2009-03-03 Thread Jeffrey 'jf' Lim
On Wed, Mar 4, 2009 at 4:05 AM, Allen, Thomas tal...@asce.org wrote:
 Hi,

 I like the idea of having HAProxy check server health, but for some reason,
 it reports all of my servers as down. Here's my full config:

 listen http_proxy :80
     mode http
     balance roundrobin
     option httpchk
     server webA {IP} cookie A check
     server webB {IP} cookie B check

 I tried option httpchk /index.php just to be sure, and got the same
 result. If I remove the httpchk option, HAProxy has no problem proxying
 these servers. What am I doing wrong?


what's listed under Status for these servers when viewing your
haproxy status page?

-jf

--
In the meantime, here is your PSA:
It's so hard to write a graphics driver that open-sourcing it would not help.
-- Andrew Fear, Software Product Manager, NVIDIA Corporation
http://kerneltrap.org/node/7228



Re: option httpchk is reporting servers as down when they're not

2009-03-04 Thread Jeffrey 'jf' Lim
Well, looks like your servers are actually down then. Do a curl from
your haproxy machine to both servers. What do you get?

-jf



On Wed, Mar 4, 2009 at 9:40 PM, Allen, Thomas tal...@asce.org wrote:
 Never mind, I got it going. My stats page simply says that both servers are 
 down. What else should I be looking for?

 Thomas Allen
 Web Developer, ASCE
 703.295.6355

 -Original Message-
 From: Jeffrey 'jf' Lim [mailto:jfs.wo...@gmail.com]
 Sent: Wednesday, March 04, 2009 2:22 AM
 To: Allen, Thomas
 Cc: haproxy@formilux.org
 Subject: Re: option httpchk is reporting servers as down when they're not

 On Wed, Mar 4, 2009 at 4:05 AM, Allen, Thomas tal...@asce.org wrote:
 Hi,

 I like the idea of having HAProxy check server health, but for some reason,
 it reports all of my servers as down. Here's my full config:

 listen http_proxy :80
     mode http
     balance roundrobin
     option httpchk
     server webA {IP} cookie A check
     server webB {IP} cookie B check

 I tried option httpchk /index.php just to be sure, and got the same
 result. If I remove the httpchk option, HAProxy has no problem proxying
 these servers. What am I doing wrong?


 what's listed under Status for these servers when viewing your
 haproxy status page?

 -jf





Re: haproxy 1.3.16 getting really really closer

2009-03-07 Thread Jeffrey 'jf' Lim
Woohoo!! :) thanks, Willy, for the work. Seems like a really great
list of stuff there.

Especially love the HTTP invalid request and response captures per
frontend/backend feature - I would definitely love to be able to see
what we're getting over here where we use haproxy

One question, if you don't mind - session rate limiting on frontends:
what's the use case for this?

-jf



On Sat, Mar 7, 2009 at 7:11 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi all !

 About 3 months ago I told you that 1.3.16 was getting closer. Now I
 really think it's getting even closer. Since then, we have fixed all
 remaining visible bugs, which does not mean that all bugs went away of
 course. Also several new useful features have been implemented due to
 concrete opportunities :

  - doc is finished. All keywords, log options etc... have been migrated
    to the new doc. The old one is still there just in case, but should
    not be needed anymore.

  - autonomous forwarding layer between sockets without waking the
    task up : when data must be forwarded from one socket to another
    one, we don't wake the task up anymore if not needed. This saves
    many CPU cycles on large objects and has improved the maximum data
    rate by 10-15% depending on the workload. A further improvement
    will consist in allocating buffers on the fly from a pool just
    during the transfer, and releasing empty buffers when not in use
    in order to reduce memory requirements.

  - TCP splicing on very recent linux 2.6 (2.6.27.19, 2.6.28.6, or
    2.6.29). TCP splicing enables zero-copy forward between two network
    interfaces. With most network interfaces it does not seem to change
    anything, however with some other NICs (at least Myricom's 10GE), we
    observe huge savings. I could even reach 10 Gbps of layer 7 data
    forwarding with only 25% CPU usage! Please note that this must not
    be used with any kernel earlier than versions above since all
    previous tcp-splice implementations are buggy and will randomly and
    silently corrupt your data.

  - unix stats socket is working again.

  - complete session table dumps on the unix socket. It reports
    pointers, states, protocol, timeouts, etc... It's primarily meant
    for development, but will help understand why sometimes a process
    refuses to die when some sessions remain present.

  - HTTP invalid request and response captures per frontend/backend :
    those who are fed up with tracking 502 coming from buggy servers
    will love this one. Each invalid request or response (non HTTP
    compliant) is copied into a reserved space in the frontend (request)
    or backend (response) with info about the client's IP, the server,
    the exact date, etc... so that the admin can later consult those
    errors by simply sending a show errors request on the unix stats
    socket. The exact position of the invalid character is indicated,
    and that eases the troubleshooting a lot ! It's also useful to keep
    complete captures of attacks, when the attacker sends invalid
    requests :-)

  - the layering work has been continued for a long time and a massive
    cleanup has been performed (another one is still needed though)

  - the internal I/O and scheduler subsystems are progressively getting
    more mature, making it easier to integrate new features.

  - add support for options clear-cookie, set-cookie and drop-query
    to the redirect keyword.

  - ability to bind to a specific interface for listeners as well as for
    source of outgoing connections. This will help on complex setups where
    several interfaces are attached to the same LAN.

  - ability to bind some instances to some processes in multi-process
    mode.  Some people want to run frontend X on process 1 and frontend
    Y on process 2. This is now possible. A future improvement will
    consist in defining what CPU/core each process can run on (OS
    dependent).

  - session rate measurement per frontend, backend and server. This is
    now reported in the stats in a rate column, in number of sessions
    per second. Right now this is only on the last second, but at least
    the algorithm gives an accurate measurement with very low CPU usage.
    This value may also be checked in ACLs in order to write conditions
    based on performance.

  - session rate limiting on frontends : using rate-limit sessions XXX
    it is now possible to limit the rate at which a frontend will accept
    connections. This is very accurate too. Right now it's only limited
    to sessions per second, but it will evolve to different periods. I've
    successfully tried values as low as 1 and as high as 55000 sessions/s,
    all of them gave me the expected performance within +/- 0.1% due to
    the measuring 
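The rate-limit feature described above can be sketched roughly like this (frontend name and threshold are invented for illustration):

```
frontend www
    bind :80
    # accept at most 100 new sessions per second; excess
    # connections simply wait in the kernel's accept queue
    rate-limit sessions 100
    default_backend servers
```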

Re: option httpchk is reporting servers as down when they're not

2009-03-07 Thread Jeffrey 'jf' Lim
On Sat, Mar 7, 2009 at 2:38 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi Thomas,

 On Thu, Mar 05, 2009 at 08:45:20AM -0500, Allen, Thomas wrote:
 Hi Jeff,

 The thing is that if I don't include the health check, the load balancer 
 works fine and each server receives equal distribution. I have no idea why 
 the servers would be reported as down but still work when unchecked.

 It is possible that your servers expect the Host: header to
 be set during the checks. There's a trick to do it right now
 (don't forget to escape spaces) :

        option httpchk GET /index.php HTTP/1.0\r\nHost:\ www.mydomain.com


You know Thomas, Willy may be very right here. And I just realized as
well - you say you're using 'option httpchk /index.php'? - without
specifying the 'GET' verb?

-jf


 Also, you should check the server's logs to see why it is reporting
 the service as down. And as a last resort, a tcpdump of the traffic
 between haproxy and a failed server will show you both the request
 and the complete error from the server.

 Regards,
 Willy
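Putting both suggestions together - the GET verb and the Host header - a corrected version of the original config might look like this (www.example.com stands in for the real domain; the {IP} placeholders are kept from the original post):

```
listen http_proxy :80
    mode http
    balance roundrobin
    # include the verb, the HTTP version and a Host header, since
    # name-based virtual hosts often fail health checks without one
    option httpchk GET /index.php HTTP/1.0\r\nHost:\ www.example.com
    server webA {IP} cookie A check
    server webB {IP} cookie B check
```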





Re: balance source based on a X-Forwarded-For

2009-03-29 Thread Jeffrey 'jf' Lim
On Wed, Mar 25, 2009 at 8:02 PM, Benoit maver...@maverick.eu.org wrote:


 diff -ru haproxy-1.3.15.7/doc/configuration.txt 
 haproxy-1.3.15.7-cur/doc/configuration.txt
 --- haproxy-1.3.15.7/doc/configuration.txt      2008-12-04 11:29:13.0 
 +0100
 +++ haproxy-1.3.15.7-cur/doc/configuration.txt  2009-02-24 16:17:19.0 
 +0100
 @@ -788,6 +788,19 @@

                 balance url_param param [check_post [max_wait]]

 +      header      The Http Header specified in argument will be looked up in
 +                  each HTTP request.
 +
 +                  With the Host header name, an optionnal use_domain_only
 +                  parameter is available, for reducing the hash algorithm to
 +                  the main domain part, eg for haproxy.1wt.eu, only 1wt
 +                  will be taken into consideration.
 +

I'm not so sure how balancing based on a hash of the Host header would
be useful. How would this be useful? I would see an application for
balancing on perhaps other headers (like xff as mentioned), but for
Host... I dunno... (so basically what I'm saying is, is the code for
the 'use_domain_only' bit useful? can it be left out?)
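For what it's worth, the behaviour the patch describes amounts to something like the following (illustrative Python only; haproxy's real hash function differs):

```python
import hashlib

def domain_key(host, use_domain_only=False):
    """Reduce a Host header value to a balancing key.

    With use_domain_only, keep only the main domain label, e.g.
    'haproxy.1wt.eu' -> '1wt', as the quoted patch doc describes.
    """
    host = host.split(":")[0].lower()      # drop any :port suffix
    if use_domain_only:
        labels = host.split(".")
        if len(labels) >= 2:
            host = labels[-2]              # main domain part only
    return host

def pick_server(host, n_servers, use_domain_only=False):
    # Map the key to a server index via a stable hash
    # (md5 here purely for illustration).
    digest = hashlib.md5(domain_key(host, use_domain_only).encode()).hexdigest()
    return int(digest, 16) % n_servers
```

With use_domain_only, all subdomains of one domain hash to the same backend, which is the point being questioned above.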

-jf




Re: balance source based on a X-Forwarded-For

2009-03-29 Thread Jeffrey 'jf' Lim
On Mon, Mar 30, 2009 at 3:48 AM, Willy Tarreau w...@1wt.eu wrote:
 On Sun, Mar 29, 2009 at 12:31:27PM -0700, John L. Singleton wrote:
 I'm a little mystified as to the usefulness of this as well. I mean,
 what does hashing the domain name solve that just balancing back to a
 bunch of Apache instances with virtual hosting turned on doesn't? Are
 you saying that you have domains like en.example.com, fr.example.com
 and you want them all to be sticky to the same backend server when
 they balance? If that's the case, I could see that being useful if the
 site in question were doing some sort of expensive per-user asset
 generation that was being cached on the server. Is this what you are
 talking about?

 There are proxies which can do prefetching, and in this case, it's
 desirable that all requests for a same domain name pass through the
 same cache.


so are you saying haproxy - cache - backend? (in which case, you
would be talking more about an ISP, i think? or does anybody here not
running an ISP actually do this (I would be interested to know))

-jf




Re: [RFC] development model for future haproxy versions

2009-03-30 Thread Jeffrey 'jf' Lim
On Tue, Mar 31, 2009 at 5:06 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi all!

 Now that the storm of horror stories has gone with release of 1.3.17,
 I'd like to explain what I'm planning to do for future versions of
 haproxy.

 Right now there are a few issues with the development process and
 the version numbering in general :

 snip

 4) encourage people to work on a next feature set with their own tree.

 Since haproxy has migrated to use GIT for version control, it has really
 changed my life, and made it a lot more convenient for some contributors
 to maintain their own patchsets.


Yo, I hadn't noticed! What's the clone URL, though? All links I've
tried only get me to a gitweb interface.

-jf



Re: Persistence based on a server id url param

2009-06-02 Thread Jeffrey 'jf' Lim
On Tue, Jun 2, 2009 at 3:32 PM, Willy Tarreau w...@1wt.eu wrote:

 Hi Ryan,

 On Mon, Jun 01, 2009 at 12:22:57PM -0700, Ryan Schlesinger wrote:
  I've got haproxy set up (with 2 frontends) to load balance a php app
  which works great.  However, we're using a java uploader applet that
  doesn't appear to handle cookies.  It would be simple for me to have the
  uploader use a URL with the server id in it (just like we're already
  doing with the session id) but I don't see any way to get haproxy to
  treat that parameter as the actual server id.  Using hashing is not an
  option as changing the number of running application servers is a normal
  occurrence for us.  I also can't use the appsession directive as the
  haproxy session id cache isn't shared between the two frontends (both
  running an instance of haproxy).  Can this be done with ACLs and I'm
  missing it?



I actually made a patch for a client the last time that does this exact
thing. I'll see if this client is ok with sharing the code - or opensourcing
it. Willy's approach is also an interesting way of doing it - you control
the decision of what to do if the backend is down using the acl
'srv(1|2)_up'

-jf



 You could very well use ACLs to match your URL parameter in the
 frontend and switch to either backend 1 or backend 2 depending
 on the value.

 Alternatively, you could hash the URL parameter (balance url_param)
 but it would not necessarily be easy for your application to generate
 an URL param which will hash back to the same server. So I think that
 the ACL method is the most appropriate for your case.

 Basically you'd do that :

 frontend
acl srv1 url_sub SERVERID=1
acl srv2 url_sub SERVERID=2
acl srv1_up nbsrv(bck1) gt 0
acl srv2_up nbsrv(bck2) gt 0
use_backend bck1 if srv1_up srv1
use_backend bck2 if srv2_up srv2
default_backend bck_lb

 backend bck_lb
# Perform load-balancing. Servers state is tracked
# from other backends.
balance roundrobin
server srv1 1.1.1.1 track bck1/srv1
server srv2 1.1.1.2 track bck2/srv2
...

 backend bck1
balance roundrobin
server srv1 1.1.1.1 check

 backend bck2
balance roundrobin
server srv2 1.1.1.2 check

 That's just a guideline, but I think you should manage to get
 it working based on that.

 Regards,
 Willy





Re: Backend sends 204, haproxy sends 502

2009-10-28 Thread Jeffrey 'jf' Lim
On Wed, Oct 28, 2009 at 9:02 PM, Dirk Taggesell 
dirk.tagges...@googlemail.com wrote:

 Hi all,

 I want to load balance a new server application that generally sends
 http code 204 - to save bandwidth and to avoid client-side caching.
 In fact it only exchanges cookie data, thus no real content is delivered
 anyway.

 When requests are made via haproxy, the backend - as intended - delivers
 a code 204 but haproxy instead turns it into a code 502. Unfortunately I
 cannot use tcp mode because the server app needs the client's IP
 address. Is there something else I can do?


What version of haproxy is this? Do 200 responses from the same backend,
passed through haproxy, work? I can't say that I've looked too closely at
the code for this, but I get the impression that haproxy generally returns
502 for stuff that it cannot recognize.

And one other thing to look at - what is the log line like for this
particular request?

-jf



Re: Matching URLs at layer 7

2010-04-28 Thread Jeffrey 'jf' Lim
On Wed, Apr 28, 2010 at 7:51 PM, Andrew Commons
andrew.comm...@bigpond.com wrote:
 Hi Beni,

 A few things to digest here.

 What was leading me up this path was a bit of elementary (and probably naïve) 
 white-listing with respect to the contents of the Host header and the URI/URL
 supplied by the user. Tools like Fiddler make request manipulation trivial so
 filtering out 'obvious' manipulation attempts would be a good idea. With this 
 in mind my thinking (if it can be considered as such) was that:

 (1) user request is for http://www.example.com/whatever
 (2) Host header is www.example.com
 (3) All is good! Pass request on to server.

 Alternatively:

 (1) user request is for http://www.example.com/whatever
 (2) Host header is www.whatever.com
 (3) All is NOT good! Flick request somewhere harmless.


Benedikt has explained this already (see his first reply). There is no
such thing: what you see as the user request is really sent as the Host
header plus the URI.

Also, to answer another question you raised - the HTTP specification
states that header names are case-insensitive. I don't know about
haproxy's treatment, though (I'm too lazy to delve into the code right
now - and really, you can test it yourself to find out).
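For what it's worth, that kind of Host whitelisting can be sketched in haproxy with ACLs along these lines (1.3/1.4-era syntax; www.example.com is a placeholder):

```
frontend www
    bind :80
    # accept only requests whose Host header matches the whitelist;
    # -i makes the match case-insensitive
    acl host_ok hdr(host) -i www.example.com
    block if !host_ok
    default_backend servers
```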

-jf


--
Every nonfree program has a lord, a master --
and if you use the program, he is your master.
--Richard Stallman

It's so hard to write a graphics driver that open-sourcing it would not help.
-- Andrew Fear, Software Product Manager, NVIDIA Corporation
http://kerneltrap.org/node/7228



Re: Performance on an Atom D510 Dual Core 1.66GHz

2010-07-27 Thread Jeffrey 'jf' Lim
On Tue, Jul 27, 2010 at 1:27 PM, Willy Tarreau w...@1wt.eu wrote:

 OK so here are a few results of haproxy 1.4.8 running on Atom D510 (64-bit)
 without keep-alive :

 6400 hits/s on 0-byte objects
 6200 hits/s on 1kB objects (86 Mbps)
 5700 hits/s on 2kB objects (130 Mbps)
 5250 hits/s on 4kB objects (208 Mbps)
 3300 hits/s on 8kB objects (250 Mbps)
 2000 hits/s on 16kB objects (300 Mbps)
 1300 hits/s on 32kB objects (365 Mbps)
 800 hits/s on 64kB objects (450 Mbps)
 480 hits/s on 128kB objects (535 Mbps)
 250 hits/s on 256kB objects (575 Mbps)
 135 hits/s on 512kB objects (610 Mbps)


 This requires binding the NIC's interrupt on one core and binding haproxy
 to the other core. That way, it leaves about 20% total idle on the NIC's
 core. Otherwise, the system tends to put haproxy on the same core as the
 NIC and the results are approximately half of that.

 Quick tests with keep-alive enabled report 7400 hits/s instead of 6400
 for the empty file test, and 600 instead of 5250 for the 4kB file, thus
 minor savings.


hi Willy, are you talking about 6000 (6000 instead of 5250)? or 600?

-jf





Re: clarification of CD termination code

2010-08-04 Thread Jeffrey 'jf' Lim
On Thu, Aug 5, 2010 at 7:29 AM, Bryan Talbot btal...@aeriagames.com wrote:
 In the tcpdump listed below, doesn't the next-to-the-last RST also include an
 ACK of the data previously sent?  If that is the case, then the client has
 received all of the data and ACK'd it but then rudely closed the TCP
 connection without the normal FIN exchange.  Is my reading correct?

 19:03:33.106842 IP 10.79.25.20.4266  10.79.6.10.80: S
 2041799057:2041799057(0) win 65535 mss 1460,nop,nop,sackOK
 19:03:33.106862 IP 10.79.6.10.80  10.79.25.20.4266: S
 266508528:266508528(0) ack 2041799058 win 5840 mss 1460,nop,nop,sackOK
 19:03:33.106945 IP 10.79.25.20.4266  10.79.6.10.80: . ack 1 win 65535
 19:03:33.107045 IP 10.79.25.20.4266  10.79.6.10.80: P 1:269(268) ack 1 win
 65535
 19:03:33.107060 IP 10.79.6.10.80  10.79.25.20.4266: . ack 269 win 6432
 19:03:33.134401 IP 10.79.6.10.80  10.79.25.20.4266: P 1:270(269) ack 269
 win 6432
 19:03:33.134442 IP 10.79.6.10.80  10.79.25.20.4266: F 270:270(0) ack 269
 win 6432
 19:03:33.134548 IP 10.79.25.20.4266  10.79.6.10.80: R 269:269(0) ack 270
 win 0
 19:03:33.134562 IP 10.79.25.20.4266  10.79.6.10.80: R
 2041799326:2041799326(0) win 0



Yes - I've encountered this myself, and after looking into the
traffic, observed the very same thing from Windows clients...
Definitely frustrating behaviour in terms of causing all these alerts
in the logs...

-jf





Re: Multiple Load Balancers, stick table and url-embedded session support

2010-12-09 Thread Jeffrey 'jf' Lim
On Thu, Dec 9, 2010 at 7:27 PM, Hank A. Paulson 
h...@spamproof.nospammail.net wrote:

 Please see the thread:
 need help figuring out a sticking method

 I asked about this, Willie says there are issues figuring out a workable
 config syntax for 'regex to pull the URL/URI substring' but (I think) that
 coding the functionality is not technically super-difficult just not enough
 hands maybe and the config syntax?


Actually, if the key is to be taken from a query param, that is relatively
easy (I coded something myself for a client some time back, based on
1.3.15.4). If, however, more flexibility is required (as in your case),
then the point that Willy has mentioned will definitely come into play.

-jf


 I have a feeling this would be a fairly commonly used feature, so it is good
 to see others asking the same question :)

 How are you planning to distribute the traffic to the different haproxy
 instances? LVS? Some hardware?


 On 12/8/10 8:58 PM, David wrote:

 Hi there,

 I have been asked to design an architecture for our load-balancing needs,
 and
 it looks like haproxy can do almost everything needed in a fairly
 straightfoward way. Two of the requirements are stickiness support (always
 send a request for a given session to the same backend) as well as
 multiple
 load balancers running at the same time to avoid single point of failure
 (hotbackup with only one haproxy running at a time is not considered
 acceptable).

 Using multiple HAproxy instances in parallel with stickiness support looks
 relatively easy if cookies are allowed (through e.g. cookie prefixing)
 since
 no information needs to be shared. Unfortunately, we also need to support
 session id embedded in URL (e.g. http://example.com/foo?sess=someid), and
 I
 was hoping that the new sticky table replication in 1.5 could help for
 that,
 but I am not sure it is the case.

 As far as I understand, I need to first define a table with string type,
 and
 then use the store-request to store the necessary information. I cannot
 see a
 way to get some information embedded in the URL using the existing query
 extraction methods. Am I missing something, or is it difficult to do this
 with
 haproxy ?

 regards,

 David





Re: Haproxy F5 usage question

2013-01-09 Thread Jeffrey 'jf' Lim
On Thu, Jan 10, 2013 at 2:05 AM, DeMarco, Alex alex.dema...@suny.eduwrote:

  I have a situation where a backend server defined in HAProxy may be a
 vip on our F5.The F5 vip is setup for source persistence.  Right now
 all the requests to this vip from the haproxy  box are all going to one
 pool member.  Obviously the f5 is seeing the ip of the server and not the
 true client.  I do have haproxy sending out the X-Forwarded-For. But the f5
 does not see it.



So let me get this right. You've got a BIGIP sitting behind a HAProxy
instance? Why are things configured this way?

-jf




 Anyone have an example of how  scenario like this would work?   Do I need
 to modify haproxy or is this an f5 issue?


 Thank you again  in advance..


  http://www.suny.edu/

  Alex DeMarco
  Manager of Technical Services
  The State University of New York
  State University Plaza - Albany, New York 12246
  Tel: 518.320.1398  Fax: 518.320.1550
  Be a part of Generation SUNY:
  Facebook: http://www.facebook.com/generationsuny
  Twitter: http://www.twitter.com/generationsuny
  YouTube: http://www.youtube.com/generationsuny


Re: Inkonsistent forward-for

2014-05-21 Thread Jeffrey 'jf' Lim
On Wed, May 21, 2014 at 2:29 PM, Jürgen Haas juer...@paragon-es.de wrote:
 Hi there,

 I'm having some issues with the forward-for feature. It seems to be
 working in general but for some reason not consistently. My default
 section in the config file looks like this:

 defaults
   log global
   mode http
   option httplog
   option dontlognull
   option forwardfor
   retries  3
   maxconn 1000
   timeout connect 5000ms
   timeout client 120s
   timeout server 120s
   default_backend backend_ts1

 The apache config files on all web servers are configured so that they
 use the X-Forwarded-For header field if available:

 LogFormat %{X-Forwarded-For}i %l %u %t \%r\ %s %b \%{Referer}i\
 \%{User-Agent}i\ proxy
 SetEnvIf X-Forwarded-For ^.*\..*\..*\..* forwarded
 CustomLog ${APACHE_LOG_DIR}/access.log combined env=!forwarded
 CustomLog ${APACHE_LOG_DIR}/access.log proxy env=forwarded

 However, a lot of requests still get logged with the IP address of the
 proxy instead of the original client.

 We are using HA-Proxy version 1.5-dev19 2013/06/17 and I wonder if
 anyone had an idea what the reason for that could be.



It's been some time since I last looked at the code, but I reckon it
would be the same issue I came across some time back. Do a dump on the
traffic to be sure. The RFC allows headers with multiple values to be
represented either as repeated headers, each carrying one value, or as
a single header with all of the values separated by commas. In either
case, your backend has to be smart enough to deal with both formats.
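A backend that handles both encodings would do something like the following (illustrative Python; headers modeled as a list of (name, value) tuples):

```python
def forwarded_chain(headers):
    """Collect X-Forwarded-For addresses however the sender encoded them.

    Multi-valued headers may arrive either as repeated header lines,
    one value each, or as a single line with comma-separated values;
    both forms must yield the same address list.
    """
    addrs = []
    for name, value in headers:
        if name.lower() == "x-forwarded-for":
            # split handles the comma-separated form; a plain
            # single-value header falls through unchanged
            addrs.extend(part.strip() for part in value.split(","))
    return addrs
```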

-jf

--
He who settles on the idea of the intelligent man as a static entity
only shows himself to be a fool.

Mensan / Full-Stack Technical Polymath / System Administrator
12 years over the entire web stack: Performance, Sysadmin, Ruby and Frontend



Re: Inkonsistent forward-for

2014-05-21 Thread Jeffrey 'jf' Lim
On Wed, May 21, 2014 at 2:47 PM, Jürgen Haas juer...@paragon-es.de wrote:
 Am 21.05.2014 08:40, schrieb Jeffrey 'jf' Lim:
 On Wed, May 21, 2014 at 2:29 PM, Jürgen Haas juer...@paragon-es.de wrote:
 It's been some time since i last looked at the code; but I reckon it
 would be the same issue I came across some time back. Do a dump on the
 traffic to be sure. The RFC allows for headers with multiple values to
 either be represented as repeated headers, each with one value, or as
 a single header, with all of the values separated by commas. In either
 case, your backend has to be capable / smart enough to be able to deal
 with the 2 formats.

 -jf

 Thanks Jeffrey, you reckon to dump traffic at the backend or on the
 proxy? If the latter, any advise on how this could be done?


At the backend, of course. Look into tcpdump. I think you would do
well to investigate the point that others have made about tunnel mode
as well.

-jf
