On 5/12/2012 9:19 a.m., Alex Rousskov wrote:
On 12/01/2012 06:02 PM, Alex Rousskov wrote:
To summarize, our choices for a pinned connection created for the
current client are:

Before sending the request:

1. Reopen and repin used pconns to send unretriable requests,
    just like non-pinned connection code does. This eliminates
    HTTP pconn race conditions for unretriable requests. This is
    what the proposed patch does for SslBump.

2. Send all requests without reopening the pinned connection.
    If we encounter an HTTP pconn race condition, see below.
    (Both choices are sketched in code after this list.)
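
Purely as illustration, here is a minimal C++ sketch of the two
choices above; Connection, Request, and their members are invented
stand-ins for this sketch, not Squid's actual classes:

  #include <iostream>

  struct Request {
      bool retriable = false; // e.g. a non-idempotent POST is not retriable
  };

  struct Connection {
      bool usedBefore = true; // a reused pconn is exposed to the race
      bool reopenAndRepin() {
          // choice 1: reconnect to the same destination and pin again
          std::cout << "reopening and repinning\n";
          usedBefore = false;
          return true;
      }
  };

  // Choice 1 repins used pconns before unretriable requests (what the
  // proposed patch does for SslBump); choice 2 always sends as-is and
  // leaves any race to be handled after the request.
  bool prepareToSend(Connection &pinned, const Request &req, bool repinUnretriable)
  {
      if (repinUnretriable && pinned.usedBefore && !req.retriable)
          return pinned.reopenAndRepin();
      return true;
  }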


After the request, if an HTTP pconn race happens:

0. Send an error message to the client.
    This is what we do today and it breaks things.

1. Reset the client connection.
    Pretend to be a dumb tunnel.

2. Retry, just like the current non-pinned connection code does.
    This is what the proposed patch implements.
    (All three reactions are sketched in code after this list.)
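
Again just as illustration, a hedged C++ sketch of the three possible
reactions; the type and handler names are invented for this sketch,
not Squid's:

  #include <cstdio>

  enum class RaceReaction { SendError /*0*/, ResetClient /*1*/, Retry /*2*/ };

  struct ClientConn { void tcpReset() { std::puts("RST sent to client"); } };
  struct ServerReq  { void resend()   { std::puts("request resent"); } };

  void onPconnRace(RaceReaction how, ClientConn &client, ServerReq &req)
  {
      switch (how) {
      case RaceReaction::SendError:
          // 0: today's behaviour; the client gets an error it cannot use
          std::puts("error page sent to client");
          break;
      case RaceReaction::ResetClient:
          // 1: pretend to be a dumb tunnel; let the client retry itself
          client.tcpReset();
          break;
      case RaceReaction::Retry:
          // 2: what the non-pinned connection code already does
          req.resend();
          break;
      }
  }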


Current code does 2+0 (send without reopening, then an error message
on a race). That does not work.

The proposed patch does 1+2 (repin before unretriable requests, then
retry on a race). That works in my limited tests.

Henrik suggested 2+1 (send without reopening, then reset the client
on a race). I agree that it is also an option worth considering.
I am leaning towards implementing 2+1 _and_ removing the current
connection repinning code as not needed.
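
To make the combination concrete, a minimal sketch of the 2+1 flow,
assuming invented helper names and a stubbed-out race; this is not
the patch itself:

  #include <cstdio>

  struct PinnedConn {
      // false means a pconn race: zero reply bytes after the request
      bool sendAndWait() { return false; /* stub: pretend the race hit */ }
  };

  struct ClientConn {
      void tcpReset() { std::puts("RST: acting as a dumb tunnel"); }
  };

  void forwardOverPinned(PinnedConn &server, ClientConn &client)
  {
      // "2": send without reopening the pinned connection first
      if (server.sendAndWait())
          return; // a reply arrived; nothing special to do
      // "1": on a race, reset the client connection so the client
      // retries on its own, as it would if a dumb tunnel's server
      // went away
      client.tcpReset();
  }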

We could keep the repinning code in case it is needed later to fix
broken clients, but no such clients are known at this time. That code
is complex and requires fixes (the patch posted here earlier), so I am
leaning towards removing it completely for now. With some effort it
can be re-introduced at a later time, of course.

If there are any better ideas, please let me know.

Re-pinning is somewhat useful for the cases of:
* Kerberos/Bearer auth, where the credentials on the request are
sufficient to re-auth the new connection identically to the original
one.
* NAT (not TPROXY) intercepted connections to the same destination
IP:port, where verification can lock it down to that destination, but
not completely.

In both cases this holds provided the destination IP:port is the same
as the original one. Were it not for the required client IP:port
combo already being in use, this would also be true for TPROXY. And
were it not for the handshake hassle, this would also be true for
NTLM.

The cases with problems are:
* NTLM, due to the handshake complexity which nobody has implemented
yet (otherwise this would be in the useful category too, with
Kerberos).
* TPROXY, where the src+dst tuple can collide with the old
connection's.
* CONNECT, where we don't know what state the data channel contains.
Bumping should fit into this too if we are to emulate the CONNECT
properly, even though we do know the state.

Both lists are summarized in the sketch below.
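
Purely as illustration, a hedged sketch of both lists as one check;
Endpoint, AuthScheme, and canRepin are invented names for this
sketch, not an actual Squid API:

  #include <string>

  struct Endpoint {
      std::string ip;
      unsigned short port = 0;
      bool operator==(const Endpoint &other) const {
          return ip == other.ip && port == other.port;
      }
  };

  enum class AuthScheme { None, Kerberos, Bearer, Ntlm };

  bool canRepin(const Endpoint &orig, const Endpoint &next,
                AuthScheme auth, bool natIntercepted)
  {
      if (!(orig == next))
          return false; // every useful case needs the same destination IP:port
      if (auth == AuthScheme::Kerberos || auth == AuthScheme::Bearer)
          return true;  // the request itself carries enough credentials
      if (natIntercepted)
          return true;  // NAT (not TPROXY) intercept to a verified destination
      // NTLM handshakes, TPROXY src+dst collisions, and opaque CONNECT
      // data-channel state all fall under the problem cases above.
      return false;
  }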


Amos
