Hi Aleks,

On Sun, Jul 27, 2014 at 01:44:01PM +0200, Aleksandar Lazic wrote:
> >Peers are a bit different, as they can be accessed at a very high rate.
> >However I'm pretty sure we'll move that to a shared memory just like
> >the SSL session storage at some point, except that we still have to
> >inform each process about the changes synchronously. I'm also thinking
> >that stick tables could possibly be shared between multiple processes
> >if we allocate entries from a shared memory area.
> 
> This approach can only handle the information on one machine.
> 
> Since haproxy already has
> 
> peer <peername> <ip>:<port>
> 
> I thought of extending this to a more or less open protocol.

Yes, and that's why I wrote that we need to improve support for
peer synchronization.

> >>When haproxy is able to send $Session-Information to redis, memcache,
> >>... then you can make "better" use of the resources already in your
> >>environment.
> >
> >It's far too slow: you have to communicate over the network for this,
> >even if it's over the loopback, meaning you need to poll to access the
> >data and stop/restart any processing which needs it. You really want
> >the information to be accessible from within the process without doing
> >a context switch!
> 
> But isn't this
> 
> peer <peername> <ip>:<port>
> 
> a similar solution?

No, because peers are push-only, so it's asynchronous. Thus every node
has all the information and there's never any need to request anything
from outside. This makes a huge difference.
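
For illustration, a minimal sketch of such a setup (all the names here
are made up) :

    peers mypeers
        peer lb1 192.168.0.1:1024
        peer lb2 192.168.0.2:1024

    backend app
        stick-table type ip size 200k expire 30m peers mypeers
        stick on src
        server s1 10.0.0.1:80
        server s2 10.0.0.2:80

Each node pushes its local stick-table updates to all the other peers
and keeps a complete local copy, so a lookup never has to leave the
process.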

> I would understand this part of the documentation as a similar solution,
> but with haproxy's own protocol.
> Maybe I have misunderstood it.

Not necessarily; I know that the doc on the protocol is missing. It was
planned, but you know how it is when the doc is planned and not written
before the code...

> I'm sure you have thought of something like a generic HTTP "plugin",
> where depending on the request, either the HTTP/1 or the HTTP/2 plugin
> handles it.

Not exactly, because as you might remember from when you implemented
the appsession code, the HTTP code is spread all over the processing
and is not easy to adapt to two different architectures. Thus I'm
thinking about changing everything to have an internal representation
that suits HTTP/2, and converting HTTP/1 to that representation. The
code will then have to be ported to work with this new representation.
That means that HTTP/1 to HTTP/1 proxying will more or less look like
it's converted to HTTP/2 and then back to HTTP/1 again, at least for
header indexing. But I'd rather do it this way than the other way
around, since performance-sensitive sites will definitely go the
HTTP/2 route.
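
Purely to illustrate the direction (a toy sketch, not haproxy code,
and every name in it is made up) : parse HTTP/1 once into an indexed
header table, let all the processing work on that table, and only
serialize back to HTTP/1 (or to HPACK for HTTP/2) on output :

    #include <stdio.h>
    #include <string.h>

    /* Toy sketch: headers are parsed once into an indexed table and
     * the rest of the code works on the table, not the byte stream.
     */
    #define MAX_HDR 32

    struct hdr { const char *name, *value; };
    struct hdr_table { struct hdr h[MAX_HDR]; int count; };

    /* cut "Name: value" lines from an HTTP/1 buffer into the table
     * (the buffer is modified in place).
     */
    static void h1_to_table(char *buf, struct hdr_table *t)
    {
        char *line = strtok(buf, "\r\n");

        t->count = 0;
        while (line && t->count < MAX_HDR) {
            char *colon = strchr(line, ':');

            if (colon) {
                *colon = '\0';
                t->h[t->count].name  = line;
                t->h[t->count].value = colon + 1 + strspn(colon + 1, " ");
                t->count++;
            }
            line = strtok(NULL, "\r\n");
        }
    }

    /* emit the table back as HTTP/1; an HTTP/2 sender would feed the
     * same entries to an HPACK encoder instead.
     */
    static void table_to_h1(const struct hdr_table *t, FILE *out)
    {
        int i;

        for (i = 0; i < t->count; i++)
            fprintf(out, "%s: %s\r\n", t->h[i].name, t->h[i].value);
    }

    int main(void)
    {
        char req[] = "Host: example.com\r\nUser-Agent: demo\r\n";
        struct hdr_table t;

        h1_to_table(req, &t);
        table_to_h1(&t, stdout); /* H1 in -> table -> H1 out */
        return 0;
    }

The point is that the table becomes the only thing the rest of the
code ever sees, so adding HPACK on one side doesn't ripple through all
the HTTP processing.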

Cheers,
Willy

