--- Comment #24 from Ryan Lane <> ---
(In reply to Gabriel Wicke from comment #23)
> (In reply to Ryan Lane from comment #22)
> > (In reply to Gabriel Wicke from comment #21)
> > > > Can you use a shared SSL cache? 
> > > 
> > > Can you describe what this is about?
> > > 
> > 
> > Sure. Currently we use source hash in LVS to ensure a client always hits the
> > same frontend SSL server, so it always reuses its SSL session. This works
> > OK, but often leads to bugs: the monitoring servers don't always detect
> > errors because they always hit the same nodes, and subsets of users see a
> > problem while others, including us, don't. Also, it's not possible to
> > weight the load of different servers while using source hash, so we'd
> > really like to switch to weighted round robin.
> > 
> > Part of that is supporting an SSL session cache that spans the SSL nodes.
> > Apache and Stud support this; Nginx does not, so we were considering
> > switching away from Nginx or adding it ourselves. Adding SPDY into the mix
> > may change things. Does it use the same session cache as HTTPS? If we
> > switch to something other than Nginx, will it support SPDY?
> SPDY sets up a single TCP connection per host (really IP and cert) and then
> multiplexes all requests over that connection. My understanding of the way
> we use LVS is that all incoming traffic for a given TCP connection is
> forwarded to the same backend server even in round robin mode. The single
> SPDY connection would end up talking to the same backend all the time. So
> moving towards SPDY might actually reduce the need for a shared SSL cache
> for most connections.

If the connection is broken and a new one is needed, it will likely hit a
different server under round robin, which means an SSL session cache miss. This
is pretty common with mobile clients, which is where it matters most.
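For context, Nginx's `ssl_session_cache` is shared only between worker
processes on a single machine, whereas Apache can back the cache with an
external store reachable from every node (distcache in 2.2, or
mod_socache_memcache in 2.4). A rough sketch of the difference, with hostnames,
ports, and sizes purely illustrative:

```
# nginx: cache is shared across workers, but only within one host
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 10m;

# Apache 2.4: point the session cache at memcached instances that all
# SSL terminators can reach (mod_socache_memcache; hosts are placeholders)
SSLSessionCache "memcache:cache1.example.org:11211,cache2.example.org:11211"
```

With a node-local cache, any scheduler that can send a reconnecting client to a
different frontend forces a full SSL handshake; an external store avoids that.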

Wikibugs-l mailing list