I've looked over the spdy documentation, such as it is, and have a
bunch of questions and comments.

First and foremost: what is the transition plan? I gather from a post
on this group that spdy is supposed to be backwards compatible with
http, which would seem to imply that the client should somehow hint in
the initial (common to both) http headers that it supports spdy, and
that the server should somehow clearly indicate in its response that
it would like to proceed with spdy. But the exact details of what
hinting is acceptable aren't even alluded to in the current
documentation. It's very important that this be made clear, especially
if it isn't something along the lines of what I just said. There's
also the laudable but problematic comment that spdy should always use
ssl, but more on that later.
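
To make the question concrete, here's a rough sketch in Python of the
kind of hinting I'm imagining, loosely modeled on the existing HTTP/1.1
Upgrade mechanism. The header values, token, and status code are all
assumptions on my part; nothing like this appears in the spdy
documentation.

    # Purely hypothetical: hint spdy support in the initial http request
    # and switch protocols only if the server explicitly agrees.
    import http.client

    conn = http.client.HTTPConnection("www.example.com", 80)
    conn.request("GET", "/", headers={
        "Connection": "Upgrade",
        "Upgrade": "spdy/1",   # made-up token
    })
    resp = conn.getresponse()
    if resp.status == 101:
        # 101 Switching Protocols: from here on the raw socket would
        # carry spdy frames instead of http
        raw_socket = conn.sock
    else:
        # server didn't take the hint; carry on with plain http
        body = resp.read()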

Terminating POST requests with an empty frame is a gross hack. Giving
the terminating frame real semantics would be nicer and more flexible.
For starters, it could differentiate between cancelled and finished
requests.
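
As a sketch of what I mean, the terminating frame could carry an
explicit status byte instead of being empty. The frame layout and names
below are invented for illustration, not taken from the spec:

    # Hypothetical end-of-stream frame with real semantics.
    from enum import IntEnum
    import struct

    class StreamEnd(IntEnum):
        FINISHED = 0    # sender is done; the body is complete
        CANCELLED = 1   # sender gave up; discard what arrived

    def end_frame(stream_id: int, status: StreamEnd) -> bytes:
        # 4-byte stream id followed by a 1-byte status code (invented layout)
        return struct.pack("!IB", stream_id, status)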

I'm skeptical of the proposed prioritization scheme's utility. While
there's nothing particularly wrong with including it, I suspect that
server-based heuristics for determining what to send first are likely
to work a lot better. The simplest such scheme is to send all html
objects first, followed by all flash objects, then images, and finally
video. Within each type sort by size and send the smallest objects
first. I suspect it's hard to improve on this simple approach.
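
In code, the heuristic I have in mind is nothing more than a two-level
sort; the content-type ranking below is my own guess at a sensible
ordering:

    # Order pending responses by type, then by size within each type.
    TYPE_ORDER = {"text/html": 0, "application/x-shockwave-flash": 1,
                  "image": 2, "video": 3}

    def send_order(pending):
        # pending: iterable of (content_type, size_in_bytes, response) tuples
        def key(item):
            ctype, size, _ = item
            rank = TYPE_ORDER.get(ctype,
                                  TYPE_ORDER.get(ctype.split("/")[0], 4))
            return (rank, size)
        return sorted(pending, key=key)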

There's a very common use case that spdy as it currently stands
supports suboptimally: when a user clicks a hyperlink, you generally
want to cancel all currently pending requests from the server and
start a new request for the new page. One can send a whole slew of
cancels, but it would be a lot more elegant to have some kind of
request grouping built into the protocol.
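
Here's a rough sketch of the sort of grouping I mean; the session and
method names are invented for illustration and aren't in any spdy
draft:

    # Tag each stream with a group id when it's opened, so one
    # "cancel group" message replaces a slew of per-stream cancels.
    class Session:
        def __init__(self):
            self.streams = {}    # stream_id -> group_id

        def open_stream(self, stream_id, group_id):
            self.streams[stream_id] = group_id

        def cancel_group(self, group_id):
            doomed = [sid for sid, gid in self.streams.items()
                      if gid == group_id]
            for sid in doomed:
                del self.streams[sid]
            return doomed        # stream ids whose transfers should stop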

The documentation says that spdy always uses SSL. While I get very
excited at the prospect of ubiquitous web encryption, after mulling it
over my recommendation (and I cringe as I type this) is to view the
issue as outside the scope of spdy and drop it. I'm fully in favor of
having an initiative to add encryption by default to http, but it's an
orthogonal issue with a whole other set of problems and tying any such
thing to spdy is likely to kill both of them.

On the plus side for using ssl, it encrypts the traffic, which is a
useful end in and of itself, and it lets the protocol pass through
existing proxies.

The first downside to ssl sounds stupid, but it's basically a deal
killer. When a connection is first initiated, the sending side doesn't
necessarily know whether the receiving side wishes to speak ssl, and
because ssl wraps the whole connection there's no way to switch it on
once a connection is already transferring data, so basically you're
stuck with no graceful transition plan. There's a reason why http and
https are separate protocols, and the one stupid little fact that they
require different urls is the main reason why encryption of web
traffic isn't a lot more widespread than it is currently. The two
aren't even comparable to each other - most sites don't run an https
server, and among those that do it's downright rare for the https
version to serve the same content over both protocols. Spdy can't do
anything to make this situation better, and I think it shouldn't try.
It's far easier to simply allow spdy to work over either http or https
and let sites which want the encryption and proxy-compatibility
benefits of ssl, at the expense of its extra resources and cert brain
damage, use https. If you'd like to know my thoughts on what a good
transition plan for getting http widely encrypted would look like, I'd
be happy to share them, but that's rather far afield from and
orthogonal to spdy, and would be best served as an independent
proposal.

Other problems with ssl include: it adds an extra round trip, it bulks
up the data with hash bytes for data integrity, it requires enough CPU
on the server that it could melt some high-volume sites, and the cert
policies are completely brain damaged. That last one is in principle
simple enough to fix: allow self-signed certs without giving the end
user a security warning, but that will inevitably ignite several holy
wars and make getting through a standards process a nightmare. The
others would be best served by using something lighter weight than ssl
which makes no pretense of man in the middle prevention, but that's a
whole other topic.

Unrelated to all that, and likely outside the scope of spdy, is the
brain damage in DNS. The biggest problems with common DNS practice are
that it simply gives an IP rather than a mapping from a service to an
(ip, port) pair, and that when it specifies multiple IPs the specified
behavior is to pick one randomly rather than the much more useful
behavior of trying the first one and falling back to the second if
that fails. Having real failover on the client would make running
high availability web sites vastly easier. Hopefully spdy will succeed
and then Google can get around to trying to help fix some of these
other problems.
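
For what it's worth, the failover behavior I'd like to see from clients
is simple to express; this sketch just walks the resolved addresses in
order instead of picking one at random:

    # Try each resolved address in turn, falling back on failure.
    import socket

    def connect_with_failover(host, port, timeout=3.0):
        last_error = None
        for *_, addr in socket.getaddrinfo(host, port,
                                           type=socket.SOCK_STREAM):
            try:
                return socket.create_connection(addr[:2], timeout=timeout)
            except OSError as err:
                last_error = err    # remember it and try the next address
        raise last_error or OSError("no addresses for %s" % host)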
