Thanks for looking at SPDY, Bill!

On Nov 15, 11:21 am, Bill <[email protected]> wrote:
> Mike, Roberto,
>
> Have you folks heard of the W3C project called HTTP-NG, back in
> 1997-2000?  Among other things, we developed a MUX protocol to use
> over TCP that multiplexed 128 connections in each direction, and
> greatly accelerated HTTP 1.1 pipelining.  We also proposed a new data
> transfer protocol, which optimized data transfer over the links.

Yes, we've looked at it, although probably not as carefully as we
should.  I've just taken another quick read through it.

I think the big difference between SPDY and HTTP-NG is that SPDY is
myopically focused on one primary goal: make the web faster.  We have
one other consideration, security (and a belief that running over SSL
all the time is good for users, for both privacy and security
reasons).  HTTP-NG, on the other hand, had many goals, only one of
which was performance.  SPDY has done nothing to address HTTP-NG's
other goals.

>
> The project report is at
> http://www2.parc.com/isl/members/janssen/pubs/www9-http-next-generati....
> Our key test was downloading a complicated page:  ``This page had been
> developed as part of earlier performance testing work with HTTP 1.1
> [HFN97]. Called the "Microscape site", it is a combination of the home
> pages of the Microsoft and Netscape web sites. It consists of a 43KB
> HTML file, which references 41 embedded GIF images (actually 42
> embedded GIF images, but in our tests the image "enter.gif" was not
> included, as libwww's webbot did not fetch it as a result of fetching
> the main page). Thus a complete fetch of the site consists of 42 HTTP
> GET requests. Our base performance measures were the time it took to
> fetch this page, and the number of bytes actually transferred. We
> repeated this test with HTTP 1.0, HTTP 1.1, and HTTP-NG.''

Cool.  We've been testing a little differently - trying to model the
web as it is used today as closely as possible.  To do so, we've been
capturing real world content and then we replay it through various
protocols (and network configurations).  We maintain everything as it
was originally sent, including CSS, JS, etc.  We use a browser to load
pages because we're not simply looking at page load times from a
networking perspective.  We're also looking at the time it takes to
get usable content rendered in the browser.  This latter point turns
out to be key.  Many people cite CNN as being a poorly designed page,
but I will contend that it is not so bad.  While loading 150 resources
is heavy, most of those resources are far below the fold, and the user
has a renderable page in a reasonable time.  Of course there is room
for optimization, but if we merely look at a protocol to load all
resources, we might think that the problem is worse than it really is.

Nonetheless - we've struggled to find the "right" benchmark metric.
Because of the flexibility of HTML and CSS, it's very hard to pick a
single page-load-time metric that works across all sites.  (We've
come to the conclusion that there is no one metric that works for
everything; you have to look at all three.)  And the only "easy" one
to cite that others can reproduce and understand is "page load time" -
meaning the time until all resources have loaded; that's what we
used for the numbers we published.
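
For concreteness, here's a minimal Python sketch of the shape of that
measurement.  load_page() is a hypothetical stand-in for the real
browser driver (the real harness replays captured content through a
proxy); the point is just that we record more than one number per
load:

    import statistics

    def load_page(protocol, url):
        # Hypothetical stub: the real driver launches a browser
        # against the replay proxy and reports timings in ms.
        return {"first_render_ms": 0.0, "onload_ms": 0.0}

    def benchmark(protocol, url, runs=10):
        """Median time-to-first-render and time-to-onload."""
        renders, onloads = [], []
        for _ in range(runs):
            t = load_page(protocol, url)
            renders.append(t["first_render_ms"])
            onloads.append(t["onload_ms"])
        return statistics.median(renders), statistics.median(onloads)

    for proto in ("http/1.1", "spdy"):
        render, onload = benchmark(proto, "http://www.cnn.com/")
        print(f"{proto}: first render {render}ms, onload {onload}ms")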


> We demonstrated that using a single mux'ed connection was faster,
> and reduced the number of bytes transferred by about 50%, which
> would accord with the speed-up in page-load times you are seeing.
> The HTTP-NG WebMUX protocol (http://www.w3.org/TR/WD-mux) was quite
> carefully designed; you might want to read through it, or even
> consider adopting it.

WebMUX definitely has a lot of properties in common with SPDY.  It
looks like the benchmark in your whitepaper didn't show very
significant performance gains, though, did it?  I'm not sure I'm
reading it right.


Here are a few items we think are important; we'd be interested in
how WebMUX handles them.

a) In our testing, we first implemented a multiplexing protocol.  As
soon as that was in place, however, we discovered we needed priorities
- because not all web resources are equal!  For instance, downloading
CSS is critical to maintaining webpage paint times.  If you allow
image downloads to compete with CSS downloads, you can completely
destroy paint-time performance.  From a pure networking view this
doesn't matter, but for rendering performance it is critical.  When
we tested under packet-loss conditions, the need for priorities was
even greater.  At some point we intend to write this all up.  Does
WebMUX address priorities?
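
To make the point concrete, here's a minimal Python sketch of strict
priority scheduling over one multiplexed connection - purely
illustrative, not SPDY's actual scheduler, and the stream names,
sizes, and frame size are made up:

    import heapq

    FRAME_SIZE = 1400  # illustrative per-frame payload, in bytes

    class Stream:
        def __init__(self, name, priority, remaining):
            self.name = name
            self.priority = priority    # 0 = most urgent
            self.remaining = remaining  # bytes left to send

    def send_frames(streams):
        """Yield (stream, bytes) frames, always serving the most
        urgent stream that still has data - so CSS never waits
        behind images."""
        heap = [(s.priority, i, s) for i, s in enumerate(streams)]
        heapq.heapify(heap)
        while heap:
            _, _, s = heap[0]
            chunk = min(FRAME_SIZE, s.remaining)
            s.remaining -= chunk
            yield s.name, chunk
            if s.remaining == 0:
                heapq.heappop(heap)

    streams = [Stream("style.css", 0, 4000),
               Stream("hero.gif", 3, 7000),
               Stream("footer.gif", 3, 5000)]
    for name, nbytes in send_frames(streams):
        print(name, nbytes)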

b) Compression, especially given the slow uplink capacities of many
networks, is key.  HTTP sends a lot of redundant data in its headers.
That duplicate data either needs to be removed entirely or highly
compressed.  The WebMUX specification discusses compression, but did
it actually use it?
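
As a rough illustration of how much redundancy sits in the headers,
here's a Python sketch compressing 42 GET requests' worth of headers
(the Microscape count) through one zlib context per connection, the
way SPDY compresses header blocks - the header values are made up and
SPDY's shared-dictionary detail is omitted, so real numbers differ:

    import zlib

    def request_headers(path):
        return (f"GET {path} HTTP/1.1\r\n"
                "Host: www.example.com\r\n"
                "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Chrome\r\n"
                "Accept: text/html,application/xhtml+xml,*/*;q=0.8\r\n"
                "Accept-Encoding: gzip,deflate\r\n"
                "Cookie: session=abc123; prefs=def456\r\n\r\n").encode()

    # One compression context for the whole connection, so header
    # fields repeated across requests compress to almost nothing.
    comp = zlib.compressobj()
    raw = packed = 0
    for i in range(42):
        hdr = request_headers(f"/images/img{i}.gif")
        raw += len(hdr)
        packed += len(comp.compress(hdr))
        packed += len(comp.flush(zlib.Z_SYNC_FLUSH))
    print(f"raw: {raw} bytes, compressed: {packed} bytes")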

c) Consideration for both mobile and wired networks.  This is an
area where SPDY is weak, and we could use help.  We're noticing that
the ideal characteristics of the protocol differ greatly between
networks.  In ~5 years, consumers will have 50Mbps to their homes;
on those networks, with low RTT, low packet loss, and massive
bandwidth, compression can actually hurt, and some of the
optimizations may be ill-advised.  At the same time, wireless
networks are growing, with speeds anywhere from 100Kbps to 2Mbps and
RTTs that can be north of 400ms; their needs are very different.  We
hope the next protocol can assist clients and servers with tuning,
so that we don't end up stuck with a static protocol.  I see that
WebMUX has extensions, so this could probably be implemented as an
extension, just not yet defined.
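
Sketching what we mean by tuning - purely hypothetical, none of these
fields exist in SPDY or WebMUX today - the endpoints would exchange
what they know about the link and adapt accordingly:

    from dataclasses import dataclass

    @dataclass
    class LinkHints:
        bandwidth_kbps: int   # estimated downlink capacity
        rtt_ms: int           # measured round-trip time
        lossy: bool           # e.g. wireless vs. wired

    def tune(hints: LinkHints) -> dict:
        """Pick per-connection knobs from the link hints."""
        return {
            # On fat, low-RTT pipes compression costs more CPU than
            # it saves in transfer time; on slow uplinks it's key.
            "compress_headers": hints.bandwidth_kbps < 10000,
            # High-RTT links want more requests in flight at once.
            "max_concurrent_streams": 100 if hints.rtt_ms > 200 else 20,
            # Lossy links benefit most from strict prioritization.
            "strict_priorities": hints.lossy,
        }

    print(tune(LinkHints(bandwidth_kbps=50000, rtt_ms=20, lossy=False)))
    print(tune(LinkHints(bandwidth_kbps=400, rtt_ms=400, lossy=True)))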

Lastly, we're deliberately trying not to change the application-layer
semantics of HTTP.  Are we missing anything performance-critical by
doing that?

Thoughts?
Mike

>
> Bill

-- 
Chromium Discussion mailing list: [email protected] 
View archives, change email options, or unsubscribe: 
    http://groups.google.com/group/chromium-discuss
