> > HTTP 2.0 can send you multiple files in parallel on the same connection:
> > that way you don't pay (1) the TCP slow-start cost, (2) the HTTPS
> > handshake and (3) the cookie/user-agent/... headers cost.
>
> Doesn't Connection: keep-alive deal with (1) and (2) nicely?

You still have to pay those costs six times, since the browser opens six parallel connections.



> > Under HTTP 2.0, you can also ask for the next files while you receive
> > the current one (or send them by batch), and that reduces the RTT cost
> > to 0.
>
> HTTP/2.0 doesn't (and can't) fix the 1-RTT-per-request cost: it's just
> like HTTP/1.1.

Yes it can. Because requests are multiplexed over one bidirectional channel, you don't have to wait to receive FILE1 before asking for FILE2. You can therefore request all the files you need at once, even while you are still downloading your "index.html" file. And, as noted, the server can even push files to you if it thinks you'll need them, removing the need to ask for them at all.
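To make the latency argument concrete, here is a back-of-the-envelope model (the file counts and RTT numbers are invented for illustration): serialized requests pay one round trip per file, while a multiplexed connection that issues every request up front pays the round-trip cost roughly once.

```javascript
// Hypothetical latency model; numbers and function names are mine,
// not from any spec.
function sequentialLatency(fileCount, rttMs) {
  // One connection, one request at a time: each request must wait a
  // full round trip before the next can be sent.
  return fileCount * rttMs;
}

function multiplexedLatency(fileCount, rttMs) {
  // All requests go out immediately on the same connection, so the
  // round-trip cost is paid roughly once, regardless of fileCount.
  return rttMs;
}

console.log(sequentialLatency(10, 50));  // 500 (ms)
console.log(multiplexedLatency(10, 50)); // 50 (ms)
```

This ignores bandwidth and server think-time, but it captures why "ask for everything at once" collapses the per-request RTT tax.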

> If HTTP/2.0 lets me ask for n files in a single request then yes, the RTT
> would be ~1, or 1/n per request if you will, which is just like asking
> for a .zip in HTTP/1.1.

Of course, you still lose one RTT when you ask for the first file. As you point out, that is also the case if you were to download a ZIP. The difference is that the files are stored independently in the cache and can be updated independently: if only one of them changed, you're not forced to redownload all of them.
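A quick sketch of that caching difference (file names and byte sizes are made up): count how many bytes a returning client must re-download when one file changes, for a single ZIP bundle versus individually cached files.

```javascript
// Invented example files and sizes, for illustration only.
const files = { 'main.css': 10_000, 'lib.js': 80_000, 'index.js': 5_000 };

function bytesToRedownload(changedNames, bundled) {
  if (bundled) {
    // One archive is one cache entry: any change invalidates everything.
    return Object.values(files).reduce((a, b) => a + b, 0);
  }
  // Per-file caching: only the changed files are fetched again; the
  // rest revalidate cheaply (e.g. a 304 against their ETags).
  return changedNames.reduce((sum, name) => sum + files[name], 0);
}

console.log(bytesToRedownload(['index.js'], true));  // 95000
console.log(bytesToRedownload(['index.js'], false)); // 5000
```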



> > Also, the server can decide to send you a list of files you didn't
> > request (à la ZIP), making it totally unnecessary for your site to ask
> > for the files to preload them.
>
> Can a server always know what the page is going to need next...
> beforehand? Easily?

Yes and no. It can learn from past experience. It can know that people who asked for "index.html" without having it in cache ended up requesting "main.css", "index.css", "lib.js" and "index.js". This is a zero-config experience, and it improves over time. It can also know that people asking for "index.html" with an outdated ETag usually already had the latest versions of "main.css" and "lib.js" in cache, and it can therefore send 304 replies for those files immediately, with their respective ETags.
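One possible shape for that learning step (the class, threshold, and counting scheme below are my own invention, not anything a server actually ships): record which resources clients fetch shortly after each page, and nominate for push the ones requested often enough.

```javascript
// Hypothetical push predictor; structure and threshold are assumptions.
class PushPredictor {
  constructor(threshold = 0.5) {
    this.threshold = threshold;          // min fraction of visits
    this.stats = new Map();              // page -> { visits, counts }
  }
  // Record one page view and the follow-up resources it triggered.
  record(page, followUps) {
    const s = this.stats.get(page) ?? { visits: 0, counts: new Map() };
    s.visits += 1;
    for (const r of followUps) {
      s.counts.set(r, (s.counts.get(r) ?? 0) + 1);
    }
    this.stats.set(page, s);
  }
  // Resources requested on at least `threshold` of past visits.
  candidates(page) {
    const s = this.stats.get(page);
    if (!s) return [];
    return [...s.counts]
      .filter(([, n]) => n / s.visits >= this.threshold)
      .map(([r]) => r);
  }
}

const p = new PushPredictor();
p.record('index.html', ['main.css', 'index.js']);
p.record('index.html', ['main.css']);
p.record('index.html', ['main.css']);
console.log(p.candidates('index.html')); // ['main.css']
```

The same table, keyed by (page, ETag) pairs instead of pages, would support the "immediate 304" trick described above.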


> 4.- It's not HTTP/2.0 *or* .zip bundling. We could have both. Why not?

Because using a ZIP file is a bad practice we certainly should not encourage. As stated before, it makes the website slow: you need to download everything before you can start extracting the files, processing them (JPEG, JS, CSS...), and displaying anything. Basically you leave the client idle for nothing, then give it a whole lot of work at once, which will make it unresponsive.
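A rough sketch of that "idle client" argument, with invented sizes and bandwidth: with a ZIP, processing can only start once the whole archive has arrived; with individual responses, the client starts working on the first file as soon as it lands, overlapping download and work.

```javascript
// Illustrative model only; sizes and bandwidth are made-up numbers.
function timeUntilFirstProcessing(sizes, bytesPerMs, zipped) {
  const total = sizes.reduce((a, b) => a + b, 0);
  if (zipped) {
    return total / bytesPerMs; // must wait for the whole archive
  }
  return sizes[0] / bytesPerMs; // first file alone unblocks the client
}

const sizes = [5_000, 80_000, 200_000]; // e.g. index.html, lib.js, hero.jpg
console.log(timeUntilFirstProcessing(sizes, 100, true));  // 2850 (ms)
console.log(timeUntilFirstProcessing(sizes, 100, false)); // 50 (ms)
```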

This also prevents independent caching (if you change one file of the ZIP, the client has to redownload everything), proper Git/filesystem management on the server side, and per-file content negotiation (like sending a 2x image to a phone and a 1x image to a desktop).

If you really want to use this kind of technique, and you understand the tradeoffs, you will still be able in the near future to use a Network Domain Controller (I think "Event Worker" is the new name of this thing) and define your own network logic. Then you can decide to use ZIP as a transfer and caching protocol if you wish (by downloading the ZIP in the init procedure of the controller, putting it in the cache, and unzipping its files as needed to reply to requests, instead of letting them go through the network).
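A very loose simulation of that controller idea (in a real browser this logic would live in a worker's fetch handler and use the Cache API; here plain objects stand in for both, so everything below is an assumption, not actual worker code):

```javascript
// Stand-in for a ZIP downloaded once during the controller's install step.
const bundle = {
  'app.js': 'console.log("app")',
  'app.css': 'body { margin: 0 }',
};
const cache = {}; // stand-in for the browser's cache storage

function install() {
  // "Unzip" the bundle into the cache during the init procedure.
  Object.assign(cache, bundle);
}

function handleRequest(path) {
  // Intercept a request: serve from the unpacked bundle if present,
  // otherwise let it fall through to the network.
  return cache[path] ?? `NETWORK:${path}`;
}

install();
console.log(handleRequest('app.css'));  // "body { margin: 0 }"
console.log(handleRequest('other.js')); // "NETWORK:other.js"
```

The point is that the bundling choice stays inside your own network logic, instead of being baked into the platform for everyone.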

But I remain dubious about making this a part of the platform. It has no clear advantage (at best it offers no performance gain over HTTP/2), and people will certainly misuse it (at worst, the performance is much worse than even HTTP/1).

As for the general use case of Web App packages (aka "Generic Bundling not over HTTP"), which is a great idea by the way: this is called an App Package, and this mailing list isn't the best place to discuss it. I would recommend finding the WG that works on this and following the discussions there (is that really linked to ECMAScript?).

I think the problem here is not really the package format itself, but rather the "authorizations" required to run the app (access to filesystem, camera, location, ...), whose granularity depends on the platform; this explains why each platform requires its own bundling even when the app itself is identical.

There's room for improvement there.
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss
