Re: Generic Bundling

2013-10-28 Thread Jorge Chamorro
On 25/10/2013, at 08:17, Ilya Grigorik wrote:
 
 With HTTP 1.x (and without sharding) you can fetch up to six resources in 
 parallel. With HTTP 2.0, you can fetch as many resources as you wish in 
 parallel. The only reason bundling exists as an optimization is to work 
 around the limit of six parallel requests. The moment you remove that 
 limitation, bundling is unnecessary and only hurts performance.

The ability to ask for n files in a *single* request is key, yes.

-- 
( Jorge )();


RE: Re: Generic Bundling

2013-10-27 Thread Jonathan Bond-Caron
On Fri Oct 25 11:48 PM, Ilya Grigorik wrote:
 On Fri, Oct 25, 2013 at 6:24 AM, Jonathan Bond-Caron jbo...@gdesolutions.com wrote:
   I disagree, if you want to treat this as an optimization problem, 
 let's look at it:
   1. x number of resources/files
   2. x number of network connections
 
   What is the optimal way of loading the resources/files?
   I don't have a proof but I'm pretty sure that's NP-complete.
 
 I assure you it is not NP-complete. We know what the critical resources are 
 for the page (i.e. their priorities), and we know the properties of TCP 
 connections (like slow-start). The optimal number
 of connections is 1 (with the exception of a few edge cases where TCP window 
 scaling is disabled and BDP is high).

Let's look at a use case: a 200 MB application I want to distribute over the
internet. Is the optimal number of connections 1?

You wouldn't get faster delivery with a P2P-like algorithm?
e.g.:
Server sends a header:
Cache-clients: my-neighbor.com:4000, my-other-neighbor.com:6000

There are security considerations, for sure, but your claim that 1 connection
is optimal is bogus at best.
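
A sketch of what a client might do with such a header. Everything here is
hypothetical: the Cache-clients header, the mirror hosts, and the chunking
scheme are assumptions, not an existing protocol; only fetch() and the Range
header are standard (and CORS would have to permit the cross-origin reads):

    // Hypothetical: split one large download across mirror origins
    // advertised by a (made-up) Cache-clients response header.
    async function fetchFromMirrors(url, mirrorOrigins, totalSize, chunkSize = 4 << 20) {
      const { origin, pathname } = new URL(url);
      const origins = [origin, ...mirrorOrigins];        // original server + peers
      const parts = [];
      for (let start = 0, i = 0; start < totalSize; start += chunkSize, i++) {
        const end = Math.min(start + chunkSize, totalSize) - 1;
        parts.push(fetch(origins[i % origins.length] + pathname, {
          headers: { Range: `bytes=${start}-${end}` },   // standard HTTP Range request
        }).then(r => {
          if (r.status !== 206) throw new Error('range not honored');
          return r.arrayBuffer();
        }));
      }
      return new Blob(await Promise.all(parts));         // reassemble in order
    }

    // e.g. fetchFromMirrors('https://example.com/myapp.bin',
    //   ['https://my-neighbor.com:4000'], 200 * 2 ** 20);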

 
   Are you saying that HTTP 2.0 loading is the best known algorithm in all 
 cases?
   That's bogus. It's certainly a better algorithm but there's a wide 
 range of strategies that will result in faster load times (it involves 
 bundling in some cases).
 
 Bundling does not achieve anything that a good multiplexing solution can't -- 
 bundling is multiplexing at the application layer. HTTP 2.0 provides the 
 necessary tools to achieve the same performance as
 bundling, but with additional benefits of granular downloads, incremental 
 execution, etc. (I feel like I'm stuck in a loop ;-))

That's interesting: marketing will usually say "but that's just an engineering
problem", and engineering will say "but that's just marketing".

In this case the answer was "that's just at the application layer", so I'll
answer "that's just at the network layer".

How would you suggest delivering an application over the internet (e.g.
myapp.zip)? Isn't that a bundle already?

I agree that delivering a single .zip makes the user wait, vs. delivering
images etc. progressively. But in some cases that might be OK:
- iOS app (splash screen, user waits)
- Android app (splash screen, user waits)
- Windows 8, ...
- Flash, ...

It depends on the use case; bundling certainly has its advantages at the
application layer.



RE: Re: Generic Bundling

2013-10-27 Thread François REMY
± How would you suggest delivering an application over the internet (e.g.
± myapp.zip)? Isn't that a bundle already?

This claim is bogus. In all the cases I know, the packages are unzipped by the 
OS before running the application, and the application itself has no need to 
know anything about the package. The proof is that you can package the very 
same HTML app in multiple formats, and it will work for all of them.

In other words, the application layer (ECMAScript code) is unaware of the 
packaging layer (ZIP-based format).


± Let's look at a use case: a 200 MB application I want to distribute over
± the internet.
± Is the optimal number of connections 1?
± 
± You wouldn't get faster delivery with a P2P-like algorithm?

This is a different problem. The optimal number is one connection per server.
Given that servers generally can't send you files as fast as you can receive
them, connecting to multiple servers may actually help in some cases,
especially if the servers you connect to (peers) are closer to you than the
original source, effectively acting like a CDN.

But you cannot possibly win anything by connecting to the same server multiple
times, which is what we were trying to say anyway. (Well, technically, you
sometimes can: if the server balances its output bandwidth between clients,
connecting twice makes you count twice in the distribution algorithm; but then
every client would connect multiple times, creating congestion without any
bandwidth gain.)


RE: Re: Generic Bundling

2013-10-27 Thread Jonathan Bond-Caron
On Sun Oct 27 09:35 AM, François REMY wrote:
 ± How would you suggest delivering an application over the internet (e.g.
 ± myapp.zip)? Isn't that a bundle already?
 
 This claim is bogus. In all the cases I know, the packages are unzipped by 
 the OS
 before running the application, and the application itself has no need to know
 anything about the package. The proof is that you can package the very same
 HTML app in multiple formats, and it will work for all of them.
 
 In other words, the application layer (ECMAScript code) is unaware of the
 packaging layer (ZIP-based format).
 

I don't think I'm making any bogus claim here; those were questions.

My point is that distributing applications implies some level(s) of bundling.
An application update could be a 'bundle' of the files that have changed, or
single files (that change frequently).

My interest is at the application layer and how this can fit with System.Loader
modules. Again, I don't see why bundling and HTTP 2.0 can't coexist.
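
For what it's worth, the loader drafts of the era left room for exactly this
coexistence. A sketch, assuming the then-proposed overridable loader hooks (as
exposed by the es6-module-loader polyfill) and a hypothetical unzip() helper;
neither the bundle path nor the helper is part of any spec:

    // Serve module source from a prefetched bundle, falling back to the network.
    // unzip() is hypothetical: ArrayBuffer -> Map of path -> source text.
    const bundled = fetch('/app-bundle.zip')
      .then(r => r.arrayBuffer())
      .then(buf => unzip(buf));

    System.fetch = function (load) {            // then-draft "fetch" hook
      return bundled.then(files =>
        files.has(load.address)
          ? files.get(load.address)             // resolved from the bundle, no request
          : fetch(load.address).then(r => r.text())); // not bundled: normal fetch
    };

HTTP 2.0 and bundling coexist here trivially: modules in the bundle cost zero
requests, everything else multiplexes as usual.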

Is it possible that HTTP 2.0 just happens to optimize for the use cases that we 
are seeing today?
Do you know what application use cases are going to occur 5 years from today? 
This propaganda that HTTP 2.0 is optimal for all those use cases is just wrong. 
If you have data or some objective proof, then please share.



Re: Re: Generic Bundling

2013-10-27 Thread Ilya Grigorik
On Sun, Oct 27, 2013 at 6:22 AM, Jonathan Bond-Caron 
jbo...@gdesolutions.com wrote:

 You wouldn't get faster delivery with a P2P-like algorithm?
 e.g.:
 Server sends a header:
 Cache-clients: my-neighbor.com:4000, my-other-neighbor.com:6000

 There are security considerations, for sure, but your claim that 1 connection
 is optimal is bogus at best.


We're deviating way off the intended topic, but a quick comment on this: more
TCP connections do not magically raise your throughput. If there is sufficient
capacity between client and server, and no bottlenecks in between, then yes, 1
connection is indeed optimal - it'll ramp up to the full bandwidth of your pipe
and occupy all of it. With that said, back to regularly scheduled
programming... :)

My actual point, as I stated it in my first post: bundling is a performance
anti-pattern in the context of regular web apps. That's all.


RE: Generic Bundling

2013-10-26 Thread François REMY
±  Because using a ZIP file is a bad practice we certainly should not
±  allow. As stated before, it will make the website slow [...]
± 
± It seems what you're saying is that there are already superior ways to bundle
± JS modules and we don't need W3C to define another one.
± Perhaps—but this definitely has nothing to do with the ES module loaders
± proposal before TC39, which is bundling-system agnostic.
± 
± We'll be working with the HTML standards folks to decide how the browser's
± default loader will work, but basically I expect it'll just fetch URLs. That
± means it'll support HTTP2 out of the box, if the server supports it, and zip
± urls if W3C decides to spec such a thing. Apps can of course customize the
± loader if they need some custom bundling scheme. So your beef is not with us
± or with the HTML WG but with the TAG—or wherever the work on zip urls is
± taking place these days (I really don't know).

I agree with you here. I think the word "allow" was an overstatement I didn't
really intend. As I said further down in the mail, a platform should always be
flexible enough to make sure you can do something if you so want (i.e. with a
Domain Controller). What I meant is indeed that ZIP bundling is a measurably
suboptimal practice overall and should not be promoted (i.e. made simpler than
the state-of-the-art approach and/or mentioned as an acceptable solution).

If ZIP URLs are found necessary for other use cases and are added to the 
platform, there's no reason to ban them from the Module Loader spec. But it 
remains that it's not a good practice in most cases. 

Bundling in general is not going to be a valid approach for any purpose related
to efficiency soon (except maybe archive-level compression, where grouping
similar files may improve the compression rate slightly). My point is that I'm
not sure it's worth exploring bundling for new standards that are going to be
used in a future one can expect to come after HTTP2's birth.

My intuition is that by the time every browser supports ES6, they will most
likely support HTTP2 too - most of them already support drafts of it. It's
less certain that most servers will support HTTP2 by then, though. So I'm not
saying this approach isn't worth exploring at all, but it's a temporary
solution at best.


Re: Generic Bundling

2013-10-26 Thread Sam Tobin-Hochstadt
On Sat, Oct 26, 2013 at 9:05 AM, François REMY
francois.remy@outlook.com wrote:

 Bundling in general is not going to be a valid approach for any purpose
 related to efficiency soon (except maybe archive-level compression, where
 grouping similar files may improve the compression rate slightly). My point is
 that I'm not sure it's worth exploring bundling for new standards that are
 going to be used in a future one can expect to come after HTTP2's birth.

This is a very narrow perspective. Bundling may not necessarily serve
the use cases that it currently serves once HTTP2 is deployed.
However, we should consider why real game engines use sprites, why
Java uses JAR files, and why native applications are constantly
reconsidering tradeoffs around single vs multiple dynamically-loaded
shared libraries -- all on local file systems. I/O isn't free,
multiple I/O requests are not free, and there will always be tradeoffs
in this space.

Sam


Re: Generic Bundling

2013-10-26 Thread Andrea Giammarchi
Is it possible not to put HTTP in the middle of your thoughts?

Why is **non-HTTP** bundling a no-go?

Don't you download a single blob to install Chrome, and then eventually get
incremental updates?

Why should that initial single blob not be possible in JS, when every other
programming language has one that works without HTTP in the middle? Without
servers? Without an internet connection?

Thanks




Re: Generic Bundling

2013-10-26 Thread François REMY
I can’t help but repeat: what you describe is called an app package format.
Windows 8 has one, Chrome has one, Firefox OS has one; others may have one,
too. There are discussions about a standardized app package format going on,
but they are not happening on es-discuss.

Why do you think this discussion belongs on es-discuss? Did I miss something?


Re: Generic Bundling

2013-10-26 Thread Andrea Giammarchi
Apologies, I just answered what Ilya had answered, but I'd like to see this
discussion... where is it happening? Thanks a lot, and send it even off-thread
if you can.




Re: Generic Bundling

2013-10-26 Thread François REMY
I think it was once discussed at public-weba...@w3.org. 


→ http://www.w3.org/TR/2012/PER-widgets-20120925/#zip-archive


At some point there was also a community group but I’m not sure it’s still 
active; it published some interesting format comparisons:


→ http://www.w3.org/community/webappstore/wiki/Manifest


There are a lot of links you can follow from there.










Re: Generic Bundling

2013-10-25 Thread Ilya Grigorik
+ 1 to François's comments.

You're not saying that gzipping and wise pre-fetching and parallel download
 of scripts don't improve page load times. Or are you?


- We already have transfer-encoding in HTTP, and yes, you should definitely
use it!
- Prefetching is also an important optimization, but in the context of this
discussion (bundling), it's an orthogonal concern.


 In the equation you paint above something important is missing: the fact
 that there's a round-trip delay per request (even with http2.0), and that
 the only way to avoid it is to bundle things, as in .zip bundling, to
 minimize the (number of requests and thus the) impact of latencies.


With HTTP 1.x (and without sharding) you can fetch up to six resources in
parallel. With HTTP 2.0, you can fetch as many resources as you wish in
parallel. The only reason bundling exists as an optimization is to work
around the limit of six parallel requests. The moment you remove that
limitation, bundling is unnecessary and only hurts performance.


 And there's something else I think .zip bundling can provide that http2.0
 can't: the guarantee that a set of files are cached by the time your script
 runs: with such a guarantee you could do synchronous module require()s, à
 la node.js.


This is completely orthogonal... if you need to express dependencies
between multiple resources, use a loader script, or better, look into
using the upcoming promise APIs. As I mentioned previously, bundling breaks
streaming / incremental execution / prioritization.

ig


Re: Generic Bundling

2013-10-25 Thread Andrea Giammarchi
Ilya... just to re-clarify what the discussion was about: Generic Bundling...
not HTTP Bundling.

I don't know why many keep coupling and confining HTML5 to HTTP and
nothing else.

Bundling as you do with executables or apps; bundling as in sending a single
file update for your customer to replace, instead of unzipping and overwriting
each file, etcetera.

Why, in your opinion, is bundling bad for non-HTTP, offline apps created
using these technologies?

Every programming language I know has some bundle support that works as a
single package/file... C has the executable, then we have phar, war, jar;
Python has many... what about JS? Won't it work without HTTP? Why?

Thanks for your thoughts on this and also thanks for the story and the
material about HTTP2 goodness.

Cheers







Re: Generic Bundling

2013-10-25 Thread Jorge Chamorro
On 24/10/2013, at 17:06, François REMY wrote:

 HTTP 2.0 can send you multiple files in parallel on the same connection: that 
 way you don't pay (1) the TCP's Slow Start cost, (2) the HTTPS handshake and 
 (3) the cookie/useragent/... headers cost.

Doesn't connection:keep-alive deal with (1) and (2) nicely?

 Under HTTP 2.0, you can also ask for the next files while you receive the 
 current one (or send them by batch), and that reduces the RTT cost to 0.

HTTP 2.0 doesn't (and can't) fix the 1-RTT-per-request cost: in that respect
it's just like HTTP 1.1.

If HTTP 2.0 lets me ask for n files in a single request then yes, the RTT would
be ≈ 1, or 1/n per request if you will, which is just like asking for a .zip in
HTTP 1.1.

 Also, the server can decide to send you a list of files you didn't request (à 
 la ZIP), making totally unnecessary for your site to ask for the files to 
 preload them.

Can a server always know what the page is going to need next... beforehand? 
Easily?

 The priority of downloads is negotiated between the browser and the server, 
 and not dependent on the 6 connections and the client.

Yes, that sounds great!

 The big advantage of the HTTP2 solution over the ZIP is that your site could 
 already load with only the most important files downloaded while if you use a 
 ZIP you've to wait until all files have been downloaded.

1.- Bundle *wisely*.
2.- n gzipped files multiplexed in a single HTTP 2.0 connection don't
necessarily arrive faster than the same files .zipped through a non-multiplexed
HTTP 1.1 connection: multiplexing has an overhead (at both ends) that HTTP 1.1
hasn't.
3.- Yes, you can't unzip a .zip as it arrives (you can, but shouldn't, until
you've got the index, which comes last), but knowing for sure that all its
files are cached (after unzipping) is a plus, imo (see the sketch below).
4.- It's not HTTP 2.0 *or* .zip bundling. We could have both. Why not?
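
On point 3, a sketch of why that guarantee matters for modules. preloadZip()
and evalModule() are hypothetical helpers, not proposed APIs; the point is only
that a complete, already-local file set makes synchronous lookup safe:

    // Hypothetical preloadZip(): resolves to a Map of path -> source
    // only once *every* entry of the archive has been unpacked.
    preloadZip('/modules.zip').then(files => {
      function requireSync(path) {       // safe: the whole set is in memory
        const src = files.get(path);
        if (src === undefined) throw new Error('not in bundle: ' + path);
        return evalModule(src);          // hypothetical module evaluator
      }
      const utils = requireSync('lib/utils.js');  // à la node.js, no callback
    });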

 From a performance point of view, this is an issue. Also, since you can only 
 start analyzing the resources at that time, you will overload the CPU at that 
 time. If you can unzip the files one by one, you can spread the load over a 
 much longer time.

Overload the CPU? :-P

 ± In the equation you paint above something important is missing: the fact
 ± that there's a round-trip delay per request (even with http2.0), and that
 ± the only way to avoid it is to bundle things, as in .zip bundling, to
 ± minimize the (number of requests and thus the) impact of latencies.
 
 Go find some HTTP2 presentation, you'll learn things ;-)

Look, I've done that. I ♥ it, it's awesome, and I keep thinking that .zip
bundling would be a nice thing to have too.

-- 
( Jorge )(); 


RE: Generic Bundling

2013-10-25 Thread François REMY
± HTTP 2.0 can send you multiple files in parallel on the same connection:
± that way you don't pay (1) the TCP's Slow Start cost, (2) the HTTPS
± handshake and (3) the cookie/useragent/... headers cost.
± 
± Doesn't connection:keep-alive deal with (1) and (2) nicely?

You still pay those costs six times, since you open six connections.



±  Under HTTP 2.0, you can also ask for the next files while you receive the
± current one (or send them by batch), and that reduces the RTT cost to 0.
± 
± Http2.0 doesn't (and can't) fix the 1 RTT per request cost: it's just like
± http1.1.

Yes it can. Because the channel is now bidirectional, you don't have to wait to
receive FILE1 before asking for FILE2. You can therefore ask for all the files
you need at once, even while you are still downloading your index.html file.
And, as noted, the server can even push files to you if it thinks you'll need
them, removing the need to ask for them at all.
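
From the page's side this needs no new API at all; the same code that queues
six-at-a-time on HTTP 1.1 multiplexes on one connection under HTTP 2.0:

    // Ask for everything up front; over HTTP 2.0 these requests share a
    // single connection and none of them waits for an earlier response.
    const assets = ['/main.css', '/index.css', '/lib.js', '/index.js'];
    Promise.all(assets.map(a => fetch(a)))
      .then(responses => Promise.all(responses.map(r => r.text())))
      .then(bodies => console.log('received', bodies.length, 'files'));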

± If http2.0 lets me ask for n files in a single request then yes, the RTT
± would be ≈ 1, or 1/n per request if you will, which is just like asking for
± a .zip in http1.1.

Of course, you still lose a first RTT when you ask for the first file. As you
point out, that's also the case if you were to download a ZIP. The difference
is that the files are stored independently in the cache and can be updated
independently. If only one of them changed, you're not forced to redownload all
of them.



± Also, the server can decide to send you a list of files you didn't request
± (à la ZIP), making it totally unnecessary for your site to ask for the files
± to preload them.
± 
± Can a server always know what the page is going to need next...
± beforehand? Easily?

Yes and no. It can learn from past experience. It can know that people asking
for index.html and not having it in cache ended up requiring main.css,
index.css, lib.js and index.js. This is a no-config experience, and it
learns over time. It can also know that people asking for index.html with an
outdated ETag ended up already having the latest version of main.css and
lib.js in cache, and it can therefore send 304 replies for those files
immediately, with their respective ETags.
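
Server push along these lines did eventually ship. A minimal sketch using
Node's (much later) http2 module; the learned file list is hard-coded here
purely for illustration:

    // Node.js: push the resources we've learned index.html ends up needing.
    const http2 = require('http2');
    const fs = require('fs');
    const learned = ['/main.css', '/index.css', '/lib.js', '/index.js'];

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem'),
    });
    server.on('stream', (stream, headers) => {
      if (headers[':path'] === '/index.html') {
        for (const path of learned) {
          stream.pushStream({ ':path': path }, (err, push) => {
            if (!err) push.respondWithFile('.' + path); // sent before it's requested
          });
        }
        stream.respondWithFile('./index.html');
      } else {
        stream.respondWithFile('.' + headers[':path']); // naive static serving
      }
    });
    server.listen(8443);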


± 4.- It's not http2.0 *or* .zip bundling. We could have both. Why not?

Because using a ZIP file is a bad practice we certainly should not allow. As
stated before, it will make the website slow, because you need to download
everything before you can start extracting the files, processing them (JPEG,
JS, CSS...), and displaying anything. Basically you leave the client idle for
nothing, then you give it a whole lot of things to do at once, which will make
it unresponsive.

This also prevents independent caching (if you change one file of the ZIP, the 
client has to redownload everything), and proper GIT/FS management on the 
server side, as well as per-file content negotiation (like sending a 2x image 
to a phone, and a 1x to a desktop).

If you really want to use this kind of technique, and understand the tradeoffs,
you will still be allowed in the near future to use a Network Domain Controller
(I think Event Worker is the new name of this thing) and define your own
network logic. Then you can decide to use ZIP as a transfer and caching
protocol if you so want (by downloading the ZIP in the init procedure of the
controller, putting it in the cache, and unzipping its files as needed to reply
to requests, instead of letting them go through the network).
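
That controller idea shipped later under the name ServiceWorker; a sketch of
exactly the tradeoff described, with unzip() again a hypothetical helper
(ArrayBuffer -> Map of path -> bytes), kept in memory rather than the Cache API
for brevity:

    // service-worker.js: fetch one archive up front, answer requests from it.
    let entries;
    self.addEventListener('install', event => {
      event.waitUntil(
        fetch('/app.zip')
          .then(r => r.arrayBuffer())
          .then(buf => unzip(buf))        // hypothetical unzip helper
          .then(map => { entries = map; })
      );
    });
    self.addEventListener('fetch', event => {
      const path = new URL(event.request.url).pathname;
      if (entries && entries.has(path)) {
        event.respondWith(new Response(entries.get(path))); // never hits the network
      }                                   // otherwise fall through to the network
    });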

But I remain dubious about making this a part of the platform. It has no clear
advantage (at best it offers no performance gain over HTTP2), and people will
certainly misuse it (at worst, the performance is much worse than even HTTP1).

As for the general use case of Web App packages (aka Generic Bundling not over
HTTP), which is a great idea by the way: this is called an App Package, and
this mailing list isn't the best place to discuss it. I would recommend finding
the WG that works on this and following the discussions there (is that really
linked to ECMAScript?).

I think the problem here is not really the package format itself, but more the
authorizations required to run the app (access to filesystem, camera,
location, ...), whose granularity depends on the platform, which explains why
each platform requires its own bundling even if the app itself is identical.

There's room for improvement there.


Re: Generic Bundling

2013-10-25 Thread Jason Orendorff
On Fri, Oct 25, 2013 at 3:53 AM, François REMY
francois.remy@outlook.com wrote:
 ± 4.- It's not http2.0 *or* .zip bundling. We could have both. Why not?

 Because using a ZIP file is a bad practice we certainly should not allow. As 
 stated before, it will make the website slow [...]

It seems what you're saying is that there are already superior ways to
bundle JS modules and we don't need W3C to define another one.
Perhaps—but this definitely has nothing to do with the ES module
loaders proposal before TC39, which is bundling-system agnostic.

We'll be working with the HTML standards folks to decide how the
browser's default loader will work, but basically I expect it'll just
fetch URLs. That means it'll support HTTP2 out of the box, if the
server supports it, and zip urls if W3C decides to spec such a
thing. Apps can of course customize the loader if they need some
custom bundling scheme. So your beef is not with us or with the HTML
WG but with the TAG—or wherever the work on zip urls is taking place
these days (I really don't know).

The folks on this list are by and large not networking or Internet
architecture experts.

-j


Re: Generic Bundling

2013-10-25 Thread Ilya Grigorik
On Fri, Oct 25, 2013 at 12:17 AM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 Ilya... just to re-clarify what the discussion was about: Generic
 Bundling... not HTTP Bundling.
 I don't know why many keep coupling and confining HTML5 to HTTP and
 nothing else.
 Bundling as you do with executables or apps; bundling as in sending a single
 file update for your customer to replace, instead of unzipping and overwriting
 each file, etcetera.
 Why, in your opinion, is bundling bad for non-HTTP, offline apps created
 using these technologies?
 Every programming language I know has some bundle support that works as a
 single package/file... C has the executable, then we have phar, war, jar;
 Python has many... what about JS? Won't it work without HTTP? Why?


I'm not saying it won't work. I'm saying there are many downsides to
distributing large blobs of data. Case in point, once you start
distributing large blobs, you'll soon realize that it sucks that you have
to download the entire blob every time a single byte has changed. As a
result, you end up developing binary-diff formats, like Courgette [1], which
we use to update Chrome. A much simpler solution for web apps is to do
exactly what AppCache did, create a manifest which lists all the resources,
and let HTTP do the rest: each file can be downloaded and updated
individually, etc.

ig

[1]
http://www.chromium.org/developers/design-documents/software-updates-courgette
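
The manifest-plus-HTTP model he describes is simple to sketch: publish a
per-file content hash, and a client re-fetches only the entries whose hash
changed. The manifest shape and paths here are illustrative, not AppCache's
actual syntax:

    // Node.js: build a manifest of content hashes so clients can update
    // files individually instead of re-downloading one big blob.
    const crypto = require('crypto');
    const fs = require('fs');

    function buildManifest(paths) {
      const manifest = {};
      for (const p of paths) {
        manifest[p] = crypto.createHash('sha256')
          .update(fs.readFileSync(p))
          .digest('hex');                      // e.g. "lib.js": "9f86d081..."
      }
      return manifest;
    }

    // Client side of the same idea: diff manifests, fetch only what changed.
    function changedFiles(oldManifest, newManifest) {
      return Object.keys(newManifest)
        .filter(p => oldManifest[p] !== newManifest[p]);
    }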





Re: Re: Generic Bundling

2013-10-25 Thread Ilya Grigorik
On Fri, Oct 25, 2013 at 6:24 AM, Jonathan Bond-Caron 
jbo...@gdesolutions.com wrote:

 On Wed Oct 23 10:17 PM, Ilya Grigorik wrote:
  In short, pitching zip bundling as a performance optimization is a
  complete misnomer. If anything, it will only make things worse, even
  for HTTP 1.x clients.

 I disagree, if you want to treat this as an optimization problem, let's
 look at it:
 1. x number of resources/files
 2. x number of network connections

 What is the optimal way of loading the resources/files?
 I don't have a proof but I'm pretty sure that's NP-complete.


I assure you it is not NP-complete. We know what the critical resources are
for the page (i.e. their priorities), and we know the properties of TCP
connections (like slow-start). The optimal number of connections is 1 (with the
exception of a few edge cases where TCP window scaling is disabled and the BDP,
the bandwidth-delay product, is high).
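
The slow-start cost is easy to put numbers on. A back-of-the-envelope sketch,
assuming the then-current initial congestion window of 10 segments (≈ 14 KB),
doubling per round trip, and no loss:

    // Every *new* TCP connection pays this ramp-up again, which is why
    // fewer, warmer connections beat many cold ones.
    function rttsToDeliver(bytes, initWindow = 14 * 1024) {
      let sent = 0, cwnd = initWindow, rtts = 0;
      while (sent < bytes) {
        sent += cwnd;   // one congestion window per round trip
        cwnd *= 2;      // slow start doubles the window each RTT
        rtts++;
      }
      return rtts;
    }
    console.log(rttsToDeliver(500 * 1024)); // 6 round trips for a 500 KB transfer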


 Are you saying that HTTP 2.0 loading is the best known algorithm in all
 cases?
 That's bogus. It's certainly a better algorithm but there's a wide range
 of strategies that will result in faster load times (it involves bundling
 in some cases).


Bundling does not achieve anything that a good multiplexing solution can't
-- bundling is multiplexing at the application layer. HTTP 2.0 provides the
necessary tools to achieve the same performance as bundling, but with the
additional benefits of granular downloads, incremental execution, etc. (I
feel like I'm stuck in a loop ;-))


 The way I look at it, this isn't about zipping a whole application vs.
 HTTP 2.0, it's actually a combination of both strategies that will yield
 better load times.


No. You don't need bundling with HTTP 2.0.


 I agree that large bundles break prioritization, but why wouldn't you
 for example zip 20 image files that never change during the year and choose
 that method of delivery?


Because you can load 20 images individually, which means you can display
each of them sooner, and you have a more granular cache - if you're under
pressure, you can evict one or more images without evicting all of them.
Similarly, you don't need to read the entire zip to get just one image, etc.

ig


Re: Generic Bundling

2013-10-24 Thread Jorge Chamorro
On 24/10/2013, at 04:17, Ilya Grigorik wrote:


Hi,

You're not saying that gzipping and wise pre-fetching and parallel download of 
scripts don't improve page load times. Or are you?

In the equation you paint above something important is missing: the fact that 
there's a round-trip delay per request (even with http2.0), and that the only 
way to avoid it is to bundle things, as in .zip bundling, to minimize the 
(number of requests and thus the) impact of latencies.

And there's something else I think .zip bundling can provide that http2.0 
can't: the guarantee that a set of files are cached by the time your script 
runs: with such a guarantee you could do synchronous module require()s, à la 
node.js.

Cheers,
-- 
( Jorge )();



RE: Generic Bundling

2013-10-24 Thread François REMY
± You're not saying that gzipping and wise pre-fetching and parallel download
± of scripts don't improve page load times. Or are you?

All servers serve gzipped versions of text files. You don't need a ZIP for that.

HTTP 2.0 can send you multiple files in parallel on the same connection: that
way you don't pay (1) TCP's slow-start cost, (2) the HTTPS handshake, and
(3) the cookie/user-agent/... header cost over and over.

Under HTTP 2.0, you can also ask for the next files while you receive the
current one (or have them sent in a batch), and that reduces the RTT cost to 0.
Also, the server can decide to send you a list of files you didn't request (à
la ZIP), making it totally unnecessary for your site to ask for the files in
order to preload them.

The priority of downloads is negotiated between the browser and the server,
not left to the 6 connections and the client.

The big advantage of the HTTP2 solution over the ZIP is that your site could
already load with only the most important files downloaded, while if you use a
ZIP you have to wait until all the files have been downloaded. From a
performance point of view, this is an issue. Also, since you can only start
analyzing the resources at that point, you will overload the CPU at that point.
If you can unzip the files one by one, you can spread the load over a much
longer time.



± In the equation you paint above something important is missing: the fact that
± there's a round-trip delay per request (even with http2.0), and that the only
± way to avoid it is to bundle things, as in .zip bundling, to minimize the
± (number of requests and thus the) impact of latencies.

Go find some HTTP2 presentation, you'll learn things ;-)


Re: Re: Generic Bundling

2013-10-23 Thread Ilya Grigorik
Hey all. Late to the discussion here, but after scanning the thread,
figured it might be worth sharing a few observations...

The fact that we have to bundle files at the application layer is an
unfortunate limitation of the HTTP 1.x protocol. Specifically, because HTTP 1.x
forces us to serialize responses (actually, in practice it also forces us
to serialize requests on the client, since pipelining adoption has
effectively failed), it means we can have up to 6 parallel transfers per
origin * number of origins (aka domain sharding). This sucks at multiple
levels: it adds unnecessary complexity to our build/deploy steps (e.g. try
explaining sprites to any designer...), and it also *hurts* performance in
many cases.

For details on cost/benefits of pipelining, sharding, concatenation:
-
http://chimera.labs.oreilly.com/books/123000545/ch11.html#HTTP11_MULTIPLE_CONNECTIONS
-
http://chimera.labs.oreilly.com/books/123000545/ch11.html#_domain_sharding
-
http://chimera.labs.oreilly.com/books/123000545/ch11.html#CONCATENATION_SPRITING

As noted in last link, concatenating large bundles is actually *the
opposite* of what you want to do for performance:
a) we want to deliver small, granular resources, such that they can be
cached, invalidated, and updated individually
b) small resources allow incremental processing and execution
c) small resources map to modular code and better prioritization (e.g. I
need this submodule only after X operation or in Y view, etc; see the sketch
at the end of this message)

In practice, (a) is a serious problem for many large sites already: every
rev of their JS / CSS bundle results in a massive (and mostly unnecessary)
update - case in point, the GMail team has spent an enormous amount of cycles
trying to figure out how to scale this process without running a
self-imposed DoS attack every time their JS asset is rev'ed (plus doing so
in an efficient way for users on slow connections). Similarly, in our
Google PageSpeed libraries we dropped the "concatenate all things"
strategy several years ago after we realized that it hurts perceived
performance: instead we merge small files into bundles of up to 30-50 KB
in size -- even this is annoying and ideally unnecessary, and we recommend
disabling all spriting / concat logic when running over SPDY.

Long story short: we don't want large bundles.

Also, large bundles break prioritization! To deliver good performance we
want modular assets with different priority levels. This is exactly why
we're working on the ResourcePriorities spec:
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/ResourcePriorities/Overview.html.
Merging multiple files into a single blob, or a zip, breaks this model and
makes the situation even worse:

a) a single byte update on any part of the bundle would force downloading
the entire blob ($$$, slow for clients on slower connections, low cache hit
rates; rough numbers below)
b) streaming sub-resources from a bundle is at best complicated, and in the
worst case completely broken, which only makes the performance story even
worse
c) the entire bundle is delivered with a single priority level
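
To put rough numbers on (a), under assumed sizes (a 500 kB bundle vs. fifty
10 kB files, with one file changing per release; every figure here is an
assumption for illustration):

```javascript
// Back-of-envelope for (a); all numbers are assumptions.
var bundleKB = 500, fileKB = 10, releasesPerYear = 50;
var bundledCost  = bundleKB * releasesPerYear; // 25,000 kB re-downloaded
var granularCost = fileKB   * releasesPerYear; //    500 kB re-downloaded
// A returning visitor pays ~50x the bandwidth for the same one-line fixes.
```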

In short, pitching zip bundling as a performance optimization is a
complete misnomer. If anything, it will only make things worse, even for
HTTP 1.x clients. And with HTTP 2.0 on near horizon, the limitation in
number of requests is completely removed: we have full multiplexing,
prioritization, flow control.. which is exactly where we want to go if we
want to accelerate page load times.

ig

P.S. HTTP 2 recommendations:
http://chimera.labs.oreilly.com/books/123000545/ch13.html#_removing_1_x_optimizations
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-15 Thread David Bruant

On 14/10/2013 23:25, Brendan Eich wrote:

Jorge Chamorro wrote:

The only work around for that is making as few requests as possible.


+∞, +§, and beyond.

This is deeply true, and a hot topic with browser/network-stack 
engineers right now.
It ought to be with software engineers as well, and it's one of the 
reasons why promise pipelining [1] is so appealing.


David

[1] http://erights.org/elib/distrib/pipeline.html
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-15 Thread David Bruant

On 14/10/2013 23:20, Jorge Chamorro wrote:

On 14/10/2013, at 22:11, David Bruant wrote:


You already can with inlining, can't you?

Yes and no:

-It's much more complicated than pre-zipping a bunch of files and adding a ref 
attribute.
-It requires additional logic at the server side, and more programming.
Not really. If there were a need for lots of people, someone would have 
come up with an open-source grunt task already (or any other open source 
tooling).
The fact that people haven't tried too hard may also be an indication 
that bundling isn't such a pressing need.


With the appropriate tooling, it could be as simple to inline in an HTML 
page as it is to gzip (2 clicks for each).


With tooling being such a hot topic these days (so many talks on tooling 
and automation in confs!) and the MIT-licence culture around it, I feel 
we, web devs, should start considering asking less from the platform and 
more from the tooling.



-It's not always trivial: often you can't simply concatenate and expect it to 
work as-is (e.g. module scripts).
-You might be forcing the server to build and/or gzip (à la PHP) on the fly = 
much more load per request.

This is equally true for zip-bundling, no?


-Inlined source isn't always semantically === non-inlined source = bugs.

True. It's admittedly easy to escape with decent discipline.


It would also be very interesting to know: if you had .zip packing, would you be 
inlining?

... yeah ... good point, I probably wouldn't :-)

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Anne van Kesteren
On Sun, Oct 13, 2013 at 8:30 PM, Brendan Eich bren...@mozilla.com wrote:
 Anne van Kesteren mailto:ann...@annevk.nl
 It would require each end point that wants to support this to have new
 syntax. A solution from http://wiki.whatwg.org/wiki/Zip#URLs will not
 require updating all the end points.

 That doc is a bit cryptic.

 Can you explain how new URL syntax to address a ZIP member (I like, we had
 it in the ancient days with JAR files [ZIPs of course] using '!') avoids
 updating both end points? The content on the server starts using

 <script src="assets.zip!lib/main.js"></script>

 How do old browsers cope?

The idea is to use a somewhat more unique separator, e.g. $sub/. Old
browsers would simply fetch the URL from the server and if the server
is written with legacy in mind would serve up the file from there. New
browsers would realize it's a separator and fetch the URL up to the
separator and then do addressing within the zip archive once it's
retrieved.
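
A sketch of the client-side split, assuming the "$sub/" separator from the
example above (the exact token was still under discussion, and splitZipURL
is an illustrative name, not any browser's API):

```javascript
// Hypothetical parsing step in a new browser; an old browser would just
// send the whole string to the server as an ordinary URL.
function splitZipURL(url) {
  var sep = '$sub/';
  var i = url.indexOf(sep);
  if (i === -1) return { archive: url, member: null }; // plain URL
  return {
    archive: url.slice(0, i),           // fetched from the network
    member:  url.slice(i + sep.length)  // addressed inside the archive
  };
}

splitZipURL('assets.zip$sub/lib/main.js');
// -> { archive: 'assets.zip', member: 'lib/main.js' }
```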

https://gist.github.com/wycats/220039304b053b3eedd0 has a more
complete summary of our current thinking. (Not entirely up to date.)


-- 
http://annevankesteren.nl/
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Brendan Eich

Anne van Kesteren mailto:ann...@annevk.nl
October 14, 2013 6:16 AM

The idea is to use a somewhat more unique separator, e.g. $sub/. Old
browsers would simply fetch the URL from the server and if the server
is written with legacy in mind would serve up the file from there. New
browsers would realize it's a separator and fetch the URL up to the
separator and then do addressing within the zip archive once it's
retrieved.


Ok, so to be super-clear, this still has the problem of old browsers 
creating request storms.


/be

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Jorge Chamorro
On 13/10/2013, at 21:34, Brendan Eich wrote:
 Jorge Chamorro wrote:
 
 Are main.js and assets.zip two separate files, or is main.js expected to 
 come from inside assets.zip?
 
 The latter.
 
  I think the latter would be best because it would guarantee that the assets 
 are there by the time main.js runs, as if they were local files, ready to be 
 require()d synchronously.
 
 How would old browsers cope, though? They would load only lib/main.js (and 
 possibly make a request storm, as Russell brought out elsewhere in this 
 thread), so (synchronous) require of another member of assets.zip might or 
 might not work.

Exactly.

The only 'fix' that I can think of is to use sync XHRs (I know, I know...). For 
example this code would run fine in any browser, with or without .zip packages:

```
function require (modulo) {
  if (!require.modulos) {
    require.modulos = Object.create(null); // the module cache
  }
  if (!(modulo in require.modulos)) {
    // Synchronous XHR: it blocks, but it returns almost instantly when
    // the file is already local (e.g. shipped inside the .zip package).
    var xhr = new XMLHttpRequest();
    xhr.open('GET', modulo, false);
    xhr.send();
    // The module body is expected to `return` its exports.
    require.modulos[modulo] = Function('require', xhr.responseText)(require);
  }
  return require.modulos[modulo];
}

require('js/main.js');
```

Only much slower in old browsers, but lightning fast with .zip packages (if you 
choose wisely what you put into the .zip package).

 A prefetching link element might not suffice in old browsers, I'm pretty 
 sure it won't.
 
 If the only way to cope with downrev browsers is to use Traceur, so be it. We 
 just need to be sure we're not missing some clever alternative.

I for one don't have any better ideas, sorry.

Thank you,
-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread David Bruant

On 14/10/2013 15:16, Anne van Kesteren wrote:

The idea is to use a somewhat more unique separator, e.g. $sub/. Old
browsers would simply fetch the URL from the server and if the server
is written with legacy in mind would serve up the file from there. New
browsers would realize it's a separator and fetch the URL up to the
separator and then do addressing within the zip archive once it's
retrieved.

https://gist.github.com/wycats/220039304b053b3eedd0 has a more
complete summary of our current thinking. (Not entirely up to date.)
I feel this document lacks a use case/problem/rationale section. It'd 
also be interesting to explore how people solve the same problem today 
(inlining mostly) and explain why this proposal is significantly (!) 
better (I doubt it is, but I'm always open to being proven wrong).


From what I understand, the problem being solved by bundling is faster 
initial load times (feel free to correct me at this point).


Back to something Brendan said:
I agree with your approach that values ease of content-only (in the 
HTML, via script src= ref=) migration. I think David and others 
pointing to HTTP 2 undervalue that. 
Recently, a friend of mine had a performance problem on his blog. It's a 
Wordpress blog on an average hosting service, nothing fancy. The landing 
page was loading in 14sec. He applied a few tricks (he's not a web dev, 
so nothing too fancy), got a CDN wordpress plugin, async-loaded facebook 
and other social widgets and now the page loads in 4.5 secs and has 
something on screen within about 2sec.
There are 68 requests, 1.2Mb (!) of content downloaded, but it works. 
There are also lots of inline scripts in the middle of the page and that 
creeps me out and makes me want to murder a couple of people who work on 
Wordpress themes... but it works. The web works.
And that's an already semi-complex site. I imagine things would only be 
better with content-only websites. How much are we trying to save with 
the bundling proposal? 200ms? 300ms? Is it really worth it? It feels like 
we're trying to solve a first-world problem.


I feel that before adding new syntax and complicating the platform yet 
again, a thorough performance study should be made to be sure it'll 
be significantly better than what we do today with inlining.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Jorge Chamorro
On 14/10/2013, at 17:20, David Bruant wrote:

 How much are we trying to save with the bundling proposal? 200ms? 300ms? Is 
 it really worth it? It feels like we're trying to solve a first-world problem.

I think that the savings depend very much on the latency. For example from 
where I am to Brazil the latency (round-trip) is almost 500 ms, so if I could 
bundle 60 files in a .zip instead of requesting them in series (say at max 6 in 
parallel), the page would load in a little more than 500 ms instead of in 10 
seconds.

You can also think about it this way: the price per request with 500 ms of 
latency is 500 kB on a 1 megabyte-per-second ADSL, or 1 megabyte on a 2 
megabyte/s ADSL, etc. So for 60 requests it's 30 or 60 megabytes.
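
The arithmetic behind those figures, with the assumptions spelled out (two
round trips per cold request is a guess that reproduces the 10-second number;
transfer time is ignored throughout):

```javascript
// Back-of-envelope; the 2-RTT cost per cold request (connection setup +
// fetch) is an assumption, not a measurement.
var rtt = 0.5, files = 60, parallel = 6;
var rounds = Math.ceil(files / parallel); // 10 sequential rounds of fetches
var unbundled = rounds * 2 * rtt;         // ~10 s for 60 separate requests
var bundled   = 1 * rtt;                  // ~0.5 s: one warm request for the .zip
```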

Yes a server could perhaps fix that for me almost transparently, but with this 
I could as well fix it all by myself.
-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


RE: Generic Bundling

2013-10-14 Thread Andrea Giammarchi
IIRC roundtrip happens once per domain so your math is a bit off.
However, I've solved that using a single js Object with all modules
packed as strings and parsed once at require time to avoid the huge
overhead of parsing everything at once. The name is require-client and,
once gzipped, it gives similar advantages. However, few adopted such an
approach, for some reason I don't know

Sent from my Windows Phone From: Jorge Chamorro
Sent: 10/14/2013 9:21 AM
To: David Bruant
Cc: Brendan Eich; es-discuss@mozilla.org
Subject: Re: Generic Bundling
On 14/10/2013, at 17:20, David Bruant wrote:

 How much are we trying to save with the bundling proposal? 200ms? 300ms? Is 
 it really worth it? It feels like we're trying to solve a first-world problem.

I think that the savings depend very much on the latency. For example
from where I am to Brazil the latency (round-trip) is almost 500 ms,
so if I could bundle 60 files in a .zip instead of requesting them in
series (say at max 6 in parallel), the page would load in a little
more than 500 ms instead of in 10 seconds.

You can also think about it this way: the price per request with 500
ms of latency, is 500kB on a 1 megabyte per second ADSL, or 1 megabyte
in a 2 megabyte/s ADSL, etc. So for 60 requests it's 30 or 60
megabytes.

Yes a server could perhaps fix that for me almost transparently, but
with this I could as well fix it all by myself.
-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Andrea Giammarchi
the module, if interested:
https://github.com/WebReflection/require_client#require_client

Best Regards


On Mon, Oct 14, 2013 at 9:47 AM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 IIRC roundtrip happens once per domain so your math is a bit off.
 However, I've solved that using a single js Object with all modules
 packed as strings and parsed once at require time to avoid the huge
 overhead of parsing everything at once. The name is require-client and,
 once gzipped, it gives similar advantages. However, few adopted such an
 approach, for some reason I don't know

 Sent from my Windows Phone From: Jorge Chamorro
 Sent: 10/14/2013 9:21 AM
 To: David Bruant
 Cc: Brendan Eich; es-discuss@mozilla.org
 Subject: Re: Generic Bundling
 On 14/10/2013, at 17:20, David Bruant wrote:

  How much are we trying to save with the bundling proposal? 200ms? 300ms?
 Is it really worth it? It feels like we're trying to solve a first-world
 problem.

 I think that the savings depend very much on the latency. For example
 from where I am to Brazil the latency (round-trip) is almost 500 ms,
 so if I could bundle 60 files in a .zip instead of requesting them in
 series (say at max 6 in parallel), the page would load in a little
 more than 500 ms instead of in 10 seconds.

 You can also think about it this way: the price per request with 500
 ms of latency, is 500kB on a 1 megabyte per second ADSL, or 1 megabyte
 in a 2 megabyte/s ADSL, etc. So for 60 requests it's 30 or 60
 megabytes.

 Yes a server could perhaps fix that for me almost transparently, but
 with this I could as well fix it all by myself.
 --
 ( Jorge )();
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Jorge Chamorro
On 14/10/2013, at 18:47, Andrea Giammarchi wrote:

 IIRC roundtrip happens once per domain so your math is a bit off.

Can you elaborate? I don't quite understand...

Thank you,
-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread David Bruant

On 14/10/2013 18:21, Jorge Chamorro wrote:

On 14/10/2013, at 17:20, David Bruant wrote:


How much are we trying to save with the bundling proposal? 200ms? 300ms? Is it 
really worth it? It feels like we're trying to solve a first-world problem.

I think that the savings depend very much on the latency. For example from 
where I am to Brazil the latency (round-trip) is almost 500 ms, so if I could 
bundle 60 files in a .zip instead of requesting them in series (say at max 6 in 
parallel), the page would load in a little more than 500 ms instead of in 10 
seconds.

You can also think about it this way: the price per request with 500 ms of 
latency, is 500kB on a 1 megabyte per second ADSL, or 1 megabyte in a 2 
megabyte/s ADSL, etc. So for 60 requests it's 30 or 60 megabytes.

Yes a server could perhaps fix that for me almost transparently, but with this 
I could as well fix it all by myself.

You already can with inlining, can't you?

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Jorge Chamorro
On 14/10/2013, at 22:11, David Bruant wrote:

 You already can with inlining, can't you?

Yes and no:

-It's much more complicated than pre-zipping a bunch of files and adding a ref 
attribute.
-It requires additional logic at the server side, and more programming.
-It's not always trivial: often you can't simply concatenate and expect it to 
work as-is (e.g. module scripts).
-You might be forcing the server to build and/or gzip (à la PHP) on the fly = 
much more load per request.
-Inlined source isn't always semantically === non-inlined source = bugs.
-Etc.
-Etc.

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Jorge Chamorro
On 14/10/2013, at 22:27, Andrea Giammarchi wrote:

 AFAIK you have those 500ms delay per roundtrip, as you said, but not per 
 domain.
 
 I am talking about mobile and radio behavior where fetching from multiple 
 sources will result in a roundtrip mess/hell but fetching all resources from 
 a single domain should result in a roundtrip delay only for the first file.
 
 Accordingly, avoiding multiple CDNs for different external scripts might help 
 to speed up first-contact too.
 
 I don't remember (I might look for it) who brought all these facts to the 
 table, but I remember this was a practical/concrete situation 3+ years ago and I 
 don't expect it to be different today.
 
 As a summary: if you have a 500ms delay and 10 files, you won't have a 500 * 10 ms 
 delay, but 500 plus the common network delay according to your host 
 situation, so 500 + (100 * 10), considering a regular 100 ms delay
 
 I mean, still some delay, but the 500 is not multiplied ... that's what I 
 meant :-)

You are sitting on the moon with a lamp sending signals to the earth, and no 
matter what you do it takes more than 1 second for the light of your lamp to 
reach the earth. There is a mirror on the earth reflecting the light back 
to you; the round-trip will be more than 2 seconds, and there's no way to fix 
that.

What I meant with round-trip latency is: once the connection has been 
established, a network packet takes almost 250 ms to go from the ethernet port 
of my computer to the ethernet port of a server in Brazil, and another 250 ms 
for the response packet to come back.

The only work around for that is making as few requests as possible.
-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Brendan Eich

Jorge Chamorro wrote:

The only work around for that is making as few requests as possible.


+∞, +§, and beyond.

This is deeply true, and a hot topic with browser/network-stack 
engineers right now.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Russell Leggett
This is probably the wrong place to ask the question, but I was just
thinking about the whole HTTP 2 server push thing. In a way, it surely wins
in the # of requests camp if it works as described - you request index.html
and the server intelligently starts pushing you not only index.html, but
also everything index.html needs. Even in the case of bundling, you at
least need to wait for index.html to come back before you can ask for the
bundle. And even better, because it sends everything in original granular
form, surely the caching story must be better, you won't wind up
overbundling or having overlapping bundles. Then I realized a major
(potential) flaw. If the server always pushes the dependencies for
index.html without being asked - doesn't that completely wreck the browser
cache? Browser caching relies on knowing when - and when *not* to ask. If
server push starts sending things without being asked, isn't that
potentially sending down a lot of unnecessary data?

- Russ
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Jorge Chamorro
On 14/10/2013, at 22:11, David Bruant wrote:

 You already can with inlining, can't you?

It would also be very interesting to know: if you had .zip packing, would you be 
inlining?

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-14 Thread Andrea Giammarchi
Inline, from the Moon


On Mon, Oct 14, 2013 at 2:22 PM, Jorge Chamorro jo...@jorgechamorro.com wrote:

 What I meant with round-trip latency is: once the connection has been
 established


I was talking about this latency, those 500ms in my example



 , a network packet takes almost 250 ms to go


while my 100ms per request was the equivalent of the 250 you are talking
about here.

So we are saying the same thing; I just defined the roundtrip, probably
wrongly, as only the first-contact one.



 The only work around for that is making as few requests as possible.


never said the opposite, and if you read the repo I've pointed out you'll
realize I've already done this pragmatically but nobody seemed to be
interested at that time.

I have an inline require that uses Function over modules on demand but all
modules are packed in a single minzipped JSON object

```javascript
// the equivalent of your file.zip in my case
{
  "module1": "content as string 1",
  "module-N": "other content"
}
```

The object does, inline, a similar require so that

```javascript
if (!modules[name])
  // compile the stored source once; the parameter names were implicit above
  modules[name] = Function('global', 'require', 'module',
    object[name]).call(module, global, require, module);
```

and you have an inline require that, in development, does the synchronous Ajax
call.

Compared with the zip solution, if targeting only javascript, it gives you
modules behavior. You still need to pack them all together, and you will still
use minifiers before packing to save as many bytes as possible, so debugging is
hard in any case and bugs introduced by minifiers could still be there. But,
as it would be for the JS zip solution, you could just inline the whole
script as text without a minifier ^_^


The extra plus is the ability to use an initialization inside the
script, so that

```
<script src="package.js" exec>
  require('any-module-in-package').init();
</script>
```
```

or something similar. In a few words, creating a pre-minzipped package file is
kinda possible, but I am with you about having this standardized and
simplified without the need for HTTP2.

Best Regards
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-13 Thread Andrea Giammarchi
On Sat, Oct 12, 2013 at 12:07 PM, Brendan Eich bren...@mozilla.com wrote:


 However, Russell's counter-argument that fallback in older browsers to
 loading lots of little files, request by request, from the server directory
 hierarchy, may be too painful, reducing the value as a migration technique.


this is what happens today with external CDN scripts and/or AMD-like
solutions regardless ... if these are not bundled all together, isn't it?


Is there a way for old browsers that avoids a request storm, and which can
 be expressed entirely in the hosted content (no protocol stack update
 problem)?


hacking the HTML SCRIPT prototype upfront, it's possible to improve the
behavior, but it's hard to make it fully able to ignore network requests.

However, this is not usually the main concern for new/recent proposals, so I
wonder why it is in such a specific case?

Best Regards
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-13 Thread Brian Kardell
On Oct 13, 2013 4:40 AM, Andrea Giammarchi andrea.giammar...@gmail.com
wrote:


 On Sat, Oct 12, 2013 at 12:07 PM, Brendan Eich bren...@mozilla.com
wrote:


 However, Russell's counter-argument that fallback in older browsers to
loading lots of little files, request by request, from the server directory
hierarchy, may be too painful, reducing the value as a migration technique.


 this is what happens today with external CDN scripts and/or AMD like
solutions regardless ... if these are not bundled all together, isn't it?

To me at least, the primary difference there is that in that case it is in
the author's hands, whereas native feature support is in the hands of the user
agent, creating a potentially huge and unmanageable perf delta in the
short/medium term.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-13 Thread Brendan Eich

Anne van Kesteren mailto:ann...@annevk.nl
October 11, 2013 12:34 AM
On Fri, Oct 11, 2013 at 3:53 AM, Brendan Eich bren...@mozilla.com wrote:

On Thu, Oct 10, 2013 at 8:10 PM, Andrea Giammarchi
andrea.giammar...@gmail.com wrote:

 You are confining the problem in HTTP only scenarios while the
 solution provided by

 <script src="lib/main.js" ref="assets.zip"></script>

No, you're right -- agree with you and Andrea, this is sweet.


It would require each end point that wants to support this to have new
syntax. A solution from http://wiki.whatwg.org/wiki/Zip#URLs will not
require updating all the end points.


That doc is a bit cryptic.

Can you explain how new URL syntax to address a ZIP member (I like, we 
had it in the ancient days with JAR files [ZIPs of course] using '!') 
avoids updating both end points? The content on the server starts using


 <script src="assets.zip!lib/main.js"></script>

How do old browsers cope?


HTML nerd nit: is ref the right name? I thought it was used as an
attribute name somewhere in HTML or nearby, but I can't find it. Cc'ing
Anne.


You might be thinking of rev (which is obsolete now in favor of
using rel everywhere).


That's it, thanks!

/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-13 Thread Brendan Eich

Jorge Chamorro wrote:

On 11/10/2013, at 03:53, Brendan Eich wrote:

  On Thu, Oct 10, 2013 at 8:10 PM, Andrea Giammarchi
  andrea.giammar...@gmail.com wrote:

  You are confining the problem in HTTP only scenarios while the
  solution provided by

  <script src="lib/main.js" ref="assets.zip"></script>

  No, you're right -- agree with you and Andrea, this is sweet.

Are main.js and assets.zip two separate files, or is main.js expected to come 
from inside assets.zip?


The latter.


  I think the latter would be best because it would guarantee that the assets 
are there by the time main.js runs, as if they were local files, ready to be 
require()d synchronously.


How would old browsers cope, though? They would load only lib/main.js 
(and possibly make a request storm, as Russell brought out elsewhere in 
this thread), so (synchronous) require of another member of assets.zip 
might or might not work.


A prefetching link element might not suffice in old browsers, I'm 
pretty sure it won't.


If the only way to cope with downrev browsers is to use Traceur, so be 
it. We just need to be sure we're not missing some clever alternative.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-13 Thread Andrea Giammarchi
my latest message was about this situation

www/
  my-assets/
js/
  my-bundle.js
css/
  some.css
image/
  some.png

  assets.zip

where the latter contains the equivalent of the my-assets folder.

The prefetching link won't do a thing in old browsers; it might be a directive
for modern ones suggesting a package.zip file to use instead of the folder name.

Old browsers will do everything as they do today, new browsers have the
ability to use one single package instead of requiring the file.

As a result:

<link rel="package" name="my-assets" type="application/zip" href="assets.zip">
<script src="my-assets/js/my-bundle.js"></script>

old browsers will simply request that file through HTTP, while capable browsers
will use the aliased zip file through my-assets.

It's actually very similar to the initial proposal, also creating a precedent
for network aliases independent from mod_rewrite and friends (a client/UA-only,
mod_rewrite-like approach)

Cheers

On Sun, Oct 13, 2013 at 12:34 PM, Brendan Eich bren...@mozilla.com wrote:

 Jorge Chamorro wrote:

 On 11/10/2013, at 03:53, Brendan Eich wrote:

 

   On Thu, Oct 10, 2013 at 8:10 PM, Andrea Giammarchi
 andrea.giammar...@gmail.com wrote:

 You are confining the problem in HTTP only scenarios while the
 solution provided by

 <script src="lib/main.js" ref="assets.zip"></script>

 No, you're right -- agree with you and Andrea, this is sweet.


 Are main.js and assets.zip two separate files, or is main.js expected to
 come from inside assets.zip?


 The latter.


I think the latter would be best because it would guarantee that the
 assets are there by the time main.js runs, as if they were local files,
 ready to be require()d synchronously.


 How would old browsers cope, though? They would load only lib/main.js (and
 possibly make a request storm, as Russell brought out elsewhere in this
 thread), so (synchronous) require of another member of assets.zip might or
 might not work.

 A prefetching link element might not suffice in old browsers, I'm pretty
 sure it won't.

 If the only way to cope with downrev browsers is to use Traceur, so be it.
 We just need to be sure we're not missing some clever alternative.

 /be

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-12 Thread Brendan Eich

Andrea Giammarchi wrote:
Agreed that this might be the wrong place, but it's also surprising 
that there was a W3C recommendation and Mozilla, the biggest standards 
promoter I know, ignored it.


Yes, we went with JSON over XML. Sorry.

/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-12 Thread Brendan Eich
I agree with your approach that values ease of content-only (in the 
HTML, via script src= ref=) migration. I think David and others pointing 
to HTTP 2 undervalue that.


However, per Russell's counter-argument, fallback in older browsers to 
loading lots of little files, request by request, from the server 
directory hierarchy may be too painful, reducing the value as a 
migration technique.


Is there a way for old browsers that avoids a request storm, and which 
can be expressed entirely in the hosted content (no protocol stack 
update problem)?


/be


Jorge Chamorro mailto:jo...@jorgechamorro.com
October 11, 2013 3:14 PM

I appreciate the beauty in 'speedy' and http2.0, but it requires an 
upgrade of both ends to http2.0, all the servers and browsers in the 
world.


We could have the .zip prefetch ref attribute operative tomorrow in 
the next browser update, an update which we are going to do anyway. No 
need to touch any server.


There are many more client than server side developers, and to grasp 
the idea behind an assets.zip prefetch ref attribute takes just a few 
seconds, or perhaps a minute, no more. The word spreads, and in less 
than a year we'd have the web served zipped, but servers are much more 
complicated than that, and no two servers are programmed nor 
configured equal.


And http2.0 and 'speedy' and all their beauty too, in the future. Why 
does it have to be one or the other?


Russell Leggett mailto:russell.legg...@gmail.com
October 11, 2013 6:53 AM

 As you can see the resource packages attempt got dropped.
Perhaps this proposal will go through because it is tied to the
module loader?

It's sad. What happened? Why was it ditched? Was it, perhaps, too
ahead of its time?

Let's try again :-)


As you can see, it basically fell to the same conclusion as you are 
trying to fight right now - SPDY and HTTP pipelining. The idea is that 
this can be transparently handled better with HTTP rather than with a 
bundling approach.


- Russ

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
Jorge Chamorro mailto:jo...@jorgechamorro.com
October 11, 2013 6:44 AM
On 11/10/2013, at 15:15, Russell Leggett wrote:


Just wanted to point out a couple of previous attempts at something similar to 
generic bundling and the reactions it got, because so far it hasn't panned out.

Way back in 2008, it was my one and only real contribution to the whatwg list 
before getting a little frustrated and moving on: 
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2008-July/015411.html


Brilliant, Yes! That's it!

if all js, css, and even images and other files could be zipped up
or tarred, that would only require a single HTTP request. This could
basically just add the files to the browser cache or other local storage
mechanism so that requests for the resources would not need to make an extra
trip

2008? That's 5 looong years ago.


Then a year later, Alex Limi independently came up with a very similar 
proposal: http://limi.net/articles/resource-packages/
and actually got a version of it working in some branch of firefox: 
https://bugzilla.mozilla.org/show_bug.cgi?id=529208
And here's a couple of discussions on that proposal: 
https://groups.google.com/forum/#!topic/mozilla.dev.platform/MXeSYsawUgU
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-August/027582.html

As you can see the resource packages attempt got dropped. Perhaps this proposal 
will go through because it is tied to the module loader?


It's sad. What happened? Why was it ditched? Was it, perhaps, too ahead of its 
time?

Let's try again :-)

Russell Leggett mailto:russell.legg...@gmail.com
October 11, 2013 6:15 AM
Just wanted to point out a couple of previous attempts at something 
similar to generic bundling and the reactions it got, because so far 
it hasn't panned out.


Way back in 2008, it was my one and only real contribution to the 
whatwg list before getting a little frustrated and moving on: 
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2008-July/015411.html


Then a year later, Alex Limi independently came up with a very similar 
proposal: http://limi.net/articles/resource-packages/
and actually got a version of it working in some branch of firefox: 
https://bugzilla.mozilla.org/show_bug.cgi?id=529208
And here's a couple of discussions on that proposal: 
https://groups.google.com/forum/#!topic/mozilla.dev.platform/MXeSYsawUgU 

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-August/027582.html

As you can see the resource packages attempt got dropped. Perhaps this 
proposal will go through because it is tied to the module loader?


Not sure if this changes anything, carry on.

- Russ

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

Re: Generic Bundling

2013-10-12 Thread David Bruant

On 12/10/2013 21:07, Brendan Eich wrote:
I agree with your approach that values ease of content-only (in the 
HTML, via script src= ref=) migration. I think David and others 
pointing to HTTP 2 undervalue that.
I probably underestimate the importance of content-only websites indeed. 
Do we have numbers? trends?


However, per Russell's counter-argument, fallback in older browsers to 
loading lots of little files, request by request, from the server 
directory hierarchy may be too painful, reducing the value as a 
migration technique.


Is there a way for old browsers that avoids a request storm, and which 
can be expressed entirely in the hosted content (no protocol stack 
update problem)?

concatenation and sprites?
He doesn't use the word bundling, but in his talks and article, Ilya 
Grigorik suggests that inlining resources is the equivalent of server 
push (and bundling in our discussion). It works really well with lots of 
small files.
To a large extent, the problem we have doesn't really exist since 
resources can be inlined. It just requires annoying engineering 
(especially with script execution ordering issues).


David



/be


Jorge Chamorro mailto:jo...@jorgechamorro.com
October 11, 2013 3:14 PM

I appreciate the beauty in 'speedy' and http2.0, but it requires an 
upgrade of both ends to http2.0, all the servers and browsers in the 
world.


We could have the .zip prefetch ref attribute operative tomorrow in 
the next browser update, an update which we are going to do anyway. 
No need to touch any server.


There are many more client than server side developers, and to grasp 
the idea behind an assets.zip prefetch ref attribute takes just a few 
seconds, or perhaps a minute, no more. The word spreads, and in less 
than a year we'd have the web served zipped, but servers are much 
more complicated than that, and no two servers are programmed nor 
configured equal.


And http2.0 and 'speedy' and all their beauty too, in the future. Why 
does it have to be one or the other?


Russell Leggett mailto:russell.legg...@gmail.com
October 11, 2013 6:53 AM

 As you can see the resource packages attempt got dropped.
Perhaps this proposal will go through because it is tied to the
module loader?

It's sad. What happened? Why was it ditched? Was it, perhaps, too
ahead of its time?

Let's try again :-)


As you can see, it basically fell to the same conclusion as you are 
trying to fight right now - SPDY and HTTP pipelining. The idea is that 
this can be transparently handled better with HTTP rather than with a 
bundling approach.


- Russ

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
Jorge Chamorro mailto:jo...@jorgechamorro.com
October 11, 2013 6:44 AM
On 11/10/2013, at 15:15, Russell Leggett wrote:

Just wanted to point out a couple of previous attempts at something 
similar to generic bundling and the reactions it got, because so far 
it hasn't panned out.


Way back in 2008, it was my one and only real contribution to the 
whatwg list before getting a little frustrated and moving on: 
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2008-July/015411.html


Brilliant, Yes! That's it!

if all js, css, and even images and other files could be zipped up
or tarred, that would only require a single HTTP request. This could
basically just add the files to the browser cache or other local storage
mechanism so that requests for the resources would not need to make an
extra trip

2008? That's 5 looong years ago.

Then a year later, Alex Limi independently came up with a very 
similar proposal: http://limi.net/articles/resource-packages/
and actually got a version of it working in some branch of firefox: 
https://bugzilla.mozilla.org/show_bug.cgi?id=529208
And here's a couple of discussions on that proposal: 
https://groups.google.com/forum/#!topic/mozilla.dev.platform/MXeSYsawUgU 

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-August/027582.html 



As you can see the resource packages attempt got dropped. Perhaps 
this proposal will go through because it is tied to the module loader?


It's sad. What happened? Why was it ditched? Was it, perhaps, too 
ahead of its time?


Let's try again :-)

Russell Leggett mailto:russell.legg...@gmail.com
October 11, 2013 6:15 AM
Just wanted to point out a couple of previous attempts at something 
similar to generic bundling and the reactions it got, because so far 
it hasn't panned out.


Way back in 2008, it was my one and only real contribution to the 
whatwg list before getting a little frustrated and moving on: 
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2008-July/015411.html


Then a year later, Alex Limi independently came up with a very 
similar proposal: http://limi.net/articles/resource-packages/
and actually got a version of it working in some branch of firefox: 
https://bugzilla.mozilla.org/show_bug.cgi?id=529208
And here's

Re: Generic Bundling

2013-10-11 Thread Jorge Chamorro
On 11/10/2013, at 03:10, Andrea Giammarchi wrote:
 
 
 Last personal thought: this is way nicer than any AMD solution I've seen, 
 giving a real alternative to async modules too via script defer/async 
 attributes without requiring boilerplate all over to include on demand.

Because all the files in the .zip would appear to be 'local', a synchronous 
require() can be built on top of that, and suddenly we'd have almost 100% 
node-style modules compatibility in browsers. Or am I missing something?

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Jorge Chamorro
On 11/10/2013, at 03:53, Brendan Eich wrote:
 
 On Thu, Oct 10, 2013 at 8:10 PM, Andrea Giammarchi 
 andrea.giammar...@gmail.com wrote:
 
You are confining the problem in HTTP only scenarios while the
solution provided by
 
 <script src="lib/main.js" ref="assets.zip"></script>
 
 
 No, you're right -- agree with you and Andrea, this is sweet.

Are main.js and assets.zip two separate files, or is main.js expected to come 
from inside assets.zip? I think the latter would be best because it would 
guarantee that the assets are there by the time main.js runs, as if they were 
local files, ready to be require()d synchronously.

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread David Bruant

On 11/10/2013 03:10, Andrea Giammarchi wrote:
You are confining the problem in HTTP only scenarios while the 
solution provided by


<script src="lib/main.js" ref="assets.zip"></script>

can be handy/reused in offline packaged applications too so HTTP 2 
might win on HTTP but it's not a general HTML App packaging option.
Packaged apps have other options to preload resources. For instance 
resources to be preloaded could be listed individually in the manifest file.

Arguably, we already have link@rel=prefetch for that purpose too.
Providing a zip in the manifest file could work, but I'm not sure I see 
the benefit over individual files. Disk fragmentation issues maybe?


Back to the HTTP support, I would go for the possibility to bundle 
through CDN too, which might penalize less-used libraries (like a few of 
mine) but boost the most common scenarios across websites or apps 
(thinking about an Angular bundle, Ember bundle, jQueryUI bundle or ExtJS 
bundle, etc)
Except for geographical distribution, CDNs are rendered irrelevant in 
HTTP 2.0, because HTTP 2.0 solves all the parallelism issues HTTP 1.1 
has. Hopefully CDN will evolve to propose to deliver your full app with 
server push (so no need for library bundles since the bundle is part of 
your application).


Back to the deployment question: when HTTP 2.0 is out with server push 
(SPDY is already in Firefox, Chrome and IE11), will there still be a use 
for @ref? Will browsers with support for @ref and no support for server 
push exist at all?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Jorge Chamorro
On 11/10/2013, at 12:02, David Bruant wrote:

 Providing a zip in the manifest file could work, but I'm not sure I see the 
 benefit over individual files. Disk fragmentation issues maybe?

One benefit is that a single .zip can fetch a bunch of files in a single 
network round trip.

Another is that once the .zip has been unzipped, its files can be accessed 
synchronously.

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread David Bruant

On 11/10/2013 12:46, Jorge Chamorro wrote:

On 11/10/2013, at 12:02, David Bruant wrote:


Providing a zip in the manifest file could work, but I'm not sure I see the 
benefit over individual files. Disk fragmentation issues maybe?

One benefit is that a single .zip can fetch a bunch of files in a single 
network round trip.
The manifest file was in response to Andrea's point about packaged app 
(where he pointed that network requests aren't the only use case), so 
network round trips don't apply.



Another is that once the .zip has been unzipped, its files can be accessed 
synchronously.
If we're back on the network use case, server push has the same benefits 
(resource bundling and in-memory availability)... and saves a network 
round-trip since the resources come along!


I highly recommend reading 
http://www.igvita.com/2013/06/12/innovating-with-http-2.0-server-push/ 
(which is the best resource I've found on server push so far).

If you prefer video form:
http://www.youtube.com/watch?v=46exugLbGFI&list=PLS3jzvALRSe6uP9gVfXLCG6nWo7M0hAJY&index=2 
(start at 9'00'' for HTTP 2.0 and 11'00'' for server push)


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Russell Leggett
Just wanted to point out a couple of previous attempts at something similar
to generic bundling and the reactions it got, because so far it hasn't
panned out.

Way back in 2008, it was my one and only real contribution to the whatwg
list before getting a little frustrated and moving on:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2008-July/015411.html

Then a year later, Alex Limi independently came up with a very similar
proposal: http://limi.net/articles/resource-packages/
and actually got a version of it working in some branch of firefox:
https://bugzilla.mozilla.org/show_bug.cgi?id=529208
And here's a couple of discussions on that proposal:
https://groups.google.com/forum/#!topic/mozilla.dev.platform/MXeSYsawUgU
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-August/027582.html

As you can see the resource packages attempt got dropped. Perhaps this
proposal will go through because it is tied to the module loader?

Not sure if this changes anything, carry on.

- Russ
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Jorge Chamorro
On 11/10/2013, at 13:23, David Bruant wrote:
 On 11/10/2013 12:46, Jorge Chamorro wrote:
 On 11/10/2013, at 12:02, David Bruant wrote:
 
 Providing a zip in the manifest file could work, but I'm not sure I see the 
 benefit over individual files. Disk fragmentation issues maybe?
 One benefit is that a single .zip can fetch a bunch of files in a single 
 network round trip.
 The manifest file was in response to Andrea's point about packaged app (where 
 he pointed that network requests aren't the only use case), so network round 
 trips don't apply.
 
 Another is that once the .zip has been unzipped, its files can be accessed 
 synchronously.
 If we're back on the network use case, server push has the same benefits 
 (resource bundling and in-memory availability)... and saves a network 
 round-trip since the resources come along!

I've read/seen the links you've posted now, thank you.

HTTP2.0 is awesome, but it requires resource planning a priori, the 
cooperation of the server, and an HTTP2.0-capable server. Not sure if the 
client's http stack needs to be updated too, does it?

OTOH the <script src='main.js' ref='assets.zip'> is a 100% client-side 
solution, so it would be compatible with any server of any http version. It 
requires a browser that implements it though, and preferably a way to 
feature-detect the capability, of course, so it's not perfect either.
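
Feature detection could presumably use the usual reflected-property idiom,
assuming supporting browsers exposed the proposed attribute on the element
interface ('ref' is the hypothetical attribute from this thread, not a
standard property):

```javascript
// Hedged sketch: detect support for the proposed (non-standard) ref
// attribute via a reflected DOM property.
var supportsRef = 'ref' in document.createElement('script');
if (!supportsRef) {
  // fall back to plain per-file requests (what old browsers do anyway)
}
```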

But the ability to use synchronous require()s, à la node, in a browser would be 
a big big big win, imho. The ref='assets.zip', it seems to me, is an easier 
proposition.

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread David Bruant

On 11/10/2013 15:15, Russell Leggett wrote:
Just wanted to point out a couple of previous attempts at something 
similar to generic bundling and the reactions it got, because so far 
it hasn't panned out.


Way back in 2008, it was my one and only real contribution to the 
whatwg list before getting a little frustrated and moving on: 
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2008-July/015411.html


Then a year later, Alex Limi independently came up with a very similar 
proposal: http://limi.net/articles/resource-packages/
The "Be entirely transparent to browsers that do not support it" goal 
can't be achieved with @ref (but maybe polyfilled), specifically because 
it makes @src relative to the archive root.


and actually got a version of it working in some branch of firefox: 
https://bugzilla.mozilla.org/show_bug.cgi?id=529208

Conclusion of the bug:
"We've pretty clearly decided to spend our resources on SPDY and HTTP 
pipelining, rather than this approach."


And here's a couple of discussions on that proposal: 
https://groups.google.com/forum/#!topic/mozilla.dev.platform/MXeSYsawUgU 

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-August/027582.html

As you can see the resource packages attempt got dropped. Perhaps this 
proposal will go through because it is tied to the module loader?
Server push is happening regardless. For all I know it's already agreed 
upon; it's not an "if", it's a "when" (happy to hear if some have 
fresher info)



Not sure if this changes anything, carry on.
Server push is happening as part of HTTP 2.0. Do you have a use case in 
which it's insufficient?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Jorge Chamorro
On 11/10/2013, at 15:15, Russell Leggett wrote:

 Just wanted to point out a couple of previous attempts at something similar 
 to generic bundling and the reactions it got, because so far it hasn't panned 
 out.
 
 Way back in 2008, it was my one and only real contribution to the whatwg list 
 before getting a little frustrated and moving on: 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2008-July/015411.html

Brilliant, Yes! That's it!

if all js, css, and even images and other files could be zipped up
or tarred, that would only require a single HTTP request. This could
basically just add the files to the browser cache or other local storage
mechanism so that requests for the resources would not need to make an extra
trip

2008? That's 5 looong years ago.

 Then a year later, Alex Limi independently came up with a very similar 
 proposal: http://limi.net/articles/resource-packages/
 and actually got a version of it working in some branch of firefox: 
 https://bugzilla.mozilla.org/show_bug.cgi?id=529208
 And here's a couple of discussions on that proposal: 
 https://groups.google.com/forum/#!topic/mozilla.dev.platform/MXeSYsawUgU
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-August/027582.html
 
 As you can see the resource packages attempt got dropped. Perhaps this 
 proposal will go through because it is tied to the module loader?

It's sad. What happened? Why was it ditched? Was it, perhaps, too ahead of its 
time?

Let's try again :-)

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Russell Leggett
  Not sure if this changes anything, carry on.

 Server push is happening as part of HTTP 2.0. Do you have a use case in
 which it's insufficient?


Not sure if this was directed at me or Jorge, but in case it was directed
at me, I wasn't actually advocating for this anymore, simply acting as a
historian. I have a solution that works fine for me right now, and I'm
content to wait for HTTP 2.0 or whatever the next step is.

 - Russ
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Jeremy Darling
HTTP 2.0 will require changes to servers for it to work properly; it will
also require that developers learn a bit more about the pipeline or rely on
some vendor to implement the smarts for them.

Asset Bundling, on the other hand, will provide a quick and easy transition
for most development communities.  Compress everything, update your refs
and wait for the browsers to catch up, or for your server dev team to work
out push.

You could still push your asset bundle with HTTP 2.0 and achieve basically
the same results as if you bundled all the assets and sent them down the
pipe with HTTP 2.0.

I don't see them as foes or alternatives to one another.  Quite the
opposite, they seem to complement each other quite well.


On Fri, Oct 11, 2013 at 8:51 AM, Russell Leggett
russell.legg...@gmail.com wrote:


   Not sure if this changes anything, carry on.

 Server push is happening as part of HTTP 2.0. Do you have a use case in
 which it's insufficient?


 Not sure if this was directed at me or Jorge, but in case it was directed
 at me, I wasn't actually advocating for this anymore, simply acting as a
 historian. I have a solution that works fine for me right now, and I'm
 content to wait HTTP 2.0 or whatever the next step is.

  - Russ


 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Russell Leggett
  As you can see the resource packages attempt got dropped. Perhaps this
 proposal will go through because it is tied to the module loader?

 It's sad. What happened? Why was it ditched? Was it, perhaps, too ahead of
 its time?

 Let's try again :-)


As you can see, it basically fell to the same conclusion as you are trying
to fight right now - SPDY and HTTP pipelining. The idea is that this can be
transparently handled better with HTTP rather than with a bundling approach.

- Russ
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Russell Leggett
On Fri, Oct 11, 2013 at 9:57 AM, Jeremy Darling jeremy.darl...@gmail.com wrote:

 HTTP 2.0 will require changes to servers for it to work properly, it will
 also require that developers learn a bit more about the pipeline or rely on
 some vendor to implement the smarts for them.

 Asset Bundling on the other hand will provide a quick and easy transition
 for most development communities.  Compress everything, update your ref's
 and wait for the browsers to catch up, or for your server dev team to work
 out push.

 You could still push your asset bundle with HTTP 2.0 and achieve basically
 the same results as if you bundled all the assets and sent them down the
 pipe with HTTP 2.0.

 I don't see them as foes or alternatives to one another.  Quite the
 opposite, they seem to complement each other quite well.


Well, just so I understand - let's say you have 100 JavaScript files you
want in your bundle. Can you explain to me the strategy for handling the
fallback unsupported case? Does the bundle contain module-based code
assuming ES6, with the fallback being ES5 code generated by Traceur or something?
Just trying to get a vision for this.

- Russ
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Jeremy Darling
The way I read the proposal (and I could be wrong here), you would have
copies on your server in the appropriate locations.  So I may have a /js/
folder with all my core JS inside it, and a /vendor/*/ with each vendor
package inside of it.  I could have multiple asset packages (one for my
core, one for each vendor's code, or maybe one for all vendor code), or I
could simply have a single asset package referenced.  If the browser knows
what to do with it all, it will pull down the package files, extract them,
and use the code from the packages.  If not, it would call back to the server
for each file that it needed on the page.

Basically, here is my understanding in pseudo code (there may be typos
below):

<html>
  <head>
    <script type="text/javascript" src="/vendor/jquery/jquery.min.js"
      ref="/pkg/jquery.zip"></script>
    <link rel="stylesheet" type="text/css"
      href="/vendor/skeleton/css/base.css" ref="/pkg/skeleton.zip" />
    <link rel="stylesheet" type="text/css"
      href="/vendor/skeleton/css/skeleton.css" ref="/pkg/skeleton.zip" />
    <link rel="stylesheet" type="text/css"
      href="/vendor/skeleton/css/layout.css" ref="/pkg/skeleton.zip" />
  </head>
  <body>
    <script type="text/javascript" src="/js/myLoader.js"
      ref="/pkg/app.zip"></script>
    <script type="text/javascript" src="/js/mySupportScript.js"
      ref="/pkg/app.zip"></script>
    <script type="text/javascript" src="/js/app.js"
      ref="/pkg/app.zip"></script>
  </body>
</html>

My thoughts, after reading, are that there would be three requests or
pushes back for /pkg/jquery.zip, /pkg/skeleton.zip, and /pkg/app.zip when
the browser supported packaging.  If the browser didn't, then you would see
7 requests to get the assets.

Course, I could be wrong :)


On Fri, Oct 11, 2013 at 9:07 AM, Russell Leggett
russell.legg...@gmail.com wrote:


 On Fri, Oct 11, 2013 at 9:57 AM, Jeremy Darling 
 jeremy.darl...@gmail.com wrote:

 HTTP 2.0 will require changes to servers for it to work properly, it will
 also require that developers learn a bit more about the pipeline or rely on
 some vendor to implement the smarts for them.

 Asset Bundling on the other hand will provide a quick and easy transition
 for most development communities.  Compress everything, update your ref's
 and wait for the browsers to catch up, or for your server dev team to work
 out push.

 You could still push your asset bundle with HTTP 2.0 and achieve
 basically the same results as if you bundled all the assets and sent them
 down the pipe with HTTP 2.0.

 I don't see them as foe's or alternatives to one another.  Quite to the
 opposite, they seem to compliment each other quite well.


 Well, just so I understand - let's say you have 100 JavaScript files you
 want in your bundle. Can you explain to me the strategy for handling the
 fallback unsupported case? Does the bundle contain module based code
 assuming es6 and the fallback is all es5 code using traceur or something?
 Just trying to get a vision for this.

 - Russ

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread David Bruant

On 11/10/2013 15:51, Russell Leggett wrote:

 Not sure if this changes anything, carry on.

  Server push is happening as part of HTTP 2.0. Do you have a use
  case in which it's insufficient?

 Not sure if this was directed at me or Jorge

To anyone really - trying to understand if people are doing things that
aren't solved by HTTP 2.0 server push.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


RE: Generic Bundling

2013-10-11 Thread Jonathan Bond-Caron
On Thu Oct 10 02:22 PM, David Bruant wrote:
 Among other benefits [1]:
 "pushed resources are cached individually by the browser and can be reused
 across many pages"
 => It's not clear this can happen with an asset.zip
 

Opposite benefit of using assets.zip: only a single cache entry to look up.
You should be able to re-use assets.zip across pages.

Imagine having 20 images that never change in 1 year. Why should the
browser look up 20 cache entries? Use sprites?
A common use case would be bundling resources that rarely change into a
single file.
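
To make the caching argument concrete, here is a minimal Node sketch of
serving such a bundle (the one-year lifetime and the versioned-URL
cache-busting convention are my assumptions, not part of the proposal):

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  if (req.url === '/assets.zip') {
    // One cache entry for all 20 images, revalidated at most once a year.
    // A versioned URL (e.g. assets-v2.zip) would be the assumed way to
    // invalidate the bundle when it finally does change.
    res.writeHead(200, {
      'Content-Type': 'application/zip',
      'Cache-Control': 'public, max-age=31536000'
    });
    fs.createReadStream(__dirname + '/assets.zip').pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);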

 We can discuss the deployment aspects of HTTP 2 and whether Generic Bundling 
 as proposed can provide benefits before HTTP 2 is fully deployed, but I feel 
 the bottleneck will be the server-side
 engineering to bundle the resources and this work is equivalent for both HTTP 
 2 and the proposed Generic Bundling.
 So HTTP 2 wins?
 

It will be useful; I think of it as a win for files that change frequently.

Another benefit of bundles not solved by HTTP 2: theming.
http://jquery.com/themes/blue.zip
http://jquery.com/themes/red.zip

It would make distribution of themes much simpler. If developers point to
the same cached bundle from a CDN, that's a win for less internet traffic.
 
The pattern could be:
<link rel="loader" type="application/zip"
      href="http://jquery.com/themes/blue.zip" ref="theme">
<link rel="stylesheet" type="text/css" href="buttons.css" ref="theme">

For backwards compatibility, you would have buttons.css available on your
own server.
 
I think of bundling as a better way of distributing applications (www or
packaged), not only for the performance benefit of pipelining stuff in a
single request.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread David Bruant

On 11/10/2013 19:01, Andrea Giammarchi wrote:
 As I've said, you keep confining the problem and the solution to
 HTTP and servers, while I see this approach, maybe slightly revisited,
 as a good **generic bundling** solution even without a server, easily
 adoptable now; plus, this will not mean HTTP 2 won't be handy to help
 with this case too.
We seem to differ in what we think of the solution, because we don't
seem to address the same problem. For me, bundling is a way to prevent
round-trips (whether to the network or to disk). You seem to want
what is (by me at least) usually referred to as packaging.


Dave Herman or others will correct me if I'm wrong, but I think
"generic" here was in opposition to previous proposals where
JS/module-only bundling was proposed. In that context, "generic
bundling" just means "bundle heterogeneous resources", which is
different from packaging.


On saving network round-trips, server push seems to be the way forward.
On saving disk round-trips, various app manifest formats (at least the
two I know, from FirefoxOS and Tizen) can easily be extended to declare
which resources should be put in memory as soon as the app starts.


Now, let's talk about packaging.

 The proposal could be revisited to tell browsers to look for
 package.zip/index.html automagically once opened, so we'd have a
 bundle that can work over HTTP and over Bluetooth exchange too.
Both FirefoxOS and Tizen (if someone has info on other OSes with web
apps...) took a different approach: a stable manifest file location in
the package, with the manifest pointing to the main HTML file (the
launch_path property on FirefoxOS, content@src for Tizen).
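
For readers who haven't seen the format, a minimal FirefoxOS-style
manifest.webapp sketch (launch_path is the real property mentioned above;
the name and icon values are made up for illustration):

{
  "name": "My App",
  "launch_path": "/index.html",
  "icons": { "128": "/img/icon-128.png" }
}

Tizen's config.xml expresses the same idea through content@src instead.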


Interestingly, this has all been working well without having to change 
HTML even a little; without having to add a new HTML attribute or new 
@src value semantics, specifically!


 So, my counter-question would be: do we have a standard generic bundle
 option that works the same way as in every other programming language?
 (WAR files, Python distributables with self-extracting archive and
 execution, .NET apps, etc.)
We do not (not as far as I know at least). There is the widget spec [1],
which is as mature as being a W3C Recommendation. That's what Tizen is
based on, but FirefoxOS chose a different path (hopefully, it's not only
because it's XML-based :-p)


 If such a thing exists, plus HTTP 2 will solve all other problems, then
 I agree it's not a good idea to implement this now.

 If such a thing does not exist, I would like to keep thinking the
 combination of JS + HTML + CSS can offer a lot even without a webserver
 behind it or any protocol ... there is a database that does not need a
 connection, and all the tools needed to offer great applications.
Yes. I think we should continue this discussion in a more appropriate 
place though; public-weba...@w3.org certainly.


David

[1] http://www.w3.org/TR/widgets/
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-11 Thread Jorge Chamorro
On 11/10/2013, at 15:53, Russell Leggett wrote:
 
  As you can see the resource packages attempt got dropped. Perhaps this 
  proposal will go through because it is tied to the module loader?
 
 It's sad. What happened? Why was it ditched? Was it, perhaps, too ahead of 
 its time?
 
 Let's try again :-)
 
 As you can see, it basically fell to the same conclusion as you are trying
 to fight right now - SPDY and HTTP pipelining. The idea is that this can
 be handled better, and transparently, with HTTP rather than with a
 bundling approach.

I appreciate the beauty in SPDY and HTTP 2.0, but it requires an upgrade of
both ends to HTTP 2.0 - all the servers and browsers in the world.

We could have the .zip prefetch ref attribute operative tomorrow, in the
next browser update - an update which we are going to do anyway. No need to
touch any server.

There are many more client-side than server-side developers, and grasping
the idea behind an assets.zip prefetch ref attribute takes just a few
seconds, or perhaps a minute, no more. The word spreads, and in less than a
year we'd have the web served zipped; but servers are much more complicated
than that, and no two servers are programmed or configured alike.

And HTTP 2.0 and SPDY and all their beauty too, in the future. Why does it
have to be one or the other?

-- 
( Jorge )();
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Generic Bundling

2013-10-10 Thread Jonathan Bond-Caron
About Generic Bundling in:
https://github.com/rwaldron/tc39-notes/blob/master/es6/2013-09/modules.pdf

<script src="assets.zip$/lib/main.js"></script>

It could be reworked as:

<link rel="loader" type="application/zip" href="assets.zip">
<script src="lib/main.js"></script>

Simple pattern for packaging web apps where 'assets.zip' might already be
available.

For remote fetching, I imagine it would block waiting for assets.zip to be
available. Could be solved with something like:

<script src="lib/main.js" ref="assets.zip"></script>

Which would look up <link rel="loader"> and match ref="assets.zip" to
href="assets.zip".
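
A sketch of that lookup in script, to make the matching rule concrete
(this is not a real API - just how I imagine a browser or polyfill would
pair the two attributes):

var loaders = {};
[].forEach.call(document.querySelectorAll('link[rel=loader]'),
  function (link) {
    loaders[link.getAttribute('href')] = link;
  });

[].forEach.call(document.querySelectorAll('script[ref], link[ref]'),
  function (el) {
    var bundle = loaders[el.getAttribute('ref')];
    if (bundle) {
      // A supporting browser would resolve el's src/href inside the
      // already-fetched zip; a non-supporting one ignores ref and
      // requests the URL as usual - the backwards-compatible path.
    }
  });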

Either way, I'm curious where the discussion is taking place - W3C?
How does this fit with ECMAScript and System.loader?
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-10 Thread Jeremy Darling
Personally, I love the concept of <script src="lib/main.js"
ref="assets.zip"></script> since it allows backwards compatibility if the
client's browser doesn't support packaging.  Older browsers would simply
make the request for lib/main.js, while newer browsers would await
assets.zip and then load lib/main.js from it.

The part I would be concerned about (maybe without reason) is the
compression algo utilized.  There are enough APIs in place that browser
vendors could simply pick their favorite and move forward as is.  The last
thing I want to have to do is embed my cab, zip, lz, 7z, ... file into my
header to support everyone's flavor of compression.
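
For what it's worth, the link form of the proposal already carries a type
attribute, so a browser could skip containers it doesn't support and fall
back to individual files - a sketch, with the 7z entry as an assumed,
illustrative value:

<link rel="loader" type="application/zip" href="assets.zip">
<link rel="loader" type="application/x-7z-compressed" href="assets.7z">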

 - Jeremy


On Thu, Oct 10, 2013 at 12:30 PM, Jonathan Bond-Caron
jbo...@gdesolutions.com wrote:

 [...]


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-10 Thread David Bruant
HTTP 2 is coming with a feature called server push [1], which seems more
appropriate for this type of bundling.
In essence, when asked for a webpage, the server sends the HTML page as
well as a bunch of resources (CSS, JS, images, whatever) in the same HTTP
response. These are individually decompressed, cached, and ready at hand
when HTML parsing requires fetching resources (lots of that can happen in
parallel, I imagine, depending on the bundling).
Best of all, this is all seamless. Just keep writing HTML as you always
have; no need for new "assets.zip$/lib/main.js" syntax. It keeps the HTML
decoupled from the "how" of resource delivery.
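
As an aside, a minimal sketch of what that looks like server-side with
Node's http2 module (an API that postdates this thread, shown only to make
the mechanism concrete; certificate and file paths are assumptions):

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem')
});

server.on('stream', function (stream, headers) {
  if (headers[':path'] === '/') {
    // Push lib/main.js alongside the HTML, so it is already in cache by
    // the time the parser reaches the script tag that references it.
    stream.pushStream({ ':path': '/lib/main.js' },
      function (err, pushStream) {
        if (err) return;
        pushStream.respondWithFile('lib/main.js',
          { 'content-type': 'application/javascript' });
      });
    stream.respondWithFile('index.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);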


Among other benefits [1]:
"pushed resources are cached individually by the browser and can be
reused across many pages"
=> It's not clear this can happen with an asset.zip

"by the time the browser discovers the script tag in the HTML response
the main.js file is already in cache, and no extra network roundtrips
are incurred!"
=> Not even a need to load an additional asset.zip

We can discuss the deployment aspects of HTTP 2 and whether Generic 
Bundling as proposed can provide benefits before HTTP 2 is fully 
deployed, but I feel the bottleneck will be the server-side engineering 
to bundle the resources and this work is equivalent for both HTTP 2 and 
the proposed Generic Bundling.

So HTTP 2 wins?

David

[1] http://www.igvita.com/2013/06/12/innovating-with-http-2.0-server-push/

On 10/10/2013 19:30, Jonathan Bond-Caron wrote:

 [...]
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-10 Thread Kevin Smith
Side note (sorry):  I missed that PDF the first time around, but from what
I read it looks like good progress is being made.  It feels like it's
coming together.  : )

{ Kevin }
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-10 Thread Andrea Giammarchi
You are confining the problem to HTTP-only scenarios, while the solution
provided by

<script src="lib/main.js" ref="assets.zip"></script>

can be handy/reused in offline packaged applications too; so HTTP 2 might
win on HTTP, but it's not a general HTML app packaging option.

I think FirefoxOS apps might have something similar without even bothering
the network stack, but then their apps are already packaged in a similar
way, so here it would be the common bundle/package within the bundled FFOS
app; others might reuse the same logic too.

@Jeremy, it does not matter much what a browser prefers when the de facto
standard is to accept and support either deflate or g(un)zip, so anything
compatible with these two (basically the same) algorithms would be
acceptable and easy to implement for everyone, am I correct?

Back to the HTTP support: I would go for the possibility to bundle through
a CDN too, which might penalize less-used libraries (like a few of mine)
but boost the most common scenarios across websites or apps (thinking
about an Angular bundle, Ember bundle, jQuery UI bundle, or ExtJS bundle,
etc.)

Last personal thought: this is way nicer than any AMD solution I've seen,
giving a real alternative to async modules too via script defer/async
attributes, without requiring boilerplate all over to include on demand.

+1 for that -1000 is worth my opinion ^_^



On Thu, Oct 10, 2013 at 11:22 AM, David Bruant bruan...@gmail.com wrote:

 [...]


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Generic Bundling

2013-10-10 Thread Jeremy Darling
I understand g(un)zip is the de facto standard; I would just hate to see
such a small detail overlooked at the end of the day when a one-liner
pretty much covers it.

Oh, and I'll second the "way nicer than any AMD solution".  This also
keeps readability in mind, along with forcing declaration instead of
allowing you to lose track of dependencies accidentally.  I prefer to have
page bloat in dev form and compile down for production use only if
necessary.

Course, that's two cents from a guy who usually hides in the corners on
this type of discussion.


On Thu, Oct 10, 2013 at 8:10 PM, Andrea Giammarchi
andrea.giammar...@gmail.com wrote:

 [...]

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

Re: Generic Bundling

2013-10-10 Thread Brendan Eich

Jeremy Darling wrote:
 I understand g(un)zip is the de facto standard; I would just hate to
 see such a small detail overlooked at the end of the day when a
 one-liner pretty much covers it.

 [...]



On Thu, Oct 10, 2013 at 8:10 PM, Andrea Giammarchi
andrea.giammar...@gmail.com wrote:

 You are confining the problem to HTTP-only scenarios, while the
 solution provided by

 <script src="lib/main.js" ref="assets.zip"></script>



No, you're right -- agree with you and Andrea, this is sweet.

HTML nerd nit: is ref the right name? I thought it was used as an 
attribute name somewhere in HTML or nearby, but I can't find it. Cc'ing 
Anne.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss