If we're going to talk about size and related concerns, then we need some 
meaningful parameters to characterize the problem.

In e-commerce, milliseconds until the page is useful/responsive (time to first 
render) are a big deal, yes, but e-commerce itself has niches.

For bandwidth, @Araq is bang on: the JS should long since be cached, and 
images/content should dominate the network. Size, however, is a useful proxy 
metric for first-run work in the interpreter/JIT.
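
One common way to make "cached forever" safe is to put a content hash in the 
asset's filename, so the server can send a far-future/immutable Cache-Control 
header and a new build simply gets a new name. A minimal sketch (the naming 
scheme here is illustrative, not any particular bundler's convention):

```python
# Content-addressed asset naming: embed a short hash of the file's
# bytes in its name. Since the name changes whenever the content
# does, the server can cache the old name indefinitely without ever
# serving a stale script. Scheme is hypothetical, for illustration.
import hashlib

def hashed_name(filename: str, content: bytes, digest_len: int = 8) -> str:
    """app.js + contents -> app.<short-sha256>.js"""
    digest = hashlib.sha256(content).hexdigest()[:digest_len]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

print(hashed_name("app.js", b"console.log('hi')"))
```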

Back to latency: if we're talking about a cold cache (first page load) and a 
high-latency network (anything less than LTE), then you're looking at basically 
2 MTUs' worth of data in the first round trip due to TCP slow start... I mean, 
if you can in fact do anything meaningful in 2 MTU then super, but it's bonkers 
to push into this space unless you actually have those constraints.
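
To make that budget concrete, here's a rough sketch of how many bytes slow 
start delivers by the end of each round trip, assuming the congestion window 
doubles every RTT. The 2-segment initial window and ~1460-byte MSS are 
assumptions for illustration (modern stacks often start at 10 segments):

```python
# Rough slow-start budget: cumulative bytes delivered after each
# round trip, with the congestion window doubling every RTT.
# init_segments=2 and mss=1460 are illustrative assumptions.

def slow_start_budget(rtts: int, init_segments: int = 2, mss: int = 1460):
    """Cumulative bytes delivered after each of `rtts` round trips."""
    total = 0
    window = init_segments
    budget = []
    for _ in range(rtts):
        total += window * mss
        budget.append(total)
        window *= 2  # window doubles each RTT during slow start
    return budget

print(slow_start_budget(4))  # [2920, 8760, 20440, 43800]
```

So the first round trip carries under 3 KB, which is the whole argument: on a 
high-latency link, every extra round trip before first render is visible.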

You can avoid the blocking from downloading and parsing large JS payloads and 
other assets in the browser, which helps perceived performance, especially on 
esoteric user agents with terrible engines. That's a bigger deal in small 
niches (by volume of commerce, or specific use cases like auto/set-top/
"embedded" scenarios).

So if you have some "basic" parameters to characterize your problem(s), then 
it's worth talking about how far Nim is off the mark, how much sloppiness it 
can absorb for someone, and how much they have to make up. I've written a bunch 
of JS interop code and it's much slimmer for sure, but you lose things along 
the way. Those tradeoffs might be entirely unreasonable for someone else, and 
it might not make sense to make them in the core.

With all that said, this takes a rather everything-in-Nim approach, and when 
teams are converting an existing project or starting with something small 
within a greater whole, they're not going to like a sudden bump in size for a 
single widget. I can imagine a number of aborted trials that would have 
otherwise resulted in Nim adoption.
