I'd like to chime in here on the GC. I write WebGL code for a living.
JavaScript's stop-the-world, mark-and-sweep, for-however-long-it-takes
approach to GC'ing is very troublesome.

A new frame has to be produced every 16.6ms (and in some cases, as in the
emerging WebVR implementations, every 11.1ms or even 8.3ms). If frame
production is delayed in any way, the result is jitter: one or several
frames go undrawn until a new frame is ready, an effect that is noticeable
to many users. It's even worse for VR, where jitter is much more readily
apparent because your head movement no longer produces a new picture.
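To make the budgets concrete: at a refresh rate of N Hz the budget is 1000/N milliseconds (60 Hz → 16.6ms, 90 Hz → 11.1ms, 120 Hz → 8.3ms), and any delta between frame timestamps longer than one budget means whole frames were skipped. A minimal sketch of counting skipped frames from two timestamps (the function name and tolerance-by-rounding are my own, for illustration):

```javascript
// Number of vsync slots skipped between two frame timestamps (ms).
// A delta of roughly one budget means no jitter; a longer delta means
// whole frames went undrawn. Rounding absorbs small timer noise.
function missedFrames(prevTs, ts, hz = 60) {
  const budget = 1000 / hz; // 16.6ms at 60 Hz, 11.1ms at 90 Hz, ...
  return Math.max(0, Math.round((ts - prevTs) / budget) - 1);
}
```

In a browser you'd feed this the timestamps that requestAnimationFrame hands to your callback; a GC pause of, say, 50ms at 60 Hz shows up as two missed frames.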

But the pernicious effects of GC'ing are readily apparent even without
strict realtime requirements. Pretty much every JS library (jQuery UI,
for instance) produces unsmooth animations, among other things, because
of this.

Writing code to work around JS'es GC is possible, but it complicates
everything considerably (effectively, your drawing loop can never
allocate anything).
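The usual workaround is to preallocate everything up front and recycle it, so the hot path never creates garbage. A minimal object-pool sketch (the class and field names are illustrative, not from any particular library):

```javascript
// Preallocate a fixed set of scratch vectors; the per-frame hot path
// only acquires and releases them, performing zero allocations that
// the GC would later have to reclaim.
class Vec3Pool {
  constructor(size) {
    this.items = new Array(size);
    for (let i = 0; i < size; i++) this.items[i] = { x: 0, y: 0, z: 0 };
    this.top = size; // number of free objects remaining
  }
  acquire() {
    if (this.top === 0) throw new Error('pool exhausted');
    return this.items[--this.top];
  }
  release(v) {
    v.x = v.y = v.z = 0; // reset state before reuse
    this.items[this.top++] = v;
  }
}
```

This is what "cannot allocate anything, ever" means in practice: every temporary the draw loop needs must come from a structure like this, and every code path must remember to release it, which is exactly the complication described above.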

The GC needs of different applications can differ a lot. Some might
prefer a GC that uses as little total time as possible, even if it
occasionally stops the world for long periods. Other applications might
happily cede as much as a quarter of their CPU time to the GC at every
60Hz, 90Hz or 120Hz tick, in exchange for a guarantee that the GC never
occupies more time than that.

Adding some more flexible ways to deal with GC'ing, beyond "just making
it better", would be highly welcome. Given that incremental/realtime GCs
are probably never gonna happen for JS, the next best thing would be to
at least be able to select a GC strategy and set its parameters to suit
your use-case.
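For what it's worth, the closest thing that exists today is engine-specific and crude: V8, when run with the --expose-gc flag (e.g. in Node), exposes a global gc() hook, which at least lets an application trigger a full collection at a moment it chooses, such as between frames or during a loading screen. A sketch of using it defensively (this is a V8/Node-specific assumption, not anything the language guarantees):

```javascript
// If the host exposes a manual GC hook (V8 with --expose-gc), trigger
// a collection at a point we know is safe, rather than letting the
// engine pick an arbitrary moment mid-frame. Returns whether a
// collection was actually requested.
function maybeCollect() {
  if (typeof globalThis.gc === 'function') {
    globalThis.gc(); // full collection at a moment of our choosing
    return true;
  }
  return false; // no hook exposed; the engine decides on its own
}
```

This is nothing like selectable strategies or pause-time budgets, but it illustrates the kind of knob being asked for: letting the application, which knows its own deadlines, participate in scheduling decisions.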
_______________________________________________
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss