Hi,

I'd like to propose exposing a 'renderport' to web content: the area of a scrollable container that is actually rendered by the UA, which may be larger than the viewport (the visible area).

Most browsers these days render areas larger than what is currently visible (especially, but not limited to, the root scroll frame) to allow asynchronous scrolling.

We also have a lot of examples (especially on mobile) of web content that dynamically adds and removes content based on the scroll position (referred to as 'data scrollers' in the IntersectionObserver spec).

Both of these systems independently maintain a definition of what is 'soon to be visible', and when they aren't in sync we get sub-optimal performance. If the data scroller chooses an area larger than the UA's, it bloats the DOM unnecessarily, which is exactly what it is trying to avoid. If it chooses an area smaller than the UA's, we get unnecessary invalidations and repeated drawing of the same pixels.

Our suggested solution is to add a callback to scrollable elements that fires before painting (similar to requestAnimationFrame) and exposes the (approximate) region of the element that the UA is going to treat as visible for the purpose of painting.
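To make the shape concrete, here is a minimal sketch of how a data scroller might consume such a callback. The event name ('renderportchange'), the event's renderport rect, and materializeItems are all assumptions for illustration, not existing API; the sketch also assumes fixed-height items stacked vertically.

```javascript
// Map the UA-supplied renderport rect to a half-open item index range
// [first, last), so the scroller renders exactly what will be painted.
// Assumes fixed-height items; coordinates are relative to the content.
function rangeFromRenderport(renderport, itemHeight, itemCount) {
  const first = Math.max(0, Math.floor(renderport.top / itemHeight));
  const last = Math.min(itemCount,
                        Math.ceil(renderport.bottom / itemHeight));
  return { first, last };
}

// Hypothetical registration, mirroring requestAnimationFrame timing:
// scroller.addEventListener('renderportchange', e => {
//   const { first, last } = rangeFromRenderport(e.renderport, 100, 10000);
//   materializeItems(first, last); // app-defined DOM add/remove
// });
```

With a 100px item height and a renderport spanning -500px to 1500px of content space, this yields items 0 through 14, so the scroller and the UA agree on what gets built and painted.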

DOM changes made from within the callback that change the final renderport area probably shouldn't trigger another callback until after the paint, to avoid infinite loops.

It might also be nice to extend the IntersectionObserver spec to allow specifying the renderport (plus a margin) as the intersection region to observe, so that content can trigger asynchronous loading of data before it enters the renderport.
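A sketch of what that extension might look like. The 'renderport' root value is an assumption for illustration, not part of the current IntersectionObserver spec; the margin arithmetic itself is just the uniform rect expansion that rootMargin performs today.

```javascript
// Expand a rect by a uniform margin on all sides, the way rootMargin
// grows the intersection root before intersections are computed.
function expandRect(rect, margin) {
  return {
    top: rect.top - margin,
    bottom: rect.bottom + margin,
    left: rect.left - margin,
    right: rect.right + margin,
  };
}

// Hypothetical observer that fires when an item comes within 200px of
// the renderport, leaving time to fetch its data asynchronously:
// const observer = new IntersectionObserver(loadData, {
//   root: 'renderport',   // assumed new option value, not in the spec
//   rootMargin: '200px',
// });
// observer.observe(placeholderElement);
```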

- Matt
