Apologies for not responding earlier.

On Wednesday, February 7, 2018 at 7:50:05 AM UTC, nekrad23 wrote:
> On 06.02.2018 13:32, finnbry...@gmail.com wrote:
> > This is a clearly solvable problem (browsers do it internally)
> > but isn't necessarily simple, some obvious solutions:
> > - a way to get the texture directly on the gpu, to avoid copying back to 
> > the cpu
> > (likely platform and api specific, if the embedder is using a different 
> > graphics 
> > api, or none, this can be annoying to work with)
> 
> On X11 you can stack windows of different clients (even from different
> hosts) into each other. The window manager does the reparenting (which
> it also uses for window decorations, etc). IMHO, Windows has similar
> techniques (also used for out-of-process OLE/Active-X components).
> When using DRI, the clients render into their own buffers and let the
> Xserver do the composition (via GPU).
> 
> Wayland even goes further: applications always directly render into
> their own buffers/surfaces (via gpu), and the compositor puts them
> together to the output device (which even could be a hw video codec
> for streaming, etc).

Whilst I agree that is a good solution for many use cases, it would only 
cover about half the scenarios I've had to deal with. I'll freely admit my 
use cases may not be typical, so take from that what you will.

The problem arises when the embedding application wishes to have control of 
composition. Does it want to render anything on top of the webpage? Apply a 
filter or effect to it? Render it into a 3D scene? Perhaps it simply wants to 
build an entirely headless application that needs the output but never shows 
it to anyone.
And, as I've had to deal with, games often do everything they can to bypass 
the compositor for performance reasons, and games often have to embed 
webpages. Some platforms handle this gracefully (macOS will fall back to 
composited rendering and the application simply takes the performance hit 
until the second layer goes away), but it's certainly something I'd prefer to 
avoid.
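To make the sort of control I mean concrete, here's a rough Rust sketch of 
what an embedder-controlled composition hook could look like. To be clear, 
this is purely illustrative - none of these types exist in Servo's API, the 
names are invented, and a real design would have to deal with 
synchronization, surface lifetimes, and platform-specific handle types:

    /// Handle to a frame the engine has just rendered, still resident on
    /// the GPU (e.g. a GL texture name), so no readback to the CPU occurs.
    /// Hypothetical type for illustration only.
    pub struct FrameHandle {
        pub texture_id: u32, // assumed here to be an OpenGL texture name
        pub width: u32,
        pub height: u32,
    }

    /// Implemented by an embedder that wants to own composition itself:
    /// a game engine, a headless capture pipeline, a 3D scene, etc.
    pub trait Compositee {
        /// Called whenever the engine produces a new frame. The embedder
        /// can draw it on a quad, run a post-process filter over it, encode
        /// it for streaming, or never show it to anyone at all.
        fn present_frame(&mut self, frame: FrameHandle);
    }

A game engine, for example, would implement present_frame by binding the 
texture and drawing it onto an in-world quad during its own render pass, 
without ever involving the OS compositor.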

A solution like this is great for some use cases but not useful for others. 
I'm not arguing *against* this solution (it's good for some uses), just that 
it isn't flexible enough to be the only solution.
CEF has two modes: off-screen rendering (which compositing embedders can use) 
and a CEF-controlled window (which has similar benefits and limitations to 
your solution). WebKit.framework uses a composited view, but its APIs allow 
rendering off-screen and retrieving the composited texture for arbitrary 
usage (at a performance penalty...). As long as there exists a performant and 
flexible solution for those who can't rely on simple composition from the 
window manager, I'm happy. It doesn't need to be easy; I'm fine with being 
relegated to an "advanced" (read: unpleasant) API.
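For what it's worth, the split CEF makes could be expressed in a future 
embedding API as something like the following (again purely hypothetical 
Rust with invented names, not any real Servo or CEF interface):

    /// How the engine's output reaches the screen - or doesn't.
    pub enum OutputMode {
        /// The engine owns a native window/surface and leaves composition
        /// to the OS compositor: simple and fast, but the embedder gives
        /// up control over what happens to the pixels.
        EngineOwnedWindow,

        /// Off-screen rendering: the engine renders into a GPU surface of
        /// the given size and hands each frame to the embedder. Flexible
        /// (3D scenes, filters, headless capture) but a more involved,
        /// "advanced" API.
        OffScreen { width: u32, height: u32 },
    }

Both modes could share the rest of the API; only the final presentation step 
would differ.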

tldr: using the OS compositor may solve 90% of use cases, but I'd prefer a 
solution for the other 10% too. 

> > I'd like to convince you that focusing on the "minimal-effort" option
> > would be a mistake - non-performant browser embedding is unacceptable
> > for many use-cases,
> 
> Minimal doesn't need to be slow.

Agreed, though I suspect that of flexible, minimal, and performant, you can 
only pick 2. I'd love to discover otherwise.