Re: [webkit-dev] WebKit GPU rendering possibility

2016-11-04 Thread Dean Jackson

> On 5 Nov. 2016, at 12:34 am, Rogovin, Kevin  wrote:
> 
> One question, what happens with WebGL 2.0 support on WebKit? I ask because 
> WebGL 2.0 is essentially OpenGL ES 3.x for JavaScript.

We've started on a WebGL 2.0 implementation.

Dean



Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-11-04 Thread Filip Pizlo
EWS doesn't hate it anymore!

Reviews welcome.  I've been slowly integrating feedback as I've received it.

-Filip



> On Nov 4, 2016, at 11:52 AM, Filip Pizlo  wrote:
> 
> Haha, I'm fixing it!
> 
> I could use a review of the time API even while I fix some broken corners in 
> WebCore and WK2.
> 
> -Filip
> 
> 
>> On Nov 4, 2016, at 11:31 AM, Brent Fulgham wrote:
>> 
>> EWS Hates your patch! :-)
>> 
>>> On Nov 4, 2016, at 10:01 AM, Filip Pizlo wrote:
>>> 
>>> [...]

Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-11-04 Thread Filip Pizlo
Haha, I'm fixing it!

I could use a review of the time API even while I fix some broken corners in 
WebCore and WK2.

-Filip


> On Nov 4, 2016, at 11:31 AM, Brent Fulgham  wrote:
> 
> EWS Hates your patch! :-)
> 
>> On Nov 4, 2016, at 10:01 AM, Filip Pizlo wrote:
>> 
>> [...]

Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-11-04 Thread Brent Fulgham
EWS Hates your patch! :-)

> On Nov 4, 2016, at 10:01 AM, Filip Pizlo  wrote:
> 
> [...]

Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-11-04 Thread Filip Pizlo
Hi everyone!

The last time we talked about this, there seemed to be a lot of agreement that 
we should go with the Seconds/MonotonicTime/WallTime approach.
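
For illustration, a minimal sketch of what a double-backed
Seconds/MonotonicTime/WallTime trio could look like. The names follow the
proposal, but the details here are assumptions made for the example, not the
actual WTF classes:

    // Sketch only -- not the real WTF API.
    #include <chrono>
    #include <limits>

    class Seconds {
    public:
        constexpr Seconds() = default;
        constexpr explicit Seconds(double value) : m_value(value) { }

        static constexpr Seconds fromMilliseconds(double ms) { return Seconds(ms / 1000.0); }
        static constexpr Seconds infinity() { return Seconds(std::numeric_limits<double>::infinity()); }

        constexpr double value() const { return m_value; }
        constexpr double milliseconds() const { return m_value * 1000.0; }
        constexpr Seconds operator+(Seconds other) const { return Seconds(m_value + other.m_value); }

    private:
        double m_value { 0 };
    };

    // Distinct point-in-time types, so a wall-clock reading cannot be passed
    // where a monotonic reading is expected; durations are always Seconds.
    class MonotonicTime {
    public:
        static MonotonicTime now()
        {
            // A real implementation would query the platform's monotonic clock
            // directly; std::chrono::steady_clock stands in for it here.
            using namespace std::chrono;
            return MonotonicTime(duration<double>(steady_clock::now().time_since_epoch()).count());
        }

        MonotonicTime operator+(Seconds s) const { return MonotonicTime(m_value + s.value()); }
        Seconds operator-(MonotonicTime other) const { return Seconds(m_value - other.m_value); }

    private:
        constexpr explicit MonotonicTime(double value) : m_value(value) { }
        double m_value { 0 };
    };

    class WallTime { /* same shape as MonotonicTime, backed by the wall clock */ };

With types like these, a deadline is just MonotonicTime::now() + Seconds(10),
and an infinite timeout falls out of double infinity rather than an integer
max() sentinel.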

I have implemented it: https://bugs.webkit.org/show_bug.cgi?id=152045

That patch just takes a subset of our time code - all of the stuff that 
transitively touches ParkingLot - and converts it to use the new time classes.  
Reviews welcome!

-Filip



> On May 22, 2016, at 6:41 PM, Filip Pizlo  wrote:
> 
> Hi everyone!
> 
> I’d like us to stop using std::chrono and go back to using doubles for time.  
> First I list the things that I think we wanted to get from std::chrono - the 
> reasons why we started switching to it in the first place.  Then I list some 
> disadvantages of std::chrono that we've seen from fixing std::chrono-based 
> code.  Finally I propose some options for how to use doubles for time.
> 
> Why we switched to std::chrono
> 
> A year ago we started using std::chrono for measuring time.  std::chrono has 
> a rich type system for expressing many different kinds of time.  For example, 
> you can distinguish between an absolute point in time and a relative time.  
> And you can distinguish between different units, like nanoseconds, 
> milliseconds, etc.
> 
> Before this, we used doubles for time.  std::chrono’s advantages over doubles 
> are:
> 
> Easy to remember what unit is used: We sometimes used doubles for 
> milliseconds and sometimes for seconds.  std::chrono prevents you from 
> getting the two confused.
> 
> Easy to remember what kind of clock is used: We sometimes use the monotonic 
> clock and sometimes the wall clock (aka the real time clock).  Bad things 
> would happen if we passed a time measured using the monotonic clock to 
> functions that expected time measured using the wall clock, and vice-versa.  
> I know that I’ve made this mistake in the past, and it can be painful to 
> debug.
> 
> In short, std::chrono uses compile-time type checking to catch some bugs.
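
As a concrete sketch of the failure mode this type checking rules out (the
function names below are made up purely for illustration):

    #include <cstdio>

    // Expects an absolute time, in seconds, on the wall clock.
    void waitUntilSeconds(double absoluteWallTimeSeconds)
    {
        std::printf("waiting until %f\n", absoluteWallTimeSeconds);
    }

    void scheduleTimeout(double timeoutMilliseconds, double monotonicNowSeconds)
    {
        // Compiles without complaint but is wrong twice over: milliseconds are
        // added to seconds, and a monotonic-clock reading is passed where a
        // wall-clock time is expected. With typed time classes, neither mistake
        // would type-check.
        waitUntilSeconds(monotonicNowSeconds + timeoutMilliseconds);
    }

    int main()
    {
        scheduleTimeout(500 /* ms */, 1000 /* s since the monotonic epoch */);
        return 0;
    }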
> 
> Disadvantages of using std::chrono
> 
> We’ve seen some problems with std::chrono, and I think that the problems 
> outweigh the advantages.  std::chrono suffers from a heavily templatized API 
> that results in template creep in our own internal APIs.  std::chrono’s 
> default of integers without overflow protection means that math involving 
> std::chrono is inherently more dangerous than math involving double.  This is 
> particularly bad when we use time to speak about timeouts.
> 
> Too many templates: std::chrono uses templates heavily.  It’s overkill for 
> measuring time.  This leads to verbosity and template creep throughout common 
> algorithms that take time as an argument.  For example if we use doubles, a 
> method for sleeping for a second might look like sleepForSeconds(double).  
> This works even if someone wants to sleep for a nanosecond, since 0.000000001 
> is easy to represent using a double.  Also, multiplying or dividing a double 
> by a small constant factor (1,000,000,000 is small by double standards) is 
> virtually guaranteed to avoid any loss of precision.  But as soon as such a 
> utility gets std::chronified, it becomes a template.  This is because you 
> cannot have sleepFor(std::chrono::seconds), since that wouldn’t allow you to 
> represent fractions of seconds.  This brings me to my next point.
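
To make the template-creep point concrete, here is a hedged sketch
(sleepForSeconds and sleepFor are illustrative names, not existing WebKit
functions):

    #include <chrono>
    #include <thread>

    // Double-based version: one non-template function covers every unit,
    // because fractional seconds are just fractional doubles.
    void sleepForSeconds(double seconds)
    {
        std::this_thread::sleep_for(std::chrono::duration<double>(seconds));
    }

    // chrono-based version: to accept fractions of a second it has to become a
    // template, and that templatization spreads to every caller and wrapper.
    template<typename Rep, typename Period>
    void sleepFor(std::chrono::duration<Rep, Period> duration)
    {
        std::this_thread::sleep_for(duration);
    }

    int main()
    {
        sleepForSeconds(0.000000001);           // one nanosecond, no template needed
        sleepFor(std::chrono::nanoseconds(1));  // works, but instantiates a template
        return 0;
    }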
> 
> Overflow danger: std::chrono is based on integers and its math methods do not 
> support overflow protection.  This has led to serious bugs like 
> https://bugs.webkit.org/show_bug.cgi?id=157924.  This cancels out the 
> “remember what unit is used” benefit cited above.  It’s true that I know what 
> type of time I have, but as soon as I duration_cast it to another unit, I may 
> overflow.  The type system does not help!  This is insane: std::chrono 
> requires you to do more work when writing multi-unit code, so that you 
> satisfy the type checker, but you still have to be just as paranoid around 
> multi-unit scenarios.  Forgetting that you have milliseconds and using it as 
> seconds is trivially fixable.  But if std::chrono flags such an error and you 
> fix it with a duration_cast (as any std::chrono tutorial will tell you to 
> do), you’ve just introduced an unchecked overflow and such unchecked 
> overflows are known to cause bugs that manifest as pages not working 
> correctly.
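
A small illustration of that hazard, assuming the 64-bit counts that common
standard libraries use for these duration types:

    #include <chrono>
    #include <cstdio>
    #include <limits>

    int main()
    {
        using namespace std::chrono;

        // A common idiom: use the max of a coarse unit as an "effectively
        // infinite" timeout.
        milliseconds timeout = milliseconds::max();

        // duration_cast multiplies the underlying integer by 1,000,000 with no
        // overflow check; the multiplication overflows (undefined behavior),
        // which in practice shows up as a wrapped, often negative, count.
        nanoseconds deadline = duration_cast<nanoseconds>(timeout);
        std::printf("chrono: %lld ms -> %lld ns\n",
                    static_cast<long long>(timeout.count()),
                    static_cast<long long>(deadline.count()));

        // With doubles, an "infinite" timeout survives unit conversion: the
        // value saturates sensibly instead of wrapping around.
        double timeoutSeconds = std::numeric_limits<double>::infinity();
        std::printf("double: %g s -> %g ns\n", timeoutSeconds, timeoutSeconds * 1.0e9);
        return 0;
    }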
> 
> I think that doubles are better than std::chrono in multi-unit scenarios.  It 
> may be possible to have std::chrono work with doubles, but this probably 
> implies us writing our own clocks.  std::chrono’s default clocks use 
> integers, not doubles.  It also may be possible to teach std::chrono to do 
> overflow protection, but that would make me so sad since using double means 
> not having to worry about overflow at all.
> 
> The overflow issue is interesting because of its implications for how we do 
> timeouts.  The way to have a method with an optional timeout is 

Re: [webkit-dev] WebKit GPU rendering possibility

2016-11-04 Thread Rogovin, Kevin
Hi,

>I should mention, though, that we require support for hardware that only 
>supports OpenGL ES 2.0. 
>If FastUIDraw can't handle this, then we would need to keep a fallback 
>codepath that uses Cairo, which would be unfortunate.

FastUIDraw requires features beyond what OpenGL ES 2.0 offers. With that in 
mind, the fallback is needed.

I cannot overstate how unfortunate it is to have the burden of needing to 
support hardware that only satisfies a specification that is nearly 10 years 
old and whose feature set corresponds to even older hardware (essentially 
first-generation DX9 cards, over 11 years old). The jump in flexibility in 
handling data between ES 2.0 and ES 3.0/3.1 is massive.

One question, what happens with WebGL 2.0 support on WebKit? I ask because 
WebGL 2.0 is essentially OpenGL ES 3.x for JavaScript.

Best Regards,
 -Kevin


Re: [webkit-dev] WebKit GPU rendering possibility

2016-11-04 Thread Michael Catanzaro
On Fri, 2016-11-04 at 08:23 +0100, Carlos Garcia Campos wrote:
> What I can say as the GTK+ port maintainer is that we are very
> interested in this. We are actually looking for a cairo replacement,
> because unfortunately cairo is nowadays pretty much unmaintained. So,
> if FastUIDraw can be used by the GTK+ port to replace cairo, and the
> performance improvement is as impressive as reported, we would
> definitely switch.

Yes, to be clear, Carlos is representing the WebKitGTK+ development
team here. Since WebKitGTK+ is currently the primary WebKit port used
by Linux distros [1], this work would have the potential to be widely
deployed on Linux desktops worldwide. Maybe that will help you in your
quest for funding. Of course, we can't say whether we would actually
switch to this implementation before the code exists, but we're
definitely interested.

Other ports may not be as interested, as different WebKit ports have
very different graphics architectures. For instance, the Apple ports of
course don't run on Linux at all. But if you can demonstrate a
significant performance improvement in WebKitGTK+, then other ports
might take more notice.

I should mention, though, that we require support for hardware that
only supports OpenGL ES 2.0. If FastUIDraw can't handle this, then we
would need to keep a fallback codepath that uses Cairo, which would be
unfortunate.

Michael

[1] Sorry Konstantin ;)


Re: [webkit-dev] WebKit GPU rendering possibility

2016-11-04 Thread Carlos Garcia Campos
On Thu, 03-11-2016 at 07:50 +, Rogovin, Kevin wrote:
> Adding a new GraphicsContext is what I want to do, as it seems the
> path of least pain and suffering. However, I do not need to do all
> the other things a backend involves. I do not know how to add a
> GraphicsContext backend in terms of makefile magicks and
> configuration, and I also do not know the plumbing for making it
> active. In theory, FastUIDraw's GraphicsContext will work on any
> platform that does OpenGL 3.3 or OpenGL ES 3.0. What is the plumbing
> to do this? Years ago I remember that the build configuration is what
> governed which backend was built, and I usually just piggybacked onto
> another. I also remember there was an SDL-style backend that did not
> require a large toolkit, just SDL. Is that still alive? Where is it?
> I could piggyback the work there if it is still alive.
> 
> Also, to get permission to do this work, I need significant community
> enthusiasm; otherwise I will not be able to justify the large amount
> of work needed. This is another area where I need a great deal of
> help.

Community enthusiasm will depend on how this performs in web browser
benchmarks in the end, as others have said in this thread. But to check
that, you need to implement GraphicsContext first, so it's a bit of a
chicken-and-egg problem.

What I can say as the GTK+ port maintainer is that we are very
interested in this. We are actually looking for a cairo replacement,
because unfortunately cairo is nowadays pretty much unmaintained. So,
if FastUIDraw can be used by the GTK+ port to replace cairo, and the
performance improvement is as impressive as reported, we would
definitely switch.

You could start working on it on a branch, and we can help you with
the WebKit internal details (#webkitgtk+ on freenode, or
webkit-gtk@lists.webkit.org). And once you have something working we
will also help you upstream your changes.

> Best Regards,
>  -Kevin Rogovin
> 
> -Original Message-
> From: Carlos Garcia Campos [mailto:carlo...@webkit.org] 
> Sent: Thursday, November 3, 2016 9:43 AM
> To: Rogovin, Kevin ; Myles C. Maxfield  fi...@apple.com>
> Cc: webkit-dev@lists.webkit.org
> Subject: Re: [webkit-dev] WebKit GPU rendering possibility
> 
> On Thu, 03-11-2016 at 07:35 +, Rogovin, Kevin wrote:
> > Hi,
> > 
> > The main issue with making a FastUIDraw-based backend for Cairo is
> > clipping. Cairo tracks the clipping region on the CPU and does things
> > that are fine for CPU-based rendering (i.e. span-based rendering) but
> > are absolutely awful for GPU rendering (from my slides, one sees that
> > GL-backed QPainter and Cairo do much worse than CPU-backed). FastUIDraw
> > only supports clipIn and clipOut and pushes all the clipping work to
> > the GPU with almost no CPU work. It does NOT track the clipping region
> > at all. I can give more technical details on how it works (and those
> > details are why FastUIDraw cannot be used as a backend for Cairo).
> > For those interested in where the clipping code is located in
> > FastUIDraw, it is at src/fastuidraw/painter/painter.cpp, in the
> > methods clipInRect, clipOutPath and clipInPath. Their implementations
> > are very short and simple and are quite cheap on the CPU.
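
To visualize the difference being described, here is a purely conceptual C++
sketch; it is not the real Cairo or FastUIDraw API, and every name in it is
invented for illustration:

    #include <algorithm>
    #include <vector>

    struct Rect { float x0, y0, x1, y1; };
    struct Path { /* outline data */ };

    // Cairo-style: the clip region is intersected and tracked on the CPU, and
    // subsequent draws are trimmed against it before anything reaches the GPU.
    class CpuTrackedClipping {
    public:
        void clipInRect(const Rect& r) { m_stack.push_back(intersect(current(), r)); }
        Rect current() const { return m_stack.empty() ? Rect{0, 0, 1e9f, 1e9f} : m_stack.back(); }

    private:
        static Rect intersect(const Rect& a, const Rect& b)
        {
            return { std::max(a.x0, b.x0), std::max(a.y0, b.y0),
                     std::min(a.x1, b.x1), std::min(a.y1, b.y1) };
        }
        std::vector<Rect> m_stack;
    };

    // FastUIDraw-style, as described above: clip-in/clip-out just queue clip
    // geometry for the GPU; no region is computed or tracked on the CPU, so
    // the CPU cost is nearly zero.
    class GpuClipping {
    public:
        void clipInRect(const Rect& r) { queueClipDraw(&r, nullptr, /* clipIn */ true); }
        void clipOutPath(const Path& p) { queueClipDraw(nullptr, &p, /* clipIn */ false); }

    private:
        void queueClipDraw(const Rect*, const Path*, bool /* clipIn */)
        {
            // In a real renderer this would record a draw that updates GPU-side
            // clip state; here it is intentionally a no-op stub.
        }
    };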
> 
> I see. Then I guess adding a new GraphicsContext for FastUIDraw is
> the easiest and best way to try this out in WebKit. Would it be
> possible to just add a new GraphicsContext implementation? Or would
> you also need to change other parts of the graphics implementation or
> the GraphicsContext API itself?
> 
> > Best Regards,
> > -Kevin
> > 
> > -Original Message-
> > From: Carlos Garcia Campos [mailto:carlo...@webkit.org]
> > Sent: Thursday, November 3, 2016 9:27 AM
> > To: Rogovin, Kevin ; Myles C. Maxfield
> >  > fi...@apple.com>
> > Cc: webkit-dev@lists.webkit.org
> > Subject: Re: [webkit-dev] WebKit GPU rendering possibility
> > 
> > On Thu, 03-11-2016 at 06:58 +, Rogovin, Kevin wrote:
> > > Hi!
> > > 
> > > Question answers:
> > > 1.  Currently FastUIDraw has a backend to OpenGL 3.3 and OpenGL
> > > ES 3.0. One of its design goals is to make it not terribly awful
> > > to write a backend to different 3D APIs.
> > > 2.  I think I was unclear in my video. I have NOT migrated ANY
> > > UI rendering library to use Fast UI Draw. What I have done is
> > > made a demo (painter-cells) and ported that demo to Fast UI Draw,
> > > Cairo, Qt's QPainter and SKIA. The diffs between the ports are
> > > almost trivial (it really is just using those different rendering
> > > APIs).
> > 
> > That makes me wonder: would it be possible to add a new cairo
> > backend based on FastUIDraw? That would make it very easy to try it
> > out with the current GraphicsContext cairo backend.
> > 
> > > 3.  There are a few areas:
> > > a.  Reduce some render to offscreen buffers.