Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-05-22 Thread Michal Debski






Hi,
 
sorry but this really bugs me. Isn't this enough?
 
namespace WTF {
using nanoseconds = std::chrono::duration<double, std::nano>;
using microseconds = std::chrono::duration<double, std::micro>;
using milliseconds = std::chrono::duration<double, std::milli>;
using seconds = std::chrono::duration<double>;
using minutes = std::chrono::duration<double, std::ratio<60>>;
using hours = std::chrono::duration<double, std::ratio<3600>>;

template<typename Clock>
std::chrono::time_point<Clock> now()
{
    return Clock::now();
}
}
 
and forbid using std::chrono::clock::now()/durations with check-style. It seems like the best of both worlds. Oh and the infinity:
 
namespace std {
namespace chrono {
template<>
struct duration_values<double> {
    static constexpr double min() { return -std::numeric_limits<double>::infinity(); }
    static constexpr double zero() { return 0.0; }
    static constexpr double max() { return std::numeric_limits<double>::infinity(); }
};
}
}
 
Best regards,
Michal Debski
 
--- Original Message ---
Sender : Filip Pizlo
Date : May 23, 2016 02:41 (GMT+01:00)
Title : [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time
 


Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-05-22 Thread Brady Eidson

> On May 22, 2016, at 6:41 PM, Filip Pizlo  wrote:

[webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-05-22 Thread Filip Pizlo
Hi everyone!

I’d like us to stop using std::chrono and go back to using doubles for time.  
First I list the things that I think we wanted to get from std::chrono - the 
reasons why we started switching to it in the first place.  Then I list some 
disadvantages of std::chrono that we've seen from fixing std::chrono-based 
code.  Finally I propose some options for how to use doubles for time.

Why we switched to std::chrono

A year ago we started using std::chrono for measuring time.  std::chrono has a 
rich type system for expressing many different kinds of time.  For example, you 
can distinguish between an absolute point in time and a relative time.  And you 
can distinguish between different units, like nanoseconds, milliseconds, etc.

Before this, we used doubles for time.  std::chrono’s advantages over doubles 
are:

Easy to remember what unit is used: We sometimes used doubles for milliseconds 
and sometimes for seconds.  std::chrono prevents you from getting the two 
confused.

Easy to remember what kind of clock is used: We sometimes use the monotonic 
clock and sometimes the wall clock (aka the real time clock).  Bad things would 
happen if we passed a time measured using the monotonic clock to functions that 
expected time measured using the wall clock, and vice-versa.  I know that I’ve 
made this mistake in the past, and it can be painful to debug.

In short, std::chrono uses compile-time type checking to catch some bugs.

Disadvantages of using std::chrono

We’ve seen some problems with std::chrono, and I think that the problems 
outweigh the advantages.  std::chrono suffers from a heavily templatized API 
that results in template creep in our own internal APIs.  std::chrono’s default 
of integers without overflow protection means that math involving std::chrono 
is inherently more dangerous than math involving double.  This is particularly 
bad when we use time to speak about timeouts.

Too many templates: std::chrono uses templates heavily.  It’s overkill for 
measuring time.  This leads to verbosity and template creep throughout common 
algorithms that take time as an argument.  For example if we use doubles, a 
method for sleeping for a second might look like sleepForSeconds(double).  This 
works even if someone wants to sleep for a nanosecond, since 0.000000001 is easy 
to represent using a double.  Also, multiplying or dividing a double by a small 
constant factor (1,000,000,000 is small by double standards) is virtually 
guaranteed to avoid any loss of precision.  But as soon as such a utility gets 
std::chronified, it becomes a template.  This is because you cannot have 
sleepFor(std::chrono::seconds), since that wouldn’t allow you to represent 
fractions of seconds.  This brings me to my next point.

Overflow danger: std::chrono is based on integers and its math methods do not 
support overflow protection.  This has led to serious bugs like 
https://bugs.webkit.org/show_bug.cgi?id=157924.  This cancels out the 
“remember what unit is used” benefit cited above.  It’s true that I know what 
type of time I have, but as soon as I duration_cast it to another unit, I may 
overflow.  The type system does not help!  This is insane: std::chrono requires 
you to do more work when writing multi-unit code, so that you satisfy the type 
checker, but you still have to be just as paranoid around multi-unit scenarios. 
 Forgetting that you have milliseconds and using it as seconds is trivially 
fixable.  But if std::chrono flags such an error and you fix it with a 
duration_cast (as any std::chrono tutorial will tell you to do), you’ve just 
introduced an unchecked overflow and such unchecked overflows are known to 
cause bugs that manifest as pages not working correctly.

I think that doubles are better than std::chrono in multi-unit scenarios.  It 
may be possible to have std::chrono work with doubles, but this probably 
implies us writing our own clocks.  std::chrono’s default clocks use integers, 
not doubles.  It also may be possible to teach std::chrono to do overflow 
protection, but that would make me so sad since using double means not having 
to worry about overflow at all.

The overflow issue is interesting because of its implications for how we do 
timeouts.  The way to have a method with an optional timeout is to do one of 
these things:

- Use 0 to mean no timeout.
- Have one function for timeout and one for no timeout.
- Have some form of +Inf or INT_MAX to mean no timeout.  This makes so much 
mathematical sense.

WebKit takes the +Inf/INT_MAX approach.  I like this approach the best because 
it makes the most mathematical sense: not giving a timeout is exactly like 
asking for a timeout at time-like infinity.  When used with doubles, this Just 
Works.  +Inf is greater than any value and it gets preserved properly in math 
(+Inf * real = +Inf, so it survives gracefully in unit conversions; +Inf + real 
= +Inf, so it also survives 

Re: [webkit-dev] Networking proxy on iOS

2016-05-22 Thread Daniel Olegovich Lazarenko
What if I make a bug report in bugzilla about making a design spec of this
feature. I could then write down implementation details options and
summarize all points we've discussed here. Maybe in a form of a google
document. This spec could then be reviewed and approved for action by you
and any other interested people you want to be involved. Do you think it
could work better?


Re: [webkit-dev] Networking proxy on iOS

2016-05-22 Thread Brady Eidson

> On May 22, 2016, at 3:14 AM, Daniel Olegovich Lazarenko  
> wrote:
> 
> > It’s not yet clear what the ideal architecture is, which is one of the 
> > reasons why the mentioned issue remains unresolved.
> 
> What are the other reasons?

Perhaps I misrepresented - AFAIK that is the only important reason.

> Are there any reasons that block us from discussing the right architecture?
> I'd like to start working on a patch, but I need directions from you.

I replied to this thread to describe significant issues with the two approaches 
you suggested.

But I am not able to conclude the thread by unilaterally giving directions to a 
lone contributor on how to properly implement this feature.

That’s a much broader conversation than just you and I.

Thanks,
~Brady


Re: [webkit-dev] Networking proxy on iOS

2016-05-22 Thread Daniel Olegovich Lazarenko
> It’s not yet clear what the ideal architecture is, which is one of the
reasons why the mentioned issue remains unresolved.

What are the other reasons?
Are there any reasons that block us from discussing the right architecture?
I'd like to start working on a patch, but I need directions from you.

I'd like to come up with some sort of a plan for this as well. Since the
desired approach sounds complicated, it would be nice to split it as a
series of patches where each patch is committed separately and improves the
feature towards the goal.

On Sun, May 22, 2016 at 6:16 AM, Brady Eidson  wrote:

>
> On May 21, 2016, at 2:05 PM, Daniel Olegovich Lazarenko 
> wrote:
>
> > We are exploring ways to restore that full functionality -
> https://bugs.webkit.org/show_bug.cgi?id=138169
>
> Having custom scheme protocols is important to me too. I didn't know that
> it is broken with WKWebView. Do you know what exactly is broken?
>
>
> From most developers’ perspective, what is broken is that their
> NSURLProtocol they can register in their “UI Process” that used to work in
> WK1 views no longer has any effect in WK2.
>
>
> I thought that if you call [WKBrowsingContextController
> registerSchemeForCustomProtocol:] with your scheme, then it works. When I
> checked last (a year ago) it was implemented in a way that the WebProcess/
> NetworkingProcess would pass a message to UIProcess, and handle the
> network request in the UIProcess. Did it change?
>
>
> That did not change.
>
> But that mechanism was never API, and even as SPI it is formally
> deprecated.
>
> Assuming that registerSchemeForCustomProtocol still works the same way,
> you basically state that you dislike the current solution (that does the
> work in the UIProcess), and want to have a different architecture.
>
> For custom networking or proxying you have to run the app-provided code.
> The basic strategy I proposed was to run it in the app process (i.e.
> UIProcess). Since you don't want any networking in UIProcess, it means that
> the app needs to spawn a dedicated process to do custom networking. This
> process would run app-specific code (including NSURLProtocol-s), and
> communicate by IPC with the NetworkingProcess. Is this a kind of
> architecture you would like to have?
>
>
> It’s not yet clear what the ideal architecture is, which is one of the
> reasons why the mentioned issue remains unresolved.
>
> Thanks,
> ~Brady
>
>
>
>
> On Fri, May 20, 2016 at 5:58 PM, Brady Eidson  wrote:
>
>>
>> On May 20, 2016, at 2:30 AM, Daniel Olegovich Lazarenko <
>> dani...@opera.com> wrote:
>>
>> Thank you for such a fast reply. That is amazing! :)
>> Back to questions...
>>
>> > Are you primarily focused on a custom networking layer (e.g. your own
>> HTTP implementation?),
>> > or with custom protocol handling for non-http protocols?
>>
>> I'm primarily concerned about HTTP and friends (HTTPS, SPDY, HTTP2), but
>> if there are any other widely used protocols that's also interesting. FTP
>> support is not interesting for me. Do you have any other specific things in
>> mind?
>>
>> If there's a custom proprietary protocol that people handle in the app
>> with their own code, for example, acme://acme.com:1234, then proxying
>> this thing is not very interesting to me, because it's very easy to proxy
>> my own protocol handled by my own code. There's a case when "acme" is
>> provided by some 3rd party, and the app author doesn't have the processing
>> code. In such case it might be interesting to proxy it as well, but then
>> again, I'm asking for a concrete example of such protocol (in WebKit
>> context).
>>
>>
>> In a WebKit1 app (WebView on Mac, UIWebView on iOS), app authors were
>> able to use NSURLProtocol to override any scheme they wished.
>>
>> While some such as yourself might’ve used it to override http from the
>> default handling, *many more* used it to implement custom protocols. e.g. “
>> acme://my-app-specific-url”
>>
>> We are exploring ways to restore that full functionality -
>> https://bugs.webkit.org/show_bug.cgi?id=138169
>>
>> > You seem to dismiss the Networking process’ crash recovery aspect.
>> > "because in practice we know that most of the crashes happen in the
>> WebProcess parts”.
>> > I’m curious what data you’re using to make that claim?
>>
>> Well, I'm not dismissing it, it's definitely a trade off that an app
>> author will make by choosing to enable this option.
>> The data comes from our web browser apps. We certainly see networking
>> faults, but in total it was usually a minor part of all the WebKit crashes.
>> To not sound subjective, I've looked through our current app version, which
>> already has enough data to judge, and in the top WebKit crashes there are
>> none in the network code. Most are crashes in JavaScriptCore, DOM and
>> graphics subsystems. This is the experience we have over many versions and
>> years of service. I might be able to show you the data in