No, I fully understand this. My point is that this doesn't seem to accurately 
represent the cost of exceptions.

In a JSON parser, since the topic has been brought up, you don't have a 
(1-P) fraction of calls that succeed (each paying N) and a P fraction that 
fail (each paying Y). You have a run of calls that succeed and *at most one* 
call that fails, because once you hit a failure, you stop parsing. This 
heavily biases calls in favor of succeeding, which is what I tried to 
illustrate with my anecdote.

I haven't done statistics in a while, but that looks like a geometric 
distribution to me: the expected number of successful calls before the first 
failure is roughly 1/P. That would give something like:

N_1 * (1/P) + Y_1 < N_2 * (1/P) + Y_2

in which N completely dominates Y, especially as P gets smaller.
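
To make that concrete, here's a quick back-of-the-envelope sketch in Swift. 
The cycle costs are invented and only the shape of the result matters: 
zeroCost stands in for a table-based ABI and branchy for an explicit-check 
ABI.

    // n = ABI overhead added to a call that succeeds,
    // y = ABI overhead added to a call that throws.
    // All numbers are made up, purely for illustration.
    struct ABI { let n: Double; let y: Double }

    let zeroCost = ABI(n: 0.5, y: 10_000)  // cheap success, costly throw
    let branchy  = ABI(n: 2.0, y: 50)      // small tax on every call

    // John's model: every call independently throws with probability p.
    func perCallCost(_ abi: ABI, p: Double) -> Double {
        abi.y * p + abi.n * (1 - p)
    }

    // Parser model: about 1/p successful calls, then at most one throw.
    func perRunCost(_ abi: ABI, p: Double) -> Double {
        abi.n * (1 / p) + abi.y
    }

    // As p shrinks, the success term dominates and the ABI with the
    // cheaper non-throwing path eventually wins.
    for p in [0.01, 0.001, 0.0001, 0.00001] {
        print(p, perRunCost(zeroCost, p: p), perRunCost(branchy, p: p))
    }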

Félix

> On Aug 9, 2016, at 16:22:08, John McCall <[email protected]> wrote:
> 
>> 
>> On Aug 9, 2016, at 8:19 AM, Félix Cloutier via swift-evolution 
>> <[email protected]> wrote:
>> 
>>> “Zero cost” EH is also *extremely* expensive in the case where an error is 
>>> actually thrown in normal use cases.  This makes it completely inappropriate 
>>> for use in APIs where errors are expected in edge cases (e.g. file not 
>>> found errors).
>> 
>> Anecdote: I work with a web service that gets several million hits a day. 
>> Management loves to use the percentage of succeeding web requests as a 
>> measure of overall health. The problem with that metric is that when a web 
>> request fails, clients fall into an unhealthy state and stop issuing 
>> requests for a while. Therefore, one failing request prevents maybe twenty 
>> more that would all have failed if the client hadn't bailed out, but these 
>> don't show up in the statistics. This makes us look much better than we 
>> actually are.
>> 
>> If I had any amount of experience with DTrace, I'd write a script that logs 
>> syscall errors to try and see how the programs that I use react to failures. 
>> I'm almost certain that when one thing stops working, most programs back 
>> out of a much bigger process and don't retry right away. When a program 
>> fails to open a file, it's also failing to read/write to it, or whatever 
>> else people normally do after they open files. These things are also 
>> expensive, and they're rarely the type of things that you need to (or even 
>> just can) retry in a tight loop. My perception is that the immediate cost of 
>> failing, even with expensive throwing, is generally dwarfed by the immediate 
>> cost of succeeding, so we're not necessarily losing out on much.
>> 
>> And if that isn't the case, there are alternatives to throwing that people 
>> are already embracing, to the point where error handling practices seem 
>> fractured.
>> 
>>>> I don't really know what to expect in terms of discussion, especially 
>>>> since it may boil down to "we're experts in this field and you're just 
>>>> peasants”
>>> 
>>> I’m not sure why you think the Swift team would say something that 
>>> derogatory.  I hope there is no specific action that has led to this 
>>> belief. If there is, then please let me know.
>> 
>> Of course not. All of you have been very nice and patient with us peasants, 
>> at least as far as "us" includes me. :) This was meant as a light-hearted 
>> reflection on discussing intimate parts of the language, where my best 
>> perspective is probably well-understood desktop/server development, whereas 
>> the core team has to see that but also needs a high focus on other things 
>> that don't even cross my mind (or at least, that's the heroic picture I have 
>> of you guys).
>> 
>> For instance, my "expensive" stops at "takes a while". Your "expensive" 
>> might mean "takes a while and drains the very finite energy reserves that we 
>> have on this tiny device" or something still more expansive. These 
>> differences are not always immediately obvious.
>> 
>>>> However, as linked above, someone did for Microsoft platforms (for 
>>>> Microsoft-platform-style errors) and found that there is an impact. 
>>> 
>>> C++ and Swift are completely different languages in this respect, so the 
>>> analysis doesn’t translate over.
>> 
>> The analysis was (probably?) done over C++ and HRESULTs but with the 
>> intention of applying it to another language (Midori), and it most likely 
>> validated the approach of other languages (essentially everything 
>> .NET-based). Several findings of the Midori team are being exported to 
>> Microsoft's new APIs, notably the async everywhere and exceptions everywhere 
>> paradigms, and these APIs are callable from both so-called managed programs 
>> (GCed) and unmanaged programs (ref-counted).
>> 
>> Swift operations don't tend to throw very much, which is a net positive, but 
>> it seems to me that comparing the impact of Swift throws with another 
>> language's throws is relatively fair. C# isn't shy of 
>> FileNotFoundExceptions, for instance.
> 
> I think you may be missing Chris's point here.
> 
> Exception ABIs trade off between two different cases: when the callee throws 
> and when it doesn't.  (There are multiple dimensions of trade-off here, but 
> let's just talk about cycle-count performance.)  Suppose that a compiler can 
> implement a call to have cost C if it just "un-implements" exceptions, the 
> way that a C++ compiler does when they're disabled.  If we hand-wave a bit, 
> we can pretend that all the costs are local and just say that any particular 
> ABI will add cost N to calls that don't throw and cost Y to calls that do.  
> Therefore, if calls throw with probability P, ABI 1 will be faster than ABI 2 
> if:
>    Y_1 * P + N_1 * (1 - P) < Y_2 * P + N_2 * (1 - P)
> 
> So what is P?  Well, there's a really important difference between 
> programming languages.
> 
> In C++ or C#, you have to compute P as a proportion of every call made by the 
> program.  (Technically, C++ has a way to annotate that a function doesn't 
> throw, and it's possible under very specific circumstances for a C++ or C# 
> implementation to prove that even without an annotation; but for the most 
> part, every call must be assumed to be able to throw.)  Even if exceptions 
> were idiomatically used in C++ for error reporting the way they are in Java 
> and C#, the number of calls to such "failable" functions would still be 
> completely negligible compared to the number of calls to functions that 
> literally cannot throw unless (maybe!) the system runs out of memory.  
> Therefore, P is tiny — maybe one in a trillion, or one in a million in C# if 
> the programmer hasn't yet discovered the non-throwing APIs for testing file 
> existence.  At that kind of ratio, it becomes imperative to do basically 
> anything you can to move costs out of N.
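> 
> To put rough, invented numbers on that: with P around one in a million, the 
> N * (1 - P) term is essentially all that matters, so the ABI with the 
> cheapest non-throwing path wins no matter how expensive its throw path is:
> 
>     let p = 1e-6
>     // Invented cycle costs: tables make success nearly free but throwing
>     // very expensive; explicit checks tax every call a little.
>     let tables = 10_000.0 * p + 0.5 * (1 - p)  // about 0.51 per call
>     let checks = 50.0 * p + 2.0 * (1 - p)      // about 2.00 per call
>     print(tables < checks)  // true: at this P, the table-based ABI wins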
> 
> But in Swift, arbitrary functions can't throw.  When computing P, the 
> denominator only contains calls to functions that really can report some sort 
> of ordinary semantic failure.  (Unless you're in something like a rethrows 
> function, but most of those are pretty easy to specialize for non-throwing 
> argument functions.)  So P is a lot higher just to begin with.
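> 
> As a minimal sketch of how the language enforces this (only functions marked 
> throws can throw, and every potentially throwing call is marked with try, so 
> the compiler statically knows which calls even have a failure path):
> 
>     enum ParseError: Error { case unexpectedToken }
> 
>     func parseNumber(_ s: String) throws -> Int {
>         guard let n = Int(s) else { throw ParseError.unexpectedToken }
>         return n
>     }
> 
>     func double(_ n: Int) -> Int { n * 2 }  // provably cannot throw
> 
>     do {
>         let n = try parseNumber("42")  // failable call: marked with try
>         print(double(n))               // unmarked: carries no error ABI cost
>     } catch {
>         print("parse failed:", error)
>     }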
> 
> Furthermore, there are knock-on effects here.  Error-handling is a really 
> nice way to solve certain kinds of language problem.  (Aside: I keep running 
> into people writing things like JSON deserializers who for some reason insist 
> on making their lives unnecessarily difficult by manually messing around with 
> Optional/Either results or writing their own monad + combinator libraries or 
> what not.  Folks, there's an error monad built into the language, and it is 
> designed exactly for this kind of error-propagation problem; please just use 
> it.)  But we know from experience that the expense (and other problems) of 
> exception-handling in other languages drives people towards other, much more 
> awkward mechanisms when they expect P to be higher, even if "higher" is still 
> just 1 in 100 or so.  That's awful; to us, that's a total language failure.
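> 
> For instance, a toy decoder written against the built-in mechanism (the JSON 
> enum and the field names here are invented for illustration):
> 
>     enum JSON {
>         case number(Double), string(String), object([String: JSON])
>     }
> 
>     enum DecodeError: Error { case missingField(String), typeMismatch(String) }
> 
>     struct User { let name: String; let age: Double }
> 
>     // throws/try propagate failures for free; no hand-rolled
>     // Optional/Either plumbing at every step.
>     func decodeUser(_ json: JSON) throws -> User {
>         guard case .object(let fields) = json else {
>             throw DecodeError.typeMismatch("object")
>         }
>         guard case .string(let name)? = fields["name"] else {
>             throw DecodeError.missingField("name")
>         }
>         guard case .number(let age)? = fields["age"] else {
>             throw DecodeError.missingField("age")
>         }
>         return User(name: name, age: age)
>     }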
> 
> So the shorter summary of the longer performance argument is that (1) we 
> think that our language choices already make P high enough that the zero-cost 
> trade-offs are questionable and (2) those trade-offs, while completely 
> correct for other languages, are known to severely distort the ways that 
> programmers use exceptions in those languages, leading to worse code and more 
> bugs.  So that's why we aren't using zero-cost exceptions in Swift.
> 
> John.

_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
