>
> Most languages that stick with exceptions usually have two kinds of 
> exceptions:
> - "exceptions for regular errors", i.e. wrong input, wrong system state, or 
> for "control flow",
> - and "fatal exceptions".
>

Agreed. 
Current error management is not satisfying and pushes the community to look 
for a convention**, 
which unfortunately seems impossible to reach. 
The only one who has really moved things forward is Dave Cheney with pkg/errors.
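For reference, the pkg/errors approach boils down to wrapping an error with 
context where it occurs and inspecting the root cause at the top. A minimal 
sketch; the readConfig helper and the config path are made up for illustration:

    package main

    import (
        "fmt"
        "os"

        "github.com/pkg/errors"
    )

    // readConfig is a hypothetical helper: instead of returning the bare
    // error, it wraps it with context (and a stack trace).
    func readConfig(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return errors.Wrap(err, "reading config")
        }
        return f.Close()
    }

    func main() {
        if err := readConfig("/etc/app.conf"); err != nil {
            // errors.Cause unwraps back to the original error value.
            fmt.Printf("%v (cause: %v)\n", err, errors.Cause(err))
        }
    }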

** One might say: from now on, all errors must implement IsFatal so that the 
consumer can determine the severity of the error it is dealing with.
It is not impossible to do today, but two packages not written with this 
rule in mind won't match.
And no consensus has emerged, so we are left with a broken leg.
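A sketch of what such a convention could look like; the fataler interface 
and the isFatal helper are hypothetical, nothing in the standard library or 
in pkg/errors defines them:

    package errseverity

    // fataler is the hypothetical convention: errors that can report
    // their own severity.
    type fataler interface {
        IsFatal() bool
    }

    // isFatal treats any error from a package that doesn't follow the
    // convention as non-fatal, which is exactly where two packages not
    // written with the same rule in mind stop matching.
    func isFatal(err error) bool {
        f, ok := err.(fataler)
        return ok && f.IsFatal()
    }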

PS: panic is just another capability the language provides; there's no need 
to fall into a dogma like `never use it`.
Like everything else in Go, use it with care, or panic and fix it.

On Wednesday, April 26, 2017 at 7:30:36 AM UTC+2, Sokolov Yura wrote:
>
> It looks like there are two points of view:
>
> - optimists, who have never built mutable shared state themselves and hope 
> that the libraries they use don't use mutable shared state either,
> - and those who know that mutable shared state usually exists.
>
> In the absence of mutable shared state it is perfectly valid to recover, to 
> imitate what PHP or Erlang does. But PHP and Erlang have real "process" 
> isolation, and they tend not to recover, because concurrent requests really 
> are "not affected".
>
> Erlang's philosophy is "let it crash", because there is always a "supervisor" 
> in a separate isolated "process" that will respawn the "worker process".
>
> PHP will let the whole process crash if it hits a C-level "assert", because 
> a process doesn't serve more than one request at a time, and there is also 
> always a supervisor that will spawn a new process to serve requests.
>
> But in Go you have no such support from the runtime. So, if you want to 
> `recover`, you have to inspect all third-party libraries to check that they 
> don't have mutable shared state. But if you did inspect them, then why 
> didn't you prevent them from panicking? Why did you pass input that leads to 
> a panic you "allowed yourself to recover from"? Didn't you test your program 
> enough?
>
> Most languages that stick with exceptions usually have two kinds of 
> exceptions:
> - "exceptions for regular errors", i.e. wrong input, wrong system state, or 
> for "control flow",
> - and "fatal exceptions".
> The first kind is safe to catch and recover from.
> The second kind is always documented as "you'd better crash, don't recover 
> from it". They usually have a separate inheritance root, so a regular 
> 'catch' doesn't catch them (some are even checked by the runtime so they 
> cannot be caught). And no one in their right mind will catch those exceptions.
>
> Go says:
> - the first kind is just an error. Return the error, analyze the error, 
> re-return the error, and you will be happy, and your hair will shine.
> - The second kind is... yeah, it should be panic. You'd better not recover. 
> But you know what... I sometimes use it for control flow... and I do 
> recover in 'net/http' because I pretend to be the new PHP... So you have no 
> "blessed way" to signal a "fatal error". Go ahead and roll your own 
> "super-panic" with "debug.PrintStack(); os.Exit(1)".
>
> I'm sad falcon.
>
> PS. To be fair, "fatal exceptions" usually still allow `finally` blocks to 
> run, so my point about "optimize defer in the absence of recover" is not 
> perfectly valid.
>
> On 26 Apr 2017 at 7:24 AM, "Chris G" <ch...@guiney.net> wrote:
>
>>
>>
>> On Tuesday, April 25, 2017 at 7:52:25 PM UTC-7, Dave Cheney wrote:
>>>
>>> > Yes, and then crashes the program. In the scenario I described, with 
>>> thousands of other requests in flight that meet an abrupt end. That could 
>>> be incredibly costly, even if it's been planned for.
>>>
>>> There are a host of other reasons that can take a server offline 
>>> abruptly. It seems like an odd misallocation of resources to try to prevent 
>>> one specific case - a goroutine panicking due to a programming error or 
>>> input validation failure - both of which are far better addressed with 
>>> testing.
>>>
>> There's a cost-benefit analysis to be done, for sure, but I don't always 
>> believe it to be a misallocation of resources. I don't believe it's costly 
>> for every program, and for programs where it matters, I don't believe it's 
>> always a hard thing to accomplish. To your point, for a great many 
>> programs, the effort probably isn't worth the reward.
>>  
>>
>>> To try to postpone the exit of a program after a critical error to me 
>>> implies a much more complex testing and validation process that has 
>>> identified all the shared state in the program and verified that it is 
>>> correct in the case that a panic is caught.
>>>
>> Not always applicable, but there are some relatively easy ways of coping 
>> with that:
>> - Don't have shared state to begin with (for a large number of programs, 
>> this isn't that hard! Look at how far PHP has gotten, for example)
>> - Don't have mutable shared state
>> - Copy on write, and only publish immutable shared state (see the sketch 
>> below)
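>> A rough sketch of that last point using sync/atomic (the Config type and 
>> the publish/load names are made up):
>>
>>     package main
>>
>>     import "sync/atomic"
>>
>>     // Config is an immutable snapshot; readers never see a partial update.
>>     type Config struct {
>>         Endpoints []string
>>     }
>>
>>     var current atomic.Value // always holds a *Config
>>
>>     // publish swaps in a fully built snapshot. If the goroutine building
>>     // the next snapshot panics before Store, readers keep the old one.
>>     func publish(c *Config) { current.Store(c) }
>>
>>     func load() *Config { return current.Load().(*Config) }
>>
>>     func main() {
>>         publish(&Config{Endpoints: []string{"10.0.0.1:80"}})
>>         _ = load().Endpoints
>>     }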
>>
>> Those properties can also make testing and validation much easier, I 
>> should note. And with those properties, I don't think it's necessarily hard 
>> to isolate a particular lifecycle, for example, an http request. 
>>
>> Often it can just be an http handler that defers a recover and calls the 
>> real handler. In the case of publishing an immutable object graph to 
>> shared state, only publish it once it's verified. If a panic occurs in 
>> the publishing goroutine, the published state remains in a known-good 
>> condition.
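>> Something along these lines, for illustration only:
>>
>>     package main
>>
>>     import (
>>         "log"
>>         "net/http"
>>     )
>>
>>     // recoverHandler wraps the real handler; a panic fails only the
>>     // request that triggered it, and the server keeps serving the rest.
>>     func recoverHandler(next http.Handler) http.Handler {
>>         return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
>>             defer func() {
>>                 if v := recover(); v != nil {
>>                     log.Printf("panic serving %s: %v", r.URL.Path, v)
>>                     http.Error(w, "internal error", http.StatusInternalServerError)
>>                 }
>>             }()
>>             next.ServeHTTP(w, r)
>>         })
>>     }
>>
>>     func main() {
>>         mux := http.NewServeMux()
>>         mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
>>             panic("boom") // stand-in for a real programming error
>>         })
>>         log.Fatal(http.ListenAndServe(":8080", recoverHandler(mux)))
>>     }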
>>
>> Of course, it's very possible to imagine a program that is complex enough 
>> that shared state isn't simple to manage. I would also argue, 
>> independently of whether it's worth any effort to make a single lifecycle 
>> crash-safe, that as a program reaches that level of complexity, one should 
>> question whether all of that state belongs in the same process at all.   
>> Split it up and get process isolation from the operating system (and scale 
>> that up to multiple machines as well, to your third point).
>>
>>> To me it seems simpler and more likely to have the root cause of the 
>>> panic addressed to just let the program crash. The alternative, somehow 
>>> firewalling the crash, and its effects on the internal state of your 
>>> program, sounds unworkably optimistic.
>>>
>>
>> I'm by no means advocating for leaving a fault in a program. I don't 
>> believe these are alternatives at all! Fix your program!  But I certainly 
>> don't think resiliency within a process space is always unworkable.  
>> Perhaps optimistic, I'll give you that :)
>>
>
