> To try to postpone the exit of a program after a critical error to me
> implies a much more complex testing and validation process that has
> identified all the shared state in the program and verified that it is
> correct in the case that a panic is caught.

There's an implicit argument here that the panic is, in fact, the result of
a critical error. This is my primary contention with the general use of
panic(): there is no guarantee for me, as the consumer of a panicking
library, that the panic in question is truly related to an unrecoverable
exception state that can only be resolved by a process exit.

I posit the question one last time: how can the author of shared code
understand, in sufficient detail, all the possible ways that the code could
be leveraged, such that he or she could determine, objectively, that any
given process must stop when a particular error state is encountered?

> There are a host of other reasons that can take a server offline
> abruptly. It seems like an odd misallocation of resources to try to
> prevent one specific case.

This, generally, is the argument that "if you can't stop all exceptions,
why bother to stop any?" Unlike exception states such as a cloud provider
abruptly terminating my instance or a data center losing power, uses of
panic() are entirely defined by developers and are not strictly tied to
unrecoverable exception states. The process exit in the case of a panic is
entirely preventable, unlike a true, systemic failure. To say that panic
leads to process termination and that panic is therefore equivalent to all
process-termination events is fallacious. I stand firm that only the
process developer knows when the process should exit.
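As a minimal sketch of that preventability (the function names here are my own, hypothetical choices), a deferred recover() on the same goroutine converts a library's panic back into an ordinary error, leaving the exit decision with the process developer:

```go
package main

import "fmt"

// doWork stands in for library code whose author decided that a
// particular condition warrants a panic.
func doWork(input string) {
	if input == "" {
		panic("empty input") // the library author's opinion, not mine
	}
	fmt.Println("processed:", input)
}

// handlePanicky (hypothetical name) converts the library's panic back
// into an ordinary error, so the caller decides whether to exit.
func handlePanicky(input string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic: %v", r)
		}
	}()
	doWork(input)
	return nil
}

func main() {
	if err := handlePanicky(""); err != nil {
		fmt.Println(err) // the process keeps running
	}
	handlePanicky("ok")
}
```

The important caveat is that recover() only works within the goroutine that panicked: a panic in a goroutine the library started itself cannot be caught by the caller and will still crash the process, which is part of why this debate matters.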

To put it more succinctly: The idea that your exception state should stop
my process is, well, that's just, like, your opinion, man.

On Tue, Apr 25, 2017, 21:52 Dave Cheney <d...@cheney.net> wrote:

> > Yes, and then crashes the program. In the scenario I described, with
> > thousands of other requests in flight that meet an abrupt end. That
> > could be incredibly costly, even if it's been planned for.
>
> There are a host of other reasons that can take a server offline abruptly.
> It seems like an odd misallocation of resources to try to prevent one
> specific case -- a goroutine panics due to a programming error or input
> validation failure -- both of which are far better addressed with testing.
>
> To try to postpone the exit of a program after a critical error to me
> implies a much more complex testing and validation process that has
> identified all the shared state in the program and verified that it is
> correct in the case that a panic is caught.
>
> To me it seems simpler and more likely to have the root cause of the panic
> addressed to just let the program crash. The alternative, somehow
> firewalling the crash, and its effects on the internal state of your
> program, sounds unworkably optimistic.
>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
