Hi all,

I'm developing an endpoint coprocessor at the moment for one of our applications, and I'm trying to figure out the best means to handle errors within the coprocessor. A couple of requirements:

1. I don't want the coprocessor to ever be unloaded on an error (my assumption is that errors will generally be independent across queries, and that the coprocessor should keep serving other queries even if one fails).
2. I'd like to be able to recognize and handle certain application exception classes on the client side (BAD_REQUEST-type cases, primarily).

For this, it seems like I have a couple of options:

1. Handle exceptions in the coprocessor using org.apache.hadoop.hbase.protobuf.ResponseConverter#setControllerException. This seems to be the method used by the example coprocessors in the codebase, and it has the nice property of not requiring me to represent the errors explicitly in my response type. On the other hand, the docs for com.google.protobuf.RpcController#setFailed say it shouldn't be used for machine-readable exceptions, which should instead be represented in the response type; this makes it difficult to extract the exception type on the client side.

2. Represent exceptions in the response type proto (e.g. message MyResponse { optional Return result = 1; optional Error error = 2; }). This is a little less transparent than the previous approach, but lets me handle the exceptions more flexibly.
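For concreteness, here's one way that message could be shaped (a sketch only; the message, field, and value names are illustrative, not from any existing schema). Carrying a stable error class string gives the client something machine-readable to dispatch on:

```proto
// Hypothetical proto2 sketch of a response that carries either a result or a
// structured error. All names here are illustrative.
message MyError {
  // A stable, machine-readable error class the client can switch on.
  optional string error_class = 1;  // e.g. "BAD_REQUEST"
  optional string message = 2;      // human-readable detail
}

message MyResponse {
  optional bytes result = 1;   // set on success
  optional MyError error = 2;  // set on failure
}
```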

3. A hybrid approach: use an explicit error representation in my response type proto for the exceptions I care about, and setControllerException for everything else.
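To illustrate the client-side dispatch that options 2 and 3 enable, here's a minimal sketch in plain Java. The classes are hand-written stand-ins for generated protobuf messages; MyResponse, ErrorInfo, and BadRequestException are hypothetical names, not HBase or protobuf API:

```java
// Plain-Java stand-ins for generated protobuf message classes (illustrative only).
class ErrorInfo {
    final String errorClass;  // machine-readable class, e.g. "BAD_REQUEST"
    final String message;
    ErrorInfo(String errorClass, String message) {
        this.errorClass = errorClass;
        this.message = message;
    }
}

class MyResponse {
    final String result;    // stand-in for the real return payload
    final ErrorInfo error;  // null when the call succeeded
    MyResponse(String result, ErrorInfo error) {
        this.result = result;
        this.error = error;
    }
    boolean hasError() { return error != null; }
}

class BadRequestException extends RuntimeException {
    BadRequestException(String message) { super(message); }
}

public class Client {
    // Unwrap the response: translate an application-level error carried in the
    // message into a typed exception the caller can catch selectively.
    static String unwrap(MyResponse response) {
        if (response.hasError()) {
            if ("BAD_REQUEST".equals(response.error.errorClass)) {
                throw new BadRequestException(response.error.message);
            }
            throw new RuntimeException(
                response.error.errorClass + ": " + response.error.message);
        }
        return response.result;
    }

    public static void main(String[] args) {
        // Success path: the result comes through unchanged.
        System.out.println(unwrap(new MyResponse("ok", null)));

        // Error path: a BAD_REQUEST error becomes a typed exception.
        try {
            unwrap(new MyResponse(null, new ErrorInfo("BAD_REQUEST", "bad key")));
        } catch (BadRequestException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The point of the sketch is only the shape of the pattern: because the error class travels inside the response message rather than through RpcController#setFailed's free-form string, the client can recover the exception type without parsing messages.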

Any suggestions on the best practice here? As I said, the example coprocessors seem to prefer option 1, but they also don't allow for differentiating exception types on the client side, so I'm a bit unclear on the best way to do this.

Thank you very much for your time!

Andrew
