This is a shortcoming of Hadoop RPC. Ideally, exceptions thrown on the server would be re-thrown on the client, but the concern is that their class might not exist there. So instead of attempting to re-throw the original exception, we transmit just the class name and the error string and throw a RemoteException. HDFS could patch around this, but it would really be best to fix it in the RPC layer.
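
For example, the most a client can do today is compare the transmitted
class name. A minimal sketch, assuming RemoteException exposes the
server-side class name via getClassName():

    import java.io.IOException;
    import org.apache.hadoop.fs.permission.AccessControlException;
    import org.apache.hadoop.ipc.RemoteException;

    public class PermissionErrorCheck {
      /** True if the IOException represents a server-side access control failure. */
      static boolean isAccessControlError(IOException e) {
        if (e instanceof RemoteException) {
          // Only the class name and message survive the RPC boundary,
          // so match on the name rather than on the exception type.
          return AccessControlException.class.getName()
              .equals(((RemoteException) e).getClassName());
        }
        // Locally raised (non-RPC) case.
        return e instanceof AccessControlException;
      }
    }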

Doug

Olga Natkovich wrote:
Hi,
In my code, I want to be able to differentiate access control problems
and give a meaningful message to the users. I noticed that in this case
org.apache.hadoop.fs.permission.AccessControlException is thrown but
then it gets wrapped into other exceptions such as java.io.IOException
or org.apache.hadoop.ipc.RemoteException. One way to figure it out is to
recursively check whether the cause of the exception is of type
org.apache.hadoop.fs.permission.AccessControlException. Is this the
right/best way to go about it?
Thanks, Olga
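
Below is a minimal sketch of the cause-chain walk Olga describes; the
helper name is illustrative, not an existing Hadoop API. Note that, per
the reply above, a RemoteException carries only the class name and
message, so this walk will not find the original AccessControlException
across the RPC boundary; it only catches the locally wrapped case.

    import org.apache.hadoop.fs.permission.AccessControlException;

    public class CauseCheck {
      /** Walks Throwable.getCause() looking for an AccessControlException. */
      static boolean causedByAccessControl(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
          if (cur instanceof AccessControlException) {
            return true;
          }
        }
        return false;
      }
    }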

