Doug Cutting wrote:
This is a shortcoming of Hadoop RPC. Ideally exceptions thrown on the
server would be re-thrown on the client, but the concern is that their
class might not exist there, so instead we transmit just the class
name and the error string; we do not attempt to re-throw the original
exception and instead throw a RemoteException. HDFS could patch
around this, but it would really be best to fix it in the RPC layer.
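[A minimal sketch of the behavior Doug describes, assuming the caller has
org.apache.hadoop.ipc.RemoteException on its classpath. FsCall is a
hypothetical stand-in for any FileSystem/RPC operation; getClassName() and
getMessage() are the only pieces of the server-side exception that cross the
wire.]

import java.io.IOException;

import org.apache.hadoop.ipc.RemoteException;

// What the client sees today: the server-side exception arrives flattened
// into a RemoteException carrying only the class name and the error string.
public class RemoteExceptionInspection {

    /** Hypothetical stand-in for any FileSystem/RPC operation. */
    public interface FsCall {
        void run() throws IOException;
    }

    /** Runs an RPC-backed call and reports what actually came back. */
    public static void inspect(FsCall call) {
        try {
            call.run();
        } catch (RemoteException re) {
            // Class of the exception thrown on the server, as a string only.
            System.err.println("server-side class: " + re.getClassName());
            System.err.println("server-side text : " + re.getMessage());
        } catch (IOException ioe) {
            System.err.println("local failure: " + ioe.getMessage());
        }
    }
}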
One could argue that the client side already has the classes for all the
exceptions declared in the FileSystem interface. Hence, if the server side
throws a subclass, the client side could throw the base class declared in
the signature of the method call. In a sense, RemoteException is an attempt
to do that, but it is too coarse, and it is too much work for the
application writer to unravel the RemoteException.
Yes, we could clean this up.
sanjay
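[As an illustration of the "unraveling" Sanjay mentions, and not Hadoop's
actual API: a client-side helper could try to rebuild the original exception
by name when its class happens to be on the client's classpath, falling back
to the RemoteException itself when it is not. The class and method names
below are hypothetical. Some Hadoop versions expose a similar helper,
RemoteException.unwrapRemoteException; if yours does, prefer that.]

import java.io.IOException;
import java.lang.reflect.Constructor;

import org.apache.hadoop.ipc.RemoteException;

// One way to "unravel" a RemoteException: if the class it names is on the
// client's classpath (as it is for the exceptions declared by FileSystem),
// rebuild it via reflection; otherwise hand back the RemoteException itself.
public class RemoteExceptionUnwrapper {

    public static IOException unwrap(RemoteException re) {
        try {
            Class<?> cls = Class.forName(re.getClassName());
            if (IOException.class.isAssignableFrom(cls)) {
                Constructor<?> ctor = cls.getConstructor(String.class);
                IOException unwrapped = (IOException) ctor.newInstance(re.getMessage());
                unwrapped.initCause(re);   // keep the original for debugging
                return unwrapped;
            }
        } catch (Exception e) {
            // Class not present locally, no (String) constructor, etc.:
            // give up and return the RemoteException unchanged.
        }
        return re;
    }
}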
Doug
Olga Natkovich wrote:
Hi,
In my code, I want to be able to distinguish access control problems from
other failures and give users a meaningful message. I noticed that in this
case org.apache.hadoop.fs.permission.AccessControlException is thrown, but
then it gets wrapped into other exceptions such as java.io.IOException
or org.apache.hadoop.ipc.RemoteException.
One way to figure this out is to recursively check whether the cause of the
exception is of type
org.apache.hadoop.fs.permission.AccessControlException. Is this the
right/best way to go about it?
Thanks,
Olga
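[A minimal sketch of the recursive check Olga describes, with one addition:
because, as Doug notes above, the server-side exception is never re-thrown
on the client, the RemoteException's class name is compared as well. The
class and method names here are illustrative only.]

import org.apache.hadoop.fs.permission.AccessControlException;
import org.apache.hadoop.ipc.RemoteException;

// Walk the cause chain for an AccessControlException, and also compare
// RemoteException's class name, since the server-side exception object
// itself never reaches the client.
public class AccessControlDetector {

    public static boolean isAccessControlProblem(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof AccessControlException) {
                return true;
            }
            // Exact-name match only; a subclass thrown on the server would
            // need extra handling here.
            if (cur instanceof RemoteException
                    && AccessControlException.class.getName().equals(
                            ((RemoteException) cur).getClassName())) {
                return true;
            }
        }
        return false;
    }
}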