[ https://issues.apache.org/jira/browse/HDFS-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12757457#action_12757457 ]

Eli Collins commented on HDFS-245:
----------------------------------

I think well-designed APIs don't require catching exceptions for normal control 
flow. My guess is that they chose to throw a SecurityException because they 
wanted to make failing permission checks impossible to ignore (you have to 
either catch or rethrow), whereas it's much easier to ignore a function return 
value, leading to a security flaw. Perhaps that's the right trade-off in this 
case. Also, perhaps continuing to throw an AccessControlException is reasonable 
given that it's the convention for a library used this extensively. I also 
don't consider java.lang to be the final word on what's best for a given system 
(just look at how it's evolved); you have to consider the trade-off. For 
example, if exceptions turn out to be pretty slow, that may not be a big deal 
for SecurityException, since it's not thrown frequently in the common case, but 
it's not at all good for UnresolvedPathException, since it may be thrown 
heavily (e.g. if the root NN has nothing but symlinks to other NNs).
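To make the contrast concrete, here is a minimal sketch (not the HDFS or JDK code; the exception class and method names are made up for illustration) of the two styles discussed above. With a checked exception the compiler forces the caller to catch or rethrow a failed permission check, while a boolean return can be silently dropped:

```java
public class PermissionCheckStyles {

    // Stand-in checked exception, analogous in spirit to
    // org.apache.hadoop.security.AccessControlException.
    static class AccessControlException extends Exception {
        AccessControlException(String msg) { super(msg); }
    }

    // Style 1: throw on failure. Callers cannot silently ignore a denied
    // check; they must catch the exception or declare it and rethrow.
    static void checkPermission(boolean allowed) throws AccessControlException {
        if (!allowed) {
            throw new AccessControlException("permission denied");
        }
    }

    // Style 2: return a status. Nothing stops a caller from discarding the
    // result, which is the security-flaw risk mentioned above.
    static boolean hasPermission(boolean allowed) {
        return allowed;
    }

    public static void main(String[] args) {
        // Return-value style: compiles and runs even though the caller
        // never looks at the result, so a denial goes unnoticed.
        hasPermission(false);

        // Exception style: the denial must be handled explicitly.
        try {
            checkPermission(false);
            System.out.println("granted");
        } catch (AccessControlException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```

The flip side, as noted above, is cost: this trade-off is fine for a rarely thrown SecurityException but poor for something thrown on many ordinary lookups.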

So in this case, and in general, I think the above is a good API design 
principle. There are real reasons why exceptions are not appropriate for normal 
control flow; the trade-offs need to be considered for the given case. In this 
jira the only argument I've read for throwing the exception is that it avoids 
modifying a lot of function prototypes, which, to me, does not justify bending 
a good principle.

Thanks,
Eli

> Create symbolic links in HDFS
> -----------------------------
>
>                 Key: HDFS-245
>                 URL: https://issues.apache.org/jira/browse/HDFS-245
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: dhruba borthakur
>            Assignee: Eli Collins
>         Attachments: 4044_20081030spi.java, HADOOP-4044-strawman.patch, 
> symlink-0.20.0.patch, symLink1.patch, symLink1.patch, symLink11.patch, 
> symLink12.patch, symLink13.patch, symLink14.patch, symLink15.txt, 
> symLink15.txt, symLink4.patch, symLink5.patch, symLink6.patch, 
> symLink8.patch, symLink9.patch
>
>
> HDFS should support symbolic links. A symbolic link is a special type of file 
> that contains a reference to another file or directory in the form of an 
> absolute or relative path and that affects pathname resolution. Programs 
> which read or write to files named by a symbolic link will behave as if 
> operating directly on the target file. However, archiving utilities can 
> handle symbolic links specially and manipulate them directly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.