[ https://issues.apache.org/jira/browse/HADOOP-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636790#action_12636790 ]

rangadi edited comment on HADOOP-4044 at 10/3/08 11:11 PM:
----------------------------------------------------------------

I deal with _both_ FileSystem and HDFS every day, and I care about both. We have
two very different approaches to the same problem in these two places (changing
return types vs. throwing an exception). It does not feel quite right for me to
+1 when I prefer only one of the approaches. This does not affect the external
API, but for developers internal APIs are just as important.

The two-step API is of course not good (as I pointed out in the early
proposals), but I don't think that is related to the 'return type' vs. exception
issue.

My preference is to take the same approach as in HDFS: any API that can throw
IOException can also throw UnresolvedLinkException, and the upper layers that
can handle it do so. No change in return types is required, and we don't need
(rather strange) classes like FSLinkBoolean, FSInputStreamLink, etc. As a minor
side benefit, we also avoid creating all the extra wrapper objects (even when no
links are involved).
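To make the exception-based alternative concrete, here is a minimal sketch. The `MockFs`, its toy namespace, and the `Resolver` helper are invented for illustration; `UnresolvedLinkException` here is a stand-in and not Hadoop's actual class. The point is that a method keeps its plain return type (a `boolean`, not an `FSLinkBoolean`), and a link-aware caller catches the exception, follows the target, and retries:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the exception discussed above; it carries the
// link target so a caller can resolve it.
class UnresolvedLinkException extends IOException {
    final String target;
    UnresolvedLinkException(String target) { this.target = target; }
}

// Toy filesystem: paths registered as symlinks throw instead of answering.
class MockFs {
    private final Map<String, String> links = new HashMap<>();

    void addLink(String path, String target) { links.put(path, target); }

    // Same signature as a link-unaware API: boolean return, no wrapper class.
    boolean exists(String path) throws IOException {
        if (links.containsKey(path)) {
            throw new UnresolvedLinkException(links.get(path));
        }
        return path.startsWith("/real/");  // toy rule for which paths exist
    }
}

// Upper layer that understands links: catch, follow, retry (bounded, so a
// link cycle cannot loop forever).
class Resolver {
    static boolean existsFollowingLinks(MockFs fs, String path)
            throws IOException {
        for (int hops = 0; hops < 32; hops++) {
            try {
                return fs.exists(path);
            } catch (UnresolvedLinkException e) {
                path = e.target;  // follow the link and try again
            }
        }
        throw new IOException("too many levels of symbolic links");
    }
}
```

Layers that never create or see links call `exists` directly and pay nothing; only the link-aware layer carries the catch-and-retry loop, which is the "no extra objects when no links are involved" benefit described above.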

Finally, I don't think exceptions are meant only for hard errors (e.g.
FileNotFound). I doubt that an exception always signals an error, but my
experience with Java is less than 2 years :).

Edit: minor; my first edit since Sept 16th.

> Create symbolic links in HDFS
> -----------------------------
>
>                 Key: HADOOP-4044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4044
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: symLink1.patch, symLink1.patch, symLink4.patch, 
> symLink5.patch, symLink6.patch, symLink8.patch, symLink9.patch
>
>
> HDFS should support symbolic links. A symbolic link is a special type of file 
> that contains a reference to another file or directory in the form of an 
> absolute or relative path and that affects pathname resolution. Programs 
> which read or write to files named by a symbolic link will behave as if 
> operating directly on the target file. However, archiving utilities can 
> handle symbolic links specially and manipulate them directly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
