[ https://issues.apache.org/jira/browse/HADOOP-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629263#action_12629263 ]

Doug Cutting commented on HADOOP-4044:
--------------------------------------

> Both have extra RPC (or system call in case of LocalFS) for open of a normal 
> file (common case).

Exceptions should not be used for normal program control.  If the extra RPC is 
a problem, then a FileSystem can implement open() directly itself as an 
optimization.  LocatedBlocks could be altered to optionally contain a symbolic 
link.  LocalFileSystem can also override open(), since the native 
implementation already does the right thing.  Would the code above do something 
reasonable on S3 & KFS, or would we need to override open() for those too?  If 
we end up overriding it everywhere, then it's probably not worth having a 
default implementation plus an openData(); rather, we should just require 
that every FileSystem implement open() to handle links.
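The pattern argued for above — a default open() that follows links as ordinary
control flow, which individual file systems may override to skip the extra
lookup — can be sketched roughly as follows. This is an illustrative sketch,
not the actual Hadoop FileSystem API: SketchFileSystem, getLinkTarget(),
readFile(), and MemFileSystem are all hypothetical names invented here.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the real Hadoop FileSystem API): a default open()
// that follows symbolic links iteratively, with no exceptions on the common
// path. A subclass standing in for DistributedFileSystem or LocalFileSystem
// could override open() to avoid the extra RPC or system call entirely.
abstract class SketchFileSystem {
    /** Returns the link target if path names a symlink, else null (assumed primitive). */
    protected abstract String getLinkTarget(String path);

    /** Reads the bytes of a regular file (assumed primitive). */
    protected abstract byte[] readFile(String path);

    /** Default open(): resolve link chains, bounded to catch link cycles. */
    public byte[] open(String path) {
        String target;
        int hops = 0;
        while ((target = getLinkTarget(path)) != null) {
            if (++hops > 32) {
                throw new IllegalStateException("too many levels of links: " + path);
            }
            path = target;
        }
        return readFile(path);
    }
}

// Tiny in-memory file system used only to exercise the default open().
class MemFileSystem extends SketchFileSystem {
    private final Map<String, byte[]> files = new HashMap<>();
    private final Map<String, String> links = new HashMap<>();

    void putFile(String path, byte[] data) { files.put(path, data); }
    void putLink(String path, String target) { links.put(path, target); }

    @Override protected String getLinkTarget(String path) { return links.get(path); }
    @Override protected byte[] readFile(String path) { return files.get(path); }
}
```

Under this sketch, only file systems where link-following is costly (or already
handled natively, as in the LocalFileSystem case) need to override open(); the
rest inherit a correct, exception-free default.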

> Create symbolic links in HDFS
> -----------------------------
>
>                 Key: HADOOP-4044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4044
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: symLink1.patch
>
>
> HDFS should support symbolic links. A symbolic link is a special type of file 
> that contains a reference to another file or directory in the form of an 
> absolute or relative path and that affects pathname resolution. Programs 
> which read or write to files named by a symbolic link will behave as if 
> operating directly on the target file. However, archiving utilities can 
> handle symbolic links specially and manipulate them directly.
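The description's distinction between absolute and relative link targets can be
illustrated with a small helper. This is a sketch under stated assumptions
('/'-separated, already-normalized paths); LinkTargets and resolveTarget() are
hypothetical names, not part of the patch.

```java
// Illustrative sketch: resolving a symlink's stored target during pathname
// resolution. An absolute target replaces the path outright; a relative target
// is interpreted relative to the directory containing the link, as in
// POSIX-style resolution. Assumes '/'-separated, already-normalized paths.
class LinkTargets {
    static String resolveTarget(String linkPath, String target) {
        if (target.startsWith("/")) {
            return target;  // absolute target: use as-is
        }
        int slash = linkPath.lastIndexOf('/');
        String parent = (slash <= 0) ? "/" : linkPath.substring(0, slash);
        return parent.equals("/") ? "/" + target : parent + "/" + target;
    }
}
```

For example, a link at /a/b/link whose target is c/d resolves to /a/b/c/d,
while a target of /etc/conf resolves to /etc/conf regardless of where the
link lives.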

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
