[ https://issues.apache.org/jira/browse/VFS-442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547521#comment-13547521 ]

Dave Marion commented on VFS-442:
---------------------------------


 Internally, getting access to a file in HDFS is pretty simple. The software:

 1. Extracts the root URI of the file (e.g. hdfs://host:port).
 2. Creates a Hadoop Configuration object and sets org.apache.hadoop.fs.FileSystem.FS_DEFAULT_NAME_KEY to the root URI.
 3. Calls org.apache.hadoop.fs.FileSystem.get(conf) to get a FileSystem implementation.
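
 The three steps above might look roughly like the sketch below. This is an illustration, not the actual patch code; the class and method names are hypothetical, and it assumes the hadoop-common jars are on the classpath:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical helper illustrating the three steps from the comment.
public class HdfsAccessSketch {
    public static FileSystem open(URI fileUri) throws Exception {
        // 1. Extract the root URI of the file (scheme + authority only,
        //    e.g. "hdfs://host:port").
        String rootUri = fileUri.getScheme() + "://" + fileUri.getAuthority();

        // 2. Create a Hadoop configuration and point the default
        //    file system key at that root URI.
        Configuration conf = new Configuration();
        conf.set(FileSystem.FS_DEFAULT_NAME_KEY, rootUri);

        // 3. Ask Hadoop for the matching FileSystem implementation.
        return FileSystem.get(conf);
    }
}
```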

 The file system could be local or remote, but I think it has only been tested 
locally at this time. The default security mechanism is documented at [1]. In 
short, without Kerberos enabled, HDFS uses the user and group of the O/S 
process running the software to determine whether the file can be accessed. I 
have not played with Kerberos at all, so I'm not sure what changes it would 
require. I would also assume that anyone trying to use this has some working 
knowledge of HDFS.

 [1] http://hadoop.apache.org/docs/r1.0.4/hdfs_permissions_guide.html
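
 As a sketch of what that permissions model means in practice (assuming an already-opened FileSystem handle named fs and a hypothetical path), the owner, group, and permission bits that HDFS checks against the client's O/S identity can be inspected like this:

```java
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Assumes "fs" was obtained via FileSystem.get(conf) as shown earlier;
// "/some/file" is a placeholder path.
FileStatus status = fs.getFileStatus(new Path("/some/file"));

// Without Kerberos, access is decided by comparing these values against
// the user/group of the O/S process running the client.
System.out.println(status.getOwner() + ":" + status.getGroup()
        + " " + status.getPermission());
```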
                
> Add an HDFS FileSystem Provider
> -------------------------------
>
>                 Key: VFS-442
>                 URL: https://issues.apache.org/jira/browse/VFS-442
>             Project: Commons VFS
>          Issue Type: New Feature
>    Affects Versions: 2.0
>            Reporter: Dave Marion
>            Assignee: Gary Gregory
>            Priority: Minor
>              Labels: accumulo, hdfs
>         Attachments: vfs-422-3.diff, VFS-442-1.patch, VFS-442-2.patch, 
> VFS-442-4.patch, VFS-442-5.patch, VFS-442-6.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
