[ 
https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15407723#comment-15407723
 ] 

Senthilkumar commented on HDFS-10721:
-------------------------------------

{code:title=RpcMount|borderStyle=solid}
public RpcProgramMountd(NfsConfiguration config,
    DatagramSocket registrationSocket, boolean allowInsecurePorts)
    throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", config.getInt(
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
      VERSION_3, registrationSocket, allowInsecurePorts);
  exports = new ArrayList<String>();
  // Export list: only a single export point is read from the configuration
  exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
      NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
  this.hostsMatcher = NfsExports.getInstance(config);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
      NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
  this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
}
{code}
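A minimal sketch of how a comma-separated {{nfs.export.point}} value could be split into multiple export entries. This is a hypothetical illustration of the proposal, not the actual RpcProgramMountd code; the class and method names below are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class ExportPointParser {
    // Hypothetical helper: split a comma-separated nfs.export.point value
    // into individual export directories, trimming whitespace and skipping
    // empty entries. Sketch only; the real constructor reads a single value.
    public static List<String> parseExportPoints(String configured) {
        List<String> exports = new ArrayList<String>();
        for (String dir : configured.split(",")) {
            String trimmed = dir.trim();
            if (!trimmed.isEmpty()) {
                exports.add(trimmed);
            }
        }
        return exports;
    }

    public static void main(String[] args) {
        // Each configured directory becomes its own export entry
        List<String> exports =
            parseExportPoints("/user, /data/web_crawler, /app-logs");
        System.out.println(exports);
    }
}
```

With this shape, the constructor could call such a helper instead of a single {{exports.add(...)}}, leaving the one-directory case unchanged since splitting a value with no commas yields a single entry.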

> HDFS NFS Gateway - Exporting multiple Directories 
> --------------------------------------------------
>
>                 Key: HDFS-10721
>                 URL: https://issues.apache.org/jira/browse/HDFS-10721
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Senthilkumar
>            Priority: Minor
>
> The HDFS NFS gateway currently supports exporting only one directory.
> Example:
>    <property>
>           <name>nfs.export.point</name>
>           <value>/user</value>
>      </property>
> This property lets us export a particular directory.
> Code block:
> public RpcProgramMountd(NfsConfiguration config,
>       DatagramSocket registrationSocket, boolean allowInsecurePorts)
>       throws IOException {
>     // Note that RPC cache is not enabled
>     super("mountd", "localhost", config.getInt(
>         NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>         NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>         VERSION_3, registrationSocket, allowInsecurePorts);
>     exports = new ArrayList<String>();
>      exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>         NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>     this.hostsMatcher = NfsExports.getInstance(config);
>     this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>     UserGroupInformation.setConfiguration(config);
>     SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>         NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>     this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
>   }
> Export List:
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>         NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
> The current code supports exposing only one directory; based on our
> example, only /user can be exported.
> Most production environments expect multiple directories to be exported,
> so that each can be mounted by different clients.
> Example: 
> <property>
>           <name>nfs.export.point</name>
>           <value>/user,/data/web_crawler,/app-logs</value>
>      </property>
> Here I have three directories to be exposed:
> 1)    /user
> 2)   /data/web_crawler
> 3)   /app-logs
> This would let us mount directories for a particular client (say client A
> wants to write data to /app-logs; the Hadoop admin can mount that export
> and hand it over to the client).
> Please advise. Apologies if this feature is already implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
