[jira] [Updated] (HDFS-13138) webhdfs of federated namenode does not work properly

2019-01-30 Thread KWON BYUNGCHANG (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-13138:
---
Resolution: Done
Status: Resolved  (was: Patch Available)

> webhdfs of federated namenode does not work properly
> -----------------------------------------------------
>
> Key: HDFS-13138
> URL: https://issues.apache.org/jira/browse/HDFS-13138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1, 3.0.0
>Reporter: KWON BYUNGCHANG
>Priority: Major
> Attachments: HDFS-13138.001.branch-2.7.patch, HDFS-13138.001.patch, 
> HDFS-13138.002.branch-2.7.patch, HDFS-13138.002.patch, 
> HDFS-13138.003.branch-2.7.patch, HDFS-13138.003.patch
>
>
> My cluster has multiple namenodes using HDFS Federation.
> WebHDFS against a namenode that is not the defaultFS does not work properly:
> when I uploaded a file to a non-defaultFS namenode using WebHDFS,
> the uploaded file was found on the defaultFS namenode.
>  
> I think the root cause is that
> clientNamenodeAddress of a non-defaultFS namenode is always set to fs.defaultFS.
>  
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java#L462]
>  
> {code:java}
>   /**
>    * Set the namenode address that will be used by clients to access this
>    * namenode or name service. This needs to be called before the config
>    * is overriden.
>    */
>   public void setClientNamenodeAddress(Configuration conf) {
>     String nnAddr = conf.get(FS_DEFAULT_NAME_KEY);
>     if (nnAddr == null) {
>       // default fs is not set.
>       clientNamenodeAddress = null;
>       return;
>     }
>     LOG.info("{} is {}", FS_DEFAULT_NAME_KEY, nnAddr);
>     URI nnUri = URI.create(nnAddr);
>     String nnHost = nnUri.getHost();
>     if (nnHost == null) {
>       clientNamenodeAddress = null;
>       return;
>     }
>     if (DFSUtilClient.getNameServiceIds(conf).contains(nnHost)) {
>       // host name is logical
>       clientNamenodeAddress = nnHost;
>     } else if (nnUri.getPort() > 0) {
>       // physical address with a valid port
>       clientNamenodeAddress = nnUri.getAuthority();
>     } else {
>       // the port is missing or 0. Figure out real bind address later.
>       clientNamenodeAddress = null;
>       return;
>     }
>     LOG.info("Clients are to use {} to access"
>         + " this namenode/service.", clientNamenodeAddress);
>   }
> {code}
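>  
> For illustration, here is a minimal stand-alone sketch (not part of any patch): it only mirrors the branching of setClientNamenodeAddress above, with a plain Map standing in for Configuration so it runs without Hadoop on the classpath. When every namenode is deployed with the same fs.defaultFS=hdfs://ns1, the ns2 namenode also resolves its clientNamenodeAddress to "ns1":
> {code:java}
> import java.net.URI;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> /** Simplified mirror of the branching in NameNode#setClientNamenodeAddress. */
> public class ClientNamenodeAddressDemo {
>
>   static String resolve(Map<String, String> conf) {
>     String nnAddr = conf.get("fs.defaultFS");
>     if (nnAddr == null) {
>       return null;                         // default fs is not set
>     }
>     URI nnUri = URI.create(nnAddr);
>     String nnHost = nnUri.getHost();
>     if (nnHost == null) {
>       return null;
>     }
>     List<String> nameservices =
>         Arrays.asList(conf.getOrDefault("dfs.nameservices", "").split(","));
>     if (nameservices.contains(nnHost)) {
>       return nnHost;                       // host name is a logical nameservice
>     } else if (nnUri.getPort() > 0) {
>       return nnUri.getAuthority();         // physical address with a valid port
>     }
>     return null;                           // port missing; resolved at bind time
>   }
>
>   public static void main(String[] args) {
>     Map<String, String> conf = new HashMap<>();
>     conf.put("dfs.nameservices", "ns1,ns2");
>     // The same core-site.xml is deployed to every namenode, so the ns1
>     // and the ns2 namenode both see the same fs.defaultFS:
>     conf.put("fs.defaultFS", "hdfs://ns1");
>
>     System.out.println("ns1 namenode -> " + resolve(conf));  // ns1
>     System.out.println("ns2 namenode -> " + resolve(conf));  // ns1 (wrong for ns2)
>   }
> }
> {code}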
>  
> As a result, WebHDFS redirects the client to a datanode with the wrong
> namenoderpcaddress parameter, and the file finally ends up on the namenode
> of fs.defaultFS.
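>  
> From the client side, a hypothetical reproduction looks like the following (the host name and HTTP port are placeholders for the ns2 namenode's WebHDFS endpoint; only the standard FileSystem API is used):
> {code:java}
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class WebHdfsUploadRepro {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Talk to the non-defaultFS (ns2) namenode over WebHDFS.
>     FileSystem fs = FileSystem.get(
>         URI.create("webhdfs://nn-of-ns2.example.com:50070"), conf);
>
>     // The CREATE call is answered with a redirect to a datanode; with this
>     // bug the redirect carries the namenoderpcaddress of the defaultFS (ns1),
>     // so the file ends up in ns1's namespace instead of ns2's.
>     try (FSDataOutputStream out = fs.create(new Path("/tmp/hdfs-13138-repro"))) {
>       out.writeBytes("hello");
>     }
>   }
> }
> {code}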
>  
> A workaround is to configure fs.defaultFS on each namenode to point to its own
> nameservice, e.g.
>   the namenode serving hdfs://ns1 gets fs.defaultFS=hdfs://ns1
>   the namenode serving hdfs://ns2 gets fs.defaultFS=hdfs://ns2
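>  
> Reusing the resolve helper from the sketch above (values are illustrative), the workaround gives each namenode its own fs.defaultFS, so the ns2 namenode advertises the correct logical name:
> {code:java}
> // ns2 namenode's own config under the workaround:
> Map<String, String> ns2Conf = new HashMap<>();
> ns2Conf.put("dfs.nameservices", "ns1,ns2");
> ns2Conf.put("fs.defaultFS", "hdfs://ns2");   // its own nameservice
>
> System.out.println(ClientNamenodeAddressDemo.resolve(ns2Conf));  // prints "ns2"
> {code}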



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13138) webhdfs of federated namenode does not work properly

2018-02-23 Thread KWON BYUNGCHANG (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-13138:
---
Attachment: HDFS-13138.003.branch-2.7.patch




[jira] [Updated] (HDFS-13138) webhdfs of federated namenode does not work properly

2018-02-23 Thread KWON BYUNGCHANG (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-13138:
---
Attachment: HDFS-13138.003.patch




[jira] [Updated] (HDFS-13138) webhdfs of federated namenode does not work properly

2018-02-13 Thread KWON BYUNGCHANG (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-13138:
---
Attachment: HDFS-13138.002.branch-2.7.patch




[jira] [Updated] (HDFS-13138) webhdfs of federated namenode does not work properly

2018-02-13 Thread KWON BYUNGCHANG (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-13138:
---
Attachment: HDFS-13138.002.patch




[jira] [Updated] (HDFS-13138) webhdfs of federated namenode does not work properly

2018-02-13 Thread KWON BYUNGCHANG (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-13138:
---
Attachment: HDFS-13138.001.patch
HDFS-13138.001.branch-2.7.patch




[jira] [Updated] (HDFS-13138) webhdfs of federated namenode does not work properly

2018-02-13 Thread KWON BYUNGCHANG (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-13138:
---
Status: Patch Available  (was: Open)

I've attached the patch.
