[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943294#comment-16943294
 ] 

Wei-Chiu Chuang commented on HDFS-14216:


Failure doesn't look related. Pushing it to branch-3.1

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14216.branch-3.1.patch, HDFS-14216_1.patch, 
> HDFS-14216_2.patch, HDFS-14216_3.patch, HDFS-14216_4.patch, 
> HDFS-14216_5.patch, HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  
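For reference, the trace above bottoms out in NodeBase.getPath, which unconditionally dereferences the node it is given, so a single null entry in the exclude set is enough to reproduce the failure. A minimal standalone illustration (not the WebHDFS code path itself):
{code:java}
import org.apache.hadoop.net.NodeBase;

public class NullExcludeRepro {
  public static void main(String[] args) {
    // NodeBase.getPath(node) builds "<networkLocation>/<name>" from the node,
    // so a null element (as added to the exclude set above) triggers the same
    // NullPointerException shown in the first frame of the stack trace.
    NodeBase.getPath(null);
  }
}
{code}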



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943281#comment-16943281
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
25s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 155 unchanged - 1 fixed = 155 total (was 156) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:080e9d0f9b3 |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982031/HDFS-14216.branch-3.1.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 35488e4a2808 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / ab7ecd6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28008/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774182#comment-16774182
 ] 

Hudson commented on HDFS-14216:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16019 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16019/])
HDFS-14216. NullPointerException happens in NamenodeWebHdfs. Contributed by 
lujie. (surendralilhore: rev 92b53c40f070bbfe65c736f6f3eca721b9d227f5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/web/resources/TestWebHdfsDataLocality.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
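Based on the discussion in this thread (skip datanodes that cannot be resolved and log at DEBUG), the guard in chooseDatanode has roughly the following shape. This is only a sketch, not the verbatim committed diff: it reuses the names from the snippet quoted below (bm, excludes, excludeDatanodes) and assumes a class-level slf4j LOG.
{code:java}
// Sketch only: resolve the excluded host first, then add it only if it exists.
for (String host : StringUtils.getTrimmedStringCollection(excludeDatanodes)) {
  int idx = host.indexOf(":");
  DatanodeDescriptor excluded = (idx != -1)
      ? bm.getDatanodeManager().getDatanodeByXferAddr(
          host.substring(0, idx), Integer.parseInt(host.substring(idx + 1)))
      : bm.getDatanodeManager().getDatanodeByHost(host);
  if (excluded != null) {
    excludes.add(excluded);
  } else {
    // Unknown or just-removed datanode: ignore it so the request still succeeds,
    // and keep the message at DEBUG so bad client input does not flood the NN log.
    LOG.debug("Excluded datanode {} was requested but not found, ignoring.", host);
  }
}
{code}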


> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, HDFS-14216_5.patch, 
> HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774056#comment-16774056
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 156 unchanged - 1 fixed = 156 total (was 157) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959591/HDFS-14216_6.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aac757efb6d3 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / eedcc8e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26282/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26282/testReport/ |
| Max. process+thread count | 3943 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-21 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773941#comment-16773941
 ] 

Surendra Singh Lilhore commented on HDFS-14216:
---

Thanks [~xiaoheipangzi] for the patch.
{quote}One more thing: asserting NullPointerException in a UT is not a good idea. The 
UT should verify the functional failure case.
{quote}
By this, my intention was not to catch the NPE. Anyway, I corrected the patch and 
attached it.

+1 from my side. Will wait for QA result.

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, HDFS-14216_5.patch, 
> HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773741#comment-16773741
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 156 unchanged - 1 fixed = 156 total (was 157) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959551/HDFS-14216_5.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3e413b63bcb1 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a87e458 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26281/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26281/testReport/ |
| Max. process+thread count | 2540 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773636#comment-16773636
 ] 

lujie commented on HDFS-14216:
--

[~surendrasingh]

Thanks for your nice review.

I have changed the log level to debug and replaced the UT failure message with "Failed to 
exclude DataNode2".

 

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, HDFS-14216_5.patch, 
> hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773172#comment-16773172
 ] 

Surendra Singh Lilhore commented on HDFS-14216:
---

Anyway, the client is trying to exclude a DN; if it is not found, that is also fine, 
and there is no need to throw any exception.

{quote}I wonder how this is handled on the RPC side?
{quote}
If the client is doing any operation related to DNs, it should periodically get the 
latest datanode report from the namenode.

One more thing: asserting NullPointerException in a UT is not a good idea. The UT 
should verify the functional failure case.

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773146#comment-16773146
 ] 

Erik Krogen commented on HDFS-14216:


Sure, I see the logic for using DEBUG level. I'm not sure that throwing an IOE is the 
right move, given that it is non-fatal. For example, in the case of a well-behaved 
client deciding to exclude {{nodeA}} and then {{nodeA}} disappearing, the client's 
subsequent request shouldn't fail. I wonder how this is handled on the RPC side?

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772996#comment-16772996
 ] 

Surendra Singh Lilhore commented on HDFS-14216:
---

{quote}If we don't want to fill the namenode log file, we need to throw an IOException 
to indicate that the client used a wrong input, because we must give a reason message 
for debugging. But the exception will prevent "chooseDatanode" from continuing to run, 
and the user request fails.
{quote}
So it is better to change it to DEBUG. [~xkrogen], what is your opinion?

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772994#comment-16772994
 ] 

lujie commented on HDFS-14216:
--

Hi [~surendrasingh],

If we don't want to fill the namenode log file, we need to throw an IOException to 
indicate that the client used a wrong input, because we must give a reason message 
for debugging. But the exception will prevent "chooseDatanode" from continuing to run, 
and the user request fails.

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772975#comment-16772975
 ] 

Surendra Singh Lilhore commented on HDFS-14216:
---

Thanks [~xiaoheipangzi] for the patch.
{quote}I think the log should be at INFO rather than ERROR, given that it can happen 
under normal circumstances and the specified DataNode will still not be considered 
(so it is still, in a way, excluded). More explanation would probably be nice as well, 
something like "DataNode {} was requested to be excluded, but it was not found." 
(please use an slf4j-style statement).
{quote}
I feel this log itself is not required. Why fill the namenode log file because of a 
client's wrong input?
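For reference, the parameterized slf4j form suggested above would look roughly like this (logger name assumed; later comments in the thread settle on DEBUG rather than INFO):
{code:java}
// Parameterized slf4j style: no string concatenation, message is only built if the level is enabled.
LOG.info("DataNode {} was requested to be excluded, but it was not found.", host);
{code}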

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-20 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772962#comment-16772962
 ] 

Surendra Singh Lilhore commented on HDFS-14216:
---

Added [~xiaoheipangzi] as contributor.

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-19 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772693#comment-16772693
 ] 

lujie commented on HDFS-14216:
--

Odd test failure; my local test run works well.

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Priority: Critical
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a wrong 
> DN name, bm.getDatanodeManager().getDatanodeByHost(host) will return null and 
> *_excludes_* will contain null. When *_excludes_* is used later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772667#comment-16772667
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 156 unchanged - 1 fixed = 156 total (was 157) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959364/HDFS-14216_4.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3730420047be 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d30fd9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26264/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26264/testReport/ |
| Max. process+thread count | 3512 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-19 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772605#comment-16772605
 ] 

lujie commented on HDFS-14216:
--

Hi [~xkrogen]

thanks for your nice review, I have updated the patch per your suggestions! See 
[^HDFS-14216_4.patch]




[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-19 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772109#comment-16772109
 ] 

Erik Krogen commented on HDFS-14216:


Hi [~xiaoheipangzi], thanks for fixing this important issue. I have a few small 
comments:
* There should be a space between the closing brace and the {{else}} on L286.
* I think the log should be at INFO rather than ERROR, given that it can happen 
under normal circumstances and the specified DataNode will still not be 
considered (so it is still, in a way, excluded). More explanation would 
probably be nice as well, something like "DataNode {} was requested to be 
excluded, but it was not found." (please use an slf4j-style statement)
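
For illustration, the suggested shape would look roughly like this (a sketch only, assuming {{node}} holds the result of the datanode lookup and {{LOG}} is the class's slf4j logger; the actual patch may differ):

{code:java}
// Sketch: log at INFO with an slf4j-style parameterized message and skip the
// missing datanode instead of adding null to excludes.
if (node == null) {
  LOG.info("DataNode {} was requested to be excluded, but it was not found.", host);
} else {  // space between the closing brace and 'else'
  excludes.add(node);
}
{code}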




[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-02-16 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770322#comment-16770322
 ] 

lujie commented on HDFS-14216:
--

ping -->

Hoping for more reviews.




[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-01-21 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747841#comment-16747841
 ] 

lujie commented on HDFS-14216:
--

The UT failure seems unrelated to this patch!




[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-01-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747369#comment-16747369
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 134 unchanged - 1 fixed = 134 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955537/HDFS-14216_3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dd0d7476517e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 824dfa3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26013/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26013/testReport/ |
| Max. process+thread count | 3022 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-01-19 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747323#comment-16747323
 ] 

lujie commented on HDFS-14216:
--

Hi [~ayushtkn],

I have reattached the patch, which now includes a new UT. The UT simulates what 
happens in the real workload. The local test passes; could you please review it?

Thanks!
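
For context, a rough sketch of the kind of end-to-end check the UT aims at (illustration only, assuming a {{MiniDFSCluster}}; the test in the attached patch is likely structured differently, e.g. it may call {{chooseDatanode}} directly):

{code:java}
// Illustration: send a WebHDFS CREATE that excludes a datanode name the
// NameNode does not know about, and verify the request is redirected (307)
// instead of failing with an internal error caused by the NPE.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
try {
  cluster.waitActive();
  InetSocketAddress http = cluster.getNameNode().getHttpAddress();
  URL url = new URL("http://" + http.getHostName() + ":" + http.getPort()
      + "/webhdfs/v1/tmp/test.txt?op=CREATE"
      + "&excludedatanodes=nonexistent-host");  // wrong DN name, as in the report
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  conn.setRequestMethod("PUT");
  conn.setInstanceFollowRedirects(false);
  // 307 Temporary Redirect to the chosen datanode is the expected outcome;
  // permissions/user.name may need adjusting for the default WebHDFS user.
  assertEquals(307, conn.getResponseCode());
} finally {
  cluster.shutdown();
}
{code}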




[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-01-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747219#comment-16747219
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 134 unchanged - 1 fixed = 134 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955525/HDFS-14216_2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3446170baa71 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 824dfa3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26012/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26012/testReport/ |
| Max. process+thread count | 3082 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs 

[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-01-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747141#comment-16747141
 ] 

Ayush Saxena commented on HDFS-14216:
-

Thanx [~xiaoheipangzi] for the patch.

The fix seems quite straightforward.

Can we add a proper unit test, rather than modifying the existing one, that 
checks for the NPE? :)






[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-01-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746988#comment-16746988
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 123 unchanged - 1 fixed = 123 total (was 124) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955501/HDFS-14216_1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 646344de5d39 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 824dfa3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26011/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26011/testReport/ |
| Max. process+thread count | 3422 (vs. ulimit of 1) |
| 

[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-01-18 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746967#comment-16746967
 ] 

lujie commented on HDFS-14216:
--

Attached the patch. We also tested bm.getDatanodeManager().getDatanodeByXferAddr; 
it can also return null when a datanode crashes, so the patch fixes that case as well.
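
For illustration only, a rough sketch of guarding both lookups so that a null result never reaches {{excludes}} (names such as {{DatanodeDescriptor}} and {{LOG}} follow the snippet quoted in the description; the attached patch may differ):

{code:java}
// Sketch: resolve the excluded datanode via either lookup, then skip null results.
for (String host : StringUtils.getTrimmedStringCollection(excludeDatanodes)) {
  int idx = host.indexOf(":");
  DatanodeDescriptor dn = (idx != -1)
      ? bm.getDatanodeManager().getDatanodeByXferAddr(
          host.substring(0, idx), Integer.parseInt(host.substring(idx + 1)))
      : bm.getDatanodeManager().getDatanodeByHost(host);
  if (dn == null) {
    LOG.info("Excluded datanode {} was not found, ignoring it.", host);
  } else {
    excludes.add(dn);
  }
}
{code}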
