[jira] [Updated] (HDFS-10994) Support an XOR policy XOR-2-1-64k in HDFS

2016-11-03 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-10994:
-
Attachment: HDFS-10994-v3.patch

1. Rebase the patch against trunk
2. Fix one style issue

> Support an XOR policy XOR-2-1-64k in HDFS
> -
>
> Key: HDFS-10994
> URL: https://issues.apache.org/jira/browse/HDFS-10994
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10994-v1.patch, HDFS-10994-v2.patch, 
> HDFS-10994-v3.patch
>
>
> So far, "hdfs erasurecode" command supports three policies, 
> RS-DEFAULT-3-2-64k, RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k. This task is 
> going to add XOR-2-1-64k policy to this command.
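
For readers new to the codec, a minimal standalone sketch of what an XOR(2,1) 
schema computes, outside HDFS and with hypothetical names: the single parity 
cell is the bitwise XOR of the two data cells, so either lost data cell can be 
rebuilt from the surviving cell plus the parity.

{code}
public class XorDemo {
  // Parity = d0 XOR d1. XOR-ing parity with the surviving cell
  // reconstructs the lost one; cells are assumed equal length.
  static byte[] xor(byte[] a, byte[] b) {
    byte[] out = new byte[a.length];
    for (int i = 0; i < a.length; i++) {
      out[i] = (byte) (a[i] ^ b[i]);
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] d0 = {1, 2, 3}, d1 = {4, 5, 6};
    byte[] parity = xor(d0, d1);        // encode
    byte[] rebuilt = xor(parity, d1);   // recover d0 after losing it
    System.out.println(java.util.Arrays.equals(d0, rebuilt)); // true
  }
}
{code}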



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635262#comment-15635262
 ] 

Hadoop QA commented on HDFS-9668:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 39s{color} | {color:orange} root: The patch generated 1 new + 1007 unchanged 
- 14 fixed = 1008 total (was 1021) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9668 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837024/HDFS-9668-23.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 0127a6a99b58 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 69dd5fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17421/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635245#comment-15635245
 ] 

Jagadesh Kiran N commented on HDFS-9337:


Test failures are not related to the patch, [~vinayrpet] please review

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME"
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
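
The general fix direction is to validate required parameters up front instead 
of letting a missing value surface later as an NPE. A minimal sketch of that 
pattern with hypothetical names, not the actual WebHDFS handler code:

{code}
import java.util.HashMap;
import java.util.Map;

public class RequiredParamCheck {
  // Fail fast with a descriptive error instead of dereferencing null later.
  static String requireParam(Map<String, String> query, String name) {
    String value = query.get(name);
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException(
          "Required parameter '" + name + "' is missing");
    }
    return value;
  }

  public static void main(String[] args) {
    Map<String, String> query = new HashMap<>();
    query.put("op", "RENAMESNAPSHOT");
    requireParam(query, "oldsnapshotname"); // throws with a clear message
  }
}
{code}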



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-11-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635236#comment-15635236
 ] 

Xiao Chen commented on HDFS-10899:
--

Thanks Andrew for taking the time to look at this, and for the great comments! 
Really appreciate it. I will incorporate them into the next patch.

A few clarifications first:
bq. LinkedHashMap v.s. ConcurrentLinkedDeque ...  is there actually concurrency 
happening?
I thought of this too. Currently there's just the single background thread in 
the executor to process it, but later we will likely want to multi-thread that 
for performance. Also, we don't use the value (just the key) of the 
LinkedHashMap. OTOH, I agree not using the deque nature is also bad...
I also thought about {{hasZone}} being O(n), but figured it's okay since it's 
not a frequent operation (only on new command submission / NN startup). I 
really wish there were a concurrent linked hash set, or at least a linked hash 
set...
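
For reference, the JDK can provide an insertion-ordered set with O(1) 
{{contains}} by wrapping a {{LinkedHashSet}}; a minimal sketch of the idea 
(hypothetical names, and whether its single-mutex locking fits the patch's 
threading model is a separate question):

{code}
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

public class ZoneQueueSketch {
  // Insertion-ordered with O(1) contains(); one mutex guards all access,
  // so bulk iteration still needs an explicit synchronized block.
  private final Set<Long> zoneIds =
      Collections.synchronizedSet(new LinkedHashSet<Long>());

  boolean hasZone(long id) { return zoneIds.contains(id); } // O(1), not O(n)
  void submit(long id) { zoneIds.add(id); }
  void complete(long id) { zoneIds.remove(id); }
}
{code}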

bq. Can the new methods in EZManager be encapsulated as a class? 
You mean to have a separate class for the background thread, right? Sure, can 
do.

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635200#comment-15635200
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 204 unchanged - 0 fixed = 205 total (was 204) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837020/HDFS-9337_13.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 41fa9d2b7c75 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5cad93d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17419/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17419/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17419/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17419/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid 

[jira] [Commented] (HDFS-11101) TestDFSShell#testMoveWithTargetPortEmpty fails intermittently

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635175#comment-15635175
 ] 

Hadoop QA commented on HDFS-11101:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
18s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837022/HDFS-11101.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 126baa63753c 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5cad93d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17420/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17420/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDFSShell#testMoveWithTargetPortEmpty fails intermittently
> -
>
> Key: HDFS-11101
> URL: https://issues.apache.org/jira/browse/HDFS-11101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
> Attachments: HDFS-11101.patch
>
>
> {noformat}
> java.io.IOException: Port is already in use; giving up after 10 times.
>   at 
> org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
>   at 
> 

[jira] [Commented] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File in Datanode.

2016-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635081#comment-15635081
 ] 

Hudson commented on HDFS-10638:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10769 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10769/])
HDFS-11098. Datanode in tests cannot start in Windows after HDFS-10638 
(vinayakumarb: rev 69dd5fa2d43eefeec112f36b91a13513ac21a763)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java


> Modifications to remove the assumption that StorageLocation is associated 
> with java.io.File in Datanode.
> 
>
> Key: HDFS-10638
> URL: https://issues.apache.org/jira/browse/HDFS-10638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10638.001.patch, HDFS-10638.002.patch, 
> HDFS-10638.003.patch, HDFS-10638.004.patch, HDFS-10638.005.patch
>
>
> Changes to ensure that {{StorageLocation}} need not be associated with a 
> {{java.io.File}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635082#comment-15635082
 ] 

Hudson commented on HDFS-11098:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10769 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10769/])
HDFS-11098. Datanode in tests cannot start in Windows after HDFS-10638 
(vinayakumarb: rev 69dd5fa2d43eefeec112f36b91a13513ac21a763)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java


> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11098.01.patch, HDFS-11098.02.patch
>
>
> After HDFS-10638, starting datanodes in MiniDFSCluster on Windows throws the 
> below exception:
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}
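
For what it's worth, the usual way to sidestep drive-letter parsing on Windows 
is to build the URI through {{java.io.File}}, which encodes the drive letter 
into the path component; a small sketch of that normalization (illustrative 
only, not necessarily what the committed patch does):

{code}
import java.io.File;
import java.net.URI;

public class PathToUriDemo {
  public static void main(String[] args) {
    // On Windows this yields file:/D:/code/... with "D:" kept inside the
    // path instead of being misread during scheme/authority parsing.
    URI uri = new File("D:\\code\\hadoop\\data\\dfs\\data\\data1").toURI();
    System.out.println(uri);
  }
}
{code}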



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11088) Quash unnecessary safemode WARN message during NameNode startup

2016-11-03 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635079#comment-15635079
 ] 

Yiqun Lin commented on HDFS-11088:
--

Thanks [~andrew.wang] and [~liuml07]!

> Quash unnecessary safemode WARN message during NameNode startup
> ---
>
> Key: HDFS-11088
> URL: https://issues.apache.org/jira/browse/HDFS-11088
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11088.001.patch, HDFS-11088.002.patch, 
> HDFS-11088.003.patch
>
>
> I tried starting a NameNode on a freshly formatted cluster, and it produced 
> this WARN log:
> {noformat}
> 16/11/01 16:42:44 WARN blockmanagement.BlockManagerSafeMode: forceExit used 
> when normal exist would suffice. Treating force exit as normal safe mode exit.
> {noformat}
> I didn't run any special commands related to safemode, so this log should be 
> quashed.
> I didn't try the branch-2 builds, but it's likely this issue exists there as 
> well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11097) Fix the jenkins warning related to the deprecated method StorageReceivedDeletedBlocks

2016-11-03 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635074#comment-15635074
 ] 

Yiqun Lin commented on HDFS-11097:
--

Thanks [~arpitagarwal] for the review and commit!

> Fix the jenkins warning related to the deprecated method 
> StorageReceivedDeletedBlocks
> -
>
> Key: HDFS-11097
> URL: https://issues.apache.org/jira/browse/HDFS-11097
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11097.001.patch, warn.txt
>
>
> HDFS-6094 updated the constructor of {{StorageReceivedDeletedBlocks}} so that 
> it can carry not only the storage ID but the storage type and state as well. 
> But the new constructor isn't used in some test cases, which causes many 
> deprecation warnings in each Jenkins build. Part of the warning info:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[315,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[333,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> {code}
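
The migration itself is mechanical; a sketch of the change, assuming the 
replacement constructor added by HDFS-6094 takes a {{DatanodeStorage}} (which 
carries the storage ID plus state and type) in place of the bare storage ID 
string:

{code}
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
import org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;

class ReportBuilderSketch {
  // Instead of the deprecated (String, ReceivedDeletedBlockInfo[]) form,
  // wrap the storage ID in a DatanodeStorage and use the new constructor.
  static StorageReceivedDeletedBlocks build(String storageUuid,
      ReceivedDeletedBlockInfo[] blocks) {
    return new StorageReceivedDeletedBlocks(
        new DatanodeStorage(storageUuid), blocks);
  }
}
{code}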



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11098:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11098.01.patch, HDFS-11098.02.patch
>
>
> After HDFS-10638, starting datanodes in MiniDFSCluster on Windows throws the 
> below exception:
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-03 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HDFS-9668:
---
Attachment: HDFS-9668-23.patch

Uploaded a new patch V23 to address [~eddyxu]'s comments. Thanks!

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, HDFS-9668-5.patch, 
> HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, 
> execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
> {noformat}
> We measured the execution time of some operations in FsDatasetImpl during the 
> test. The following is the result.
> !execution_time.png!
> The finalizeBlock, addBlock and createRbw operations on HDD under heavy load 
> take a really long time.
> This means one slow finalizeBlock, addBlock or createRbw operation on a slow 
> storage can block all the other operations of the same kind in the same 
> DataNode, especially in HBase when many WAL/flusher/compactor threads are 
> configured.
> We need a finer-grained lock mechanism in a new FsDatasetImpl 

[jira] [Commented] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635045#comment-15635045
 ] 

Vinayakumar B commented on HDFS-11098:
--

Thanks [~brahmareddy]. Will commit soon.

> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HDFS-11098.01.patch, HDFS-11098.02.patch
>
>
> After HDFS-10638, starting datanodes in MiniDFSCluster on Windows throws the 
> below exception:
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635031#comment-15635031
 ] 

Brahma Reddy Battula commented on HDFS-11098:
-

Latest patch LGTM. The test failure is unrelated; raised HDFS-11101 to track 
that failure.

> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HDFS-11098.01.patch, HDFS-11098.02.patch
>
>
> After HDFS-10638, starting datanodes in MiniDFSCluster on Windows throws the 
> below exception:
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11101) TestDFSShell#testMoveWithTargetPortEmpty fails intermittently

2016-11-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11101:

Status: Patch Available  (was: Open)

> TestDFSShell#testMoveWithTargetPortEmpty fails intermittently
> -
>
> Key: HDFS-11101
> URL: https://issues.apache.org/jira/browse/HDFS-11101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
> Attachments: HDFS-11101.patch
>
>
> {noformat}
> java.io.IOException: Port is already in use; giving up after 10 times.
>   at 
> org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:778)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11101) TestDFSShell#testMoveWithTargetPortEmpty fails intermittently

2016-11-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11101:

Attachment: HDFS-11101.patch

Uploading the patch. It was retrying only for 10 secs, within which the port 
might not be released. So increasing it to 60 secs (60 retries).
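
The retry loop in question has roughly this shape; a generic sketch that tries 
to bind once per second until it succeeds or gives up, not the actual 
{{ServerSocketUtil}} code:

{code}
import java.io.IOException;
import java.net.ServerSocket;

public class PortWaitSketch {
  // Try to bind once per second; succeed as soon as the port is free.
  static void waitForPort(int port, int retries)
      throws IOException, InterruptedException {
    for (int i = 0; i < retries; i++) {
      try (ServerSocket s = new ServerSocket(port)) {
        return; // bound successfully, so the port has been released
      } catch (IOException e) {
        Thread.sleep(1000L);
      }
    }
    throw new IOException(
        "Port is already in use; giving up after " + retries + " times.");
  }
}
{code}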

> TestDFSShell#testMoveWithTargetPortEmpty fails intermittently
> -
>
> Key: HDFS-11101
> URL: https://issues.apache.org/jira/browse/HDFS-11101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
> Attachments: HDFS-11101.patch
>
>
> {noformat}
> java.io.IOException: Port is already in use; giving up after 10 times.
>   at 
> org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:778)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11101) TestDFSShell#testMoveWithTargetPortEmpty fails intermittently

2016-11-03 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-11101:
---

 Summary: TestDFSShell#testMoveWithTargetPortEmpty fails 
intermittently
 Key: HDFS-11101
 URL: https://issues.apache.org/jira/browse/HDFS-11101
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}
java.io.IOException: Port is already in use; giving up after 10 times.
at 
org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
at 
org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:778)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: (was: HDFS-9337_13)

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME"
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_13.patch

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME"
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-03 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15635012#comment-15635012
 ] 

Jingcheng Du commented on HDFS-9668:


Thanks a lot for the comments, [~eddyxu]!
I'll upload a new patch to address them. It might take a little more time to 
prepare the patch for branch-2; I'll do it asap.
bq. why does ReplicaMap not use itself as the mutex? I might be missing 
something. For example, FsDatasetImpl#volumeMap is initialized with a 
FsDatasetImpl instance as the mutex, and the other operations within 
FsDatasetImpl are protected by the read/write locks added by this patch. So 
this replicaMap is not synchronized with the rest of the FsDatasetImpl methods.
Passing the mutex in allows the operations in ReplicaMap and code outside the 
map to be protected by the same lock.
The old FsDatasetImpl used synchronized(this) to protect its operations, which 
means ReplicaMap shared the same lock with FsDatasetImpl, and the static method 
{{initReplicaRecovery}} in FsDatasetImpl could use this lock to protect the 
code inside.
This patch retains that manner by allowing an outside lock to be passed into 
ReplicaMap, but that lock is only used to synchronize the operations on this 
map. Having ReplicaMap use itself as the lock would also be fine, but how about 
keeping the current way? :) 
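
For readers following along, the overall shape of the change is replacing the 
dataset-wide {{synchronized(this)}} with a read/write lock so read-mostly 
lookups stop serializing behind slow writes. A simplified sketch of that 
pattern (not the patch itself):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class DatasetLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  // Read-mostly lookups can run in parallel with each other.
  Object getReplicaInfo(long blockId) {
    lock.readLock().lock();
    try {
      return null; // look up replica state here
    } finally {
      lock.readLock().unlock();
    }
  }

  // Mutations still take the exclusive write lock.
  void addBlock(long blockId) {
    lock.writeLock().lock();
    try {
      // update the volume map here
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}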


> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-3.patch, HDFS-9668-4.patch, HDFS-9668-5.patch, HDFS-9668-6.patch, 
> HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> 

[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_13

Updated the patch, please review.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME"
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9868) add reading source cluster with HA access mode feature for DistCp

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15634990#comment-15634990
 ] 

Hadoop QA commented on HDFS-9868:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-9868 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791618/HDFS-9868.4.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17418/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add reading source cluster with HA access mode feature for DistCp
> -
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, 
> HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. Copying huge data with distcp can 
> take a long time, and if the source cluster switches its active namenode 
> during the copy, the distcp run will fail. This patch enables DistCp to read 
> source cluster files in HA access mode. A source cluster configuration file 
> needs to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
> configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9868) add reading source cluster with HA access mode feature for DistCp

2016-11-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15634968#comment-15634968
 ] 

Xiao Chen commented on HDFS-9868:
-

Hi [~iceberg565], thanks for reporting the issue and providing a fix. Did you 
have a chance to address the above comments from [~jojochuang]?

> add reading source cluster with HA access mode feature for DistCp
> -
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.1.patch, HDFS-9868.2.patch, HDFS-9868.3.patch, 
> HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. Copying huge data with distcp can 
> take a long time, and if the source cluster switches its active namenode 
> during the copy, the distcp run will fail. This patch enables DistCp to read 
> source cluster files in HA access mode. A source cluster configuration file 
> needs to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
> configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11100) Recursively deleting directory with file protected by sticky bit should fail

2016-11-03 Thread John Zhuge (JIRA)
John Zhuge created HDFS-11100:
-

 Summary: Recursively deleting directory with file protected by 
sticky bit should fail
 Key: HDFS-11100
 URL: https://issues.apache.org/jira/browse/HDFS-11100
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Critical


Recursively deleting a directory that contains files or directories protected 
by the sticky bit should fail, but it doesn't in HDFS. In the case below, 
{{/tmp/test/sticky_dir/f2}} is protected by the sticky bit, thus recursively 
deleting {{/tmp/test/sticky_dir}} should fail.
{noformat}
+ hdfs dfs -ls -R /tmp/test
drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
/tmp/test/sticky_dir
-rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
/tmp/test/sticky_dir/f2

+ sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
rm: Permission denied by sticky bit: user=hadoop, 
path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt

+ sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
Deleted /tmp/test/sticky_dir
{noformat}

Centos 6.4 behavior:
{noformat}
$ ls -lR /tmp/test
/tmp/test: 
total 4
drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit

/tmp/test/sbit:
total 0
-rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2

$ sudo -u mapred rm -fr /tmp/test/sbit
rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted

$ chmod -t /tmp/test/sbit
$ sudo -u mapred rm -fr /tmp/test/sbit
{noformat}
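
The POSIX rule being compared against: when a directory has the sticky bit 
set, an entry may be deleted only by the entry's owner, the directory's owner, 
or the superuser. A sketch of that check as it would apply to each entry 
visited by a recursive delete (hypothetical types, not the actual HDFS 
permission checker):

{code}
class StickyBitCheckSketch {
  // Deny deleting 'child' out of a sticky 'parent' unless the caller owns
  // the child or the parent, or is the superuser.
  static boolean mayDelete(String user, boolean parentHasStickyBit,
      String parentOwner, String childOwner, boolean isSuperuser) {
    if (!parentHasStickyBit || isSuperuser) {
      return true;
    }
    return user.equals(childOwner) || user.equals(parentOwner);
  }
}
{code}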



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11058) Implement 'hadoop fs -df' command for ViewFileSystem

2016-11-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634890#comment-15634890
 ] 

Andrew Wang commented on HDFS-11058:


Thanks for the thoughtful response Manoj. I need to think about this some more, 
but a few ideas for discussion:

bq. IMHO, ViewFsMountPoint should be abstracted and expose only the needed 
attributes – the MountedOn path and its target FileSystem. The FileSystem could 
be a hdfs:// or it could be a one for MergeFs, but I don't see a need for 
exposing all the NameServices, at least for now.

I think the intent was to implement merging in ViewFileSystem itself, rather 
than a new FileSystem. So we'd need to return an array here, like in the 
original MountPoint.

Our user API for referring to a FileSystem is also by URI, not by object 
reference. Yes, the user can always call {{getUri}}, but there is global state 
in a FileSystem like file handles and statistics, and it might be better to not 
share that by handing out a FileSystem object which they can poke at. Also, 
since it looks like we allow mounting subdirectories, {{FileSystem#getUri}} by 
itself is underspecified without the path component.

Finally, what is the reason for using generics? getTargetFileSystem will always 
return a FileSystem, right?

bq. 

Not worth separating out, though we should think about this some more.

As a semi-side note, I'm quite surprised that ViewFileSystem is annotated 
@Public. My impression from DistributedFileSystem is that the FileSystem 
subclasses are private, and are only used when cast to a FileSystem. This is 
why we have HdfsAdmin, which lets you do DFS-specific operations.

ViewFsUtil is similar to HdfsAdmin, but used to examine an already created 
ViewFileSystem instance. However, since {{getStatus}} takes a ViewFileSystem, 
it forces the user to downcast, which is unfortunate. Instead, we could have an 
{{isViewFileSystem}} API, and have {{getStatus}} take a {{FileSystem}} and 
throw UnsupportedOperationException if the passed FS is not a VFS.

Finally, we should probably also name this {{ViewFileSystemUtil}} since 
{{ViewFs}} is the FileContext implementation.
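A minimal sketch of that shape (the class and method names are just the ones 
proposed above, not a committed API):
{code}
import org.apache.hadoop.fs.FileSystem;

public final class ViewFileSystemUtil {
  private ViewFileSystemUtil() {}

  /** True if fs is a ViewFileSystem, without exposing the concrete class. */
  public static boolean isViewFileSystem(FileSystem fs) {
    return "viewfs".equals(fs.getUri().getScheme());
  }

  /** Guard for getStatus and friends; avoids forcing callers to downcast. */
  static void checkViewFileSystem(FileSystem fs) {
    if (!isViewFileSystem(fs)) {
      throw new UnsupportedOperationException(
          fs.getUri() + " is not a ViewFileSystem");
    }
  }
}
{code}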

bq.  I have seen the unix df command getting stuck at times when NFS servers 
are not reachable. But, I am totally ok to remove this extra feature and error 
out when any of the backing NameServices are not reachable.

Good point, I've seen similar behavior as well. Let's tackle this in a separate 
JIRA though, and maybe put the behavior behind a flag. I do think we should 
return non-zero in this case, and think about how scripts will be able to parse 
the output.

> Implement 'hadoop fs -df' command for ViewFileSystem   
> ---
>
> Key: HDFS-11058
> URL: https://issues.apache.org/jira/browse/HDFS-11058
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: viewfs
> Attachments: HDFS-11058.01.patch
>
>
> Df command doesn't seem to work well with ViewFileSystem. It always reports 
> used data as 0. Here is the client mount table configuration I am using 
> against a federated clusters of 2 NameNodes and 2 DataNoes. 
> {code}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>viewfs://ClusterX/</value>
>   </property>
>   ...
>   <property>
>     <name>fs.default.name</name>
>     <value>viewfs://ClusterX/</value>
>   </property>
>   ...
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn0</name>
>     <value>hdfs://127.0.0.1:50001/</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn1</name>
>     <value>hdfs://127.0.0.1:51001/</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn2</name>
>     <value>hdfs://127.0.0.1:52001/nn2</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterX.link./nn3</name>
>     <value>hdfs://127.0.0.1:52001/nn3</value>
>   </property>
>   <property>
>     <name>fs.viewfs.mounttable.ClusterY.linkMergeSlash</name>
>     <value>hdfs://127.0.0.1:50001/</value>
>   </property>
> </configuration>
> {code}
> {{Df}} command always reports Size/Available as 8.0E and the usage as 0 for 
> any federated cluster. 
> {noformat}
> # hadoop fs -fs viewfs://ClusterX/ -df  /
> Filesystem                         Size  Used            Available  Use%
> viewfs://ClusterX/  9223372036854775807     0  9223372036854775807    0%
> # hadoop fs -fs viewfs://ClusterX/ -df  -h /
> Filesystem   Size  Used  Available  Use%
> viewfs://ClusterX/  8.0 E 0  8.0 E0%
> # hadoop fs -fs viewfs://ClusterY/ -df  -h /
> Filesystem   Size  Used  Available  Use%
> viewfs://ClusterY/  8.0 E 0  8.0 E0%
> {noformat}
> Whereas {{Du}} command seems to work as expected even with ViewFileSystem.
> {noformat}
> # hadoop fs -fs viewfs://ClusterY/ -du -h /
> 10.6 K  31.8 K  /build.log.16y
> 0   0   /user
> # hadoop fs -fs viewfs://ClusterX/ -du -h /
> 10.6 K  31.8 K  

[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634853#comment-15634853
 ] 

Hadoop QA commented on HDFS-11085:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11085 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836996/HDFS-11085.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 810c59746e44 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5cad93d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17417/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17417/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17417/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: 

[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-11-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634823#comment-15634823
 ] 

Andrew Wang commented on HDFS-10899:


Thanks for working on this Xiao! I didn't give the whole patch a deep look, but 
some review comments to start it off. I didn't get to the KMS changes, it'd 
help to split those into a separate patch to make review easier.

Comments as follows:

* getShortUsage, are -path / -cancel / -verify exclusive operations? We should 
indicate this in the usage text if so. {{man git}} for instance looks like:

{noformat}
usage: git [--version] [--help] [-C <path>] [-c name=value]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]
{noformat}

So {{<command> <args>}} would work here, with the long usage explaining the 
commands.

* We have new functionality to specify time with units in configuration, want 
to use it here? (Sketch below.)
* New keys should be documented in hdfs-default.xml also
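A minimal sketch of that (the key name here is made up purely for 
illustration):
{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// getTimeDuration accepts suffixed values such as "10s", "5m", "1h".
long intervalMs = conf.getTimeDuration(
    "dfs.namenode.reencrypt.interval", 60000L, TimeUnit.MILLISECONDS);
{code}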

EZManager:

* Some unused imports that I'm sure checkstyle will complain about
* Typo: "filesReencryptedInCurentZone" -> "Current"
* Can the new methods in EZManager be encapsulated as a class? Could use some 
javadoc about expected behavior also.
* Empty {{finally}} case in the run method, can be deleted?
* Thread should check if the NN is active or standby, typically we do this in 
FSNamesystem#startActiveServices and FSNamesystem#stopActiveServices
* reencryptDir looks like it takes the writeLock the entire time while it's 
processing the files in a directory, including when talking to the KMS. We 
should not hold the lock while talking to the KMS.
* reencryptDir is also a big method, refactor the innards of the for loop?
* We also should do a logSync before we log the "Completed" message in 
{{reencryptEncryptionZoneInt}}, so we block first. Maybe elsewhere too.
* Calling {{getListing}} will generate audit logs, so we should be calling 
{{getListingInt}} instead. Also, we don't need a full {{HdfsFileStatus}} to 
reencrypt each file, so let's consider an INode-based traversal.
* Structuring this as an iterator (and maybe also a visitor) would make the 
code more clear.
* Recommend we rename {{updateReencryptStatus}} so it's clear that this is used 
during edit log / fsimage loading, or at least add some javadoc.

ReencryptInfo:
* Recommend we name ReencryptInfo something like "PendingReencryptionZones" or 
something to be more self documenting. The javadoc also could mention that this 
is basically a queue.
* A small warning that {{hasZone}} will be {{O(n)}} if the JDK implementation 
is truly a queue, maybe we should use a LinkedHashMap instead? I don't think we 
use the deque nature.
* How is the reencryptInfo in EZManager synchronized? I ask because it's a 
ConcurrentLinkedDeque, is there actually concurrency happening?
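To make the trade-off concrete, here is a minimal sketch (class and member 
names are hypothetical, not from the patch) of a LinkedHashMap-backed pending 
queue that keeps insertion order while making {{hasZone}} O(1):
{code}
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap iterates in FIFO insertion order like a queue, but
// membership checks are O(1) instead of O(n).
class PendingReencryptionZones {
  private final Map<Long, String> zones = new LinkedHashMap<>();

  synchronized void addZone(long zoneId, String ezKeyVersionName) {
    zones.put(zoneId, ezKeyVersionName);
  }

  synchronized boolean hasZone(long zoneId) {
    return zones.containsKey(zoneId);
  }

  /** Removes and returns the oldest pending zone id, or null if empty. */
  synchronized Long pollFirstZone() {
    Iterator<Long> it = zones.keySet().iterator();
    if (!it.hasNext()) {
      return null;
    }
    Long head = it.next();
    it.remove();
    return head;
  }
}
{code}
With synchronized methods like these, a ConcurrentLinkedDeque would only be 
needed if several threads really do traverse the structure concurrently.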

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-03 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634810#comment-15634810
 ] 

Lei (Eddy) Xu commented on HDFS-11056:
--

Hi, [~jojochuang]. [HDFS-10636] is not a related change. 

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.reproduce.patch
>
>
> If there are two clients, one of them open-append-close a file continuously, 
> while the other open-read-close the same file continuously, the reader 
> eventually gets a checksum error in the data read.
> On my local Mac, it takes a few minutes to produce the error. This happens to 
> httpfs clients, but there's no reason not to believe this happens to any 
> append clients.
> I have a unit test that demonstrates the checksum error. Will attach later.
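> A rough illustration of that access pattern (a hand-written sketch, not the 
> unit test to be attached):
> {code}
> import org.apache.hadoop.fs.FSDataInputStream;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> // One thread open-append-close in a loop, another open-read-close the
> // same file; the reader eventually hits the checksum error.
> void race(FileSystem fs, Path file) {
>   new Thread(() -> {
>     try {
>       while (true) {
>         try (FSDataOutputStream out = fs.append(file)) {
>           out.writeBytes("x");
>         }
>       }
>     } catch (Exception e) { /* appender stops on error */ }
>   }).start();
> 
>   new Thread(() -> {
>     try {
>       byte[] buf = new byte[4096];
>       while (true) {
>         try (FSDataInputStream in = fs.open(file)) {
>           while (in.read(buf) != -1) { /* checksum error surfaces here */ }
>         }
>       }
>     } catch (Exception e) { /* reader observed the corruption */ }
>   }).start();
> }
> {code}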
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=true   ugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=open   src=/tmp/bar.txt
> dst=null   perm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success
> 2016-10-25 15:34:45,170 WARN  DFSClient - Found Checksum error for 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 from 
> 

[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634790#comment-15634790
 ] 

Mingliang Liu commented on HDFS-11085:
--

+1 pending on Jenkins.

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.
> UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
> unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which 
> tests that in HA mode, we should not have been able to start any NN without 
> shared dir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11085:
-
Description: 
This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
test class.

UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which tests 
that in HA mode, we should not have been able to start any NN without shared 
dir.

  was:This can be placed in 
{{org.apache.hadoop.hdfs.server.namenode.TestStartup}} test class.


> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.
> UPDATE: this JIRA is for name dir only; for the edit log directories, we have 
> unit test {{TestInitializeSharedEdits#testInitializeSharedEdits}}, which 
> tests that in HA mode, we should not have been able to start any NN without 
> shared dir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-03 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634720#comment-15634720
 ] 

Lei (Eddy) Xu edited comment on HDFS-9668 at 11/4/16 12:09 AM:
---

Hi, [~jingcheng...@intel.com]

It looks good to me overall. Some small nits:

* Can we change {{dfs.datanode.dataset.lock.size}} to 
{{dfs.datanode.dataset.block.op.lock.size}} or something more specific?
* {code}
if (mutex == null) {
  throw new HadoopIllegalArgumentException(
      "Object to synchronize on cannot be null");
}
{code}
Let's use {{"Mutex to synchronize on cannot be null"}}?

Btw, why doesn't {{ReplicaMap}} use itself as the mutex? I might be missing 
something. 
For example, {{FsDatasetImpl#volumeMap}} is initialized with a 
{{FsDatasetImpl}} instance as the mutex, while the other operations within 
{{FsDatasetImpl}} are protected by the read/write locks added in this patch. So 
this {{replicaMap}} is not synchronized with the rest of the {{FsDatasetImpl}} 
methods. 

* For many places like the following, please keep a space between {{try}} and 
the parenthesis.
{code}
 try(AutoCloseableLock lock = fds.acquireDatasetWriteLock()) {}
{code} 

* Finally, would you mind providing a branch-2 patch? I could not apply the 
{{-22}} patch to branch-2. Thanks much!




was (Author: eddyxu):
Hi, [~jingcheng...@intel.com]

It looks good to me overall. Some small nits:

* Can we change {{dfs.datanode.dataset.lock.size}} to 
{{dfs.datanode.dataset.block.op.lock.size}} or something more specific?
* {code}
if (mutex == null) {
  throw new HadoopIllegalArgumentException(
      "Object to synchronize on cannot be null");
}
{code}
Let's use {{"Mutex to synchronize on cannot be null"}}?

Btw, why doesn't {{ReplicaMap}} use itself as the mutex? I might be missing 
something. 
For example, {{FsDatasetImpl#volumeMap()}} is initialized with a 
{{FsDatasetImpl}} instance, and the other operations within {{FsDatasetImpl}} 
are protected by read/write locks. So this {{replicaMap}} is not synchronized 
with the rest of the {{FsDatasetImpl}} methods. 

* For many places like the following, please keep a space between {{try}} and 
the parenthesis.
{code}
 try(AutoCloseableLock lock = fds.acquireDatasetWriteLock()) {}
{code} 

* Finally, would you mind providing a branch-2 patch? I could not apply the 
{{-22}} patch to branch-2. Thanks much!



> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-3.patch, HDFS-9668-4.patch, HDFS-9668-5.patch, HDFS-9668-6.patch, 
> HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> 

[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-03 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634720#comment-15634720
 ] 

Lei (Eddy) Xu commented on HDFS-9668:
-

Hi, [~jingcheng...@intel.com]

It looks good to me overall. Some small nits:

* Can we change {{dfs.datanode.dataset.lock.size}} to 
{{dfs.datanode.dataset.block.op.lock.size}} or something more specific?
* {code}
if (mutex == null) {
  throw new HadoopIllegalArgumentException(
      "Object to synchronize on cannot be null");
}
{code}
Let's use {{"Mutex to synchronize on cannot be null"}}?

Btw, why doesn't {{ReplicaMap}} use itself as the mutex? I might be missing 
something. 
For example, {{FsDatasetImpl#volumeMap()}} is initialized with a 
{{FsDatasetImpl}} instance, and the other operations within {{FsDatasetImpl}} 
are protected by read/write locks. So this {{replicaMap}} is not synchronized 
with the rest of the {{FsDatasetImpl}} methods. 

* For many places like the following, please keep a space between {{try}} and 
the parenthesis.
{code}
 try(AutoCloseableLock lock = fds.acquireDatasetWriteLock()) {}
{code} 

* Finally, would you mind providing a branch-2 patch? I could not apply the 
{{-22}} patch to branch-2. Thanks much!
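
For context on the last nit, the try-with-resources idiom relies on an 
AutoCloseable lock wrapper; a minimal sketch of that shape (assumed for 
illustration, not necessarily the exact class in the patch):
{code}
import java.util.concurrent.locks.Lock;

// Wrapping a Lock in AutoCloseable lets callers write
//   try (AutoCloseableLock l = new AutoCloseableLock(lock).acquire()) { ... }
// so the unlock happens on every exit path.
class AutoCloseableLock implements AutoCloseable {
  private final Lock lock;

  AutoCloseableLock(Lock lock) {
    this.lock = lock;
  }

  AutoCloseableLock acquire() {
    lock.lock();
    return this;
  }

  @Override
  public void close() {
    lock.unlock();
  }
}
{code}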



> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-3.patch, HDFS-9668-4.patch, HDFS-9668-5.patch, HDFS-9668-6.patch, 
> HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> 

[jira] [Commented] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634707#comment-15634707
 ] 

Hadoop QA commented on HDFS-10941:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10941 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836985/HDFS-10941.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 258639f113f8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7534aee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17414/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17414/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17414/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: 

[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634691#comment-15634691
 ] 

Xiaobing Zhou commented on HDFS-11085:
--

v001 addressed your comments, thanks [~liuml07].

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11085:
-
Attachment: HDFS-11085.001.patch

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch, HDFS-11085.001.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634682#comment-15634682
 ] 

Hadoop QA commented on HDFS-11045:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 42 unchanged - 35 fixed = 46 total (was 77) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11045 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836991/HDFS-11045.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f3c186451c8c 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5cad93d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17416/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17416/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17416/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17416/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17416/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17416/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634673#comment-15634673
 ] 

Hadoop QA commented on HDFS-11099:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
15s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
23s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
13s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:78fc6b6 |
| JIRA Issue | HDFS-11099 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836986/HDFS-11099.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 9fcab7c647d5 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 4f3696d |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_111 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| JDK v1.7.0_111  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17415/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17415/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message 

[jira] [Updated] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-03 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-11045:

Attachment: HDFS-11045.006.patch

Throttling test is looking great.  This patch fixes a different test that was 
impacted by the changes.

> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> -
>
> Key: HDFS-11045
> URL: https://issues.apache.org/jira/browse/HDFS-11045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-11045.001.patch, HDFS-11045.002.patch, 
> HDFS-11045.003.patch, HDFS-11045.004.patch, HDFS-11045.005.patch, 
> HDFS-11045.006.patch
>
>
>   TestDirectoryScanner.testThrottling:709 Throttle is too permissive
> https://builds.apache.org/job/PreCommit-HDFS-Build/17259/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634610#comment-15634610
 ] 

Hadoop QA commented on HDFS-11045:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 76 unchanged - 1 fixed = 79 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11045 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836977/HDFS-11045.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 321f53913d05 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7534aee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17413/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17413/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17413/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17413/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> -
>
> 

[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634593#comment-15634593
 ] 

Anu Engineer commented on HDFS-11081:
-

Adding a pull request to make code review comments easier. 
https://github.com/apache/hadoop/pull/151


> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634578#comment-15634578
 ] 

Zhe Zhang commented on HDFS-10702:
--

Thanks [~mackrorysd] for taking over this work! It's exciting to see this 
feature moving forward.

IIUC, if a client issues a read with a "stale bound" which is fresher than the 
SbNN's current latest TxId, the SbNN will tail edit logs from ANN, right?

My concern is that if a significant portion of read requests follow this 
scenario (i.e. need a fresher TxId), that will cause high writeLock contention 
on the SbNN. At least in our production cluster, over 99% of RPC requests are 
reads, and the NN relies on parallelism between the read requests to keep up 
with incoming requests. If, say, 50% of those requests are sent to the SbNN and 
50% of those request a fresher TxId (considering the edit log tailing 
frequency, this is quite likely), then roughly 25% of all RPCs would trigger 
edit log tailing under the SbNN write lock, leaving the SbNN with much higher 
writeLock contention than the current ANN.

Maybe we can consider combining this SbNN read logic with the ideas discussed 
under HDFS-8940, to somehow combine the SbNN state with events from 
{{inotify}}? Pinging [~drankye], [~mingma] and [~HuafengWang] for thoughts.

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10702.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> bottleneck for scalability. One way to solve this problem is to send 
> read-only operations to the Standby NameNode. The disadvantage is that the 
> read might be stale. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.
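For a sense of what such a knob could look like on the client side, here is a 
purely hypothetical sketch; none of these names come from the patch, and the 
real API is whatever HDFS-10702 ends up defining:
{code}
// Hypothetical usage sketch only: enableStaleRead/disableStaleRead are
// illustrative names, not the patch's API. Assumes org.apache.hadoop.fs.*
// and org.apache.hadoop.hdfs.DistributedFileSystem are on the classpath.
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
dfs.enableStaleRead(1000L);  // hypothetical: tolerate reads up to 1000 txids stale
FileStatus status = dfs.getFileStatus(new Path("/data/part-0")); // may hit the SbNN
dfs.disableStaleRead();      // hypothetical: back to strong reads via the ANN
{code}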



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10702:
-
Comment: was deleted

(was: Thanks much [~mackrorysd] for taking over the work. It's exciting to see 
this feature moving forward.

Quick comment while I'm still looking at the whole patch:
Is {{SyncInfo}} designed to only hold the last applied TxID? The Javadoc seems 
to imply a bigger scope so I'm not entirely sure. If not, should we just use a 
long type?)

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10702.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> bottleneck for scalability. One way to solve this problem is to send 
> read-only operations to the Standby NameNode. The disadvantage is that the 
> read might be stale. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11088) Quash unnecessary safemode WARN message during NameNode startup

2016-11-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11088:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Yiqun, and for the review, Mingliang! I've committed 
this to trunk and branch-2.

> Quash unnecessary safemode WARN message during NameNode startup
> ---
>
> Key: HDFS-11088
> URL: https://issues.apache.org/jira/browse/HDFS-11088
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11088.001.patch, HDFS-11088.002.patch, 
> HDFS-11088.003.patch
>
>
> I tried starting a NameNode on a freshly formatted cluster, and it produced 
> this WARN log:
> {noformat}
> 16/11/01 16:42:44 WARN blockmanagement.BlockManagerSafeMode: forceExit used 
> when normal exist would suffice. Treating force exit as normal safe mode exit.
> {noformat}
> I didn't run any special commands related to safemode, so this log should be 
> quashed.
> I didn't try branch-2, but it's likely this issue exists there as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634531#comment-15634531
 ] 

Zhe Zhang commented on HDFS-10702:
--

Thanks much [~mackrorysd] for taking over the work. It's exciting to see this 
feature moving forward.

Quick comment while I'm still looking at the whole patch:
Is {{SyncInfo}} designed to only hold the last applied TxID? The Javadoc seems 
to imply a bigger scope so I'm not entirely sure. If not, should we just use a 
long type?

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10702.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> bottleneck for scalability. One way to solve this problem is to send 
> read-only operations to the Standby NameNode. The disadvantage is that the 
> read might be stale. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-11099:
---
Attachment: (was: HDFS-11099.HDFS-8707.000.patch)

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-11099:
---
Attachment: HDFS-11099.HDFS-8707.000.patch

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634496#comment-15634496
 ] 

James Clampffer commented on HDFS-11099:


Patch looks good to me, +1. Looks like you might need to resubmit to get a CI 
run, but I'll commit once there's a passing one.

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-03 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-10941:
--
Attachment: HDFS-10941.002.patch

Re-uploading the v002 patch to try to trigger Jenkins.

> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: HDFS-10941
> URL: https://issues.apache.org/jira/browse/HDFS-10941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-10941.001.patch, HDFS-10941.002.patch, 
> HDFS-10941.002.patch
>
>
> BlockManager#processMisReplicatesAsync is the daemon thread running inside 
> the namenode to handle misreplicated blocks. As shown below, it emits a 
> trace log for each block in the cluster as it is processed (1 blocks per 
> iteration after sleep 10s). 
> {code}
>   MisReplicationResult res = processMisReplicatedBlock(block);
>   if (LOG.isTraceEnabled()) {
>     LOG.trace("block " + block + ": " + res);
>   }
> {code}
> However, this is not very useful: dumping every block in the cluster 
> overwhelms the namenode log with little useful information, given that the 
> majority of blocks are not over/under replicated. This ticket is opened to 
> improve the log for easier troubleshooting of block replication issues by:
>  
> 1) adding a debug log for blocks that get an under/over-replicated result 
> during {{processMisReplicatedBlock()}}, or 
> 2) changing to a trace log only for blocks that get a non-OK result during 
> {{processMisReplicatedBlock()}}
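A minimal sketch of option 2, assuming the existing {{MisReplicationResult}} 
enum exposes an {{OK}} value:
{code}
// Sketch: trace-log only blocks whose result is actionable; healthy blocks
// stay out of the log entirely.
MisReplicationResult res = processMisReplicatedBlock(block);
if (res != MisReplicationResult.OK && LOG.isTraceEnabled()) {
  LOG.trace("block " + block + ": " + res);
}
{code}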



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive

2016-11-03 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-11045:

Attachment: HDFS-11045.005.patch

I couldn't let it go. :)

I think this patch looks promising.  In my local testing the variability in the 
ratios is down to <= 0.01.  Fingers crossed.

> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> -
>
> Key: HDFS-11045
> URL: https://issues.apache.org/jira/browse/HDFS-11045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-11045.001.patch, HDFS-11045.002.patch, 
> HDFS-11045.003.patch, HDFS-11045.004.patch, HDFS-11045.005.patch
>
>
>   TestDirectoryScanner.testThrottling:709 Throttle is too permissive
> https://builds.apache.org/job/PreCommit-HDFS-Build/17259/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11097) Fix the jenkins warning related to the deprecated method StorageReceivedDeletedBlocks

2016-11-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11097:
---
Fix Version/s: 3.0.0-alpha2

Reminder to please also set the appropriate 3.x fix version, thanks.

> Fix the jenkins warning related to the deprecated method 
> StorageReceivedDeletedBlocks
> -
>
> Key: HDFS-11097
> URL: https://issues.apache.org/jira/browse/HDFS-11097
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11097.001.patch, warn.txt
>
>
> After HDFS-6094, the constructor of {{StorageReceivedDeletedBlocks}} was 
> updated and the old one deprecated; the new constructor lets us pass the 
> storage type and state as well. But some test cases were not updated to the 
> new constructor, which causes many deprecation warnings in every jenkins 
> build. Part of the warning output:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[315,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[333,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> {code}
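The fix for each warning is mechanical; a sketch, assuming the 
{{DatanodeStorage}}-based constructor that HDFS-6094 introduced as the 
replacement:
{code}
// Deprecated form flagged above:
new StorageReceivedDeletedBlocks(storageID, receivedDeletedBlockInfos);
// Replacement sketch: wrap the storage ID in a DatanodeStorage, which also
// carries the storage state and type.
new StorageReceivedDeletedBlocks(
    new DatanodeStorage(storageID), receivedDeletedBlockInfos);
{code}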



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11076) Add unit test for extended Acls

2016-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634365#comment-15634365
 ] 

Hudson commented on HDFS-11076:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10767 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10767/])
HDFS-11076. Add unit test for extended Acls. Contributed by Chen Liang 
(liuml07: rev 7534aee09af47c6961100588312da8d133be1b27)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExtendedAcls.java


> Add unit test for extended Acls
> ---
>
> Key: HDFS-11076
> URL: https://issues.apache.org/jira/browse/HDFS-11076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11076.001.patch, HDFS-11076.002.patch, 
> HDFS-11076.003.patch
>
>
> This JIRA tries to add unit tests for extended ACLs in HDFS, to cover the 
> following scenarios:
> # the default ACL of a parent directory should be inherited by newly 
> created child directories and files
> # the access ACL of a parent directory should not be inherited by newly 
> created child directories and files
> # changing the default ACL of a parent directory should not change the ACL 
> of existing child directories and files
> # a child directory can add more default ACL entries in addition to the ACL 
> inherited from its parent directory
> # a child directory can also restrict the ACL inherited from its parent 
> directory
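As an illustration, scenario 1 can be exercised with the public 
{{FileSystem}} ACL API roughly as below; the path and user names are 
hypothetical, and {{fs}} is assumed to be an initialized {{FileSystem}} 
handle:
{code}
// Give /parent a default ACL entry, then check that a new child inherits it.
// Assumed imports: java.util.Collections, org.apache.hadoop.fs.Path,
// org.apache.hadoop.fs.permission.*.
Path parent = new Path("/parent");
fs.mkdirs(parent);
fs.modifyAclEntries(parent, Collections.singletonList(
    new AclEntry.Builder()
        .setScope(AclEntryScope.DEFAULT)
        .setType(AclEntryType.USER)
        .setName("bar")
        .setPermission(FsAction.ALL)
        .build()));
Path child = new Path(parent, "child");
fs.mkdirs(child);
// The child's ACL should now contain an entry for user bar derived from the
// parent's default ACL.
AclStatus childAcl = fs.getAclStatus(child);
{code}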



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634362#comment-15634362
 ] 

Hadoop QA commented on HDFS-11099:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  5m 
12s{color} | {color:red} Docker failed to build yetus/hadoop:78fc6b6. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11099 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836972/HDFS-11099.HDFS-8707.000.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17412/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-11099:
---
Status: Patch Available  (was: Open)

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-11099:
---
Attachment: HDFS-11099.HDFS-8707.000.patch

HDFS-11099.HDFS-8707.000.patch is available for review.

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11076) Add unit test for extended Acls

2016-11-03 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11076:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} through {{branch-2.8}}. Thanks [~vagarychen] for the 
contribution, and thanks Arpit for the offline discussion.

> Add unit test for extended Acls
> ---
>
> Key: HDFS-11076
> URL: https://issues.apache.org/jira/browse/HDFS-11076
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11076.001.patch, HDFS-11076.002.patch, 
> HDFS-11076.003.patch
>
>
> This JIRA tries to add unit tests for extended ACLs in HDFS, to cover the 
> following scenarios:
> # the default ACL of a parent directory should be inherited by newly 
> created child directories and files
> # the access ACL of a parent directory should not be inherited by newly 
> created child directories and files
> # changing the default ACL of a parent directory should not change the ACL 
> of existing child directories and files
> # a child directory can add more default ACL entries in addition to the ACL 
> inherited from its parent directory
> # a child directory can also restrict the ACL inherited from its parent 
> directory



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-11099:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-8707

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-03 Thread Xiaowei Zhu (JIRA)
Xiaowei Zhu created HDFS-11099:
--

 Summary: Expose rack id in hdfsDNInfo
 Key: HDFS-11099
 URL: https://issues.apache.org/jira/browse/HDFS-11099
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaowei Zhu
Assignee: Xiaowei Zhu


hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634134#comment-15634134
 ] 

Mingliang Liu edited comment on HDFS-11085 at 11/3/16 8:39 PM:
---

The patch looks good to me overall.

Some minor comments:
# The checkstyle warning is related to the patch.
# MiniDFSCluster can be used in try-with-resources. Shorter and neater.
# I think we can use {{Paths.get()}} for joining path strings. Assuming the 
file path separator is / is probably not platform independent. Also, if there 
are methods in the JDK or an Apache component that work just fine, I hesitate 
to use third-party libraries.
{code}
final String nnDirStr = Joiner.on("/").join(
    hdfsDir.toString(),
    GenericTestUtils.getMethodName(), "name");
{code}
# I don't think we need the implementation details about the Collection type 
(List). We just need the only element in the collection, don't we?
{code}
/* get and verify NN dir */
List<URI> nameDirs = (List<URI>) FSNamesystem.getNamespaceDirs(config);
{code}
Alternatively:
{code}
final Collection<URI> nameDirs = FSNamesystem.getNamespaceDirs(config);
assertNotNull(nameDirs);
assertTrue(nameDirs.iterator().hasNext());
final URI nameDir = nameDirs.iterator().next();
...
{code}


was (Author: liuml07):
The patch looks good to me overall.

Some minor comments:
# The checkstyle warning is related to the patch.
# MiniDFSCluster can be used in try-with-resources. Shorter and neater.
# I think we can use {{Paths.get()}} for joining path strings. Assuming the 
file path separator is / is probably not platform independent. Also, if there 
are methods in the JDK or an Apache component that work just fine, I hesitate 
to use third-party libraries.
{code}
final String nnDirStr = Joiner.on("/").join(
    hdfsDir.toString(),
    GenericTestUtils.getMethodName(), "name");
{code}
# I don't think we need the implementation details about the Collection type 
(List). We just need the only element in the collection, don't we?
{code}
/* get and verify NN dir */
List<URI> nameDirs = (List<URI>) FSNamesystem.getNamespaceDirs(config);
{code}
Alternatively:
{code}
Collection<URI> nameDirs = FSNamesystem.getNamespaceDirs(config);
assertNotNull(nameDirs);
assertTrue(nameDirs.iterator().hasNext());
final URI nameDir = nameDirs.iterator().next();
...
{code}

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634134#comment-15634134
 ] 

Mingliang Liu commented on HDFS-11085:
--

The patch looks good to me overall.

Some minor comments:
# The checkstyle warning is related to the patch.
# MiniDFSCluster can be used in try-catch. Shorter and neater.
# I think we can use {{Paths.get()}} for joining path strings. Assuming the 
file path separator is / is probably not platform independent. Also, if there 
are methods in the JDK or an Apache component that work just fine, I hesitate 
to use third-party libraries.
{code}
final String nnDirStr = Joiner.on("/").join(
    hdfsDir.toString(),
    GenericTestUtils.getMethodName(), "name");
{code}
# I don't think we need the implementation details about the Collection type 
(List). We just need the only element in the collection, don't we?
{code}
/* get and verify NN dir */
List<URI> nameDirs = (List<URI>) FSNamesystem.getNamespaceDirs(config);
{code}
Alternatively:
{code}
Collection<URI> nameDirs = FSNamesystem.getNamespaceDirs(config);
assertNotNull(nameDirs);
assertTrue(nameDirs.iterator().hasNext());
final URI nameDir = nameDirs.iterator().next();
...
{code}
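A minimal sketch combining points 2 and 3, reusing the {{hdfsDir}} and 
{{config}} variables from the quoted snippet (sketch only, not code from the 
patch):
{code}
// Join path segments with java.nio.file.Paths instead of Guava's Joiner,
// and let try-with-resources shut the cluster down.
// Assumed imports: java.nio.file.Paths, org.apache.hadoop.hdfs.*.
final String nnDirStr = Paths.get(hdfsDir.toString(),
    GenericTestUtils.getMethodName(), "name").toString();
config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nnDirStr);
try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(config).build()) {
  cluster.waitActive();
  // ... make the name dir unwritable and assert the NameNode fails to start ...
}
{code}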

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634134#comment-15634134
 ] 

Mingliang Liu edited comment on HDFS-11085 at 11/3/16 8:27 PM:
---

The patch looks good to me overall.

Some minor comments:
# The checkstyle warning is related to the patch.
# MiniDFSCluster can be used in try-with-resources. Shorter and neater.
# I think we can use {{Paths.get()}} for joining path strings. Assuming the 
file path separator is / is probably not platform independent. Also, if there 
are methods in the JDK or an Apache component that work just fine, I hesitate 
to use third-party libraries.
{code}
final String nnDirStr = Joiner.on("/").join(
    hdfsDir.toString(),
    GenericTestUtils.getMethodName(), "name");
{code}
# I don't think we need the implementation details about the Collection type 
(List). We just need the only element in the collection, don't we?
{code}
/* get and verify NN dir */
List<URI> nameDirs = (List<URI>) FSNamesystem.getNamespaceDirs(config);
{code}
Alternatively:
{code}
Collection<URI> nameDirs = FSNamesystem.getNamespaceDirs(config);
assertNotNull(nameDirs);
assertTrue(nameDirs.iterator().hasNext());
final URI nameDir = nameDirs.iterator().next();
...
{code}


was (Author: liuml07):
The patch looks good to me overall.

Some minor comments:
# The checkstyle warning is related to the patch.
# MiniDFSCluster can be used in try-catch. Shorter and neater.
# I think we can use {{Paths.get()}} for joining path strings. Assuming the 
file path separator is / is probably not platform independent. Also, if there 
are methods in the JDK or an Apache component that work just fine, I hesitate 
to use third-party libraries.
{code}
final String nnDirStr = Joiner.on("/").join(
    hdfsDir.toString(),
    GenericTestUtils.getMethodName(), "name");
{code}
# I don't think we need the implementation details about the Collection type 
(List). We just need the only element in the collection, don't we?
{code}
/* get and verify NN dir */
List<URI> nameDirs = (List<URI>) FSNamesystem.getNamespaceDirs(config);
{code}
Alternatively:
{code}
Collection<URI> nameDirs = FSNamesystem.getNamespaceDirs(config);
assertNotNull(nameDirs);
assertTrue(nameDirs.iterator().hasNext());
final URI nameDir = nameDirs.iterator().next();
...
{code}

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634082#comment-15634082
 ] 

Hadoop QA commented on HDFS-11085:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 73 unchanged - 0 fixed = 74 total (was 73) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
16s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11085 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836933/HDFS-11085.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42b99361cfc1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20c4d8e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17411/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17411/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17411/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: 

[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634005#comment-15634005
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 204 unchanged - 0 fixed = 205 total (was 204) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836912/HDFS-9337_12.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6a55e8dfb4d7 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b71907b |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17409/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17409/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17409/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17409/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: 

[jira] [Commented] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633993#comment-15633993
 ] 

Hadoop QA commented on HDFS-11081:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 169 unchanged - 0 fixed = 170 total (was 169) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
21s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836916/HDFS-11081-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 8db4c8ea6492 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / eb8f2b2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17410/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17410/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17410/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> 

[jira] [Updated] (HDFS-11092) Stackoverflow for schemeless defaultFS with trailing slash

2016-11-03 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11092:
--
Summary: Stackoverflow for schemeless defaultFS with trailing slash  (was: 
Stackoverflow if only root directory is used in Command Line)

> Stackoverflow for schemeless defaultFS with trailing slash
> --
>
> Key: HDFS-11092
> URL: https://issues.apache.org/jira/browse/HDFS-11092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Darius Murawski
>Assignee: John Zhuge
>
> Command: hadoop fs -fs 172.16.12.79/ -mkdir -p /usr/hduser
> Results in a StackOverflowError:
> {code}
> Exception in thread "main" java.lang.StackOverflowError
>   at java.lang.String.indexOf(String.java:1503)
>   at java.net.URI$Parser.scan(URI.java:2951)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3104)
>   at java.net.URI$Parser.parse(URI.java:3063)
>   at java.net.URI.<init>(URI.java:588)
>   at java.net.URI.create(URI.java:850)
>   at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
> (...)
> {code}
> The problem is the slash at the end of the IP address. When I remove it, the 
> command executes correctly.
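The recursion is easy to see from how {{java.net.URI}} parses a schemeless 
value: with no colon there is no scheme, and with no "//" there is no 
authority, so the whole string lands in the path. A null scheme plus null 
authority is exactly the case where {{FileSystem.get(URI, Configuration)}} 
falls back to the default filesystem lookup, which resolves to the same 
schemeless URI again, matching the alternating {{FileSystem.get}} frames in 
the trace above. A minimal, self-contained demo of the parsing:
{code}
import java.net.URI;

public class SchemelessUriDemo {
  public static void main(String[] args) {
    URI u = URI.create("172.16.12.79/");
    System.out.println(u.getScheme());    // null: no colon, so no scheme
    System.out.println(u.getAuthority()); // null: no "//", so no authority
    System.out.println(u.getPath());      // "172.16.12.79/": all path
  }
}
{code}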



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633798#comment-15633798
 ] 

Xiaobing Zhou commented on HDFS-11085:
--

Posted initial patch v000, please kindly review it, thx.

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11085:
-
Status: Patch Available  (was: Open)

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11085) Add unit tests for NameNode failing to startup when name dir can not be written

2016-11-03 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11085:
-
Attachment: HDFS-11085.000.patch

> Add unit tests for NameNode failing to startup when name dir can not be 
> written
> ---
>
> Key: HDFS-11085
> URL: https://issues.apache.org/jira/browse/HDFS-11085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11085.000.patch
>
>
> This can be placed in {{org.apache.hadoop.hdfs.server.namenode.TestStartup}} 
> test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-11-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631811#comment-15631811
 ] 

Xiao Chen edited comment on HDFS-10899 at 11/3/16 6:05 PM:
---

WIP patch attached. The re-encrypt is functional, and all test cases from the 
doc are added. Appreciate any early feedback, thanks in advance!

Work ongoing:
- Cancel re-encrypt
- Check re-encrypt progress
- Support for the client to verify that files are re-encrypted
- Perf testing and related tuning (including updating the EZM to unlock when 
calling {{reencryptEncryptedKey}}, and maybe batching that)


was (Author: xiaochen):
WIP patch attached. The re-encrypt is functional, and all test cases from the 
doc are added. Appreciate any early feedback, thanks in advance!

Work ongoing:
- Cancel re-encrypt
- Check re-encrypt progress
- Support for the client to verify that files are re-encrypted
- Perf testing and related tuning.

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11081) Ozone:SCM: Add support for registerNode in datanode

2016-11-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11081:

Attachment: HDFS-11081-HDFS-7240.002.patch

Version 002:
* Fixed a test failure
* Fixed a findbugs issue


> Ozone:SCM: Add support for registerNode in datanode
> ---
>
> Key: HDFS-11081
> URL: https://issues.apache.org/jira/browse/HDFS-11081
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: Design note for StateMachine.pdf, 
> HDFS-11081-HDFS-7240.001.patch, HDFS-11081-HDFS-7240.002.patch
>
>
> Add support for registerDatanode in Datanode. This allows the container to 
> use SCM via Container datanode protocols.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11097) Fix the jenkins warning related to the deprecated method StorageReceivedDeletedBlocks

2016-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633665#comment-15633665
 ] 

Hudson commented on HDFS-11097:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10764 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10764/])
HDFS-11097. Fix warnings for deprecated StorageReceivedDeletedBlocks (arp: rev 
b71907b2ae3d8f05b4332e06d52ec2096681ea6b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/StorageReceivedDeletedBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> Fix the jenkins warning related to the deprecated method 
> StorageReceivedDeletedBlocks
> -
>
> Key: HDFS-11097
> URL: https://issues.apache.org/jira/browse/HDFS-11097
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.8.0
>
> Attachments: HDFS-11097.001.patch, warn.txt
>
>
> After HDFS-6094, the constructor of {{StorageReceivedDeletedBlocks}} was 
> updated and the old one deprecated; the new constructor lets us pass the 
> storage type and state as well. But some test cases were not updated to the 
> new constructor, which causes many deprecation warnings in every jenkins 
> build. Part of the warning output:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[315,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[333,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_12.patch

Updated patch fixing the UT failures; the checkstyle warning is not related 
to the uploaded patch.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
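
A minimal sketch of the kind of guard this issue calls for, using a 
hypothetical helper (this is not the committed patch): validate required 
parameters up front so WebHDFS can return a clear error instead of an NPE.

{code}
// Hypothetical helper, for illustration only.
public final class ParamCheck {
  private ParamCheck() {}

  public static String checkRequired(String name, String value) {
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException(
          "Required parameter \"" + name + "\" is missing or empty");
    }
    return value;
  }

  public static void main(String[] args) {
    // Simulates a RENAMESNAPSHOT request that omits the new snapshot name.
    checkRequired("snapshotname", null); // IllegalArgumentException, not NPE
  }
}
{code}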



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633606#comment-15633606
 ] 

Hadoop QA commented on HDFS-11098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836840/HDFS-11098.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 330dadd94ef8 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 352cbaa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17408/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17408/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17408/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 

[jira] [Updated] (HDFS-11097) Fix the jenkins warning related to the deprecated method StorageReceivedDeletedBlocks

2016-11-03 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11097:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed for 2.8.0. The issues flagged by Jenkins were false-positives.

Thank you for fixing this [~linyiqun].

> Fix the jenkins warning related to the deprecated method 
> StorageReceivedDeletedBlocks
> -
>
> Key: HDFS-11097
> URL: https://issues.apache.org/jira/browse/HDFS-11097
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.8.0
>
> Attachments: HDFS-11097.001.patch, warn.txt
>
>
> HDFS-6094 updated the constructor of {{StorageReceivedDeletedBlocks}} so that 
> the storage type and state can be passed in as well. However, some test cases 
> were not updated to use the new constructor, which causes many deprecation 
> warnings in every Jenkins build. Part of the warning output:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[315,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[333,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11060) make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable

2016-11-03 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633455#comment-15633455
 ] 

Ravi Prakash commented on HDFS-11060:
-

Sure!

> make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable
> -
>
> Key: HDFS-11060
> URL: https://issues.apache.org/jira/browse/HDFS-11060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>Priority: Minor
>
> Currently, the easiest way to determine which blocks are missing is the NN web 
> UI or JMX. Unfortunately, because DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED=100 
> is hard-coded in FSNamesystem, only 100 missing blocks can be returned by the 
> UI and JMX. Even the result of the URL 
> "https://nn:50070/fsck?listcorruptfileblocks=1=%2F; is limited by this 
> hard-coded value too.
> I know fsck can return more than 100 results, but for security reasons (with 
> Kerberos) it is very hard to integrate into customer programs and scripts.
> So I think a configurable variable "maxCorruptFileBlocksReturned" should be 
> added to fix the above case.
> If the community also thinks it's worth doing, I will write a patch. If not, 
> please feel free to tell me the reason.
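
A minimal sketch of the proposed change, with a hypothetical key name rather 
than a committed configuration property:

{code}
import org.apache.hadoop.conf.Configuration;

public class CorruptBlocksLimit {
  // Hypothetical key and default, mirroring the hard-coded constant.
  static final String MAX_CORRUPT_FILE_BLOCKS_RETURNED_KEY =
      "dfs.namenode.max-corrupt-file-blocks-returned";
  static final int MAX_CORRUPT_FILE_BLOCKS_RETURNED_DEFAULT = 100;

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // FSNamesystem would read the limit from the configuration like this
    // instead of using the constant directly.
    int limit = conf.getInt(MAX_CORRUPT_FILE_BLOCKS_RETURNED_KEY,
        MAX_CORRUPT_FILE_BLOCKS_RETURNED_DEFAULT);
    System.out.println("maxCorruptFileBlocksReturned = " + limit);
  }
}
{code}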



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633362#comment-15633362
 ] 

Hadoop QA commented on HDFS-10702:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} root: The patch generated 7 new + 996 unchanged 
- 2 fixed = 1003 total (was 998) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10702 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836815/HDFS-10702.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 2092c4a69f25 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 352cbaa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633324#comment-15633324
 ] 

Hadoop QA commented on HDFS-11098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836840/HDFS-11098.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 40a4674a0b5d 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 352cbaa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17407/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17407/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17407/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 

[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-03 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633265#comment-15633265
 ] 

Wei Zhou commented on HDFS-10885:
-

Thanks [~rakeshr] for the comments!
{quote}
Alternate approach is to use FSNamesystem APIs directly like below. 
{quote}
I planned to use {{FSNamesystem.startFile()}} to create MOVER_ID_PATH, but it 
has some issues:
1) Extra work has to be done, as in {{NameNodeRpcServer.create}}, before 
calling {{FSNamesystem.startFile()}}, which duplicates that code.
2) It's also very hard for SPS to write the hostname to the mover ID file, as 
no API is provided for that.


> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, 
> HDFS-10885-HDFS-10285.05.patch
>
>
> These two can not work at the same time to avoid conflicts and fight with 
> each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-03 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633249#comment-15633249
 ] 

Kihwal Lee commented on HDFS-11056:
---

I have been looking at the patch since yesterday. It looks like the partial 
chunk checksum is loaded from disk and saved in memory before it is modified. 
That seems like a correct approach. +1
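
For readers following along, a minimal sketch of the reproduction pattern the 
description outlines (the path and payload are illustrative; this needs a 
running cluster or MiniDFSCluster with append enabled):

{code}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendReadRace {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/bar.txt");
    if (!fs.exists(file)) {
      fs.create(file).close();
    }

    // Client 1: open-append-close continuously.
    new Thread(() -> {
      try {
        while (true) {
          try (FSDataOutputStream out = fs.append(file)) {
            out.write("x".getBytes(StandardCharsets.UTF_8));
          }
        }
      } catch (Exception e) {
        e.printStackTrace();
      }
    }).start();

    // Client 2: open-read-close continuously; eventually the read races with
    // an in-flight append and fails checksum verification.
    byte[] buf = new byte[4096];
    while (true) {
      try (FSDataInputStream in = fs.open(file)) {
        while (in.read(buf) != -1) {
          // discard
        }
      }
    }
  }
}
{code}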

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.reproduce.patch
>
>
> If there are two clients, one of which opens, appends to, and closes a file 
> continuously while the other opens, reads, and closes the same file 
> continuously, the reader eventually gets a checksum error in the data read.
> On my local Mac, it takes a few minutes to produce the error. This happens 
> with httpfs clients, but there's no reason not to believe it happens with any 
> append client.
> I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=true   ugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=open   src=/tmp/bar.txt   
> dst=null   perm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success
> 2016-10-25 15:34:45,170 WARN  DFSClient - Found Checksum error for 
> 

[jira] [Commented] (HDFS-11092) Stackoverflow if only root directory is used in Command Line

2016-11-03 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633193#comment-15633193
 ] 

John Zhuge commented on HDFS-11092:
---

The URI {{abc.com/}} is parsed into {{scheme=null, authority=null, path=abc.com/}}, 
while {{abc.com}} is parsed into {{scheme=hdfs, authority=abc.com, path=}}.
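
A quick demonstration of the first case with plain {{java.net.URI}} (the 
address is the one from the report): with no scheme, the whole string lands in 
the path, which is what sends {{FileSystem.get}} back through 
{{getDefaultUri}} in the stack trace quoted below. The second case, where a 
bare authority picks up the hdfs scheme, goes through Hadoop's own URI fix-up 
and is not reproduced by this snippet.

{code}
import java.net.URI;

public class TrailingSlashDemo {
  public static void main(String[] args) {
    URI u = URI.create("172.16.12.79/");
    System.out.println(u.getScheme());    // null
    System.out.println(u.getAuthority()); // null
    System.out.println(u.getPath());      // 172.16.12.79/
  }
}
{code}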

> Stackoverflow if only root directory is used in Command Line
> 
>
> Key: HDFS-11092
> URL: https://issues.apache.org/jira/browse/HDFS-11092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Darius Murawski
>Assignee: John Zhuge
>
> Command: hadoop fs -fs 172.16.12.79/ -mkdir -p /usr/hduser
> Results in a Stack Overflow
> {code}
> Exception in thread "main" java.lang.StackOverflowError
>   at java.lang.String.indexOf(String.java:1503)
>   at java.net.URI$Parser.scan(URI.java:2951)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3104)
>   at java.net.URI$Parser.parse(URI.java:3063)
>   at java.net.URI.<init>(URI.java:588)
>   at java.net.URI.create(URI.java:850)
>   at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
> (...)
> {code}
> The problem is the slash at the end of the IP address. When I remove it, the 
> command executes correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633190#comment-15633190
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 204 unchanged - 0 fixed = 205 total (was 204) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836805/HDFS-9337_11.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 473ad4a376e4 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 352cbaa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17403/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17403/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17403/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17403/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> 

[jira] [Commented] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633175#comment-15633175
 ] 

Hadoop QA commented on HDFS-11098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.TestDatanodeConfig |
|   | hadoop.hdfs.server.namenode.TestAllowFormat |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836770/HDFS-11098.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1ce6355d3671 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 352cbaa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17404/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17404/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-03 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15633054#comment-15633054
 ] 

Wei-Chiu Chuang commented on HDFS-11056:


The test failure is unrelated.
[~eddyxu] [~virajith] would you like to make a comment? I saw that HDFS-10636 
refactored a lot of the relevant code, but I do think the same bug existed 
before HDFS-10636.

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.reproduce.patch
>
>
> If there are two clients, one of which opens, appends to, and closes a file 
> continuously while the other opens, reads, and closes the same file 
> continuously, the reader eventually gets a checksum error in the data read.
> On my local Mac, it takes a few minutes to produce the error. This happens 
> with httpfs clients, but there's no reason not to believe it happens with any 
> append client.
> I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=true   ugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=open   src=/tmp/bar.txt   
> dst=null   perm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success
> 2016-10-25 15:34:45,170 WARN  DFSClient - Found 

[jira] [Updated] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11098:
-
Attachment: HDFS-11098.02.patch

Corrected the 'url' of the StorageLocation.

A '/' was getting appended when the directory exists, and the comparison was 
then failing.
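
A sketch of the kind of normalization involved, illustrative rather than the 
committed patch: drop a trailing '/' before comparing storage-location URIs, 
so the comparison is stable whether or not the directory already exists.

{code}
import java.net.URI;

public final class UriNormalize {
  // Illustrative helper: strip a single trailing '/' from a file URI.
  static URI stripTrailingSlash(URI uri) {
    String s = uri.toString();
    return s.endsWith("/") ? URI.create(s.substring(0, s.length() - 1)) : uri;
  }

  public static void main(String[] args) {
    URI a = URI.create("file:/D:/data/dfs/data/data1/");
    URI b = URI.create("file:/D:/data/dfs/data/data1");
    System.out.println(stripTrailingSlash(a).equals(stripTrailingSlash(b))); // true
  }
}
{code}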

> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HDFS-11098.01.patch, HDFS-11098.02.patch
>
>
> After HDFS-10638
> Starting datanodes in a MiniDFSCluster on Windows throws the exception below:
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.<init>(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2016-11-03 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15632918#comment-15632918
 ] 

Thomas Demoor commented on HDFS-11026:
--

Ewan's stack traces match [~andrew.wang]'s remarks.
[~daryn], once HDFS-11096 gets resolved we expect the current patch to work 
across 2.x and 3.0.

Thanks for looking at our patch.

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11026.002.patch, blocktokenidentifier-protobuf.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdenfitier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-10702:
-
Attachment: HDFS-10702.006.patch

It might be helpful if I don't submit patches for other JIRAs.

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10702.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> scalability bottleneck. One way to solve this problem is to send read-only 
> operations to the Standby NameNode. The disadvantage is that the result might 
> be a stale read.
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-10702:
-
Attachment: (was: HDFS-10797.006.patch)

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> scalability bottleneck. One way to solve this problem is to send read-only 
> operations to the Standby NameNode. The disadvantage is that the result might 
> be a stale read.
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15632842#comment-15632842
 ] 

Hadoop QA commented on HDFS-10702:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} HDFS-10702 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10702 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836810/HDFS-10797.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17405/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10797.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> scalability bottleneck. One way to solve this problem is to send read-only 
> operations to the Standby NameNode. The disadvantage is that the result might 
> be a stale read.
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-11-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-10702:
-
Attachment: HDFS-10797.006.patch

I had a conversation with [~clouderajiayi] about taking this over - meant to 
post that previously, my apologies.

Attaching a patch with substantially rewritten logic in the ProxyProvider. 
Previously the 'active' was determined by which request completed first, but we 
can't rely on the same logic as RequestHedgingProxyProvider, because when stale 
reads are enabled, any of the NameNodes can respond to virtually any read 
request. On the first request, and then in the event of failover, it will now 
probe all the proxies to determine the active and running standbys, shuffle 
the standbys, and then try the first one. In the event of failure, it will try 
the next standby in order. When they have all been exhausted, it will re-probe 
and just use the master. One thing that needs a bit of further testing in 
particular is the probing to identify the master and standby nodes. It's a bit 
of a hack, using the return value of a get-transaction-ID call, but we don't 
want a write operation that modifies anything, and a normal read operation 
could almost certainly be served just as well by any of the NameNodes, 
regardless of state. Open to other suggestions on this...

This patch also addresses some (but not all, yet) of the remaining feedback 
from [~andrew.wang]. One particular point I'd like to discuss is that you 
suggest not having a SyncInfo object to encapsulate the transaction ID when it 
could be requested separately, but you do suggest having a separate struct to 
encapsulate that info in the subsequent request. It does make sense, since it's 
cleaner to add getSyncInfo later if we need it than to replace the transaction 
ID in the proto later, but I just thought I'd ask: do you think it's likely 
instead of just possible there will be additional sync info in the future? 
Because if so, we may as well just add it now. Otherwise I'll just go ahead 
with your original suggestion...
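
A rough sketch of the failover ordering described above, with placeholder 
types: the probe predicate stands in for the transaction-ID check, and the 
real logic lives inside the ProxyProvider.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

public final class FailoverOrder {
  // Probe every proxy, shuffle the standbys, and fall back to the active
  // NameNode once the standbys are exhausted.
  static <P> List<P> order(List<P> proxies, Predicate<P> probeIsActive) {
    P active = null;
    List<P> standbys = new ArrayList<>();
    for (P p : proxies) {
      if (active == null && probeIsActive.test(p)) {
        active = p;        // first proxy that reports active
      } else {
        standbys.add(p);   // running standbys
      }
    }
    Collections.shuffle(standbys); // spread stale reads across standbys
    List<P> order = new ArrayList<>(standbys);
    if (active != null) {
      order.add(active);   // last resort once standbys are exhausted
    }
    return order;
  }
}
{code}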

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, HDFS-10702.002.patch, 
> HDFS-10702.003.patch, HDFS-10702.004.patch, HDFS-10702.005.patch, 
> HDFS-10797.006.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a 
> scalability bottleneck. One way to solve this problem is to send read-only 
> operations to the Standby NameNode. The disadvantage is that the result might 
> be a stale read.
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-03 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_11.patch

Updating the patch, please review.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, HDFS-9337_11.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15632552#comment-15632552
 ] 

Hadoop QA commented on HDFS-11098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.namenode.TestAllowFormat |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestDatanodeConfig |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836770/HDFS-11098.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0a4f67ea35ff 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0e75496 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17402/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17402/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-03 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15632502#comment-15632502
 ] 

Rakesh R commented on HDFS-10885:
-

bq. Yes, besides there is another reason that NN may not be ready when calling 
dfs.exists.

{code}
+  while (namesystem.isInSafeMode()) {
+    Thread.sleep(50);
+  }
{code}
Internally, {{dfs.exists()}} uses the {{FSNamesystem#getFileInfo()}} function, 
which returns the file status even if the NN is in safe mode. So the safe-mode 
check is not required for {{#createMarkRunning()}}, I think.

I just noticed that you have used {{DistributedFileSystem}} for creating the 
MOVER_PATH_ID in StoragePolicySatisfier. I'd prefer not to use the DFS 
approach, as the SPS daemon is part of the NameNode.
{code}
+  DistributedFileSystem dfs = (DistributedFileSystem)FileSystem.get(
+      FileSystem.getDefaultUri(conf), conf);
{code}

An alternative approach is to use the FSNamesystem APIs directly, as below. 
However, StoragePolicySatisfier is kept under BlockManager, and BlockManager 
has no FSNamesystem reference with which to call {{#getFileInfo()}}; the 
function is not visible there either. [~umamaheswararao], [~drankye], any 
suggestions?

{code}
// 1. StoragePolicySatisfier run()
if (namesystem.getFileInfo(MOVER_ID_PATH, true) != null) {
  // marker already exists: throw an exception
} else {
  // create MOVER_ID_PATH
}

// 2. StoragePolicySatisfier stop()
// delete MOVER_ID_PATH
{code}
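
For reference, a minimal sketch of the marker-file mutual exclusion being 
discussed, written against a plain {{FileSystem}} handle; the marker path and 
messages are assumptions, and this is not the committed implementation:
{code}
// Sketch only. Whichever daemon (Mover or SPS) creates the marker first wins;
// create(path, false) fails if the file already exists, closing the race.
FileSystem fs = FileSystem.get(conf);
Path moverIdPath = new Path("/system/mover.id"); // assumed marker location
if (fs.exists(moverIdPath)) {
  throw new IOException("Mover is running; refusing to start SPS");
}
fs.create(moverIdPath, false).close(); // hold the marker while SPS runs
// ... later, in stop():
fs.delete(moverIdPath, false);
{code}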


> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, 
> HDFS-10885-HDFS-10285.05.patch
>
>
> These two must not run at the same time, to avoid conflicts where they fight 
> with each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11097) Fix the jenkins warning related to the deprecated method StorageReceivedDeletedBlocks

2016-11-03 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15632494#comment-15632494
 ] 

Yiqun Lin edited comment on HDFS-11097 at 11/3/16 11:42 AM:


The latest Jenkins warnings related to the new constructor {{public 
StorageReceivedDeletedBlocks(final String storageID, final 
ReceivedDeletedBlockInfo[] blocks)}} have all been fixed (Jenkins report: 
https://builds.apache.org/job/PreCommit-HDFS-Build/17401/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt).
The javadoc warnings already existed.


was (Author: linyiqun):
The latest Jenkins warnings related to the new constructor {{public 
StorageReceivedDeletedBlocks(final DatanodeStorage storage, final 
ReceivedDeletedBlockInfo[] blocks)}} have all been fixed (Jenkins report: 
https://builds.apache.org/job/PreCommit-HDFS-Build/17401/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt).
The javadoc warnings already existed.

> Fix the jenkins warning related to the deprecated method 
> StorageReceivedDeletedBlocks
> -
>
> Key: HDFS-11097
> URL: https://issues.apache.org/jira/browse/HDFS-11097
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11097.001.patch, warn.txt
>
>
> HDFS-6094 updated the constructor of {{StorageReceivedDeletedBlocks}} to make 
> it easier to use: we can now pass the storage type and state as well. But some 
> test cases were not updated to use the new method, which causes many 
> deprecation warnings in each Jenkins build. Part of the warning output:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[315,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[333,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> {code}
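
A minimal sketch of the migration these warnings call for, using the 
constructor signatures quoted above; the storage ID is a placeholder:
{code}
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
import org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;

ReceivedDeletedBlockInfo[] blocks = new ReceivedDeletedBlockInfo[0];

// Deprecated: StorageReceivedDeletedBlocks(String, ReceivedDeletedBlockInfo[])
StorageReceivedDeletedBlocks before =
    new StorageReceivedDeletedBlocks("DS-storage-1", blocks);

// Replacement: wrap the ID in a DatanodeStorage, which also carries the
// storage type and state.
StorageReceivedDeletedBlocks after =
    new StorageReceivedDeletedBlocks(new DatanodeStorage("DS-storage-1"), blocks);
{code}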



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11097) Fix the jenkins warning related to the deprecated method StorageReceivedDeletedBlocks

2016-11-03 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15632494#comment-15632494
 ] 

Yiqun Lin commented on HDFS-11097:
--

The latest Jenkins warnings related to the new constructor {{public 
StorageReceivedDeletedBlocks(final DatanodeStorage storage, final 
ReceivedDeletedBlockInfo[] blocks)}} have all been fixed (Jenkins report: 
https://builds.apache.org/job/PreCommit-HDFS-Build/17401/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt).
The javadoc warnings already existed.

> Fix the jenkins warning related to the deprecated method 
> StorageReceivedDeletedBlocks
> -
>
> Key: HDFS-11097
> URL: https://issues.apache.org/jira/browse/HDFS-11097
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11097.001.patch, warn.txt
>
>
> HDFS-6094 updated the constructor of {{StorageReceivedDeletedBlocks}} to make 
> it easier to use: we can now pass the storage type and state as well. But some 
> test cases were not updated to use the new method, which causes many 
> deprecation warnings in each Jenkins build. Part of the warning output:
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[315,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java:[333,14]
>  [deprecation] 
> StorageReceivedDeletedBlocks(String,ReceivedDeletedBlockInfo[]) in 
> StorageReceivedDeletedBlocks has been deprecated
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11097) Fix the jenkins warning related to the deprecated method StorageReceivedDeletedBlocks

2016-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15632390#comment-15632390
 ] 

Hadoop QA commented on HDFS-11097:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
8s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 47s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 28 unchanged - 
8 fixed = 29 total (was 36) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 179 unchanged - 4 fixed = 182 total (was 183) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11097 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836765/HDFS-11097.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 99874c784542 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0e75496 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17401/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17401/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17401/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17401/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Updated] (HDFS-2936) Provide a way to apply a minimum replication factor aside of strict minimum live replicas feature

2016-11-03 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2936:
--
Description: 
If an admin wishes to enforce replication today for all the users of their 
cluster, they may set {{dfs.namenode.replication.min}}. This property prevents 
users from creating files with < expected replication factor.

However, the value of minimum replication set by the above value is also 
checked at several other points, especially during completeFile (close) 
operations. If a condition arises wherein a write's pipeline may have gotten 
only < minimum nodes in it, the completeFile operation does not successfully 
close the file and the client begins to hang waiting for NN to replicate the 
last bad block in the background. This form of hard-guarantee can, for example, 
bring down clusters of HBase during high xceiver load on DN, or disk fill-ups 
on many of them, etc.

I propose we split the property into two parts:
* dfs.namenode.replication.min
** Keeps the same name, but is only checked against the replication factor at 
file creation time and during adjustments made via setrep, etc.
* dfs.namenode.replication.min.for.write
** New property that disconnects the rest of the checks from the above 
property, such as the checks done during block commit, file complete/close, 
safemode checks for block availability, etc..

Alternatively, we may also choose to remove the client-side hang of 
completeFile/close calls with a set number of retries. This would further 
require discussion about how a file-closure handle ought to be handled.

  was:
If an admin wishes to enforce replication today for all the users of their 
cluster, he may set {{dfs.namenode.replication.min}}. This property prevents 
users from creating files with < expected replication factor.

However, the value of minimum replication set by the above value is also 
checked at several other points, especially during completeFile (close) 
operations. If a condition arises wherein a write's pipeline may have gotten 
only < minimum nodes in it, the completeFile operation does not successfully 
close the file and the client begins to hang waiting for NN to replicate the 
last bad block in the background. This form of hard-guarantee can, for example, 
bring down clusters of HBase during high xceiver load on DN, or disk fill-ups 
on many of them, etc.

I propose we split the property into two parts:
* dfs.namenode.replication.min
** Keeps the same name, but is only checked against the replication factor at 
file creation time and during adjustments made via setrep, etc.
* dfs.namenode.replication.min.for.write
** New property that disconnects the rest of the checks from the above 
property, such as the checks done during block commit, file complete/close, 
safemode checks for block availability, etc..

Alternatively, we may also choose to remove the client-side hang of 
completeFile/close calls with a set number of retries. This would further 
require discussion about how a file-closure handle ought to be handled.


> Provide a way to apply a minimum replication factor aside of strict minimum 
> live replicas feature
> -
>
> Key: HDFS-2936
> URL: https://issues.apache.org/jira/browse/HDFS-2936
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.23.0
>Reporter: Harsh J
> Attachments: HDFS-2936.patch
>
>
> If an admin wishes to enforce replication today for all the users of their 
> cluster, they may set {{dfs.namenode.replication.min}}. This property 
> prevents users from creating files with < expected replication factor.
> However, the value of minimum replication set by the above value is also 
> checked at several other points, especially during completeFile (close) 
> operations. If a condition arises wherein a write's pipeline may have gotten 
> only < minimum nodes in it, the completeFile operation does not successfully 
> close the file and the client begins to hang waiting for NN to replicate the 
> last bad block in the background. This form of hard-guarantee can, for 
> example, bring down clusters of HBase during high xceiver load on DN, or disk 
> fill-ups on many of them, etc.
> I propose we split the property into two parts:
> * dfs.namenode.replication.min
> ** Keeps the same name, but is only checked against the replication factor at 
> file creation time and during adjustments made via setrep, etc.
> * dfs.namenode.replication.min.for.write
> ** New property that disconnects the rest of the checks from the above 
> property, such as the checks done during block commit, file complete/close, 
> safemode checks for block availability, etc..
> Alternatively, we may also choose to remove the client-side hang of 
> completeFile/close calls with a 

[jira] [Updated] (HDFS-2936) Provide a way to apply a minimum replication factor aside of strict minimum live replicas feature

2016-11-03 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2936:
--
Summary: Provide a way to apply a minimum replication factor aside of 
strict minimum live replicas feature  (was: File close()-ing hangs indefinitely 
if the number of live blocks does not match the minimum replication)

> Provide a way to apply a minimum replication factor aside of strict minimum 
> live replicas feature
> -
>
> Key: HDFS-2936
> URL: https://issues.apache.org/jira/browse/HDFS-2936
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
> Attachments: HDFS-2936.patch
>
>
> If an admin wishes to enforce replication today for all the users of their 
> cluster, he may set {{dfs.namenode.replication.min}}. This property prevents 
> users from creating files with < expected replication factor.
> However, the value of minimum replication set by the above value is also 
> checked at several other points, especially during completeFile (close) 
> operations. If a condition arises wherein a write's pipeline may have gotten 
> only < minimum nodes in it, the completeFile operation does not successfully 
> close the file and the client begins to hang waiting for NN to replicate the 
> last bad block in the background. This form of hard-guarantee can, for 
> example, bring down clusters of HBase during high xceiver load on DN, or disk 
> fill-ups on many of them, etc.
> I propose we split the property into two parts:
> * dfs.namenode.replication.min
> ** Keeps the same name, but is only checked against the replication factor at 
> file creation time and during adjustments made via setrep, etc.
> * dfs.namenode.replication.min.for.write
> ** New property that disconnects the rest of the checks from the above 
> property, such as the checks done during block commit, file complete/close, 
> safemode checks for block availability, etc..
> Alternatively, we may also choose to remove the client-side hang of 
> completeFile/close calls with a set number of retries. This would further 
> require discussion about how a file-closure handle ought to be handled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2936) Provide a way to apply a minimum replication factor aside of strict minimum live replicas feature

2016-11-03 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2936:
--
Assignee: (was: Harsh J)

> Provide a way to apply a minimum replication factor aside of strict minimum 
> live replicas feature
> -
>
> Key: HDFS-2936
> URL: https://issues.apache.org/jira/browse/HDFS-2936
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.23.0
>Reporter: Harsh J
> Attachments: HDFS-2936.patch
>
>
> If an admin wishes to enforce replication today for all the users of their 
> cluster, he may set {{dfs.namenode.replication.min}}. This property prevents 
> users from creating files with < expected replication factor.
> However, the value of minimum replication set by the above value is also 
> checked at several other points, especially during completeFile (close) 
> operations. If a condition arises wherein a write's pipeline may have gotten 
> only < minimum nodes in it, the completeFile operation does not successfully 
> close the file and the client begins to hang waiting for NN to replicate the 
> last bad block in the background. This form of hard-guarantee can, for 
> example, bring down clusters of HBase during high xceiver load on DN, or disk 
> fill-ups on many of them, etc.
> I propose we split the property into two parts:
> * dfs.namenode.replication.min
> ** Keeps the same name, but is only checked against the replication factor at 
> file creation time and during adjustments made via setrep, etc.
> * dfs.namenode.replication.min.for.write
> ** New property that disconnects the rest of the checks from the above 
> property, such as the checks done during block commit, file complete/close, 
> safemode checks for block availability, etc..
> Alternatively, we may also choose to remove the client-side hang of 
> completeFile/close calls with a set number of retries. This would further 
> require discussion about how a file-closure handle ought to be handled.
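
A sketch of how the proposed split might be configured. Note that 
{{dfs.namenode.replication.min.for.write}} is the proposal in this issue and 
does not exist as a Hadoop property:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

Configuration conf = new HdfsConfiguration();
// Existing property: enforced when a file is created and when replication is
// adjusted via setrep.
conf.setInt("dfs.namenode.replication.min", 2);
// Proposed property: the looser minimum applied at block commit, file
// complete/close, and safemode block-availability checks, so that close()
// no longer hangs when a pipeline shrinks below the strict minimum.
conf.setInt("dfs.namenode.replication.min.for.write", 1);
{code}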



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10638

2016-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11098:
-
Description: 
After HDFS-10638
Starting datanodes in MiniDFSCluster on Windows throws the below exception
{noformat}java.lang.IllegalArgumentException: URI: 
file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
 is not in the expected format
at 
org.apache.hadoop.hdfs.server.datanode.StorageLocation.<init>(StorageLocation.java:68)
at 
org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
{noformat}

  was:
After HDFS-10368
Starting datanodes in MiniDFSCluster on Windows throws the below exception
{noformat}java.lang.IllegalArgumentException: URI: 
file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
 is not in the expected format
at 
org.apache.hadoop.hdfs.server.datanode.StorageLocation.<init>(StorageLocation.java:68)
at 
org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
{noformat}

Summary: Datanode in tests cannot start in Windows after HDFS-10638  
(was: Datanode in tests cannot start in Windows after HDFS-10368)

> Datanode in tests cannot start in Windows after HDFS-10638
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HDFS-11098.01.patch
>
>
> After HDFS-10638
> Starting datanodes in MiniDFSCluster on Windows throws the below exception
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.<init>(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}
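
For illustration, a minimal sketch of why a Windows drive-letter URI trips up 
parsers that expect a plain "/"-rooted path; this demonstrates the symptom, 
not the committed fix:
{code}
import java.io.File;
import java.net.URI;

public class WindowsUriSketch {
  public static void main(String[] args) throws Exception {
    // On Windows, the MiniDFSCluster data dir becomes a URI like this one.
    URI uri = new URI("file:/D:/code/hadoop/target/test/data/dfs/data/data1");
    // getPath() yields "/D:/code/...", which a format check expecting a
    // POSIX-style absolute path can reject as "not in the expected format".
    System.out.println(uri.getPath());
    // new File(uri) normalizes to the platform form (D:\code\... on Windows).
    System.out.println(new File(uri).getAbsolutePath());
  }
}
{code}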



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10368

2016-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11098:
-
Target Version/s: 3.0.0-alpha2
  Status: Patch Available  (was: Open)

> Datanode in tests cannot start in Windows after HDFS-10368
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HDFS-11098.01.patch
>
>
> After HDFS-10368
> Starting datanodes in MiniDFSCluster on Windows throws the below exception
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.<init>(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11098) Datanode in tests cannot start in Windows after HDFS-10368

2016-11-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11098:
-
Attachment: HDFS-11098.01.patch

Attached a simple patch.

> Datanode in tests cannot start in Windows after HDFS-10368
> --
>
> Key: HDFS-11098
> URL: https://issues.apache.org/jira/browse/HDFS-11098
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HDFS-11098.01.patch
>
>
> After HDFS-10368
> Starting datanodes in MiniDFSCluster on Windows throws the below exception
> {noformat}java.lang.IllegalArgumentException: URI: 
> file:/D:/code/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not in the expected format
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.<init>(StorageLocation.java:68)
>   at 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:123)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:2561)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2545)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


