[jira] [Updated] (HDFS-11373) Backport HDFS-11258 and HDFS-11272 to branch-2.7

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11373:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.7.4
   Status: Resolved  (was: Patch Available)

Committed to branch-2.7. [~ajisakaa], thanks for the branch-2.7 patch.
FYR: HDFS-11795 tracks the {{ASFLicenses}} warnings.

> Backport HDFS-11258 and HDFS-11272 to branch-2.7
> 
>
> Key: HDFS-11373
> URL: https://issues.apache.org/jira/browse/HDFS-11373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: release-blocker
> Fix For: 2.7.4
>
> Attachments: HDFS-11373-branch-2.7.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11757) Query StreamCapabilities when creating balancer's lock file

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004092#comment-16004092
 ] 

Hadoop QA commented on HDFS-11757:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867239/HDFS-11757.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19cb69428481 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 166be0e |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19374/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19374/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19374/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-11795) Fix ASF Licence warnings in branch-2.7

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004082#comment-16004082
 ] 

Hadoop QA commented on HDFS-11795:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
45s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1261 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
41s{color} | {color:red} The patch has 70 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || 

[jira] [Commented] (HDFS-11783) Ozone: Fix spotbugs warnings

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004068#comment-16004068
 ] 

Hadoop QA commented on HDFS-11783:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 17 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs generated 0 new + 10 
unchanged - 7 fixed = 10 total (was 17) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.cblock.TestLocalBlockCache |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11783 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867240/HDFS-11783-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6495109cd3f2 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 6516706 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19373/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19373/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11765) Fix:Performance regression due to incorrect use of DataChecksum

2017-05-09 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004062#comment-16004062
 ] 

Masatake Iwasaki commented on HDFS-11765:
-

I will commit this shortly. Before that, I'm going to move this issue to Hadoop 
Common, where the fix belongs.

> Fix:Performance regression due to incorrect use of DataChecksum
> ---
>
> Key: HDFS-11765
> URL: https://issues.apache.org/jira/browse/HDFS-11765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native, performance
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: LiXin Ge
>Assignee: LiXin Ge
> Attachments: HDFS-11765.patch
>
>
> Recently I upgraded my Hadoop version from 2.6 to 3.0 and found that write 
> performance decreased by 13%. After several days of comparative analysis, it 
> seems to have been introduced by HADOOP-10865. 
> Since James Thomas has done the work to let the native checksum run against 
> byte[] arrays instead of just byte buffers, we may prefer the native method 
> because it runs faster than the others.
> [~szetszwo] and [~iwasakims], could you take a look at this to see if it has 
> a bad effect on your benchmark tests? [~tlipcon], could you help check 
> whether I have made mistakes in this patch?
> Thanks!






[jira] [Updated] (HDFS-11768) Ozone: KSM: Create Key Space manager service

2017-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11768:

Status: Patch Available  (was: Open)

> Ozone: KSM: Create Key Space manager service
> 
>
> Key: HDFS-11768
> URL: https://issues.apache.org/jira/browse/HDFS-11768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11768-HDFS-7240.001.patch, ozone-key-space.pdf
>
>
> KSM is the namespace manager for Ozone. KSM relies on SCM for block 
> functions. The Ozone handler (the REST protocol frontend) talks to KSM and 
> SCM to get datanode addresses.
> This JIRA will add the service as well as the protobuf definitions needed 
> to work with KSM.






[jira] [Updated] (HDFS-11768) Ozone: KSM: Create Key Space manager service

2017-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11768:

Attachment: HDFS-11768-HDFS-7240.001.patch

> Ozone: KSM: Create Key Space manager service
> 
>
> Key: HDFS-11768
> URL: https://issues.apache.org/jira/browse/HDFS-11768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11768-HDFS-7240.001.patch, ozone-key-space.pdf
>
>
> KSM is the namespace manager for Ozone. KSM relies on SCM for block 
> functions. The Ozone handler (the REST protocol frontend) talks to KSM and 
> SCM to get datanode addresses.
> This JIRA will add the service as well as the protobuf definitions needed 
> to work with KSM.






[jira] [Updated] (HDFS-11680) Ozone: SCM CLI: Implement info container command

2017-05-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11680:
--
Attachment: HDFS-11680-HDFS-7240.006.patch

[~cheersyang] Thanks for your comments.
Your env seems out of date; the container info command works as expected in the 
latest version.
The others look good to me. I uploaded the v6 patch to address your comments.

> Ozone: SCM CLI: Implement info container command
> 
>
> Key: HDFS-11680
> URL: https://issues.apache.org/jira/browse/HDFS-11680
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>  Labels: command-line
> Attachments: HDFS-11680-HDFS-7240.001.patch, 
> HDFS-11680-HDFS-7240.002.patch, HDFS-11680-HDFS-7240.003.patch, 
> HDFS-11680-HDFS-7240.004.patch, HDFS-11680-HDFS-7240.005.patch, 
> HDFS-11680-HDFS-7240.006.patch, output.003.txt, output.txt
>
>
> Implement info container
> {code}
> hdfs scm -container info 
> {code}
> Returns information about a specific container.






[jira] [Commented] (HDFS-11546) Federation Router RPC server

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003997#comment-16003997
 ] 

Hadoop QA commented on HDFS-11546:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
34s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 402 unchanged - 0 fixed = 404 total (was 402) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867228/HDFS-11546-HDFS-10467-010.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 1dc5dccb0b69 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / f242d25 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19371/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19371/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19371/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11601) Ozone: Compact DB should be called on Open Containers.

2017-05-09 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003980#comment-16003980
 ] 

Weiwei Yang commented on HDFS-11601:


Thanks [~anu] for the heads up, I will keep an eye on the status of RocksDB. I 
may not start work on this until we get there. Let me decrease this one's 
priority on my list for now. Thanks.

> Ozone: Compact DB should be called on Open Containers.
> --
>
> Key: HDFS-11601
> URL: https://issues.apache.org/jira/browse/HDFS-11601
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
>
> The discussion in HDFS-11594 pointed to a potential issue that we might run 
> into: too many delete-key operations can take place and make a DB slow. 
> Running compactDB in those cases is useful. Currently we run compactDB 
> when we close a container. This JIRA tracks a potential improvement of 
> running compactDB even on open containers.






[jira] [Commented] (HDFS-11783) Ozone: Fix spotbugs warnings

2017-05-09 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003976#comment-16003976
 ] 

Weiwei Yang commented on HDFS-11783:


Uploaded v2 patch to fix the findbugs warning in {{VolumeDescriptor}}; it was 
using a keySet iterator, which has a performance issue. Looking at that code, 
the loop over {{containerMap}} was comparing two maps. In this case we don't 
need the loop at all; simply calling {{containerMap.equals(other.containerMap)}} 
should work. According to the Javadoc for {{AbstractMap#equals()}}:

{noformat}
Compares the specified object with this map for equality. 
Returns true if the given object is also a map and the two maps represent the 
same mappings.
{noformat}

[~linyiqun] and [~anu], please kindly review.
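The equals-based comparison described above can be sketched as follows. This is a minimal illustration only; {{MapEqualsSketch}} and {{sameContainers}} are hypothetical names for this sketch, not the actual {{VolumeDescriptor}} code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: instead of iterating over containerMap.keySet()
// to compare two maps entry by entry, rely on Map#equals (implemented by
// AbstractMap), which returns true when both maps represent the same
// mappings.
public class MapEqualsSketch {
    static boolean sameContainers(Map<String, String> containerMap,
                                  Map<String, String> other) {
        // Compares sizes and every key/value pair in one call.
        return containerMap.equals(other);
    }

    public static void main(String[] args) {
        Map<String, String> a = new HashMap<>();
        Map<String, String> b = new HashMap<>();
        a.put("c1", "open");
        b.put("c1", "open");
        System.out.println(sameContainers(a, b)); // prints "true"
    }
}
```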

> Ozone: Fix spotbugs warnings
> 
>
> Key: HDFS-11783
> URL: https://issues.apache.org/jira/browse/HDFS-11783
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: HDFS-11783-HDFS-7240.001.patch, 
> HDFS-11783-HDFS-7240.002.patch
>
>
> Trunk has moved to spotbugs, so there might be new warnings in the ozone 
> project. This task tracks those issues and fixes the spotbugs warnings for 
> the ozone classes; there is a separate task, HDFS-11696, for the rest of 
> HDFS.






[jira] [Comment Edited] (HDFS-11757) Query StreamCapabilities when creating balancer's lock file

2017-05-09 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003966#comment-16003966
 ] 

SammiChen edited comment on HDFS-11757 at 5/10/17 3:21 AM:
---

Sure. Initial patch uploaded.


was (Author: sammi):
Initial patch

> Query StreamCapabilities when creating balancer's lock file
> ---
>
> Key: HDFS-11757
> URL: https://issues.apache.org/jira/browse/HDFS-11757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-11757.001.patch
>
>
> Once HDFS-11644 goes in, we'll have a clean way of querying for stream 
> capabilities. We should redo the check in the Balancer introduced in 
> HDFS-11643 to query the capabilities.






[jira] [Updated] (HDFS-11757) Query StreamCapabilities when creating balancer's lock file

2017-05-09 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11757:
-
Status: Patch Available  (was: Open)

> Query StreamCapabilities when creating balancer's lock file
> ---
>
> Key: HDFS-11757
> URL: https://issues.apache.org/jira/browse/HDFS-11757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-11757.001.patch
>
>
> Once HDFS-11644 goes in, we'll have a clean way of querying for stream 
> capabilities. We should redo the check in the Balancer introduced in 
> HDFS-11643 to query the capabilities.






[jira] [Updated] (HDFS-11783) Ozone: Fix spotbugs warnings

2017-05-09 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11783:
---
Attachment: HDFS-11783-HDFS-7240.002.patch

> Ozone: Fix spotbugs warnings
> 
>
> Key: HDFS-11783
> URL: https://issues.apache.org/jira/browse/HDFS-11783
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: HDFS-11783-HDFS-7240.001.patch, 
> HDFS-11783-HDFS-7240.002.patch
>
>
> Trunk has moved to spotbugs, so there might be new warnings in the ozone 
> project. This task tracks and fixes the spotbugs warnings for the ozone 
> classes; there is a separate task for the rest of HDFS in HDFS-11696.






[jira] [Updated] (HDFS-11757) Query StreamCapabilities when creating balancer's lock file

2017-05-09 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11757:
-
Attachment: HDFS-11757.001.patch

Initial patch

> Query StreamCapabilities when creating balancer's lock file
> ---
>
> Key: HDFS-11757
> URL: https://issues.apache.org/jira/browse/HDFS-11757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-11757.001.patch
>
>
> Once HDFS-11644 goes in, we'll have a clean way of querying for stream 
> capabilities. We should redo the check in the Balancer introduced in 
> HDFS-11643 to query the capabilities.






[jira] [Updated] (HDFS-11795) Fix ASF Licence warnings in branch-2.7

2017-05-09 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11795:
-
Description: 
Some ASF license warnings have appeared in branch-2.7 due to test files being 
created in the "hadoop-hdfs/build" directory instead of the 'target' directory 
(similar to HDFS-9571).
{code}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/build/test/data/temp/TestNameNodeMXBean/include
 !? 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/build/test/data/temp/decommission/include
 !? 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/build/test/data/temp/decommission/exclude
{code}

  was:Some ASF license warnings have appeared in branch-2.7 due to test files 
being created in the "hadoop-hdfs/build" directory instead of the 'target' 
directory (similar to HDFS-9571).


> Fix ASF Licence warnings in branch-2.7
> --
>
> Key: HDFS-11795
> URL: https://issues.apache.org/jira/browse/HDFS-11795
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11795-branch-2.7.001.patch
>
>
> Some ASF license warnings have appeared in branch-2.7 due to test files being 
> created in the "hadoop-hdfs/build" directory instead of the 'target' directory 
> (similar to HDFS-9571).
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/build/test/data/temp/TestNameNodeMXBean/include
>  !? 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/build/test/data/temp/decommission/include
>  !? 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/build/test/data/temp/decommission/exclude
> {code}






[jira] [Updated] (HDFS-11795) Fix ASF Licence warnings in branch-2.7

2017-05-09 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11795:
-
Attachment: HDFS-11795-branch-2.7.001.patch

> Fix ASF Licence warnings in branch-2.7
> --
>
> Key: HDFS-11795
> URL: https://issues.apache.org/jira/browse/HDFS-11795
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11795-branch-2.7.001.patch
>
>
> Some ASF license warnings have appeared in branch-2.7 due to test files being 
> created in the "hadoop-hdfs/build" directory instead of the 'target' directory 
> (similar to HDFS-9571).






[jira] [Updated] (HDFS-11795) Fix ASF Licence warnings in branch-2.7

2017-05-09 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11795:
-
Status: Patch Available  (was: Open)

Attaching a simple patch as a quick fix.

> Fix ASF Licence warnings in branch-2.7
> --
>
> Key: HDFS-11795
> URL: https://issues.apache.org/jira/browse/HDFS-11795
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11795-branch-2.7.001.patch
>
>
> Some ASF license warnings have appeared in branch-2.7 due to test files being 
> created in the "hadoop-hdfs/build" directory instead of the 'target' directory 
> (similar to HDFS-9571).






[jira] [Created] (HDFS-11795) Fix ASF Licence warnings in branch-2.7

2017-05-09 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11795:


 Summary: Fix ASF Licence warnings in branch-2.7
 Key: HDFS-11795
 URL: https://issues.apache.org/jira/browse/HDFS-11795
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Some ASF license warnings have appeared in branch-2.7 due to test files being 
created in the "hadoop-hdfs/build" directory instead of the 'target' directory 
(similar to HDFS-9571).






[jira] [Commented] (HDFS-11783) Ozone: Fix spotbugs warnings

2017-05-09 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003949#comment-16003949
 ] 

Weiwei Yang commented on HDFS-11783:


Hi [~linyiqun], sure. I thought that warning was from cblock, so I did not fix 
it in my first patch, but let's get it fixed as well. I will upload a new patch 
shortly.

> Ozone: Fix spotbugs warnings
> 
>
> Key: HDFS-11783
> URL: https://issues.apache.org/jira/browse/HDFS-11783
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: HDFS-11783-HDFS-7240.001.patch
>
>
> Trunk has moved to spotbugs, so there might be new warnings in the ozone 
> project. This task tracks and fixes the spotbugs warnings for the ozone 
> classes; there is a separate task for the rest of HDFS in HDFS-11696.






[jira] [Created] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs

2017-05-09 Thread SammiChen (JIRA)
SammiChen created HDFS-11794:


 Summary: Add ec sub command -listCodec to show currently supported 
ec codecs
 Key: HDFS-11794
 URL: https://issues.apache.org/jira/browse/HDFS-11794
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Reporter: SammiChen
Assignee: SammiChen


Add ec sub command -listCodec to show currently supported ec codecs






[jira] [Created] (HDFS-11793) Dfs configuration key dfs.namenode.ec.policies.enabled support user defined erasure coding policy

2017-05-09 Thread SammiChen (JIRA)
SammiChen created HDFS-11793:


 Summary: Dfs configuration key dfs.namenode.ec.policies.enabled 
support user defined erasure coding policy
 Key: HDFS-11793
 URL: https://issues.apache.org/jira/browse/HDFS-11793
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Reporter: SammiChen
Assignee: SammiChen


Make the dfs configuration key dfs.namenode.ec.policies.enabled support 
user-defined erasure coding policies.






[jira] [Assigned] (HDFS-11757) Query StreamCapabilities when creating balancer's lock file

2017-05-09 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen reassigned HDFS-11757:


Assignee: SammiChen

> Query StreamCapabilities when creating balancer's lock file
> ---
>
> Key: HDFS-11757
> URL: https://issues.apache.org/jira/browse/HDFS-11757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: SammiChen
>
> Once HDFS-11644 goes in, we'll have a clean way of querying for stream 
> capabilities. We should redo the check in the Balancer introduced in 
> HDFS-11643 to query the capabilities.






[jira] [Commented] (HDFS-11783) Ozone: Fix spotbugs warnings

2017-05-09 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003917#comment-16003917
 ] 

Yiqun Lin commented on HDFS-11783:
--

Hi [~cheersyang], thanks for fixing this!
I think we might be missing one place in the patch:
{code}
org.apache.hadoop.cblock.meta.VolumeDescriptor.equals(Object) makes inefficient 
use of keySet iterator instead of entrySet iterator
{code}
Could you please fix this as well? This warning is not under ozone but belongs 
to the feature branch, and it will also generate a findbugs warning every time. 
The fix is the one the warning itself suggests: use the entrySet iterator 
instead of the keySet iterator.
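The keySet-vs-entrySet issue flagged by the warning is independent of Hadoop, so it can be shown with a minimal, self-contained Java sketch (the class and method names here are illustrative, not taken from VolumeDescriptor):

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    // Inefficient: iterating keys forces an extra map lookup per entry,
    // which is what the findbugs/spotbugs warning complains about.
    static long sumViaKeySet(Map<String, Integer> m) {
        long sum = 0;
        for (String key : m.keySet()) {
            sum += m.get(key);   // second hash lookup for every key
        }
        return sum;
    }

    // Preferred: iterate the entry set, reading key and value together.
    static long sumViaEntrySet(Map<String, Integer> m) {
        long sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue(); // no extra lookup
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        System.out.println(sumViaKeySet(m));   // prints 3
        System.out.println(sumViaEntrySet(m)); // prints 3
    }
}
```

Both loops return the same result; the entrySet version simply avoids one hash lookup per entry, which matters in hot paths like equals().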



> Ozone: Fix spotbugs warnings
> 
>
> Key: HDFS-11783
> URL: https://issues.apache.org/jira/browse/HDFS-11783
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: HDFS-11783-HDFS-7240.001.patch
>
>
> Trunk has moved to spotbugs, so there might be new warnings in the ozone 
> project. This task tracks and fixes the spotbugs warnings for the ozone 
> classes; there is a separate task for the rest of HDFS in HDFS-11696.






[jira] [Updated] (HDFS-11546) Federation Router RPC server

2017-05-09 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-11546:
---
Attachment: HDFS-11546-HDFS-10467-010.patch

Fixed:
* Unit tests
* Findbugs
* Checkstyle

> Federation Router RPC server
> 
>
> Key: HDFS-11546
> URL: https://issues.apache.org/jira/browse/HDFS-11546
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: HDFS-10467
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HDFS-11546-HDFS-10467-000.patch, 
> HDFS-11546-HDFS-10467-001.patch, HDFS-11546-HDFS-10467-002.patch, 
> HDFS-11546-HDFS-10467-003.patch, HDFS-11546-HDFS-10467-004.patch, 
> HDFS-11546-HDFS-10467-005.patch, HDFS-11546-HDFS-10467-007.patch, 
> HDFS-11546-HDFS-10467-008.patch, HDFS-11546-HDFS-10467-009.patch, 
> HDFS-11546-HDFS-10467-010.patch
>
>
> RPC server side of the Federation Router implements ClientProtocol.






[jira] [Commented] (HDFS-11644) Support for querying outputstream capabilities

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003907#comment-16003907
 ] 

Hadoop QA commented on HDFS-11644:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
6s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-azure in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason 

[jira] [Created] (HDFS-11792) [READ] Additional test cases for ProvidedVolumeImpl

2017-05-09 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-11792:
-

 Summary: [READ] Additional test cases for ProvidedVolumeImpl
 Key: HDFS-11792
 URL: https://issues.apache.org/jira/browse/HDFS-11792
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti









[jira] [Created] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-05-09 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-11791:
-

 Summary: [READ] Test for increasing replication of provided files.
 Key: HDFS-11791
 URL: https://issues.apache.org/jira/browse/HDFS-11791
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti









[jira] [Commented] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003885#comment-16003885
 ] 

Hadoop QA commented on HDFS-11788:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 17 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockCLI |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.cblock.TestCBlockServer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11788 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867204/HDFS-11788-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7bfb89cebcd3 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 6516706 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19370/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19370/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19370/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-11790) Decommissioning of a DataNode after MaintenanceState takes a very long time to complete

2017-05-09 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003869#comment-16003869
 ] 

Manoj Govindassamy commented on HDFS-11790:
---

[~mingma],
 Please take a look at the problem description and let us know your thoughts.

I think {{BlockManager#computeReconstructionWorkForBlocks()}} and 
{{BlockManager#validateReconstructionWork()}} should be aware of IN_MAINT DNs 
and avoid choosing them as source datanodes for re-replication. 
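The suggested fix can be sketched as a source-node filter. Everything below (the enum, node type, and filter method) is a hypothetical, self-contained illustration of the idea, not the actual BlockManager code:

```java
import java.util.ArrayList;
import java.util.List;

public class SourceChooserSketch {
    // Simplified admin states; the real ones live in DatanodeInfo.
    enum AdminState { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE }

    static class Node {
        final String name;
        final AdminState state;
        Node(String name, AdminState state) {
            this.name = name;
            this.state = state;
        }
    }

    // Re-replication sources must be nodes we can actually read from, so
    // IN_MAINTENANCE replicas (whose DataNode process may be stopped) are
    // skipped rather than picked and left to time out.
    static List<Node> chooseSources(List<Node> replicas) {
        List<Node> sources = new ArrayList<>();
        for (Node n : replicas) {
            if (n.state != AdminState.IN_MAINTENANCE) {
                sources.add(n);
            }
        }
        return sources;
    }

    public static void main(String[] args) {
        List<Node> replicas = new ArrayList<>();
        replicas.add(new Node("dn1", AdminState.IN_MAINTENANCE));
        replicas.add(new Node("dn2", AdminState.NORMAL));
        List<Node> sources = chooseSources(replicas);
        System.out.println(sources.size());      // prints 1
        System.out.println(sources.get(0).name); // prints dn2
    }
}
```

With such a filter in place, the PendingReplicationMonitor timeout/retry loop described in the issue would not be triggered by stopped maintenance nodes.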

> Decommissioning of a DataNode after MaintenanceState takes a very long time 
> to complete
> ---
>
> Key: HDFS-11790
> URL: https://issues.apache.org/jira/browse/HDFS-11790
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>
> *Problem:*
> When a DataNode is requested for Decommissioning after it has successfully 
> transitioned to MaintenanceState (HDFS-7877), the decommissioning state 
> transition is stuck for a long time, even for a very small number of blocks 
> in the cluster. 
> *Details:*
> * A DataNode DN1 was requested for MaintenanceState and it successfully 
> transitioned from ENTERING_MAINTENANCE state to IN_MAINTENANCE state, as 
> there was sufficient replication for all its blocks.
> * As DN1 was now in maintenance state, the DataNode process was stopped on 
> DN1. Later the same DN1 was requested for Decommissioning. 
> * As part of Decommissioning, all the blocks residing on DN1 were requested 
> to be re-replicated to other DataNodes, so that DN1 could transition from 
> ENTERING_DECOMMISSION to DECOMMISSIONED. 
> * But re-replication for a few blocks was stuck for a long time. Eventually 
> it completed.
> * Digging through the code and logs, we found that the IN_MAINTENANCE DN1 
> was chosen as a source datanode for re-replication of a few of the blocks. 
> Since the DataNode process on DN1 was already stopped, the re-replication 
> was stuck for a long time.
> * Eventually PendingReplicationMonitor timed out, and re-replication was 
> re-scheduled for the timed-out blocks. During re-replication, the IN_MAINT 
> DN1 was again chosen as a source datanode for a few of the blocks, leading 
> to a timeout again. This iteration continued a few times until all blocks 
> were re-replicated.
> * By design, IN_MAINT datanodes should not be chosen for any read or write 
> operations.  






[jira] [Updated] (HDFS-11790) Decommissioning of a DataNode after MaintenanceState takes a very long time to complete

2017-05-09 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11790:
--
Description: 
*Problem:*
When a DataNode is requested for Decommissioning after it successfully 
transitioned to MaintenanceState (HDFS-7877), the decommissioning state 
transition is stuck for a long time even for very small number of blocks in the 
cluster. 

*Details:*
* A DataNode DN1 wa requested for MaintenanceState and it successfully 
transitioned from ENTERING_MAINTENANCE state IN_MAINTENANCE state as there are 
sufficient replication for all its blocks.
* As DN1 was in maintenance state now, the DataNode process was stopped on DN1. 
Later the same DN1 was requested for Decommissioning. 
* As part of Decommissioning, all the blocks residing in DN1 were requested for 
re-replicated to other DataNodes, so that DN1 could transition from 
ENTERING_DECOMMISSION to DECOMMISSIONED. 
* But, re-replication for few blocks was stuck for a long time. Eventually it 
got completed.
* Digging the code and logs, found that the IN_MAINTENANCE DN1 was chosen as a 
source datanode for re-replication of few of the blocks. Since DataNode process 
on DN1 was already stopped, the re-replication was stuck for a long time.
* Eventually PendingReplicationMonitor timed out, and those re-replication were 
re-scheduled for those timed out blocks. Again, during the re-replication also, 
the IN_MAINT DN1 was chose as a source datanode for few of the blocks leading 
to timeout again. This iteration continued for few times until all blocks get 
re-replicated.
* By design, IN_MAINT datandoes should not be chosen for any read or write 
operations.  

  was:
Problem:
When a DataNode is requested for Decommissioning after it has successfully 
transitioned to MaintenanceState (HDFS-7877), the decommissioning state 
transition is stuck for a long time, even for a very small number of blocks in 
the cluster. 

Details:
* A DataNode DN1 was requested for MaintenanceState and it successfully 
transitioned from ENTERING_MAINTENANCE state to IN_MAINTENANCE state, as there 
was sufficient replication for all its blocks.
* As DN1 was now in maintenance state, the DataNode process was stopped on DN1. 
Later the same DN1 was requested for Decommissioning. 
* As part of Decommissioning, all the blocks residing on DN1 were requested to 
be re-replicated to other DataNodes, so that DN1 could transition from 
ENTERING_DECOMMISSION to DECOMMISSIONED. 
* But re-replication for a few blocks was stuck for a long time. Eventually it 
completed.
* Digging through the code and logs, we found that the IN_MAINTENANCE DN1 was 
chosen as a source datanode for re-replication of a few of the blocks. Since 
the DataNode process on DN1 was already stopped, the re-replication was stuck 
for a long time.
* Eventually PendingReplicationMonitor timed out, and re-replication was 
re-scheduled for the timed-out blocks. During re-replication, the IN_MAINT DN1 
was again chosen as a source datanode for a few of the blocks, leading to a 
timeout again. This iteration continued a few times until all blocks were 
re-replicated.
* By design, IN_MAINT datanodes should not be chosen for any read or write 
operations.  


> Decommissioning of a DataNode after MaintenanceState takes a very long time 
> to complete
> ---
>
> Key: HDFS-11790
> URL: https://issues.apache.org/jira/browse/HDFS-11790
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>
> *Problem:*
> When a DataNode is requested for Decommissioning after it has successfully 
> transitioned to MaintenanceState (HDFS-7877), the decommissioning state 
> transition is stuck for a long time, even for a very small number of blocks 
> in the cluster. 
> *Details:*
> * A DataNode DN1 was requested for MaintenanceState and it successfully 
> transitioned from ENTERING_MAINTENANCE state to IN_MAINTENANCE state, as 
> there was sufficient replication for all its blocks.
> * As DN1 was now in maintenance state, the DataNode process was stopped on 
> DN1. Later the same DN1 was requested for Decommissioning. 
> * As part of Decommissioning, all the blocks residing on DN1 were requested 
> to be re-replicated to other DataNodes, so that DN1 could transition from 
> ENTERING_DECOMMISSION to DECOMMISSIONED. 
> * But re-replication for a few blocks was stuck for a long time. Eventually 
> it completed.
> * Digging through the code and logs, we found that the IN_MAINTENANCE DN1 
> was chosen as a source datanode for re-replication of a few of the blocks. 
> Since the DataNode process on DN1 was already stopped, the re-replication 
> was stuck for a long time.
> * Eventually PendingReplicationMonitor 

[jira] [Created] (HDFS-11790) Decommissioning of a DataNode after MaintenanceState takes a very long time to complete

2017-05-09 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11790:
-

 Summary: Decommissioning of a DataNode after MaintenanceState 
takes a very long time to complete
 Key: HDFS-11790
 URL: https://issues.apache.org/jira/browse/HDFS-11790
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


Problem:
When a DataNode is requested for Decommissioning after it has successfully
transitioned to MaintenanceState (HDFS-7877), the decommissioning state
transition is stuck for a long time, even for a very small number of blocks in
the cluster.

Details:
* A DataNode DN1 was requested for MaintenanceState and successfully
transitioned from the ENTERING_MAINTENANCE state to the IN_MAINTENANCE state,
as there was sufficient replication for all of its blocks.
* As DN1 was now in maintenance state, the DataNode process was stopped on DN1.
Later the same DN1 was requested for Decommissioning.
* As part of Decommissioning, all the blocks residing on DN1 were requested to
be re-replicated to other DataNodes, so that DN1 could transition from
ENTERING_DECOMMISSION to DECOMMISSIONED.
* But re-replication of a few blocks was stuck for a long time. Eventually it
completed.
* Digging into the code and logs, we found that the IN_MAINTENANCE DN1 was
chosen as a source datanode for re-replication of a few of the blocks. Since
the DataNode process on DN1 was already stopped, the re-replication was stuck
for a long time.
* Eventually PendingReplicationMonitor timed out, and re-replication was
re-scheduled for the timed-out blocks. Again, during this re-replication, the
IN_MAINT DN1 was chosen as a source datanode for a few of the blocks, leading
to a timeout again. This cycle repeated several times until all blocks were
re-replicated.
* By design, IN_MAINT datanodes should not be chosen for any read or write
operations.
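The last point above can be illustrated with a small sketch. This is not the actual HDFS replication code (the real selection happens inside the NameNode's BlockManager); the `AdminState`, `Node`, and `chooseSources` names are hypothetical, chosen only to show the intended filtering: IN_MAINTENANCE nodes are skipped as replication sources because their DataNode process may be stopped.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of datanode admin states; names are
// illustrative and do not match the real BlockManager implementation.
enum AdminState { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE, DECOMMISSIONING }

class Node {
    final String name;
    final AdminState state;
    Node(String name, AdminState state) { this.name = name; this.state = state; }
}

public class SourceSelection {
    // Exclude IN_MAINTENANCE nodes: their DataNode process may be stopped,
    // so re-replication scheduled against them stalls until
    // PendingReplicationMonitor times out -- the behavior in this report.
    static List<Node> chooseSources(List<Node> candidates) {
        List<Node> sources = new ArrayList<>();
        for (Node n : candidates) {
            if (n.state != AdminState.IN_MAINTENANCE) {
                sources.add(n);
            }
        }
        return sources;
    }

    public static void main(String[] args) {
        List<Node> live = List.of(
            new Node("DN1", AdminState.IN_MAINTENANCE),
            new Node("DN2", AdminState.NORMAL));
        // Only DN2 remains eligible as a re-replication source.
        System.out.println(chooseSources(live).get(0).name);
    }
}
```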






[jira] [Assigned] (HDFS-11681) DatanodeStorageInfo#getBlockIterator() should return an iterator to an unmodifiable set.

2017-05-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas reassigned HDFS-11681:


Assignee: Virajith Jalaparti

> DatanodeStorageInfo#getBlockIterator() should return an iterator to an 
> unmodifiable set.
> 
>
> Key: HDFS-11681
> URL: https://issues.apache.org/jira/browse/HDFS-11681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11681.001.patch, HDFS-11681.002.patch
>
>
> The iterator from {{DatanodeStorageInfo#getBlockIterator()}} should not be
> modifiable. Otherwise, calling {{remove}} on the iterator will remove blocks
> from {{DatanodeStorageInfo.blocks}}, an operation that must instead be
> performed by calling {{DatanodeStorageInfo#removeBlock}}.
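The description above can be sketched with plain JDK collections. This is not the actual DatanodeStorageInfo code; `StorageInfoSketch` and its String "blocks" are stand-ins showing how wrapping the backing set with Collections.unmodifiableSet makes the returned iterator's remove() throw, forcing callers through an explicit removeBlock() method.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

// Illustrative sketch only -- not the real DatanodeStorageInfo class.
public class StorageInfoSketch {
    private final Set<String> blocks = new HashSet<>();

    void addBlock(String b) { blocks.add(b); }

    // Safe accessor: the iterator cannot mutate the underlying set.
    Iterator<String> getBlockIterator() {
        return Collections.unmodifiableSet(blocks).iterator();
    }

    // The sanctioned mutation path.
    boolean removeBlock(String b) { return blocks.remove(b); }

    public static void main(String[] args) {
        StorageInfoSketch s = new StorageInfoSketch();
        s.addBlock("blk_1");
        Iterator<String> it = s.getBlockIterator();
        it.next();
        try {
            it.remove();  // rejected by the unmodifiable view
        } catch (UnsupportedOperationException expected) {
            System.out.println("remove() rejected");
        }
    }
}
```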






[jira] [Commented] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003779#comment-16003779
 ] 

Xiaoyu Yao commented on HDFS-11788:
---

Thanks [~vagarychen] for working on this. Patch v2 looks pretty good to me. I
only have a few minor suggestions:

1. Can we use try-with-resources for {{dbStore}} so that the store is closed
even if an exception is thrown in the middle of the insert into SQLite?
{code}
365 LevelDBStore dbStore = new LevelDBStore(dbFile, dbOptions);
{code}

2. Does LevelDB support being opened by multiple processes? Do we need any
special option/flag when opening a live nodepool/container db that is running
with SCM? If we assume this is an offline tool, we should comment or document
that somewhere.
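For reference, the try-with-resources pattern being suggested can be sketched as below. `DbStore` here is a hypothetical stand-in for LevelDBStore (assumed to implement Closeable); the point is that close() runs even when the insert throws mid-way.

```java
import java.io.Closeable;

public class TryWithResourceSketch {
    // Hypothetical stand-in for LevelDBStore.
    static class DbStore implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static DbStore lastStore;

    static void convert(boolean failMidway) {
        // The resource is closed automatically on both the normal and the
        // exceptional exit path.
        try (DbStore dbStore = new DbStore()) {
            lastStore = dbStore;
            if (failMidway) {
                throw new RuntimeException("insert failed mid-way");
            }
        } catch (RuntimeException e) {
            // dbStore has already been closed by the time we get here.
        }
    }

    public static void main(String[] args) {
        convert(true);
        System.out.println(lastStore.closed);
    }
}
```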

  

> Ozone : add DEBUG CLI support for nodepool db file
> --
>
> Key: HDFS-11788
> URL: https://issues.apache.org/jira/browse/HDFS-11788
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11788-HDFS-7240.001.patch, 
> HDFS-11788-HDFS-7240.002.patch
>
>
> This is a following-up of HDFS-11698. This JIRA adds the converting of 
> nodepool.db levelDB file.






[jira] [Commented] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003781#comment-16003781
 ] 

Hadoop QA commented on HDFS-11788:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 17 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.scm.node.TestContainerPlacement |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11788 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867189/HDFS-11788-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4cf8f2244c1f 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 6516706 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19367/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19367/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19367/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs 

[jira] [Commented] (HDFS-11546) Federation Router RPC server

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003776#comment-16003776
 ] 

Hadoop QA commented on HDFS-11546:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 402 unchanged - 0 fixed = 405 total (was 402) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 10 
unchanged - 0 fixed = 11 total (was 10) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  new 
org.apache.hadoop.hdfs.server.federation.router.ConnectionManager(Configuration)
 invokes 
org.apache.hadoop.hdfs.server.federation.router.ConnectionManager$ConnectionCreator.start()
  At ConnectionManager.java:At ConnectionManager.java:[line 111] |
| Failed junit tests | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867192/HDFS-11546-HDFS-10467-009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 9b8398d83285 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / f242d25 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Created] (HDFS-11789) Maintain Short-Circuit Read Statistics

2017-05-09 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-11789:
-

 Summary: Maintain Short-Circuit Read Statistics
 Key: HDFS-11789
 URL: https://issues.apache.org/jira/browse/HDFS-11789
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


If a disk or its controller hardware is faulty, then short-circuit read
requests can stall indefinitely while reading from the file descriptor.
Currently there is no way to detect when short-circuit read requests are slow
or blocked.

This Jira proposes that each BlockReaderLocal maintain read statistics while it
is active by measuring the time taken for a pre-determined fraction of read
requests. These per-reader stats can be aggregated into global stats when the
reader is closed. The aggregate statistics can be exposed via JMX.
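The sampling-and-aggregate scheme described above can be sketched as follows. The class and member names are illustrative, not the eventual HDFS API: a fixed fraction of reads is timed (so the common path pays no timing overhead), and the per-reader totals are folded into global counters when the reader closes.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of per-reader sampled read statistics; names are
// hypothetical and do not correspond to real BlockReaderLocal fields.
public class ShortCircuitReadStats {
    // Global aggregate (what would be exposed via JMX).
    static final AtomicLong totalSampledReads = new AtomicLong();
    static final AtomicLong totalSampledNanos = new AtomicLong();

    private final double sampleFraction;  // pre-determined fraction to time
    private long reads;
    private long nanos;

    ShortCircuitReadStats(double sampleFraction) {
        this.sampleFraction = sampleFraction;
    }

    // Runs the read, timing it only when this read is sampled.
    long timedRead(Runnable read) {
        if (ThreadLocalRandom.current().nextDouble() >= sampleFraction) {
            read.run();   // unsampled: no clock calls on this path
            return -1;
        }
        long start = System.nanoTime();
        read.run();
        long elapsed = System.nanoTime() - start;
        reads++;
        nanos += elapsed;
        return elapsed;
    }

    // In the proposal this would be invoked when the reader is closed.
    void close() {
        totalSampledReads.addAndGet(reads);
        totalSampledNanos.addAndGet(nanos);
    }

    public static void main(String[] args) {
        ShortCircuitReadStats s = new ShortCircuitReadStats(1.0);
        s.timedRead(() -> {});
        s.close();
        System.out.println(totalSampledReads.get());
    }
}
```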






[jira] [Assigned] (HDFS-11780) Ozone: KSM : Add putKey

2017-05-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-11780:
-

Assignee: Xiaoyu Yao

> Ozone: KSM : Add putKey
> ---
>
> Key: HDFS-11780
> URL: https://issues.apache.org/jira/browse/HDFS-11780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
>
> Support putting a key into an Ozone bucket. 






[jira] [Commented] (HDFS-10785) libhdfs++: Implement the rest of the tools

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003697#comment-16003697
 ] 

Hadoop QA commented on HDFS-10785:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
44s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
28s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:78fc6b6 |
| JIRA Issue | HDFS-10785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867183/HDFS-10785.HDFS-8707.003.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  |
| uname | Linux 415b66f59089 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / e73bd53 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19366/artifact/patchprocess/whitespace-tabs.txt
 |
| JDK v1.7.0_121  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19366/testReport/ |
| modules 

[jira] [Updated] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11788:
--
Attachment: HDFS-11788-HDFS-7240.002.patch

The v001 patch was missing a db close; fixed in the v002 patch.

> Ozone : add DEBUG CLI support for nodepool db file
> --
>
> Key: HDFS-11788
> URL: https://issues.apache.org/jira/browse/HDFS-11788
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11788-HDFS-7240.001.patch, 
> HDFS-11788-HDFS-7240.002.patch
>
>
> This is a following-up of HDFS-11698. This JIRA adds the converting of 
> nodepool.db levelDB file.






[jira] [Commented] (HDFS-11784) Backport HDFS-8312 to branch-2.7: Trash does not descent into child directories to check for permissions

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003678#comment-16003678
 ] 

Hadoop QA commented on HDFS-11784:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
9s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
34s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.7 has 
3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} root: The patch generated 0 new + 510 unchanged - 1 
fixed = 510 total (was 511) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2620 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  1m 
31s{color} | {color:red} The patch 124 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 36s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| 

[jira] [Commented] (HDFS-11745) Increase HDFS test timeouts from 1 second to 10 seconds

2017-05-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003654#comment-16003654
 ] 

Arpit Agarwal commented on HDFS-11745:
--

Thanks for committing it [~jlowe]. I wrongly assumed Eric had commit privileges.

> Increase HDFS test timeouts from 1 second to 10 seconds
> ---
>
> Key: HDFS-11745
> URL: https://issues.apache.org/jira/browse/HDFS-11745
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11745.001.patch
>
>
> 1 second test timeouts are susceptible to failure on overloaded or otherwise 
> slow machines






[jira] [Commented] (HDFS-11785) Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003650#comment-16003650
 ] 

Hadoop QA commented on HDFS-11785:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
47s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1271 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
29s{color} | {color:red} The patch has 70 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
| JDK v1.7.0_121 Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | 

[jira] [Commented] (HDFS-11681) DatanodeStorageInfo#getBlockIterator() should return an iterator to an unmodifiable set.

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003627#comment-16003627
 ] 

Hadoop QA commented on HDFS-11681:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
53s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11681 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867173/HDFS-11681.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7eefcfcfc218 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a2f6804 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19365/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19365/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19365/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19365/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.
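As a side note on the change named in the HDFS-11681 title, the general pattern — handing callers an iterator over an unmodifiable view so they cannot mutate internal state — can be sketched with plain JDK collections (a hedged illustration only; the class and method names below are stand-ins, not the actual DatanodeStorageInfo code):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

// Illustrative stand-in for a class that exposes its internal block set
// only through a read-only iterator (not the actual DatanodeStorageInfo).
public class UnmodifiableIteratorSketch {
    private final Set<String> blocks = new HashSet<>();

    public void addBlock(String blockId) {
        blocks.add(blockId);
    }

    // Wrapping the set with Collections.unmodifiableSet means the returned
    // iterator's remove() throws UnsupportedOperationException, so callers
    // cannot silently mutate the internal set.
    public Iterator<String> getBlockIterator() {
        return Collections.unmodifiableSet(blocks).iterator();
    }
}
```

With this shape, a caller that tries `iterator.remove()` gets an `UnsupportedOperationException` instead of corrupting the owner's state.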



[jira] [Commented] (HDFS-11741) Long running balancer may fail due to expired DataEncryptionKey

2017-05-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003619#comment-16003619
 ] 

Wei-Chiu Chuang commented on HDFS-11741:


[~zhz] [~shahrs87] would you mind chiming in on this observation?
{quote}
I just realized a client side BlockTokenSecretManager generates 
DataEncryptionKey expiration time using now + token life time. I am not sure if 
that's intended, as I would have assumed the key expiration time equals the 
current BlockKey expiration time (which is determined by NameNode).

So it is entirely possible that balancer has an unexpired DataEncryptionKey, 
corresponding to an expired BlockKey. When it talks to the other side, the 
expired BlockKey would fail the connection. Therefore my rev 01 patch would not 
fix all the problems because of this mismatch.
{quote}

Thanks!
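The timing mismatch described above can be reduced to a small arithmetic sketch (purely illustrative names and interval semantics; this is not the actual BlockTokenSecretManager/KeyManager code):

```java
// Illustrative sketch of the expiry mismatch discussed above; all names
// and interval semantics here are assumptions, not Hadoop's actual code.
public class KeyExpirySketch {

    // What the comment observes on the client side: the DataEncryptionKey
    // expiry is computed from the client's "now" plus the token lifetime.
    static long dataEncryptionKeyExpiry(long clientNow, long tokenLifetime) {
        return clientNow + tokenLifetime;
    }

    // The BlockKey's validity window is anchored to when the NameNode
    // rolled the key, not to when the client derived its encryption key.
    static long blockKeyExpiry(long nnRollTime, long keyUpdateInterval,
                               long tokenLifetime) {
        return nnRollTime + keyUpdateInterval + tokenLifetime;
    }

    public static void main(String[] args) {
        long hour = 3_600_000L;
        // NameNode rolled the block key at t = 0; the client computed its
        // DataEncryptionKey much later, at t = 9h.
        long dek = dataEncryptionKeyExpiry(9 * hour, 10 * hour); // t = 19h
        long bk  = blockKeyExpiry(0L, 2 * hour, 10 * hour);      // t = 12h
        // The DataEncryptionKey can remain "unexpired" after the BlockKey
        // it corresponds to has already expired.
        System.out.println(dek > bk);
    }
}
```

In this toy timeline, between t = 12h and t = 19h the balancer holds a seemingly valid DataEncryptionKey backed by an already-expired BlockKey — the situation behind the InvalidEncryptionKeyException quoted in the issue description.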

> Long running balancer may fail due to expired DataEncryptionKey
> ---
>
> Key: HDFS-11741
> URL: https://issues.apache.org/jira/browse/HDFS-11741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
> Environment: CDH5.8.2, Kerberos, Data transfer encryption enabled. 
> Balancer login using keytab
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11741.001.patch, HDFS-11741.002.patch, 
> HDFS-11741.003.patch
>
>
> We found a long running balancer may fail despite using keytab, because 
> KeyManager returns expired DataEncryptionKey, and it throws the following 
> exception:
> {noformat}
> 2017-04-30 05:03:58,661 WARN  [pool-1464-thread-10] balancer.Dispatcher 
> (Dispatcher.java:dispatch(325)) - Failed to move blk_1067352712_3913241 with 
> size=546650 from 10.0.0.134:50010:DISK to 10.0.0.98:50010:DISK through 
> 10.0.0.134:50010
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=1005215027) doesn't exist. Current key: 1005215030
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:474)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
> at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:311)
> at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2300(Dispatcher.java:182)
> at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:899)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This bug is similar in nature to HDFS-10609. While balancer KeyManager 
> actively synchronizes itself with NameNode w.r.t block keys, it does not 
> update DataEncryptionKey accordingly.
> In a specific cluster, with Kerberos ticket life time 10 hours, and default 
> block token expiration/life time 10 hours, a long running balancer failed 
> after 20~30 hours.






[jira] [Commented] (HDFS-11745) Increase HDFS test timeouts from 1 second to 10 seconds

2017-05-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003597#comment-16003597
 ] 

Jason Lowe commented on HDFS-11745:
---

Thanks for the patch!  I noticed that TestNameNodeMetrics#testCapacityMetrics 
also has a pretty low timeout (1.8 seconds, which seems like an odd number). I 
think we should bump that as well.

> Increase HDFS test timeouts from 1 second to 10 seconds
> ---
>
> Key: HDFS-11745
> URL: https://issues.apache.org/jira/browse/HDFS-11745
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11745.001.patch
>
>
> 1 second test timeouts are susceptible to failure on overloaded or otherwise 
> slow machines






[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-05-09 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003596#comment-16003596
 ] 

Manoj Govindassamy commented on HDFS-10999:
---

The above test failures are not related to the patch. They all passed 
locally for me.

> Introduce separate stats for Replicated and Erasure Coded Blocks apart from 
> the current Aggregated stats
> 
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have, supportability
> Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, 
> HDFS-10999.03.patch
>
>
> Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic 
> term "low redundancy" to the old-fashioned "under replicated". But this term 
> is still being used in messages in several places, such as web ui, dfsadmin 
> and fsck. We should probably change them to avoid confusion.
> File this jira to discuss it.






[jira] [Updated] (HDFS-11546) Federation Router RPC server

2017-05-09 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-11546:
---
Attachment: HDFS-11546-HDFS-10467-009.patch

* Improving {{ConnectionManager}}
* Fixing {{RouterRpcServer#rename()}}

> Federation Router RPC server
> 
>
> Key: HDFS-11546
> URL: https://issues.apache.org/jira/browse/HDFS-11546
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: HDFS-10467
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HDFS-11546-HDFS-10467-000.patch, 
> HDFS-11546-HDFS-10467-001.patch, HDFS-11546-HDFS-10467-002.patch, 
> HDFS-11546-HDFS-10467-003.patch, HDFS-11546-HDFS-10467-004.patch, 
> HDFS-11546-HDFS-10467-005.patch, HDFS-11546-HDFS-10467-007.patch, 
> HDFS-11546-HDFS-10467-008.patch, HDFS-11546-HDFS-10467-009.patch
>
>
> RPC server side of the Federation Router implements ClientProtocol.






[jira] [Updated] (HDFS-11644) Support for querying outputstream capabilities

2017-05-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11644:
---
Status: Patch Available  (was: Reopened)

> Support for querying outputstream capabilities
> --
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11644.02.patch, 
> HDFS-11644.03.patch, HDFS-11644-branch-2.01.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.
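The dispatch described in the first paragraph — hsync when the stream is Syncable, plain flush otherwise — can be sketched as follows (a minimal stand-in; `Syncable` here is a local interface, not Hadoop's `org.apache.hadoop.fs.Syncable`, and the stream classes are invented for illustration):

```java
import java.io.Flushable;
import java.io.IOException;
import java.io.UncheckedIOException;

// Minimal sketch of the capability check described above; not Hadoop code.
public class HsyncDispatchSketch {
    interface Syncable {
        void hsync() throws IOException;
    }

    // A stream that supports durable sync.
    static class SyncableStream implements Flushable, Syncable {
        public void flush() {}
        public void hsync() {}
    }

    // A stream that only supports flush.
    static class PlainStream implements Flushable {
        public void flush() {}
    }

    // Mirrors the described behavior: call hsync() when the stream
    // implements Syncable, otherwise fall back to flush().
    static String syncOrFlush(Flushable out) {
        try {
            if (out instanceof Syncable) {
                ((Syncable) out).hsync();
                return "hsync";
            }
            out.flush();
            return "flush";
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

An instanceof-based fallback like this is exactly why a subclass that inherits Syncable but throws at runtime is problematic: the capability check passes, yet the call fails.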






[jira] [Reopened] (HDFS-11644) Support for querying outputstream capabilities

2017-05-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HDFS-11644:


> Support for querying outputstream capabilities
> --
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11644.02.patch, 
> HDFS-11644.03.patch, HDFS-11644-branch-2.01.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.






[jira] [Updated] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11788:
--
Status: Patch Available  (was: Open)

> Ozone : add DEBUG CLI support for nodepool db file
> --
>
> Key: HDFS-11788
> URL: https://issues.apache.org/jira/browse/HDFS-11788
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11788-HDFS-7240.001.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds conversion of the 
> nodepool.db levelDB file.






[jira] [Updated] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11788:
--
Attachment: HDFS-11788-HDFS-7240.001.patch

Posted v001 patch to add conversion of the nodepool db.

> Ozone : add DEBUG CLI support for nodepool db file
> --
>
> Key: HDFS-11788
> URL: https://issues.apache.org/jira/browse/HDFS-11788
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11788-HDFS-7240.001.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds conversion of the 
> nodepool.db levelDB file.






[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003575#comment-16003575
 ] 

Hadoop QA commented on HDFS-10999:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
25s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
5s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 19s{color} 
| {color:red} hadoop-hdfs-project generated 36 new + 55 unchanged - 0 fixed = 
91 total (was 55) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 
1059 unchanged - 17 fixed = 1066 total (was 1076) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| Timed out junit tests | org.apache.hadoop.tools.TestJMXGet |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10999 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867159/HDFS-10999.03.patch |
| Optional Tests |  asflicense  compile  javac  

[jira] [Created] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file

2017-05-09 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11788:
-

 Summary: Ozone : add DEBUG CLI support for nodepool db file
 Key: HDFS-11788
 URL: https://issues.apache.org/jira/browse/HDFS-11788
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


This is a follow-up to HDFS-11698. This JIRA adds conversion of the 
nodepool.db levelDB file.







[jira] [Updated] (HDFS-11756) Ozone : add DEBUG CLI support of blockDB file

2017-05-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11756:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone : add DEBUG CLI support of blockDB file
> -
>
> Key: HDFS-11756
> URL: https://issues.apache.org/jira/browse/HDFS-11756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozone
> Attachments: HDFS-11756-HDFS-7240.001.patch, 
> HDFS-11756-HDFS-7240.002.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds conversion of the 
> block.db levelDB file.






[jira] [Commented] (HDFS-10785) libhdfs++: Implement the rest of the tools

2017-05-09 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003564#comment-16003564
 ] 

Anatoli Shein commented on HDFS-10785:
--

Thanks [~James C] for the review. I have addressed your comments as follows:

-Looks like the tools_common getOptions functions pulls from hdfs-site.xml and 
core-site.xml. Can you also make it allow the environment's HADOOP_CONF_DIR 
override those?

* The environment's HADOOP_CONF_DIR seems to override the default values 
already (in ConfigurationLoader::SetDefaultSearchPath())

-hdfs_get and hdfs_copyToLocal are really the same thing other than the help 
message. Is there any reason to have two? hdfs_moveToLocal is also the same 
thing other than doing the delete at the end. I think it'd be straightforward 
to factor out the main copy loop into a function and call that from all 3; this 
would make things a lot more maintainable.

* hdfs_get and hdfs_copyToLocal currently do the same thing; however, they 
will differ once we implement writing functionality, since hdfs_get will also 
have the ability to write to HDFS. I separated out the main copy loop into a 
function readFile, which is used by: hdfs_moveToLocal, hdfs_cat, hdfs_tail, 
hdfs_get, and hdfs_copyToLocal.

-Is it worth making hdfs_ls have the option to be recursive? Seems like we 
could get the same functionality with hdfs_find, can we share more code between 
the two?

* We have an option for hdfs_ls to be recursive in order to be compatible with 
the ls command of the java client 
(https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/FileSystemShell.html#ls).
 There are still more flags to add support for after this initial patch 
lands. The recursive ls is implemented using a call to fs->Find, so the code 
is already being reused. I am not sure if we can further reuse the code between 
ls and find right now.

-How about moving the printStatInfo function in tools_common.h to be a .str() 
method on the StatInfo object in include/statinfo.h. That looks like it'd be 
useful for general debugging and logging in client applications as well. Same 
with printFsInfo, printContentSummary etc.

* This is done.

-in hdfs_copyToLocal you don't need the break after the call to exit. Looks 
like the same issue is in other tools. It'd be nice to generalize the options 
parsing so you don't have that block of code in all the tools, not sure if 
there's a good way to do it though.
  usage();
  exit(EXIT_SUCCESS);
  break;

* I removed this extra break in all the tools. I don't currently see a good way 
to generalize the GetOpt options parsing between the tools though.

-also in hdfs_copyToLocal either check that hdfs::Options isn't empty and call 
exit with a warning or use a default one (seems reasonable). Accessing 
optionals that haven't been set will throw some error but it's going to be 
something confusing for end users.

* Solving HDFS-9539 should solve the options problem. Currently the default 
values loaded are always empty, so the error message looks like: "Error 
connecting to the cluster: defaultFS is empty. defaultFS of [] is not supported"

-More generally it seems like it'd make sense to stick the config parsing, 
timeout limit changes, and FileSystem creation into a helper function in 
tools_common so things can be changed in a single place.

* Done: moved everything into a doConnect function.

-also in hdfs_copyToLocal: I'd increase BUF_SIZE to something a lot larger than 
4096. We still don't have caching for DN connections so copying a large file is 
going to have a lot of overhead just setting up connections to the DN. Consider 
moving it to a global variable so it can live in bss/data rather than the 
stack; valgrind doesn't pick up on stack corruption well and it's possible 
(though unlikely) that someone will set the system's max stack size to be 
pretty small.

* Done. I increased it to 1 MB and made it global.

-would be really nice if hdfs_tail let you specify how many lines to print. 
Still helpful and a lot less work would be specifying how many bytes to print.

* The Java client does not have functionality to specify the number of bytes/lines 
to print 
(https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/FileSystemShell.html#tail).
 Not sure if this improvement should be part of this patch.

-IMHO nitpicking coding style stuff to death during reviews is usually an 
exercise in bikeshedding, but could you make sure tab spacing is consistent? 
For example in hdfs_chmod.cpp you switch between 2 and 4 spaces. (turning .cpp 
to .cc extensions for consistency would be nice too)

* I fixed this and other spacing issues that I noticed, and turned all .cpp 
extensions to .cc

Have you tested all of these tools using valgrind and large directory trees? We 
are in a bad state right now since the minidfscluster tests can't be run under 

[jira] [Commented] (HDFS-11755) Underconstruction blocks can be considered missing

2017-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003554#comment-16003554
 ] 

Hadoop QA commented on HDFS-11755:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 193 unchanged - 3 fixed = 194 total (was 196) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11755 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867152/HDFS-11755.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f0af8b2a01b3 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7dd258d |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19362/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19362/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19362/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19362/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Updated] (HDFS-11644) Support for querying outputstream capabilities

2017-05-09 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11644:
--
Attachment: HDFS-11644-branch-2.01.patch

Thanks for the review and commit help [~andrew.wang], [~ste...@apache.org]. 
Attaching branch-2 patch. Please take a look.

> Support for querying outputstream capabilities
> --
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11644.02.patch, 
> HDFS-11644.03.patch, HDFS-11644-branch-2.01.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.
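The capability dispatch the description refers to can be sketched with simplified stand-ins (the Syncable interface and stream classes below are local mock-ups, not the Hadoop types):

```java
public class HsyncDispatchDemo {
    // Simplified stand-in for org.apache.hadoop.fs.Syncable.
    interface Syncable { void hsync(); }

    // Streams that genuinely support durable sync implement Syncable...
    static class SyncingStream implements Syncable {
        boolean synced = false;
        public void hsync() { synced = true; }
    }

    // ...and hsync-style dispatch checks the capability, else falls back,
    // mirroring the instanceof check described for FSDataOutputStream#hsync.
    static String hsyncOrFlush(Object stream) {
        if (stream instanceof Syncable) {
            ((Syncable) stream).hsync();
            return "hsync";
        }
        return "flush";  // plain streams just get flushed
    }

    public static void main(String[] args) {
        System.out.println(hsyncOrFlush(new SyncingStream()));  // prints "hsync"
        System.out.println(hsyncOrFlush(new Object()));         // prints "flush"
    }
}
```

The bug the description calls out is a class that passes the instanceof check but throws at runtime, which defeats this kind of dispatch; removing the interface from the subclass makes the check honest again.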



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10785) libhdfs++: Implement the rest of the tools

2017-05-09 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10785:
-
Attachment: HDFS-10785.HDFS-8707.003.patch

New patch attached.

> libhdfs++: Implement the rest of the tools
> --
>
> Key: HDFS-10785
> URL: https://issues.apache.org/jira/browse/HDFS-10785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10785.HDFS-8707.000.patch, 
> HDFS-10785.HDFS-8707.001.patch, HDFS-10785.HDFS-8707.002.patch, 
> HDFS-10785.HDFS-8707.003.patch
>
>







[jira] [Updated] (HDFS-11742) Improve balancer usability after HDFS-8818

2017-05-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11742:
---
Target Version/s: 2.7.4, 3.0.0-beta1, 2.8.1  (was: 2.7.4, 2.8.1)

> Improve balancer usability after HDFS-8818
> --
>
> Key: HDFS-11742
> URL: https://issues.apache.org/jira/browse/HDFS-11742
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
>  Labels: release-blocker
> Attachments: balancer2.8.png, HDFS-11742.branch-2.8.patch, 
> HDFS-11742.branch-2.patch, HDFS-11742.trunk.patch, HDFS-11742.v2.trunk.patch
>
>
> We ran 2.8 balancer with HDFS-8818 on a 280-node and a 2,400-node cluster. In 
> both cases, it would hang forever after two iterations. The two iterations 
> were also moving things at a significantly lower rate. The hang itself is 
> fixed by HDFS-11377, but the design limitation remains, so the balancer 
> throughput ends up actually lower.
> Instead of reverting HDFS-8818 as originally suggested, I am making a small 
> change to make it less error-prone and more usable.






[jira] [Commented] (HDFS-11756) Ozone : add DEBUG CLI support of blockDB file

2017-05-09 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003522#comment-16003522
 ] 

Chen Liang commented on HDFS-11756:
---

Thanks [~anu] for the review! I've committed to the feature branch.

> Ozone : add DEBUG CLI support of blockDB file
> -
>
> Key: HDFS-11756
> URL: https://issues.apache.org/jira/browse/HDFS-11756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozone
> Attachments: HDFS-11756-HDFS-7240.001.patch, 
> HDFS-11756-HDFS-7240.002.patch
>
>
> This is a follow-up to HDFS-11698. This JIRA adds conversion of the block.db 
> LevelDB file.






[jira] [Commented] (HDFS-11755) Underconstruction blocks can be considered missing

2017-05-09 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003459#comment-16003459
 ] 

Nathan Roberts commented on HDFS-11755:
---

bq. Do you know which one makes more sense?
Not an expert in this area but here's my understanding. When a block is 
completed and the client has received the necessary acks, the client either 
adds another block, or completes the file. Both cause the namenode to consider 
the block complete, and at that point the namenode will properly maintain 
replication of the completed block. If the pipeline fails while writing, the 
client may (depends on policy configured) rebuild the pipeline to maintain the 
desired level of replication in the pipeline. So, while a block is mutating, it 
is the client that is ultimately responsible for making sure enough datanodes 
remain in the pipeline and in-sync with the data. Once a block is complete, it 
becomes the namenode's responsibility to maintain replication. 

If a client dies and fails to complete the last block, after a timeout, lease 
recovery will cause the file to be closed and the blocks to be properly 
synchronized and committed if possible.  

There is also hsync(), which applications can use to enhance the durability 
guarantees at the datanode (via fsync).

Hope that helps a little.


> Underconstruction blocks can be considered missing
> --
>
> Key: HDFS-11755
> URL: https://issues.apache.org/jira/browse/HDFS-11755
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.1
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: HDFS-11755.001.patch
>
>
> The following sequence of events can lead to an under-construction block 
> being considered missing.
> - pipeline of 3 DNs, DN1->DN2->DN3
> - DN3 has a failing disk so some updates take a long time
> - Client writes entire block and is waiting for final ack
> - DN1, DN2 and DN3 have all received the block 
> - DN1 is waiting for ACK from DN2 who is waiting for ACK from DN3
> - DN3 is having trouble finalizing the block due to the failing drive. It 
> does eventually succeed but it is VERY slow at doing so. 
> - DN2 times out waiting for DN3 and tears down its pieces of the pipeline, so 
> DN1 notices and does the same. Neither DN1 nor DN2 finalized the block.
> - DN3 finally sends an IBR to the NN indicating the block has been received.
> - Drive containing the block on DN3 fails enough that the DN takes it offline 
> and notifies NN of failed volume
> - NN removes DN3's replica from the triplets and then declares the block 
> missing because there are no other replicas
> Seems like we shouldn't consider uncompleted blocks for replication.  






[jira] [Commented] (HDFS-11644) Support for querying outputstream capabilities

2017-05-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003441#comment-16003441
 ] 

Andrew Wang commented on HDFS-11644:


Talked with Manoj offline, we think this would be good to get into branch-2 as 
well.

Manoj, do you mind posting a branch-2 patch? Thanks.

> Support for querying outputstream capabilities
> --
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11644.01.patch, HDFS-11644.02.patch, 
> HDFS-11644.03.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.






[jira] [Commented] (HDFS-11681) DatanodeStorageInfo#getBlockIterator() should return an iterator to an unmodifiable set.

2017-05-09 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003435#comment-16003435
 ] 

Virajith Jalaparti commented on HDFS-11681:
---

Thanks for taking a look [~chris.douglas]. The ordering is not significant in 
either of the places {{TreeSet}} is used in v001. Posting a new patch replacing 
{{TreeSet}} with {{ArrayList}}.
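The guarded-iterator idea is a one-liner with the JDK's unmodifiable views; a minimal sketch (block IDs here are placeholders):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class BlockIteratorDemo {
    // Hand out an iterator whose remove() throws, so callers must go
    // through an explicit removeBlock()-style method to mutate the
    // underlying collection.
    static Iterator<String> blockIterator(List<String> blocks) {
        return Collections.unmodifiableList(blocks).iterator();
    }

    public static void main(String[] args) {
        List<String> blocks = new ArrayList<>(Arrays.asList("blk_1", "blk_2"));
        Iterator<String> it = blockIterator(blocks);
        it.next();
        try {
            it.remove();  // rejected by the unmodifiable view
        } catch (UnsupportedOperationException e) {
            System.out.println("remove rejected");
        }
        System.out.println(blocks.size());  // prints 2: list untouched
    }
}
```

The wrapper costs nothing per element beyond one extra indirection, so it avoids the copy that a defensive snapshot would require.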

> DatanodeStorageInfo#getBlockIterator() should return an iterator to an 
> unmodifiable set.
> 
>
> Key: HDFS-11681
> URL: https://issues.apache.org/jira/browse/HDFS-11681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11681.001.patch, HDFS-11681.002.patch
>
>
> The iterator from {{DatanodeStorageInfo#getBlockIterator()}} should not be 
> modifiable. Otherwise, calling {{remove}} on the iterator will remove blocks 
> from {{DatanodeStorageInfo.blocks}}, a function that has to be performed by 
> calling {{DatanodeStorageInfo#removeBlock}}.






[jira] [Updated] (HDFS-11681) DatanodeStorageInfo#getBlockIterator() should return an iterator to an unmodifiable set.

2017-05-09 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11681:
--
Status: Patch Available  (was: Open)

> DatanodeStorageInfo#getBlockIterator() should return an iterator to an 
> unmodifiable set.
> 
>
> Key: HDFS-11681
> URL: https://issues.apache.org/jira/browse/HDFS-11681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11681.001.patch, HDFS-11681.002.patch
>
>
> The iterator from {{DatanodeStorageInfo#getBlockIterator()}} should not be 
> modifiable. Otherwise, calling {{remove}} on the iterator will remove blocks 
> from {{DatanodeStorageInfo.blocks}}, a function that has to be performed by 
> calling {{DatanodeStorageInfo#removeBlock}}.






[jira] [Assigned] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-05-09 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HDFS-11096:


Assignee: Lei (Eddy) Xu

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.






[jira] [Updated] (HDFS-11681) DatanodeStorageInfo#getBlockIterator() should return an iterator to an unmodifiable set.

2017-05-09 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11681:
--
Attachment: HDFS-11681.002.patch

> DatanodeStorageInfo#getBlockIterator() should return an iterator to an 
> unmodifiable set.
> 
>
> Key: HDFS-11681
> URL: https://issues.apache.org/jira/browse/HDFS-11681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11681.001.patch, HDFS-11681.002.patch
>
>
> The iterator from {{DatanodeStorageInfo#getBlockIterator()}} should not be 
> modifiable. Otherwise, calling {{remove}} on the iterator will remove blocks 
> from {{DatanodeStorageInfo.blocks}}, a function that has to be performed by 
> calling {{DatanodeStorageInfo#removeBlock}}.






[jira] [Updated] (HDFS-11681) DatanodeStorageInfo#getBlockIterator() should return an iterator to an unmodifiable set.

2017-05-09 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11681:
--
Status: Open  (was: Patch Available)

> DatanodeStorageInfo#getBlockIterator() should return an iterator to an 
> unmodifiable set.
> 
>
> Key: HDFS-11681
> URL: https://issues.apache.org/jira/browse/HDFS-11681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11681.001.patch, HDFS-11681.002.patch
>
>
> The iterator from {{DatanodeStorageInfo#getBlockIterator()}} should not be 
> modifiable. Otherwise, calling {{remove}} on the iterator will remove blocks 
> from {{DatanodeStorageInfo.blocks}}, a function that has to be performed by 
> calling {{DatanodeStorageInfo#removeBlock}}.






[jira] [Created] (HDFS-11787) After HDFS-11515, -du still throws ConcurrentModificationException

2017-05-09 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11787:
--

 Summary: After HDFS-11515, -du still throws 
ConcurrentModificationException
 Key: HDFS-11787
 URL: https://issues.apache.org/jira/browse/HDFS-11787
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, tools
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Wei-Chiu Chuang


I ran a modified NameNode that was patched against HDFS-11515 on a production 
cluster fsimage, and I am still seeing ConcurrentModificationException.

It seems that there are corner cases not covered by HDFS-11515. Filing this 
jira to discuss how to proceed.






[jira] [Commented] (HDFS-11515) -du throws ConcurrentModificationException

2017-05-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003366#comment-16003366
 ] 

Wei-Chiu Chuang commented on HDFS-11515:


Sure. I filed HDFS-11787 to follow up.

> -du throws ConcurrentModificationException
> --
>
> Key: HDFS-11515
> URL: https://issues.apache.org/jira/browse/HDFS-11515
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, shell
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Istvan Fajth
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11515.001.patch, HDFS-11515.002.patch, 
> HDFS-11515.003.patch, HDFS-11515.004.patch, HDFS-11515.test.patch
>
>
> HDFS-10797 fixed a disk summary (-du) bug, but it introduced a new bug.
> The bug can be reproduced running the following commands:
> {noformat}
> bash-4.1$ hdfs dfs -mkdir /tmp/d0
> bash-4.1$ hdfs dfsadmin -allowSnapshot /tmp/d0
> Allowing snaphot on /tmp/d0 succeeded
> bash-4.1$ hdfs dfs -touchz /tmp/d0/f4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s1
> Created snapshot /tmp/d0/.snapshot/s1
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s2
> Created snapshot /tmp/d0/.snapshot/s2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -du -h /tmp/d0
> du: java.util.ConcurrentModificationException
> 0 0 /tmp/d0/f4
> {noformat}
> A ConcurrentModificationException forced du to terminate abruptly.
> Correspondingly, NameNode log has the following error:
> {noformat}
> 2017-03-08 14:32:17,673 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSumma
> ry from 10.0.0.198:49957 Call#2 Retry#0
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
> at 
> org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.tallyDeletedSnapshottedINodes(ContentSummaryComputationContext.java:209)
> at 
> org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:507)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2302)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4535)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:5
> 63)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.jav
> a:873)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> The bug is due to an improper use of HashSet, not concurrent operations. 
> Basically, a HashSet cannot be updated while an iterator is traversing it.
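The pitfall described above reproduces in isolation, with no threads involved; a minimal sketch (the set contents are arbitrary):

```java
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class HashSetIterationDemo {
    // Calling Set.remove() inside a for-each loop is a structural
    // modification mid-iteration: the next next() call fails fast with
    // ConcurrentModificationException on a single thread.
    static boolean removeDuringIteration(Set<String> set) {
        try {
            for (String s : set) {
                if (s.startsWith("d")) {
                    set.remove(s);
                }
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    // The safe pattern: mutate through the iterator itself.
    static void removeViaIterator(Set<String> set) {
        for (Iterator<String> it = set.iterator(); it.hasNext(); ) {
            if (it.next().startsWith("d")) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        Set<String> inodes = new HashSet<>(Arrays.asList("d1", "d2", "f4"));
        System.out.println(removeDuringIteration(new HashSet<>(inodes)));  // true
        removeViaIterator(inodes);
        System.out.println(inodes);  // only f4 left
    }
}
```

Iterating over a snapshot copy of the set is the other common fix when the removal logic cannot easily be routed through the iterator.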






[jira] [Resolved] (HDFS-11515) -du throws ConcurrentModificationException

2017-05-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-11515.

Resolution: Fixed

> -du throws ConcurrentModificationException
> --
>
> Key: HDFS-11515
> URL: https://issues.apache.org/jira/browse/HDFS-11515
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, shell
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Istvan Fajth
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11515.001.patch, HDFS-11515.002.patch, 
> HDFS-11515.003.patch, HDFS-11515.004.patch, HDFS-11515.test.patch
>
>
> HDFS-10797 fixed a disk summary (-du) bug, but it introduced a new bug.
> The bug can be reproduced running the following commands:
> {noformat}
> bash-4.1$ hdfs dfs -mkdir /tmp/d0
> bash-4.1$ hdfs dfsadmin -allowSnapshot /tmp/d0
> Allowing snaphot on /tmp/d0 succeeded
> bash-4.1$ hdfs dfs -touchz /tmp/d0/f4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s1
> Created snapshot /tmp/d0/.snapshot/s1
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s2
> Created snapshot /tmp/d0/.snapshot/s2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -du -h /tmp/d0
> du: java.util.ConcurrentModificationException
> 0 0 /tmp/d0/f4
> {noformat}
> A ConcurrentModificationException forced du to terminate abruptly.
> Correspondingly, NameNode log has the following error:
> {noformat}
> 2017-03-08 14:32:17,673 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSumma
> ry from 10.0.0.198:49957 Call#2 Retry#0
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
> at 
> org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.tallyDeletedSnapshottedINodes(ContentSummaryComputationContext.java:209)
> at 
> org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:507)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2302)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4535)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:5
> 63)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.jav
> a:873)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> The bug is due to an improper use of HashSet, not concurrent operations. 
> Basically, a HashSet cannot be updated while an iterator is traversing it.






[jira] [Commented] (HDFS-6949) Add NFS-ACL protocol support

2017-05-09 Thread Ruslan Dautkhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003342#comment-16003342
 ] 

Ruslan Dautkhanov commented on HDFS-6949:
-

https://tools.ietf.org/html/rfc3530#section-5.11 

NFSv4 defines ACLs explicitly in RFC 3530.


> Add NFS-ACL protocol support
> 
>
> Key: HDFS-6949
> URL: https://issues.apache.org/jira/browse/HDFS-6949
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> This is the umbrella JIRA to track the effort of adding NFS ACL support.
> ACL support for NFSv3 is known as NFSACL. It is a separate out-of-band 
> protocol (for NFSv3) that supports the ACL operations GETACL and SETACL. There 
> is no formal documentation or RFC for this protocol.
> NFSACL program number is 100227 and version is 3. 
> The program listens on tcp port 38467.
> More reference:
> http://lwn.net/Articles/120338/
> http://cateee.net/lkddb/web-lkddb/NFS_V3_ACL.html






[jira] [Commented] (HDFS-11786) Add a new command for multi threaded Put/CopyFromLocal

2017-05-09 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003340#comment-16003340
 ] 

Anu Engineer commented on HDFS-11786:
-

[~msingh] Thanks for the initial patch. I think this change can be 
done in copyFromLocal itself instead of introducing a new command. 
We have to be careful not to change the semantics of existing arguments, and 
any new arguments that are added should not be required. 
That will ensure that we don't break existing scripts. 

> Add a new command for multi threaded Put/CopyFromLocal
> --
>
> Key: HDFS-11786
> URL: https://issues.apache.org/jira/browse/HDFS-11786
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11786.001.patch
>
>
> CopyFromLocal/Put is not currently multithreaded.
> When there are multiple files to upload to HDFS, a single thread reads each 
> file and then copies the data to the cluster.
> This copy to HDFS can be made faster by uploading multiple files in parallel.
> I am attaching the initial patch so that I can get some initial feedback.






[jira] [Commented] (HDFS-11687) Add new public encryption APIs required by Hive

2017-05-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003341#comment-16003341
 ] 

Akira Ajisaka commented on HDFS-11687:
--

Hi [~eddyxu], would you backport this to branch-2.8.1 as well?

> Add new public encryption APIs required by Hive
> ---
>
> Key: HDFS-11687
> URL: https://issues.apache.org/jira/browse/HDFS-11687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11687.00.patch, HDFS-11687.01.patch, 
> HDFS-11687.02.patch, HDFS-11687.03.patch
>
>
> As discovered on HADOOP-14333, Hive is using reflection to get a DFSClient 
> for its encryption shim. We should provide proper public APIs for getting 
> this information.






[jira] [Commented] (HDFS-11687) Add new public encryption APIs required by Hive

2017-05-09 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1600#comment-1600
 ] 

Lei (Eddy) Xu commented on HDFS-11687:
--

[~shahrs87]  Backported to branch-2.8.



> Add new public encryption APIs required by Hive
> ---
>
> Key: HDFS-11687
> URL: https://issues.apache.org/jira/browse/HDFS-11687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11687.00.patch, HDFS-11687.01.patch, 
> HDFS-11687.02.patch, HDFS-11687.03.patch
>
>
> As discovered on HADOOP-14333, Hive is using reflection to get a DFSClient 
> for its encryption shim. We should provide proper public APIs for getting 
> this information.






[jira] [Updated] (HDFS-11786) Add a new command for multi threaded Put/CopyFromLocal

2017-05-09 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11786:
-
Attachment: HDFS-11786.001.patch

Initial patch. I am working on a unit test and will add it in the v2 patch.

> Add a new command for multi threaded Put/CopyFromLocal
> --
>
> Key: HDFS-11786
> URL: https://issues.apache.org/jira/browse/HDFS-11786
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11786.001.patch
>
>
> CopyFromLocal/Put is not currently multithreaded.
> In the case where there are multiple files that need to be uploaded to 
> HDFS, a single thread reads each file and then copies the data to the cluster.
> This copy to HDFS can be made faster by uploading multiple files in parallel.
> I am attaching the initial patch so that I can get some initial feedback.






[jira] [Updated] (HDFS-11785) Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11785:

Description: As per discussion on the [mailing 
list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
 backport HDFS-9902 to branch-2.7  (was: As per discussion on the [mailing 
list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
 backport HDFS-8312 to branch-2.7)

> Backport HDFS-9902 to branch-2.7: Support different values of 
> dfs.datanode.du.reserved per storage type
> ---
>
> Key: HDFS-11785
> URL: https://issues.apache.org/jira/browse/HDFS-11785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-11785-branch-2.7.patch
>
>
> As per discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
>  backport HDFS-9902 to branch-2.7
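For reference, the HDFS-9902 change being backported adds per-storage-type variants of the reservation key. A sketch of an hdfs-site.xml fragment (the suffixed key names follow that change; the byte values are purely illustrative):

```xml
<!-- Illustrative values only; per-storage-type keys come from HDFS-9902. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- default reservation: 10 GB per volume -->
</property>
<property>
  <name>dfs.datanode.du.reserved.ssd</name>
  <value>21474836480</value> <!-- SSD volumes reserve 20 GB instead -->
</property>
```

Storage types without a suffixed key fall back to the plain `dfs.datanode.du.reserved` value.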






[jira] [Updated] (HDFS-11785) Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11785:

Status: Patch Available  (was: Open)

> Backport HDFS-9902 to branch-2.7: Support different values of 
> dfs.datanode.du.reserved per storage type
> ---
>
> Key: HDFS-11785
> URL: https://issues.apache.org/jira/browse/HDFS-11785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-11785-branch-2.7.patch
>
>
> As per discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
>  backport HDFS-8312 to branch-2.7






[jira] [Created] (HDFS-11786) Add a new command for multi threaded Put/CopyFromLocal

2017-05-09 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-11786:


 Summary: Add a new command for multi threaded Put/CopyFromLocal
 Key: HDFS-11786
 URL: https://issues.apache.org/jira/browse/HDFS-11786
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


CopyFromLocal/Put is not currently multithreaded.

In the case where there are multiple files that need to be uploaded to HDFS, 
a single thread reads each file and then copies the data to the cluster.

This copy to HDFS can be made faster by uploading multiple files in parallel.

I am attaching the initial patch so that I can get some initial feedback.
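The parallel-upload idea described above can be sketched with a plain thread pool. This standalone example uses local NIO copies as a stand-in for HDFS writes; the method name `parallelPut` and the pool size are illustrative and not taken from the patch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main {
    // Sketch of a multi-threaded "put": each source file is copied by a
    // worker thread instead of a single thread doing all copies serially.
    static void parallelPut(List<Path> sources, Path targetDir, int threads)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (Path src : sources) {
            pool.submit(() -> {
                try {
                    Files.copy(src, targetDir.resolve(src.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException e) {
                    // A real tool would collect these via the returned Future.
                    throw new UncheckedIOException(e);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws Exception {
        Path srcDir = Files.createTempDirectory("src");
        Path dstDir = Files.createTempDirectory("dst");
        for (int i = 0; i < 4; i++) {
            Files.writeString(srcDir.resolve("f" + i + ".txt"), "data" + i);
        }
        List<Path> sources;
        try (var stream = Files.list(srcDir)) {
            sources = stream.toList();
        }
        parallelPut(sources, dstDir, 2);
        try (var copied = Files.list(dstDir)) {
            System.out.println(copied.count()); // all 4 files were copied
        }
    }
}
```

As Anu's comment notes, whether this lives in a new command or in copyFromLocal itself, the thread count would need to be an optional argument so existing scripts keep working.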






[jira] [Updated] (HDFS-11785) Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11785:

Attachment: HDFS-11785-branch-2.7.patch

Uploaded the patch. Kindly review.

> Backport HDFS-9902 to branch-2.7: Support different values of 
> dfs.datanode.du.reserved per storage type
> ---
>
> Key: HDFS-11785
> URL: https://issues.apache.org/jira/browse/HDFS-11785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-11785-branch-2.7.patch
>
>
> As per discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
>  backport HDFS-8312 to branch-2.7






[jira] [Updated] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-05-09 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10999:
--
Attachment: HDFS-10999.03.patch

Thanks [~tasanuma0829] and [~andrew.wang] for the detailed review. Attaching the 
v03 patch to address the following comments and a bunch of renamings. While you 
review this revision, I will look at more tests that can be added to verify the 
missing cases.

bq. BlockManagerSafeMode: How about using LongAccumulator for 
numberOfBytesInFutureBlocks, too?
Done. numberOfBytesInFutureBlocks is no longer needed, as it can be derived from 
bytesInFutureReplicatedBlocks and bytesInFutureStripedBlocks. 
numberOfBytesInFutureBlocks is now removed.

bq. CorruptReplicasMap: decrementBlockStat be included in the if statement?
Done

bq. It seems package private is enough for new methods 
getCorruptReplicatedBlocksStat and getCorruptStripedBlocksStat.
 Done 

bq. LowRedundancyBlocks: Looks like corruptReplicatedOneBlocks is same as 
corruptReplOneBlocks. How about reusing corruptReplOneBlocks instead of 
calculating corruptReplicatedOneBlocks?
Done. To be consistent with the newly introduced stats, removed 
corruptReplOneBlocks and used corruptReplicatedOneBlocks.

bq. InvalidateBlocks: Maybe I should have asked earlier, but if it is not much 
trouble for you, how about doing InvalidateBlocks-related work as a follow-on 
task?
Simplified the code by using two maps instead of one. There are still quite a few 
new changes. I tried reverting this file change, but it ended up making the patch 
inconsistent for a few stats. Please take a look at the new version one more time.

bq. ReplicatedBlocksStatsMBean: the equivalent of getCorruptBlocks is now 
called getCorruptReplicaBlocksStat. Why not getCorruptBlocksStat instead?
Done. Renamed the methods as suggested.

bq. Should we rename getUnderReplicatedBlocksStat to 
getLowRedundancyReplicatedBlocksStat or similar to standardize with the EC 
naming? Since we deprecated getUnderReplicatedBlocks in favor of 
getLowRedundancyBlocks, it seems odd to bring this same name back here.
Done. Renamed this to getLowRedundancyBlocksStat.

bq. ECBlockGroupsStatusMBean: the noun is an "EC block group", should we name 
these e.g. getLowRedundancyECBlockGroupsStat, getCorruptECBlockGroupsStat, 
etc.? We could also shorten from "ECBlockGroup" to just "BlockGroup" if you 
think the MBean name by itself is sufficient documentation.
Done. Followed the approach used in fsck as suggested by Takanobu and renamed 
all these methods to have "ECBlockGroups"

bq. Misc: Any reason you chose to use LongAccumulator rather than LongAdder 
everywhere?
Done. LongAdder and LongAccumulator are built on the same striped-counter base. 
Switched to LongAdder as it is far easier to work with than the other.

bq. DFSClient and DistributedFileSystem aren't public APIs, so we don't need to 
preserve getUnderReplicatedBlocksCount if these methods are unused. Can move 
DFSAdmin over to using the new call.
Done.

bq. CorruptReplicasMap: Nit: Please put the new increment and decrement 
functions next to each other for clarity
Done

bq. In this block of code, should the decrement be moved inside the isEmpty 
case? Testing ?
Done. Tests yet to be added for this particular case.

bq. InvalidateBlocks: A lot of the new code is because of the indexing into the 
new array in the map and maintaining the separate counts. If we instead add 
another map, e.g. nodeToBlockGroups, we could avoid this.
Done.

bq. Can we simplify the limiting code on 297 ..?
Done.

bq. LowRedundancyBlocks: Rename from StripedBlocks to instead 
StripedBlockGroups?
Done. To be consistent with other places, I used EC prefix instead of Striped, 
but also included BlockGroups. 

bq. We're duplicating the corruptReplOneBlocks if statement, let's either move 
handling corruptReplicatedOneBlocks out of incrementBlockStat, or also 
increment corruptReplOneBlocks inside incrementBlockStat. Same comment for 
remove
Removed corruptReplOneBlocks as it is handled by corruptReplicatedOneBlocks 
fully.
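As a side note on the LongAdder point discussed above, a minimal standalone sketch of why it suits frequently updated stats (the counter name here is illustrative, not from the patch): LongAdder stripes contended increments across internal cells and only sums them on read, so many threads can bump a metric cheaply.

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class Main {
    public static void main(String[] args) {
        // Many threads increment concurrently; sum() folds the internal
        // cells into a single value when the stat is actually read.
        LongAdder corruptReplicatedBlocks = new LongAdder();
        IntStream.range(0, 1_000).parallel()
                 .forEach(i -> corruptReplicatedBlocks.increment());
        System.out.println(corruptReplicatedBlocks.sum()); // 1000
    }
}
```

Unlike AtomicLong, reads are not a single atomic snapshot, which is an acceptable trade-off for monitoring counters like these.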


> Introduce separate stats for Replicated and Erasure Coded Blocks apart from 
> the current Aggregated stats
> 
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have, supportability
> Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, 
> HDFS-10999.03.patch
>
>
> Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic 
> term "low redundancy" to the old-fashioned "under replicated". But this term 
> is still 

[jira] [Comment Edited] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-05-09 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003294#comment-16003294
 ] 

Manoj Govindassamy edited comment on HDFS-10999 at 5/9/17 6:52 PM:
---

Thanks [~tasanuma0829] and [~andrew.wang] for the detailed review. Attaching the 
v03 patch to address the following comments and a bunch of renamings. While you 
review this revision, I will look at more tests that can be added to verify the 
missing cases.

bq. BlockManagerSafeMode: How about using LongAccumulator for 
numberOfBytesInFutureBlocks, too?
Done. numberOfBytesInFutureBlocks is no longer needed, as it can be derived from 
bytesInFutureReplicatedBlocks and bytesInFutureStripedBlocks. 
numberOfBytesInFutureBlocks is now removed.

bq. CorruptReplicasMap: decrementBlockStat be included in the if statement?
Done

bq. It seems package private is enough for new methods 
getCorruptReplicatedBlocksStat and getCorruptStripedBlocksStat.
 Done 

bq. LowRedundancyBlocks: Looks like corruptReplicatedOneBlocks is same as 
corruptReplOneBlocks. How about reusing corruptReplOneBlocks instead of 
calculating corruptReplicatedOneBlocks?
Done. To be consistent with the newly introduced stats, removed 
corruptReplOneBlocks and used corruptReplicatedOneBlocks.

bq. InvalidateBlocks: Maybe I should have asked earlier, but if it is not much 
trouble for you, how about doing InvalidateBlocks-related work as a follow-on 
task?
Simplified the code by using two maps instead of one. There are still quite a few 
new changes. I tried reverting this file change, but it ended up making the patch 
inconsistent for a few stats. Please take a look at the new version one more time.

bq. ReplicatedBlocksStatsMBean: the equivalent of getCorruptBlocks is now 
called getCorruptReplicaBlocksStat. Why not getCorruptBlocksStat instead?
Done. Renamed the methods as suggested.

bq. Should we rename getUnderReplicatedBlocksStat to 
getLowRedundancyReplicatedBlocksStat or similar to standardize with the EC 
naming? Since we deprecated getUnderReplicatedBlocks in favor of 
getLowRedundancyBlocks, it seems odd to bring this same name back here.
Done. Renamed this to getLowRedundancyBlocksStat.

bq. ECBlockGroupsStatusMBean: the noun is an "EC block group", should we name 
these e.g. getLowRedundancyECBlockGroupsStat, getCorruptECBlockGroupsStat, 
etc.? We could also shorten from "ECBlockGroup" to just "BlockGroup" if you 
think the MBean name by itself is sufficient documentation.
Done. Followed the approach used in fsck as suggested by Takanobu and renamed 
all these methods to have "ECBlockGroups"

bq. Misc: Any reason you chose to use LongAccumulator rather than LongAdder 
everywhere?
Done. LongAdder and LongAccumulator are built on the same striped-counter base. 
Switched to LongAdder as it is far easier to work with than the other.

bq. DFSClient and DistributedFileSystem aren't public APIs, so we don't need to 
preserve getUnderReplicatedBlocksCount if these methods are unused. Can move 
DFSAdmin over to using the new call.
Done.

bq. CorruptReplicasMap: Nit: Please put the new increment and decrement 
functions next to each other for clarity
Done

bq. In this block of code, should the decrement be moved inside the isEmpty 
case? Testing ?
Done. Tests yet to be added for this particular case.

bq. InvalidateBlocks: A lot of the new code is because of the indexing into the 
new array in the map and maintaining the separate counts. If we instead add 
another map, e.g. nodeToBlockGroups, we could avoid this.
Done.

bq. Can we simplify the limiting code on 297 ..?
Done.

bq. LowRedundancyBlocks: Rename from StripedBlocks to instead 
StripedBlockGroups?
Done. To be consistent with other places, I used EC prefix instead of Striped, 
but also included BlockGroups. 

bq. We're duplicating the corruptReplOneBlocks if statement, let's either move 
handling corruptReplicatedOneBlocks out of incrementBlockStat, or also 
increment corruptReplOneBlocks inside incrementBlockStat. Same comment for 
remove
Removed corruptReplOneBlocks as it is handled by corruptReplicatedOneBlocks 
fully.



was (Author: manojg):
Thanks [~tasanuma0829] and [~andrew.wang] for the detailed review. Attaching the 
v03 patch to address the following comments and a bunch of renamings. While you 
review this revision, I will look at more tests that can be added to verify the 
missing cases.

bq. BlockManagerSafeMode: How about using LongAccumulator for 
numberOfBytesInFutureBlocks, too?
Done. numberOfBytesInFutureBlocks is no longer needed, as it can be derived from 
bytesInFutureReplicatedBlocks and bytesInFutureStripedBlocks. 
numberOfBytesInFutureBlocks is now removed.

bq. CorruptReplicasMap: decrementBlockStat be included in the if statement?
Done

bq. It seems package private is enough for new methods 
getCorruptReplicatedBlocksStat and getCorruptStripedBlocksStat.
 Done 

bq. LowRedundancyBlocks: Looks like 

[jira] [Created] (HDFS-11785) Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type

2017-05-09 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-11785:
---

 Summary: Backport HDFS-9902 to branch-2.7: Support different 
values of dfs.datanode.du.reserved per storage type
 Key: HDFS-11785
 URL: https://issues.apache.org/jira/browse/HDFS-11785
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Critical


As per discussion on the [mailing 
list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
 backport HDFS-8312 to branch-2.7






[jira] [Updated] (HDFS-11784) Backport HDFS-8312 to branch-2.7: Trash does not descent into child directories to check for permissions

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11784:

Attachment: HDFS-11784-branch-2.7.patch

> Backport HDFS-8312 to branch-2.7: Trash does not descent into child 
> directories to check for permissions
> 
>
> Key: HDFS-11784
> URL: https://issues.apache.org/jira/browse/HDFS-11784
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-11784-branch-2.7.patch
>
>
> As per discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
>  backport HDFS-8312 to branch-2.7






[jira] [Updated] (HDFS-11784) Backport HDFS-8312 to branch-2.7: Trash does not descent into child directories to check for permissions

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11784:

Attachment: (was: HDFS-11784-branch-2.7.patch.patch)

> Backport HDFS-8312 to branch-2.7: Trash does not descent into child 
> directories to check for permissions
> 
>
> Key: HDFS-11784
> URL: https://issues.apache.org/jira/browse/HDFS-11784
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
>
> As per discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
>  backport HDFS-8312 to branch-2.7






[jira] [Commented] (HDFS-11687) Add new public encryption APIs required by Hive

2017-05-09 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003280#comment-16003280
 ] 

Naveen Gangam commented on HDFS-11687:
--

[~shahrs87] Confirmed. There are no references in the hive code. It appears this 
is the code it originates from:
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-hdfs/2.7.0/org/apache/hadoop/hdfs/KeyProviderCache.java#87
Could this have been addressed by https://issues.apache.org/jira/browse/HDFS-7931? 
Thanks

> Add new public encryption APIs required by Hive
> ---
>
> Key: HDFS-11687
> URL: https://issues.apache.org/jira/browse/HDFS-11687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11687.00.patch, HDFS-11687.01.patch, 
> HDFS-11687.02.patch, HDFS-11687.03.patch
>
>
> As discovered on HADOOP-14333, Hive is using reflection to get a DFSClient 
> for its encryption shim. We should provide proper public APIs for getting 
> this information.






[jira] [Commented] (HDFS-10987) Make Decommission less expensive when lot of blocks present.

2017-05-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003279#comment-16003279
 ] 

Brahma Reddy Battula commented on HDFS-10987:
-

[~kihwal] can you please review the branch-2.7 patch? I am thinking we do not 
need to run jenkins against the patch, since it is a straightforward patch with 
little modification relative to branch-2. I am also fine with raising another 
jira to backport this (since the 2.8 and alpha2 releases are out, per the 
discussion on the [mailing 
list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser]).

> Make Decommission less expensive when lot of blocks present.
> 
>
> Key: HDFS-10987
> URL: https://issues.apache.org/jira/browse/HDFS-10987
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
>  Labels: release-blocker
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10987-002.patch, HDFS-10987-branch-2.7.patch, 
> HDFS-10987.patch
>
>
> When a user wants to decommission a node that has 50M+ blocks, it can hold 
> the namesystem lock for a long time; we've seen it take 36+ seconds. 
> During this time the Namenode is not available. Since the decommission 
> runs continuously until all the blocks are replicated, the Namenode 
> remains unavailable.






[jira] [Comment Edited] (HDFS-11687) Add new public encryption APIs required by Hive

2017-05-09 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003268#comment-16003268
 ] 

Naveen Gangam edited comment on HDFS-11687 at 5/9/17 6:45 PM:
--

Thanks, [~shahrs87]. I believe this is being logged by the HDFS client-side 
code. I can check the hive codebase to confirm.


was (Author: ngangam):
Thanks [~rushabh.shah]. I believe this is being logged by the HDFS client-side 
code. I can check the hive codebase to confirm.

> Add new public encryption APIs required by Hive
> ---
>
> Key: HDFS-11687
> URL: https://issues.apache.org/jira/browse/HDFS-11687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11687.00.patch, HDFS-11687.01.patch, 
> HDFS-11687.02.patch, HDFS-11687.03.patch
>
>
> As discovered on HADOOP-14333, Hive is using reflection to get a DFSClient 
> for its encryption shim. We should provide proper public APIs for getting 
> this information.






[jira] [Commented] (HDFS-11687) Add new public encryption APIs required by Hive

2017-05-09 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003268#comment-16003268
 ] 

Naveen Gangam commented on HDFS-11687:
--

Thanks [~rushabh.shah]. I believe this is being logged by the HDFS client-side 
code. I can check the hive codebase to confirm.

> Add new public encryption APIs required by Hive
> ---
>
> Key: HDFS-11687
> URL: https://issues.apache.org/jira/browse/HDFS-11687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11687.00.patch, HDFS-11687.01.patch, 
> HDFS-11687.02.patch, HDFS-11687.03.patch
>
>
> As discovered on HADOOP-14333, Hive is using reflection to get a DFSClient 
> for its encryption shim. We should provide proper public APIs for getting 
> this information.






[jira] [Updated] (HDFS-11755) Underconstruction blocks can be considered missing

2017-05-09 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-11755:
--
Status: Patch Available  (was: Open)

v1 of the trunk patch. branch-2 will require a separate patch.

> Underconstruction blocks can be considered missing
> --
>
> Key: HDFS-11755
> URL: https://issues.apache.org/jira/browse/HDFS-11755
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.1
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: HDFS-11755.001.patch
>
>
> The following sequence of events can lead to an under-construction block 
> being considered missing.
> - pipeline of 3 DNs, DN1->DN2->DN3
> - DN3 has a failing disk so some updates take a long time
> - Client writes entire block and is waiting for final ack
> - DN1, DN2 and DN3 have all received the block 
> - DN1 is waiting for ACK from DN2 who is waiting for ACK from DN3
> - DN3 is having trouble finalizing the block due to the failing drive. It 
> does eventually succeed but it is VERY slow at doing so. 
> - DN2 times out waiting for DN3 and tears down its pieces of the pipeline, so 
> DN1 notices and does the same. Neither DN1 nor DN2 finalized the block.
> - DN3 finally sends an IBR to the NN indicating the block has been received.
> - Drive containing the block on DN3 fails enough that the DN takes it offline 
> and notifies NN of failed volume
> - NN removes DN3's replica from the triplets and then declares the block 
> missing because there are no other replicas
> Seems like we shouldn't consider uncompleted blocks for replication.  






[jira] [Updated] (HDFS-11784) Backport HDFS-8312 to branch-2.7: Trash does not descent into child directories to check for permissions

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11784:

Attachment: HDFS-11784-branch-2.7.patch.patch

Uploaded the patch, kindly review. HADOOP-13867 also needs to be backported to 
branch-2.7 after this goes in; will handle that in a separate jira.

> Backport HDFS-8312 to branch-2.7: Trash does not descent into child 
> directories to check for permissions
> 
>
> Key: HDFS-11784
> URL: https://issues.apache.org/jira/browse/HDFS-11784
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-11784-branch-2.7.patch.patch
>
>
> As per discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
>  backport HDFS-8312 to branch-2.7






[jira] [Updated] (HDFS-11784) Backport HDFS-8312 to branch-2.7: Trash does not descent into child directories to check for permissions

2017-05-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11784:

Status: Patch Available  (was: Open)

> Backport HDFS-8312 to branch-2.7: Trash does not descent into child 
> directories to check for permissions
> 
>
> Key: HDFS-11784
> URL: https://issues.apache.org/jira/browse/HDFS-11784
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-11784-branch-2.7.patch.patch
>
>
> As per discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
>  backport HDFS-8312 to branch-2.7






[jira] [Updated] (HDFS-11755) Underconstruction blocks can be considered missing

2017-05-09 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-11755:
--
Attachment: HDFS-11755.001.patch

> Underconstruction blocks can be considered missing
> --
>
> Key: HDFS-11755
> URL: https://issues.apache.org/jira/browse/HDFS-11755
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.8.1
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: HDFS-11755.001.patch
>
>
> The following sequence of events can lead to an under-construction block 
> being considered missing.
> - pipeline of 3 DNs, DN1->DN2->DN3
> - DN3 has a failing disk so some updates take a long time
> - Client writes entire block and is waiting for final ack
> - DN1, DN2 and DN3 have all received the block 
> - DN1 is waiting for ACK from DN2 who is waiting for ACK from DN3
> - DN3 is having trouble finalizing the block due to the failing drive. It 
> does eventually succeed but it is VERY slow at doing so. 
> - DN2 times out waiting for DN3 and tears down its pieces of the pipeline, so 
> DN1 notices and does the same. Neither DN1 nor DN2 finalized the block.
> - DN3 finally sends an IBR to the NN indicating the block has been received.
> - Drive containing the block on DN3 fails enough that the DN takes it offline 
> and notifies NN of failed volume
> - NN removes DN3's replica from the triplets and then declares the block 
> missing because there are no other replicas
> Seems like we shouldn't consider uncompleted blocks for replication.  
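The suggestion above — do not consider uncompleted blocks for replication — can be sketched as follows. This is a hypothetical, self-contained Java illustration, not actual NameNode code; `SimpleBlock`, `BlockState`, and `MissingBlockCheck` are invented names:

```java
// Hypothetical sketch only — not actual NameNode code. It illustrates the
// suggestion above: never treat a block that has not been finalized
// (COMPLETE) as missing, even when no live replicas are known.
enum BlockState { UNDER_CONSTRUCTION, COMMITTED, COMPLETE }

class SimpleBlock {
  final BlockState state;
  final int liveReplicas;
  SimpleBlock(BlockState state, int liveReplicas) {
    this.state = state;
    this.liveReplicas = liveReplicas;
  }
}

class MissingBlockCheck {
  // Only finalized blocks with zero live replicas count as missing.
  static boolean isMissing(SimpleBlock b) {
    if (b.state != BlockState.COMPLETE) {
      // Still being written: DN3's late IBR plus a failed volume must not
      // immediately translate into a "missing block" alert.
      return false;
    }
    return b.liveReplicas == 0;
  }

  public static void main(String[] args) {
    SimpleBlock underConstruction = new SimpleBlock(BlockState.UNDER_CONSTRUCTION, 0);
    SimpleBlock finalized = new SimpleBlock(BlockState.COMPLETE, 0);
    System.out.println(MissingBlockCheck.isMissing(underConstruction)); // false
    System.out.println(MissingBlockCheck.isMissing(finalized));         // true
  }
}
```

Under this rule, the scenario in the description stops at the state check: the block was never finalized by DN1 or DN2, so losing DN3's replica does not mark it missing.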






[jira] [Commented] (HDFS-11687) Add new public encryption APIs required by Hive

2017-05-09 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003251#comment-16003251
 ] 

Rushabh S Shah commented on HDFS-11687:
---

bq. Wouldn't we see the same log message again if we use 
HdfsAdmin.getKeyProvider()?
[~ngangam]: Why don't you do a null check before logging?
If encryption is disabled, {{HdfsAdmin#getKeyProvider}} will return null.

[~eddyxu]: We missed committing this patch to branch-2.8.
Can you please commit it to branch-2.8 as well?
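The null-check pattern being suggested can be sketched like this. Illustrative Java only: `KeyProviderSource` and `EncryptionShim` are invented stand-ins so the sketch is self-contained; the real API in question is {{HdfsAdmin#getKeyProvider}}, which returns null when encryption is not configured:

```java
// Illustrative pattern only: guard the provider lookup with a null check
// before logging or using it, since the provider is null when encryption
// is disabled. KeyProviderSource is an invented stand-in for HdfsAdmin.
interface KeyProviderSource {
  Object getKeyProvider(); // returns null when encryption is disabled
}

class EncryptionShim {
  static boolean isEncryptionEnabled(KeyProviderSource admin) {
    Object provider = admin.getKeyProvider();
    if (provider == null) {
      // Encryption disabled: bail out here instead of logging a spurious
      // message or dereferencing a null provider.
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(EncryptionShim.isEncryptionEnabled(() -> null));         // false
    System.out.println(EncryptionShim.isEncryptionEnabled(() -> new Object())); // true
  }
}
```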

> Add new public encryption APIs required by Hive
> ---
>
> Key: HDFS-11687
> URL: https://issues.apache.org/jira/browse/HDFS-11687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11687.00.patch, HDFS-11687.01.patch, 
> HDFS-11687.02.patch, HDFS-11687.03.patch
>
>
> As discovered on HADOOP-14333, Hive is using reflection to get a DFSClient 
> for its encryption shim. We should provide proper public APIs for getting 
> this information.






[jira] [Created] (HDFS-11784) Backport HDFS-8312 to branch-2.7: Trash does not descend into child directories to check for permissions

2017-05-09 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-11784:
---

 Summary: Backport HDFS-8312 to branch-2.7: Trash does not descend 
into child directories to check for permissions
 Key: HDFS-11784
 URL: https://issues.apache.org/jira/browse/HDFS-11784
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Critical


As per discussion in the [mailing 
list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
 backport HDFS-8312 to branch-2.7.






[jira] [Commented] (HDFS-11661) GetContentSummary uses excessive amounts of memory

2017-05-09 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003243#comment-16003243
 ] 

Daryn Sharp commented on HDFS-11661:


Sorry, thought I commented last week.  I agree we should just revert the 
original change entirely.  I may/should/hopefully have a patch by EOW, but 
don't let it block the release.  We've "lived" with the inconsistency this long 
so waiting a bit longer won't hurt.  Between debugging 2.8 and grokking 
snapshots, fixing the discrepancies is taking longer than expected.

> GetContentSummary uses excessive amounts of memory
> --
>
> Key: HDFS-11661
> URL: https://issues.apache.org/jira/browse/HDFS-11661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Nathan Roberts
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap 
> growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track 
> of all INodes visited during the current content summary calculation. This 
> can be all of the INodes in the filesystem, making for a VERY large hash 
> table. This simply won't work on large filesystems. 
> We noticed this after an upgrade, when a namenode with ~100 million filesystem 
> objects began spending significantly more time in GC. Fortunately this system 
> had some memory breathing room; other clusters we have will not run with this 
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that 
> have already been accounted for - to avoid double counting.
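The memory behavior described above can be sketched as follows. This is hypothetical Java, not the actual {{ContentSummaryComputationContext}}: a visited-set keyed by inode id prevents double counting, but necessarily retains one entry per inode visited for the entire traversal, so on a ~100M-object namespace the set itself becomes a very large heap object:

```java
// Sketch of the problem described above (not actual Hadoop code): recording
// every visited inode id in a set makes the set grow with namespace size.
import java.util.HashSet;
import java.util.Set;

class ContentSummaryContextSketch {
  private final Set<Long> visited = new HashSet<>(); // grows to O(all inodes visited)

  // Returns true if this inode was already counted (the double-count guard),
  // but every id ever seen stays resident until the walk finishes.
  boolean nodeIncluded(long inodeId) {
    return !visited.add(inodeId);
  }

  int trackedCount() { return visited.size(); }

  public static void main(String[] args) {
    ContentSummaryContextSketch ctx = new ContentSummaryContextSketch();
    for (long id = 0; id < 1_000_000; id++) {
      ctx.nodeIncluded(id); // one set entry per inode: ~100M entries on a large NN
    }
    System.out.println(ctx.trackedCount()); // 1000000
  }
}
```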






[jira] [Commented] (HDFS-11661) GetContentSummary uses excessive amounts of memory

2017-05-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003239#comment-16003239
 ] 

Wei-Chiu Chuang commented on HDFS-11661:


[~shahrs87]
My bad. You're absolutely correct.

> GetContentSummary uses excessive amounts of memory
> --
>
> Key: HDFS-11661
> URL: https://issues.apache.org/jira/browse/HDFS-11661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Nathan Roberts
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap 
> growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track 
> of all INodes visited during the current content summary calculation. This 
> can be all of the INodes in the filesystem, making for a VERY large hash 
> table. This simply won't work on large filesystems. 
> We noticed this after an upgrade, when a namenode with ~100 million filesystem 
> objects began spending significantly more time in GC. Fortunately this system 
> had some memory breathing room; other clusters we have will not run with this 
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that 
> have already been accounted for - to avoid double counting.






[jira] [Commented] (HDFS-11661) GetContentSummary uses excessive amounts of memory

2017-05-09 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003235#comment-16003235
 ] 

Rushabh S Shah commented on HDFS-11661:
---

bq. I suggest we revert both HDFS-11515 and HDFS-11661 to unblock.
[~jojochuang]: By HDFS-11661, you mean HDFS-10797, correct?

> GetContentSummary uses excessive amounts of memory
> --
>
> Key: HDFS-11661
> URL: https://issues.apache.org/jira/browse/HDFS-11661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Nathan Roberts
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap 
> growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track 
> of all INodes visited during the current content summary calculation. This 
> can be all of the INodes in the filesystem, making for a VERY large hash 
> table. This simply won't work on large filesystems. 
> We noticed this after an upgrade, when a namenode with ~100 million filesystem 
> objects began spending significantly more time in GC. Fortunately this system 
> had some memory breathing room; other clusters we have will not run with this 
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that 
> have already been accounted for - to avoid double counting.






[jira] [Commented] (HDFS-11661) GetContentSummary uses excessive amounts of memory

2017-05-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003225#comment-16003225
 ] 

Wei-Chiu Chuang commented on HDFS-11661:


I don't want to have this one blocking Hadoop 3 alpha3. I suggest we revert 
both HDFS-11515 and HDFS-11661 to unblock.

> GetContentSummary uses excessive amounts of memory
> --
>
> Key: HDFS-11661
> URL: https://issues.apache.org/jira/browse/HDFS-11661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Nathan Roberts
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap 
> growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track 
> of all INodes visited during the current content summary calculation. This 
> can be all of the INodes in the filesystem, making for a VERY large hash 
> table. This simply won't work on large filesystems. 
> We noticed this after an upgrade, when a namenode with ~100 million filesystem 
> objects began spending significantly more time in GC. Fortunately this system 
> had some memory breathing room; other clusters we have will not run with this 
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that 
> have already been accounted for - to avoid double counting.






[jira] [Commented] (HDFS-11732) Backport HDFS-8498 to branch-2.7: Blocks can be committed with wrong size

2017-05-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003216#comment-16003216
 ] 

Wei-Chiu Chuang commented on HDFS-11732:


I'd like to review the patch later. Thanks [~zhz] for taking the initiative.

> Backport HDFS-8498 to branch-2.7: Blocks can be committed with wrong size
> -
>
> Key: HDFS-11732
> URL: https://issues.apache.org/jira/browse/HDFS-11732
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.3
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
> Attachments: HDFS-11732-branch-2.7.00.patch, 
> HDFS-11732-branch-2.7.01.patch
>
>






