[jira] [Commented] (HDFS-10528) Add logging to successful standby checkpointing

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238765#comment-16238765
 ] 

Hadoop QA commented on HDFS-10528:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 17s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 298 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:5 |
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestDatanodeLayoutUpgrade |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.TestParallelShortCircuitReadUnCached |
|   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-10528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810784/HDFS-10528.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 217e3acf2ea5 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 59d78a5 |
| maven | version: Apache Maven 3.3.9 

[jira] [Commented] (HDFS-10528) Add logging to successful standby checkpointing

2017-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238764#comment-16238764
 ] 

Hudson commented on HDFS-10528:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13189 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13189/])
HDFS-10528. Add logging to successful standby checkpointing. Contributed by 
Xiaoyu Yao. (xyao: rev 169cdaa38eca1c0b78f608754eb15d4e6ca87bd9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java


> Add logging to successful standby checkpointing
> ---
>
> Key: HDFS-10528
> URL: https://issues.apache.org/jira/browse/HDFS-10528
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-10528.00.patch
>
>
> This ticket is opened to add an INFO log for successful standby checkpointing 
> in the code below, to aid troubleshooting.
> {code}
> if (needCheckpoint) {
>   doCheckpoint();
>   // reset needRollbackCheckpoint to false only when we finish a ckpt
>   // for rollback image
>   if (needRollbackCheckpoint
>       && namesystem.getFSImage().hasRollbackFSImage()) {
>     namesystem.setCreatedRollbackImages(true);
>     namesystem.setNeedRollbackFsImage(false);
>   }
>   lastCheckpointTime = now;
> }
> } catch (SaveNamespaceCancelledException ce) {
> {code}
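
For illustration, a minimal sketch of the kind of change proposed, assuming a
log message of this shape (the exact wording in the committed patch may differ):

{code}
if (needCheckpoint) {
  doCheckpoint();
  // ... rollback-image handling as above ...
  lastCheckpointTime = now;
  // New: record the successful checkpoint so it can be confirmed from the logs.
  LOG.info("Checkpoint finished successfully at txid " +
      namesystem.getFSImage().getLastAppliedOrWrittenTxId());
}
{code}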



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10528) Add logging to successful standby checkpointing

2017-11-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10528:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Thanks [~arpitagarwal] for the review. I've committed the patch to the trunk.

> Add logging to successful standby checkpointing
> ---
>
> Key: HDFS-10528
> URL: https://issues.apache.org/jira/browse/HDFS-10528
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-10528.00.patch
>
>
> This ticket is opened to add an INFO log for successful standby checkpointing 
> in the code below, to aid troubleshooting.
> {code}
> if (needCheckpoint) {
>   doCheckpoint();
>   // reset needRollbackCheckpoint to false only when we finish a ckpt
>   // for rollback image
>   if (needRollbackCheckpoint
>       && namesystem.getFSImage().hasRollbackFSImage()) {
>     namesystem.setCreatedRollbackImages(true);
>     namesystem.setNeedRollbackFsImage(false);
>   }
>   lastCheckpointTime = now;
> }
> } catch (SaveNamespaceCancelledException ce) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10528) Add logging to successful standby checkpointing

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238725#comment-16238725
 ] 

Hadoop QA commented on HDFS-10528:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:5 |
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-10528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810784/HDFS-10528.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f418406a0a5 3.13.0-116-generic 

[jira] [Commented] (HDFS-12774) Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to hadoop-hdfs-client

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238694#comment-16238694
 ] 

Hadoop QA commented on HDFS-12774:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
42s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
11s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}209m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:8 |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSRemove |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238690#comment-16238690
 ] 

Hadoop QA commented on HDFS-12685:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
31s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:1 |
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-12685 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895980/HDFS-12685-HDFS-9806.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05683fdcd17a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-9806 / 365edb0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| Unreaped Processes Log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21954/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-reaper.txt
 |
| unit | 

[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238688#comment-16238688
 ] 

Hudson commented on HDFS-12681:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13188 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13188/])
HDFS-12681. Fold HdfsLocatedFileStatus into HdfsFileStatus. (cdouglas: rev 
b85603e3f85e85da406241b991f3a9974384c3aa)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocatedFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/Hdfs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java


> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.
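
To make the conversion cost concrete, here is a self-contained sketch of the
type relationship described above (fields simplified and hypothetical; the real
classes carry many more):

{code}
class FileStatus {
  final long len;
  FileStatus(long len) { this.len = len; }
}

class LocatedFileStatus extends FileStatus {
  final String[] locations;
  LocatedFileStatus(long len, String[] locations) {
    super(len);
    this.locations = locations;
  }
}

class HdfsFileStatus extends FileStatus {
  final byte[] symlink; // HDFS-specific data
  HdfsFileStatus(long len, byte[] symlink) {
    super(len);
    this.symlink = symlink;
  }
}

// Carries locations but is NOT a LocatedFileStatus, so callers must convert by
// copying the common fields and shedding the HDFS-only ones:
class HdfsLocatedFileStatus extends HdfsFileStatus {
  final String[] locations;
  HdfsLocatedFileStatus(long len, byte[] symlink, String[] locations) {
    super(len, symlink);
    this.locations = locations;
  }
  LocatedFileStatus makeLocated() {
    return new LocatedFileStatus(len, locations); // symlink is dropped here
  }
}
{code}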



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238682#comment-16238682
 ] 

Hadoop QA commented on HDFS-12607:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
19s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
21s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
7m  7s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:6 |
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | 

[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-03 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238671#comment-16238671
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12594:


For RemoteIterator, since there is no change in the signature of 
DistributedFileSystem.getSnapshotDiffReport, we may add a new 
getSnapshotDiffReportListing later.
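
A possible shape for such an addition (purely illustrative; the method and its
signature are not defined by this patch):

{code}
// Hypothetical future API next to DistributedFileSystem.getSnapshotDiffReport:
RemoteIterator<SnapshotDiffReportListing> getSnapshotDiffReportListing(
    Path snapshotDir, String fromSnapshot, String toSnapshot) throws IOException;
{code}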

> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length, which is by default 
> 128 MB. 
> Worst case, with all rename ops in snapshots, each with source and target 
> name equal to MAX_PATH_LEN (8k characters), this would result in at most 
> 8192 renames.
>  
> SnapshotDiff is currently used by distcp to optimize copy operations; if the 
> diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.
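
For context, the cap the report is hitting is the key named in the description;
a sketch of raising it (a stopgap workaround only, not the fix this JIRA
proposes):

{code}
// org.apache.hadoop.conf.Configuration
Configuration conf = new Configuration();
// Default is 128 MB per the description; a huge snapshot diff report overflows it.
conf.setInt("ipc.maximum.response.length", 256 * 1024 * 1024);
{code}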



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-03 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238670#comment-16238670
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12594:


Some other comments on the patch.

- Since there is already a 
"dfs.namenode.snapshotdiff.allow.snap-root-descendant", rename 
"dfs.snapshotdiff-report.limit" to "dfs.namenode.snapshotdiff.listing.limit" 
and move it next to DFS_NAMENODE_SNAPSHOT_DIFF_ALLOW_SNAP_ROOT_DESCENDANT.

- Use int for index and snapshotDiffReportLimit instead of Integer.  Use long 
instead of Long, boolean instead of Boolean, etc.

- SnapshotDiffReportGenerator should be moved to the 
org.apache.hadoop.hdfs.client.impl package.

- Use byte[][] in SnapshotDiffReportListing for sourcePath and targetPath
-* bytes2String and string2Bytes are expensive, please avoid calling them (a 
byte-level alternative is sketched after this list).
{code}
public byte[] getParent() {
  if (sourcePath == null || DFSUtilClient.bytes2String(sourcePath)
  .isEmpty()) {
return null;
  } else {
Path path = new Path(DFSUtilClient.bytes2String(sourcePath));
return DFSUtilClient.string2Bytes(path.getParent().toString());
  }
}
{code}

- In DistributedFileSystem.getSnapshotDiffReportInternal,
-* deltetedList should be deletedList
-* remove snapDiffReport, just return snapshotDiffReport.generateReport();
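
A byte-level alternative to the getParent() snippet above might look like the
following (a sketch only, assuming '/'-separated paths; not taken from the
patch):

{code}
public byte[] getParent() {
  if (sourcePath == null || sourcePath.length == 0) {
    return null;
  }
  int lastSlash = -1;
  for (int i = 0; i < sourcePath.length; i++) {
    if (sourcePath[i] == '/') {
      lastSlash = i;
    }
  }
  if (lastSlash < 0) {
    return null; // no separator, no parent
  }
  if (lastSlash == 0) {
    return new byte[] { '/' }; // parent of a top-level entry is the root
  }
  return java.util.Arrays.copyOf(sourcePath, lastSlash); // bytes before last '/'
}
{code}

This keeps everything in byte[] and avoids both bytes2String and string2Bytes.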

I have not finished reviewing the entire patch yet.  Will continue.


> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length, which is by default 
> 128 MB. 
> Worst case, with all rename ops in snapshots, each with source and target 
> name equal to MAX_PATH_LEN (8k characters), this would result in at most 
> 8192 renames.
>  
> SnapshotDiff is currently used by distcp to optimize copy operations; if the 
> diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-11-03 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12779:
-

 Summary: [READ] Allow cluster id to be specified to the Image 
generation tool
 Key: HDFS-12779
 URL: https://issues.apache.org/jira/browse/HDFS-12779
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238627#comment-16238627
 ] 

Virajith Jalaparti commented on HDFS-12713:
---

Please see [this 
comment|https://issues.apache.org/jira/browse/HDFS-12665?focusedCommentId=16238568&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16238568]
 on HDFS-12665. I think this JIRA should be used to add the block pool id to 
the {{BlockAliasMap}}, and in turn to its current implementation 
{{TextFileRegionAliasMap}}. 

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-12713-HDFS-9806.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Description: Provided Volumes currently report infinite capacity and 0 
space used. Further, PROVIDED locations are reported as 
{{/default-rack/null:0}} in fsck. This JIRA is for making this more readable 
and replacing these with what users would expect.  (was: Provided Volumes 
currently report infinite capacity and 0 space used. This JIRA aims to replace 
this with a capacity report that users would expect.)

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Priority: Major
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable and replacing these with what 
> users would expect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12775) [READ] Fix reporting for Provided volumes

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Summary: [READ] Fix reporting for Provided volumes  (was: [READ] Fix 
capacity reporting for Provided volumes)

> [READ] Fix reporting for Provided volumes
> -
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Priority: Major
>
> Provided Volumes currently report infinite capacity and 0 space used. This 
> JIRA aims to replace this with a capacity report that users would expect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Summary: [READ] Fix reporting of Provided volumes  (was: [READ] Fix 
reporting for Provided volumes)

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Priority: Major
>
> Provided Volumes currently report infinite capacity and 0 space used. This 
> JIRA aims to replace this with a capacity report that users would expect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Description: 
As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
Storing the data for these blocks can lead to a large memory footprint. 
Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED volume 
can increase the memory and CPU utilization. 

To reduce these overheads, this JIRA aims to (a) disable the 
{{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
read-only data in PROVIDED volumes), (b) reduce the space occupied by 
{{FinalizedProvidedReplicaInfo by using a common URI prefix across all PROVIDED 
blocks.



  was:
As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
storage. This can be millions of blocks for 100s of TBs of PROVIDED data. This 
JIRA aims to reduce the memory footprint of these blocks by using a common URI 
prefix across all PROVIDED blocks.
Further, with so many blocks the DirectoryScanner can take up a lot of 




> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Priority: Major
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo by using a common URI prefix across all 
> PROVIDED blocks.
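
To make the prefix-sharing idea concrete, a minimal sketch (class and field
names are hypothetical, not taken from any patch):

{code}
import java.net.URI;

class ProvidedVolume {
  // One base URI shared by every replica on the volume, e.g. the remote store root.
  final URI base;
  ProvidedVolume(URI base) { this.base = base; }
}

class ProvidedReplica {
  final ProvidedVolume volume;
  final String suffix; // short per-block remainder instead of a full URI
  ProvidedReplica(ProvidedVolume volume, String suffix) {
    this.volume = volume;
    this.suffix = suffix;
  }
  URI blockURI() {
    return volume.base.resolve(suffix); // full URI is rebuilt only on demand
  }
}
{code}

With millions of PROVIDED blocks, storing one short suffix per replica instead
of one full URI per replica is where the memory saving comes from.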



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Description: 
As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
Storing the data for these blocks can lead to a large memory footprint. 
Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED volume 
can increase the memory and CPU utilization. 

To reduce these overheads, this JIRA aims to (a) disable the 
{{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
read-only data in PROVIDED volumes), (b) reduce the space occupied by 
{{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
PROVIDED blocks.



  was:
As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
Storing the data for these blocks can lead to a large memory footprint. 
Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED volume 
can increase the memory and CPU utilization. 

To reduce these overheads, this JIRA aims to (a) disable the 
{{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
read-only data in PROVIDED volumes), (b) reduce the space occupied by 
{{FinalizedProvidedReplicaInfo by using a common URI prefix across all PROVIDED 
blocks.




> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Priority: Major
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Summary: [READ] Reduce memory and CPU footprint for PROVIDED volumes.  
(was: [READ] Share URI prefix across Provided blocks to lower memory footprint.)

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Priority: Major
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> This JIRA aims to reduce the memory footprint of these blocks by using a 
> common URI prefix across all PROVIDED blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Description: 
As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
storage. This can be millions of blocks for 100s of TBs of PROVIDED data. This 
JIRA aims to reduce the memory footprint of these blocks by using a common URI 
prefix across all PROVIDED blocks.
Further, with so many blocks the DirectoryScanner can take up a lot of 



  was:As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
storage. This can be millions of blocks for 100s of TBs of PROVIDED data. This 
JIRA aims to reduce the memory footprint of these blocks by using a common URI 
prefix across all PROVIDED blocks.


> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Priority: Major
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> This JIRA aims to reduce the memory footprint of these blocks by using a 
> common URI prefix across all PROVIDED blocks.
> Further, with so many blocks the DirectoryScanner can take up a lot of 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-03 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12778:
-

 Summary: [READ] Report multiple locations for PROVIDED blocks
 Key: HDFS-12778
 URL: https://issues.apache.org/jira/browse/HDFS-12778
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti
Priority: Major


On {{getBlockLocations}}, only one Datanode is returned as the location for all 
PROVIDED blocks. This can hurt the performance of applications, which typically 
expect 3 locations per block. We need to return multiple Datanodes for each 
PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12777) [READ] Share URI prefix across Provided blocks to lower memory footprint.

2017-11-03 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12777:
-

 Summary: [READ] Share URI prefix across Provided blocks to lower 
memory footprint.
 Key: HDFS-12777
 URL: https://issues.apache.org/jira/browse/HDFS-12777
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti
Priority: Major


As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
storage. This can be millions of blocks for 100s of TBs of PROVIDED data. This 
JIRA aims to reduce the memory footprint of these blocks by using a common URI 
prefix across all PROVIDED blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11754) Make FsServerDefaults cache configurable.

2017-11-03 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238612#comment-16238612
 ] 

Subru Krishnan commented on HDFS-11754:
---

Moving out to 2.10.0 as the 2.9.0 release is ongoing.

> Make FsServerDefaults cache configurable.
> -
>
> Key: HDFS-11754
> URL: https://issues.apache.org/jira/browse/HDFS-11754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Mikhail Erofeev
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-11754.001.patch, HDFS-11754.002.patch, 
> HDFS-11754.003.patch, HDFS-11754.004.patch, HDFS-11754.005.patch, 
> HDFS-11754.006.patch
>
>
> DFSClient caches the result of FsServerDefaults for 60 minutes.
> But the 60-minute interval is not configurable.
> Continuing the discussion from HDFS-11702, it would be nice if we could make 
> this configurable, keeping the default at 60 minutes.
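
A sketch of what the client-side change could look like, assuming a key of this
shape (the key name and surrounding code are illustrative, not taken from the
patch):

{code}
// Hypothetical config key; the 60-minute default is preserved.
long validityMs = conf.getTimeDuration(
    "dfs.client.server-defaults.validity.period.ms",
    60 * 60 * 1000, TimeUnit.MILLISECONDS);
if (Time.monotonicNow() - serverDefaultsLastUpdate > validityMs) {
  serverDefaults = namenode.getServerDefaults(); // refresh the cached entry
  serverDefaultsLastUpdate = Time.monotonicNow();
}
{code}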



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11754) Make FsServerDefaults cache configurable.

2017-11-03 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated HDFS-11754:
--
Target Version/s: 2.10.0  (was: 2.9.0)

> Make FsServerDefaults cache configurable.
> -
>
> Key: HDFS-11754
> URL: https://issues.apache.org/jira/browse/HDFS-11754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Mikhail Erofeev
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-11754.001.patch, HDFS-11754.002.patch, 
> HDFS-11754.003.patch, HDFS-11754.004.patch, HDFS-11754.005.patch, 
> HDFS-11754.006.patch
>
>
> DFSClient caches the result of FsServerDefaults for 60 minutes.
> But the 60-minute interval is not configurable.
> Continuing the discussion from HDFS-11702, it would be nice if we could make 
> this configurable, keeping the default at 60 minutes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12774) Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to hadoop-hdfs-client

2017-11-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238598#comment-16238598
 ] 

Xiaoyu Yao commented on HDFS-12774:
---

Thanks [~nandakumar131] for the patch. It looks good to me. +1 pending Jenkins.

> Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to 
> hadoop-hdfs-client
> 
>
> Key: HDFS-12774
> URL: https://issues.apache.org/jira/browse/HDFS-12774
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-12774-HDFS-7240.000.patch
>
>
> {{OzoneException}} has to be used in hadoop-hdfs-client; since we cannot 
> refer to classes in hadoop-hdfs from hadoop-hdfs-client, it has to be moved 
> to the client module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12474) Ozone: SCM: Handling container report with key count and container usage.

2017-11-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238597#comment-16238597
 ] 

Xiaoyu Yao commented on HDFS-12474:
---

[~nandakumar131], can you rebase the patch as it does not apply anymore? Thanks!

> Ozone: SCM: Handling container report with key count and container usage.
> -
>
> Key: HDFS-12474
> URL: https://issues.apache.org/jira/browse/HDFS-12474
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Nanda kumar
>Priority: Major
>  Labels: ozoneMerge
> Attachments: HDFS-12474-HDFS-7240.000.patch, 
> HDFS-12474-HDFS-7240.001.patch, HDFS-12474-HDFS-7240.002.patch
>
>
> Currently, the container report only contains the # of reports sent to SCM. 
> We will need to provide the key count and the usage of each individual 
> container to update the SCM container state maintained by 
> ContainerStateManager. This has a dependency on HDFS-12387.
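For illustration only, a sketch of the per-container fields the description 
asks for; the names are assumptions, not the protobuf actually defined by this 
patch or HDFS-12387:

{code}
// Hypothetical shape of one entry in the container report.
class ContainerReportEntry {
  String containerName; // container being reported
  long keyCount;        // number of keys stored in the container
  long bytesUsed;       // space used, to update ContainerStateManager's view
}
{code}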



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-03 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12776:
-

 Summary: [READ] Increasing replication for PROVIDED files should 
create local replicas
 Key: HDFS-12776
 URL: https://issues.apache.org/jira/browse/HDFS-12776
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti
Priority: Major


For PROVIDED files, setting the replication only works when the target datanode 
does not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
volumes, setting the replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12775) [READ] Fix capacity reporting for Provided volumes

2017-11-03 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12775:
-

 Summary: [READ] Fix capacity reporting for Provided volumes
 Key: HDFS-12775
 URL: https://issues.apache.org/jira/browse/HDFS-12775
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti
Priority: Major


Provided Volumes currently report infinite capacity and 0 space used. This JIRA 
aims to replace this with a capacity report that users would expect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10467) Router-based HDFS federation

2017-11-03 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-10467:
---
Fix Version/s: 2.9.0

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HDFS Router Federation.pdf, HDFS-10467.002.patch, 
> HDFS-10467.PoC.001.patch, HDFS-10467.PoC.patch, 
> HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238580#comment-16238580
 ] 

Hadoop QA commented on HDFS-9240:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 52s{color} | {color:orange} root: The patch generated 9 new + 530 unchanged 
- 44 fixed = 539 total (was 574) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
56s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 25s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
8s{color} | {color:green} hadoop-gridmix in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
38s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| 

[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-11-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238568#comment-16238568
 ] 

Virajith Jalaparti commented on HDFS-12665:
---

Hi [~ehiggs], thanks for posting the new patch. A couple of comments:
# This patch includes changes that are part of HDFS-11902. Can you post a patch 
that does not include these?
# The datanode should check the block pool id associated with a FileRegion 
before loading it. The patch eliminates this check (in 
{{ProvidedBlockPoolSlice}}). This should be retained as it ensures that the 
Datanode doesn't load blocks that shouldn't be associated with a Namenode. For 
example, consider the case where a DN reports to two Namenodes, NN1 and NN2, in 
federation. Only NN1 is configured with PROVIDED. Both NN1 and NN2 might have a 
block with the same id but NN1 refers to a PROVIDED block and NN2 refers to a 
local block. The DN needs to distinguish these two blocks with the same id. 
One way for the DN to know this is if the {{FileRegion}} or {{AliasMap}} has a 
block pool id associated with it. This ensures that the block can be 
distinguished in the {{ReplicaMap}} of the {{FsDatasetImpl}} and the two 
blocks aren't mixed up.

My proposal is to have the following as part of the API of {{BlockAliasMap}} so 
that we can get the block pool id from the alias map.

{code}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
index d276fb52036..e564097fd2e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
@@ -47,6 +47,7 @@
  */
 public abstract U resolve(Block ident) throws IOException;

+public abstract String getBlockPoolID() throws IOException;
   }

   /**
@@ -74,10 +75,12 @@
   /**
* Returns the writer for the alias map.
* @param opts writer options.
+   * @param blockPoolID block pool id to use
* @return {@link Writer} to the alias map.
* @throws IOException
*/
-  public abstract Writer getWriter(Writer.Options opts) throws IOException;
+  public abstract Writer getWriter(Writer.Options opts, String blockPoolID)
+  throws IOException;

   /**
* Refresh the alias map.
{code}

I think this change along with the change of adding the 
{{ProvidedStorageLocation}} should be done as part of HDFS-12713.
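To make the intent concrete, a hedged sketch of the retained check using the 
proposed API; the helper names are hypothetical and this is not the committed 
code:

{code}
// Skip alias maps that belong to a different Namenode's block pool, so a
// PROVIDED block and a local block with the same block id (from two
// federated NNs) are never confused in the ReplicaMap.
void loadProvidedBlocks(BlockAliasMap.Reader<FileRegion> reader, String bpid)
    throws IOException {
  if (!bpid.equals(reader.getBlockPoolID())) { // proposed API above
    return;
  }
  // assuming the reader can iterate over its FileRegions
  for (FileRegion region : reader) {
    addReplicaFor(region); // hypothetical: register in the ReplicaMap
  }
}
{code}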

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load on the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in-memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).
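A schematic rendering of the key/value layout described above; the field names 
are illustrative, as the real schema is protobuf-defined:

{code}
// Key: identifies a block uniquely across block pools.
class AliasMapKey {
  String blockPoolId;
  long blockId;
  long genStamp;
}

// Value: locates the block's bytes in the remote store.
class AliasMapValue {
  String url;
  long offset;
  long length;
  byte[] nonce;
}
{code}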



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-11-03 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
   Resolution: Fixed
 Assignee: Chris Douglas
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

I committed this. Thanks for the review [~elgoiri] and [~ste...@apache.org].

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12772) RBF: Track Router states

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238523#comment-16238523
 ] 

Hadoop QA commented on HDFS-12772:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 27 new + 413 unchanged - 0 fixed = 440 total (was 413) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 282 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:7 |
| Failed junit tests | hadoop.hdfs.TestFileAppend3 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSInputStream |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestLocalDFS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestFileLengthOnClusterRestart |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
|   | 

[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238520#comment-16238520
 ] 

Anu Engineer commented on HDFS-12734:
-

Sorry, my bad: not in the case of Jenkins builds, but if you are using docker 
images for development. Please see HDFS-12702.
However, in the future, we would like to add this to Jenkins too.

With that confusion cleared up, can I presume that you are OK with lifting your 
-1?




> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238499#comment-16238499
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. The only thing this patch does is add hugo automatically in the case of 
Jenkins builds; it just makes it easy for us when we do releases.

The current set of patches does neither of those things:

* Yetus isn't going to fail the build if hugo isn't there.
* Since hugo is being added below the CUT HERE line, it won't be part of the 
Docker image that create-release or Yetus use.




> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238486#comment-16238486
 ] 

Anu Engineer edited comment on HDFS-12734 at 11/3/17 10:19 PM:
---

bq. the requirements that we place on end users trying to build Hadoop. Every 
additional dependency just makes it that much harder.

This patch already skips the Hugo-based path if the user does not have it, so 
the user is not burdened by it; the only impact is not having this optional 
feature in the UI. The only thing this patch does is add hugo automatically in 
the case of Jenkins builds; it just makes it easy for us when we do releases.

I do agree that we should document this in BUILDING.txt as an optional 
dependency if someone wants to get that feature.

bq. Why does it need to be hugo-based?

As I said already, I was genuinely trying to reduce the dependency on Web UI 
tools. I used whatever was being proposed as the tool for the next generation 
of the Hadoop website. I sincerely thought I was reducing confusion by using 
the next set of tools that Hadoop seems to be taking a dependency on. As I 
said, this is an optional feature; not having this tool on the PATH will not 
even cause a build issue.


was (Author: anu):
bq. the requirements that we place on end users trying to build Hadoop. Every 
additional dependency just makes it that much harder.

This patch already skips the Hugo-based path if the user does not have it, so 
the user is not burdened by it; the only impact is not having this optional 
feature in the UI. The only thing this patch does is add that automatically in 
the case of Jenkins builds; it just makes it easy for us when we do releases.

I do agree that we should document this in BUILDING.txt as an optional 
dependency if someone wants to get that feature.

bq. Why does it need to be hugo-based?

As I said already, I was genuinely trying to reduce the dependency on Web UI 
tools. I used whatever was being proposed as the tool for the next generation 
of the Hadoop website. I sincerely thought I was reducing confusion by using 
the next set of tools that Hadoop seems to be taking a dependency on. As I 
said, this is an optional feature; not having this tool on the PATH will not 
even cause a build issue.

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238486#comment-16238486
 ] 

Anu Engineer commented on HDFS-12734:
-

bq. the requirements that we place on end users trying to build Hadoop. Every 
additional dependency just makes it that much harder.

This patch already skips the Hugo-based path if the user does not have it, so 
the user is not burdened by it; the only impact is not having this optional 
feature in the UI. The only thing this patch does is add that automatically in 
the case of Jenkins builds; it just makes it easy for us when we do releases.

I do agree that we should document this in BUILDING.txt as an optional 
dependency if someone wants to get that feature.

bq. Why does it need to be hugo-based?

As I said already, I was genuinely trying to reduce the dependency on Web UI 
tools. I used whatever was being proposed as the tool for the next generation 
of the Hadoop website. I sincerely thought I was reducing confusion by using 
the next set of tools that Hadoop seems to be taking a dependency on. As I 
said, this is an optional feature; not having this tool on the PATH will not 
even cause a build issue.

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238467#comment-16238467
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. from the comments on that thread, it looks to me that that is the 
direction we want to go.

It doesn't mean anything until it's been committed.  I can point to lots and 
lots of issues where this is true... and years later, still open.

bq. If you have a hugo based site, how do you want to generate it?

Why does it need to be hugo-based?  We've already got all of 
node/npm/bower/yarn sitting there due to the overly heavy yarn-ui.

bq. by asking people to install the build tool each time? 

What do you think happens for people who aren't using Docker?  Or, what about 
platforms where Go doesn't work at all?

In my mind, there is a very big difference between what gets posted on 
hadoop.apache.org and the requirements that we place on end users trying to 
build Hadoop.  Every additional dependency just makes it that much harder.

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12759) Ozone: web: integrate configuration reader page to the SCM/KSM web ui.

2017-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238464#comment-16238464
 ] 

Anu Engineer commented on HDFS-12759:
-

Looks like the patch failed because it ran against trunk:
{noformat}
HEAD is now at e6ec020 YARN-7370: Preemption properties should be refreshable. 
Contrubted by Gergely Novák.
Switched to a new branch 'trunk'
Branch trunk set up to track remote branch trunk from origin.
Current branch trunk is up to date.
Already on 'trunk'
Your branch is up-to-date with 'origin/trunk'.
{noformat}

I think the issue is that the branch part of the patch name has a typo. We 
should rename the patch: 
HDFS-12759\-HDFS-*7280*.002.patch ==> HDFS-12759\-HDFS-*7240*.002.patch


> Ozone: web: integrate configuration reader page to the SCM/KSM web ui.
> --
>
> Key: HDFS-12759
> URL: https://issues.apache.org/jira/browse/HDFS-12759
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: web-ui
> Attachments: HDFS-12759-HDFS-7240.001.patch, 
> HDFS-12759-HDFS-7280.002.patch, after1.png, after2.png, before1.png, 
> before2.png, conf.png
>
>
> In the current SCM/KSM web UI, the configuration pages are
>  * hidden under the Common Tools menu
>  * opened as a different type of web page (different menu and style).
> In this patch I integrate the configuration page into the existing web UI.
> From the user's point of view:
>  * The Configuration page is moved to a separate main menu entry
>  * The menu of the Configuration page is the same as all the others
>  * Metrics are also moved to separate pages/menus
>  * As the configuration page requires full width, all the pages use a 
> full-width layout
> From the technical point of view:
>  * To support multiple pages I enabled the angular router (which has already 
> been added as a component)
>  * Now it's supported to create multiple pages and navigate between them, so 
> I also moved the metrics pages to different pages, making the main overview 
> page cleaner.
>  * The layout is changed to use the full width.
> TESTING:
> It's a client-side-only change. The easiest way to test is to do a full 
> build, start SCM/KSM, and check the menu items:
>  
>  * All the menu items should work
>  * The Configuration page (from the main menu) should use the same header
>  * The configuration item of the Common Tools menu shows the good old raw 
> configuration page



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238457#comment-16238457
 ] 

Anu Engineer commented on HDFS-12734:
-

bq. HADOOP-14163 has been open for over 6 months and doesn't appear to be 
anywhere near completion. It also doesn't appear to impact any build-time 
dependencies.
I agree it has been open for a while, but from the comments on that thread, it 
looks to me that that is the direction we want to go. 
If you have a hugo-based site, how do you want to generate it? By asking people 
to install the build tool each time? 
Eventually, I am sure we will build it just like mvn site.


bq. To add insult to injury, BUILDING.txt wasn't even updated to list it as a 
dependency.
Very fair point. Thanks for the feedback. [~elek] Can you please take this as 
code review feedback and update BUILDING.txt?

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238440#comment-16238440
 ] 

Allen Wittenauer commented on HDFS-12734:
-

HADOOP-14163 has been open for over 6 months and doesn't appear to be anywhere 
near completion. It also doesn't appear to impact any build-time dependencies.

That's a very different situation than what this patch is proposing.  It 
specifically adds another build-time dependency in the critical path. Worse, I 
think this may be something like the 5th website generator in the source tree.  
(I don't even know if I can name them all anymore.) To add insult to injury, 
BUILDING.txt wasn't even updated to list it as a dependency.


> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12685:
--
Attachment: HDFS-12685-HDFS-9806.002.patch

Posting a patch that rebases the earlier patch on the most recent version of 
the HDFS-9806 branch.
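For context on the trace quoted below: {{java.io.File}} rejects any URI whose 
scheme is not {{file}}, so building {{File}} objects from remote PROVIDED 
block URIs throws. A self-contained illustration (the s3a URI is just an 
example):

{code}
import java.io.File;
import java.net.URI;

public class UriSchemeDemo {
  public static void main(String[] args) {
    URI remote = URI.create("s3a://bucket/blocks/blk_123"); // illustrative
    if ("file".equalsIgnoreCase(remote.getScheme())) {
      System.out.println(new File(remote)); // safe only for file: URIs
    } else {
      // new File(remote) would throw:
      // java.lang.IllegalArgumentException: URI scheme is not "file"
      System.out.println("non-file URI, handle as PROVIDED storage: " + remote);
    }
  }
}
{code}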

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Updated] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12685:
--
Status: Patch Available  (was: Open)

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Updated] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12685:
--
Status: Open  (was: Patch Available)

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-11-03 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
Hadoop Flags: Incompatible change, Reviewed
Release Note: HdfsFileStatus is now a subtype of LocatedFileStatus, and 
HdfsLocatedFileStatus is deleted. Applications that distinguish calls that 
include block locations using instanceof may be affected.
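For illustration, the kind of application code the release note warns about (a 
hedged sketch, not from any particular application):

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.LocatedFileStatus;

// Before this change, a plain HdfsFileStatus was not a LocatedFileStatus, so
// this check distinguished results that carried block locations. With
// HdfsFileStatus extending LocatedFileStatus, the check can now be true even
// when no locations were requested.
public class LocatedCheck {
  static boolean looksLocated(FileStatus st) {
    return st instanceof LocatedFileStatus;
  }
}
{code}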

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-11-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238423#comment-16238423
 ] 

Virajith Jalaparti commented on HDFS-12607:
---

Posting a rebased patch. It also removes an extra flag ({{hasDNs}}) and uses 
{{providedDescriptor.activeProvidedDatanodes()}} instead.
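Roughly, the intended transitions look like the following hedged sketch; the 
method names approximate the patch and are not verbatim:

{code}
// Fail the shared PROVIDED storage only when no active PROVIDED datanode
// remains; restore it when one reports in with a NORMAL storage state.
void onDatanodeDead(DatanodeDescriptor dn) {
  providedDescriptor.remove(dn); // hypothetical helper
  if (providedDescriptor.activeProvidedDatanodes() == 0) {
    providedStorageInfo.setState(DatanodeStorage.State.FAILED);
  }
}

void onDatanodeRegistered(DatanodeStorage s) {
  if (s.getState() == DatanodeStorage.State.NORMAL) {
    providedStorageInfo.setState(DatanodeStorage.State.NORMAL);
  }
}
{code}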

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607-HDFS-9806.003.patch, 
> HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12607:
--
Status: Patch Available  (was: Open)

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607-HDFS-9806.003.patch, 
> HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238422#comment-16238422
 ] 

Anu Engineer edited comment on HDFS-12734 at 11/3/17 9:30 PM:
--

bq. This is not an option. mvn site needs to be used to be consistent with the 
rest of Hadoop. If you want to move Hadoop to something that isn't mvn site, 
that's a much bigger conversation and should definitely not be snuck into a 
patch.
I am sorry that the description of this JIRA is confusing. Ozone docs use mvn 
site exactly as Hadoop does. There is no deviation from existing behavior.

This is a completely new feature that allows KSM to carry documentation as part 
of the KSM UI (completely separate from the standard documentation of the 
Hadoop site). This is completely orthogonal to the mvn site command. The reason 
we chose hugo to do this was HADOOP-14163: that patch uses Hugo, and we did not 
want to introduce any new dependencies via Ozone.

[~aw] I hope this addresses your concern and you would be kind enough to lift 
your -1.


was (Author: anu):
bq. This is not an option. mvn site needs to be used to be consistent with the 
rest of Hadoop. If you want to move Hadoop to something that isn't mvn site, 
that's a much bigger conversation and should definitely not be snuck into a 
patch.
I am sorry that the description of this JIRA is confusing. Ozone docs use mvn 
site exactly as Hadoop does. There is no deviation from existing behavior.

This is a completely new feature that allows KSM to carry documentation as part 
of the KSM UI (completely separate from the standard documentation of the 
Hadoop site). This is completely orthogonal to the mvn site command. The reason 
we chose hugo to do this was HADOOP-14163: that patch uses Hugo, and we did not 
want to introduce any new dependencies via Ozone.
 

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12607:
--
Status: Open  (was: Patch Available)

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607-HDFS-9806.003.patch, 
> HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12607:
--
Attachment: HDFS-12607-HDFS-9806.003.patch

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607-HDFS-9806.003.patch, 
> HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238422#comment-16238422
 ] 

Anu Engineer commented on HDFS-12734:
-

bq. This is not an option. mvn site needs to be used to be consistent with the 
rest of Hadoop. If you want to move Hadoop to something that isn't mvn site, 
that's a much bigger conversation and should definitely not be snuck into a 
patch.
I am sorry that the description of this JIRA is confusing. Ozone docs use mvn 
site exactly as Hadoop does. There is no deviation from existing behavior.

This is a completely new feature that allows KSM to carry documentation as part 
of the KSM UI (completely separate from the standard documentation of the 
Hadoop site). This is completely orthogonal to the mvn site command. The reason 
we chose hugo to do this was HADOOP-14163: that patch uses Hugo, and we did not 
want to introduce any new dependencies via Ozone.
 

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check 
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12774) Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to hadoop-hdfs-client

2017-11-03 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238418#comment-16238418
 ] 

Nanda kumar commented on HDFS-12774:


More info [here | 
https://issues.apache.org/jira/browse/HDFS-12549?focusedCommentId=16238371=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16238371].

> Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to 
> hadoop-hdfs-client
> 
>
> Key: HDFS-12774
> URL: https://issues.apache.org/jira/browse/HDFS-12774
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-12774-HDFS-7240.000.patch
>
>
> {{OzoneException}} has to be used in hadoop-hdfs-client; since we cannot
> refer to classes in hadoop-hdfs from hadoop-hdfs-client, it has to be moved
> to the client module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12774) Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to hadoop-hdfs-client

2017-11-03 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12774:
---
Status: Patch Available  (was: Open)

> Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to 
> hadoop-hdfs-client
> 
>
> Key: HDFS-12774
> URL: https://issues.apache.org/jira/browse/HDFS-12774
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-12774-HDFS-7240.000.patch
>
>
> {{OzoneException}} has to be used in hadoop-hdfs-client; since we cannot
> refer to classes in hadoop-hdfs from hadoop-hdfs-client, it has to be moved
> to the client module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12774) Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to hadoop-hdfs-client

2017-11-03 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12774:
---
Attachment: HDFS-12774-HDFS-7240.000.patch

> Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to 
> hadoop-hdfs-client
> 
>
> Key: HDFS-12774
> URL: https://issues.apache.org/jira/browse/HDFS-12774
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-12774-HDFS-7240.000.patch
>
>
> {{OzoneException}} has to be used in hadoop-hdfs-client; since we cannot
> refer to classes in hadoop-hdfs from hadoop-hdfs-client, it has to be moved
> to the client module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238403#comment-16238403
 ] 

Hadoop QA commented on HDFS-12685:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-12685 does not apply to HDFS-9806. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12685 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12894005/HDFS-12685-HDFS-9806.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21950/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238400#comment-16238400
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. If hugo is not there the documentation won't be generated and it won't be 
displayed

-1

This is not an option. mvn site needs to be used to be consistent with the rest 
of Hadoop.  If you want to move Hadoop to something that isn't mvn site, that's 
a much bigger conversation and should definitely not be snuck into a patch.

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the
> documentation *if* hugo is on the PATH. If hugo is not there, the
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check
> the KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11246) FSNameSystem#logAuditEvent should be called outside the read or write locks

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238387#comment-16238387
 ] 

Hadoop QA commented on HDFS-11246:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 170 unchanged - 3 fixed = 170 total (was 173) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 176 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:8 |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-11246 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895950/HDFS-11246.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 687b323b42af 

[jira] [Commented] (HDFS-10419) Building HDFS on top of Ozone's storage containers

2017-11-03 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238379#comment-16238379
 ] 

Sanjay Radia commented on HDFS-10419:
-

HDFS-5389 describes one approach to building a NN that scales its namespace
better than the current NN.
It proposes caching only the working-set namespace in memory; also see [HUG -
Removing Namenode's
Limitation|https://www.slideshare.net/ydn/hadoop-meetup-hug-august-2013-removing-the-namenodes-memory-limitation].
Independent studies have also analysed LRU caching of HDFS metadata: [Metadata
Traces and Workload Models for Evaluating Big Storage
Systems|https://www.slideshare.net/ydn/hadoop-meetup-hug-august-2013-removing-the-namenodes-memory-limitation].
This approach works because, in spite of having large amounts of data (say,
data for the last five years), most of the data that is accessed is recent
(say, the last 3-9 months); hence the working set can fit in memory.
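To make the working-set idea concrete, here is a minimal, hypothetical
illustration using {{java.util.LinkedHashMap}} in access order; the actual
HDFS-5389 proposal is of course far more involved (persistent backing store,
partial namespace in memory, etc.):

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU cache: keeps only the most recently accessed entries in memory.
class InodeCacheSketch<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  InodeCacheSketch(int maxEntries) {
    super(16, 0.75f, true); // true = access order, i.e. LRU
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // Evict the least-recently-used entry once the working set exceeds the
    // in-memory budget; a real NN would re-read evicted inodes from a
    // backing store.
    return size() > maxEntries;
  }

  public static void main(String[] args) {
    InodeCacheSketch<String, String> cache = new InodeCacheSketch<>(2);
    cache.put("/recent/a", "inode-a");
    cache.put("/recent/b", "inode-b");
    cache.get("/recent/a");            // touch a; b is now the eldest
    cache.put("/recent/c", "inode-c"); // evicts /recent/b
    System.out.println(cache.keySet()); // [/recent/a, /recent/c]
  }
}
{code}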

> Building HDFS on top of Ozone's storage containers
> --
>
> Key: HDFS-10419
> URL: https://issues.apache.org/jira/browse/HDFS-10419
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Major
>
> In HDFS-7240, Ozone defines storage containers to store both the data and the 
> metadata. The storage container layer provides an object storage interface 
> and aims to manage data/metadata in a distributed manner. More details about 
> storage containers can be found in the design doc in HDFS-7240.
> HDFS can adopt the storage containers to store and manage blocks. The general 
> idea is:
> # Each block can be treated as an object and the block ID is the object's key.
> # Blocks will still be stored in DataNodes but as objects in storage 
> containers.
> # The block management work can be separated out of the NameNode and will be 
> handled by the storage container layer in a more distributed way. The 
> NameNode will only manage the namespace (i.e., files and directories).
> # For each file, the NameNode only needs to record a list of block IDs which 
> are used as keys to obtain real data from storage containers.
> # A new DFSClient implementation talks to both NameNode and the storage 
> container layer to read/write.
> HDFS, especially the NameNode, can get much better scalability from this 
> design. Currently the NameNode's heaviest workload comes from the block 
> management, which includes maintaining the block-DataNode mapping, receiving 
> full/incremental block reports, tracking block states (under/over/miss 
> replicated), and joining every writing pipeline protocol to guarantee the 
> data consistency. This work brings a high memory footprint and makes the
> NameNode suffer from GC. HDFS-5477 already proposes to convert BlockManager
> into a service. If we can build HDFS on top of the storage container layer, we not 
> only separate out the BlockManager from the NameNode, but also replace it 
> with a new distributed management scheme.
> The storage container work is currently in progress in HDFS-7240, and the 
> work proposed here is still in an experimental/exploring stage. We can do 
> this experiment in a feature branch so that people with interests can be 
> involved.
> A design doc will be uploaded later explaining more details.
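A conceptual sketch of the proposed split (the interfaces below are
hypothetical, purely for illustration): the NameNode records only a list of
block IDs per file, and a client resolves each ID through the container layer.

{code}
import java.util.List;

// Hypothetical container-layer interface: the block ID doubles as the
// object's key inside a storage container.
interface StorageContainerLayerSketch {
  byte[] readBlock(long blockId);
}

class FileReadSketch {
  private final StorageContainerLayerSketch containers;

  FileReadSketch(StorageContainerLayerSketch containers) {
    this.containers = containers;
  }

  // blockIds is all the NameNode would need to record for a file.
  int readFile(List<Long> blockIds) {
    int totalBytes = 0;
    for (long id : blockIds) {
      totalBytes += containers.readBlock(id).length; // fetch object by key
    }
    return totalBytes;
  }
}
{code}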



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-03 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238376#comment-16238376
 ] 

Nanda kumar commented on HDFS-12756:


Thanks [~anu] for the update.
The patch no longer applies; can you please rebase and upload a new one?

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch
>
>
> If we have the datanode ID in the HB responses and the commands sent to the
> datanode, we will be able to do additional sanity checking on the datanode
> before executing the command. This is also very helpful in creating a
> MiniOzoneCluster with 1000s of simulated nodes, as needed for scale-based
> unit tests of SCM.
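A hedged sketch of the kind of sanity check the description suggests (the
actual protobuf fields and command types in the patch may differ):

{code}
// Hypothetical names; illustrates dropping commands addressed to a
// different datanode before executing them.
class DatanodeCommandCheckSketch {
  static final class Command {
    final String targetDatanodeUuid; // datanode ID carried in the HB response
    final Runnable action;

    Command(String targetDatanodeUuid, Runnable action) {
      this.targetDatanodeUuid = targetDatanodeUuid;
      this.action = action;
    }
  }

  private final String localDatanodeUuid;

  DatanodeCommandCheckSketch(String localDatanodeUuid) {
    this.localDatanodeUuid = localDatanodeUuid;
  }

  void execute(Command cmd) {
    // With thousands of simulated nodes in one MiniOzoneCluster, this check
    // catches commands that were mis-routed to the wrong (simulated) node.
    if (!localDatanodeUuid.equals(cmd.targetDatanodeUuid)) {
      System.err.println("Ignoring command addressed to "
          + cmd.targetDatanodeUuid);
      return;
    }
    cmd.action.run();
  }
}
{code}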



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-11-03 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238371#comment-16238371
 ] 

Nanda kumar commented on HDFS-12549:


Thanks [~xyao] for the review.

bq. Please separate the broad refactoring change such as class/package rename 
in a separate JIRAs so that we can focus on reviewing core changes for this 
JIRA.
This was done in this jira because the REST client uses {{OzoneException}},
which lives in hadoop-hdfs. We cannot refer to classes in hadoop-hdfs from
hadoop-hdfs-client, hence the refactoring.
I have created HDFS-12774 for this; it has to be fixed before HDFS-12549.

I will upload a new patch shortly addressing the other review comments.

> Ozone: OzoneClient: Support for REST protocol
> -
>
> Key: HDFS-12549
> URL: https://issues.apache.org/jira/browse/HDFS-12549
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-12549-HDFS-7240.000.patch, 
> HDFS-12549-HDFS-7240.001.patch, HDFS-12549-HDFS-7240.002.patch, 
> HDFS-12549-HDFS-7240.003.patch
>
>
> Support for REST protocol in OzoneClient. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238362#comment-16238362
 ] 

Íñigo Goiri commented on HDFS-7060:
---

[^HDFS-7060.003.patch] removed both {{statsLock}} and the synchronization on 
{{dataset}}.
I'm all in for removing the locks just for heartbeating (we've seen similar
issues internally).
However, what are the implications here?
* {{BlockPoolSlice}} maintains the DFS used and the number of blocks. There
might be some minor drift, but the data should converge; I cannot foresee
corruption here.
* {{FsVolumeImpl}} seems to mostly rely on {{BlockPoolSlice}}, and the rest
also seems thread safe. So again, some possible drift but no corruption.

Any other implications of the removal of these two synchronization points?
Is it worth making these assumptions explicit in a comment in each of these 
classes?
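For illustration, a minimal sketch of the lock-avoidance idea under discussion,
assuming per-volume counters are kept in atomics so the heartbeat thread never
takes the dataset monitor (names are hypothetical; the real
{{FsVolumeImpl}}/{{BlockPoolSlice}} code differs):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical per-volume stats holder; not the actual FsVolumeImpl code.
class VolumeStatsSketch {
  private final AtomicLong dfsUsed = new AtomicLong();
  private final AtomicLong numBlocks = new AtomicLong();

  // Writer path (e.g. a block is finalized): atomic updates, no monitor.
  void onBlockFinalized(long bytes) {
    dfsUsed.addAndGet(bytes);
    numBlocks.incrementAndGet();
  }

  // Heartbeat path: point-in-time reads. The two values may be momentarily
  // inconsistent with each other (the "drift" above), but they converge and
  // nothing can be corrupted.
  long getDfsUsed() {
    return dfsUsed.get();
  }

  long getNumBlocks() {
    return numBlocks.get();
  }
}
{code}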

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Xinwei Qin 
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
> HDFS-7060.001.patch, HDFS-7060.003.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under heavy load of writes:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HDFS-12774) Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to hadoop-hdfs-client

2017-11-03 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12774:
---
Description: {{OzoneException}} has to be used in hadoop-hdfs-client; since
we cannot refer to classes in hadoop-hdfs from hadoop-hdfs-client, it has to
be moved to the client module.  (was: {{OzoneException}} has to be used in 
hadoop-hdfs-client, since we cannot refer classes in hadoop-hdfs from 
hadoop-hdfs-client moving it to client module.)

> Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to 
> hadoop-hdfs-client
> 
>
> Key: HDFS-12774
> URL: https://issues.apache.org/jira/browse/HDFS-12774
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> {{OzoneException}} has to be used in hadoop-hdfs-client; since we cannot
> refer to classes in hadoop-hdfs from hadoop-hdfs-client, it has to be moved
> to the client module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12774) Ozone: OzoneClient: Moving OzoneException from hadoop-hdfs to hadoop-hdfs-client

2017-11-03 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12774:
--

 Summary: Ozone: OzoneClient: Moving OzoneException from 
hadoop-hdfs to hadoop-hdfs-client
 Key: HDFS-12774
 URL: https://issues.apache.org/jira/browse/HDFS-12774
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar
Priority: Major


{{OzoneException}} has to be used in hadoop-hdfs-client; since we cannot refer
to classes in hadoop-hdfs from hadoop-hdfs-client, it is being moved to the
client module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7240) Object store in HDFS

2017-11-03 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HDFS-7240:
---
Attachment: HDFS Scalability and Ozone.pdf

I have added a document that explains a design for scaling HDFS and how Ozone 
paves the way towards the full solution.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
>Priority: Major
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238345#comment-16238345
 ] 

Virajith Jalaparti commented on HDFS-12685:
---

[~ehiggs], can you check if this patch avoids the exception?
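For context, the {{IllegalArgumentException}} in the trace below comes from
{{java.io.File}}'s URI constructor, which only accepts {{file:}} URIs. A
hypothetical guard of the kind a fix could apply before converting a volume
URI (the method names here are made up for illustration):

{code}
import java.io.File;
import java.net.URI;

class ScanTargetSketch {
  // Returns a File only for local file: URIs; PROVIDED volumes are backed by
  // remote URIs and must not be handed to new File(URI).
  static File toFileIfLocal(URI uri) {
    if (!"file".equalsIgnoreCase(uri.getScheme())) {
      return null;
    }
    return new File(uri);
  }

  public static void main(String[] args) {
    System.out.println(toFileIfLocal(URI.create("file:///data/dn1")));    // /data/dn1
    System.out.println(toFileIfLocal(URI.create("s3a://bucket/blocks"))); // null
  }
}
{code}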

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Commented] (HDFS-12771) Add genstamp and block size to metasave Corrupt blocks list

2017-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238343#comment-16238343
 ] 

Hudson commented on HDFS-12771:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13187 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13187/])
HDFS-12771. Add genstamp and block size to metasave Corrupt blocks list. 
(kihwal: rev 4d2dce40bbe5242953774e6a2fc0dc9111cf5ed0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> Add genstamp and block size to metasave Corrupt blocks list
> ---
>
> Key: HDFS-12771
> URL: https://issues.apache.org/jira/browse/HDFS-12771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Minor
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HDFS-12771.001.patch
>
>
> For corrupt blocks in metasave, adding genstamp and blocksize can be useful 
> instead of just the blockIds.
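Purely illustrative, the shape of a metasave line carrying the generation
stamp and size next to the block ID (the exact format produced by the patch
may differ):

{code}
// Hypothetical formatting helper; not the actual BlockManager.metaSave() code.
class MetasaveLineSketch {
  static String corruptBlockLine(String blockId, long genStamp, long size) {
    return String.format("%s genstamp=%d size=%d", blockId, genStamp, size);
  }

  public static void main(String[] args) {
    // e.g. blk_1073741825 genstamp=1001 size=134217728
    System.out.println(corruptBlockLine("blk_1073741825", 1001L, 134217728L));
  }
}
{code}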



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12685:
--
Status: Patch Available  (was: Open)

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch, 
> HDFS-11902-HDFS-9806.012.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12685:
--
Status: Open  (was: Patch Available)

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Assigned] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12685:
-

Assignee: Virajith Jalaparti

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12685-HDFS-9806.001.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Commented] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238337#comment-16238337
 ] 

Hadoop QA commented on HDFS-12745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
4m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.scm.container.TestContainerMapping |
|   | hadoop.ozone.scm.node.TestContainerPlacement |
|   | hadoop.cblock.TestLocalBlockCache |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.ozone.scm.TestContainerSmallFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12745 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-11-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238328#comment-16238328
 ] 

Virajith Jalaparti commented on HDFS-11902:
---

The checkstyle issues are from {{DFSConfigKeys}} and the builder pattern.
{{blockaliasmap/package-info.java}} does have a javadoc. Committing v012 of
the patch to the HDFS-9806 branch.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch, 
> HDFS-11902-HDFS-9806.012.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12772) RBF: Track Router states

2017-11-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238318#comment-16238318
 ] 

Íñigo Goiri commented on HDFS-12772:


Currently it has:
* Router state (including State Store interface).
* UI changes with the Routers and their states (including metrics).
* Router safe mode. When the Router cannot reach the State Store, it goes into
a safe mode where it doesn't serve write requests; this also happens at
startup.
This is a little longer than I expected; I can split it if needed.
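A compressed, hypothetical sketch of the safe-mode behaviour described above
(reject writes while the State Store is unreachable, starting in safe mode at
startup); the names below are stand-ins, not the Router code in the patch:

{code}
// Hypothetical Router safe-mode tracker.
class RouterSafeModeSketch {
  interface StateStore {
    boolean isAvailable();
  }

  private final StateStore stateStore;
  private volatile boolean safeMode = true; // start in safe mode at startup

  RouterSafeModeSketch(StateStore stateStore) {
    this.stateStore = stateStore;
  }

  // Invoked periodically by a monitor/heartbeat thread.
  void refresh() {
    safeMode = !stateStore.isAvailable();
  }

  // Called on every write RPC; reads are still served in safe mode.
  void checkWriteAllowed() {
    if (safeMode) {
      throw new IllegalStateException(
          "Router is in safe mode and cannot serve write requests");
    }
  }
}
{code}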

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Normal
> Attachments: HDFS-12772.000.patch
>
>
> To monitor the state of the cluster, we should track the state of the 
> routers. This should be exposed in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12772) RBF: Track Router states

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12772:
---
Attachment: HDFS-12772.000.patch

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Normal
> Attachments: HDFS-12772.000.patch
>
>
> To monitor the state of the cluster, we should track the state of the 
> routers. This should be exposed in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12772) RBF: Track Router states

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12772:
---
Assignee: Íñigo Goiri
  Status: Patch Available  (was: Open)

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Normal
> Attachments: HDFS-12772.000.patch
>
>
> To monitor the state of the cluster, we should track the state of the 
> routers. This should be exposed in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12771) Add genstamp and block size to metasave Corrupt blocks list

2017-11-03 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12771:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   3.0.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-3.0. Thanks for working on this, Kuhu.

> Add genstamp and block size to metasave Corrupt blocks list
> ---
>
> Key: HDFS-12771
> URL: https://issues.apache.org/jira/browse/HDFS-12771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Minor
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HDFS-12771.001.patch
>
>
> For corrupt blocks in metasave, adding genstamp and blocksize can be useful 
> instead of just the blockIds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12771) Add genstamp and block size to metasave Corrupt blocks list

2017-11-03 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238289#comment-16238289
 ] 

Kihwal Lee commented on HDFS-12771:
---

+1 

> Add genstamp and block size to metasave Corrupt blocks list
> ---
>
> Key: HDFS-12771
> URL: https://issues.apache.org/jira/browse/HDFS-12771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Minor
> Attachments: HDFS-12771.001.patch
>
>
> For corrupt blocks in metasave, adding genstamp and blocksize can be useful 
> instead of just the blockIds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12771) Add genstamp and block size to metasave Corrupt blocks list

2017-11-03 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238268#comment-16238268
 ] 

Kuhu Shukla commented on HDFS-12771:


[~kihwal], could you share any review comments for this patch? Thank you!


> Add genstamp and block size to metasave Corrupt blocks list
> ---
>
> Key: HDFS-12771
> URL: https://issues.apache.org/jira/browse/HDFS-12771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Minor
> Attachments: HDFS-12771.001.patch
>
>
> For corrupt blocks in metasave, adding genstamp and blocksize can be useful 
> instead of just the blockIds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238265#comment-16238265
 ] 

Hudson commented on HDFS-11467:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13186 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13186/])
HDFS-11467. Support ErasureCoding section in OIV XML/ReverseXML. (xiao: rev 
299d38295d61e3ad154814b680558969449d50fe)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java


> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0
>
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch, 
> HDFS-11467.003.patch
>
>
> As discussed in HDFS-7859, after ErasureCoding section is added into fsimage, 
> we would like to also support exporting this section into an XML back and 
> forth using the OIV tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12754) Lease renewal can hit a deadlock

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238244#comment-16238244
 ] 

Hadoop QA commented on HDFS-12754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 49s{color} 
| {color:red} hadoop-hdfs-project generated 1 new + 426 unchanged - 0 fixed = 
427 total (was 426) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
184 unchanged - 0 fixed = 185 total (was 184) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895825/HDFS-12754.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ef5a9dac2fec 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Assigned] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-9240:


Assignee: Virajith Jalaparti  (was: Xiaoyu Yao)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.
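
For readers unfamiliar with the pattern: the telescoping overloads are replaced 
by one fluent nested Builder. A minimal, self-contained sketch of the idea 
using a simplified stand-in class (the actual patch covers all BlockLocation 
fields, such as names, topology paths, and storage types):

{code}
// Illustrative stand-in; not the real org.apache.hadoop.fs.BlockLocation.
public class SimpleBlockLocation {
  private final String[] hosts;
  private final long offset;
  private final long length;

  private SimpleBlockLocation(Builder b) {
    this.hosts = b.hosts;
    this.offset = b.offset;
    this.length = b.length;
  }

  public static class Builder {
    private String[] hosts = new String[0];
    private long offset;
    private long length;

    public Builder setHosts(String[] hosts) { this.hosts = hosts; return this; }
    public Builder setOffset(long offset) { this.offset = offset; return this; }
    public Builder setLength(long length) { this.length = length; return this; }
    public SimpleBlockLocation build() { return new SimpleBlockLocation(this); }
  }
}
{code}

A single call site such as 
{{new SimpleBlockLocation.Builder().setOffset(0L).setLength(134217728L).build()}} 
then replaces choosing among the 8 constructor overloads.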



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11246) FSNameSystem#logAuditEvent should be called outside the read or write locks

2017-11-03 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-11246:
---
Attachment: HDFS-11246.004.patch

Revised patch addressing comments from Daryn.

> FSNameSystem#logAuditEvent should be called outside the read or write locks
> ---
>
> Key: HDFS-11246
> URL: https://issues.apache.org/jira/browse/HDFS-11246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Attachments: HDFS-11246.001.patch, HDFS-11246.002.patch, 
> HDFS-11246.003.patch, HDFS-11246.004.patch
>
>
> {code}
> readLock();
> boolean success = true;
> ContentSummary cs;
> try {
>   checkOperation(OperationCategory.READ);
>   cs = FSDirStatAndListingOp.getContentSummary(dir, src);
> } catch (AccessControlException ace) {
>   success = false;
>   logAuditEvent(success, operationName, src);
>   throw ace;
> } finally {
>   readUnlock(operationName);
> }
> {code}
> It would be nice to have audit logging outside the lock, especially in 
> scenarios where applications hammer a given operation repeatedly.
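
One possible shape of the fix, sketched against the snippet above (a hedged 
sketch only, not necessarily the exact change in the attached patches): record 
the outcome while holding the lock, but defer the audit call until after 
readUnlock:

{code}
readLock();
boolean success = true;
ContentSummary cs;
try {
  checkOperation(OperationCategory.READ);
  cs = FSDirStatAndListingOp.getContentSummary(dir, src);
} catch (AccessControlException ace) {
  success = false;
  throw ace;
} finally {
  readUnlock(operationName);
  // The audit event is now emitted after the lock is released,
  // on both the success and the AccessControlException paths.
  logAuditEvent(success, operationName, src);
}
{code}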



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-11-03 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11467:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0
>
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch, 
> HDFS-11467.003.patch
>
>
> As discussed in HDFS-7859, now that the ErasureCoding section is included in 
> the fsimage, we would also like to support round-tripping this section 
> through XML using the OIV tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-11-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238187#comment-16238187
 ] 

Xiao Chen commented on HDFS-11467:
--

Committed to trunk and branch-3.0.

Thanks [~HuafengWang] for contributing the patch, [~drankye] and others for the 
reviews!

> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0
>
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch, 
> HDFS-11467.003.patch
>
>
> As discussed in HDFS-7859, now that the ErasureCoding section is included in 
> the fsimage, we would also like to support round-tripping this section 
> through XML using the OIV tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-11-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238183#comment-16238183
 ] 

Xiao Chen commented on HDFS-11467:
--

+1 from me too. I ran the failed tests locally and they all passed. Committing 
this.


> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch, 
> HDFS-11467.003.patch
>
>
> As discussed in HDFS-7859, now that the ErasureCoding section is included in 
> the fsimage, we would also like to support round-tripping this section 
> through XML using the OIV tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238181#comment-16238181
 ] 

stack commented on HDFS-12711:
--

bq. For now, though, I'm sort of tired of looking at this problem and will go 
work on something else for a while.

Thanks for putting Hadoop in a box.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12771) Add genstamp and block size to metasave Corrupt blocks list

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238175#comment-16238175
 ] 

Hadoop QA commented on HDFS-12771:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 82m 
22s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12771 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895826/HDFS-12771.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6603234d905d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c417284 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21944/testReport/ |
| Max. process+thread count | 3908 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21944/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add genstamp and block size to metasave Corrupt blocks list
> ---
>
> Key: HDFS-12771
> 

[jira] [Comment Edited] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238168#comment-16238168
 ] 

Virajith Jalaparti edited comment on HDFS-9240 at 11/3/17 6:50 PM:
---

Thanks for taking a look [~xyao]. Posting new patch (v002) that fixes the 
NoWhitespaceAfter and "longer than 80 characters" checkstyle issues. The others 
are from the builder pattern.


was (Author: virajith):
Thanks for taking a look [~xyao]. Posting new patch (v002) that fixes the 
NoWhitespaceAfter and "longer than 80 characters" checkstyle issues. The others 
from the builder pattern.

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Status: Patch Available  (was: Open)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238168#comment-16238168
 ] 

Virajith Jalaparti commented on HDFS-9240:
--

Thanks for taking a look [~xyao]. Posting new patch (v002) that fixes the 
NoWhitespaceAfter and "longer than 80 characters" checkstyle issues. The others 
are from the builder pattern.

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Status: Open  (was: Patch Available)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Attachment: HDFS-9240.002.patch

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12773) RBF: Improve State Store FS implementation

2017-11-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238117#comment-16238117
 ] 

Íñigo Goiri commented on HDFS-12773:


In addition to the safer FS implementation:
* Added the test metrics missing from HDFS-12335.
* Removed {{get(Class clazz, String sub)}} from 
{{StateStoreRecordOperations}} as it's not really needed.

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Normal
> Attachments: HDFS-12773.000.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.
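
One common way to tolerate concurrent writers in a filesystem-backed store is 
to write each record file to a temporary path and atomically rename it into 
place, so readers never observe a partial write. A hedged sketch of that 
technique against the Hadoop FileSystem API (class and path names are 
illustrative, not taken from the attached patch):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AtomicRecordWriter {
  /** Write to a temp file, then rename into place; last writer wins. */
  public static boolean writeAtomically(FileSystem fs, Path target, String data)
      throws IOException {
    Path tmp = new Path(target.getParent(),
        target.getName() + ".tmp." + Thread.currentThread().getId());
    try (FSDataOutputStream out = fs.create(tmp, true)) {
      out.write(data.getBytes(StandardCharsets.UTF_8));
    }
    // HDFS rename() won't overwrite an existing file, so remove the old
    // record first; note the delete+rename pair still leaves a tiny window.
    fs.delete(target, false);
    return fs.rename(tmp, target);
  }
}
{code}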



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12773:
---
Attachment: HDFS-12773.000.patch

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Normal
> Attachments: HDFS-12773.000.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12773:
---
Priority: Normal  (was: Trivial)

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Normal
> Attachments: HDFS-12773.000.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12773:
---
Assignee: Íñigo Goiri
  Status: Patch Available  (was: Open)

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Normal
> Attachments: HDFS-12773.000.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238107#comment-16238107
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Thanks!

I'll have to play around with sending a SIGQUIT. The other thing is that some 
process types may need different types of signals.  It might be useful to be 
able to define the "signal path"... e.g., surefire processes get QUIT -> TERM 
-> KILL.

The other known item is having the archiver save off the stack trace logs 
(hs_err_pidXX.log files) we do get. That's just a settings thing that I've 
been too busy to set up in Jenkins. 

For now, though, I'm sort of tired of looking at this problem and will go work 
on something else for a while. It's at the point where the issues are firmly 
contained from the ASF build infra perspective, and it rests solely in the 
hands of the Hadoop community to fix their unit tests (or even base code) to 
be less broken. 
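
To make the "signal path" idea concrete, a purely illustrative sketch (this is 
not how Yetus actually reaps processes; it assumes a Unix {{kill}} binary on 
the PATH):

{code}
public class SignalPath {
  /** Escalating signal path: QUIT -> TERM -> KILL with a grace period. */
  public static void escalate(long pid) throws Exception {
    for (String sig : new String[] {"QUIT", "TERM", "KILL"}) {
      new ProcessBuilder("kill", "-" + sig, Long.toString(pid)).start().waitFor();
      Thread.sleep(10000L);  // grace period before escalating further
      // "kill -0" probes liveness: a nonzero exit means the process is gone.
      if (new ProcessBuilder("kill", "-0", Long.toString(pid))
          .start().waitFor() != 0) {
        return;
      }
    }
  }
}
{code}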
 

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12772) RBF: Track Router states

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12772:
---
Affects Version/s: 3.0.0

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Priority: Normal
>
> To monitor the state of the cluster, we should track the state of the 
> routers. This should be exposed in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12763) DataStreamer should heartbeat during flush

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238081#comment-16238081
 ] 

Hadoop QA commented on HDFS-12763:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 78 
unchanged - 1 fixed = 78 total (was 79) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12763 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895473/HDFS-12763.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ab88bd59967c 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Updated] (HDFS-12772) RBF: Track Router states

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12772:
---
Description: To monitor the state of the cluster, we should track the state 
of the routers. This should be exposed in the UI.  (was: _emphasized text_)

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Normal
>
> To monitor the state of the cluster, we should track the state of the 
> routers. This should be exposed in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12772) RBF: Track Router states

2017-11-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12772:
---
Description: _emphasized text_

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Normal
>
> _emphasized text_



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


