[jira] [Commented] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293667#comment-16293667
 ] 

genericqa commented on HDFS-12925:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
16s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}165m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}240m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | 

[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293664#comment-16293664
 ] 

genericqa commented on HDFS-12292:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 32s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882659/HDFS-12292-004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9d43a98f894e 

[jira] [Created] (HDFS-12933) Improve logging when DFSStripedOutputStream failed to read some blocks

2017-12-15 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-12933:


 Summary: Improve logging when DFSStripedOutputStream failed to 
read some blocks
 Key: HDFS-12933
 URL: https://issues.apache.org/jira/browse/HDFS-12933
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Reporter: Xiao Chen
Priority: Minor


Currently, if there are fewer DataNodes than the erasure coding policy's total (# of 
data blocks + # of parity blocks), the client sees this:

{noformat}
09:18:24 17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Cannot allocate parity 
block(index=13, policy=RS-10-4-1024k). Not enough datanodes? Exclude nodes=[]
09:18:24 17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Block group <1> has 1 
corrupt blocks.
{noformat}

The 1st line is good. The 2nd line may be confusing to end users: the blocks 
were not read, but they are not necessarily corrupt. We should investigate the 
error and make the message more general / accurate, maybe something like 
'failed to read x blocks'.
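
For illustration, a minimal sketch of the kind of message change being 
proposed; the class, field, and variable names below are hypothetical, not the 
actual {{DFSStripedOutputStream}} internals:

{code}
// Hypothetical sketch only: illustrates the proposed wording, not the
// real DFSStripedOutputStream code.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StripedWarnSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StripedWarnSketch.class);

  void reportFailedBlocks(long blockGroupId, int failedBlocks) {
    // Say what happened from the writer's point of view instead of the
    // ambiguous "has N corrupt blocks".
    LOG.warn("Block group <{}>: failed to read {} block(s); they may be"
        + " unavailable rather than corrupt.", blockGroupId, failedBlocks);
  }
}
{code}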




[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293654#comment-16293654
 ] 

genericqa commented on HDFS-12904:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 633 unchanged - 0 fixed = 635 total (was 633) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
|
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12904 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902475/HDFS-12904.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  

[jira] [Commented] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293597#comment-16293597
 ] 

Hudson commented on HDFS-12779:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12779. [READ] Allow cluster id to be specified to the Image (cdouglas: rev 
6cd80b2521e6283036d8c7058d8e452a93ff8e4b)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java


> [READ] Allow cluster id to be specified to the Image generation tool
> 
>
> Key: HDFS-12779
> URL: https://issues.apache.org/jira/browse/HDFS-12779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Trivial
> Attachments: HDFS-12779-HDFS-9806.001.patch
>
>
> Setting the cluster id for the FSImage generated for PROVIDED files is 
> required when the Namenode for PROVIDED files is expected to run in 
> federation with other Namenodes that manage local storage/data.




[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293614#comment-16293614
 ] 

Hudson commented on HDFS-12912:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12912. [READ] Fix configuration and implementation of LevelDB-based 
(cdouglas: rev 80c3fec3a13c41051daaae42e5c9a9fedf5c7ee7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/TestInMemoryAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/ITestInMemoryAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDbMockAliasMapClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md


> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
> created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these alias maps must be specified using local 
> paths and not as URIs, contrary to what the documentation 
> ({{HdfsProvidedStorage.md}}) currently shows.
> This JIRA is to fix these issues. 
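> 
> For illustration, a minimal sketch of the direction the first fix could take, 
> assuming the iq80/leveldbjni API used elsewhere in HDFS; the class and method 
> names are illustrative, not the actual patch:
> {code}
> // Sketch only: create the store directory up front so open() does not
> // fail when it is absent.
> import org.iq80.leveldb.DB;
> import org.iq80.leveldb.Options;
> import static org.fusesource.leveldbjni.JniDBFactory.factory;
> import java.io.File;
> import java.io.IOException;
> 
> class LevelDbOpenSketch {
>   static DB openOrCreate(File dir) throws IOException {
>     if (!dir.exists() && !dir.mkdirs()) {
>       throw new IOException("Could not create alias map dir " + dir);
>     }
>     Options options = new Options();
>     options.createIfMissing(true); // create the leveldb store if absent
>     return factory.open(dir, options);
>   }
> }
> {code}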




[jira] [Commented] (HDFS-12605) [READ] TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails after rebase

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293591#comment-16293591
 ] 

Hudson commented on HDFS-12605:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12605. [READ] (cdouglas: rev d6a9a8997339939b59ce36246225f7cc45b21da5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails 
> after rebase
> -
>
> Key: HDFS-12605
> URL: https://issues.apache.org/jira/browse/HDFS-12605
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12605-HDFS-9806.001.patch
>
>
> {{TestNameNodeProvidedImplementation#testProvidedDatanodeFailures}} fails 
> after rebase with the following error:
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.net.DFSTopologyNodeImpl.decStorageTypeCount(DFSTopologyNodeImpl.java:127)
>   at 
> org.apache.hadoop.hdfs.net.DFSTopologyNodeImpl.remove(DFSTopologyNodeImpl.java:318)
>   at 
> org.apache.hadoop.hdfs.net.DFSTopologyNodeImpl.remove(DFSTopologyNodeImpl.java:336)
>   at 
> org.apache.hadoop.net.NetworkTopology.remove(NetworkTopology.java:222)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.removeDatanode(DatanodeManager.java:712)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.removeDeadDatanode(DatanodeManager.java:755)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager.heartbeatCheck(HeartbeatManager.java:407)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil.noticeDeadDatanode(BlockManagerTestUtil.java:213)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeProvidedImplementation.testProvidedDatanodeFailures(TestNameNodeProvidedImplementation.java:471)
> {code}
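> 
> For illustration, a defensive guard of the kind that avoids such an NPE; this 
> is a simplified stand-in, not the actual {{DFSTopologyNodeImpl}} fix:
> {code}
> // Illustrative only: decrement a per-storage-type counter defensively
> // when the map entry may be absent.
> import java.util.EnumMap;
> 
> class StorageTypeCountSketch {
>   enum StorageType { DISK, SSD, PROVIDED }
> 
>   private final EnumMap<StorageType, Integer> counts =
>       new EnumMap<>(StorageType.class);
> 
>   void decStorageTypeCount(StorageType type) {
>     Integer count = counts.get(type);
>     if (count == null) {
>       // Nothing was ever counted for this type on this node; skip
>       // instead of dereferencing null.
>       return;
>     }
>     if (count <= 1) {
>       counts.remove(type);
>     } else {
>       counts.put(type, count - 1);
>     }
>   }
> }
> {code}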




[jira] [Commented] (HDFS-12874) [READ] Documentation for provided storage

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293611#comment-16293611
 ] 

Hudson commented on HDFS-12874:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12874. Documentation for provided storage. Contributed by Virajith 
(cdouglas: rev 2298f2d76b2cafd84c8f7421ae792336d6f2f37a)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (add) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md


> [READ] Documentation for provided storage
> -
>
> Key: HDFS-12874
> URL: https://issues.apache.org/jira/browse/HDFS-12874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Douglas
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12874-HDFS-9806.00.patch, 
> HDFS-12874-HDFS-9806.01.patch
>
>
> The configuration and deployment of provided storage should be documented for 
> end-users.




[jira] [Commented] (HDFS-12905) [READ] Handle decommissioning and under-maintenance Datanodes with Provided storage.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293612#comment-16293612
 ] 

Hudson commented on HDFS-12905:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12905. [READ] Handle decommissioning and under-maintenance (cdouglas: rev 
0f6aa9564cbe0812a8cab36d999e353269dd6bc9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Handle decommissioning and under-maintenance Datanodes with Provided 
> storage.
> 
>
> Key: HDFS-12905
> URL: https://issues.apache.org/jira/browse/HDFS-12905
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12905-HDFS-9806.001.patch, 
> HDFS-12905-HDFS-9806.002.patch
>
>
> {{ProvidedStorageMap}} doesn't keep track of the state of the datanodes with 
> Provided storage. As a result, it can return nodes that are being 
> decommissioned or under-maintenance even when live datanodes exist. This JIRA 
> is to prefer live datanodes to datanodes in other states.
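> 
> A minimal sketch of the stated policy (prefer live nodes, fall back to the 
> others only when none are live); the {{Node}} type here is a simplified 
> stand-in for the HDFS datanode descriptor, not the actual patch:
> {code}
> // Illustrative only, not the ProvidedStorageMap change itself.
> import java.util.ArrayList;
> import java.util.List;
> 
> class PreferLiveSketch {
>   interface Node {
>     boolean isDecommissionInProgress();
>     boolean isInMaintenance();
>   }
> 
>   static <T extends Node> List<T> preferLive(List<T> candidates) {
>     List<T> live = new ArrayList<>();
>     for (T n : candidates) {
>       if (!n.isDecommissionInProgress() && !n.isInMaintenance()) {
>         live.add(n);
>       }
>     }
>     // Fall back to decommissioning/under-maintenance nodes only when
>     // no live datanode is available.
>     return live.isEmpty() ? candidates : live;
>   }
> }
> {code}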




[jira] [Commented] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293602#comment-16293602
 ] 

Hudson commented on HDFS-12685:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12685. [READ] FsVolumeImpl exception when scanning Provided storage 
(cdouglas: rev cc933cba77c147153e463415fc192cee2d53a1ef)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java


> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch, HDFS-12685-HDFS-9806.003.patch, 
> HDFS-12685-HDFS-9806.004.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29 
>   
>  
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"  
>   
>  
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   
>   
> 
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)   
>   
>   
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
>   
>  
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
>   
>   
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
>   
>  
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
>   
>
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)   
>   
>
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)   
>   
>   
> 
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   
> 
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   
>
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   
> 

[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293600#comment-16293600
 ] 

Hudson commented on HDFS-12778:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12778. [READ] Report multiple locations for PROVIDED blocks (cdouglas: rev 
3d3be87e301d9f8ab1a220bc5dbeae0f032c5a86)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications which 
> typically expect 3 locations per block. We need to return multiple Datanodes 
> for each PROVIDED block for better application performance/resilience. 
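> 
> A minimal sketch of the idea, assuming a list of datanodes that can serve the 
> Provided block; names are illustrative, not the HDFS implementation:
> {code}
> // Illustrative only: report up to `replicas` datanodes as locations for
> // a PROVIDED block instead of always returning one.
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
> 
> class ProvidedLocationsSketch {
>   static List<String> chooseLocations(List<String> providedDatanodes,
>                                       int replicas) {
>     List<String> shuffled = new ArrayList<>(providedDatanodes);
>     // Randomize so reads spread across the datanodes with Provided volumes.
>     Collections.shuffle(shuffled);
>     return shuffled.subList(0, Math.min(replicas, shuffled.size()));
>   }
> }
> {code}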




[jira] [Commented] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293595#comment-16293595
 ] 

Hudson commented on HDFS-12789:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12789. [READ] Image generation tool does not close an opened stream 
(cdouglas: rev 87dc026beec5d69a84771631ebca5fadb2f7195b)
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java


> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}
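> 
> For illustration, the generic pattern that discharges such an obligation when 
> the stream does not need to outlive the method; this is not the actual 
> {{ImageWriter}} change:
> {code}
> // Illustrative only: the stream is closed on every path, including
> // exceptions, so FindBugs sees the obligation discharged.
> import java.io.IOException;
> import java.io.OutputStream;
> import java.nio.file.Files;
> import java.nio.file.Paths;
> 
> class CloseStreamSketch {
>   static void writeAll(String path, byte[] bytes) throws IOException {
>     try (OutputStream out = Files.newOutputStream(Paths.get(path))) {
>       out.write(bytes);
>     }
>   }
> }
> {code}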




[jira] [Commented] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293609#comment-16293609
 ] 

Hudson commented on HDFS-12887:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12887. [READ] Allow Datanodes with Provided volumes to start when 
(cdouglas: rev 71ec170107e67e42cdbc5052c3f7b23c64751835)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java


> [READ] Allow Datanodes with Provided volumes to start when blocks with the 
> same id exist locally
> 
>
> Key: HDFS-12887
> URL: https://issues.apache.org/jira/browse/HDFS-12887
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12887-HDFS-9806.001.patch
>
>
> Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception when an 
> existing block in the volume map has the same id as a Provided block.
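> 
> A simplified sketch of tolerating the duplicate id instead of failing 
> startup; the map types and names are stand-ins, not the actual patch:
> {code}
> // Illustrative only, not ProvidedVolumeImpl.getVolumeMap itself.
> import java.util.Map;
> 
> class VolumeMapSketch {
>   static void addBlock(Map<Long, String> volumeMap, long blockId,
>                        String replicaPath) {
>     if (volumeMap.containsKey(blockId)) {
>       // A replica with the same id is already in the map; keep it and
>       // skip this one instead of throwing and aborting startup.
>       return;
>     }
>     volumeMap.put(blockId, replicaPath);
>   }
> }
> {code}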




[jira] [Commented] (HDFS-12712) [9806] Code style cleanup

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293615#comment-16293615
 ] 

Hudson commented on HDFS-12712:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12712. [9806] Code style cleanup (cdouglas: rev 
8239e3afb31d3c4485817d4b8b8b195b554acbe7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/ITestProvidedImplementation.java
* (delete) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* (edit) hadoop-tools/hadoop-fs2img/pom.xml
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java


> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.




[jira] [Commented] (HDFS-12885) Add visibility/stability annotations

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293605#comment-16293605
 ] 

Hudson commented on HDFS-12885:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12885. Add visibility/stability annotations. Contributed by Chris 
(cdouglas: rev a027055dd2bf5009fe272e9ceb08305bd0a8cc31)
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/BlockResolver.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InMemoryAliasMapProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockAlias.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockMultiReplicaResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java


> Add visibility/stability annotations
> 
>
> Key: HDFS-12885
> URL: https://issues.apache.org/jira/browse/HDFS-12885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12885-HDFS-9806.00.patch, 
> HDFS-12885-HDFS-9806.001.patch
>
>
> Classes added in HDFS-9806 should include stability/visibility annotations 
> (HADOOP-5073)
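> 
> For reference, applying the Hadoop annotations is a one-line change per 
> class; the class name below is illustrative:
> {code}
> import org.apache.hadoop.classification.InterfaceAudience;
> import org.apache.hadoop.classification.InterfaceStability;
> 
> @InterfaceAudience.Private
> @InterfaceStability.Unstable
> class ProvidedStorageSketch {
>   // Marked Private/Unstable: internal to HDFS and free to change.
> }
> {code}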




[jira] [Commented] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293596#comment-16293596
 ] 

Hudson commented on HDFS-12776:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12776. [READ] Increasing replication for PROVIDED files should (cdouglas: 
rev 90d1b47a2a400e07e2b6b812c4bbd9c4f2877786)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, setting the replication only works when the target 
> datanode does not have a PROVIDED volume. In a cluster where all Datanodes 
> have PROVIDED volumes, setting the replication does not work.




[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293592#comment-16293592
 ] 

Hudson commented on HDFS-11902:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11902. [READ] Merge BlockFormatProvider and FileRegionProvider. (cdouglas: 
rev 98f5ed5aa377ddd3f35b763b20c499d2ccac2ed5)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
* (delete) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestTextBlockAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/package-info.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java


> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch, 
> HDFS-11902-HDFS-9806.012.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.




[jira] [Commented] (HDFS-12894) [READ] Skip setting block count of ProvidedDatanodeStorageInfo on DN registration update

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293607#comment-16293607
 ] 

Hudson commented on HDFS-12894:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12894. [READ] Skip setting block count of (cdouglas: rev 
fb996a32a98a25c0fe34a8ebb28563b53cd6e20e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


> [READ] Skip setting block count of ProvidedDatanodeStorageInfo on DN 
> registration update
> 
>
> Key: HDFS-12894
> URL: https://issues.apache.org/jira/browse/HDFS-12894
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12894-HDFS-9806.001.patch, 
> HDFS-12894-HDFS-9806.002.patch
>
>
> As the {{ProvidedDatanodeStorageInfo}} is shared across multiple Datanodes, 
> its block count shouldn't be set to 0 (in 
> {{DatanodeDescriptor.updateRegInfo}}) when any one Datanode's registration 
> info is updated. This prevents {{processFirstBlockReport}} from being called 
> multiple times for {{ProvidedDatanodeStorageInfo}}.
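> 
> A simplified sketch of the described behavior; the types and method names are 
> stand-ins for the HDFS internals, not the actual patch:
> {code}
> // Illustrative only: on a registration update, reset block counts for
> // node-local storages but skip the shared PROVIDED storage.
> import java.util.Map;
> 
> class RegInfoSketch {
>   enum StorageType { DISK, PROVIDED }
> 
>   static void resetBlockCounts(Map<String, StorageType> storages,
>                                Map<String, Integer> blockCounts) {
>     for (Map.Entry<String, StorageType> e : storages.entrySet()) {
>       if (e.getValue() == StorageType.PROVIDED) {
>         // Shared across datanodes; zeroing it here would retrigger
>         // processFirstBlockReport on every re-registration.
>         continue;
>       }
>       blockCounts.put(e.getKey(), 0);
>     }
>   }
> }
> {code}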




[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293603#comment-16293603
 ] 

Hudson commented on HDFS-12665:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12665. [AliasMap] Create a version of the AliasMap that runs in (cdouglas: 
rev 352f994b6484524cdcfcda021046c59905b62f31)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/TestInMemoryAliasMap.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/AliasMapProtocol.proto
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDbMockAliasMapClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) hadoop-tools/hadoop-fs2img/pom.xml
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/ITestInMemoryAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InMemoryAliasMapProtocolClientSideTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java


> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch, 
> HDFS-12665-HDFS-9806.004.patch, HDFS-12665-HDFS-9806.005.patch, 
> HDFS-12665-HDFS-9806.006.patch, HDFS-12665-HDFS-9806.007.patch, 
> HDFS-12665-HDFS-9806.008.patch, HDFS-12665-HDFS-9806.009.patch, 
> HDFS-12665-HDFS-9806.010.patch, HDFS-12665-HDFS-9806.011.patch, 
> HDFS-12665-HDFS-9806.012.patch
>
>
> The design of Provided Storage requires the use of an 

[jira] [Commented] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293613#comment-16293613
 ] 

Hudson commented on HDFS-12893:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12893. [READ] Support replication of Provided blocks with (cdouglas: rev 
c89b29bd421152f0e7e16936f18d9e852895c37a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.
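
To make the failure mode concrete, here is a deliberately simplified sketch of 
one possible shape of the fix (all names are hypothetical, not the actual 
patch): pick a real, live datanode with PROVIDED storage as the replication 
source, so the placement policy sees a node that actually exists in the 
topology.

{code:java}
import java.util.List;

// Hypothetical sketch: substitute the synthetic PROVIDED descriptor with a
// real, live datanode before topology-aware target selection runs.
public class ProvidedSourceResolver {
  public static <D> D pickRealSource(List<D> liveProvidedDatanodes) {
    if (liveProvidedDatanodes.isEmpty()) {
      throw new IllegalStateException("no live datanode with PROVIDED storage");
    }
    // Any live node with PROVIDED storage can serve as the replication source.
    return liveProvidedDatanodes.get(0);
  }
}
{code}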






[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293606#comment-16293606
 ] 

Hudson commented on HDFS-12713:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12713. [READ] Refactor FileRegion and BlockAliasMap to separate out 
(cdouglas: rev 9c35be86e17021202823bfd3c2067ff3b312ce5c)
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestTextBlockAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/AliasMapProtocol.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InMemoryAliasMapProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDBFileRegionAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDbMockAliasMapClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java


> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, 
> HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, 
> HDFS-12713-HDFS-9806.006.patch, HDFS-12713-HDFS-9806.007.patch
>
>





[jira] [Commented] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293599#comment-16293599
 ] 

Hudson commented on HDFS-12775:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12775. [READ] Fix reporting of Provided volumes (cdouglas: rev 
3b1d30301bcd35bbe525a7e122d3e5acfab92c88)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/StorageTypeStats.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java


> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, 
> HDFS-12775-HDFS-9806.004.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png, provided_storagetype_capacity_jmx.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, replacing these values with what 
> users would expect.
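
For illustration, a minimal sketch of finite reporting, assuming a 
configurable capacity; the class and its wiring are hypothetical, not the 
patch's API:

{code:java}
// Hypothetical sketch: track a configured capacity and bytes used for a
// PROVIDED volume instead of reporting "infinite capacity, zero used".
public class ProvidedVolumeStatsSketch {
  private final long configuredCapacity; // assumed to come from a config key
  private long used;

  public ProvidedVolumeStatsSketch(long configuredCapacity) {
    this.configuredCapacity = configuredCapacity;
  }

  public void addBlock(long numBytes) {
    used += numBytes;
  }

  public long getCapacity()  { return configuredCapacity; }
  public long getUsed()      { return used; }
  public long getAvailable() { return Math.max(0, configuredCapacity - used); }
}
{code}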





[jira] [Commented] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293610#comment-16293610
 ] 

Hudson commented on HDFS-12903:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12903. [READ] Fix closing streams in ImageWriter (cdouglas: rev 
962b5e722ba86d1c012be11280c6b8fb5e0a2043)
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
Revert "HDFS-12903. [READ] Fix closing streams in ImageWriter" (cdouglas: rev 
e515103a83e12ad4908c0ca0b4b1aa4a87e2a840)
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
HDFS-12903. [READ] Fix closing streams in ImageWriter. Contributed by 
(cdouglas: rev 4b3a785914d890c47745e57d12a5a9abd084ffc1)
* (add) hadoop-tools/hadoop-fs2img/dev-support/findbugs-exclude.xml


> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.






[jira] [Commented] (HDFS-12809) [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293601#comment-16293601
 ] 

Hudson commented on HDFS-12809:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12809. [READ] Fix the randomized selection of locations in (cdouglas: rev 
4d59dabb7f6ef1d8565bf2bb2d38aeb91bf7f7cc)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java


> [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.
> --
>
> Key: HDFS-12809
> URL: https://issues.apache.org/jira/browse/HDFS-12809
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12809-HDFS-9806.001.patch, 
> HDFS-12809-HDFS-9806.002.patch
>
>
> Calling {{getBlockLocations}} on files that have a PROVIDED replica results 
> in the datanode locations being selected at random. Currently, this 
> randomization uses the datanode uuids to pick a node at random 
> ({{ProvidedDescriptor#choose}}, {{ProvidedDescriptor#chooseRandom}}). 
> Depending on the distribution of the datanode UUIDs, this can lead to a large 
> number of iterations (which may not terminate) before a location is chosen. 
> This JIRA aims to replace this with a more efficient randomization strategy.
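
A minimal sketch of the kind of strategy that guarantees termination, a single 
uniform draw over a materialized candidate list; the class is illustrative, 
not the actual {{ProvidedDescriptor}} code:

{code:java}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch: one uniform draw over a materialized list of
// candidate locations always terminates, unlike probing by datanode UUID.
public class RandomLocationChooser<T> {
  private final List<T> locations;

  public RandomLocationChooser(List<T> locations) {
    this.locations = locations;
  }

  public T choose() {
    if (locations.isEmpty()) {
      throw new IllegalStateException("no candidate locations");
    }
    return locations.get(ThreadLocalRandom.current().nextInt(locations.size()));
  }
}
{code}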






[jira] [Commented] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293608#comment-16293608
 ] 

Hudson commented on HDFS-11640:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11640. [READ] Datanodes should use a unique identifier when reading 
(cdouglas: rev 4531588a94dcd2b4141b12828cb60ca3b953a58c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java


> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch, HDFS-11640-HDFS-9806.003.patch, 
> HDFS-11640-HDFS-9806.004.patch, HDFS-11640-HDFS-9806.005.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.
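
As a sketch of the idea, assuming the identifier is something like an ETag or 
modification stamp carried with each region (the names here are hypothetical, 
not the patch's types):

{code:java}
import java.util.Objects;

// Hypothetical sketch: carry a version tag with each region and verify it
// before serving a read, so a changed remote file is detected.
public class TaggedRegion {
  private final String path;
  private final String versionTag; // assumed identifier of the file version

  public TaggedRegion(String path, String versionTag) {
    this.path = path;
    this.versionTag = versionTag;
  }

  public void verify(String currentTag) {
    if (!Objects.equals(versionTag, currentTag)) {
      throw new IllegalStateException(
          "remote file changed since it was indexed: " + path);
    }
  }
}
{code}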






[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293604#comment-16293604
 ] 

Hudson commented on HDFS-12591:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12591. [READ] Implement LevelDBFileRegionFormat. Contributed by (cdouglas: 
rev b634053c4daec181511abb314aeef0a8fe851086)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDBFileRegionAliasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/LevelDBFileRegionAliasMap.java


> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch, 
> HDFS-12591-HDFS-9806.002.patch, HDFS-12591-HDFS-9806.003.patch, 
> HDFS-12591-HDFS-9806.004.patch, HDFS-12591-HDFS-9806.005.patch, 
> HDFS-12591-HDFS-9806.006.patch, HDFS-12591-HDFS-9806.007.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version, similar to the {{TextFileRegionFormat}}, that 
> instead uses LevelDB.
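
A minimal sketch of a LevelDB-backed lookup using the org.iq80.leveldb 
bindings; the block-id key and value encoding below are made up for 
illustration and are not the actual {{LevelDBFileRegionAliasMap}} schema:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import static org.iq80.leveldb.impl.Iq80DBFactory.factory;

// Illustrative only: map block id -> serialized file region in LevelDB.
public class LevelDbRegionStore implements AutoCloseable {
  private final DB db;

  public LevelDbRegionStore(File dir) throws IOException {
    Options options = new Options().createIfMissing(true);
    this.db = factory.open(dir, options);
  }

  public void put(long blockId, byte[] serializedRegion) {
    db.put(key(blockId), serializedRegion);
  }

  public byte[] get(long blockId) {
    return db.get(key(blockId)); // null when the block is unknown
  }

  private static byte[] key(long blockId) {
    return ByteBuffer.allocate(Long.BYTES).putLong(blockId).array();
  }

  @Override
  public void close() throws IOException {
    db.close();
  }
}
{code}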






[jira] [Commented] (HDFS-12671) [READ] Test NameNode restarts when PROVIDED is configured

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293594#comment-16293594
 ] 

Hudson commented on HDFS-12671:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12671. [READ] Test NameNode restarts when PROVIDED is configured 
(cdouglas: rev c293cc8e9b032d2c573340725ef8ecc15d49430d)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Test NameNode restarts when PROVIDED is configured
> -
>
> Key: HDFS-12671
> URL: https://issues.apache.org/jira/browse/HDFS-12671
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12671-HDFS-9806.001.patch, 
> HDFS-12671-HDFS-9806.002.patch, HDFS-12671-HDFS-9806.003.patch, 
> HDFS-12671-HDFS-9806.004.patch
>
>
> Add a test case to ensure namenode restarts can be handled with provided 
> storage.






[jira] [Commented] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293598#comment-16293598
 ] 

Hudson commented on HDFS-12777:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12777. [READ] Reduce memory and CPU footprint for PROVIDED volumes. 
(cdouglas: rev e1a28f95b8ffcb86300148f10a23b710f8388341)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java


> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch, HDFS-12777-HDFS-9806.003.patch, 
> HDFS-12777-HDFS-9806.004.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.
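
A minimal sketch of the prefix-sharing idea in (b), with hypothetical names: 
each replica stores only a short path suffix plus a reference to one URI 
shared by the whole volume, rebuilding the full location on demand.

{code:java}
import java.net.URI;

// Hypothetical sketch: per-replica state shrinks to one suffix string.
public class CompactProvidedReplica {
  private final URI sharedBase;    // one instance shared across all replicas
  private final String pathSuffix; // the only per-replica string

  public CompactProvidedReplica(URI sharedBase, String pathSuffix) {
    this.sharedBase = sharedBase;
    this.pathSuffix = pathSuffix;
  }

  public URI blockURI() {
    return sharedBase.resolve(pathSuffix); // full location, built on demand
  }
}
{code}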






[jira] [Commented] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293593#comment-16293593
 ] 

Hudson commented on HDFS-12607:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12607. [READ] Even one dead datanode with PROVIDED storage results 
(cdouglas: rev 71d0a825711387fe06396323a9ca6a5af0ade415)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607-HDFS-9806.003.patch, 
> HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored to NORMAL 
> when a Datanode with NORMAL DatanodeStorage reports in.
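
The corrected rule reduces to an any-live check across the datanodes attached 
to the PROVIDED storage; a minimal sketch with hypothetical names:

{code:java}
import java.util.Collection;

// Hypothetical sketch of the corrected semantics: the aggregate PROVIDED
// storage fails only when every datanode carrying it is dead.
public class ProvidedStorageStateSketch {
  public enum State { NORMAL, FAILED }

  public static State recompute(Collection<Boolean> datanodeAliveFlags) {
    for (boolean alive : datanodeAliveFlags) {
      if (alive) {
        return State.NORMAL; // one live node keeps PROVIDED storage usable
      }
    }
    return State.FAILED;
  }
}
{code}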






[jira] [Commented] (HDFS-12289) [READ] HDFS-12091 breaks the tests for provided block reads

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293588#comment-16293588
 ] 

Hudson commented on HDFS-12289:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12289. [READ] HDFS-12091 breaks the tests for provided block reads 
(cdouglas: rev aca023b72cdb325ca66d196443218f6107efa1ca)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> [READ] HDFS-12091 breaks the tests for provided block reads
> ---
>
> Key: HDFS-12289
> URL: https://issues.apache.org/jira/browse/HDFS-12289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12289-HDFS-9806.001.patch
>
>
> In the tests within {{TestNameNodeProvidedImplementation}}, the files that 
> are supposed to belong to a provided volume are not located under the Storage 
> directory assigned to the volume in {{MiniDFSCluster}}. With HDFS-12091, this 
> is no longer correct, which breaks the tests. This JIRA fixes the tests 
> under {{TestNameNodeProvidedImplementation}}.






[jira] [Commented] (HDFS-11673) [READ] Handle failures of Datanode with PROVIDED storage

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293585#comment-16293585
 ] 

Hudson commented on HDFS-11673:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11673. [READ] Handle failures of Datanode with PROVIDED storage (cdouglas: 
rev 546b95f4843f3cbbbdf72d90d202cad551696082)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Handle failures of Datanode with PROVIDED storage
> 
>
> Key: HDFS-11673
> URL: https://issues.apache.org/jira/browse/HDFS-11673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11673-HDFS-9806.001.patch, 
> HDFS-11673-HDFS-9806.002.patch, HDFS-11673-HDFS-9806.003.patch, 
> HDFS-11673-HDFS-9806.004.patch, HDFS-11673-HDFS-9806.005.patch
>
>
> Blocks on {{PROVIDED}} storage should become unavailable if and only if all 
> Datanodes that are configured with {{PROVIDED}} storage become unavailable. 
> Even if one Datanode with {{PROVIDED}} storage is available, all blocks on 
> the {{PROVIDED}} storage should be accessible.






[jira] [Commented] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293590#comment-16293590
 ] 

Hudson commented on HDFS-12584:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12584. [READ] Fix errors in image generation tool from latest (cdouglas: 
rev 17052c4aff104cb02701bc1e8dc9cd73d1a325fb)
* (edit) hadoop-tools/hadoop-fs2img/pom.xml
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java


> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in the FSImage generation tool.






[jira] [Commented] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293583#comment-16293583
 ] 

Hudson commented on HDFS-11791:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11791. [READ] Test for increasing replication of provided files. 
(cdouglas: rev 4851f06bc2df9d2cfc69fc7c4cecf7babcaa7728)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch, 
> HDFS-11791-HDFS-9806.002.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).
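
A hedged sketch of the scenario under test, using only the public 
{{FileSystem}} API; the polling/assertion step is left as a comment since it 
belongs to the test harness:

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the scenario: raise the replication factor of a PROVIDED file
// and expect the new replicas to land on local DISK storage.
public class IncreaseReplicationSketch {
  public static void run(FileSystem fs, Path providedFile) throws Exception {
    short newReplication = 2;
    fs.setReplication(providedFile, newReplication); // real FileSystem API
    // A real test now polls fs.getFileBlockLocations(...) until the
    // expected number of local (DISK) replicas shows up.
  }
}
{code}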






[jira] [Commented] (HDFS-12091) [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293586#comment-16293586
 ] 

Hudson commented on HDFS-12091:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12091. [READ] Check that the replicas served from a (cdouglas: rev 
663b3c08b131ea2db693e1a5d2f5da98242fa854)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
HDFS-12289. [READ] HDFS-12091 breaks the tests for provided block reads 
(cdouglas: rev aca023b72cdb325ca66d196443218f6107efa1ca)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to 
> the correct external storage
> --
>
> Key: HDFS-12091
> URL: https://issues.apache.org/jira/browse/HDFS-12091
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12091-HDFS-9806.001.patch, 
> HDFS-12091-HDFS-9806.002.patch
>
>
> A {{ProvidedVolumeImpl}} can only serve blocks that "belong" to it; i.e., for 
> blocks served from a {{ProvidedVolumeImpl}}, the {{baseURI}} of the 
> {{ProvidedVolumeImpl}} should be a prefix of the URI of the blocks.
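
One compact way to express that containment check with plain java.net.URI, 
shown for illustration (not necessarily how the patch implements it): 
{{URI.relativize}} returns its argument unchanged when the base is not a 
prefix of the given URI.

{code:java}
import java.net.URI;

// Illustration of the containment test using java.net.URI semantics.
public final class VolumeOwnershipCheck {
  public static boolean belongsTo(URI volumeBaseURI, URI blockURI) {
    return !volumeBaseURI.relativize(blockURI).equals(blockURI);
  }

  public static void main(String[] args) {
    URI base = URI.create("s3a://bucket/data/");
    System.out.println(belongsTo(base, URI.create("s3a://bucket/data/b1"))); // true
    System.out.println(belongsTo(base, URI.create("s3a://other/b1")));       // false
  }
}
{code}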






[jira] [Commented] (HDFS-10706) [READ] Add tool generating FSImage from external store

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293578#comment-16293578
 ] 

Hudson commented on HDFS-10706:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-10706. [READ] Add tool generating FSImage from external store (cdouglas: 
rev 8da3a6e314609f9124bd9979cd09cddbc2a10d36)
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java
* (add) hadoop-tools/hadoop-fs2img/pom.xml
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFixedBlockResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/package-info.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
* (edit) hadoop-tools/pom.xml
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/BlockResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRandomTreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
* (add) hadoop-tools/hadoop-fs2img/src/test/resources/log4j.properties
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockMultiReplicaResolver.java
* (edit) hadoop-tools/hadoop-tools-dist/pom.xml


> [READ] Add tool generating FSImage from external store
> --
>
> Key: HDFS-10706
> URL: https://issues.apache.org/jira/browse/HDFS-10706
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, tools
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HDFS-10706-HDFS-9806.002.patch, 
> HDFS-10706-HDFS-9806.003.patch, HDFS-10706-HDFS-9806.004.patch, 
> HDFS-10706-HDFS-9806.005.patch, HDFS-10706-HDFS-9806.006.patch, 
> HDFS-10706.001.patch, HDFS-10706.002.patch
>
>
> To experiment with provided storage, this provides a tool to map an external 
> namespace to an FSImage/NN storage. By loading it in a NN, one can access the 
> remote FS using HDFS.
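
A hedged sketch of the tool's core loop: recursively walk the remote namespace 
and hand each entry to a writer. The {{Sink}} interface here is hypothetical; 
the real tool uses hadoop-fs2img's {{TreeWalk}} and {{ImageWriter}}.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: enumerate the remote namespace for image generation.
public class NamespaceWalkSketch {
  public interface Sink {
    void accept(FileStatus status) throws IOException;
  }

  public static void walk(FileSystem fs, Path root, Sink sink)
      throws IOException {
    for (FileStatus status : fs.listStatus(root)) {
      sink.accept(status);
      if (status.isDirectory()) {
        walk(fs, status.getPath(), sink); // recurse into subdirectories
      }
    }
  }
}
{code}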






[jira] [Commented] (HDFS-11653) [READ] ProvidedReplica should return an InputStream that is bounded by its length

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293580#comment-16293580
 ] 

Hudson commented on HDFS-11653:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11653. [READ] ProvidedReplica should return an InputStream that is 
(cdouglas: rev 1108cb76917debf0a8541d5130e015883eb521af)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java


> [READ] ProvidedReplica should return an InputStream that is bounded by its 
> length
> -
>
> Key: HDFS-11653
> URL: https://issues.apache.org/jira/browse/HDFS-11653
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11653-HDFS-9806.001.patch, 
> HDFS-11653-HDFS-9806.002.patch
>
>
> {{ProvidedReplica#getDataInputStream}} should return an InputStream that is 
> bounded by {{ProvidedReplica#getBlockDataLength()}}.
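
A minimal sketch of the requested behavior, a wrapper that reports EOF once 
the replica's logical length has been consumed (illustrative, not the patch 
itself):

{code:java}
import java.io.IOException;
import java.io.InputStream;

// Sketch: never return bytes past a fixed length.
public class LengthBoundedInputStream extends InputStream {
  private final InputStream in;
  private long remaining;

  public LengthBoundedInputStream(InputStream in, long length) {
    this.in = in;
    this.remaining = length;
  }

  @Override
  public int read() throws IOException {
    if (remaining <= 0) {
      return -1; // logical EOF at the replica's length
    }
    int b = in.read();
    if (b >= 0) {
      remaining--;
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    if (remaining <= 0) {
      return -1;
    }
    int n = in.read(buf, off, (int) Math.min(len, remaining));
    if (n > 0) {
      remaining -= n;
    }
    return n;
  }

  @Override
  public void close() throws IOException {
    in.close();
  }
}
{code}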






[jira] [Commented] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293584#comment-16293584
 ] 

Hudson commented on HDFS-11792:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11792. [READ] Test cases for ProvidedVolumeDF and (cdouglas: rev 
55ade54b8ed36e18f028f478381a96e7b8c6be50)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java


> [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
> 
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11792-HDFS-9806.001.patch
>
>
> Test cases for {{ProvidedVolumeDF}} and {{ProviderBlockIteratorImpl}}






[jira] [Commented] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293587#comment-16293587
 ] 

Hudson commented on HDFS-12093:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12093. [READ] Share remoteFS between ProvidedReplica instances. (cdouglas: 
rev 2407c9b93aabb021b76c802b19c928fb6cbb0a85)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java


> [READ] Share remoteFS between ProvidedReplica instances.
> 
>
> Key: HDFS-12093
> URL: https://issues.apache.org/jira/browse/HDFS-12093
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12093-HDFS-9806.001.patch, 
> HDFS-12093-HDFS-9806.002.patch
>
>
> When a Datanode comes online using Provided storage, it fills the 
> {{ReplicaMap}} with the known replicas. With Provided Storage, this includes 
> {{ProvidedReplica}} instances. Each of these objects, in their constructor, 
> will construct an FileSystem using the Service Provider. This can result in 
> contacting the remote file system and checking that the credentials are 
> correct and that the data is there. For large systems this is a prohibitively 
> expensive operation to perform per replica.
> Instead, the {{ProvidedVolumeImpl}} should own the reference to the 
> {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on 
> their creation.
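
A minimal sketch of that ownership change with hypothetical class names: the 
volume performs the one-time FileSystem lookup, and each replica borrows the 
shared instance instead of opening its own.

{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical sketch (not the patch itself): one remote connection setup
// per volume, reused by every replica the volume creates.
public class SketchProvidedVolume {
  private final FileSystem remoteFS;

  public SketchProvidedVolume(URI baseURI, Configuration conf)
      throws IOException {
    this.remoteFS = FileSystem.get(baseURI, conf); // one-time lookup
  }

  public SketchProvidedReplica newReplica(String path) {
    return new SketchProvidedReplica(remoteFS, path); // no per-replica open
  }
}

class SketchProvidedReplica {
  private final FileSystem fs;
  private final String path;

  SketchProvidedReplica(FileSystem fs, String path) {
    this.fs = fs;
    this.path = path;
  }
}
{code}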






[jira] [Commented] (HDFS-11703) [READ] Tests for ProvidedStorageMap

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293582#comment-16293582
 ] 

Hudson commented on HDFS-11703:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11703. [READ] Tests for ProvidedStorageMap (cdouglas: rev 
89b9faf5294c93f66ba7bbe08f5ab9083ecb5d72)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java


> [READ] Tests for ProvidedStorageMap
> ---
>
> Key: HDFS-11703
> URL: https://issues.apache.org/jira/browse/HDFS-11703
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11703-HDFS-9806.001.patch, 
> HDFS-11703-HDFS-9806.002.patch
>
>
> Add tests for the {{ProvidedStorageMap}} in the namenode






[jira] [Commented] (HDFS-11663) [READ] Fix NullPointerException in ProvidedBlocksBuilder

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293581#comment-16293581
 ] 

Hudson commented on HDFS-11663:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11663. [READ] Fix NullPointerException in ProvidedBlocksBuilder (cdouglas: 
rev aa5ec85f7fd2dc6ac568a88716109bab8df8be19)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Fix NullPointerException in ProvidedBlocksBuilder
> 
>
> Key: HDFS-11663
> URL: https://issues.apache.org/jira/browse/HDFS-11663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11663-HDFS-9806.001.patch, 
> HDFS-11663-HDFS-9806.002.patch, HDFS-11663-HDFS-9806.003.patch
>
>
> When there are no Datanodes with PROVIDED storage, 
> {{ProvidedBlocksBuilder#build}} leads to a {{NullPointerException}}.
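
A sketch of the guard's likely shape (hypothetical; the actual fix lives in 
{{ProvidedStorageMap}}): treat the absence of registered PROVIDED datanodes as 
an empty location list instead of dereferencing a null.

{code:java}
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: fall back to an empty list when no datanode with
// PROVIDED storage has registered yet.
public class SafeLocations {
  public static <T> List<T> locationsOrEmpty(List<T> providedLocations) {
    return providedLocations == null
        ? Collections.<T>emptyList()
        : providedLocations;
  }
}
{code}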






[jira] [Commented] (HDFS-11190) [READ] Namenode support for data stored in external stores.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293579#comment-16293579
 ] 

Hudson commented on HDFS-11190:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11190. [READ] Namenode support for data stored in external stores. 
(cdouglas: rev d65df0f27395792c6e25f5e03b6ba1765e2ba925)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LocatedBlockBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java


> [READ] Namenode support for data stored in external stores.
> ---
>
> Key: HDFS-11190
> URL: https://issues.apache.org/jira/browse/HDFS-11190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11190-HDFS-9806.001.patch, 
> HDFS-11190-HDFS-9806.002.patch, HDFS-11190-HDFS-9806.003.patch, 
> HDFS-11190-HDFS-9806.004.patch
>
>
> The goal of this JIRA is to enable the Namenode to know about blocks that are 
> in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. 






[jira] [Commented] (HDFS-10675) [READ] Datanode support to read from external stores.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293577#comment-16293577
 ] 

Hudson commented on HDFS-10675:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-10675. Datanode support to read from external stores. (cdouglas: rev 
b668eb91556b8c85c2b4925808ccb1f769031c20)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStartupVersions.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockAlias.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageCompression.java
* (edit) 

[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-9806:

      Resolution: Fixed
    Hadoop Flags: Reviewed
   Fix Version/s: 3.1.0
Target Version/s: 3.1.0
          Status: Resolved  (was: Patch Available)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Fix For: 3.1.0
>
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Commented] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293569#comment-16293569
 ] 

Chris Douglas commented on HDFS-9806:
-

The merge vote [passed|https://s.apache.org/tqLt]

Merged to trunk. Thanks [~virajith], [~ehiggs], and [~Thomas Demoor]!

Thanks also to [~elgoiri], [~mackrorysd], [~ste...@apache.org], [~eddyxu], 
[~anu], [~drankye], and [~umamaheswararao] for help with the design, testing, 
and review of this feature.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Fix For: 3.1.0
>
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-9806:

Release Note: Provided storage allows data stored outside HDFS to be mapped 
to and addressed from HDFS. It builds on heterogeneous storage by introducing a 
new storage type, PROVIDED, to the set of media in a datanode. Clients 
accessing data in PROVIDED storages can cache replicas in local media, enforce 
HDFS invariants (e.g., security, quotas), and address more data than the 
cluster could persist in the storage attached to DataNodes.
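
For illustration, a hedged sketch of enabling the feature; the property names 
here are assumptions based on the feature branch and may differ from the 
released documentation:
{code}
<!-- hdfs-site.xml (illustrative; property names are assumptions) -->
<property>
  <name>dfs.namenode.provided.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- a PROVIDED storage listed alongside local media on the DataNode -->
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]file:///grid/0,[PROVIDED]remoteFS/data</value>
</property>
{code}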

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-12-15 Thread KaiXinXIaoLei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293558#comment-16293558
 ] 

KaiXinXIaoLei commented on HDFS-12292:
--

I also hit this problem when running "hdfs dfsadmin -safemode get". Does this 
patch fix it? 

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292-004.patch, HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is federated, 
> otherwise, an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This creates inconvenience, makes it impossible to rely on fs.defaultFS, and 
> forces the creation of client-side mappings for management scripts
> Implementation:
> The PathData that is passed to commands should be resolved to its actual 
> FileSystem
> Result:
> ViewFS will be resolved to the actual HDFS file system
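
A minimal sketch of the resolution idea (illustrative only, not the patch 
itself): {{FileSystem#resolvePath}} on a ViewFS instance returns the path in 
the underlying target file system, from which the actual HDFS file system can 
be obtained.
{code}
Configuration conf = new Configuration();
Path p = new Path("viewfs://vfs-root/user/uname");
FileSystem viewFs = p.getFileSystem(conf);        // a ViewFileSystem
Path resolved = viewFs.resolvePath(p);            // e.g. hdfs://users-fs/user/uname
FileSystem target = resolved.getFileSystem(conf); // the actual HDFS file system
{code}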



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-15 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293555#comment-16293555
 ] 

Junping Du commented on HDFS-12920:
---

bq. An alternative to reverting the change is to deprecate the old property and 
create a new one that understands time units, as was raised in that JIRA. If 
specifying units breaks rolling upgrades, then what is the point of adding 
units, ever?
That is also a possible approach. We can either keep the default values for 
existing properties, or start using new properties and deprecate the previous 
ones.

bq. So another workaround is to have at least two tarballs on HDFS, one that 
uses 3.x and one that uses 2.x. The 3.x site configs request the 3.x tarball 
and the 2.x site configs request the 2.x tarball. When the job submitter client 
upgrades to use 3.x jars, it can also upgrade to 3.x configs to start running 
the job with 3.x as well.
As we discussed offline, if we explicitly package these configs into the 
tarball, we may not hit this issue, since the tarball version and the 
configuration will match each other in the end. However, some users may not 
have followed this practice. Also, managing configurations in different places 
(cluster setup, MR tarball, job submission, etc.) is complicated. Maybe it is 
easier to fix the issue here instead of in the tarball configuration?

bq. Junping Du, does the presence of any unit-suffixed values in the config 
file cause this failure?
Hi [~arpitagarwal], the unit-suffixed values are now the defaults (in 
hdfs-default.xml) in 3.x. A job submitted against an old-version MR tarball 
will load the new default values provided by the new Hadoop deployment and get 
stuck with the exception I posted above.

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to override the values in hdfs-site.xml with all time 
> units removed. But the right fix may be to revert HDFS-10845 (and get rid of 
> the noisy warnings).
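
For illustration, the workaround amounts to overriding each affected key with 
a plain number. A hedged sketch, assuming {{dfs.client.datanode-restart.timeout}} 
is one of the keys whose default became "30s" (the full list is in HDFS-10845):
{code}
<!-- hdfs-site.xml: unitless override that 2.x MR jars can still parse -->
<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30</value>
</property>
{code}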



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved HDFS-12903.
--
Resolution: Fixed

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12903:
-
Comment: was deleted

(was: Checked locally, this suppresses the warning correctly. Reverted the old 
patch and pushed this.

Thanks, [~virajith])

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293550#comment-16293550
 ] 

Chris Douglas commented on HDFS-12903:
--

Checked locally, this suppresses the warning correctly. Reverted the old patch 
and pushed this.

Thanks, [~virajith]

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293547#comment-16293547
 ] 

Chris Douglas commented on HDFS-12903:
--

Checked locally, this suppresses the warning correctly. Reverted the old patch 
and pushed this.

Thanks, [~virajith]

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12903:
-
Attachment: HDFS-12903-HDFS-9806.002.patch

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12929) There is error message when hdfs dfsadmin is run against a ViewFS config

2017-12-15 Thread KaiXinXIaoLei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KaiXinXIaoLei resolved HDFS-12929.
--
Resolution: Duplicate

https://issues.apache.org/jira/browse/HDFS-12292

> There is  error message when hdfs dfsadmin is run against a ViewFS config
> -
>
> Key: HDFS-12929
> URL: https://issues.apache.org/jira/browse/HDFS-12929
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: KaiXinXIaoLei
>
> With a ViewFS config, when I run "hdfs dfsadmin -safemode get", there is an error:
> {noformat}
> safemode: FileSystem viewfs://XX/ is not an HDFS file system
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293542#comment-16293542
 ] 

Chris Douglas commented on HDFS-12903:
--

This reappears in spotbugs 3.1.1. It's spurious, as 
{{IOUtils::cleanupWithLogger}} will safely close the stream. Let's just 
suppress it.
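
For reference, a minimal sketch of what a suppression could look like via the 
annotation route (the bug pattern ID and the surrounding method are 
assumptions; an entry in the findbugs exclude file would work equally well):
{code}
// Sketch only: the pattern ID and the method shown are assumptions.
@SuppressFBWarnings(value = "OBL_UNSATISFIED_OBLIGATION",
    justification = "IOUtils.cleanupWithLogger closes the stream safely")
void writeOutput() throws IOException {
  OutputStream out = openStream(); // hypothetical helper
  try {
    // ... write to out ...
  } finally {
    IOUtils.cleanupWithLogger(LOG, out);
  }
}
{code}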

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas reopened HDFS-12903:
--

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12904:
---
Attachment: HDFS-12904.002.patch

> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch, 
> HDFS-12904.002.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293524#comment-16293524
 ] 

Íñigo Goiri commented on HDFS-12904:


Actually, [~lukmajercak] and [~thinktaocs] went through the code and there is 
another {{sendBlock()}}:
{{BPOfferService#processCommandFromActive()}} -> {{DataNode#transferBlocks()}} 
-> {{DataNode#transferBlock()}} -> {{DataTransfer#run()}} finally calls 
{{BlockSender#sendBlock()}} without a throttler.
This will start a {{DataXceiver}} on the other side, which will be throttled, 
but we should also throttle the side that sends.
I don't see a proper way to distinguish those.
In any case, we may want to throttle the one in {{DataTransfer#run()}}.
Thoughts?
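
For concreteness, a rough sketch of throttling the sending side in 
{{DataTransfer#run()}} ({{BlockSender#sendBlock}} already has an overload that 
accepts a {{DataTransferThrottler}}; the bandwidth variable and any config key 
behind it are assumptions):
{code}
// Sketch only: transferBandwidthPerSec is a hypothetical config value.
DataTransferThrottler throttler =
    new DataTransferThrottler(transferBandwidthPerSec);
blockSender.sendBlock(out, unbufOut, throttler); // instead of passing null
{code}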


> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12925:
--
Attachment: HDFS-12925-HDFS-7240.004.patch

The latest Jenkins build failed again; resubmitting the v003 patch as v004 to 
trigger another run.

> Ozone: Container : Add key versioning support-2
> ---
>
> Key: HDFS-12925
> URL: https://issues.apache.org/jira/browse/HDFS-12925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12925-HDFS-7240.001.patch, 
> HDFS-12925-HDFS-7240.002.patch, HDFS-12925-HDFS-7240.003.patch, 
> HDFS-12925-HDFS-7240.004.patch
>
>
> One component of versioning is assembling the read IO vector (please see 
> section 4.2 of the [versioning design 
> doc|https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf]
>  under HDFS-12000 for details). This JIRA adds the util functions that 
> take a list of blocks from different versions and properly generate the 
> read vector for the requested version.
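
As a toy illustration of the idea (not the patch's actual API: {{Block}} here 
is a hypothetical holder of an offset and a version, blocks are assumed to 
align on fixed offsets, and the real util also handles partial overlaps per 
the design doc):
{code}
static List<Block> readVector(List<Block> allBlocks, long requestedVersion) {
  SortedMap<Long, Block> byOffset = new TreeMap<>();
  for (Block b : allBlocks) {
    if (b.version > requestedVersion) {
      continue; // written after the requested version: not visible
    }
    Block current = byOffset.get(b.offset);
    if (current == null || current.version < b.version) {
      byOffset.put(b.offset, b); // the newest visible block wins per offset
    }
  }
  return new ArrayList<>(byOffset.values()); // read vector, ordered by offset
}
{code}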



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3745) fsck prints that it's using KSSL even when it's in fact using SPNEGO for authentication

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293509#comment-16293509
 ] 

genericqa commented on HDFS-3745:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} root: The patch generated 0 new + 361 unchanged - 1 
fixed = 361 total (was 362) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 53s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-mapreduce-client-hs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}266m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.common.TestJspHelper |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293428#comment-16293428
 ] 

Misha Dmitriev commented on HDFS-12051:
---

There are some test failures again. They seem unrelated: several of these 
tests, or closely related ones, also failed in the previous Hadoop Jenkins 
build.

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> 

[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293424#comment-16293424
 ] 

Arpit Agarwal commented on HDFS-12920:
--

[~djp], does the presence of any unit-suffixed values in the config file cause 
this failure?

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to override the values in hdfs-site.xml with all time 
> units removed. But the right fix may be to revert HDFS-10845 (and get rid of 
> the noisy warnings).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12932) Confusing LOG message for block replication

2017-12-15 Thread Chao Sun (JIRA)
Chao Sun created HDFS-12932:
---

 Summary: Confusing LOG message for block replication
 Key: HDFS-12932
 URL: https://issues.apache.org/jira/browse/HDFS-12932
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 2.8.3
Reporter: Chao Sun
Assignee: Chao Sun
Priority: Minor


In our cluster we see a large number of log messages such as the following:
{code}
2017-12-15 22:55:54,603 INFO 
org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication from 
3 to 3 for 
{code}

This is a little confusing since "from 3 to 3" is not "increasing". Digging 
into it, it seems related to this piece of code:
{code}
if (oldBR != -1) {
  if (oldBR > targetReplication) {
FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
 oldBR, targetReplication, iip.getPath());
  } else {
FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
 oldBR, targetReplication, iip.getPath());
  }
}
{code}
Perhaps a {{oldBR == targetReplication}} case is missing?
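
One possible shape of the fix, based on the snippet above (a sketch, not a 
patch):
{code}
if (oldBR != -1) {
  if (oldBR > targetReplication) {
    FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
                         oldBR, targetReplication, iip.getPath());
  } else if (oldBR < targetReplication) {
    FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
                         oldBR, targetReplication, iip.getPath());
  } else {
    FSDirectory.LOG.info("Replication remains {} for {}",
                         targetReplication, iip.getPath());
  }
}
{code}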




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293395#comment-16293395
 ] 

genericqa commented on HDFS-12051:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 1216 unchanged - 19 fixed = 1218 total (was 1235) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
6s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Increment of volatile field 
org.apache.hadoop.hdfs.server.namenode.NameCache.size in 
org.apache.hadoop.hdfs.server.namenode.NameCache.put(byte[])  At 
NameCache.java:in org.apache.hadoop.hdfs.server.namenode.NameCache.put(byte[])  
At NameCache.java:[line 117] |
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902438/HDFS-12051.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af5d38c733e4 3.13.0-135-generic #184-Ubuntu SMP 

[jira] [Commented] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293378#comment-16293378
 ] 

genericqa commented on HDFS-12925:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 19m  
7s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs-client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdfs-client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 72 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
5s{color} | {color:red} The patch 1200 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  6m  
5s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 

[jira] [Commented] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293370#comment-16293370
 ] 

genericqa commented on HDFS-12641:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 652 unchanged - 1 fixed = 654 total (was 653) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
17s{color} | {color:red} The patch generated 435 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:19 |
| Failed junit tests | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.TestFileLengthOnClusterRestart |
|   | hadoop.hdfs.TestBlockMissingException |
|   | hadoop.hdfs.TestHDFSTrash |
|   | hadoop.hdfs.TestDFSShellGenericOptions |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.hdfs.TestFileCreationClient |
|   | hadoop.hdfs.TestSnapshotCommands |
| Timed out junit tests | org.apache.hadoop.hdfs.TestHdfsAdmin |
|   | org.apache.hadoop.hdfs.TestSetrepDecreasing |
|   | org.apache.hadoop.hdfs.TestQuota |
|   | org.apache.hadoop.hdfs.TestFileAppend4 |
|   | org.apache.hadoop.hdfs.TestReadWhileWriting |
|   | org.apache.hadoop.hdfs.TestLease |
|   | org.apache.hadoop.hdfs.TestHDFSServerPorts |
|   | org.apache.hadoop.hdfs.TestDFSUpgrade |
|   | org.apache.hadoop.hdfs.web.TestWebHDFS |
|   | org.apache.hadoop.hdfs.TestAppendSnapshotTruncate |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeRollback |
|   | org.apache.hadoop.hdfs.TestMiniDFSCluster |
|   | org.apache.hadoop.hdfs.TestBlockReaderFactory |
|   | org.apache.hadoop.hdfs.TestHFlush |
|   | org.apache.hadoop.hdfs.TestEncryptedTransfer |
|   | org.apache.hadoop.hdfs.TestDFSShell |
|   | org.apache.hadoop.hdfs.TestDataTransferProtocol |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293332#comment-16293332
 ] 

Hudson commented on HDFS-12917:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13389 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13389/])
HDFS-12917. Fix description errors in testErasureCodingConf.xml. (cliang: rev 
aa503a29d0bba4725a10623a96f9220c9389117c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml


> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, the descriptions of two test cases should be 
> "getPolicy : get EC policy information at specified path, whick have an EC 
> Policy".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12917:
--
  Resolution: Fixed
Target Version/s: 3.1.0
  Status: Resolved  (was: Patch Available)

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, the descriptions of two test cases should be 
> "getPolicy : get EC policy information at specified path, whick have an EC 
> Policy".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293301#comment-16293301
 ] 

Chen Liang commented on HDFS-12917:
---

Thanks [~candychencan] for the updated patch! +1 on the v002 patch; I've 
committed it to trunk (and changed the assignee of this JIRA to you). Thanks 
for your contribution!

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, the descriptions of two test cases should be 
> "getPolicy : get EC policy information at specified path, whick have an EC 
> Policy".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12917:
-

Assignee: chencan

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, the descriptions of two test cases should be 
> "getPolicy : get EC policy information at specified path, whick have an EC 
> Policy".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12925:
--
Attachment: HDFS-12925-HDFS-7240.003.patch

The v003 patch fixes the checkstyle and javadoc issues; the findbugs warnings 
are not introduced by this patch. The failed tests all passed locally, except 
for the consistently failing {{TestOzoneRpcClient}}

> Ozone: Container : Add key versioning support-2
> ---
>
> Key: HDFS-12925
> URL: https://issues.apache.org/jira/browse/HDFS-12925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12925-HDFS-7240.001.patch, 
> HDFS-12925-HDFS-7240.002.patch, HDFS-12925-HDFS-7240.003.patch
>
>
> One component of versioning is assembling the read IO vector (please see 
> section 4.2 of the [versioning design 
> doc|https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf]
>  under HDFS-12000 for details). This JIRA adds the util functions that take 
> a list of blocks from different versions and properly generate the read 
> vector for the requested version.
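To make the idea concrete, here is a minimal, hypothetical sketch of the 
assembly logic (the Block shape and names are illustrative, not the patch's 
API): for each offset, keep the newest block whose version does not exceed 
the requested one.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch only; not the patch's actual types or API.
class ReadVectorSketch {
  static class Block {
    final long offset, length, version;
    Block(long offset, long length, long version) {
      this.offset = offset; this.length = length; this.version = version;
    }
  }

  // For each offset, keep the newest block whose version <= requestedVersion,
  // then return the survivors ordered by offset as the read vector.
  static List<Block> readVector(List<Block> blocks, long requestedVersion) {
    Map<Long, Block> byOffset = new TreeMap<>();
    for (Block b : blocks) {
      if (b.version > requestedVersion) {
        continue; // belongs to a newer version than the one requested
      }
      Block current = byOffset.get(b.offset);
      if (current == null || b.version > current.version) {
        byOffset.put(b.offset, b);
      }
    }
    return new ArrayList<>(byOffset.values());
  }
}
{code}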



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293273#comment-16293273
 ] 

Chen Liang edited comment on HDFS-12925 at 12/15/17 9:31 PM:
-

The v003 patch fixes the checkstyle and javadoc issues; the findbugs warnings 
are not introduced by this patch. The failed tests all passed locally, except 
for {{TestOzoneRpcClient}}, which fails consistently even without the patch.


was (Author: vagarychen):
The v003 patch fixes the checkstyle and javadoc issues; the findbugs warnings 
are not introduced by this patch. The failed tests all passed locally, except 
for the consistently failing test {{TestOzoneRpcClient}}.

> Ozone: Container : Add key versioning support-2
> ---
>
> Key: HDFS-12925
> URL: https://issues.apache.org/jira/browse/HDFS-12925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12925-HDFS-7240.001.patch, 
> HDFS-12925-HDFS-7240.002.patch, HDFS-12925-HDFS-7240.003.patch
>
>
> One component of versioning is assembling the read IO vector (please see 
> section 4.2 of the [versioning design 
> doc|https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf]
>  under HDFS-12000 for details). This JIRA adds the util functions that take 
> a list of blocks from different versions and properly generate the read 
> vector for the requested version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HDFS-12881:
--
   Resolution: Fixed
Fix Version/s: 2.7.6
   2.8.4
   2.9.1
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks, Ajay!  I committed this to branch-2, branch-2.9, branch-2.8, and 
branch-2.7 as well.


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: HDFS-12881-branch-2.10.0.001.patch, 
> HDFS-12881.001.patch, HDFS-12881.002.patch, HDFS-12881.003.patch, 
> HDFS-12881.004.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method, which 
> could lead to partial/corrupted output without a corresponding exception 
> being thrown.  The code should either use try-with-resources or explicitly 
> close the stream within the try block, so that an exception thrown during 
> close() is properly propagated, as exceptions during write operations are.
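A minimal sketch of the try-with-resources alternative (hypothetical helper, 
not the committed patch): close() is invoked automatically, and an IOException 
thrown by close() propagates to the caller instead of being logged and 
swallowed the way IOUtils.cleanupWithLogger does.

{code}
import java.io.IOException;
import java.io.OutputStream;

class SafeWrite {
  static void write(OutputStream outStream, byte[] data) throws IOException {
    // try-with-resources closes outStream on exit and rethrows any
    // IOException from close() rather than suppressing it.
    try (OutputStream out = outStream) {
      out.write(data);
    }
  }
}
{code}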



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293222#comment-16293222
 ] 

Jason Lowe commented on HDFS-12881:
---

Thanks for the branch-2 patch!  +1, LGTM.  I agree the unit test failures 
appear to be unrelated, and I verified those tests pass locally with the patch 
applied.

Committing this.



> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12881-branch-2.10.0.001.patch, 
> HDFS-12881.001.patch, HDFS-12881.002.patch, HDFS-12881.003.patch, 
> HDFS-12881.004.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method, which 
> could lead to partial/corrupted output without a corresponding exception 
> being thrown.  The code should either use try-with-resources or explicitly 
> close the stream within the try block, so that an exception thrown during 
> close() is properly propagated, as exceptions during write operations are.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293196#comment-16293196
 ] 

genericqa commented on HDFS-10477:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10477 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10477 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817992/HDFS-10477.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22428/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 

[jira] [Commented] (HDFS-10614) Appended blocks can be closed even before IBRs from DataNodes

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293195#comment-16293195
 ] 

genericqa commented on HDFS-10614:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10614 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10614 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865892/HDFS-10614.03.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22427/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Appended blocks can be closed even before IBRs from DataNodes
> -
>
> Key: HDFS-10614
> URL: https://issues.apache.org/jira/browse/HDFS-10614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10614.01.patch, HDFS-10614.02.patch, 
> HDFS-10614.03.patch
>
>
> Scenario:
>1. Open the file for append().
>2. Trigger append pipeline setup by adding some data.
>3. Consider that the RECEIVING IBRs from the DNs reach the NN first.
>4. An updatePipeline() RPC is sent to the namenode to update the pipeline.
>5. Now, if complete() is called on the file even before the pipeline is 
> closed, the block will be COMPLETE even before it is actually FINALIZED on 
> the DN side, and the file will be closed.
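A client-side sequence that drives this path would look like the sketch below 
(illustrative path and data; the race itself happens inside the NN/DN 
protocol, and the file is assumed to already exist):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Step 1: open for append (the file must already exist).
    try (FSDataOutputStream out = fs.append(new Path("/tmp/f"))) {
      out.write(new byte[] {1, 2, 3}); // step 2: triggers pipeline setup
    } // close() ends in complete(); the race is whether the DN IBRs arrived first
  }
}
{code}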



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293193#comment-16293193
 ] 

genericqa commented on HDFS-10348:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10348 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10348 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801965/HDFS-10348-1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22425/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10348-1.patch, HDFS-10348.patch
>
>
> The namenode (via the report bad block method) doesn't check whether the block 
> belongs to the datanode before adding it to the corrupt replicas map.
> In one of our clusters we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from 
> node N3, and reported the bad block (blk1) to the namenode.
> 3. The namenode added node N3 and block blk1 to the corrupt replicas map and 
> asked one of the good nodes (one of the 2 nodes) to replicate the block to 
> another node N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the 
> namenode.
> 5. The namenode removed the block and node N3 from the corrupt replicas map.
>It also removed N3's storage from the triplets and queued an invalidate 
> request for N3.
> 6. In the meantime, client C2 tried to open the file and the request went to 
> node N3.
>C2 also encountered the checksum exception and reported the bad block to 
> the namenode.
> 7. The namenode added the corrupt block blk1 and node N3 to the corrupt 
> replicas map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode 
> simply ignored the report, since N3's storage was no longer in the 
> triplets (from step 5).
> We took the node out of rotation, but the block was still present only in the 
> corruptReplicasMap, because on removing a node we only go through the blocks 
> that are present in the triplets for that datanode.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Attachment: HDFS-12051.05.patch

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one 
> heap dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in a 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: 

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Status: Patch Available  (was: In Progress)

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one 
> heap dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in a 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: 

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Status: In Progress  (was: Patch Available)

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one 
> heap dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in a 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: 

[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293109#comment-16293109
 ] 

Misha Dmitriev commented on HDFS-12051:
---

The test failures above (some with OOM) look rather strange. I doubt that they 
are related to my change.

I've fixed the one checkstyle problem that I introduced. The other two warnings 
are about long lines, but in the file in question (DFSConfigKeys.java) all 
lines are long, so this is irrelevant.

The findbugs warning is about code that I didn't write; it just happens to be 
in one of the files I touched.

I am now submitting one more patch with some comments fixed/improved.
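For illustration, a minimal sketch of the interning technique this JIRA 
applies (hypothetical helper; the actual patch code differs): canonicalize 
byte[] contents so that duplicate name arrays collapse to a single shared 
instance.

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only; assumes arrays are never mutated after interning.
final class ByteArrayInterner {
  // ByteBuffer keys give content-based equals/hashCode for the wrapped array.
  private static final ConcurrentHashMap<ByteBuffer, byte[]> POOL =
      new ConcurrentHashMap<>();

  // Returns a canonical array with the same contents, so the millions of
  // duplicate name arrays seen in the heap dump share unique instances.
  static byte[] intern(byte[] name) {
    byte[] canonical = POOL.putIfAbsent(ByteBuffer.wrap(name), name);
    return canonical != null ? canonical : name;
  }
}
{code}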

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one 
> heap dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in a 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> 

[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293097#comment-16293097
 ] 

genericqa commented on HDFS-12904:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 479 unchanged - 0 fixed = 481 total (was 479) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12904 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902404/HDFS-12904.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aa3fb1eeac8b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09d996f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22420/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 

[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293090#comment-16293090
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 46s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 393 unchanged - 
1 fixed = 394 total (was 394) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 52 unchanged - 8 fixed = 52 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  8s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902410/HDFS-12818.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2414f31c3e27 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09d996f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Comment Edited] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292994#comment-16292994
 ] 

Kihwal Lee edited comment on HDFS-12070 at 12/15/17 7:20 PM:
-

To complete the history lesson, I traced down when {{closeFile}} was added to 
{{commitBlockSynchronization()}} and why no one is calling it with {{false}} 
anymore.

It turns out, the {{closeFile}} argument has existed since the dawn of 
{{commitBlockSynchronization()}}. It was added by HADOOP-3310 to 0.18 in 2008. 
The old append (HADOOP-1700) depended on it.  Even then, the normal lease 
recovery would always call it with {{closeFile == true}}.  There was a new 
{{ClientDatanodeProtocol}} method, {{recoverBlock()}}, which caused 
{{commitBlockSynchronization()}} to be called with {{closeFile == false}}.  I 
guess this disappeared when the {{recoverBlock()}} client command was removed 
from the datanode. Today, a {{recoverLease()}} call to the namenode can be used 
instead.  It is really fortunate that the {{closeFile}} option was initially 
added and has survived for 9 years in spite of lack of use.
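For reference, the {{recoverLease()}} path mentioned above can be driven from 
a client like this (a sketch; the target path is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/stuck-open-file"); // illustrative path
    try (DistributedFileSystem dfs =
        (DistributedFileSystem) file.getFileSystem(conf)) {
      // Asks the NN to start lease recovery for the file; returns true once
      // the file has been closed, false if recovery is still in progress.
      boolean closed = dfs.recoverLease(file);
      System.out.println("file closed: " + closed);
    }
  }
}
{code}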


was (Author: kihwal):
To complete the history lesson, I traced down when {{closeFile}} was added to 
{{commitBlockSynchronization()}} and why no one is calling it with {{false}} 
anymore.

It turns out, the {{closeFile}} argument has existed since the dawn of 
{{commitBlockSynchronization()}}. It was added by HADOOP-3310 to 0.18 in 2008. 
The old append depended on it.  Even then, the normal lease recovery would 
always call it with {{closeFile == true}}.  There was a new 
{{ClientDatanodeProtocol}} method, {{recoverBlock()}}, which caused 
{{commitBlockSynchronization()}} to be called with {{closeFile == false}}.  I 
guess this disappeared when the {{recoverBlock()}} client command was removed 
from the datanode. Today, a {{recoverLease()}} call to the namenode can be used 
instead.  It is really fortunate that the {{closeFile}} option was initially 
added and has survived for 9 years in spite of lack of use.

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery, 
> which involves a 2-stage process. The primary node removes any candidates 
> that cannot init replica recovery (essentially, being alive and knowing 
> about the block) to create a sync list.  Stage 2 issues updates to the sync 
> list – _but fails if any node fails_, unlike the first stage.  The NN should 
> be informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection refused error induces the bad node to be 
> pruned from the candidates.  Recovery then succeeds, the lease is released, 
> under-replication is fixed, and the block is invalidated from the bad node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-12-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293035#comment-16293035
 ] 

Chen Liang commented on HDFS-12799:
---

Thanks [~elek] for the update! It looks like there were compilation failures; 
the patch might need to be rebased. Would you mind taking a look? Thanks!

> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch, 
> HDFS-12799-HDFS-7240.002.patch, HDFS-12799-HDFS-7240.003.patch, 
> HDFS-12799-HDFS-7240.004.patch
>
>
> This issue is about extending the HB response protocol between SCM and DN 
> with a command to ask the datanode to close a container. (This is just about 
> extending the protocol, not about fixing the implementation of SCM to handle 
> the state transitions.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12927) Update erasure coding doc to address unsupported APIs

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293020#comment-16293020
 ] 

Hudson commented on HDFS-12927:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13388 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13388/])
HDFS-12927. Update erasure coding doc to address unsupported APIs. (lei: rev 
949be14b0881186d76c3b60ee2f39ce67dc1654c)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md


> Update erasure coding doc to address unsupported APIs
> -
>
> Key: HDFS-12927
> URL: https://issues.apache.org/jira/browse/HDFS-12927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.1
>
> Attachments: HDFS-12927.00.patch
>
>
> {{Concat}}, {{truncate}}, {{setReplication}} are not (fully) supported with 
> EC files. We should update the document to address them explicitly. 
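For context, these are the FileSystem calls in question (a sketch with 
illustrative paths; on EC files they may fail or behave differently, which is 
what the doc update calls out):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EcUnsupportedOpsSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Each of these is not (fully) supported when the target is an EC file:
    fs.concat(new Path("/ec/target"), new Path[] {new Path("/ec/src")});
    fs.truncate(new Path("/ec/file"), 1024L);
    fs.setReplication(new Path("/ec/file"), (short) 2);
  }
}
{code}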



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Patch Available  (was: Open)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Attachment: HDFS-9806.003.patch

Posting a rebased patch with all changes from the HDFS-9806 feature branch.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-12070:
-

Assignee: Kihwal Lee

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery, which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially, nodes that are alive and know about 
> the block) to create a sync list.  Stage 2 issues updates to the sync list – 
> _but, unlike the first stage, it fails if any node fails_.  The NN should be 
> informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection-refused error induces the bad node to be pruned 
> from the candidates.  Recovery then succeeds, the lease is released, 
> under-replication is fixed, and the block is invalidated on the bad node.
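A rough sketch of the asymmetry described above, from the primary node's side 
({{initReplicaRecovery}} and {{updateReplicaUnderRecovery}} follow 
{{InterDatanodeProtocol}} naming, but the loop itself is illustrative 
pseudocode, not the actual recovery code):

{code}
// Stage 1: probe candidates; a failure only drops that node.
List<DatanodeID> syncList = new ArrayList<>();
for (DatanodeID dn : candidates) {
  try {
    initReplicaRecovery(dn, recoveringBlock); // alive and knows the block
    syncList.add(dn);
  } catch (IOException e) {
    // tolerated: the node is simply excluded from the sync list
  }
}

// Stage 2: update every node in the sync list. Unlike stage 1,
// a single failure here aborts the whole recovery, and the NN
// is never told which nodes did succeed -- the core of this bug.
for (DatanodeID dn : syncList) {
  updateReplicaUnderRecovery(dn, block, recoveryId, newLength);
}
{code}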



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292994#comment-16292994
 ] 

Kihwal Lee commented on HDFS-12070:
---

To complete the history lesson, I tracked down when {{closeFile}} was added to 
{{commitBlockSynchronization()}} and why no one calls it with {{false}} 
anymore.

It turns out the {{closeFile}} argument has existed since the dawn of 
{{commitBlockSynchronization()}}. It was added by HADOOP-3310 to 0.18 in 2008; 
the old append implementation depended on it.  Even then, normal lease recovery 
would always call it with {{closeFile == true}}.  There was a new 
{{ClientDatanodeProtocol}} method, {{recoverBlock()}}, which caused 
{{commitBlockSynchronization()}} to be called with {{closeFile == false}}.  I 
guess this disappeared when the {{recoverBlock()}} client command was removed 
from the datanode. Today, a {{recoverLease()}} call to the namenode can be used 
instead.  It is really fortunate that the {{closeFile}} option was added in the 
first place and has survived for nine years despite its lack of use.

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery, which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially, nodes that are alive and know about 
> the block) to create a sync list.  Stage 2 issues updates to the sync list – 
> _but, unlike the first stage, it fails if any node fails_.  The NN should be 
> informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection-refused error induces the bad node to be pruned 
> from the candidates.  Recovery then succeeds, the lease is released, 
> under-replication is fixed, and the block is invalidated on the bad node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12927) Update erasure coding doc to address unsupported APIs

2017-12-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12927:
-
   Resolution: Fixed
Fix Version/s: 3.0.1
   Status: Resolved  (was: Patch Available)

Thanks for the review, [~xiaochen]

> Update erasure coding doc to address unsupported APIs
> -
>
> Key: HDFS-12927
> URL: https://issues.apache.org/jira/browse/HDFS-12927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.1
>
> Attachments: HDFS-12927.00.patch
>
>
> {{Concat}}, {{truncate}}, and {{setReplication}} are not (fully) supported 
> with EC files. We should update the documentation to address this explicitly. 
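As a hedged illustration of the gap (the guard itself is invented for this 
example; {{getErasureCodingPolicy}} and {{truncate}} are the standard 
{{DistributedFileSystem}} calls):

{code}
// Illustrative client-side guard: truncate is not supported on
// EC files, so check the policy before attempting it.
if (dfs.getErasureCodingPolicy(path) == null) {
  dfs.truncate(path, newLength);  // replicated file: supported
} else {
  // EC file: truncate will fail; the doc should say so explicitly
}
{code}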



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12819) Setting/Unsetting EC policy shows warning if the directory is not empty

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292974#comment-16292974
 ] 

Hudson commented on HDFS-12819:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13387 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13387/])
HDFS-12819. Setting/Unsetting EC policy shows warning if the directory (lei: 
rev 1c15b1751c0698bd3063d5c25f556d4821b161d2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml


> Setting/Unsetting EC policy shows warning if the directory is not empty
> ---
>
> Key: HDFS-12819
> URL: https://issues.apache.org/jira/browse/HDFS-12819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HDFS-12819.00.patch, HDFS-12819.01.patch, 
> HDFS-12819.02.patch
>
>
> Because existing data will not be converted when we set or unset an EC 
> policy on a directory, a warning from the CLI would help set users' 
> expectations. 
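To make the expectation gap concrete (the path and policy name below are 
hypothetical; the call is the standard {{DistributedFileSystem}} API):

{code}
// Only files created after this call are written with the new EC
// policy; existing files under /warehouse keep their current
// layout. Hence the value of a CLI warning on non-empty directories.
dfs.setErasureCodingPolicy(new Path("/warehouse"), "RS-6-3-1024k");
{code}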



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292963#comment-16292963
 ] 

Ajay Kumar commented on HDFS-12881:
---

All of the failing branch-2 tests passed locally. 

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12881-branch-2.10.0.001.patch, 
> HDFS-12881.001.patch, HDFS-12881.002.patch, HDFS-12881.003.patch, 
> HDFS-12881.004.patch
>
>
> There are a few places in the HDFS code that close an output stream with 
> {{IOUtils.cleanupWithLogger}} like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method, which 
> could lead to partial or corrupted output without a corresponding exception 
> being thrown.  The code should either use try-with-resources or explicitly 
> close the stream within the try block, so that an exception thrown during 
> close() is propagated just as exceptions during write operations are.
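A minimal sketch of the two safer patterns ({{createOutputStream()}} is a 
stand-in for however the stream is actually obtained):

{code}
// Option 1: try-with-resources -- a close() failure propagates,
// or is attached as a suppressed exception if the body already threw.
try (OutputStream out = createOutputStream()) {
  // ...write to out...
}

// Option 2: close explicitly inside the try block so an
// IOException from close() is thrown like any write error;
// the finally block only mops up the failure path.
OutputStream out = createOutputStream();
try {
  // ...write to out...
  out.close();
  out = null;
} finally {
  IOUtils.cleanupWithLogger(LOG, out);
}
{code}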



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12885) Add visibility/stability annotations

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12885:
-

Assignee: Chris Douglas

> Add visibility/stability annotations
> 
>
> Key: HDFS-12885
> URL: https://issues.apache.org/jira/browse/HDFS-12885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12885-HDFS-9806.00.patch, 
> HDFS-12885-HDFS-9806.001.patch
>
>
> Classes added in HDFS-9806 should include stability/visibility annotations 
> (HADOOP-5073)
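For reference, the standard Hadoop annotations look like this (the class name 
is a placeholder):

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Private
@InterfaceStability.Unstable
public class ProvidedReplicaExample { // placeholder name
  // ...
}
{code}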



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Open  (was: Patch Available)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12712) [9806] Code style cleanup

2017-12-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292962#comment-16292962
 ] 

Virajith Jalaparti commented on HDFS-12712:
---

Thanks for taking a look, [~elgoiri]. Committing 
[^HDFS-12712-HDFS-9806.003.patch] to the feature branch.

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12819) Setting/Unsetting EC policy shows warning if the directory is not empty

2017-12-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12819:
-
   Resolution: Fixed
Fix Version/s: 3.0.1
   Status: Resolved  (was: Patch Available)

Thanks [~xiaochen]

Committed to {{trunk}} and {{branch-3.0}}

> Setting/Unsetting EC policy shows warning if the directory is not empty
> ---
>
> Key: HDFS-12819
> URL: https://issues.apache.org/jira/browse/HDFS-12819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HDFS-12819.00.patch, HDFS-12819.01.patch, 
> HDFS-12819.02.patch
>
>
> Because existing data will not be converted when we set or unset an EC 
> policy on a directory, a warning from the CLI would help set users' 
> expectations. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12712) [9806] Code style cleanup

2017-12-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292939#comment-16292939
 ] 

Íñigo Goiri commented on HDFS-12712:


Not sure why Yetus is flagging every deprecation on earth, but 
[^HDFS-12712-HDFS-9806.003.patch] LGTM.
The failed unit tests also seem unrelated.
+1

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292929#comment-16292929
 ] 

Kihwal Lee edited comment on HDFS-12070 at 12/15/17 5:59 PM:
-

bq. the PD needs to ... tell the namenode to exclude the failed node from the 
expected locations. 
It appears calling {{commitBlockSynchronization()}} with {{closeFile == false}} 
might do the trick. On the NN side, we could make it redo block/lease recovery 
soon after. Older NNs will still work, but with a one-hour delay until the 
retry.


was (Author: kihwal):
bq. the PD needs to ... tell the namenode to exclude the failed node from the 
expected locations. 
It appears calling {{commitBlockSynchronization()}} with {{closeFile == false}} 
might do the trick. On the NN size, we could make it do block/lease recovery 
again soon. The older NNs will still work, but with 1 hour delay until the 
retry.   

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery, which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially, nodes that are alive and know about 
> the block) to create a sync list.  Stage 2 issues updates to the sync list – 
> _but, unlike the first stage, it fails if any node fails_.  The NN should be 
> informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection-refused error induces the bad node to be pruned 
> from the candidates.  Recovery then succeeds, the lease is released, 
> under-replication is fixed, and the block is invalidated on the bad node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292929#comment-16292929
 ] 

Kihwal Lee commented on HDFS-12070:
---

bq. the PD needs to ... tell the namenode to exclude the failed node from the 
expected locations. 
It appears calling {{commitBlockSynchronization()}} with {{closeFile == false}} 
might do the trick. On the NN side, we could make it redo block/lease recovery 
soon after. Older NNs will still work, but with a one-hour delay until the 
retry.
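A sketch of what that call might look like from the primary datanode, assuming 
the current {{DatanodeProtocol}} signature (argument names are approximate and 
the surrounding recovery bookkeeping is elided):

{code}
// Report partial success instead of giving up: closeFile == false
// keeps the file under construction so the NN can schedule another
// recovery attempt, rather than leaving the lease stuck.
namenode.commitBlockSynchronization(
    block,              // ExtendedBlock under recovery
    newGenerationStamp,
    newLength,
    false,              // closeFile: leave the file open for a retry
    false,              // deleteBlock
    succeededTargets,   // only the nodes that completed stage 2
    succeededStorages);
{code}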

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery, which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially, nodes that are alive and know about 
> the block) to create a sync list.  Stage 2 issues updates to the sync list – 
> _but, unlike the first stage, it fails if any node fails_.  The NN should be 
> informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection-refused error induces the bad node to be pruned 
> from the candidates.  Recovery then succeeds, the lease is released, 
> under-replication is fixed, and the block is invalidated on the bad node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


