[jira] [Commented] (HDFS-12937) RBF: Add more unit tests for router admin commands

2017-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296408#comment-16296408
 ] 

Hudson commented on HDFS-12937:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13402 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13402/])
HDFS-12937. RBF: Add more unit tests for router admin commands. (yqlin: rev 
e040c97b7743469f363eeae52c8abcf4fe7c65d5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java


> RBF: Add more unit tests for router admin commands
> --
>
> Key: HDFS-12937
> URL: https://issues.apache.org/jira/browse/HDFS-12937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12937.001.patch, HDFS-12937.002.patch
>
>
> Adding more unit tests to ensure that the router admin commands work well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12937) RBF: Add more unit tests for router admin commands

2017-12-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12937:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.0, branch-2 and branch-2.9.  Thanks 
[~elgoiri] for the review.

> RBF: Add more unit tests for router admin commands
> --
>
> Key: HDFS-12937
> URL: https://issues.apache.org/jira/browse/HDFS-12937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12937.001.patch, HDFS-12937.002.patch
>
>
> Adding more unit tests to ensure that the router admin commands work well.






[jira] [Commented] (HDFS-12937) RBF: Add more unit tests for router admin commands

2017-12-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296373#comment-16296373
 ] 

Yiqun Lin commented on HDFS-12937:
--

The failed UT is not related. Committing...

> RBF: Add more unit tests for router admin commands
> --
>
> Key: HDFS-12937
> URL: https://issues.apache.org/jira/browse/HDFS-12937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12937.001.patch, HDFS-12937.002.patch
>
>
> Adding more unit tests to ensure that the router admin commands work well.






[jira] [Commented] (HDFS-12355) Webhdfs needs to support encryption zones.

2017-12-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296294#comment-16296294
 ] 

Xiao Chen commented on HDFS-12355:
--

[~shahrs87] thanks for working on this. Restating my comment from HDFS-12907: 
please add a simple design doc for future readers.

> Webhdfs needs to support encryption zones.
> --
>
> Key: HDFS-12355
> URL: https://issues.apache.org/jira/browse/HDFS-12355
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> Will create sub-tasks.
> 1. Add fsserverdefaults to {{NamenodeWebhdfsMethods}}.
> 2. Return file encryption info in the {{GETFILESTATUS}} call from 
> {{NamenodeWebhdfsMethods}}.
> 3. Wrap the InputStream and OutputStream with {{CryptoInputStream}} and 
> {{CryptoOutputStream}}.
> 4. {{WebhdfsFilesystem}} needs to acquire KMS delegation tokens from the KMS 
> servers.
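Subtask 3 is the classic stream-decorator approach to transparent encryption. Below is a self-contained sketch of that idea using only the JDK's javax.crypto streams and a toy all-zero key; it is an illustration of the pattern, not Hadoop's actual CryptoInputStream/CryptoOutputStream or real KMS-managed keys:

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class StreamWrapSketch {
    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];   // toy all-zero key; real encryption zones get keys from the KMS
        byte[] iv  = new byte[16];
        SecretKeySpec k = new SecretKeySpec(key, "AES");

        // Write path: wrap the raw OutputStream so bytes are encrypted on the way out.
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, k, new IvParameterSpec(iv));
        try (OutputStream out = new CipherOutputStream(raw, enc)) {
            out.write("hello encryption zone".getBytes(StandardCharsets.UTF_8));
        }

        // Read path: wrap the raw InputStream so bytes are decrypted on the way in.
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, k, new IvParameterSpec(iv));
        ByteArrayOutputStream plain = new ByteArrayOutputStream();
        try (InputStream in = new CipherInputStream(
                new ByteArrayInputStream(raw.toByteArray()), dec)) {
            byte[] buf = new byte[64];
            int n;
            while ((n = in.read(buf)) > 0) {
                plain.write(buf, 0, n);
            }
        }
        System.out.println(plain.toString("UTF-8"));
    }
}
```

The caller keeps reading and writing plain bytes; encryption happens entirely inside the wrapping streams, which is the same shape the WebHDFS client work describes.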






[jira] [Commented] (HDFS-12937) RBF: Add more unit tests for router admin commands

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296266#comment-16296266
 ] 

genericqa commented on HDFS-12937:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12937 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902762/HDFS-12937.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8550df37e40d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c7499f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22452/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22452/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22452/testReport/ |
| Max. process+thread count | 3191 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Commented] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296254#comment-16296254
 ] 

Xiao Chen commented on HDFS-11847:
--

Thanks for working on this Manoj! This will be a nice tool for troubleshooting 
decommissioning.

Some comments:
- Since HDFS-10480 is released, we cannot change the APIs unfortunately. It 
seems to me we'd have to provide an overload of {{listOpenFiles}}. I like the 
use of enums though, maybe we should deprecate the existing API to encourage 
the new API to always be used.
- From the API side, do we support both {{BLOCKING_DECOMMISSION}} and 
{{ALL_OPEN_FILES}} being specified? The implementation in {{FSN#listOpenFiles}} 
doesn't look like it, but I'm also wondering how we plan to support both on the 
same {{OpenFilesIterator}}. Do we want to have types on {{OpenFileEntry}}?
- From a usage perspective, it may also be useful to print out the DataNodes.
- {{DatanodeAdminManager#processBlocksInternal}}: maybe we can skip a block 
whose inode is inconsistent instead of throwing from preconditions? We could 
log in the NN to help debugging, while hdfsadmin can still show the other open 
files.
- {{DatanodeAdminManager#processBlocksInternal}}, can we simply use 
{{lowRedundancyOpenFiles.size()}} and get rid of 
{{lowRedundancyBlocksInOpenFiles}}?
- {{LeavingServiceStatus}}: similar to the above, do we need both the counter 
and the set of open files?
(Holding all inode ids would consume more memory, but since this only happens 
for decommissioning + open files, which hopefully is a tiny portion of all 
files, I think we're okay.)
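To make the enum suggestion above concrete, here is a hypothetical sketch of what an EnumSet-based {{listOpenFiles}} overload could look like. Every name here (OpenFilesType, OpenFileEntry, the filtering logic) is illustrative, not the actual HDFS API:

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;

public class OpenFilesApiSketch {
    // Illustrative categories; the real patch may choose different names.
    enum OpenFilesType { ALL_OPEN_FILES, BLOCKING_DECOMMISSION }

    static final class OpenFileEntry {
        final long inodeId;
        final String path;
        final OpenFilesType type;  // a per-entry type answers "both specified?" cleanly
        OpenFileEntry(long inodeId, String path, OpenFilesType type) {
            this.inodeId = inodeId;
            this.path = path;
            this.type = type;
        }
    }

    // An EnumSet parameter lets callers request either category, or both at once,
    // through a single overload instead of separate methods.
    static List<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesType> types,
                                             List<OpenFileEntry> all) {
        List<OpenFileEntry> out = new ArrayList<>();
        for (OpenFileEntry e : all) {
            if (types.contains(e.type)) {
                out.add(e);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<OpenFileEntry> all = new ArrayList<>();
        all.add(new OpenFileEntry(1, "/a", OpenFilesType.ALL_OPEN_FILES));
        all.add(new OpenFileEntry(2, "/b", OpenFilesType.BLOCKING_DECOMMISSION));
        // Requesting only files blocking decommission filters down to /b.
        System.out.println(listOpenFiles(
            EnumSet.of(OpenFilesType.BLOCKING_DECOMMISSION), all).get(0).path);
    }
}
```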

Nits:
- {{LeavingServiceStatus}} trivial and pre-existing: comment at the end of this 
class should say {{End of class LeavingServiceStatus}}, not 
{{DecommissioningStatus}}
- {{FSN#getFilesBlockingDecom}}: I suggest adding {{assert hasReadLock();}} to 
safeguard future changes
- {{TestDecommission#verifyOpenFilesBlockingDecommission}}: Should save the 
previous {{System.out}} as a local var, and set back when we're done. 
{{System.setOut(System.out);}} won't restore to the old out. Also the restore 
logic should be in a finally block.
- {{TestDecommission}}: can we set 
{{DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY}} to {{Integer.MAX_VALUE}}? 
One second may not be robust enough.
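The System.out save/restore pattern that the TestDecommission nit asks for can be sketched in plain Java, independent of the actual test code:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class StdoutRestoreSketch {
    public static void main(String[] args) {
        PrintStream originalOut = System.out;        // save the real stdout FIRST
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try {
            System.setOut(new PrintStream(buf));
            System.out.println("captured line");     // goes into buf, not the console
            // Note: System.setOut(System.out) here would be a no-op, because
            // System.out already points at the capture stream, not the original.
        } finally {
            System.setOut(originalOut);               // restore even if the body throws
        }
        if (!buf.toString().contains("captured line")) {
            throw new AssertionError("capture failed");
        }
        System.out.println("restored; captured: " + buf.toString().trim());
    }
}
```

In a JUnit test the save would live in a @Before (or at the top of the test) and the restore in an @After or finally block, so a failing assertion can never leave stdout redirected.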

> Enhance dfsadmin listOpenFiles command to list files blocking datanode 
> decommissioning
> --
>
> Key: HDFS-11847
> URL: https://issues.apache.org/jira/browse/HDFS-11847
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11847.01.patch, HDFS-11847.02.patch
>
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to list 
> all the open files in the system.
> Additionally, it would be very useful to only list open files that are 
> blocking DataNode decommissioning. On clusters with a thousand-plus nodes, 
> where machines may be added and removed regularly for maintenance, any option 
> to monitor and debug decommissioning status is very helpful. The proposal here 
> is to add suboptions to {{listOpenFiles}} for the above case.






[jira] [Commented] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296236#comment-16296236
 ] 

Bharat Viswanadham commented on HDFS-12347:
---

Thank you [~ajayydv] and [~szetszwo] for reviewing and committing the patch.


> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1, 2.7.5, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Fix For: 2.10.0, 3.0.1
>
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12934) RBF: Federation supports global quota

2017-12-18 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296221#comment-16296221
 ] 

Wei Yan commented on HDFS-12934:


[~linyiqun], some quick questions:
* How do you plan for the State Store to fetch the per-subcluster usage 
information for the directories?
* Will there be any additional performance penalty from checking the quota on 
the Router side each time a WRITE request passes through?

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
>
> Currently federation doesn't support setting a global quota for each folder; 
> the quota is applied to each subcluster under the specified folder via RPC 
> calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set the global quota across the subclusters. We don't allow any subcluster 
> to exceed the maximum quota value.
> # Construct one  cache map for storing the sum  quota usage of 
> these subclusters under the federation folder. Every time we want to do a 
> WRITE operation under a specified folder, we get its quota usage from the 
> cache and verify the quota. If the quota is exceeded, throw an exception; 
> otherwise update its quota usage in the cache when the operation finishes.
> The quota will be set on the mount table as a new field. The set/unset 
> commands will be like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns  -ss  
>  hdfs dfsrouteradmin -clrQuota  
> {noformat}
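The check-then-update flow in step 2 can be sketched as follows; the class and method names are hypothetical, not the actual Router implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Illustrative sketch of a cache of aggregate per-mount-point usage,
 * consulted before each WRITE and rolled back if the quota would be exceeded.
 */
public class GlobalQuotaCacheSketch {
    private final ConcurrentHashMap<String, Long> quotaLimit = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, AtomicLong> usage = new ConcurrentHashMap<>();

    void setQuota(String mountPoint, long limit) {
        quotaLimit.put(mountPoint, limit);
        usage.putIfAbsent(mountPoint, new AtomicLong(0));
    }

    /** Reserve space before a WRITE; throws if the global quota would be exceeded. */
    void checkAndReserve(String mountPoint, long bytes) {
        Long limit = quotaLimit.get(mountPoint);
        if (limit == null) {
            return; // no global quota configured for this mount point
        }
        AtomicLong used = usage.get(mountPoint);
        long next = used.addAndGet(bytes);
        if (next > limit) {
            used.addAndGet(-bytes); // roll back the failed reservation
            throw new IllegalStateException("Quota exceeded for " + mountPoint);
        }
    }

    public static void main(String[] args) {
        GlobalQuotaCacheSketch cache = new GlobalQuotaCacheSketch();
        cache.setQuota("/data", 100);
        cache.checkAndReserve("/data", 60);     // fits: 60 <= 100
        try {
            cache.checkAndReserve("/data", 50); // would be 110 > 100
            System.out.println("no exception");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The atomic reserve-then-rollback keeps concurrent WRITEs from racing past the limit without holding a lock across the whole operation.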






[jira] [Commented] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296153#comment-16296153
 ] 

genericqa commented on HDFS-12932:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}128m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12932 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902739/HDFS-12932.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 79384a1eb719 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c7a4dda |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22451/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Commented] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296144#comment-16296144
 ] 

Hudson commented on HDFS-12930:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13400 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13400/])
HDFS-12930. Remove the extra space in HdfsImageViewer.md. Contributed by 
(yqlin: rev 25a36b74528678f56c63be643c76d819d6f07840)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md


> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Rahul Pathak
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12930.001.patch
>
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0 website: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor
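The fix is presumably just removing the stray space between the bracketed link text and the URL, since Markdown treats {{[text] (url)}} as plain text rather than a link:

{noformat}
* [CONTENTSUMMARY](./WebHDFS.html#Get_Content_Summary_of_a_Directory)
{noformat}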






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12930:
-
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   3.1.0
   Status: Patch Available  (was: Open)

Committed this to trunk and branch-3.0. Thanks for the contribution, [~rahulp] 
and thanks for the help, [~anu].

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Rahul Pathak
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12930.001.patch
>
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0 website: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12930:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Rahul Pathak
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12930.001.patch
>
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0 website: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Commented] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296109#comment-16296109
 ] 

Yiqun Lin commented on HDFS-12930:
--

LGTM, +1. Committing this.

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Rahul Pathak
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-12930.001.patch
>
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0 website: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Assigned] (HDFS-12938) TestErasureCodingCLI testAll failing consistently.

2017-12-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDFS-12938:
-

Assignee: Ajay Kumar

> TestErasureCodingCLI testAll failing consistently.
> -
>
> Key: HDFS-12938
> URL: https://issues.apache.org/jira/browse/HDFS-12938
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Assignee: Ajay Kumar
>
> {{TestErasureCodingCLI#testAll}} is failing consistently.
> It failed in this precommit: 
> https://builds.apache.org/job/PreCommit-HDFS-Build/22435/testReport/org.apache.hadoop.cli/TestErasureCodingCLI/testAll/
> I ran it locally on my laptop, and it failed too.
> It failed with this stack trace:
> {noformat}
> java.lang.AssertionError: One of the tests failed. See the Detailed results 
> to identify the command that failed
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
>   at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
>   at 
> org.apache.hadoop.cli.TestErasureCodingCLI.tearDown(TestErasureCodingCLI.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
> Below is the detailed report from 
> {{org.apache.hadoop.cli.TestErasureCodingCLI-output.txt}}.
> {noformat}
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(156)) - 
> ---
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(157)) - Test ID: [15]
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(158)) -Test Description: 
> [setPolicy : set policy on non-empty directory]
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(159)) - 
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
> hdfs://localhost:52345 -mkdir /ecdir]
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
> hdfs://localhost:52345 -touchz /ecdir/file1]
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
> hdfs://localhost:52345 -setPolicy -policy RS-6-3-1024k -path /ecdir]
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(167)) - 
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(170)) -Cleanup Commands: [-fs 
> hdfs://localhost:52345 -rm -R /ecdir]
> 2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(174)) - 
> 2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(178)) -  Comparator: 
> [SubstringComparator]
> 2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(180)) -  Comparision result:   
> [fail]
> 2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(182)) - Expected output:   
> [Warning: setting erasure coding policy on an non-empty directory will not 
> automatically convert existing data to RS-6-3-1024]
> 2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(184)) -   Actual output:   
> [Set erasure coding policy RS-6-3-1024k on /ecdir
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to RS-6-3-1024k
> ]
> 2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(187)) - 
> 2017-12-18 09:25:44,822 

[jira] [Updated] (HDFS-12937) RBF: Add more unit tests for router admin commands

2017-12-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12937:
-
Attachment: HDFS-12937.002.patch

Thanks for the review, [~elgoiri]. The comment makes sense to me.
Attached an updated patch that moves the stdout cleanup into an @After method.
Will commit this once Jenkins looks good.
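The stdout cleanup mentioned above can be sketched roughly as follows. This is a minimal illustration only, not the actual patch; the class and method names here are hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Minimal sketch of per-test stdout capture and cleanup. In the real
// JUnit test class, reset() would carry the @After annotation so it
// runs after every test case, keeping cases isolated from each other.
class StdoutCapture {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final PrintStream original = System.out;

    // Called at the start of a test (e.g. from a @Before method).
    void capture() {
        System.setOut(new PrintStream(buffer));
    }

    // In the real test class this would be the JUnit @After method:
    // restore the original stream and clear the captured output.
    void reset() {
        System.setOut(original);
        buffer.reset();
    }

    String captured() {
        return buffer.toString();
    }
}
```

With this shape, assertions on command output read from `captured()` during a test, and no test sees output left over from a previous one.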

> RBF: Add more unit tests for router admin commands
> --
>
> Key: HDFS-12937
> URL: https://issues.apache.org/jira/browse/HDFS-12937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12937.001.patch, HDFS-12937.002.patch
>
>
> Adding more unit tests to ensure that the router admin commands work well.






[jira] [Updated] (HDFS-12795) Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER

2017-12-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12795:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~nandakumar131] Thank you for the contribution. I have committed this to the 
feature branch.


> Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and 
> LifeCycleEvent FULL_CONTAINER
> 
>
> Key: HDFS-12795
> URL: https://issues.apache.org/jira/browse/HDFS-12795
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12795-HDFS-7240.000.patch
>
>
> To bring in support for close container, SCM has to have Container 
> LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER.
> {noformat}
> States: OPEN-->PENDING_CLOSE-->[CLOSED]
> Events:   (FULL_CONTAINER)(CLOSE)
> {noformat}
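The transition diagram above can be sketched as a small enum-based state machine. This is an illustrative sketch only; the real SCM container state machine covers more states and events:

```java
// Illustrative sketch of the OPEN --> PENDING_CLOSE --> CLOSED
// transitions described in the issue. Not the actual SCM code.
enum LifeCycleState { OPEN, PENDING_CLOSE, CLOSED }
enum LifeCycleEvent { FULL_CONTAINER, CLOSE }

class ContainerStateMachine {
    private LifeCycleState state = LifeCycleState.OPEN;

    LifeCycleState getState() { return state; }

    // Apply an event; reject transitions the diagram does not allow.
    void fire(LifeCycleEvent event) {
        if (state == LifeCycleState.OPEN
                && event == LifeCycleEvent.FULL_CONTAINER) {
            state = LifeCycleState.PENDING_CLOSE; // container is full
        } else if (state == LifeCycleState.PENDING_CLOSE
                && event == LifeCycleEvent.CLOSE) {
            state = LifeCycleState.CLOSED;        // close completed
        } else {
            throw new IllegalStateException(
                "Invalid transition: " + state + " + " + event);
        }
    }
}
```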






[jira] [Commented] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296068#comment-16296068
 ] 

genericqa commented on HDFS-11847:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 19m 
46s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
19s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
810 unchanged - 1 fixed = 813 total (was 811) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-11847 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902737/HDFS-11847.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (HDFS-12795) Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER

2017-12-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296065#comment-16296065
 ] 

Anu Engineer commented on HDFS-12795:
-

Tests seem to have passed in the Jenkins run. I will commit this shortly.


> Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and 
> LifeCycleEvent FULL_CONTAINER
> 
>
> Key: HDFS-12795
> URL: https://issues.apache.org/jira/browse/HDFS-12795
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12795-HDFS-7240.000.patch
>
>
> To bring in support for close container, SCM has to have Container 
> LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER.
> {noformat}
> States: OPEN-->PENDING_CLOSE-->[CLOSED]
> Events:   (FULL_CONTAINER)(CLOSE)
> {noformat}






[jira] [Commented] (HDFS-12795) Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296061#comment-16296061
 ] 

genericqa commented on HDFS-12795:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}152m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}215m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | 

[jira] [Commented] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296049#comment-16296049
 ] 

Hudson commented on HDFS-12347:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13399 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13399/])
HDFS-12347. TestBalancerRPCDelay#testBalancerRPCDelay fails very (szetszwo: rev 
c7499f2d242c64bee8f822a22161d956525f7153)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java


> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1, 2.7.5, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Fix For: 2.10.0, 3.0.1
>
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296035#comment-16296035
 ] 

genericqa commented on HDFS-12932:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 24s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.qjournal.server.TestJournalNode |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | 

[jira] [Updated] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-12347:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   2.10.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Bharat!

Thanks also to Ajay for testing the patch.

> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1, 2.7.5, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Fix For: 2.10.0, 3.0.1
>
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296027#comment-16296027
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12347:


+1 patch looks good.  The test failures are obviously not related.

> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1, 2.7.5, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296018#comment-16296018
 ] 

genericqa commented on HDFS-12347:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12347 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902731/HDFS-12347.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8283e7cacc7e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c7a4dda |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22448/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22448/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22448/testReport/ |
| Max. process+thread count | 4217 (vs. ulimit of 

[jira] [Commented] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296001#comment-16296001
 ] 

Ajay Kumar commented on HDFS-12347:
---

LGTM. Tested the patch locally in a loop; it passed each time.
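Looping a single test locally, as described, can be done with a small shell wrapper. This is illustrative only; `CMD` is a placeholder for the real test invocation (e.g. a `mvn test -Dtest=TestBalancerRPCDelay` run in the hadoop-hdfs module):

```shell
# Run a command repeatedly to check for flakiness, stopping at the
# first failure. Replace CMD with the actual test command.
CMD="true"   # placeholder; e.g. 'mvn -q test -Dtest=TestBalancerRPCDelay'
for i in $(seq 1 5); do
  if ! $CMD; then
    echo "failed on iteration $i"
    exit 1
  fi
done
echo "all 5 iterations passed"
```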

> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1, 2.7.5, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread

2017-12-18 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296000#comment-16296000
 ] 

Jepson commented on HDFS-12936:
---

[~anu] [~cheersyang] [~alicezhangchen] Thank you very much.

I turned up these parameters.
{code:bash}
1. Kernel-wide limits (note: sysctl keys take no "sys." prefix in /etc/sysctl.conf):
echo "kernel.threads-max=196605" >> /etc/sysctl.conf
echo "kernel.pid_max=196605" >> /etc/sysctl.conf
echo "vm.max_map_count=393210" >> /etc/sysctl.conf
sysctl -p

2. Per-user limits in /etc/security/limits.conf:
* soft nofile 196605
* hard nofile 196605
* soft nproc 196605
* hard nproc 196605
{code}
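Assuming a Linux host, the effective values can be checked after applying such settings. This is a generic verification sketch, not part of the original report:

```shell
# Kernel-wide thread/pid limits, readable via procfs on Linux.
cat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max
cat /proc/sys/vm/max_map_count

# Per-user limits for the current shell, as set via limits.conf:
ulimit -u   # max user processes (nproc)
ulimit -n   # max open files (nofile)
```

Note that limits.conf changes only apply to new login sessions, so a long-running DataNode must be restarted from a fresh session to pick them up.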


> java.lang.OutOfMemoryError: unable to create new native thread
> --
>
> Key: HDFS-12936
> URL: https://issues.apache.org/jira/browse/HDFS-12936
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
> Environment: CDH5.12
> hadoop2.6
>Reporter: Jepson
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode 
> memory is 8G.
> When a lot of data was being written, the datanode was shut down.
> But I can see the memory usage was only < 1000M.
> Please see https://pan.baidu.com/s/1o7BE0cy
> *DataNode shutdown error log:*  
> {code:java}
> 2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2017-12-17 23:58:31,425 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:01,426 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:05,520 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 
> src: /192.168.17.54:40478 dest: /192.168.17.48:50010
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12869) Ozone: Service Discovery: RPC endpoint in KSM for getServiceList

2017-12-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295950#comment-16295950
 ] 

Anu Engineer commented on HDFS-12869:
-

[~nandakumar131] There is a conflict in {{TestKeySpaceManager.java}}. Can you 
please rebase this patch? 

> Ozone: Service Discovery: RPC endpoint in KSM for getServiceList
> 
>
> Key: HDFS-12869
> URL: https://issues.apache.org/jira/browse/HDFS-12869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12869-HDFS-7240.000.patch, 
> HDFS-12869-HDFS-7240.001.patch
>
>
> A new RPC call is to be added to KSM which will return the list of services 
> in the Ozone cluster; this will be used by OzoneClient for establishing 
> the connection.






[jira] [Updated] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12932:

Attachment: HDFS-12932.1.patch

Re-attaching patch v1 to trigger Jenkins.

> Confusing LOG message for block replication
> ---
>
> Key: HDFS-12932
> URL: https://issues.apache.org/jira/browse/HDFS-12932
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-12932.0.patch, HDFS-12932.1.patch
>
>
> In our cluster we see a large number of log messages such as the following:
> {code}
> 2017-12-15 22:55:54,603 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 3 to 3 for 
> {code}
> This is a little confusing since "from 3 to 3" is not "increasing". Digging 
> into it, it seems related to this piece of code:
> {code}
> if (oldBR != -1) {
>   if (oldBR > targetReplication) {
> FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   } else {
> FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   }
> }
> {code}
> Perhaps a {{oldBR == targetReplication}} case is missing?






[jira] [Updated] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12932:

Attachment: (was: HDFS-12932.1.patch)

> Confusing LOG message for block replication
> ---
>
> Key: HDFS-12932
> URL: https://issues.apache.org/jira/browse/HDFS-12932
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-12932.0.patch
>
>
> In our cluster we see a large number of log messages such as the following:
> {code}
> 2017-12-15 22:55:54,603 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 3 to 3 for 
> {code}
> This is a little confusing since "from 3 to 3" is not "increasing". Digging 
> into it, it seems related to this piece of code:
> {code}
> if (oldBR != -1) {
>   if (oldBR > targetReplication) {
> FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   } else {
> FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   }
> }
> {code}
> Perhaps a {{oldBR == targetReplication}} case is missing?






[jira] [Commented] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295929#comment-16295929
 ] 

genericqa commented on HDFS-12932:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12932 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902714/HDFS-12932.0.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d6bd10d7564 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c7a4dda |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22445/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22445/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Commented] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295930#comment-16295930
 ] 

genericqa commented on HDFS-8068:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-8068 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-8068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12723657/HDFS-8068.v2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22450/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Do not retry rpc calls If the proxy contains unresolved address
> ---
>
> Key: HDFS-8068
> URL: https://issues.apache.org/jira/browse/HDFS-8068
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-8068.v1.patch, HDFS-8068.v2.patch
>
>
> When the InetSocketAddress object happens to be unresolvable (e.g. due to 
> transient DNS issue), the rpc proxy object will not be usable since the 
> client will throw UnknownHostException when a Connection object is created. 
> If FailoverOnNetworkExceptionRetry is used as in the standard HA failover 
> proxy, the call will be retried, but this will never recover.  Instead, the 
> validity of the address must be checked on proxy creation, throwing if it is 
> invalid.
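The proposed fail-fast check can be sketched as follows. This is a minimal illustration of the idea, not the actual patch; `ProxyAddressCheck` and `checkResolved` are hypothetical names, and the real fix would wire such a check into the proxy factory.

```java
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

// Sketch: reject an unresolved address at proxy-creation time, so the
// failover retry policy never spins on a connection that can never succeed.
public class ProxyAddressCheck {
  // Hypothetical helper; in the real code this check would run where the
  // RPC proxy is created.
  public static void checkResolved(InetSocketAddress addr)
      throws UnknownHostException {
    if (addr.isUnresolved()) {
      throw new UnknownHostException(
          "Cannot create proxy for unresolved address: " + addr);
    }
  }
}
```

`InetSocketAddress.isUnresolved()` is true when host resolution failed or was skipped (e.g. `createUnresolved`), which is exactly the state that makes every subsequent connection attempt throw `UnknownHostException`.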






[jira] [Updated] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-18 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11847:
--
Attachment: HDFS-11847.02.patch

Attached v02 patch with more unit tests added.

> Enhance dfsadmin listOpenFiles command to list files blocking datanode 
> decommissioning
> --
>
> Key: HDFS-11847
> URL: https://issues.apache.org/jira/browse/HDFS-11847
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11847.01.patch, HDFS-11847.02.patch
>
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to 
> list all the open files in the system.
> Additionally, it would be very useful to only list open files that are 
> blocking the DataNode decommissioning. With thousand+ node clusters, where 
> there might be machines added and removed regularly for maintenance, any 
> option to monitor and debug decommissioning status is very helpful. Proposal 
> here is to add suboptions to {{listOpenFiles}} for the above case.






[jira] [Commented] (HDFS-12911) [SPS]: Fix review comments from discussions in HDFS-10285

2017-12-18 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295918#comment-16295918
 ] 

Uma Maheswara Rao G commented on HDFS-12911:


I don't want to repeat my answers again and again, as I already gave my 
explanations in HDFS-10285.

However, I would like to give a little clarification on #1, to avoid confusion 
for others.

{quote}
I consider this feature far from complete and I am worried that other feature 
additions will hurt Namenode more and more.
{quote}
That may be your opinion. This feature does not intend to add any new 
policies; please look at the JIRA description/design doc. 
The goal of this work is to make the NN state and the storage state match. 
The current policies are the basic core policies which we persist in the NN, 
and SPS works to satisfy those core policy states. I am not sure whether you 
have more policies that require persistence in the NN at all. Bottom line: SPS 
works only for the policies which are persisted in the NN.
Fancier policies can be built anywhere outside, as I think they don't need to 
persist in the NN, and I am not discussing that work today as it needs good 
time to discuss (IMO, it is a non-trivial feature). Whatever fancy policies 
you may define, they likely fall into the same existing fundamental storage 
policies: HOT, WARM, and COLD. SPS works only for the policies which you 
persist in the NN. *It does not intend to add any new policies on its own; it 
works under the NN-exposed HSM options.*

{quote}
I completely disagree, this is a design choice that was made. Nothing prevents 
this process from working outside the Namenode.
{quote}
Probably I have to say the same thing the other way around. :-) There is not 
much overhead from running inside either.

{quote}
In my humble opinion, if you have not resolved the big picture, then proceeding 
with things like optimizing lock is pointless
{quote}
Thanks for the respect for developers' time. Devs will take commonly agreed 
feedback and work on it; we don't see objection to the lock improvement, and 
it is already the 47th task in HDFS-10285, titled *SPS in NN*. Right now we 
are unable to work on things which others have not yet agreed to.

Keeping arguments aside, from the SPS dev POV we would be happy to work on 
commonly agreed feedback. As a community developer (with my experience), I 
would like to respect everyone's feedback. 
Right now there is no common agreement on the approach for how we start SPS. 
We should probably figure out a way to satisfy all the arguments; an online 
meeting should help discuss those steps. Thanks



> [SPS]: Fix review comments from discussions in HDFS-10285
> -
>
> Key: HDFS-12911
> URL: https://issues.apache.org/jira/browse/HDFS-12911
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
> Attachments: HDFS-12911.00.patch
>
>
> This is the JIRA for tracking the possible improvements or issues discussed 
> in main JIRA
> So far comments to handle
> Daryn:
>  # Lock should not be kept while executing the placement policy.
>  # While starting up the NN, SPS Xattr checks happen even if the feature is 
> disabled. This could potentially impact the startup speed. 
> UMA:
> # I am adding one more possible improvement to reduce Xattr objects 
> significantly.
>  The SPS Xattr is a constant object. So, we create one deduplicated Xattr 
> object once statically and use the same object reference whenever the SPS 
> Xattr needs to be added to an Inode. The additional bytes required for 
> storing the SPS Xattr would then be the same as a single object ref (i.e. 4 
> bytes on 32-bit). So the Xattr overhead should come down significantly IMO. 
> Let's explore the feasibility of this option.
> The Xattr list Feature will not be specially created for SPS; that list 
> would have already been created by SetStoragePolicy on the same directory. 
> So, no extra Feature creation because of SPS alone.
> # Currently SPS puts Long id objects in a queue for tracking the Inodes on 
> which SPS was called. So, they are additionally created, and the size of 
> each would be (obj ref + value) = (8 + 8) bytes [ignoring alignment for the 
> time being].
> The possible improvement here is, instead of creating a new Long obj, we 
> can keep the existing inode object for tracking. The advantage is that the 
> Inode object is already maintained in the NN, so no new object creation is 
> needed; we just need to maintain one obj ref. The above two points should 
> significantly reduce the memory requirements of SPS. So, per SPS call: 8 
> bytes for called-inode tracking + 8 bytes for the Xattr ref.
> # Use LightWeightLinkedSet instead of LinkedList for the from-Q. This will 
> reduce unnecessary Node creations inside LinkedList. 




[jira] [Updated] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12751:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~nandakumar131] Thanks for filing and reviewing this patch. [~vagarychen] 
Thanks for the contribution. I have committed this to the feature branch.

> Ozone: SCM: update container allocated size to container db for all the open 
> containers in ContainerStateManager#close
> --
>
> Key: HDFS-12751
> URL: https://issues.apache.org/jira/browse/HDFS-12751
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Chen Liang
> Fix For: HDFS-7240
>
> Attachments: HDFS-12751-HDFS-7240.001.patch, 
> HDFS-12751-HDFS-7240.002.patch
>
>
> Container allocated size is maintained in memory by 
> {{ContainerStateManager}}; this has to be updated in the container db when 
> we shut down SCM. {{ContainerStateManager#close}} will be called during SCM 
> shutdown, so updating the allocated size for all the open containers should 
> be done there.






[jira] [Commented] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295867#comment-16295867
 ] 

Anu Engineer commented on HDFS-12751:
-

+1, I will commit this shortly.


> Ozone: SCM: update container allocated size to container db for all the open 
> containers in ContainerStateManager#close
> --
>
> Key: HDFS-12751
> URL: https://issues.apache.org/jira/browse/HDFS-12751
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Chen Liang
> Attachments: HDFS-12751-HDFS-7240.001.patch, 
> HDFS-12751-HDFS-7240.002.patch
>
>
> Container allocated size is maintained in memory by 
> {{ContainerStateManager}}; this has to be updated in the container db when 
> we shut down SCM. {{ContainerStateManager#close}} will be called during SCM 
> shutdown, so updating the allocated size for all the open containers should 
> be done there.






[jira] [Updated] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12347:
--
Status: Patch Available  (was: Open)

> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.5, 3.0.0-beta1, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295857#comment-16295857
 ] 

Bharat Viswanadham commented on HDFS-12347:
---

Thank you [~szetszwo] for the inputs.
After reducing the number of datanodes to 20, the test case passed. Attached 
the patch.

Note: Even with the number of datanodes set to 30 it passed, but changing it 
to 40 sometimes caused the above exception, and a few times the return status 
from the balancer was -3 (i.e. no block has been moved for the specified 
number of consecutive iterations, 5 by default). So, when the number of 
datanodes is set to 40, the test case behaves strangely.

Tried the following when it threw error -3:
1. Increased maxIdleIteration; still the same error.
2. Increased the number of new datanodes added.





> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1, 2.7.5, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12795) Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER

2017-12-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295855#comment-16295855
 ] 

Anu Engineer commented on HDFS-12795:
-

[~nandakumar131] I ran this on my dev box and I see some test failures. Could 
you please take a look? I am not sure whether all of the failures are related 
to this patch, but the TestSCMCli ones might be.
* TestKeySpaceManager.testExpiredOpenKey:1104 expected:<4> but was:<5>
*  TestSCMCli.testCloseContainer:452 expected:<1> but was:<3>
*  TestSCMCli.testDeleteContainer:180 » Remote Failed to update container state 
n...
*  TestSCMCli.testInfoContainer:306 » Remote Failed to update container state 
Con...
* TestContainerStateManager.testUpdateContainerState:243 » SCM Failed to update 
..

> Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and 
> LifeCycleEvent FULL_CONTAINER
> 
>
> Key: HDFS-12795
> URL: https://issues.apache.org/jira/browse/HDFS-12795
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12795-HDFS-7240.000.patch
>
>
> To bring in support for close container, SCM has to have Container 
> LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER.
> {noformat}
> States: OPEN-->PENDING_CLOSE-->[CLOSED]
> Events:   (FULL_CONTAINER)(CLOSE)
> {noformat}
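The transition table above can be sketched as a small state machine. This is illustrative only: the state and event names come from the JIRA description, and `ContainerLifecycle` with its `next` method is a hypothetical shape, not the actual SCM code in the attached patch.

```java
// Illustrative sketch of the proposed container lifecycle transitions:
// OPEN --FULL_CONTAINER--> PENDING_CLOSE --CLOSE--> CLOSED.
public class ContainerLifecycle {
  public enum LifeCycleState { OPEN, PENDING_CLOSE, CLOSED }
  public enum LifeCycleEvent { FULL_CONTAINER, CLOSE }

  public static LifeCycleState next(LifeCycleState s, LifeCycleEvent e) {
    if (s == LifeCycleState.OPEN && e == LifeCycleEvent.FULL_CONTAINER) {
      return LifeCycleState.PENDING_CLOSE; // container has filled up
    }
    if (s == LifeCycleState.PENDING_CLOSE && e == LifeCycleEvent.CLOSE) {
      return LifeCycleState.CLOSED;        // close has completed
    }
    throw new IllegalStateException("Invalid transition: " + s + " on " + e);
  }
}
```

Any other state/event combination is rejected, which matches the intent that a container can only be closed after it has been marked full.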






[jira] [Updated] (HDFS-12347) TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently

2017-12-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12347:
--
Attachment: HDFS-12347.00.patch

> TestBalancerRPCDelay#testBalancerRPCDelay fails very frequently
> ---
>
> Key: HDFS-12347
> URL: https://issues.apache.org/jira/browse/HDFS-12347
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-beta1, 2.7.5, 3.0.1
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>Priority: Critical
> Attachments: HDFS-12347.00.patch, trunk.failed.xml
>
>
> Seems to be failing consistently on trunk from yesterday-ish.
> A sample failure is 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20824/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerRPCDelay/testBalancerRPCDelay/
> Running locally failed with:
> {noformat}
>  type="java.lang.AssertionError">
> {noformat}






[jira] [Commented] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295823#comment-16295823
 ] 

Wei Yan commented on HDFS-12932:


+1 LGTM. Waiting for Jenkins.

> Confusing LOG message for block replication
> ---
>
> Key: HDFS-12932
> URL: https://issues.apache.org/jira/browse/HDFS-12932
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-12932.0.patch, HDFS-12932.1.patch
>
>
> In our cluster we see a large number of log messages such as the following:
> {code}
> 2017-12-15 22:55:54,603 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 3 to 3 for 
> {code}
> This is a little confusing since "from 3 to 3" is not "increasing". Digging 
> into it, it seems related to this piece of code:
> {code}
> if (oldBR != -1) {
>   if (oldBR > targetReplication) {
> FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   } else {
> FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   }
> }
> {code}
> Perhaps a {{oldBR == targetReplication}} case is missing?






[jira] [Commented] (HDFS-12795) Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER

2017-12-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295817#comment-16295817
 ] 

Anu Engineer commented on HDFS-12795:
-

I am +1 on these changes. Since we don't have a Jenkins run on this patch yet, 
I have scheduled one here.

https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDFS-Build/detail/PreCommit-HDFS-Build/22446/pipeline

As soon as we get a Jenkins run I will commit this patch.


> Ozone: SCM: Support for Container LifeCycleState PENDING_CLOSE and 
> LifeCycleEvent FULL_CONTAINER
> 
>
> Key: HDFS-12795
> URL: https://issues.apache.org/jira/browse/HDFS-12795
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12795-HDFS-7240.000.patch
>
>
> To bring in support for close container, SCM has to have Container 
> LifeCycleState PENDING_CLOSE and LifeCycleEvent FULL_CONTAINER.
> {noformat}
> States: OPEN-->PENDING_CLOSE-->[CLOSED]
> Events:   (FULL_CONTAINER)(CLOSE)
> {noformat}






[jira] [Updated] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12932:

Attachment: HDFS-12932.1.patch

Thanks [~vagarychen] for taking a look! Yes I agree it's probably better to 
keep the log messages for the {{==}} case. Attaching patch v1.
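The handling of the `{{==}}` case discussed here can be sketched as below. This is an illustration of the idea, not the actual patch: `ReplicationLogMessage` and `replicationChangeMessage` are hypothetical names, and the real code logs directly via `FSDirectory.LOG` rather than returning a string.

```java
// Sketch: choose the log message based on how the replication factor
// changes, covering the previously-missing oldBR == targetReplication case
// that produced "Increasing replication from 3 to 3".
public class ReplicationLogMessage {
  public static String replicationChangeMessage(int oldBR, int target) {
    if (oldBR > target) {
      return "Decreasing replication from " + oldBR + " to " + target;
    } else if (oldBR < target) {
      return "Increasing replication from " + oldBR + " to " + target;
    } else {
      return "Replication remains unchanged at " + oldBR;
    }
  }
}
```

With an explicit equality branch, a `setReplication` call that keeps the same factor no longer claims to be "increasing" it.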

> Confusing LOG message for block replication
> ---
>
> Key: HDFS-12932
> URL: https://issues.apache.org/jira/browse/HDFS-12932
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-12932.0.patch, HDFS-12932.1.patch
>
>
> In our cluster we see large number of log messages such as the following:
> {code}
> 2017-12-15 22:55:54,603 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 3 to 3 for 
> {code}
> This is a little confusing since "from 3 to 3" is not "increasing". Digging 
> into it, it seems related to this piece of code:
> {code}
> if (oldBR != -1) {
>   if (oldBR > targetReplication) {
> FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   } else {
> FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   }
> }
> {code}
> Perhaps a {{oldBR == targetReplication}} case is missing?
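A minimal sketch of what handling the {{==}} case could look like. This is illustrative only: the helper returns the message instead of logging, and the wording is an assumption, not the committed patch.

```java
// Hedged sketch of adding the missing == branch; message wording and the
// return-a-string shape are assumptions, not the committed patch.
class ReplicationLogDemo {
    static String replicationMessage(int oldBR, int targetReplication, String path) {
        if (oldBR == -1) {
            return null; // no previous replication factor recorded
        }
        if (oldBR > targetReplication) {
            return String.format("Decreasing replication from %d to %d for %s",
                oldBR, targetReplication, path);
        } else if (oldBR < targetReplication) {
            return String.format("Increasing replication from %d to %d for %s",
                oldBR, targetReplication, path);
        } else {
            // the third branch: replication is unchanged
            return String.format("Replication remains unchanged at %d for %s",
                oldBR, path);
        }
    }
}
```

With the explicit third branch, a "from 3 to 3" request produces an "unchanged" message rather than a misleading "Increasing" one.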



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12890) Ozone: XceiverClient should have upper bound on async requests

2017-12-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12890:

  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~msingh] Thanks for the thoughtful review and comments. [~shashikant] Thanks 
for the contribution. I have committed this to the feature branch.

> Ozone: XceiverClient should have upper bound on async requests
> --
>
> Key: HDFS-12890
> URL: https://issues.apache.org/jira/browse/HDFS-12890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12890-HDFS-7240.001.patch, 
> HDFS-12890-HDFS-7240.002.patch, HDFS-12890-HDFS-7240.003.patch, 
> HDFS-12890-HDFS-7240.004.patch, HDFS-12890-HDFS-7240.005.patch
>
>
> XceiverClient-ratis maintains an upper bound on the number of outstanding 
> async requests. XceiverClient should also impose an upper bound on the 
> number of outstanding async requests received from the client for writes.
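One common way to impose such a bound is a counting semaphore that is acquired per request and released on completion. The sketch below is a hedged illustration; the class and method names are invented, not the actual XceiverClient API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

// Hedged sketch of bounding outstanding async requests with a counting
// semaphore; class and method names are illustrative, not XceiverClient.
class BoundedAsyncClient {
    private final Semaphore permits;

    BoundedAsyncClient(int maxOutstanding) {
        this.permits = new Semaphore(maxOutstanding);
    }

    CompletableFuture<String> sendAsync(String request) {
        permits.acquireUninterruptibly();       // blocks once the bound is hit
        CompletableFuture<String> future =
            CompletableFuture.supplyAsync(() -> "ack:" + request);
        future.whenComplete((r, t) -> permits.release()); // free the slot
        return future;
    }
}
```

Releasing in {{whenComplete}} frees a permit on success and failure alike, so a burst of errors cannot leak the bound.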



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295772#comment-16295772
 ] 

Chen Liang commented on HDFS-12932:
---

Thanks [~csun] for the catch! I think it may be better to add a third branch 
to catch the {{==}} case and just log a message saying the replication remains 
unchanged at that value, because the current code always outputs a message for 
all three cases of {{=}}, {{<}}, and {{>}}. I think it's probably better we 
don't change that behavior here.

> Confusing LOG message for block replication
> ---
>
> Key: HDFS-12932
> URL: https://issues.apache.org/jira/browse/HDFS-12932
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-12932.0.patch
>
>
> In our cluster we see large number of log messages such as the following:
> {code}
> 2017-12-15 22:55:54,603 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 3 to 3 for 
> {code}
> This is a little confusing since "from 3 to 3" is not "increasing". Digging 
> into it, it seems related to this piece of code:
> {code}
> if (oldBR != -1) {
>   if (oldBR > targetReplication) {
> FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   } else {
> FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   }
> }
> {code}
> Perhaps a {{oldBR == targetReplication}} case is missing?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12932:

Attachment: HDFS-12932.0.patch

Attaching patch v0 to address the issue.

> Confusing LOG message for block replication
> ---
>
> Key: HDFS-12932
> URL: https://issues.apache.org/jira/browse/HDFS-12932
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-12932.0.patch
>
>
> In our cluster we see large number of log messages such as the following:
> {code}
> 2017-12-15 22:55:54,603 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 3 to 3 for 
> {code}
> This is a little confusing since "from 3 to 3" is not "increasing". Digging 
> into it, it seems related to this piece of code:
> {code}
> if (oldBR != -1) {
>   if (oldBR > targetReplication) {
> FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   } else {
> FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   }
> }
> {code}
> Perhaps a {{oldBR == targetReplication}} case is missing?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12932) Confusing LOG message for block replication

2017-12-18 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12932:

Status: Patch Available  (was: Open)

> Confusing LOG message for block replication
> ---
>
> Key: HDFS-12932
> URL: https://issues.apache.org/jira/browse/HDFS-12932
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-12932.0.patch
>
>
> In our cluster we see large number of log messages such as the following:
> {code}
> 2017-12-15 22:55:54,603 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 3 to 3 for 
> {code}
> This is a little confusing since "from 3 to 3" is not "increasing". Digging 
> into it, it seems related to this piece of code:
> {code}
> if (oldBR != -1) {
>   if (oldBR > targetReplication) {
> FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   } else {
> FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
>  oldBR, targetReplication, iip.getPath());
>   }
> }
> {code}
> Perhaps a {{oldBR == targetReplication}} case is missing?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295684#comment-16295684
 ] 

Hudson commented on HDFS-12818:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13395 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13395/])
HDFS-12818. Support multiple storages in DataNodeCluster / (shv: rev 
94576b17fbc19c440efafb6c3322f53ec78a5b55)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDatasetWithMultipleStorages.java


> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch, HDFS-12818.007.patch, 
> HDFS-12818.008.patch, HDFS-12818.009.patch, HDFS-12818.010.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-18 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12818:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

I just committed this to trunk, branch-3, and branch-2. Thank you [~xkrogen].
LMK if we need it deeper down.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch, HDFS-12818.007.patch, 
> HDFS-12818.008.patch, HDFS-12818.009.patch, HDFS-12818.010.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-12-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12698:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~elek] Thanks for the contribution. I have committed this to the feature 
branch.

> Ozone: Use time units in the Ozone configuration values
> ---
>
> Key: HDFS-12698
> URL: https://issues.apache.org/jira/browse/HDFS-12698
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12698-HDFS-7240.001.patch, 
> HDFS-12698-HDFS-7240.002.patch, HDFS-12698-HDFS-7240.003.patch, 
> HDFS-12698-HDFS-7240.005.patch, HDFS-12698-HDFS-7240.006.patch, 
> HDFS-12698-HDFS-7240.007.patch, HDFS-12698-HDFS-7240.008.patch, 
> HDFS-12698-HDFS-7240.009.patch, HDFS-12698-HDFS-7240.010.patch
>
>
> HDFS-9847 introduced a new way to configure time-related settings by using 
> a time unit in the value (e.g. 10s, 5m, ...).
> Because of this new behavior, I have seen a lot of warnings during my tests:
> {code}
> 2017-10-19 18:35:19,955 [main] INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1306)) - No unit for 
> scm.container.client.idle.threshold(1) assuming MILLISECONDS
> {code}
> So we need to add the time unit for every configuration. Unfortunately, we 
> have a few configuration parameters which include the unit in the key name 
> (e.g. dfs.cblock.block.buffer.flush.interval.seconds or 
> ozone.container.report.interval.ms).
> I suggest removing all the units from the key names and following the new 
> convention, where any of the units can be used.
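For reference, unit-suffixed values like {{10s}} or {{5m}} can be parsed along these lines. This is a simplified, hedged stand-in for Hadoop's {{Configuration.getTimeDuration}}; the real implementation supports more suffixes and emits the deprecation warning quoted above when no unit is given.

```java
import java.util.concurrent.TimeUnit;

// Simplified, hedged stand-in for Configuration.getTimeDuration; the real
// implementation supports more suffixes (ns, us, h, d) and logs a
// deprecation warning when no unit is present.
class TimeValueDemo {
    static long toMillis(String value) {
        String v = value.trim().toLowerCase();
        if (v.endsWith("ms")) {
            return Long.parseLong(v.substring(0, v.length() - 2));
        } else if (v.endsWith("s")) {
            return TimeUnit.SECONDS.toMillis(
                Long.parseLong(v.substring(0, v.length() - 1)));
        } else if (v.endsWith("m")) {
            return TimeUnit.MINUTES.toMillis(
                Long.parseLong(v.substring(0, v.length() - 1)));
        }
        // no unit: assume milliseconds, matching the quoted warning
        return Long.parseLong(v);
    }
}
```

Checking the two-character {{ms}} suffix before the one-character {{s}} suffix is what keeps {{250ms}} from being misread as 250 seconds.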



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-12-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295638#comment-16295638
 ] 

Anu Engineer commented on HDFS-12698:
-

+1, I will commit this shortly.


> Ozone: Use time units in the Ozone configuration values
> ---
>
> Key: HDFS-12698
> URL: https://issues.apache.org/jira/browse/HDFS-12698
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12698-HDFS-7240.001.patch, 
> HDFS-12698-HDFS-7240.002.patch, HDFS-12698-HDFS-7240.003.patch, 
> HDFS-12698-HDFS-7240.005.patch, HDFS-12698-HDFS-7240.006.patch, 
> HDFS-12698-HDFS-7240.007.patch, HDFS-12698-HDFS-7240.008.patch, 
> HDFS-12698-HDFS-7240.009.patch, HDFS-12698-HDFS-7240.010.patch
>
>
> HDFS-9847 introduced a new way to configure time-related settings by using 
> a time unit in the value (e.g. 10s, 5m, ...).
> Because of this new behavior, I have seen a lot of warnings during my tests:
> {code}
> 2017-10-19 18:35:19,955 [main] INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1306)) - No unit for 
> scm.container.client.idle.threshold(1) assuming MILLISECONDS
> {code}
> So we need to add the time unit for every configuration. Unfortunately, we 
> have a few configuration parameters which include the unit in the key name 
> (e.g. dfs.cblock.block.buffer.flush.interval.seconds or 
> ozone.container.report.interval.ms).
> I suggest removing all the units from the key names and following the new 
> convention, where any of the units can be used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295633#comment-16295633
 ] 

genericqa commented on HDFS-12698:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12698 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295609#comment-16295609
 ] 

genericqa commented on HDFS-12799:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12799 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902683/HDFS-12799-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 60964c1fa01b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 43a1334 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22444/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Commented] (HDFS-12555) HDFS federation should support configure secondary directory

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295519#comment-16295519
 ] 

genericqa commented on HDFS-12555:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  6s{color} | {color:orange} root: The patch generated 35 new + 14 unchanged 
- 0 fixed = 49 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 33s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}233m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Return value of new InodeTree$ResolveResult(InodeTree$ResultKind, Object, 
String, Path) ignored, but method has no side effect  At InodeTree.java:Object, 
String, Path) ignored, but method has no side effect  At InodeTree.java:[line 
786] |
| Failed junit tests | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
|   | hadoop.fs.viewfs.TestViewFsLocalFs |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFsConfig |
|   | 

[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295513#comment-16295513
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 54s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 393 unchanged - 
1 fixed = 394 total (was 394) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 52 unchanged - 8 fixed = 52 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902669/HDFS-12818.010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 28dbaaa1aea1 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0010089 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |

[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-12-18 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295503#comment-16295503
 ] 

Virajith Jalaparti commented on HDFS-12665:
---

fix-version is set now. Thanks [~vinodkv]

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.1.0
>
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch, 
> HDFS-12665-HDFS-9806.004.patch, HDFS-12665-HDFS-9806.005.patch, 
> HDFS-12665-HDFS-9806.006.patch, HDFS-12665-HDFS-9806.007.patch, 
> HDFS-12665-HDFS-9806.008.patch, HDFS-12665-HDFS-9806.009.patch, 
> HDFS-12665-HDFS-9806.010.patch, HDFS-12665-HDFS-9806.011.patch, 
> HDFS-12665-HDFS-9806.012.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).
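The key/value layout described above can be sketched in plain Java. This is a toy model only: the actual HDFS-12665 patch encodes keys and values with protobuf and stores them in LevelDB, whereas here a TreeMap ordered on raw bytes stands in for the LevelDB store, and all class and method names are illustrative.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.TreeMap;

/**
 * Toy sketch of the in-memory alias map: (blockpool, blockid, genstamp)
 * keys mapped to (url, offset, length) values. A TreeMap with
 * lexicographically ordered byte[] keys stands in for LevelDB, which
 * also orders entries by key bytes. Not the actual HDFS-12665 classes.
 */
public class ToyAliasMap {
    private final TreeMap<byte[], String> store =
        new TreeMap<>(Arrays::compare);

    /** Key: UTF-8 blockpool id, then blockId and genStamp as big-endian longs. */
    static byte[] encodeKey(String blockPool, long blockId, long genStamp) {
        byte[] bp = blockPool.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(bp.length + 16)
            .put(bp).putLong(blockId).putLong(genStamp).array();
    }

    public void put(String bp, long blockId, long genStamp,
                    String url, long offset, long length) {
        // a comma-joined string stands in for the protobuf value
        store.put(encodeKey(bp, blockId, genStamp),
                  url + "," + offset + "," + length);
    }

    public String get(String bp, long blockId, long genStamp) {
        return store.get(encodeKey(bp, blockId, genStamp));
    }
}
```

The byte-ordered key layout is what makes range scans over one blockpool cheap in a store like LevelDB, since all of a pool's blocks are contiguous in key order.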



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-12-18 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295502#comment-16295502
 ] 

Virajith Jalaparti commented on HDFS-12591:
---

Set now. Thanks [~vinodkv]

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-12591-HDFS-9806.001.patch, 
> HDFS-12591-HDFS-9806.002.patch, HDFS-12591-HDFS-9806.003.patch, 
> HDFS-12591-HDFS-9806.004.patch, HDFS-12591-HDFS-9806.005.patch, 
> HDFS-12591-HDFS-9806.006.patch, HDFS-12591-HDFS-9806.007.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-12-18 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295495#comment-16295495
 ] 

Virajith Jalaparti commented on HDFS-12713:
---

[~vinodkv] - The fix-version and the reviewed flag are now set for all the 
sub-tasks of HDFS-9806. Thanks for checking this.

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Fix For: 3.1.0
>
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, 
> HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, 
> HDFS-12713-HDFS-9806.006.patch, HDFS-12713-HDFS-9806.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Hadoop Flags: Reviewed

> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch, HDFS-11640-HDFS-9806.003.patch, 
> HDFS-11640-HDFS-9806.004.patch, HDFS-11640-HDFS-9806.005.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12937) RBF: Add more unit tests for router admin commands

2017-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295492#comment-16295492
 ] 

Íñigo Goiri commented on HDFS-12937:


[^HDFS-12937.001.patch] looks good.
Not sure if it makes sense to do the cleanup of stdout in a @Before or 
@After method.
Other than that, the new unit tests cover {{add}}, {{ls}}, and {{rm}} pretty 
exhaustively and they pass for QA.
+1

> RBF: Add more unit tests for router admin commands
> --
>
> Key: HDFS-12937
> URL: https://issues.apache.org/jira/browse/HDFS-12937
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12937.001.patch
>
>
> Adding more unit tests to ensure that router admin commands work well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12671) [READ] Test NameNode restarts when PROVIDED is configured

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12671:
--
Hadoop Flags: Reviewed

> [READ] Test NameNode restarts when PROVIDED is configured
> -
>
> Key: HDFS-12671
> URL: https://issues.apache.org/jira/browse/HDFS-12671
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12671-HDFS-9806.001.patch, 
> HDFS-12671-HDFS-9806.002.patch, HDFS-12671-HDFS-9806.003.patch, 
> HDFS-12671-HDFS-9806.004.patch
>
>
> Add test case to ensure namenode restarts can be handled with provided 
> storage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Hadoop Flags: Reviewed

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch, 
> HDFS-11902-HDFS-9806.012.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11673) [READ] Handle failures of Datanode with PROVIDED storage

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11673:
--
Hadoop Flags: Reviewed

> [READ] Handle failures of Datanode with PROVIDED storage
> 
>
> Key: HDFS-11673
> URL: https://issues.apache.org/jira/browse/HDFS-11673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11673-HDFS-9806.001.patch, 
> HDFS-11673-HDFS-9806.002.patch, HDFS-11673-HDFS-9806.003.patch, 
> HDFS-11673-HDFS-9806.004.patch, HDFS-11673-HDFS-9806.005.patch
>
>
> Blocks on {{PROVIDED}} storage should become unavailable if and only if all 
> Datanodes that are configured with {{PROVIDED}} storage become unavailable. 
> Even if one Datanode with {{PROVIDED}} storage is available, all blocks on 
> the {{PROVIDED}} storage should be accessible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12591:
--
Hadoop Flags: Reviewed

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-12591-HDFS-9806.001.patch, 
> HDFS-12591-HDFS-9806.002.patch, HDFS-12591-HDFS-9806.003.patch, 
> HDFS-12591-HDFS-9806.004.patch, HDFS-12591-HDFS-9806.005.patch, 
> HDFS-12591-HDFS-9806.006.patch, HDFS-12591-HDFS-9806.007.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12887:
--
Hadoop Flags: Reviewed

> [READ] Allow Datanodes with Provided volumes to start when blocks with the 
> same id exist locally
> 
>
> Key: HDFS-12887
> URL: https://issues.apache.org/jira/browse/HDFS-12887
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12887-HDFS-9806.001.patch
>
>
> Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when 
> an existing block in the volumemap has the same id.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12584:
--
Hadoop Flags: Reviewed

> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors from the latest rebase in the FSImage generation tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Hadoop Flags: Reviewed

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Fix For: 3.1.0
>
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, 
> HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, 
> HDFS-12713-HDFS-9806.006.patch, HDFS-12713-HDFS-9806.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10675) [READ] Datanode support to read from external stores.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10675:
--
Hadoop Flags: Reviewed

> [READ] Datanode support to read from external stores.
> -
>
> Key: HDFS-10675
> URL: https://issues.apache.org/jira/browse/HDFS-10675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-10675-HDFS-9806.001.patch, 
> HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, 
> HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch, 
> HDFS-10675-HDFS-9806.006.patch, HDFS-10675-HDFS-9806.007.patch, 
> HDFS-10675-HDFS-9806.008.patch, HDFS-10675-HDFS-9806.009.patch
>
>
> This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external 
> stores, along with enabling the Datanode to read from such stores using a 
> {{ProvidedReplica}} and a {{ProvidedVolume}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11653) [READ] ProvidedReplica should return an InputStream that is bounded by its length

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11653:
--
Hadoop Flags: Reviewed

> [READ] ProvidedReplica should return an InputStream that is bounded by its 
> length
> -
>
> Key: HDFS-11653
> URL: https://issues.apache.org/jira/browse/HDFS-11653
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11653-HDFS-9806.001.patch, 
> HDFS-11653-HDFS-9806.002.patch
>
>
> {{ProvidedReplica#getDataInputStream}} should return an InputStream that is 
> bounded by {{ProvidedReplica#getBlockDataLength()}}
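The bounding described above can be illustrated with a small stream wrapper. HDFS has its own implementation; this sketch only overrides the two read methods a simple consumer needs, and the class name is illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Minimal sketch: wrap a replica's raw stream so reads never go past
 * the block's data length. Once the budget is spent, read() reports
 * EOF even if the underlying stream has more bytes.
 */
public class BoundedStream extends FilterInputStream {
    private long remaining;

    public BoundedStream(InputStream in, long length) {
        super(in);
        this.remaining = length;
    }

    @Override
    public int read() throws IOException {
        if (remaining <= 0) return -1;   // bound reached: report EOF
        int b = in.read();
        if (b >= 0) remaining--;
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (remaining <= 0) return -1;
        int n = in.read(buf, off, (int) Math.min(len, remaining));
        if (n > 0) remaining -= n;
        return n;
    }
}
```

Without such a bound, a reader of a provided replica could run past the block boundary into unrelated data in the backing file.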



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12893:
--
Hadoop Flags: Reviewed

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12779:
--
Hadoop Flags: Reviewed

> [READ] Allow cluster id to be specified to the Image generation tool
> 
>
> Key: HDFS-12779
> URL: https://issues.apache.org/jira/browse/HDFS-12779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HDFS-12779-HDFS-9806.001.patch
>
>
> Setting the cluster id for the FSImage generated for PROVIDED files is 
> required when the Namenode for PROVIDED files is expected to run in 
> federation with other Namenodes that manage local storage/data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Hadoop Flags: Reviewed

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Hadoop Flags: Reviewed

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch, HDFS-12777-HDFS-9806.003.patch, 
> HDFS-12777-HDFS-9806.004.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.
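Point (b) above can be sketched as follows: instead of every provided replica holding its own full URI string, replicas share one base URI and keep only the relative suffix, so millions of replicas do not each duplicate the common prefix. This is illustrative only; the field and class names are not the actual HDFS ones.

```java
import java.net.URI;

/**
 * Sketch of prefix sharing for provided replicas: one base URI is stored
 * once per volume, and each replica keeps only the suffix relative to it.
 * The full URI is reconstructed on demand via resolve().
 */
public class SuffixReplica {
    private static URI base;        // shared prefix, stored once per volume
    private final String suffix;    // per-replica remainder only

    static void setBase(URI b) { base = b; }

    SuffixReplica(URI full) {
        // relativize() strips the shared prefix, keeping only the suffix
        this.suffix = base.relativize(full).toString();
    }

    URI getFullURI() { return base.resolve(suffix); }
    String getSuffix() { return suffix; }
}
```

For a volume with millions of blocks under one store path, the saving is roughly the prefix length times the block count, at the cost of one resolve() per access.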



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Hadoop Flags: Reviewed

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12665:
--
Hadoop Flags: Reviewed

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Fix For: 3.1.0
>
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch, 
> HDFS-12665-HDFS-9806.004.patch, HDFS-12665-HDFS-9806.005.patch, 
> HDFS-12665-HDFS-9806.006.patch, HDFS-12665-HDFS-9806.007.patch, 
> HDFS-12665-HDFS-9806.008.patch, HDFS-12665-HDFS-9806.009.patch, 
> HDFS-12665-HDFS-9806.010.patch, HDFS-12665-HDFS-9806.011.patch, 
> HDFS-12665-HDFS-9806.012.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12894) [READ] Skip setting block count of ProvidedDatanodeStorageInfo on DN registration update

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12894:
--
Hadoop Flags: Reviewed

> [READ] Skip setting block count of ProvidedDatanodeStorageInfo on DN 
> registration update
> 
>
> Key: HDFS-12894
> URL: https://issues.apache.org/jira/browse/HDFS-12894
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12894-HDFS-9806.001.patch, 
> HDFS-12894-HDFS-9806.002.patch
>
>
> As the {{ProvidedDatanodeStorageInfo}} is shared across multiple Datanodes, 
> its block count shouldn't be set to 0 (in 
> {{DatanodeDescriptor.updateRegInfo}}) when any one Datanode's registration 
> info is updated. This prevents {{processFirstBlockReport}} from being called 
> multiple times for {{ProvidedDatanodeStorageInfo}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12607:
--
Hadoop Flags: Reviewed

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607-HDFS-9806.003.patch, 
> HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.
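The state rule described above can be modeled with a toy class: the shared PROVIDED storage is FAILED only while zero datanodes with PROVIDED storage are alive, and it flips back to NORMAL as soon as one reports in. Names are illustrative, not the actual ProvidedStorageMap code.

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Toy model of the intended PROVIDED storage state: track the set of
 * live datanodes that expose PROVIDED storage, and derive the state
 * from whether that set is empty.
 */
public class ProvidedState {
    enum State { NORMAL, FAILED }

    private final Set<String> aliveProvidedDns = new HashSet<>();

    void dnAlive(String dnId) { aliveProvidedDns.add(dnId); }
    void dnDead(String dnId)  { aliveProvidedDns.remove(dnId); }

    /** FAILED iff no datanode with PROVIDED storage is currently alive. */
    State state() {
        return aliveProvidedDns.isEmpty() ? State.FAILED : State.NORMAL;
    }
}
```

Deriving the state from the live set, rather than latching it on the first dead datanode, is what prevents a single DN failure from permanently marking the shared storage FAILED.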






[jira] [Updated] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11791:
--
Hadoop Flags: Reviewed

> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11791-HDFS-9806.001.patch, 
> HDFS-11791-HDFS-9806.002.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).






[jira] [Updated] (HDFS-10706) [READ] Add tool generating FSImage from external store

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10706:
--
Hadoop Flags: Reviewed

> [READ] Add tool generating FSImage from external store
> --
>
> Key: HDFS-10706
> URL: https://issues.apache.org/jira/browse/HDFS-10706
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, tools
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Fix For: 3.1.0
>
> Attachments: HDFS-10706-HDFS-9806.002.patch, 
> HDFS-10706-HDFS-9806.003.patch, HDFS-10706-HDFS-9806.004.patch, 
> HDFS-10706-HDFS-9806.005.patch, HDFS-10706-HDFS-9806.006.patch, 
> HDFS-10706.001.patch, HDFS-10706.002.patch
>
>
> To experiment with provided storage, this provides a tool to map an external 
> namespace to an FSImage/NN storage. By loading it in a NN, one can access the 
> remote FS using HDFS.






[jira] [Updated] (HDFS-11663) [READ] Fix NullPointerException in ProvidedBlocksBuilder

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11663:
--
Hadoop Flags: Reviewed

> [READ] Fix NullPointerException in ProvidedBlocksBuilder
> 
>
> Key: HDFS-11663
> URL: https://issues.apache.org/jira/browse/HDFS-11663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11663-HDFS-9806.001.patch, 
> HDFS-11663-HDFS-9806.002.patch, HDFS-11663-HDFS-9806.003.patch
>
>
> When there are no Datanodes with PROVIDED storage, 
> {{ProvidedBlocksBuilder#build}} leads to a {{NullPointerException}}.






[jira] [Updated] (HDFS-12885) Add visibility/stability annotations

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12885:
--
Hadoop Flags: Reviewed

> Add visibility/stability annotations
> 
>
> Key: HDFS-12885
> URL: https://issues.apache.org/jira/browse/HDFS-12885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HDFS-12885-HDFS-9806.00.patch, 
> HDFS-12885-HDFS-9806.001.patch
>
>
> Classes added in HDFS-9806 should include stability/visibility annotations 
> (HADOOP-5073)






[jira] [Updated] (HDFS-12809) [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12809:
--
Hadoop Flags: Reviewed

> [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.
> --
>
> Key: HDFS-12809
> URL: https://issues.apache.org/jira/browse/HDFS-12809
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12809-HDFS-9806.001.patch, 
> HDFS-12809-HDFS-9806.002.patch
>
>
> Calling {{getBlockLocations}} on files that have a PROVIDED replica results 
> in the datanode locations being selected at random. Currently, this 
> randomization uses the datanode uuids to pick a node at random 
> ({{ProvidedDescriptor#choose}}, {{ProvidedDescriptor#chooseRandom}}). 
> Depending on the distribution of the datanode UUIDs, this can lead to a 
> large number of iterations (which may not terminate) before a location is 
> chosen. This JIRA aims to replace this with a more efficient randomization 
> strategy.
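A more efficient strategy than probing the UUID space is to collect the eligible datanodes once and then index into them with a single random draw. A minimal sketch under simplified types (plain strings instead of {{DatanodeDescriptor}}; the class and method names are illustrative, not the actual {{ProvidedDescriptor}} code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

class RandomLocationChooser {
  private static final Random RAND = new Random();

  /**
   * Pick one datanode uniformly at random from the candidates, skipping
   * excluded nodes. Unlike rejection-sampling over a UUID-keyed map, this
   * always terminates after one filtering pass plus a single index lookup.
   */
  static String chooseRandom(List<String> datanodes, Set<String> excluded) {
    List<String> eligible = new ArrayList<>();
    for (String dn : datanodes) {
      if (!excluded.contains(dn)) {
        eligible.add(dn);
      }
    }
    if (eligible.isEmpty()) {
      return null; // no eligible location to return
    }
    return eligible.get(RAND.nextInt(eligible.size()));
  }
}
```

The filtering pass costs O(n) per call but is bounded, whereas repeated random probing can degenerate when the UUID distribution is skewed.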






[jira] [Updated] (HDFS-11703) [READ] Tests for ProvidedStorageMap

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11703:
--
Hadoop Flags: Reviewed

> [READ] Tests for ProvidedStorageMap
> ---
>
> Key: HDFS-11703
> URL: https://issues.apache.org/jira/browse/HDFS-11703
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11703-HDFS-9806.001.patch, 
> HDFS-11703-HDFS-9806.002.patch
>
>
> Add tests for the {{ProvidedStorageMap}} in the namenode






[jira] [Updated] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11792:
--
Hadoop Flags: Reviewed

> [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
> 
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11792-HDFS-9806.001.patch
>
>
> Test cases for {{ProvidedVolumeDF}} and {{ProviderBlockIteratorImpl}}






[jira] [Updated] (HDFS-12091) [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12091:
--
Hadoop Flags: Reviewed

> [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to 
> the correct external storage
> --
>
> Key: HDFS-12091
> URL: https://issues.apache.org/jira/browse/HDFS-12091
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12091-HDFS-9806.001.patch, 
> HDFS-12091-HDFS-9806.002.patch
>
>
> A {{ProvidedVolumeImpl}} can only serve blocks that "belong" to it, i.e., for 
> blocks served from a {{ProvidedVolumeImpl}}, the {{baseURI}} of the 
> {{ProvidedVolumeImpl}} should be a prefix of the URI of the blocks.






[jira] [Updated] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12685:
--
Hadoop Flags: Reviewed

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch, HDFS-12685-HDFS-9806.003.patch, 
> HDFS-12685-HDFS-9806.004.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)

[jira] [Updated] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12903:
--
Hadoop Flags: Reviewed

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBugs issue in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.






[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Hadoop Flags: Reviewed

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 






[jira] [Updated] (HDFS-11190) [READ] Namenode support for data stored in external stores.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11190:
--
Hadoop Flags: Reviewed

> [READ] Namenode support for data stored in external stores.
> ---
>
> Key: HDFS-11190
> URL: https://issues.apache.org/jira/browse/HDFS-11190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-11190-HDFS-9806.001.patch, 
> HDFS-11190-HDFS-9806.002.patch, HDFS-11190-HDFS-9806.003.patch, 
> HDFS-11190-HDFS-9806.004.patch
>
>
> The goal of this JIRA is to enable the Namenode to know about blocks that are 
> in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. 






[jira] [Updated] (HDFS-12905) [READ] Handle decommissioning and under-maintenance Datanodes with Provided storage.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12905:
--
Hadoop Flags: Reviewed

> [READ] Handle decommissioning and under-maintenance Datanodes with Provided 
> storage.
> 
>
> Key: HDFS-12905
> URL: https://issues.apache.org/jira/browse/HDFS-12905
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12905-HDFS-9806.001.patch, 
> HDFS-12905-HDFS-9806.002.patch
>
>
> {{ProvidedStorageMap}} doesn't keep track of the state of the datanodes with 
> Provided storage. As a result, it can return nodes that are being 
> decommissioned or under-maintenance even when live datanodes exist. This JIRA 
> is to prefer live datanodes to datanodes in other states.
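The preference described here boils down to filtering candidates by admin state before selection, falling back to non-live nodes only when no live node remains. A simplified sketch with made-up types (the actual logic would live in {{ProvidedStorageMap}}):

```java
import java.util.ArrayList;
import java.util.List;

class LiveNodePreference {
  // Simplified stand-in for the admin states tracked by the Namenode.
  enum State { LIVE, DECOMMISSIONING, MAINTENANCE }

  static class Node {
    final String id;
    final State state;
    Node(String id, State state) { this.id = id; this.state = state; }
  }

  /**
   * Prefer LIVE datanodes; fall back to decommissioning/maintenance
   * nodes only when no live node with Provided storage remains.
   */
  static List<Node> chooseLocations(List<Node> all) {
    List<Node> live = new ArrayList<>();
    for (Node n : all) {
      if (n.state == State.LIVE) {
        live.add(n);
      }
    }
    return live.isEmpty() ? all : live;
  }
}
```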






[jira] [Updated] (HDFS-12289) [READ] HDFS-12091 breaks the tests for provided block reads

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12289:
--
Hadoop Flags: Reviewed

> [READ] HDFS-12091 breaks the tests for provided block reads
> ---
>
> Key: HDFS-12289
> URL: https://issues.apache.org/jira/browse/HDFS-12289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12289-HDFS-9806.001.patch
>
>
> In the tests within {{TestNameNodeProvidedImplementation}}, the files that 
> are supposed to belong to a provided volume are not located under the Storage 
> directory assigned to the volume in {{MiniDFSCluster}}. With HDFS-12091, this 
> isn't correct and thus, it breaks the tests. This JIRA is to fix the tests 
> under {{TestNameNodeProvidedImplementation}}.






[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Hadoop Flags: Reviewed

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster, where all Datanodes have PROVIDED 
> volumes, set replication does not work.






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Hadoop Flags: Reviewed

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, 
> HDFS-12775-HDFS-9806.004.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png, provided_storagetype_capacity_jmx.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, and replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12605) [READ] TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails after rebase

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12605:
--
Hadoop Flags: Reviewed

> [READ] TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails 
> after rebase
> -
>
> Key: HDFS-12605
> URL: https://issues.apache.org/jira/browse/HDFS-12605
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12605-HDFS-9806.001.patch
>
>
> {{TestNameNodeProvidedImplementation#testProvidedDatanodeFailures}} fails 
> after rebase with the following error:
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.net.DFSTopologyNodeImpl.decStorageTypeCount(DFSTopologyNodeImpl.java:127)
>   at 
> org.apache.hadoop.hdfs.net.DFSTopologyNodeImpl.remove(DFSTopologyNodeImpl.java:318)
>   at 
> org.apache.hadoop.hdfs.net.DFSTopologyNodeImpl.remove(DFSTopologyNodeImpl.java:336)
>   at 
> org.apache.hadoop.net.NetworkTopology.remove(NetworkTopology.java:222)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.removeDatanode(DatanodeManager.java:712)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.removeDeadDatanode(DatanodeManager.java:755)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager.heartbeatCheck(HeartbeatManager.java:407)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil.noticeDeadDatanode(BlockManagerTestUtil.java:213)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeProvidedImplementation.testProvidedDatanodeFailures(TestNameNodeProvidedImplementation.java:471)
> {code}






[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12789:
--
Hadoop Flags: Reviewed

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}






[jira] [Updated] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12093:
--
Hadoop Flags: Reviewed

> [READ] Share remoteFS between ProvidedReplica instances.
> 
>
> Key: HDFS-12093
> URL: https://issues.apache.org/jira/browse/HDFS-12093
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Fix For: 3.1.0
>
> Attachments: HDFS-12093-HDFS-9806.001.patch, 
> HDFS-12093-HDFS-9806.002.patch
>
>
> When a Datanode comes online using Provided storage, it fills the 
> {{ReplicaMap}} with the known replicas. With Provided Storage, this includes 
> {{ProvidedReplica}} instances. Each of these objects, in its constructor, 
> will construct a FileSystem using the Service Provider. This can result in 
> contacting the remote file system and checking that the credentials are 
> correct and that the data is there. For large systems this is a prohibitively 
> expensive operation to perform per replica.
> Instead, the {{ProvidedVolumeImpl}} should own the reference to the 
> {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on 
> their creation.
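The proposed fix is a share-one-handle refactoring: the volume constructs the remote FileSystem once and passes the same reference to every replica it creates. A sketch with stand-in classes (not the actual Hadoop types):

```java
import java.net.URI;

class SharedRemoteFsSketch {
  /** Stand-in for org.apache.hadoop.fs.FileSystem; constructing it is the
   *  expensive step (it may contact the store and validate credentials). */
  static class RemoteFileSystem {
    final URI uri;
    RemoteFileSystem(URI uri) { this.uri = uri; }
  }

  static class ProvidedReplica {
    private final RemoteFileSystem remoteFS;
    ProvidedReplica(RemoteFileSystem sharedFS) {
      this.remoteFS = sharedFS; // reuse the volume's handle, never rebuild
    }
    RemoteFileSystem getRemoteFS() { return remoteFS; }
  }

  static class ProvidedVolume {
    private final RemoteFileSystem remoteFS;
    ProvidedVolume(URI baseURI) {
      // Construct the remote FileSystem exactly once per volume.
      this.remoteFS = new RemoteFileSystem(baseURI);
    }
    ProvidedReplica newReplica() {
      return new ProvidedReplica(remoteFS); // shared reference
    }
  }
}
```

With this shape, the cost of contacting the remote store is paid once per volume rather than once per replica.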






[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Labels:   (was: provided)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Fix For: 3.1.0
>
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Commented] (HDFS-12934) RBF: Federation supports global quota

2017-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295480#comment-16295480
 ] 

Íñigo Goiri commented on HDFS-12934:


Thanks [~linyiqun] for proposing this. We internally don't use quotas yet so 
I'm not very familiar with the semantics that users/admins expect from this. 
But let me throw out a couple of thoughts here.

If I understood correctly, your proposal would be to do quota management per 
mount table.
In that case, we may want to create a new Quota structure in the State Store to 
keep track of the usage (and maybe even the quota definition) separately.
I'm not sure who would keep this data up to date across Routers, though; we 
would need some synchronization to do it in a distributed fashion.

In addition to the global quota, we may want to think if it makes sense to 
implement functions like {{setQuota()}} from {{ClientProtocol}}.
It could even be the interface for setting the quota instead of adding it to 
the {{dfsrouteradmin}}.
We could also move all the bookkeeping to the Namenodes, query the quotas 
from those, and simply aggregate the values from the subclusters.

[~ajayydv], HDFS-12512 will add WebHDFS and could support the second approach 
using {{setQuota()}}.
For the pure admin interface, we don't have any REST interface.
This might be a good reason to implement quotas through {{ClientProtocol}}.

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
>
> Federation currently doesn't support setting a global quota for each folder. 
> At the moment, the quota is applied to each subcluster under the specified 
> folder via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for this.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set a global quota across the subclusters. We don't allow any subcluster 
> to exceed the maximum quota value.
> # We need to construct a cache map storing the aggregate quota usage of 
> these subclusters under each federation folder. Every time we want to do a 
> WRITE operation under a specified folder, we will get its quota usage from 
> the cache and verify its quota. If the quota is exceeded, throw an 
> exception; otherwise, update its quota usage in the cache when the 
> operation finishes.
> The quota will be stored as a new field in the mount table. The set/unset 
> commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns  -ss  
>  hdfs dfsrouteradmin -clrQuota  
> {noformat}
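The cache-and-verify flow in step 2 above can be sketched as follows. This is a minimal illustration under assumed names ({{GlobalQuotaCache}}, {{verifyAndUpdate}}, and {{QuotaExceededException}} are hypothetical, not part of the Router code):

```java
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch only: a cache of aggregate usage per mount point,
 * consulted before every WRITE. Class and method names are made up and
 * are not part of any actual Router API.
 */
class GlobalQuotaCache {
  static class QuotaExceededException extends Exception {
    QuotaExceededException(String msg) { super(msg); }
  }

  private final ConcurrentHashMap<String, Long> usage = new ConcurrentHashMap<>();
  private final ConcurrentHashMap<String, Long> limit = new ConcurrentHashMap<>();

  void setQuota(String mountPoint, long maxBytes) {
    limit.put(mountPoint, maxBytes);
    usage.putIfAbsent(mountPoint, 0L);
  }

  /** Verify before a write; update the cached usage only if it fits. */
  synchronized void verifyAndUpdate(String mountPoint, long bytesToWrite)
      throws QuotaExceededException {
    long max = limit.getOrDefault(mountPoint, Long.MAX_VALUE);
    long current = usage.getOrDefault(mountPoint, 0L);
    if (current + bytesToWrite > max) {
      throw new QuotaExceededException("Quota exceeded for " + mountPoint
          + ": " + (current + bytesToWrite) + " > " + max);
    }
    usage.put(mountPoint, current + bytesToWrite);
  }
}
```

A synchronized check-then-update keeps the verification and the usage increment atomic within one Router; a real implementation would also have to refresh the cached usage from the subclusters and coordinate it across Routers, as discussed in the comments.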






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-18 Thread Rahul Pathak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Pathak updated HDFS-12930:

Attachment: HDFS-12930.001.patch

Thanks [~anu]

I am adding the patch file.

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Rahul Pathak
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-12930.001.patch
>
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0 website: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-18 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Labels: provided  (was: )

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
>  Labels: provided
> Fix For: 3.1.0
>
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Resolved] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread

2017-12-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-12936.
-
Resolution: Not A Bug

> java.lang.OutOfMemoryError: unable to create new native thread
> --
>
> Key: HDFS-12936
> URL: https://issues.apache.org/jira/browse/HDFS-12936
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
> Environment: CDH5.12
> hadoop2.6
>Reporter: Jepson
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode 
> memory is 8G.
> When a lot of data was being written, the datanode was shut down.
> But I can see the memory usage is only < 1000M.
> Please see https://pan.baidu.com/s/1o7BE0cy
> *DataNode shutdown error log:*  
> {code:java}
> 2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2017-12-17 23:58:31,425 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:01,426 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:05,520 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 
> src: /192.168.17.54:40478 dest: /192.168.17.48:50010
> {code}
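The stack trace above points at an OS-level limit rather than heap exhaustion: "unable to create new native thread" is raised when the JVM cannot obtain a new OS thread, which is why heap usage can stay under 1000M while the DataNode still dies. A minimal diagnostic sketch (not from the issue itself; assumes a Linux host) for checking the relevant limits:

```shell
# When the JVM throws "unable to create new native thread", the usual
# bottleneck is an OS cap on threads, not the Java heap.

# Per-user limit on processes/threads (the "max user processes" setting
# mentioned in the report).
user_limit=$(ulimit -u)

# System-wide ceiling on threads (Linux-specific).
threads_max=$(cat /proc/sys/kernel/threads-max)

echo "per-user process/thread limit: $user_limit"
echo "system-wide threads-max:       $threads_max"
```

If the per-user cap is the culprit, raising it for the account running the DataNode (e.g. via /etc/security/limits.conf) is the usual remedy, which is consistent with the "Not A Bug" resolution above.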





