[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320126#comment-15320126
 ] 

Uma Maheswara Rao G commented on HDFS-10473:


>Also, do you see any critical issue if an admin really sets WARM/ONE_SSD 
>policy to EC files?
In the striping model all data blocks are equally important, so there is no 
meaning in setting the ONE_SSD policy on these files. That is the only point I 
had; I am not sure whether allowing it is fine.
To simplify, how about the Mover tool moving EC files only if the target 
policy is ARCHIVE/ALL_SSD? (Allowing the Mover tool to do movements for other 
policies such as ONE_SSD does not make sense to me.) We can just log that the 
policy is not recommended for striped EC files and ignore it for movements.
Let's leave the NN side changes as they are, since changing them brings 
complexities to handle.
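To make the proposal concrete, here is a minimal Java sketch of the Mover-side filter described above. The class, method, and policy names are illustrative assumptions, not the actual Mover internals; ARCHIVE-backed storage corresponds to the COLD policy.

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: not the actual Mover code.
public class StripedFilePolicyFilter {

  // Policies proposed as meaningful for striped (EC) files; COLD places all
  // blocks on ARCHIVE storage.
  private static final Set<String> EC_ALLOWED_POLICIES =
      new HashSet<>(Arrays.asList("ALL_SSD", "COLD"));

  /**
   * Decide whether the Mover should attempt to move a file's blocks.
   * Striped files are moved only when the target policy is in the allowed set;
   * otherwise the file is skipped with a log message.
   */
  public static boolean shouldMove(boolean isStriped, String targetPolicy) {
    if (!isStriped) {
      return true; // replicated files keep the existing behaviour
    }
    if (EC_ALLOWED_POLICIES.contains(targetPolicy)) {
      return true;
    }
    System.out.println("Storage policy " + targetPolicy
        + " is not recommended for striped EC files; ignoring for movement.");
    return false;
  }
}
{code}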

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently some of the existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am OK with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.






[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320085#comment-15320085
 ] 

Jing Zhao commented on HDFS-10473:
--

Thanks for the reply, Uma! My main concern is that the change proposed here 
brings extra complexity to the NN and is also inconsistent. The inconsistency 
lies between files vs. directories, and online (i.e., during file creation) vs. 
offline (after the file gets created). Also, do you see any critical issue if 
an admin really sets the WARM/ONE_SSD policy on EC files?
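For reference, the replica storage-type layouts behind the policy names in this discussion (written out for a replication factor of 3) are sketched below; this is background to the WARM/ONE_SSD point, not part of any patch.

{code:java}
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Background sketch: standard HDFS storage-policy layouts for 3 replicas.
// With striping, every block carries data or parity of equal importance, so a
// policy that singles out one "first replica" (ONE_SSD, WARM) loses its meaning.
public class StoragePolicyLayouts {
  public static void main(String[] args) {
    Map<String, List<String>> layouts = new LinkedHashMap<>();
    layouts.put("HOT", Arrays.asList("DISK", "DISK", "DISK"));
    layouts.put("WARM", Arrays.asList("DISK", "ARCHIVE", "ARCHIVE"));
    layouts.put("COLD", Arrays.asList("ARCHIVE", "ARCHIVE", "ARCHIVE"));
    layouts.put("ONE_SSD", Arrays.asList("SSD", "DISK", "DISK"));
    layouts.put("ALL_SSD", Arrays.asList("SSD", "SSD", "SSD"));
    layouts.forEach((name, types) -> System.out.println(name + " -> " + types));
  }
}
{code}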

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently some of the existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am OK with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.






[jira] [Updated] (HDFS-10488) WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating files/directories without specifying permissions

2016-06-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-10488:
-
Assignee: Wellington Chevreuil

> WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating 
> files/directories without specifying permissions
> --
>
> Key: HDFS-10488
> URL: https://issues.apache.org/jira/browse/HDFS-10488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-10488.002.patch, HDFS-10488.003.patch, 
> HDFS-10488.patch
>
>
> WebHDFS methods for creating files/directories always create them with 755 
> permissions by default, ignoring any configured 
> *fs.permissions.umask-mode* in the case of directories.
> The DFS CLI, however, applies the configured umask to the 777 permission for 
> directories, or the 666 permission for files.
> The example below shows the different behaviour when creating a directory via 
> the CLI and via WebHDFS:
> {noformat}
> 1) Creating a directory under '/test/' as 'test-user'. Configured 
> fs.permissions.umask-mode is 000: 
> $ sudo -u test-user hdfs dfs -mkdir /test/test-user1 
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user1 
> # file: /test/test-user1
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::rwx 
> other::rwx 
> 4) Doing the same via WebHDFS does not get the proper ACLs: 
> $ curl -i -X PUT 
> "http://namenode-host:50070/webhdfs/v1/test/test-user2?user.name=test-user&op=MKDIRS";
>  
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user2 
> # file: /test/test-user2 
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::r-x 
> other::r-x
> {noformat}
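As an illustration of the rule the report expects WebHDFS to follow, here is a minimal sketch of how the CLI-side defaults can be derived from 777/666 and fs.permissions.umask-mode using Hadoop's FsPermission helpers; the WebHDFS fix itself is in the attached patches and is not shown here.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch of the expected default-permission rule: start from 777 (directories)
// or 666 (files) and apply the configured fs.permissions.umask-mode.
public class DefaultPermissionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.permissions.umask-mode", "000"); // as in the report above

    FsPermission umask = FsPermission.getUMask(conf);
    FsPermission dirPerm = FsPermission.getDirDefault().applyUMask(umask);
    FsPermission filePerm = FsPermission.getFileDefault().applyUMask(umask);

    System.out.println("directory default: " + dirPerm); // rwxrwxrwx with umask 000
    System.out.println("file default: " + filePerm);     // rw-rw-rw- with umask 000
  }
}
{code}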






[jira] [Commented] (HDFS-10501) DiskBalancer: Use the default datanode port if port is not provided.

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319959#comment-15319959
 ] 

Hadoop QA commented on HDFS-10501:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
27s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 108m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestAsyncHDFSWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808821/HDFS-10501-HDFS-1312.001.patch
 |
| JIRA Issue | HDFS-10501 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5961824db038 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 32058f9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15701/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15701/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15701/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15701/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: Use the default datanode port if port is not provided.

[jira] [Updated] (HDFS-10453) ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster.

2016-06-07 Thread He Xiaoqiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-10453:
---
Affects Version/s: 2.5.2
   2.6.4

> ReplicationMonitor thread could stuck for long time due to the race between 
> replication and delete of same file in a large cluster.
> ---
>
> Key: HDFS-10453
> URL: https://issues.apache.org/jira/browse/HDFS-10453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.1, 2.5.2, 2.7.1, 2.6.4
>Reporter: He Xiaoqiao
> Attachments: HDFS-10453-branch-2.001.patch, 
> HDFS-10453-branch-2.003.patch, HDFS-10453.001.patch
>
>
> The ReplicationMonitor thread can get stuck for a long time and, with low 
> probability, lose data. Consider the typical scenario:
> (1) create and close a file with the default replication (3);
> (2) increase the replication of the file to 10;
> (3) delete the file while ReplicationMonitor is scheduling blocks belonging to 
> that file for replication.
> When the ReplicationMonitor gets stuck, the NameNode prints logs such as:
> {code:xml}
> 2016-04-19 10:20:48,083 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> ..
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
> replicas: expected size is 7 but only 0 storage types can be selected 
> (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, 
> DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) All required storage types are unavailable:  
> unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> This happens because two threads (#NameNodeRpcServer and #ReplicationMonitor) 
> process the same block at the same moment.
> (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to 
> replicate and leaves the global lock.
> (2) FSNamesystem#delete is invoked to delete the blocks and then clears the 
> references in the blocksmap, neededReplications, etc. The block's numBytes is 
> set to NO_ACK (Long.MAX_VALUE), which indicates that the block deletion does 
> not need an explicit ACK from the node.
> (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to 
> chooseTargets for the same blocks, and no node is selected after traversing 
> the whole cluster, because no node satisfies the goodness criteria (remaining 
> space must reach the required size of Long.MAX_VALUE).
> During stage (3) the ReplicationMonitor is stuck for a long time, especially 
> in a large cluster. invalidateBlocks and neededReplications keep growing with 
> no consumers, and in the worst case data is lost.
> This can mostly be avoided by skipping chooseTarget for BlockCommand.NO_ACK 
> blocks and removing them from neededReplications.
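A hedged sketch of the mitigation described in the last paragraph: before choosing targets, blocks whose reported size is BlockCommand.NO_ACK (Long.MAX_VALUE, set when the file was deleted) are dropped from the work list and from neededReplications. The surrounding types are simplified stand-ins, not the actual ReplicationMonitor/BlockManager code.

{code:java}
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.protocol.BlockCommand;

// Illustrative guard for the proposed fix; NeededReplications is a simplified
// stand-in for the NameNode's real neededReplications queue.
public class SkipDeletedBlocksExample {

  interface NeededReplications {
    void remove(Block block);
  }

  /** Drop blocks already marked for deletion before chooseTarget runs. */
  public static void filterDeletedBlocks(List<Block> blocksToReplicate,
                                         NeededReplications neededReplications) {
    Iterator<Block> it = blocksToReplicate.iterator();
    while (it.hasNext()) {
      Block block = it.next();
      // NO_ACK (Long.MAX_VALUE) means the file was deleted after replication
      // work was scheduled; chooseTarget can never satisfy such a size.
      if (block.getNumBytes() == BlockCommand.NO_ACK) {
        neededReplications.remove(block);
        it.remove();
      }
    }
  }
}
{code}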






[jira] [Issue Comment Deleted] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Nicolae Popa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae Popa updated HDFS-10423:

Comment: was deleted

(was: Testing before the patch was applied)

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Issue Comment Deleted] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Nicolae Popa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae Popa updated HDFS-10423:

Comment: was deleted

(was: Testing by setting HTTPFS_MAX_HTTP_HEADER_SIZE to 4
# The maximum size of Tomcat HTTP header
#
export HTTPFS_MAX_HTTP_HEADER_SIZE=4)

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Issue Comment Deleted] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Nicolae Popa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae Popa updated HDFS-10423:

Comment: was deleted

(was: Test after the patch was applied)

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Commented] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319872#comment-15319872
 ] 

Hadoop QA commented on HDFS-10423:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 6s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808592/HDFS-10423.01.patch |
| JIRA Issue | HDFS-10423 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15702/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Updated] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Nicolae Popa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae Popa updated HDFS-10423:

Attachment: (was: before-HDFS-10423.txt)

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Updated] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Nicolae Popa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae Popa updated HDFS-10423:

Attachment: (was: after-HDFS-10423.txt)

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch, before-HDFS-10423.txt
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Updated] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Nicolae Popa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae Popa updated HDFS-10423:

Attachment: (was: after-HDFS-10423_withCustomHeader4.txt)

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch, before-HDFS-10423.txt
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Updated] (HDFS-10501) DiskBalancer: Use the default datanode port if port is not provided.

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10501:

Status: Patch Available  (was: Open)

> DiskBalancer: Use the default datanode port if port is not provided.
> 
>
> Key: HDFS-10501
> URL: https://issues.apache.org/jira/browse/HDFS-10501
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10501-HDFS-1312.001.patch
>
>
> In the query command, we should read the default datanode port from the 
> config if the user provides a hostname instead of hostname:port.






[jira] [Updated] (HDFS-10501) DiskBalancer: Use the default datanode port if port is not provided.

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10501:

Attachment: HDFS-10501-HDFS-1312.001.patch

> DiskBalancer: Use the default datanode port if port is not provided.
> 
>
> Key: HDFS-10501
> URL: https://issues.apache.org/jira/browse/HDFS-10501
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10501-HDFS-1312.001.patch
>
>
> In the query command, we should read the default datanode port from the 
> config if the user provides a hostname instead of hostname:port.






[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-06-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319857#comment-15319857
 ] 

Anu Engineer commented on HDFS-7240:


Just posting a reminder here for the Ozone design review. It is scheduled for 
Jun 9, 2016, 2:00 PM (GMT-7:00) Pacific Time.
This meeting is to review Ozone's proposed design. Hopefully everyone has had a 
chance to read the posted doc already.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.






[jira] [Created] (HDFS-10501) DiskBalancer: Use the default datanode port if port is not provided.

2016-06-07 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10501:
---

 Summary: DiskBalancer: Use the default datanode port if port is 
not provided.
 Key: HDFS-10501
 URL: https://issues.apache.org/jira/browse/HDFS-10501
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-1312


In the query command, we should read the default datanode port from the config 
if the user provides a hostname instead of hostname:port.
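A minimal sketch of the host[:port] handling this asks for, assuming the default comes from the datanode IPC address setting; NetUtils.createSocketAddr is the standard Hadoop helper, while the key name, fallback value, and method shown here are illustrative rather than the actual DiskBalancer CLI code.

{code:java}
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

// Illustration of resolving "hostname" vs "hostname:port" for the query command.
public class DataNodeAddressExample {

  public static InetSocketAddress resolve(String userInput, Configuration conf) {
    // Assumed source of the default port; "0.0.0.0:50020" is only an example
    // fallback (the 2.x datanode IPC default).
    String defaultIpc = conf.getTrimmed("dfs.datanode.ipc.address", "0.0.0.0:50020");
    int defaultPort = NetUtils.createSocketAddr(defaultIpc).getPort();

    // If userInput carries no ":port", the default port is applied.
    return NetUtils.createSocketAddr(userInput, defaultPort);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    System.out.println(resolve("datanode1.example.com", conf));       // default port
    System.out.println(resolve("datanode1.example.com:50020", conf)); // explicit port
  }
}
{code}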






[jira] [Updated] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10473:
---
Description: 
Currently some of the existing storage policies are not suitable for striped 
layout files.
This JIRA proposes to reject setting a storage policy on striped files.

Another thought is to allow only suitable storage policies, like ALL_SSD.
Since the major use case of EC is cold data, this may not be of high 
importance. So, I am OK with rejecting setting a storage policy on striped 
files at this stage. Please suggest if others have some thoughts on this.

Thanks [~zhz] for the offline discussion on this.

  was:
Currently existing storage policies are not suitable for striped layout files.
This JIRA proposes to reject setting storage policy on striped files.

Another thought is to allow only suitable storage polices like ALL_SSD.
Since the major use case of EC is for cold data, this may not be at high 
importance. So, I am ok to reject setting storage policy on striped files at 
this stage. Please suggest if others have some thoughts on this.

Thanks [~zhz] for offline discussion on this.


> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently some of the existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am OK with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.






[jira] [Updated] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-06-07 Thread Nicolae Popa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolae Popa updated HDFS-10423:

Affects Version/s: 3.0.0-alpha1

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Priority: Minor
> Attachments: HDFS-10423.01.patch, after-HDFS-10423.txt, 
> after-HDFS-10423_withCustomHeader4.txt, before-HDFS-10423.txt
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.






[jira] [Commented] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319825#comment-15319825
 ] 

Hadoop QA commented on HDFS-10469:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 
new + 268 unchanged - 1 fixed = 269 total (was 269) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 22s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 110m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808770/HDFS-10469.002.patch |
| JIRA Issue | HDFS-10469 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 97f7d766f3e9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 58be55b |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15699/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15699/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15699/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15699/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Status: In Progress  (was: Patch Available)

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, NN_DN_Links.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI. It would be good to 
> provide links from the namenode's datanodes information page to each 
> individual datanode UI, so that more datanode information can be checked easily.






[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Status: Patch Available  (was: In Progress)

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, NN_DN_Links.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI. It would be good to 
> provide links from the namenode's datanodes information page to each 
> individual datanode UI, so that more datanode information can be checked easily.






[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-06-07 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319822#comment-15319822
 ] 

Inigo Goiri commented on HDFS-10467:


And we aimed to minimize the impact on those 12 classes :)
Actually, we expect that, based on feedback, we can reduce the impact on the 
{{Client}} and the {{Server}}.
Right now, we are using those extensions to allow more connections between the 
{{Router}} and the {{NameNode}}.

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.patch, 
> HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.






[jira] [Comment Edited] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319807#comment-15319807
 ] 

Uma Maheswara Rao G edited comment on HDFS-10473 at 6/8/16 1:00 AM:


Thanks a lot, Jing, for taking a look.
{quote}
My understanding is policies like "WARM" and "ONE_SSD" are mainly targeting 
replication (since they're mainly setting specific storage type for the first 
replica) thus are not suitable. Could you please confirm it?
{quote}
Yes. You are right.

{quote}
For the patch, storage policies are mainly set on directories (in fact to set 
storage policies on files is not recommended), and we allow moving EC files 
across EC directory boundaries. Therefore it is not possible to disallow 
setting storage policies on striped file in O(1) time complexity. Looks like 
the changes on the NN side may be unnecessary here. We only need to let Mover 
ignore striped files for now.
{quote}
In reality, yes. In the current patch, we only disable it for files when someone 
sets a policy explicitly. You are right that we cannot disable it for every file 
level in the directory case; that is handled only while running the Mover.

The actual plan is to find the suitable policies and enable only those. As a 
first step we thought we would disable the setting and then think more 
carefully about which policies are suitable. Yes, we can think about it now and 
do the full changes.
Here is how I am thinking:
The Mover is the key tool here that moves the file blocks. So, let's define the 
EC-allowed policies, i.e., ALL_SSD, ARCHIVE, etc.
Since policies are static, let's keep the allowed list statically in code. When 
the Mover attempts to move striped files, if the targeted policy is one of 
them, then we just proceed for that file; otherwise we just skip it.
For files specifically, if someone attempts to set a policy other than the 
above, then we reject the call straight away. We cannot do this in the 
directory case because a directory policy applies to many files under it, and 
some of them may be in non-EC subdirectories. Maybe when listing policies for 
EC files, we should ignore an inherited policy that is not in the above list?
What do you say?




was (Author: umamaheswararao):
Thanks a lot Jing for taking look.
{quote}
My understanding is policies like "WARM" and "ONE_SSD" are mainly targeting 
replication (since they're mainly setting specific storage type for the first 
replica) thus are not suitable. Could you please confirm it?
{quote}
Yes. You are right.

{quote}
For the patch, storage policies are mainly set on directories (in fact to set 
storage policies on files is not recommended), and we allow moving EC files 
across EC directory boundaries. Therefore it is not possible to disallow 
setting storage policies on striped file in O(1) time complexity. Looks like 
the changes on the NN side may be unnecessary here. We only need to let Mover 
ignore striped files for now.
{quote}
In reality yes. Currently in NN we just disable for files only if some one 
sets. Yes you are right we can not disable for each level of file here. This 
handle while running mover only.  

Actual plan is to find the suitable policies and enable only for them. At first 
step we thought we will disable and then think more carefully what policies 
suitable. Yes, we can think now itself and do full changes.
Here is how I am thinking :
Mover is the key tool here who moves the file blocks.  So, lets define EC 
allowed policies. i.e, ALL_SSD, ARCHIVE, etc
Since policies are static, lets keep allowed list statically in Code. When 
mover attempt move striped files, if the targeted policy is either of the them, 
then we will just proceed for that file, otherwise we will just skip. 
For the files specifically if someone attempting to set other than above 
policy, then we don't allow straightway by rejecting call. We can not do on 
directory case because it applies for many files under it. some of them may be 
non ec file directories. May be when listing policies for EC files, we should 
ignore if inherited one other than above list?
What do you say?



> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently the existing storage policies are not suitable for striped layout 
> files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am OK with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.

[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319807#comment-15319807
 ] 

Uma Maheswara Rao G commented on HDFS-10473:


Thanks a lot, Jing, for taking a look.
{quote}
My understanding is policies like "WARM" and "ONE_SSD" are mainly targeting 
replication (since they're mainly setting specific storage type for the first 
replica) thus are not suitable. Could you please confirm it?
{quote}
Yes. You are right.

{quote}
For the patch, storage policies are mainly set on directories (in fact to set 
storage policies on files is not recommended), and we allow moving EC files 
across EC directory boundaries. Therefore it is not possible to disallow 
setting storage policies on striped file in O(1) time complexity. Looks like 
the changes on the NN side may be unnecessary here. We only need to let Mover 
ignore striped files for now.
{quote}
In reality, yes. Currently in the NN we only disable it for files when someone 
sets a policy explicitly. You are right that we cannot disable it for every 
file level here; this is handled only while running the Mover.

The actual plan is to find the suitable policies and enable only those. As a 
first step we thought we would disable the setting and then think more 
carefully about which policies are suitable. Yes, we can think about it now and 
do the full changes.
Here is how I am thinking:
The Mover is the key tool here that moves the file blocks. So, let's define the 
EC-allowed policies, i.e., ALL_SSD, ARCHIVE, etc.
Since policies are static, let's keep the allowed list statically in code. When 
the Mover attempts to move striped files, if the targeted policy is one of 
them, then we just proceed for that file; otherwise we just skip it.
For files specifically, if someone attempts to set a policy other than the 
above, then we reject the call straight away. We cannot do this in the 
directory case because a directory policy applies to many files under it, and 
some of them may be in non-EC subdirectories. Maybe when listing policies for 
EC files, we should ignore an inherited policy that is not in the above list?
What do you say?
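The file-level rejection described above could look roughly like the sketch below; the allowed-policy set and method are illustrative assumptions, and the real check would live in the NameNode's setStoragePolicy path rather than in a standalone class.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative check for rejecting unsuitable policies on striped files.
// The isStriped flag and policy name are passed in directly to keep the
// sketch self-contained.
public class StripedPolicyCheckExample {

  private static final Set<String> EC_ALLOWED_POLICIES =
      new HashSet<>(Arrays.asList("ALL_SSD", "COLD"));

  /** Reject the call when a striped file is given a policy outside the list. */
  public static void checkPolicyForStripedFile(boolean isStriped, String policyName)
      throws IOException {
    if (isStriped && !EC_ALLOWED_POLICIES.contains(policyName)) {
      throw new IOException("Storage policy " + policyName
          + " is not supported on striped (EC) files");
    }
  }
}
{code}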



> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently the existing storage policies are not suitable for striped layout 
> files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am OK with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.






[jira] [Assigned] (HDFS-10499) Intermittent test failure org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-10499:
---

Assignee: Anu Engineer

> Intermittent test failure 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture
> 
>
> Key: HDFS-10499
> URL: https://issues.apache.org/jira/browse/HDFS-10499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>
> Per https://builds.apache.org/job/PreCommit-HDFS-Build/15646/testReport/, we 
> had the following failure. Local rerun is successful.
> Stack Trace:
> {panel}
> java.lang.AssertionError: expected:<17> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture(TestNameNodeMetadataConsistency.java:113)
> {panel}






[jira] [Updated] (HDFS-10500) Diskbalancer: Print out information when a plan is not generated.

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10500:

Status: Patch Available  (was: Open)

> Diskbalancer: Print out information when a plan is not generated.
> -
>
> Key: HDFS-10500
> URL: https://issues.apache.org/jira/browse/HDFS-10500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10500-HDFS-1312.001.patch
>
>
> This collects a bunch of issues that were identified in testing and fixes all 
> of them, since most of them are one-line fixes in the diskbalancer command 
> shell:
> * Fix bugs in Precondition checks
> * Use NodePlan.Parse instead of readPlan(); remove readPlan, GetPlan
> * Print out information when a plan is not generated
> * Format the plan command output to the console






[jira] [Updated] (HDFS-10500) Diskbalancer: Print out information when a plan is not generated.

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10500:

Status: Open  (was: Patch Available)

> Diskbalancer: Print out information when a plan is not generated.
> -
>
> Key: HDFS-10500
> URL: https://issues.apache.org/jira/browse/HDFS-10500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10500-HDFS-1312.001.patch
>
>
> This collects a bunch of issues that were identified in testing and fixes all 
> of them, since most of them are one-line fixes in the diskbalancer command 
> shell:
> * Fix bugs in Precondition checks
> * Use NodePlan.Parse instead of readPlan(); remove readPlan, GetPlan
> * Print out information when a plan is not generated
> * Format the plan command output to the console






[jira] [Commented] (HDFS-10500) Diskbalancer: Print out information when a plan is not generated.

2016-06-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319780#comment-15319780
 ] 

Anu Engineer commented on HDFS-10500:
-

This failed because the Oracle Java install failed during the build. Retrying 
the same patch.

> Diskbalancer: Print out information when a plan is not generated.
> -
>
> Key: HDFS-10500
> URL: https://issues.apache.org/jira/browse/HDFS-10500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10500-HDFS-1312.001.patch
>
>
> This collects a bunch of issues that were identified in testing and fixes all 
> of them, since most of them are one-line fixes in the diskbalancer command 
> shell:
> * Fix bugs in Precondition checks
> * Use NodePlan.Parse instead of readPlan(); remove readPlan, GetPlan
> * Print out information when a plan is not generated
> * Format the plan command output to the console






[jira] [Commented] (HDFS-10500) Diskbalancer: Print out information when a plan is not generated.

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319763#comment-15319763
 ] 

Hadoop QA commented on HDFS-10500:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 6s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808781/HDFS-10500-HDFS-1312.001.patch
 |
| JIRA Issue | HDFS-10500 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15700/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Diskbalancer: Print out information when a plan is not generated.
> -
>
> Key: HDFS-10500
> URL: https://issues.apache.org/jira/browse/HDFS-10500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10500-HDFS-1312.001.patch
>
>
> This collects a bunch of issues that were identified in testing and fixes all 
> of them, since most of them are one-line fixes in the diskbalancer command 
> shell:
> * Fix bugs in Precondition checks
> * Use NodePlan.Parse instead of readPlan(); remove readPlan, GetPlan
> * Print out information when a plan is not generated
> * Format the plan command output to the console






[jira] [Updated] (HDFS-10500) Diskbalancer: Print out information when a plan is not generated.

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10500:

Status: Patch Available  (was: Open)

> Diskbalancer: Print out information when a plan is not generated.
> -
>
> Key: HDFS-10500
> URL: https://issues.apache.org/jira/browse/HDFS-10500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10500-HDFS-1312.001.patch
>
>
> This collects a bunch of issues that were identified in testing and fixes all 
> of them, since most of them are one-line fixes in the diskbalancer command 
> shell:
> * Fix bugs in Precondition checks
> * Use NodePlan.Parse instead of readPlan(); remove readPlan, GetPlan
> * Print out information when a plan is not generated
> * Format the plan command output to the console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319754#comment-15319754
 ] 

Jing Zhao commented on HDFS-10473:
--

Thanks for working on this, Uma! Could you please explain more about why "existing 
storage policies are not suitable for striped layout files"? My understanding is 
that policies like "WARM" and "ONE_SSD" mainly target replication (since they set 
a specific storage type for the first replica only) and thus are not suitable for 
striped files. Could you please confirm?

For the patch, storage policies are mainly set on directories (in fact, setting 
storage policies directly on files is not recommended), and we allow moving EC 
files across EC directory boundaries. Therefore it is not possible to disallow 
setting storage policies on striped files in O(1) time. It looks like the changes 
on the NN side may be unnecessary here; we only need to let the Mover ignore 
striped files for now.
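A minimal sketch of that Mover-side idea, with hypothetical type and method names 
(FileEntry, isStriped()) standing in for the real Mover code:

{code}
// Hypothetical sketch only: skip striped (EC) files during Mover processing.
public class MoverSkipSketch {

  interface FileEntry {
    boolean isStriped();   // true for erasure-coded (striped) files
    String getPath();
  }

  static void process(FileEntry file) {
    if (file.isStriped()) {
      // Leave EC files where they are; just log and move on.
      System.out.println("Ignoring striped EC file for movement: " + file.getPath());
      return;
    }
    System.out.println("Scheduling block moves for: " + file.getPath());
  }
}
{code}

This keeps the NameNode untouched and confines the special-casing to the Mover.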

However, this change may cause another issue. Since the main use case for EC is 
currently cold data, it is very natural for a customer to set a directory as EC 
and set the COLD storage policy on that directory, so that all EC files created 
later under it are placed on archival storage. We should keep this semantic since 
it is a very strong use case, but in the meantime, disabling the Mover for EC 
files conflicts with it: we would recognize storage policies during file creation 
but not afterwards.

Therefore, currently I think we can either 1) make no changes at all and depend 
on the admin to make the correct decision when setting EC and storage policies, 
or 2) have a long-term plan to fix the issue completely. For #2, maybe the best 
way is to bring in the Volume concept, since with different settings on nested 
directories we would have to scan the subtree for validation.

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently existing storage policies are not suitable for striped layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage policies like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319699#comment-15319699
 ] 

Hadoop QA commented on HDFS-10473:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 
new + 67 unchanged - 0 fixed = 69 total (was 67) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 0s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestAsyncHDFSWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808773/HDFS-10473-01.patch |
| JIRA Issue | HDFS-10473 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 041693cea4af 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 620325e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15698/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15698/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15698/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15698/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15698/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-proj

[jira] [Comment Edited] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319557#comment-15319557
 ] 

Hanisha Koneru edited comment on HDFS-10469 at 6/7/16 11:24 PM:


Thank you [~shahrs87] for reviewing this.

# I am now setting the data xceivers count to 0 when closing all the peers.
# Created an improvement to add support for MutableGaugeShort in the metrics2 
library: HADOOP-13246
# Resolved the checkstyle errors.


was (Author: hanishakoneru):
Checkstyle fixes

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch, 
> HDFS-10469.002.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10469:
--
Status: Patch Available  (was: In Progress)

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch, 
> HDFS-10469.002.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10500) Diskbalancer: Print out information when a plan is not generated.

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10500:

Attachment: HDFS-10500-HDFS-1312.001.patch

> Diskbalancer: Print out information when a plan is not generated.
> -
>
> Key: HDFS-10500
> URL: https://issues.apache.org/jira/browse/HDFS-10500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10500-HDFS-1312.001.patch
>
>
> This collects a bunch of issues that were identified in testing and fixes all 
> of them since most of them are one line fixes in diskbalancer command shell.
> * Fix bugs in Precondition checks
> * Use NodePlan.Parse instead of readPlan(), remove readPlan, GetPlan
> * Print out information when a plan is not generated. 
> * Format plan command  output to console.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319583#comment-15319583
 ] 

Uma Maheswara Rao G edited comment on HDFS-10473 at 6/7/16 10:26 PM:
-

Here is a patch which rejects setting a storage policy on striped files. Let me 
file a JIRA for more discussion on identifying suitable policies/defining new ones.


was (Author: umamaheswararao):
Here is a patch which rejects setting a storage policy on striped files.

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently existing storage policies are not suitable for striped layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage policies like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10473:
---
Status: Patch Available  (was: Open)

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently existing storage policies are not suitable for striped layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage policies like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10473:
---
Attachment: HDFS-10473-01.patch

Here is a patch which rejects setting a storage policy on striped files.
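As a rough illustration of the direction only (hypothetical names, not the 
attached patch), a NameNode-side rejection could take roughly this shape:

{code}
import java.io.IOException;

// Hypothetical sketch; StripedFile and isStriped() stand in for the real inode type.
public class RejectPolicyOnStripedSketch {

  interface StripedFile {
    boolean isStriped();
    String getPath();
  }

  static void checkCanSetStoragePolicy(StripedFile file) throws IOException {
    if (file.isStriped()) {
      throw new IOException("Cannot set a storage policy on striped file "
          + file.getPath() + ": existing policies are not suitable for EC files.");
    }
  }
}
{code}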

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch
>
>
> Currently existing storage policies are not suitable for striped layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage policies like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-06-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319564#comment-15319564
 ] 

Zhe Zhang commented on HDFS-10467:
--

Thanks. Impressive that only {{12 deletions(-)}} :)

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.patch, 
> HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10496) DiskBalancer: ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319563#comment-15319563
 ] 

Lei (Eddy) Xu commented on HDFS-10496:
--

Thank you so much for committing this, [~anu].

> DiskBalancer: ExecuteCommand checks planFile in a wrong way
> ---
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Fix For: HDFS-1312
>
> Attachments: HDFS-10496-HDFS-1312.001.patch, HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is supplied.
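For context, {{Preconditions.checkArgument}} throws when its condition is false, 
so the check as written fails exactly when a valid plan file is passed. A minimal 
sketch of the inverted guard (illustrative wrapper only, not the committed patch):

{code}
import com.google.common.base.Preconditions;

// Illustrative sketch: the condition must describe the *valid* case,
// i.e. a non-null, non-empty plan file path.
public class PlanFileCheckSketch {
  static void validatePlanFile(String planFile) {
    Preconditions.checkArgument(planFile != null && !planFile.isEmpty(),
        "Invalid plan file specified.");
  }
}
{code}

With the condition negated, execution proceeds for a valid plan file and fails 
fast otherwise.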



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10496) DiskBalancer: ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10496:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~eddyxu] Thank you for your contribution. I have committed this to the feature 
branch.

> DiskBalancer: ExecuteCommand checks planFile in a wrong way
> ---
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Fix For: HDFS-1312
>
> Attachments: HDFS-10496-HDFS-1312.001.patch, HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is supplied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10469:
--
Attachment: HDFS-10469.002.patch

Checkstyle fixes

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch, 
> HDFS-10469.002.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10496) DiskBalancer: ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319551#comment-15319551
 ] 

Anu Engineer commented on HDFS-10496:
-

Test failures are not related to this patch. [~eddyxu] Thank you for taking 
care of this issue. I will commit this shortly.


> DiskBalancer: ExecuteCommand checks planFile in a wrong way
> ---
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Fix For: HDFS-1312
>
> Attachments: HDFS-10496-HDFS-1312.001.patch, HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is supplied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org




[jira] [Updated] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10469:
--
Status: In Progress  (was: Patch Available)

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10496) DiskBalancer: ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319502#comment-15319502
 ] 

Hadoop QA commented on HDFS-10496:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
59s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 2s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.TestAsyncHDFSWithHA |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808745/HDFS-10496-HDFS-1312.001.patch
 |
| JIRA Issue | HDFS-10496 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9fca1d5b8e75 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 76a1391 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15696/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15696/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15696/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15696/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automa

[jira] [Commented] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319470#comment-15319470
 ] 

Rushabh S Shah commented on HDFS-10469:
---

Overall the patch looks good.
Just a couple of comments:
1. In {{DataXceiverServer#closeAllPeers}}, I would set the metric to 0, since we 
are closing all the data xceiver threads.
2. Since we have deprecated the key {{dfs.datanode.max.transfer.threads}} and 
replaced the maxXceiverCount with a hardcoded value of 4096, assigning a 
MutableGaugeInt to hold this metric (whose maximum value can be 4096) seems 
wasteful. Instead we can add MutableGaugeShort support.
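For illustration only, a minimal sketch of how such a gauge could be wired up 
with the metrics2 annotations; the class, field, and method names here are 
assumptions for the example, not the actual patch:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

// Hypothetical metrics source; registration with the DataNode's MetricsSystem
// (which populates the annotated field) is omitted here.
@Metrics(about = "DataNode xceiver metrics sketch", context = "dfs")
public class XceiverGaugeSketch {

  // Gauge tracking the number of currently active xceiver threads.
  @Metric("Count of active dataXceiver threads")
  private MutableGaugeInt dataNodeActiveXceiversCount;

  void onXceiverStarted() {
    dataNodeActiveXceiversCount.incr();   // one more active xceiver
  }

  void onXceiverStopped() {
    dataNodeActiveXceiversCount.decr();   // an xceiver thread exited
  }

  // Mirrors comment 1 above: reset the gauge when all peers are closed.
  void onAllPeersClosed() {
    dataNodeActiveXceiversCount.set(0);
  }
}
{code}

A MutableGaugeShort, as suggested, would only change the field type.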

I ran TestNameNodeMetadataConsistency and it didn't fail for me.

You need to address the checkstyle warnings.

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10469:
--
Comment: was deleted

(was: Overall the patch looks good.)

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319428#comment-15319428
 ] 

Rushabh S Shah commented on HDFS-10469:
---

Overall the patch looks good.

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10500) Diskbalancer: Print out information when a plan is not generated.

2016-06-07 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10500:
---

 Summary: Diskbalancer: Print out information when a plan is not 
generated.
 Key: HDFS-10500
 URL: https://issues.apache.org/jira/browse/HDFS-10500
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-1312


This collects a bunch of issues that were identified in testing and fixes all 
of them since most of them are one line fixes in diskbalancer command shell.

* Fix bugs in Precondition checks
* Use NodePlan.Parse instead of readPlan(), remove readPlan, GetPlan
* Print out information when a plan is not generated. 
* Format plan command  output to console.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319411#comment-15319411
 ] 

Hadoop QA commented on HDFS-10467:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 1s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 50s {color} 
| {color:red} root generated 4 new + 695 unchanged - 2 fixed = 699 total (was 
697) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 49s 
{color} | {color:red} root: The patch generated 609 new + 1185 unchanged - 5 
fixed = 1794 total (was 1190) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 12s 
{color} | {color:red} The patch generated 4 new + 80 unchanged - 1 fixed = 84 
total (was 81) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 48 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s 
{color} | {color:red} The patch 37 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 44 new + 0 
unchanged - 0 fixed = 44 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 5s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 65 new + 7 
unchanged - 0 fixed = 72 total (was 7) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 15s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 50s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 30s 
{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 145m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Unread field:StateStoreMetrics.java:[line 55] |
|  |  org.apache.hadoop.hdfs.server.federation.router.FederationConnectionId 
doe

[jira] [Updated] (HDFS-10496) DiskBalancer: ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10496:

Summary: DiskBalancer: ExecuteCommand checks planFile in a wrong way  (was: 
ExecuteCommand checks planFile in a wrong way)

> DiskBalancer: ExecuteCommand checks planFile in a wrong way
> ---
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Fix For: HDFS-1312
>
> Attachments: HDFS-10496-HDFS-1312.001.patch, HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is supplied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9943) Support reconfiguring namenode replication confs

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319405#comment-15319405
 ] 

Hadoop QA commented on HDFS-9943:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808754/HDFS-9943-HDFS-9000.006.patch
 |
| JIRA Issue | HDFS-9943 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15697/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Support reconfiguring namenode replication confs
> 
>
> Key: HDFS-9943
> URL: https://issues.apache.org/jira/browse/HDFS-9943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9943-HDFS-9000.000.patch, 
> HDFS-9943-HDFS-9000.001.patch, HDFS-9943-HDFS-9000.002.patch, 
> HDFS-9943-HDFS-9000.003.patch, HDFS-9943-HDFS-9000.004.patch, 
> HDFS-9943-HDFS-9000.005.patch, HDFS-9943-HDFS-9000.006.patch
>
>
> The following confs should be re-configurable in runtime.
> - dfs.namenode.replication.work.multiplier.per.iteration
> - dfs.namenode.replication.interval
> - dfs.namenode.replication.max-streams
> - dfs.namenode.replication.max-streams-hard-limit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10488) WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating files/directories without specifying permissions

2016-06-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319384#comment-15319384
 ] 

Chris Nauroth commented on HDFS-10488:
--

bq. So, Chris Nauroth, summarizing, fs.permissions.umask-mode should not be 
applied for WebHDFS created directories/files.

I think a slight refinement of this is to say that it should not be applied by 
the WebHDFS server side (the NameNode).  It may be applied by the WebHDFS 
client side.  For example, the {{WebHdfsFileSystem}} class that ships in Hadoop 
does apply {{fs.permissions.umask-mode}} from the client side before calling 
the WebHDFS server side.
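As a rough illustration of the arithmetic involved (plain bit math only, not the 
actual {{WebHdfsFileSystem}} code; 777/666 and 022 are the usual defaults):

{code}
// Illustrative umask arithmetic only; not the WebHdfsFileSystem implementation.
public class UmaskSketch {
  public static void main(String[] args) {
    int umask = 0022;       // typical fs.permissions.umask-mode
    int dirDefault = 0777;  // directory mode before the umask is applied
    int fileDefault = 0666; // file mode before the umask is applied

    // Client-side application of the umask: mode & ~umask.
    System.out.printf("dir : %o%n", dirDefault & ~umask);   // prints 755
    System.out.printf("file: %o%n", fileDefault & ~umask);  // prints 644
  }
}
{code}

The server side would then simply use whatever permission the client passes (or 
its own default) without re-applying the umask.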

bq. While working on this, I had found out the default permission (if no 
permission is specified while calling the method) for both directories and 
files created by WebHDFS currently is 755. However, defining "execution" 
permissions for HDFS files doesn't have any value. Should this be changed to give 
different default permissions for files and directories?

This part is admittedly odd, and there is a long-standing open JIRA requesting 
a change to 644 as the default for files.  That is HDFS-6434.  This change is 
potentially backwards-incompatible, such as if someone has an existing workflow 
that round-trips a file through HDFS and expects it to be executable after 
getting it back out, though that's likely a remote edge case.  If you'd like to 
proceed with HDFS-6434, then I'd suggest targeting trunk/Hadoop 3.x, where we 
currently can make backwards-incompatible changes.

bq. Still on the default values, setting 755 as default can lead to confusion 
about umask being used. Since default umask is 022, users can conclude that the 
umask is being applied when they see newly created directories got 755. Should 
this be changed to more permissive permissions such as 777?

I do think 777 makes sense from one perspective, but there is also a trade-off 
with providing behavior that is secure by default.  In HDFS-2427, the project 
made the choice to go with 755, favoring secure default behavior (755) over the 
possibly more intuitive behavior (777).

bq. When working on tests for WebHDFS CREATESYMLINK as suggested by Wei-Chiu 
Chuang, I realized this method is no longer supported. Should we simply remove 
from WebHDFS, or only document this is not supported anymore and leave it 
giving the current error?

HDFS symlinks are currently in a state where the code is partially completed 
but dormant due to unresolved problems with backwards-compatibility and 
security.  We might get past those hurdles someday, so I suggest leaving that 
code as is.  We still run tests against the symlink code paths.  This works by 
having the tests call the private {{FileSystem#enableSymlinks}} method to 
toggle on the dormant symlink code.

> WebHDFS CREATE and MKDIRS does not follow same rules as DFS CLI when creating 
> files/directories without specifying permissions
> --
>
> Key: HDFS-10488
> URL: https://issues.apache.org/jira/browse/HDFS-10488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-10488.002.patch, HDFS-10488.003.patch, 
> HDFS-10488.patch
>
>
> WebHDFS methods for creating file/directories are always creating it with 755 
> permissions as default, even ignoring any configured 
> *fs.permissions.umask-mode* in the case of directories.
> Dfs CLI, however, applies the configured umask to 777 permission for 
> directories, or 666 permission for files.
> Example below shows the different behaviour when creating directory via CLI 
> and WebHDFS:
> {noformat}
> 1) Creating a directory under '/test/' as 'test-user'. Configured 
> fs.permissions.umask-mode is 000: 
> $ sudo -u test-user hdfs dfs -mkdir /test/test-user1 
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user1 
> # file: /test/test-user1
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::rwx 
> other::rwx 
> 4) Doing the same via WebHDFS does not get the proper ACLs: 
> $ curl -i -X PUT 
> "http://namenode-host:50070/webhdfs/v1/test/test-user2?user.name=test-user&op=MKDIRS";
>  
> $ sudo -u test-user hdfs dfs -getfacl /test/test-user2 
> # file: /test/test-user2 
> # owner: test-user 
> # group: supergroup 
> user::rwx 
> group::r-x 
> other::r-x
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9943) Support reconfiguring namenode replication confs

2016-06-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9943:

Attachment: HDFS-9943-HDFS-9000.006.patch

v006 fixes some checkstyle issues.

> Support reconfiguring namenode replication confs
> 
>
> Key: HDFS-9943
> URL: https://issues.apache.org/jira/browse/HDFS-9943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9943-HDFS-9000.000.patch, 
> HDFS-9943-HDFS-9000.001.patch, HDFS-9943-HDFS-9000.002.patch, 
> HDFS-9943-HDFS-9000.003.patch, HDFS-9943-HDFS-9000.004.patch, 
> HDFS-9943-HDFS-9000.005.patch, HDFS-9943-HDFS-9000.006.patch
>
>
> The following confs should be re-configurable in runtime.
> - dfs.namenode.replication.work.multiplier.per.iteration
> - dfs.namenode.replication.interval
> - dfs.namenode.replication.max-streams
> - dfs.namenode.replication.max-streams-hard-limit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10497) Intermittent test failure org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover

2016-06-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou reassigned HDFS-10497:


Assignee: Xiaobing Zhou

> Intermittent test failure 
> org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover
> 
>
> Key: HDFS-10497
> URL: https://issues.apache.org/jira/browse/HDFS-10497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Xiaobing Zhou
>
> Per https://builds.apache.org/job/PreCommit-HDFS-Build/15646/testReport/, we 
> had the following failure. Local rerun is successful.
> Error Details: 
> {panel}
> org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): bad state: 
> CLOSED
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logEdit(FSEditLog.java:428)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logRename(FSEditLog.java:867)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:289)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:247)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2755)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename2(NameNodeRpcServer.java:1027)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename2(ClientNamenodeProtocolServerSideTranslatorPB.java:607)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:664)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
> {panel}
> Stack Trace:
> {panel}
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): bad state: 
> CLOSED
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logEdit(FSEditLog.java:428)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logRename(FSEditLog.java:867)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:289)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:247)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename2(NameNodeRpcServer.java:1027)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename2(ClientNamenodeProtocolServerSideTranslatorPB.java:607)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:664)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
>   at 
> org.apache.hadoop.util.concurrent.AsyncGetFuture.get(AsyncGetFuture.java:58)
>   at 
> org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover(TestAsyncHDFSWithHA.java:166)
> Caused by: org.apache.hadoop.ipc.RemoteException: bad state: CLOSED
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logEdit(FSEditLog.java:428)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logRename(FSEditLog.java:867)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:289)
>

[jira] [Updated] (HDFS-3714) Disallow manual failover to already active NN when auto failover is enabled

2016-06-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-3714:
--
Labels: newbie  (was: )

> Disallow manual failover to already active NN when auto failover is enabled
> ---
>
> Key: HDFS-3714
> URL: https://issues.apache.org/jira/browse/HDFS-3714
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Priority: Minor
>  Labels: newbie
>
> If nn1 is active and nn2 is standby, "hdfs haadmin -failover nn2 nn1" says 
> that failover to nn1 is successful even though nn1 is already active. When 
> only manual failover is enabled, there is a check that tells the user they 
> can't fail over to an already active service. We should have the same check 
> for auto failover.
> {noformat}
> bash-4.1$ hdfs haadmin -failover nn2 nn1
> Failover to NameNode at brut01.sf.cloudera.com/172.22.35.149:17020 successful
> bash-4.1$ hdfs haadmin -failover nn2 nn1
> Failover to NameNode at brut01.sf.cloudera.com/172.22.35.149:17020 successful
> bash-4.1$ hdfs haadmin -failover nn2 nn1
> Failover to NameNode at brut01.sf.cloudera.com/172.22.35.149:17020 successful
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9943) Support reconfiguring namenode replication confs

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319327#comment-15319327
 ] 

Hadoop QA commented on HDFS-9943:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 13 
new + 291 unchanged - 4 fixed = 304 total (was 295) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808736/HDFS-9943-HDFS-9000.005.patch
 |
| JIRA Issue | HDFS-9943 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 50541c4c9bb2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / be34e85 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15694/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15694/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15694/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15694/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15694/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> S

[jira] [Commented] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319320#comment-15319320
 ] 

Hadoop QA commented on HDFS-10469:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 8 
new + 268 unchanged - 1 fixed = 276 total (was 269) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808735/HDFS-10469.001.patch |
| JIRA Issue | HDFS-10469 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09d398baa5b4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / be34e85 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15693/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15693/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15693/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15693/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15693/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add numb

[jira] [Updated] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10496:
-
Attachment: HDFS-10496-HDFS-1312.001.patch

Re-uploaded to test against the feature branch.

> ExecuteCommand checks planFile in a wrong way
> -
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Fix For: HDFS-1312
>
> Attachments: HDFS-10496-HDFS-1312.001.patch, HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10496:
-
Fix Version/s: HDFS-1312

> ExecuteCommand checks planFile in a wrong way
> -
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Fix For: HDFS-1312
>
> Attachments: HDFS-10496-HDFS-1312.001.patch, HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10469:
--
Status: In Progress  (was: Patch Available)

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.
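As a rough illustration of what such a gauge could look like with the metrics2 
annotations, assuming hypothetical class, field, and metric names (the actual patch 
may wire the value differently):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

// Hypothetical sketch only; names here are assumptions, not the patch.
@Metrics(about = "DataNode xceiver gauge sketch", context = "dfs")
class XceiverMetricsSketch {
  @Metric("Count of active dataXceiver threads")
  MutableGaugeInt activeXceiversCount;

  // The source must be registered so metrics2 initializes the annotated field.
  static XceiverMetricsSketch create() {
    return DefaultMetricsSystem.instance()
        .register("XceiverMetricsSketch", null, new XceiverMetricsSketch());
  }

  void onXceiverStarted()  { activeXceiversCount.incr(); }
  void onXceiverFinished() { activeXceiversCount.decr(); }
}
{code}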



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10469:
--
Status: Patch Available  (was: In Progress)

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319222#comment-15319222
 ] 

Hanisha Koneru commented on HDFS-10469:
---

* Fixed Checkstyle errors in patch version .001
* Failed junit tests
** _hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency_
*** Unrelated and intermittent test failure. Local rerun is successful. Raised 
a bug: HDFS-10499
** _hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration_
*** Fixed in new patch version .001
** _hadoop.hdfs.TestCrcCorruption_
*** Not related to this patch. A bug has already been raised for this: HDFS-6532
** _hadoop.hdfs.TestHDFSServerPorts_
*** Fixed in new patch version .001
** _hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength_
*** Unrelated and intermittent test failure. Local rerun is successful. Raised 
a bug: HDFS-10498
** _hadoop.hdfs.TestAsyncHDFSWithHA_
*** Intermittent test failure. Local rerun is successful. Raised a bug: 
HDFS-10497

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10499) Intermittent test failure org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture

2016-06-07 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-10499:
-

 Summary: Intermittent test failure 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture
 Key: HDFS-10499
 URL: https://issues.apache.org/jira/browse/HDFS-10499
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, namenode
Affects Versions: 3.0.0-alpha1
Reporter: Hanisha Koneru


Per https://builds.apache.org/job/PreCommit-HDFS-Build/15646/testReport/, we 
had the following failure. Local rerun is successful.

Stack Trace:
{panel}
java.lang.AssertionError: expected:<17> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture(TestNameNodeMetadataConsistency.java:113)
{panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319210#comment-15319210
 ] 

Anu Engineer commented on HDFS-10496:
-

+1. I have the same change in the next patch. I will commit this and modify my 
patch.


> ExecuteCommand checks planFile in a wrong way
> -
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Attachments: HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10498) Intermittent test failure org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength

2016-06-07 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-10498:
-

 Summary: Intermittent test failure 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength
 Key: HDFS-10498
 URL: https://issues.apache.org/jira/browse/HDFS-10498
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, snapshots
Affects Versions: 3.0.0-alpha1
Reporter: Hanisha Koneru


Per https://builds.apache.org/job/PreCommit-HDFS-Build/15646/testReport/, we 
had the following failure. Local rerun is successful.
Error Details:
{panel}
Fail to get block MD5 for 
LocatedBlock{BP-145245805-172.17.0.3-1464981728847:blk_1073741826_1002; 
getBlockSize()=1; corrupt=false; offset=1024; 
locs=[DatanodeInfoWithStorage[127.0.0.1:55764,DS-a33d7c97-9d4a-4694-a47e-a3187a33ed5a,DISK]]}
{panel}
Stack Trace: 
{panel}
java.io.IOException: Fail to get block MD5 for 
LocatedBlock{BP-145245805-172.17.0.3-1464981728847:blk_1073741826_1002; 
getBlockSize()=1; corrupt=false; offset=1024; 
locs=[DatanodeInfoWithStorage[127.0.0.1:55764,DS-a33d7c97-9d4a-4694-a47e-a3187a33ed5a,DISK]]}
at 
org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.checksumBlocks(FileChecksumHelper.java:289)
at 
org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:206)
at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:1731)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$31.doCall(DistributedFileSystem.java:1482)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$31.doCall(DistributedFileSystem.java:1479)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1490)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength(TestSnapshotFileLength.java:137)
{panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10496:
-
Component/s: balancer & mover

> ExecuteCommand checks planFile in a wrong way
> -
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Attachments: HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10497) Intermittent test failure org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover

2016-06-07 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-10497:
-

 Summary: Intermittent test failure 
org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover
 Key: HDFS-10497
 URL: https://issues.apache.org/jira/browse/HDFS-10497
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, namenode
Affects Versions: 3.0.0-alpha1
Reporter: Hanisha Koneru


Per https://builds.apache.org/job/PreCommit-HDFS-Build/15646/testReport/, we 
had the following failure. Local rerun is successful.
Error Details: 
{panel}
org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): bad state: 
CLOSED
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logEdit(FSEditLog.java:428)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.logRename(FSEditLog.java:867)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:289)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:247)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2755)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename2(NameNodeRpcServer.java:1027)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename2(ClientNamenodeProtocolServerSideTranslatorPB.java:607)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:664)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
{panel}

Stack Trace:
{panel}
java.util.concurrent.ExecutionException: 
org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): bad state: 
CLOSED
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.logEdit(FSEditLog.java:428)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.logRename(FSEditLog.java:867)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:289)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:247)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2755)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename2(NameNodeRpcServer.java:1027)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename2(ClientNamenodeProtocolServerSideTranslatorPB.java:607)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:664)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)

at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at 
org.apache.hadoop.util.concurrent.AsyncGetFuture.get(AsyncGetFuture.java:58)
at 
org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover(TestAsyncHDFSWithHA.java:166)
Caused by: org.apache.hadoop.ipc.RemoteException: bad state: CLOSED
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.logEdit(FSEditLog.java:428)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.logRename(FSEditLog.java:867)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:289)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:247)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2755)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename2(NameNodeRpcServer.java:1027)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rena

[jira] [Commented] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319184#comment-15319184
 ] 

Hadoop QA commented on HDFS-10496:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} HDFS-10496 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808740/HDFS-10496.0.patch |
| JIRA Issue | HDFS-10496 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15695/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ExecuteCommand checks planFile in a wrong way
> -
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Attachments: HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10496:
-
Status: Patch Available  (was: Open)

> ExecuteCommand checks planFile in a wrong way
> -
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Attachments: HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10496:
-
Attachment: HDFS-10496.0.patch

Change the conditions to check arguments in {{ExecuteCommand.execute}}.

> ExecuteCommand checks planFile in a wrong way
> -
>
> Key: HDFS-10496
> URL: https://issues.apache.org/jira/browse/HDFS-10496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Critical
> Attachments: HDFS-10496.0.patch
>
>
> In {{ExecuteCommand#execute}}, it checks the plan file as 
> {code}
>  Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
> "Invalid plan file specified.");
> {code}
> This stops the execution even when a correct planFile argument is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10496) ExecuteCommand checks planFile in a wrong way

2016-06-07 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-10496:


 Summary: ExecuteCommand checks planFile in a wrong way
 Key: HDFS-10496
 URL: https://issues.apache.org/jira/browse/HDFS-10496
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-1312
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Critical


In {{ExecuteCommand#execute}}, it checks the plan file as 

{code}
 Preconditions.checkArgument(planFile == null || planFile.isEmpty(),
"Invalid plan file specified.");
{code}

This stops the execution even when a correct planFile argument is given.
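Presumably the fix is to invert the condition; a hedged sketch of what the corrected 
check could look like (the attached patch itself is not reproduced in this thread, so 
treat the exact form as an assumption):

{code}
import com.google.common.base.Preconditions;

// Hedged sketch of the presumably intended check: reject a missing or empty
// plan file instead of rejecting a valid one. The real patch may differ.
final class PlanFileCheckSketch {
  static void validatePlanFile(String planFile) {
    Preconditions.checkArgument(planFile != null && !planFile.isEmpty(),
        "Invalid plan file specified.");
  }
}
{code}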



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10491) libhdfs++: Implement GetFsStats

2016-06-07 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319159#comment-15319159
 ] 

Anatoli Shein commented on HDFS-10491:
--

I added TestGetUsed to hdfs_ext_test.

> libhdfs++: Implement GetFsStats
> ---
>
> Key: HDFS-10491
> URL: https://issues.apache.org/jira/browse/HDFS-10491
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10491.HDFS-8707.000.patch, 
> HDFS-10491.HDFS-8707.000.patch, HDFS-10491.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9943) Support reconfiguring namenode replication confs

2016-06-07 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319151#comment-15319151
 ] 

Xiaobing Zhou edited comment on HDFS-9943 at 6/7/16 7:08 PM:
-

Patch v005 is posted to remove the volatile qualifier from the two parameters.

I checked that dfs.namenode.replication.max-streams and 
dfs.namenode.replication.max-streams-hard-limit are actually read on code paths 
that hold namesystem.writeLock(), e.g. the following call chain:
{code}
BlockManager#chooseSourceDatanodes
getMaxReplicationStreams
getReplicationStreamsHardLimit  
BlockManager#scheduleReconstruction
BlockManager#computeReconstructionWorkForBlocks
   // Step 1: categorize at-risk blocks into replication and EC tasks
namesystem.writeLock();
try {
  synchronized (neededReconstruction) {
for (int priority = 0; priority < blocksToReconstruct
.size(); priority++) {
  for (BlockInfo block : blocksToReconstruct.get(priority)) {
BlockReconstructionWork rw = scheduleReconstruction(block,
priority);
if (rw != null) {
  reconWork.add(rw);
}
  }
}
  }
} finally {
  namesystem.writeUnlock();
}
{code}



was (Author: xiaobingo):
Patch v005 is posted to remove the volatile qualifier from the two parameters.

I checked that dfs.namenode.replication.max-streams and 
dfs.namenode.replication.max-streams-hard-limit are actually read on code paths 
that hold namesystem.writeLock(), e.g. the following call chain:
{code}
BlockManager#chooseSourceDatanodes
getMaxReplicationStreams
getReplicationStreamsHardLimit  
BlockManager#scheduleReconstruction
BlockManager#computeReconstructionWorkForBlocks
// Step 1: categorize at-risk blocks into replication and EC tasks
namesystem.writeLock();
try {
  synchronized (neededReconstruction) {
for (int priority = 0; priority < blocksToReconstruct
.size(); priority++) {
  for (BlockInfo block : blocksToReconstruct.get(priority)) {
BlockReconstructionWork rw = scheduleReconstruction(block,
priority);
if (rw != null) {
  reconWork.add(rw);
}
  }
}
  }
} finally {
  namesystem.writeUnlock();
}
{code}


> Support reconfiguring namenode replication confs
> 
>
> Key: HDFS-9943
> URL: https://issues.apache.org/jira/browse/HDFS-9943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9943-HDFS-9000.000.patch, 
> HDFS-9943-HDFS-9000.001.patch, HDFS-9943-HDFS-9000.002.patch, 
> HDFS-9943-HDFS-9000.003.patch, HDFS-9943-HDFS-9000.004.patch, 
> HDFS-9943-HDFS-9000.005.patch
>
>
> The following confs should be re-configurable in runtime.
> - dfs.namenode.replication.work.multiplier.per.iteration
> - dfs.namenode.replication.interval
> - dfs.namenode.replication.max-streams
> - dfs.namenode.replication.max-streams-hard-limit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9943) Support reconfiguring namenode replication confs

2016-06-07 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319151#comment-15319151
 ] 

Xiaobing Zhou commented on HDFS-9943:
-

Patch v005 is posted to remove the volatile qualifier from the two parameters.

I checked that dfs.namenode.replication.max-streams and 
dfs.namenode.replication.max-streams-hard-limit are actually read on code paths 
that hold namesystem.writeLock(), e.g. the following call chain:
{code}
BlockManager#chooseSourceDatanodes
getMaxReplicationStreams
getReplicationStreamsHardLimit  
BlockManager#scheduleReconstruction
BlockManager#computeReconstructionWorkForBlocks
// Step 1: categorize at-risk blocks into replication and EC tasks
namesystem.writeLock();
try {
  synchronized (neededReconstruction) {
for (int priority = 0; priority < blocksToReconstruct
.size(); priority++) {
  for (BlockInfo block : blocksToReconstruct.get(priority)) {
BlockReconstructionWork rw = scheduleReconstruction(block,
priority);
if (rw != null) {
  reconWork.add(rw);
}
  }
}
  }
} finally {
  namesystem.writeUnlock();
}
{code}
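To illustrate why dropping volatile can be safe under that assumption (both the 
readers above and the reconfiguration update holding the same write lock), here is a 
generic sketch, not the actual patch; the lock's acquire/release already provides the 
needed happens-before ordering:

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: a plain, non-volatile field guarded by one shared lock.
class ReplicationConfSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private int maxReplicationStreams = 2;  // plain field, no volatile

  int getMaxReplicationStreams() {
    lock.writeLock().lock();              // mirrors namesystem.writeLock() above
    try {
      return maxReplicationStreams;
    } finally {
      lock.writeLock().unlock();
    }
  }

  void setMaxReplicationStreams(int newValue) {
    lock.writeLock().lock();              // reconfiguration takes the same lock
    try {
      maxReplicationStreams = newValue;
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}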


> Support reconfiguring namenode replication confs
> 
>
> Key: HDFS-9943
> URL: https://issues.apache.org/jira/browse/HDFS-9943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9943-HDFS-9000.000.patch, 
> HDFS-9943-HDFS-9000.001.patch, HDFS-9943-HDFS-9000.002.patch, 
> HDFS-9943-HDFS-9000.003.patch, HDFS-9943-HDFS-9000.004.patch, 
> HDFS-9943-HDFS-9000.005.patch
>
>
> The following confs should be re-configurable in runtime.
> - dfs.namenode.replication.work.multiplier.per.iteration
> - dfs.namenode.replication.interval
> - dfs.namenode.replication.max-streams
> - dfs.namenode.replication.max-streams-hard-limit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9943) Support reconfiguring namenode replication confs

2016-06-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9943:

Attachment: HDFS-9943-HDFS-9000.005.patch

> Support reconfiguring namenode replication confs
> 
>
> Key: HDFS-9943
> URL: https://issues.apache.org/jira/browse/HDFS-9943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9943-HDFS-9000.000.patch, 
> HDFS-9943-HDFS-9000.001.patch, HDFS-9943-HDFS-9000.002.patch, 
> HDFS-9943-HDFS-9000.003.patch, HDFS-9943-HDFS-9000.004.patch, 
> HDFS-9943-HDFS-9000.005.patch
>
>
> The following confs should be re-configurable in runtime.
> - dfs.namenode.replication.work.multiplier.per.iteration
> - dfs.namenode.replication.interval
> - dfs.namenode.replication.max-streams
> - dfs.namenode.replication.max-streams-hard-limit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-07 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10469:
--
Attachment: HDFS-10469.001.patch

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch
>
>
> Number of active xceivers is exposed via jmx, but not in Datanode metrics. We 
> should add it to datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319105#comment-15319105
 ] 

Hadoop QA commented on HDFS-10476:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
26s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 33s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808713/HDFS-10476-HDFS-1312.002.patch
 |
| JIRA Issue | HDFS-10476 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a9f3167250c4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 20d8cf7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15690/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15690/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15690/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15690/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: Plan command output di

[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-06-07 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319098#comment-15319098
 ] 

Inigo Goiri commented on HDFS-10467:


I went through the rebase onto trunk and there are just a couple of changes in 
{{Server}}, {{Client}}, and a couple of related classes.
It should be easy to keep rebasing the patch as needed.

I haven't been able to fully test it on trunk yet but we'll go over it during 
the day.

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.patch, 
> HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10491) libhdfs++: Implement GetFsStats

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319080#comment-15319080
 ] 

Hadoop QA commented on HDFS-10491:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 4s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
40s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 11s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 13s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 42s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 45s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808718/HDFS-10491.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-10491 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 2896e665bdaa 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / bfb2e27 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15691/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15691/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement GetFsStats
> ---
>
> Key: HDFS-10491
> URL: https://issues.apache.org/jira/browse/HDFS-10491
> Project: Hadoop HDFS
>  Issue Type: Sub-ta

[jira] [Updated] (HDFS-10467) Router-based HDFS federation

2016-06-07 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10467:
---
Status: Patch Available  (was: Open)

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.patch, 
> HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10467) Router-based HDFS federation

2016-06-07 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10467:
---
Attachment: HDFS-10467.PoC.patch

Prototype on trunk (not fully tested yet).

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
> Attachments: HDFS Router Federation.pdf, HDFS-10467.PoC.patch, 
> HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2016-06-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319050#comment-15319050
 ] 

Zhe Zhang commented on HDFS-9806:
-

Another thought is, maybe we can leverage caching policies and consistency 
models from NFS? Fundamentally, each "small HDFS" is like an NFS client, and 
the "big external store" is like the NFS server.

E.g. maybe we can use lease-based locking to prevent conflicting updates to the 
same subtree.

bq. Initially, writes are not supported through HDFS (read-only). Refresh is an 
important case,
Thanks for the clarification. This happens to be the most important use case in 
our setup. I think a read-only "small HDFS" should be able to simplify the 
design. A few additional questions:
# Should the NN periodically refresh the entire mounted subtree? Or fetch new 
metadata and data on demand? Or a mix of on-demand fetch and prefetching? E.g. 
when an application accesses file {{/data/log1.txt}} and it's a cache miss on 
the small HDFS, proactively fetch all files under {{/data/}} to the small HDFS. 
If we assume the small HDFS has a significantly smaller capacity than the 
external store, refreshing the entire subtree seems too heavy (in network 
bandwidth usage and small HDFS capacity)?
# On the on-demand fetching path, the block will be transferred from the 
external store to the small HDFS DN first, and then from the small HDFS DN to 
the application. This actually increases latency from 1 hop to 2 hops, and it's 
tricky to reduce this.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate/and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319010#comment-15319010
 ] 

Hudson commented on HDFS-10468:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9922 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9922/])
HDFS-10468. HDFS read ends up ignoring an interrupt. Contributed by Jing 
(jing9: rev be34e85e682880f46eee0310bf00ecc7d39cd5bd)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRead.java


> HDFS read ends up ignoring an interrupt
> ---
>
> Key: HDFS-10468
> URL: https://issues.apache.org/jira/browse/HDFS-10468
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Jing Zhao
> Fix For: 2.9.0
>
> Attachments: HDFS-10468.000.patch, HDFS-10468.001.patch, 
> HDFS-10468.002.patch, HDFS-10468.003.patch, log
>
>
> If an interrupt comes in during an HDFS read - it looks like HDFS ends up 
> ignoring it (handling it), and retries the read after an interval.
> An interrupt should result in the read being cancelled, with an 
> InterruptedException being thrown.
> Similarly - if an HDFS op is started with the interrupt status on the thread 
> set, an InterruptedException should be thrown.
> cc [~jingzhao]
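For illustration, a hedged sketch of the requested behaviour, not the committed 
DFSInputStream change: a retry loop that surfaces the interrupt instead of silently 
sleeping and retrying.

{code}
import java.io.IOException;
import java.io.InterruptedIOException;

// Illustrative only; names and structure are assumptions, not the real patch.
final class InterruptAwareRetrySketch {
  interface ReadAttempt { void run() throws IOException; }

  static void readWithRetries(ReadAttempt attempt, int maxRetries, long backoffMs)
      throws IOException {
    for (int i = 0; ; i++) {
      // Honor an interrupt set before the call or during the previous attempt.
      if (Thread.currentThread().isInterrupted()) {
        throw new InterruptedIOException("read interrupted");
      }
      try {
        attempt.run();
        return;
      } catch (IOException e) {
        if (i >= maxRetries) {
          throw e;
        }
        try {
          Thread.sleep(backoffMs);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();   // keep the interrupt status set
          throw new InterruptedIOException("read interrupted during retry backoff");
        }
      }
    }
  }
}
{code}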



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-10468:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks again for the review, [~iwasakims]. I've committed this to trunk and 
branch-2.

> HDFS read ends up ignoring an interrupt
> ---
>
> Key: HDFS-10468
> URL: https://issues.apache.org/jira/browse/HDFS-10468
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Jing Zhao
> Fix For: 2.9.0
>
> Attachments: HDFS-10468.000.patch, HDFS-10468.001.patch, 
> HDFS-10468.002.patch, HDFS-10468.003.patch, log
>
>
> If an interrupt comes in during an HDFS read, it looks like HDFS ends up 
> ignoring it (swallowing the interrupt), and retries the read after an interval.
> An interrupt should result in the read being cancelled, with an 
> InterruptedException being thrown.
> Similarly - if an HDFS op is started with the interrupt status on the thread 
> set, an InterruptedException should be thrown.
> cc [~jingzhao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-4821) It's possible to create files with special characters in the filenames, but 'hadoop fs -ls' gives no indication

2016-06-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-4821.
--
Resolution: Duplicate

> It's possible to create files with special characters in the filenames, but 
> 'hadoop fs -ls' gives no indication
> ---
>
> Key: HDFS-4821
> URL: https://issues.apache.org/jira/browse/HDFS-4821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Stephen Fritz
>
> For example:
> -bash-4.1$ hadoop fs -mkdir /user/hdfs
> -bash-4.1$ hadoop fs -touchz /user/hdfs/dupfile
> -bash-4.1$ hadoop fs -touchz /user/hdfs/dupfile^M
> -bash-4.1$ hadoop fs -ls /user/hdfs
> Found 2 items
> -rw-r--r--   3 hdfs supergroup  0 2013-05-14 07:13 /user/hdfs/dupfile
> -rw-r--r--   3 hdfs supergroup  0 2013-05-14 07:13 /user/hdfs/dupfile
> By way of comparison, bash will print a '?' at the end of the file name, 
> indicating there's a special character in the filename. This isn't perfect, 
> but it at least gives an indication that there's more to the filename than 
> just alphanumeric characters.
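
A minimal sketch of the kind of indication requested above (illustrative only, 
not the FsShell listing code): replace non-printable characters with '?' when 
rendering file names.

{code:java}
// Illustrative helper (hypothetical, not part of FsShell): make control
// characters such as a trailing \r visible in listings.
public class FileNameDisplay {
  static String toDisplayName(String name) {
    StringBuilder sb = new StringBuilder(name.length());
    for (int i = 0; i < name.length(); i++) {
      char c = name.charAt(i);
      // "dupfile" and "dupfile\r" would otherwise render identically.
      sb.append(Character.isISOControl(c) ? '?' : c);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(toDisplayName("dupfile"));    // dupfile
    System.out.println(toDisplayName("dupfile\r"));  // dupfile?
  }
}
{code}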



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-07 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10458:
-
Labels: encryption namenode scalability  (was: encryption)

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: encryption, namenode, scalability
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HDFS-10458-branch-2.00.patch, 
> HDFS-10458-branch-2.6.00.patch, HDFS-10458-branch-2.6.01.patch, 
> HDFS-10458-branch-2.7.00.patch, HDFS-10458.00.patch, HDFS-10458.03.patch, 
> HDFS-10458.04.patch, HDFS-10458.05.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks 
> if the path belongs to an EZ. For a busy system with potentially many listing 
> operations, this could cause locking contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to 
> return whether the system has any EZ. If no EZ at all, 
> {{getFileEncryptionInfo}} should return null without {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe it itself 
> doesn't need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, it means the path cannot be 
> encrypted.
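
The fast path described above can be sketched roughly as follows; the class, 
field, and method names here are illustrative stand-ins, not the actual 
FSDirectory/EncryptionZoneManager change in the attached patches.

{code:java}
// Rough, self-contained sketch of the "skip the read lock when no EZ exists"
// idea; names are hypothetical, not the actual NameNode classes.
import java.util.concurrent.locks.ReentrantReadWriteLock;

class EncryptionInfoLookup {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  // Flipped to true the first time an encryption zone is created.
  private volatile boolean hasEncryptionZone = false;

  void markEncryptionZoneCreated() {
    hasEncryptionZone = true;
  }

  String getFileEncryptionInfo(String path) {
    // Fast path: if the cluster has never had an EZ, no path can be
    // encrypted, so return without touching the read lock at all.
    if (!hasEncryptionZone) {
      return null;
    }
    lock.readLock().lock();
    try {
      return lookupUnderLock(path);   // placeholder for the real EZ resolution
    } finally {
      lock.readLock().unlock();
    }
  }

  private String lookupUnderLock(String path) {
    return null;                      // stand-in only
  }
}
{code}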



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-07 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10458:
-
Labels: encryption  (was: )

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: encryption, namenode, scalability
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HDFS-10458-branch-2.00.patch, 
> HDFS-10458-branch-2.6.00.patch, HDFS-10458-branch-2.6.01.patch, 
> HDFS-10458-branch-2.7.00.patch, HDFS-10458.00.patch, HDFS-10458.03.patch, 
> HDFS-10458.04.patch, HDFS-10458.05.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks 
> if the path belongs to an EZ. For a busy system with potentially many listing 
> operations, this could cause locking contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to 
> return whether the system has any EZ. If no EZ at all, 
> {{getFileEncryptionInfo}} should return null without {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe it itself 
> doesn't need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, it means the path cannot be 
> encrypted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-07 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10458:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.6.5
   Status: Resolved  (was: Patch Available)

Verified branch-2.6 patch with a local build. Committed to branch-2.6 as well. 
Resolving the issue now. Thanks [~shv] for the review.

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HDFS-10458-branch-2.00.patch, 
> HDFS-10458-branch-2.6.00.patch, HDFS-10458-branch-2.6.01.patch, 
> HDFS-10458-branch-2.7.00.patch, HDFS-10458.00.patch, HDFS-10458.03.patch, 
> HDFS-10458.04.patch, HDFS-10458.05.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks 
> if the path belongs to an EZ. For a busy system with potentially many listing 
> operations, this could cause locking contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to 
> return whether the system has any EZ. If no EZ at all, 
> {{getFileEncryptionInfo}} should return null without {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe it itself 
> doesn't need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, it means the path cannot be 
> encrypted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10478) DiskBalancer: resolve volume path names

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10478:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~arpitagarwal] Thanks for the review. I have committed this to the feature 
branch.

> DiskBalancer: resolve volume path names
> ---
>
> Key: HDFS-10478
> URL: https://issues.apache.org/jira/browse/HDFS-10478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10478-HDFS-1312.001.patch
>
>
> When creating a plan we don't fetch the names of the volumes, but with the -v 
> option we try to print those paths so users can see how the data is being 
> moved. This patch fetches the volume names before a plan is persisted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10476:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~arpitagarwal] & [~eddyxu] Thanks for the reviews. I have committed this to 
the feature branch.

> DiskBalancer: Plan command output directory should be a sub-directory
> -
>
> Key: HDFS-10476
> URL: https://issues.apache.org/jira/browse/HDFS-10476
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10476-HDFS-1312.001.patch, 
> HDFS-10476-HDFS-1312.002.patch
>
>
> The plan command output is placed in the default directory 
> /system/diskbalancer; instead it should be placed in a sub-directory under 
> /system/diskbalancer/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10491) libhdfs++: Implement GetFsStats

2016-06-07 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10491:
-
Attachment: HDFS-10491.HDFS-8707.000.patch

Trying again

> libhdfs++: Implement GetFsStats
> ---
>
> Key: HDFS-10491
> URL: https://issues.apache.org/jira/browse/HDFS-10491
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10491.HDFS-8707.000.patch, 
> HDFS-10491.HDFS-8707.000.patch, HDFS-10491.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10476:

Attachment: HDFS-10476-HDFS-1312.002.patch

[~arpitagarwal] & [~eddyxu] Thanks for the reviews. I have updated the patch 
with the suggested changes. I will commit version 2 without further reviews.



> DiskBalancer: Plan command output directory should be a sub-directory
> -
>
> Key: HDFS-10476
> URL: https://issues.apache.org/jira/browse/HDFS-10476
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10476-HDFS-1312.001.patch, 
> HDFS-10476-HDFS-1312.002.patch
>
>
> The plan command output is placed in the default directory 
> /system/diskbalancer; instead it should be placed in a sub-directory under 
> /system/diskbalancer/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318840#comment-15318840
 ] 

Hadoop QA commented on HDFS-10458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
15s {color} | {color:green} branch-2.6 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 44s 
{color} | {color:red} hadoop-hdfs in branch-2.6 failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 45s 
{color} | {color:red} hadoop-hdfs in branch-2.6 failed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} branch-2.6 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} branch-2.6 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} branch-2.6 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 59s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2.6 has 273 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} branch-2.6 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} branch-2.6 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 45s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4452 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 42s 
{color} | {color:red} The patch 74 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 37s 
{color} | {color:red} The patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:44eef0e |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808701/HDFS-10458-branch-2.6.01.patch
 |
|

[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318834#comment-15318834
 ] 

Hadoop QA commented on HDFS-10458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
45s {color} | {color:green} branch-2.6 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 45s 
{color} | {color:red} hadoop-hdfs in branch-2.6 failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 46s 
{color} | {color:red} hadoop-hdfs in branch-2.6 failed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} branch-2.6 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} branch-2.6 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} branch-2.6 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 52s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2.6 has 273 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} branch-2.6 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} branch-2.6 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 45s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4452 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 40s 
{color} | {color:red} The patch 74 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 44s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 30s 
{color} | {color:red} The patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 10s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:44eef0e |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808701/HDFS-10458-branch-2.6.01.patch
 |
| 

[jira] [Reopened] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-06-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer reopened HDFS-9890:
---

 Ended up having merge issues. "git apply -3" worked fine but some of the 
changes weren't compatible with the current codebase.

I've reverted the change on HDFS-8707 and will work on getting a good rebase 
posted.

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch, 
> HDFS-9890.HDFS-8707.007.patch, hs_err_pid26832.log, hs_err_pid4944.log
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests. The gmock tests are only as good as their mock implementations, which 
> do a great job of simulating protocol correctness but not of exercising more 
> complex interactions. They also can't really simulate the kinds of lock 
> contention and subtle memory stomps that show up while doing hundreds or 
> thousands of concurrent reads. We should add a new minidfscluster test that 
> focuses on heavy read/seek load and then randomly converts error codes 
> returned by network functions into errors.
> List of things to simulate(while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect
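
The fault-injection idea can be sketched generically as below; the real suite 
targets libhdfs++ (C++), so this Java snippet with hypothetical names is only 
meant to show the shape of the technique: wrap the transport and randomly turn 
calls into failures.

{code:java}
// Generic sketch of random error injection (hypothetical interface, not the
// libhdfs++ code): a deterministic seed keeps failing runs reproducible.
import java.io.IOException;
import java.util.Random;

interface Transport {
  byte[] readPacket() throws IOException;
}

class FaultInjectingTransport implements Transport {
  private final Transport delegate;
  private final Random rng;
  private final double failureRate;   // e.g. 0.01 == fail roughly 1% of calls

  FaultInjectingTransport(Transport delegate, double failureRate, long seed) {
    this.delegate = delegate;
    this.failureRate = failureRate;
    this.rng = new Random(seed);
  }

  @Override
  public byte[] readPacket() throws IOException {
    // Randomly convert an otherwise successful call into an error so the
    // retry and reconnect paths get exercised under load.
    if (rng.nextDouble() < failureRate) {
      throw new IOException("injected network failure");
    }
    return delegate.readPacket();
  }
}
{code}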



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318791#comment-15318791
 ] 

Zhe Zhang commented on HDFS-10458:
--

Committed to branch-2 and branch-2.8. Now only waiting for branch-2.6 Jenkins 
run.

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-10458-branch-2.00.patch, 
> HDFS-10458-branch-2.6.00.patch, HDFS-10458-branch-2.6.01.patch, 
> HDFS-10458-branch-2.7.00.patch, HDFS-10458.00.patch, HDFS-10458.03.patch, 
> HDFS-10458.04.patch, HDFS-10458.05.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks 
> if the path belongs to an EZ. For a busy system with potentially many listing 
> operations, this could cause locking contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to 
> return whether the system has any EZ. If no EZ at all, 
> {{getFileEncryptionInfo}} should return null without {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe it itself 
> doesn't need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, it means the path cannot be 
> encrypted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-07 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10458:
-
Attachment: HDFS-10458-branch-2.6.01.patch

I verified the branch-2 patch's Jenkins failures; they all pass locally. I will 
recommit to branch-2 and branch-2.8 shortly. Thanks again to [~leftnoteasy] for 
the note.

Attaching branch-2.6 patch again to trigger Jenkins.

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-10458-branch-2.00.patch, 
> HDFS-10458-branch-2.6.00.patch, HDFS-10458-branch-2.6.01.patch, 
> HDFS-10458-branch-2.7.00.patch, HDFS-10458.00.patch, HDFS-10458.03.patch, 
> HDFS-10458.04.patch, HDFS-10458.05.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks 
> if the path belongs to an EZ. For a busy system with potentially many listing 
> operations, this could cause locking contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to 
> return whether the system has any EZ. If no EZ at all, 
> {{getFileEncryptionInfo}} should return null without {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe it itself 
> doesn't need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, it means the path cannot be 
> encrypted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9580) TestComputeInvalidateWork#testDatanodeReRegistration failed due to unexpected number of invalidate blocks.

2016-06-07 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HDFS-9580:
-
Fix Version/s: (was: 3.0.0-alpha1)
   2.8.0

Thanks, [~jojochuang]!  I committed this to branch-2 and branch-2.8 as well.

> TestComputeInvalidateWork#testDatanodeReRegistration failed due to unexpected 
> number of invalidate blocks.
> --
>
> Key: HDFS-9580
> URL: https://issues.apache.org/jira/browse/HDFS-9580
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode, test
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-9580.001.patch
>
>
> The failure appeared in the trunk jenkins job.
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2646/
> {noformat}
> Error Message
> Expected invalidate blocks to be the number of DNs expected:<3> but was:<2>
> Stacktrace
> java.lang.AssertionError: Expected invalidate blocks to be the number of DNs 
> expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork.testDatanodeReRegistration(TestComputeInvalidateWork.java:160)
> {noformat}
> I think there could be a race condition between creating a file and shutting 
> down data nodes, which caused the test to fail.
> {noformat}
> 2015-12-19 07:11:02,765 [PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=LAST_IN_PIPELINE, downstreams=0:[]] INFO  datanode.DataNode 
> (BlockReceiver.java:run(1404)) - PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-12-19 07:11:02,768 [PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO  DataNode.clienttrace 
> (BlockReceiver.java:finalizeBlock(1431)) - src: /127.0.0.1:45655, dest: 
> /127.0.0.1:54890, bytes: 134217728, op: HDFS_WRITE, cliID: 
> DFSClient_NONMAPREDUCE_147911011_935, offset: 0, srvID: 
> 6a13ec05-e1c1-4086-8a4d-d5a09636afcd, blockid: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, duration: 
> 954174423
> 2015-12-19 07:11:02,768 [PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO  datanode.DataNode 
> (BlockReceiver.java:run(1404)) - PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-12-19 07:11:02,772 [PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO  DataNode.clienttrace 
> (BlockReceiver.java:finalizeBlock(1431)) - src: /127.0.0.1:33252, dest: 
> /127.0.0.1:54426, bytes: 134217728, op: HDFS_WRITE, cliID: 
> DFSClient_NONMAPREDUCE_147911011_935, offset: 0, srvID: 
> d81751db-02a9-48fe-b697-77623048784b, blockid: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, duration: 
> 957463510
> 2015-12-19 07:11:02,772 [PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO  datanode.DataNode 
> (BlockReceiver.java:run(1404)) - PacketResponder: 
> BP-1551077294-67.195.81.149-1450509060247:blk_1073741825_1001, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-12-19 07:11:02,782 [IPC Server handler 4 on 36404] INFO  
> blockmanagement.BlockManager 
> (BlockManager.java:checkBlocksProperlyReplicated(3871)) - BLOCK* 
> blk_1073741825_1001 is not COMPLETE (ucState = COMMITTED, replication# = 0 <  
> minimum = 1) in file /testRR
> 2015-12-19 07:11:02,783 [IPC Server handler 4 on 36404] INFO  
> namenode.EditLogFileOutputStream 
> (EditLogFileOutputStream.java:flushAndSync(200)) - Nothing to flush
> 2015-12-19 07:11:02,783 [IPC Server handler 4 on 36404] INFO  
> namenode.EditLogFileOutputStream 
> (EditLogFileOutputStream.java:flushAndSync(200)) - Nothing to flush
> 2015-12-19 07:11:03,190 [IPC Server handler 8 on 36404] INFO  
> hdfs.StateChange (FSNamesystem.java:completeFile(2557)) - DIR* completeFile: 
> /testRR is closed by DFSClient_NONMAPREDUCE_147911011_935
> {noformat}
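
One common way to avoid this kind of race in MiniDFSCluster tests is to wait 
for the expected replication before touching the datanodes. The sketch below 
shows that pattern; it is not necessarily what HDFS-9580.001.patch does.

{code:java}
// Sketch of waiting for full replication before proceeding (test-only
// pattern; not necessarily the fix in the attached patch).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class ReplicationWaitExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    try {
      cluster.waitActive();
      DistributedFileSystem fs = cluster.getFileSystem();
      Path file = new Path("/testRR");
      DFSTestUtil.createFile(fs, file, 1024L, (short) 3, 0L);
      // Block until all 3 replicas are reported; shutting datanodes down
      // before this point can race with the completeFile call seen above.
      DFSTestUtil.waitReplication(fs, file, (short) 3);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}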



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-07 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10493:
--
Assignee: Weiwei Yang

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, NN_DN_Links.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to datanode UI, it will be good to provide 
> links from namenode datanodes information page to individual datanode UI, to 
> check more datanode information easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


