[jira] [Commented] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155935#comment-15155935
 ] 

Hudson commented on HDFS-9839:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9335 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9335/])
HDFS-9839. Reduce verbosity of processReport logging. (Contributed by Arpit 
Agarwal) (arp: rev d5abd293a890a8a1da48a166a291ae1c5644ad57)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9839.01.patch
>
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
>     "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.
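The proposed change can be sketched as the following self-contained example, with the per-block line demoted to DEBUG and a single INFO summary. The `Block` class and the counting logger here are hypothetical stand-ins, not the real HDFS types or the SLF4J logger the NameNode uses:

```java
import java.util.Arrays;
import java.util.List;

public class ProcessReportLoggingSketch {
    // Hypothetical stand-in for org.apache.hadoop.hdfs.protocol.Block.
    static class Block {
        final long id, numBytes;
        Block(long id, long numBytes) { this.id = id; this.numBytes = numBytes; }
        long getNumBytes() { return numBytes; }
        @Override public String toString() { return "blk_" + id; }
    }

    // Counting logger stand-in; DEBUG is disabled, as on a typical production NameNode.
    static final boolean DEBUG_ENABLED = false;
    static int debugLines = 0, infoLines = 0;
    static void logDebug(String msg) { if (DEBUG_ENABLED) debugLines++; }
    static void logInfo(String msg)  { infoLines++; }

    static void reportInvalidated(List<Block> invalidatedBlocks, String node) {
        // Per-block detail moved from INFO to DEBUG.
        for (Block b : invalidatedBlocks) {
            logDebug("BLOCK* processReport: " + b + " on node " + node
                + " size " + b.getNumBytes() + " does not belong to any file");
        }
        // One INFO summary replaces N per-block INFO lines.
        logInfo("BLOCK* processReport: " + invalidatedBlocks.size()
            + " blocks on node " + node + " do not belong to any file");
    }

    public static void main(String[] args) {
        List<Block> blocks = Arrays.asList(
            new Block(1, 128), new Block(2, 256), new Block(3, 512));
        reportInvalidated(blocks, "dn1:50010");
        System.out.println("info=" + infoLines + " debug=" + debugLines);
    }
}
```

With DEBUG off, three invalidated blocks produce one log line instead of three, which is the whole point of the change when invalidation counts run into the millions.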



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9746) Some Kerberos related tests intermittently fail.

2016-02-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155932#comment-15155932
 ] 

Daniel Templeton commented on HDFS-9746:


Looks like it might be HADOOP-12090.

> Some Kerberos related tests intermittently fail.
> 
>
> Key: HDFS-9746
> URL: https://issues.apache.org/jira/browse/HDFS-9746
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> So far I've seen {{TestSecureNNWithQJM#testSecureMode}} and 
> {{TestKMS#testACLs}} failing. More details coming in the 1st comment.





[jira] [Commented] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155925#comment-15155925
 ] 

ASF GitHub Bot commented on HDFS-9839:
--

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/78


> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9839.01.patch
>
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
>     "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.





[jira] [Updated] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9839:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 2.8.0)
  Status: Resolved  (was: Patch Available)

Thank you for the review [~xyao].

I committed this to trunk, branch-2 and branch-2.8.

> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9839.01.patch
>
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
>     "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.





[jira] [Commented] (HDFS-7452) Can we skip getCorruptFiles() call for standby NameNode..?

2016-02-20 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155897#comment-15155897
 ] 

Brahma Reddy Battula commented on HDFS-7452:


Uploaded the patch to fix the checkstyle comment.

> Can we skip getCorruptFiles() call for standby NameNode..?
> --
>
> Key: HDFS-7452
> URL: https://issues.apache.org/jira/browse/HDFS-7452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-7452-002.patch, HDFS-7452.patch
>
>
> Seen the following WARN logs in the standby NameNode logs:
> {noformat}
> 2014-11-27 17:50:32,497 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:42,557 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:52,617 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,117 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:02,678 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:12,738 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:22,798 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,119 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> {noformat}
> Do we need to make this call for the standby NameNode? I feel it might not be 
> required. Can we handle this based on the NameNode's HA state? Please let me 
> know if I am wrong.
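The suggestion above can be sketched as a guard on the NameNode's HA state: skip the corrupt-files lookup entirely on the standby instead of attempting it and logging a WARN on every refresh. The `HAState` enum and method below are illustrative stand-ins, not the actual FSNamesystem API:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CorruptFilesGuardSketch {
    // Hypothetical HA state; the real NameNode tracks this via its HAContext.
    enum HAState { ACTIVE, STANDBY }

    // Hypothetical guard: on the standby, return an empty result rather than
    // running a READ-category operation that is rejected in standby state.
    static List<String> listCorruptFileBlocks(HAState state) {
        if (state == HAState.STANDBY) {
            return Collections.emptyList();  // skip the call entirely on the SNN
        }
        // On the active NameNode the real lookup would run here; a fixed
        // placeholder result stands in for it.
        return Arrays.asList("/example/corrupt-file");
    }

    public static void main(String[] args) {
        System.out.println("standby=" + listCorruptFileBlocks(HAState.STANDBY).size()
            + " active=" + listCorruptFileBlocks(HAState.ACTIVE).size());
    }
}
```

With a guard like this, the standby web UI simply shows no corrupt-file data instead of generating the repeated "Operation category READ is not supported in state standby" WARN lines quoted above.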





[jira] [Updated] (HDFS-7452) Can we skip getCorruptFiles() call for standby NameNode..?

2016-02-20 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7452:
---
Priority: Minor  (was: Trivial)

> Can we skip getCorruptFiles() call for standby NameNode..?
> --
>
> Key: HDFS-7452
> URL: https://issues.apache.org/jira/browse/HDFS-7452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-7452-002.patch, HDFS-7452.patch
>
>
> Seen the following WARN logs in the standby NameNode logs:
> {noformat}
> 2014-11-27 17:50:32,497 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:42,557 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:52,617 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,117 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:02,678 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:12,738 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:22,798 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,119 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> {noformat}
> Do we need to make this call for the standby NameNode? I feel it might not be 
> required. Can we handle this based on the NameNode's HA state? Please let me 
> know if I am wrong.





[jira] [Updated] (HDFS-7452) Can we skip getCorruptFiles() call for standby NameNode..?

2016-02-20 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7452:
---
Attachment: HDFS-7452-002.patch

> Can we skip getCorruptFiles() call for standby NameNode..?
> --
>
> Key: HDFS-7452
> URL: https://issues.apache.org/jira/browse/HDFS-7452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Trivial
> Attachments: HDFS-7452-002.patch, HDFS-7452.patch
>
>
> Seen the following WARN logs in the standby NameNode logs:
> {noformat}
> 2014-11-27 17:50:32,497 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:42,557 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:52,617 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,117 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:02,678 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:12,738 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:22,798 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,119 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> {noformat}
> Do we need to make this call for the standby NameNode? I feel it might not be 
> required. Can we handle this based on the NameNode's HA state? Please let me 
> know if I am wrong.





[jira] [Commented] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155858#comment-15155858
 ] 

Xiaoyu Yao commented on HDFS-9839:
--

Patch LGTM. +1.

> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9839.01.patch
>
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
>     "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.





[jira] [Commented] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155803#comment-15155803
 ] 

Hadoop QA commented on HDFS-9839:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 151 unchanged - 1 fixed = 151 total (was 152) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 59s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 21s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.tracing.TestTracing |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.mover.TestStorageMover |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | 

[jira] [Updated] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9839:

Attachment: HDFS-9839.01.patch

> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9839.01.patch
>
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
>     "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.





[jira] [Updated] (HDFS-9836) RequestHedgingInvocationHandler can't be cast to org.apache.hadoop.ipc.RpcInvocationHandler

2016-02-20 Thread Guocui Mi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guocui Mi updated HDFS-9836:

Fix Version/s: (was: 3.0.0)

> RequestHedgingInvocationHandler can't be cast to 
> org.apache.hadoop.ipc.RpcInvocationHandler
> ---
>
> Key: HDFS-9836
> URL: https://issues.apache.org/jira/browse/HDFS-9836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.8.0
>Reporter: Guocui Mi
>Assignee: Guocui Mi
> Attachments: HDFS-9836-000.patch, HDFS-9836-001.patch
>
>
> RequestHedgingInvocationHandler cannot be cast to 
> org.apache.hadoop.ipc.RpcInvocationHandler
> Steps to reproduce:
> 1: Set the client failover proxy provider to RequestHedgingProxyProvider:
> <property>
>   <name>dfs.client.failover.proxy.provider.[nameservice]</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider</value>
> </property>
> 2: Run {{hdfs fsck /}}; it fails with the following exception:
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:613)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:281)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:615)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:598)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:380)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:248)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:255)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:148)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:145)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:144)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:360)
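The mechanism behind the stack trace above can be demonstrated in isolation: `Proxy.getInvocationHandler` returns whatever handler the proxy was built with, and a downcast like the one in `RPC.getConnectionIdForProxy` only succeeds if that handler implements the expected interface. The interfaces and classes below are illustrative stand-ins, not Hadoop's actual types:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class InvocationHandlerCastSketch {
    // Stand-in for org.apache.hadoop.ipc.RpcInvocationHandler (hypothetical).
    interface RpcInvocationHandler extends InvocationHandler { }

    // A handler that, like RequestHedgingInvocationHandler, implements only
    // the plain InvocationHandler interface.
    static class PlainHandler implements InvocationHandler {
        @Override public Object invoke(Object proxy, Method m, Object[] args) { return null; }
    }

    interface Service { }

    public static void main(String[] args) {
        Service svc = (Service) Proxy.newProxyInstance(
            Service.class.getClassLoader(), new Class<?>[]{Service.class},
            new PlainHandler());
        InvocationHandler h = Proxy.getInvocationHandler(svc);
        // Mirrors the failing cast: it only works when the handler actually
        // implements the richer interface, which PlainHandler does not.
        System.out.println("castable=" + (h instanceof RpcInvocationHandler));
    }
}
```

This prints `castable=false`, which is exactly why the hard cast in the RPC layer throws ClassCastException when the hedging provider's handler is in play.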





[jira] [Updated] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9839:

Status: Patch Available  (was: Open)

> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
>     "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.





[jira] [Commented] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155732#comment-15155732
 ] 

ASF GitHub Bot commented on HDFS-9839:
--

GitHub user arp7 opened a pull request:

https://github.com/apache/hadoop/pull/78

HDFS-9839. Reduce verbosity of processReport logging



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arp7/hadoop HDFS-9839

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/78.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #78


commit 8b23b41ada23168fe2cb71f4a3b920c68e66ee74
Author: Arpit Agarwal 
Date:   2016-02-20T18:43:14Z

HDFS-9839. Reduce verbosity of processReport logging




> Reduce verbosity of processReport logging
> -
>
> Key: HDFS-9839
> URL: https://issues.apache.org/jira/browse/HDFS-9839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> {{BlockManager#processReport}} logs one line for each invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
> still see the NameNode being slowed down when the number of block 
> invalidations is very large e.g. just after a large amount of data is deleted.
> {code}
>   for (Block b : invalidatedBlocks) {
> blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
>     "belong to any file", b, node, b.getNumBytes());
>   }
> {code}
> We can change this statement to DEBUG and just log the number of block 
> invalidations at INFO.





[jira] [Created] (HDFS-9839) Reduce verbosity of processReport logging

2016-02-20 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-9839:
---

 Summary: Reduce verbosity of processReport logging
 Key: HDFS-9839
 URL: https://issues.apache.org/jira/browse/HDFS-9839
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


{{BlockManager#processReport}} logs one line for each invalidated block at 
INFO. HDFS-7503 moved this logging outside the NameSystem write lock but we 
still see the NameNode being slowed down when the number of block invalidations 
is very large e.g. just after a large amount of data is deleted.

{code}
  for (Block b : invalidatedBlocks) {
blockLog.info("BLOCK* processReport: {} on node {} size {} does not " +
"belong to any file", b, node, b.getNumBytes());
  }
{code}

We can change this statement to DEBUG and just log the number of block 
invalidations at INFO.





[jira] [Updated] (HDFS-9640) Remove hsftp from DistCp in trunk

2016-02-20 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9640:
--
Attachment: HDFS-9640.002.patch

Rev02: (1) fixed checkstyle issues; (2) rebased and fixed conflicts.

> Remove hsftp from DistCp in trunk
> -
>
> Key: HDFS-9640
> URL: https://issues.apache.org/jira/browse/HDFS-9640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9640.001.patch, HDFS-9640.002.patch
>
>
> Per discussion in HDFS-9638: after HDFS-5570, hftp/hsftp were removed from 
> Hadoop 3.0.0, but DistCp still references hsftp via the -mapredSslConf 
> parameter. This parameter is useless after Hadoop 3.0.0; it should therefore 
> be removed and the change documented.
> This JIRA is intended to track the status of the code/docs change involving 
> the removal of hsftp in DistCp.


