[jira] [Commented] (HDFS-9350) Avoid creating temporary strings in Block.toString() and getBlockName()

2015-11-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985387#comment-14985387
 ] 

Daryn Sharp commented on HDFS-9350:
---

I know SB should be used to append inside a loop, but doesn't the JVM use a SB 
internally for simple plus operations?  If so, explicit use of a SB only seems 
to decrease readability.
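
For reference, a minimal standalone sketch (not from the patch) of what javac 
emits for simple concatenation versus an explicit StringBuilder chain:

{code}
// Minimal standalone sketch (not from the patch): javac already desugars
// simple '+' concatenation into a StringBuilder chain.
public class ConcatDemo {
  static String plus(long id, long genStamp) {
    // Compiled to: new StringBuilder().append("blk_").append(id)
    //     .append("_").append(genStamp).toString()
    return "blk_" + id + "_" + genStamp;
  }

  static String explicit(long id, long genStamp) {
    // Behaviorally identical to the method above.
    return new StringBuilder().append("blk_").append(id)
        .append('_').append(genStamp).toString();
  }

  public static void main(String[] args) {
    System.out.println(plus(1073741825L, 1001L));      // blk_1073741825_1001
    System.out.println(explicit(1073741825L, 1001L));  // blk_1073741825_1001
  }
}
{code}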

> Avoid creating temporary strings in Block.toString() and getBlockName()
> 
>
> Key: HDFS-9350
> URL: https://issues.apache.org/jira/browse/HDFS-9350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
>Priority: Minor
> Attachments: HDFS-9350.001.patch
>
>
> Minor change to use StringBuilders directly to avoid creating temporary 
> strings of Long and Block name when doing toString on a Block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9289) Make DataStreamer#block thread safe and verify genStamp in commitBlock

2015-11-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985415#comment-14985415
 ] 

Daryn Sharp commented on HDFS-9289:
---

I'd rather see the InvalidGenStampException instead of a generic IOE.  Else 
it's hard for client code to intelligently deal with exceptions and for tests 
to verify that the correct/expected IOE was thrown.
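
A hedged sketch of the suggestion (the exception name comes from the comment 
above; the commit logic is illustrative, not the actual patch): a dedicated 
IOException subtype lets client code and tests catch exactly the failure they 
expect.

{code}
import java.io.IOException;

// Illustrative subtype; a dedicated class makes the failure catchable.
class InvalidGenStampException extends IOException {
  InvalidGenStampException(String msg) { super(msg); }
}

public class CommitDemo {
  static void commitBlock(long expectedGenStamp, long reportedGenStamp)
      throws IOException {
    if (expectedGenStamp != reportedGenStamp) {
      throw new InvalidGenStampException("genstamp mismatch: expected "
          + expectedGenStamp + ", got " + reportedGenStamp);
    }
  }

  public static void main(String[] args) {
    try {
      commitBlock(1001L, 1000L);
    } catch (InvalidGenStampException e) { // precise catch, easy to assert in tests
      System.out.println("caught: " + e.getMessage());
    } catch (IOException e) {              // a generic IOE would land here
      System.out.println("generic: " + e.getMessage());
    }
  }
}
{code}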

> Make DataStreamer#block thread safe and verify genStamp in commitBlock
> --
>
> Key: HDFS-9289
> URL: https://issues.apache.org/jira/browse/HDFS-9289
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
>Priority: Critical
> Attachments: HDFS-9289.1.patch, HDFS-9289.2.patch, HDFS-9289.3.patch, 
> HDFS-9289.4.patch, HDFS-9289.5.patch, HDFS-9289.6.patch
>
>
> We have seen a case of a corrupt block caused by a file complete after a 
> pipelineUpdate, where the file completed with the old block genStamp. This 
> caused the replicas of two datanodes in the updated pipeline to be viewed as 
> corrupt. Propose to check the genstamp when committing the block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9289) Make DataStreamer#block thread safe and verify genStamp in commitBlock

2015-11-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985415#comment-14985415
 ] 

Daryn Sharp edited comment on HDFS-9289 at 11/2/15 3:49 PM:


I'd rather see the InvalidGenStampException instead of a generic IOE.  Else 
it's hard for client code to intelligently deal with exceptions and for tests 
to verify that the correct/expected IOE was thrown.


was (Author: daryn):
I'd rather see the InvalidGenStampException instead of a generic IOE.  Else 
it's hard for client code to intelligently deal with exceptions and for tests 
to verify that the correct/expected IOE.

> Make DataStreamer#block thread safe and verify genStamp in commitBlock
> --
>
> Key: HDFS-9289
> URL: https://issues.apache.org/jira/browse/HDFS-9289
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
>Priority: Critical
> Attachments: HDFS-9289.1.patch, HDFS-9289.2.patch, HDFS-9289.3.patch, 
> HDFS-9289.4.patch, HDFS-9289.5.patch, HDFS-9289.6.patch
>
>
> We have seen a case of a corrupt block caused by a file complete after a 
> pipelineUpdate, where the file completed with the old block genStamp. This 
> caused the replicas of two datanodes in the updated pipeline to be viewed as 
> corrupt. Propose to check the genstamp when committing the block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9358) TestNodeCount#testNodeCount timed out

2015-11-02 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9358:
-

 Summary: TestNodeCount#testNodeCount timed out
 Key: HDFS-9358
 URL: https://issues.apache.org/jira/browse/HDFS-9358
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


I have seen this test failure occur a few times in trunk:

Error Message

Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 
after 2 msec.  Last counts: live = 2, excess = 0, corrupt = 0

Stacktrace

java.util.concurrent.TimeoutException: Timeout: excess replica count not equal 
to 2 for block blk_1073741825_1001 after 2 msec.  Last counts: live = 2, 
excess = 0, corrupt = 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:152)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.__CLR4_0_39bdgm666uf(TestNodeCount.java:130)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:54)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9221) HdfsServerConstants#ReplicaState#getState should avoid calling values() since it creates a temporary array

2015-11-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985361#comment-14985361
 ] 

Kihwal Lee commented on HDFS-9221:
--

Cherry-picked to branch-2.7.

> HdfsServerConstants#ReplicaState#getState should avoid calling values() since 
> it creates a temporary array
> --
>
> Key: HDFS-9221
> URL: https://issues.apache.org/jira/browse/HDFS-9221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
> Fix For: 3.0.0, 2.7.2
>
> Attachments: HADOOP-9221.001.patch
>
>
> When the BufferDecoder in BlockListAsLongs converts the stored value to a 
> ReplicaState enum, it calls ReplicaState.getState(int). Unfortunately this 
> method creates a ReplicaState[] for each call since it calls 
> ReplicaState.values().
> This patch creates a cached version of the values and thus avoids all 
> allocation when doing the conversion.
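
A simplified, self-contained sketch of the caching pattern the description 
refers to (not the patch itself; the constants mirror ReplicaState's):

{code}
// Enum.values() clones the backing array on every call, so cache one copy.
public class ReplicaStateDemo {
  enum ReplicaState {
    FINALIZED, RBW, RWR, RUR, TEMPORARY;

    private static final ReplicaState[] CACHED_VALUES = values(); // one clone

    static ReplicaState getState(int v) {
      return CACHED_VALUES[v]; // no per-call array allocation
    }
  }

  public static void main(String[] args) {
    System.out.println(ReplicaState.getState(1)); // prints RBW
  }
}
{code}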



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9359) Enable libhdfs++ to use existing libhdfs CLI tests

2015-11-02 Thread James Clampffer (JIRA)
James Clampffer created HDFS-9359:
-

 Summary: Enable libhdfs++ to use existing libhdfs CLI tests
 Key: HDFS-9359
 URL: https://issues.apache.org/jira/browse/HDFS-9359
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9221) HdfsServerConstants#ReplicaState#getState should avoid calling values() since it creates a temporary array

2015-11-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985353#comment-14985353
 ] 

Daryn Sharp commented on HDFS-9221:
---

+1 on 2.7.2.  It's a zero-risk change with plenty of benefit.

> HdfsServerConstants#ReplicaState#getState should avoid calling values() since 
> it creates a temporary array
> --
>
> Key: HDFS-9221
> URL: https://issues.apache.org/jira/browse/HDFS-9221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
> Fix For: 2.8.0
>
> Attachments: HADOOP-9221.001.patch
>
>
> When the BufferDecoder in BlockListAsLongs converts the stored value to a 
> ReplicaState enum, it calls ReplicaState.getState(int). Unfortunately this 
> method creates a ReplicaState[] for each call since it calls 
> ReplicaState.values().
> This patch creates a cached version of the values and thus avoids all 
> allocation when doing the conversion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9320) libhdfspp should not use sizeof for stream parsing

2015-11-02 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9320:
--
Attachment: HDFS-9320.HDFS-8707.001.patch

Changed sizes in bytes to sizeof(int16_t) or sizeof(int32_t).

> libhdfspp should not use sizeof for stream parsing
> --
>
> Key: HDFS-9320
> URL: https://issues.apache.org/jira/browse/HDFS-9320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-9320.HDFS-8707.000.patch, 
> HDFS-9320.HDFS-8707.001.patch
>
>
> In a few places, we're using sizeof(int) and sizeof(short) to determine where 
> in the received buffers we should be looking for data.  Those values are 
> compiler- and platform-dependent.  We should use specified sizes, or at least 
> sizeof(int32_t).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2015-11-02 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9353:
---
Assignee: Nicole Pazmany

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Nicole Pazmany
>Priority: Trivial
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> the comment says
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> but the code takes the value from the ENV first.
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment.
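
A self-contained sketch of the env-first order the description observes (the 
conf key name here is an illustrative assumption, not copied from the class):

{code}
import java.util.Properties;

public class PasswordOrderDemo {
  // Env first, conf second -- the reverse of what the quoted comment says.
  static String resolvePassword(String envValue, Properties conf) {
    if (envValue != null) {
      return envValue;                                  // environment wins
    }
    return conf.getProperty("keystore.password-file");  // fall back to conf
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty("keystore.password-file", "from-conf");
    System.out.println(resolvePassword("from-env", conf)); // from-env
    System.out.println(resolvePassword(null, conf));       // from-conf
  }
}
{code}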



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-11-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8777:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Rakesh! +1 on the latest patch. I just committed to trunk.

> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Jing Zhao
>Assignee: Rakesh R
>  Labels: test
> Fix For: 3.0.0
>
> Attachments: HDFS-8777-01.patch, HDFS-8777-02.patch, 
> HDFS-8777-03.patch, HDFS-8777-HDFS-7285-00.patch, HDFS-8777-HDFS-7285-01.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify the 
> fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9350) Avoid creating temporary strings in Block.toString() and getBlockName()

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985487#comment-14985487
 ] 

Hadoop QA commented on HDFS-9350:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 38s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
43s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 30s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-02 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12769869/HDFS-9350.001.patch |
| JIRA Issue | HDFS-9350 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 80bb0c86f4c9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-e77b1ce/precommit/personality/hadoop.sh
 |
| git revision | trunk / 90e1405 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 

[jira] [Commented] (HDFS-9260) Improve performance and GC friendliness of startup and FBRs

2015-11-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985524#comment-14985524
 ] 

Daryn Sharp commented on HDFS-9260:
---

I'll try to review the patch today.  I've only skimmed the comments, and it's a 
big change.  My initial questions:
# Performance.  What impact does it have on FBRs, especially startup?
# Time to initialize the replication queue.
# Time to decommission.
# Does memory usage increase or decrease?

> Improve performance and GC friendliness of startup and FBRs
> ---
>
> Key: HDFS-9260
> URL: https://issues.apache.org/jira/browse/HDFS-9260
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
> Attachments: HDFS Block and Replica Management 20151013.pdf, 
> HDFS-7435.001.patch, HDFS-7435.002.patch, HDFS-7435.003.patch, 
> HDFS-7435.004.patch, HDFS-7435.005.patch, HDFS-7435.006.patch, 
> HDFS-7435.007.patch, HDFS-9260.008.patch, HDFS-9260.009.patch
>
>
> This patch changes the data structures used for BlockInfos and Replicas to 
> keep them sorted. This allows faster and more GC-friendly handling of full 
> block reports.
> Would like to hear people's feedback on this change, and also get some help 
> investigating/understanding a few outstanding issues if we are interested in 
> moving forward with this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9360) Storage type usage isn't updated properly after file deletion

2015-11-02 Thread Ming Ma (JIRA)
Ming Ma created HDFS-9360:
-

 Summary: Storage type usage isn't updated properly after file 
deletion
 Key: HDFS-9360
 URL: https://issues.apache.org/jira/browse/HDFS-9360
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma


For a directory that doesn't have any storage policy defined, its quota usage 
is deducted when a file is deleted. This results in an incorrect value for 
storage quota usage. Later, when applications set the storage type, they can 
exceed the storage quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9360) Storage type usage isn't updated properly after file deletion

2015-11-02 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9360:
--
Attachment: HDFS-9360.patch

The issue is in {{computeQuotaUsage}}, where {{bsps.getPolicy}} returns the 
default storage policy instead of a null pointer. Here is the patch with a unit 
test that verifies the fix. [~xyao] or others, can you please take a look?
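
To illustrate the failure mode (all names hypothetical, not HDFS internals): a 
lookup that silently substitutes a default makes "no policy set" 
indistinguishable from an explicitly set policy, so callers cannot skip the 
storage-type accounting.

{code}
import java.util.HashMap;
import java.util.Map;

public class PolicyLookupDemo {
  static final String DEFAULT_POLICY = "HOT";
  static final Map<String, String> POLICIES = new HashMap<>();

  // Buggy shape: always falls back to the default.
  static String getPolicy(String dir) {
    return POLICIES.getOrDefault(dir, DEFAULT_POLICY);
  }

  // Fixed shape: return null when nothing was explicitly set.
  static String getExplicitPolicy(String dir) {
    return POLICIES.get(dir);
  }

  public static void main(String[] args) {
    POLICIES.put("/cold", "ARCHIVE");
    System.out.println(getPolicy("/user"));         // HOT (misleading)
    System.out.println(getExplicitPolicy("/user")); // null (skip accounting)
  }
}
{code}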

> Storage type usage isn't updated properly after file deletion
> -
>
> Key: HDFS-9360
> URL: https://issues.apache.org/jira/browse/HDFS-9360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
> Attachments: HDFS-9360.patch
>
>
> For a directory that doesn't have any storage policy defined, its quota usage 
> is deducted when a file is deleted. This results in an incorrect value for 
> storage quota usage. Later, when applications set the storage type, they can 
> exceed the storage quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8898) Create API and command-line argument to get quota without need to get file and directory counts

2015-11-02 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-8898:
--
Attachment: HDFS-8898-2.patch

Here is the updated patch with new unit tests. Another change in the updated 
patch is to have {{ContentSummary}} reuse the {{QuotaUsage}} structure, given 
they have lots of overlap. I have manually verified that existing applications 
compiled with the old ContentSummary will still work with the new binary 
without recompilation. The protobuf definitions for these two structures remain 
separate.

> Create API and command-line argument to get quota without need to get file 
> and directory counts
> ---
>
> Key: HDFS-8898
> URL: https://issues.apache.org/jira/browse/HDFS-8898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Joep Rottinghuis
> Attachments: HDFS-8898-2.patch, HDFS-8898.patch
>
>
> On large directory structures it takes significant time to iterate through 
> the file and directory counts recursively to get a complete ContentSummary.
> When you just want to check the quota on a higher-level directory, it would 
> be good to have an option to skip the file and directory counts.
> Moreover, currently you can only check the quota if you have access to all 
> the directories underneath. For example, if I have a large home directory 
> under /user/joep and I host some files for another user in a sub-directory, 
> the moment they create an unreadable sub-directory under my home I can no 
> longer check what my quota is. Understood that I cannot check the current 
> file counts unless I can iterate through all the usage, but for 
> administrative purposes it is nice to be able to get the current quota 
> setting on a directory without needing to iterate through and run into 
> permission issues on sub-directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8898) Create API and command-line argument to get quota without need to get file and directory counts

2015-11-02 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-8898:
--
Assignee: Ming Ma
  Status: Patch Available  (was: Open)

> Create API and command-line argument to get quota without need to get file 
> and directory counts
> ---
>
> Key: HDFS-8898
> URL: https://issues.apache.org/jira/browse/HDFS-8898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Joep Rottinghuis
>Assignee: Ming Ma
> Attachments: HDFS-8898-2.patch, HDFS-8898.patch
>
>
> On large directory structures it takes significant time to iterate through 
> the file and directory counts recursively to get a complete ContentSummary.
> When you just want to check the quota on a higher-level directory, it would 
> be good to have an option to skip the file and directory counts.
> Moreover, currently you can only check the quota if you have access to all 
> the directories underneath. For example, if I have a large home directory 
> under /user/joep and I host some files for another user in a sub-directory, 
> the moment they create an unreadable sub-directory under my home I can no 
> longer check what my quota is. Understood that I cannot check the current 
> file counts unless I can iterate through all the usage, but for 
> administrative purposes it is nice to be able to get the current quota 
> setting on a directory without needing to iterate through and run into 
> permission issues on sub-directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9360) Storage type usage isn't updated properly after file deletion

2015-11-02 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9360:
--
Description: For a directory that doesn't have any storage policy defined, 
its storage quota usage is deducted when a file is deleted (addBlock skips 
the storage quota usage update in that case). This results in a negative value 
for storage quota usage. Later, after applications set the storage policy and 
storage type quota, they can use more than their storage type quota.  
(was: For a directory that doesn't have any storage policy defined, its quota 
usage is deducted when a file is deleted. This results in an incorrect value 
for storage quota usage. Later, when applications set the storage type, they 
can exceed the storage quota.)

> Storage type usage isn't updated properly after file deletion
> -
>
> Key: HDFS-9360
> URL: https://issues.apache.org/jira/browse/HDFS-9360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9360.patch
>
>
> For a directory that doesn't have any storage policy defined, its storage 
> quota usage is deducted when a file is deleted (addBlock skips the storage 
> quota usage update in that case). This results in a negative value for 
> storage quota usage. Later, after applications set the storage policy and 
> storage type quota, they can use more than their storage type quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985664#comment-14985664
 ] 

Hudson commented on HDFS-8777:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1350 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1350/])
HDFS-8777. Erasure Coding: add tests for taking snapshots on EC files. (zhz: 
rev 90e14055168afdb93fa8089158c03a6a694e066c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshot.java


> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Jing Zhao
>Assignee: Rakesh R
>  Labels: test
> Fix For: 3.0.0
>
> Attachments: HDFS-8777-01.patch, HDFS-8777-02.patch, 
> HDFS-8777-03.patch, HDFS-8777-HDFS-7285-00.patch, HDFS-8777-HDFS-7285-01.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify the 
> fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9329) TestBootstrapStandby#testRateThrottling is flaky because fsimage size is smaller than IO buffer size

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985732#comment-14985732
 ] 

Hudson commented on HDFS-9329:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8743 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8743/])
HDFS-9329. TestBootstrapStandby#testRateThrottling is flaky because (zhz: rev 
259bea3b48de7469a500831efb3306e8464a2dc9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestBootstrapStandby#testRateThrottling is flaky because fsimage size is 
> smaller than IO buffer size
> 
>
> Key: HDFS-9329
> URL: https://issues.apache.org/jira/browse/HDFS-9329
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9329.00.patch, HDFS-9329.01.patch
>
>
> {{testRateThrottling}} verifies that bootstrap transfer should timeout with a 
> very small {{DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY}} value. However, 
> throttling on the image sender only happens after sending each IO buffer. 
> Therefore, the test sometimes fails if the receiver receives the full fsimage 
> (which is smaller than IO buffer size) before throttling begins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7984) webhdfs:// needs to support provided delegation tokens

2015-11-02 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim updated HDFS-7984:
-
Attachment: HDFS-7984.005.patch

> webhdfs:// needs to support provided delegation tokens
> --
>
> Key: HDFS-7984
> URL: https://issues.apache.org/jira/browse/HDFS-7984
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, HDFS-7984.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than webhdfs initialize its own.  
> This would allow for cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9313) Possible NullPointerException in BlockManager if no excess replica can be chosen

2015-11-02 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985638#comment-14985638
 ] 

Zhe Zhang commented on HDFS-9313:
-

Thanks Ming for explaining this. I agree {{break}} is the right logic here. +1 
on the latest patch.

> Possible NullPointerException in BlockManager if no excess replica can be 
> chosen
> 
>
> Key: HDFS-9313
> URL: https://issues.apache.org/jira/browse/HDFS-9313
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9313-2.patch, HDFS-9313.patch
>
>
> HDFS-8647 makes it easier to reason about various block placement scenarios. 
> Here is one possible case where BlockManager won't be able to find the excess 
> replica to delete: when the storage policy changes around the same time the 
> balancer moves the block. When this happens, it will cause a 
> NullPointerException.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:156)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseReplicasToDelete(BlockPlacementPolicyDefault.java:978)
> {noformat}
> Note that it wasn't found in any production clusters. Instead, it was found 
> by new unit tests. In addition, the issue existed before HDFS-8647.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9329) TestBootstrapStandby#testRateThrottling is flaky because fsimage size is smaller than IO buffer size

2015-11-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9329:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks Andrew and Xiao for the reviews! I just committed the patch to trunk and 
branch-2. 

While committing to branch-2 I had to update the patch, both to fix a conflict 
and because branch-2's version of {{testSuccessfulBaseCase}} cleans up the 
generated primary NN directory after each run.

> TestBootstrapStandby#testRateThrottling is flaky because fsimage size is 
> smaller than IO buffer size
> 
>
> Key: HDFS-9329
> URL: https://issues.apache.org/jira/browse/HDFS-9329
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9329.00.patch, HDFS-9329.01.patch
>
>
> {{testRateThrottling}} verifies that bootstrap transfer should timeout with a 
> very small {{DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY}} value. However, 
> throttling on the image sender only happens after sending each IO buffer. 
> Therefore, the test sometimes fails if the receiver receives the full fsimage 
> (which is smaller than IO buffer size) before throttling begins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9320) libhdfspp should not use sizeof for stream parsing

2015-11-02 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985757#comment-14985757
 ] 

Haohui Mai commented on HDFS-9320:
--

{code}
+const unsigned int kSizeofJavaShort = sizeof(int16_t);
+const unsigned int kSizeofJavaInt = sizeof(int32_t);
+
{code}

I should have worded it more clearly. To avoid polluting the namespace, it's 
better to get rid of these two statements in the header files and inline them 
directly at the places that use them.

+1 once addressed.


> libhdfspp should not use sizeof for stream parsing
> --
>
> Key: HDFS-9320
> URL: https://issues.apache.org/jira/browse/HDFS-9320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-9320.HDFS-8707.000.patch, 
> HDFS-9320.HDFS-8707.001.patch
>
>
> In a few places, we're using sizeof(int) and sizeof(short) to determine where 
> in the received buffers we should be looking for data.  Those values are 
> compiler- and platform-dependent.  We should use specified sizes, or at least 
> sizeof(int32_t).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985715#comment-14985715
 ] 

Hudson commented on HDFS-8777:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #616 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/616/])
HDFS-8777. Erasure Coding: add tests for taking snapshots on EC files. (zhz: 
rev 90e14055168afdb93fa8089158c03a6a694e066c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshot.java


> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Jing Zhao
>Assignee: Rakesh R
>  Labels: test
> Fix For: 3.0.0
>
> Attachments: HDFS-8777-01.patch, HDFS-8777-02.patch, 
> HDFS-8777-03.patch, HDFS-8777-HDFS-7285-00.patch, HDFS-8777-HDFS-7285-01.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify the 
> fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9329) TestBootstrapStandby#testRateThrottling is flaky because fsimage size is smaller than IO buffer size

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985600#comment-14985600
 ] 

Hadoop QA commented on HDFS-9329:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 13s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 55s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 171m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
| JDK v1.7.0_79 Failed junit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.namenode.TestHostsFiles |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | 

[jira] [Commented] (HDFS-9360) Storage type usage isn't updated properly after file deletion

2015-11-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985720#comment-14985720
 ] 

Xiaoyu Yao commented on HDFS-9360:
--

Thanks [~mingma] for reporting the issue, investigating, and providing the 
patch that restores the logic in INodeFile#computeQuotaUsage() that existed 
before HDFS-7728. The change looks good to me. One nit: can you consolidate the 
new unit tests into a single case in TestQuotaByStorageType, so that all the 
storage type quota related tests are in one place?  +1 pending Jenkins.




> Storage type usage isn't updated properly after file deletion
> -
>
> Key: HDFS-9360
> URL: https://issues.apache.org/jira/browse/HDFS-9360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9360.patch
>
>
> For a directory that doesn't have any storage policy defined, its storage 
> quota usage is deducted when a file is deleted (addBlock skips the storage 
> quota usage update in that case). This results in a negative value for 
> storage quota usage. Later, after applications set the storage policy and 
> storage type quota, they can use more than their storage type quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9350) Avoid creating temporary strings in Block.toString() and getBlockName()

2015-11-02 Thread Staffan Friberg (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985738#comment-14985738
 ] 

Staffan Friberg commented on HDFS-9350:
---

It does, but the problem is that the call to Long.toString will actually 
allocate a new String to be appended, and the call to getBlockName sometimes 
(unless correctly inlined) will do the same. So you don't get a single append 
chain with a SB in this case.
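
A minimal sketch of the distinction (simplified; the actual change is in 
HDFS-9350.001.patch):

{code}
// StringBuilder.append(long) formats the number in place, while
// Long.toString(id) first allocates a temporary String.
public class BlockNameDemo {
  static final String BLOCK_FILE_PREFIX = "blk_";

  // Allocates: Long.toString(id) creates a temporary String to append.
  static String viaTemporary(long id) {
    return BLOCK_FILE_PREFIX + Long.toString(id);
  }

  // Appends the primitive directly; no intermediate String for the number.
  static StringBuilder appendTo(StringBuilder sb, long id, long genStamp) {
    return sb.append(BLOCK_FILE_PREFIX).append(id).append('_').append(genStamp);
  }

  public static void main(String[] args) {
    System.out.println(viaTemporary(1073741825L));
    System.out.println(appendTo(new StringBuilder(), 1073741825L, 1001L));
  }
}
{code}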

> Avoid creating temporary strings in Block.toString() and getBlockName()
> 
>
> Key: HDFS-9350
> URL: https://issues.apache.org/jira/browse/HDFS-9350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
>Priority: Minor
> Attachments: HDFS-9350.001.patch
>
>
> Minor change to use StringBuilders directly to avoid creating temporary 
> strings of Long and Block name when doing toString on a Block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9363) Add fetchReplica() to FsDatasetTestUtils()

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986751#comment-14986751
 ] 

Hadoop QA commented on HDFS-9363:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 52s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 21s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 55s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 177m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770222/HDFS-9363.001.patch |
| JIRA Issue | HDFS-9363 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 

[jira] [Commented] (HDFS-9361) Default block placement policy causes TestReplaceDataNodeOnFailure to fail intermittently

2015-11-02 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986821#comment-14986821
 ] 

Walter Su commented on HDFS-9361:
-

I think it's not an issue; it's caused by a defect in the test. I put my 
comment at HDFS-6101.

> Default block placement policy causes TestReplaceDataNodeOnFailure to fail 
> intermittently
> -
>
> Key: HDFS-9361
> URL: https://issues.apache.org/jira/browse/HDFS-9361
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Reporter: Wei-Chiu Chuang
>
> TestReplaceDatanodeOnFailure sometimes fails (see HDFS-6101).
> (For background information, the test case sets up a cluster with three data 
> nodes, adds two more data nodes, removes one data node, and verifies that 
> clients can correctly recover from the failure and set up three replicas.)
> I traced it down and found that sometimes a client sets up a pipeline with 
> only two data nodes, which is one less than configured in the test case, even 
> though the test case is configured to always replace failed nodes.
> Digging into the log, I saw:
> {noformat}
> 2015-11-02 12:07:38,634 [IPC Server handler 8 on 50673] WARN  
> blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseTarget(355)) - Failed to place enough 
> replicas, still in need of 1 to reach 3 (unavailableStorages=[], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>  [
> Node /rack0/127.0.0.1:32931 [
>   Datanode 127.0.0.1:32931 is not chosen since the rack has too many chosen 
> nodes .
> ]
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:723)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:624)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:429)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:342)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:220)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:105)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:120)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1727)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:299)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2457)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:796)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2305)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2301)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2299)
> {noformat}
> So from the log, it seems the policy causes the pipeline selection to give up 
> on the data node.
> I wonder whether this is appropriate or not. If the load factor exceeds a 
> certain threshold, but the file has insufficient replicas, should it accept 
> the block as is, or should it attempt to acquire more replicas? 
> I am filing this JIRA for discussion. I am very unfamiliar with block 
> placement, so I may be wrong about my hypothesis.
> (Edit: I turned on the DEBUG option for Log4j and changed the logging message 
> a bit to make it show the stack trace.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9313) Possible NullPointerException in BlockManager if no excess replica can be chosen

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986857#comment-14986857
 ] 

Hudson commented on HDFS-9313:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #631 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/631/])
HDFS-9313. Possible NullPointerException in BlockManager if no excess (mingma: 
rev d565480da2f646b40c3180e1ccb2935c9863dfef)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Possible NullPointerException in BlockManager if no excess replica can be 
> chosen
> 
>
> Key: HDFS-9313
> URL: https://issues.apache.org/jira/browse/HDFS-9313
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HDFS-9313-2.patch, HDFS-9313.patch
>
>
> HDFS-8647 makes it easier to reason about various block placement scenarios. 
> Here is one possible case where BlockManager won't be able to find the excess 
> replica to delete: when the storage policy changes around the same time the 
> balancer moves the block. When this happens, it will cause a 
> NullPointerException.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:156)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseReplicasToDelete(BlockPlacementPolicyDefault.java:978)
> {noformat}
> Note that it wasn't found in any production clusters. Instead, it was found 
> by new unit tests. In addition, the issue existed before HDFS-8647.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9364) Unnecessary DNS resolution attempts when NameNodeProxies

2015-11-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9364:

Status: Patch Available  (was: Open)

> Unnecessary DNS resolution attempts when NameNodeProxies
> 
>
> Key: HDFS-9364
> URL: https://issues.apache.org/jira/browse/HDFS-9364
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9364.001.patch
>
>
> When creating NameNodeProxies, we always try to DNS-resolve namenode URIs. 
> This is unnecessary if the URI is logical, and may be significantly slow if 
> the DNS is having problems. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9364) Unnecessary DNS resolution attempts when NameNodeProxies

2015-11-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9364:

Attachment: HDFS-9364.001.patch

Patch 001 fixes the unnecessary DNS resolution by checking against the 
configured service name. Added a test similar to what HADOOP-9150 did to 
guarantee the URI is not DNS-resolved.
The original {{DFSUtilClient#getNNAddress(URI)}} is untouched, given that: 1. 
it's public; 2. we need the configuration to check whether the URI is logical.
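
A hedged sketch of the idea only; {{isLogicalUri}} below is a hypothetical 
stand-in for the configured-service-name check the patch performs:

{code}
import java.net.InetSocketAddress;
import java.net.URI;
import java.util.Collections;
import java.util.Set;

public class ProxyAddrDemo {
  // Hypothetical stand-in: a URI is "logical" if its authority names a
  // configured nameservice rather than a resolvable host.
  static boolean isLogicalUri(Set<String> nameservices, URI uri) {
    return nameservices.contains(uri.getHost());
  }

  static InetSocketAddress getAddress(Set<String> nameservices, URI uri) {
    if (isLogicalUri(nameservices, uri)) {
      // Logical URI: leave unresolved and skip DNS entirely.
      return InetSocketAddress.createUnresolved(uri.getHost(), uri.getPort());
    }
    // Physical URI: resolve as before (this can block on a slow DNS).
    return new InetSocketAddress(uri.getHost(), uri.getPort());
  }

  public static void main(String[] args) {
    Set<String> ns = Collections.singleton("mycluster");
    System.out.println(getAddress(ns, URI.create("hdfs://mycluster:8020")));
    System.out.println(getAddress(ns, URI.create("hdfs://localhost:8020")));
  }
}
{code}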

> Unnecessary DNS resolution attempts when NameNodeProxies
> 
>
> Key: HDFS-9364
> URL: https://issues.apache.org/jira/browse/HDFS-9364
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9364.001.patch
>
>
> When creating NameNodeProxies, we always try to DNS-resolve namenode URIs. 
> This is unnecessary if the URI is logical, and may be significantly slow if 
> the DNS is having problems. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-11-02 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986818#comment-14986818
 ] 

Walter Su commented on HDFS-6101:
-

The test failure is possibly because the stopped DN isn't removed from the 
cluster map, and {{sleepSeconds(5)}} doesn't guarantee that it has been removed 
from the cluster map.

1. Please don't remove this. It's intentional: after sleeping, we want some 
writers NOT yet started.
{code}
-  // Some of them are too slow and will be not yet started. 
-  sleepSeconds(1);
{code}

2. Instead of hardcoding the sleep time to 5s, we can use 
{{GenericTestUtils.waitFor(..)}} to check the block replication. The 
wait/notify is unnecessary.

3. After
{code}
cluster.stopDataNode(AppendTestUtil.nextInt(REPLICATION));
{code}
we should call {{cluster.setDataNodeDead(..)}} to remove it from the cluster 
map, as sketched below.
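
Putting the three points together, a rough sketch of the flow (scaffolding such 
as {{cluster}} and {{REPLICATION}} comes from the test itself; 
{{replicationDone()}} is a placeholder for the actual replication check):
{code}
import com.google.common.base.Supplier;
import org.apache.hadoop.hdfs.protocol.DatanodeID;
import org.apache.hadoop.test.GenericTestUtils;

// Sketch only: stop a random DN, mark it dead in the cluster map, then poll
// for the expected replication instead of sleeping a hardcoded 5 seconds.
int idx = AppendTestUtil.nextInt(REPLICATION);
DatanodeID dnId = cluster.getDataNodes().get(idx).getDatanodeId();
cluster.stopDataNode(idx);
cluster.setDataNodeDead(dnId);

GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return replicationDone();  // placeholder for the block replication check
  }
}, 100, 60000);  // poll every 100 ms, give up after 60 s
{code}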

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> HDFS-6101.003.patch, TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9345) Erasure Coding: create dummy coder and schema

2015-11-02 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986864#comment-14986864
 ] 

Rui Li commented on HDFS-9345:
--

[~drankye] told me that the dummy schema depends on some other JIRAs, so I just 
created HADOOP-12544 to implement the dummy coder. I'll use this JIRA for the 
dummy schema when the dependent tasks are done.

> Erasure Coding: create dummy coder and schema
> -
>
> Key: HDFS-9345
> URL: https://issues.apache.org/jira/browse/HDFS-9345
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>
> We can create a dummy coder which does no computation and simply returns zero 
> bytes. Similarly, we can create a test-only schema with no parity blocks.
> Such a coder and schema can be used to isolate performance issues to the 
> HDFS-side logic instead of the codec, which would be useful when tuning the 
> performance of EC.
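
As a rough illustration of the dummy-coder idea (the class and method here are 
made up for this sketch; the real implementation is tracked in HADOOP-12544):
{code}
import java.nio.ByteBuffer;

// Sketch only: an "encoder" that does no coding math and leaves the parity
// outputs as zero bytes, so any measured cost comes from HDFS-side logic.
public class DummyRawEncoder {
  public void encode(ByteBuffer[] inputs, ByteBuffer[] outputs) {
    for (ByteBuffer out : outputs) {
      while (out.hasRemaining()) {
        out.put((byte) 0);  // zero-filled parity, no computation
      }
    }
  }
}
{code}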



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-11-02 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9129:

Attachment: HDFS-9129.019.patch

The v19 patch is the latest effort to simplify the {{BlockManagerSafeMode}} 
status. The {{INITIALIZED}} enum was considered to make the safe mode 
complicated, and was thus removed.

Let's see the Jenkins report for the functional tests. Will revisit the 
synchronization behavior.
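
For illustration, the remaining status set could look like the following (a 
hedged reconstruction, not a copy of the patch):
{code}
// Sketch only: without INITIALIZED, the safe-mode tracking starts directly in
// PENDING_THRESHOLD and moves forward until safe mode is off.
enum BMSafeModeStatus {
  PENDING_THRESHOLD, // waiting until enough blocks have been reported
  EXTENSION,         // threshold reached; waiting out the extension period
  OFF                // safe mode exited
}
{code}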

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, 
> HDFS-9129.002.patch, HDFS-9129.003.patch, HDFS-9129.004.patch, 
> HDFS-9129.005.patch, HDFS-9129.006.patch, HDFS-9129.007.patch, 
> HDFS-9129.008.patch, HDFS-9129.009.patch, HDFS-9129.010.patch, 
> HDFS-9129.011.patch, HDFS-9129.012.patch, HDFS-9129.013.patch, 
> HDFS-9129.014.patch, HDFS-9129.015.patch, HDFS-9129.016.patch, 
> HDFS-9129.017.patch, HDFS-9129.018.patch, HDFS-9129.019.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can be moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9313) Possible NullPointerException in BlockManager if no excess replica can be chosen

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986819#comment-14986819
 ] 

Hudson commented on HDFS-9313:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1354 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1354/])
HDFS-9313. Possible NullPointerException in BlockManager if no excess (mingma: 
rev d565480da2f646b40c3180e1ccb2935c9863dfef)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Possible NullPointerException in BlockManager if no excess replica can be 
> chosen
> 
>
> Key: HDFS-9313
> URL: https://issues.apache.org/jira/browse/HDFS-9313
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HDFS-9313-2.patch, HDFS-9313.patch
>
>
> HDFS-8647 makes it easier to reason about various block placement scenarios. 
> Here is one possible case where BlockManager won't be able to find the excess 
> replica to delete: when the storage policy changes around the same time the 
> balancer moves the block. When this happens, it causes a NullPointerException.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:156)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseReplicasToDelete(BlockPlacementPolicyDefault.java:978)
> {noformat}
> Note that this hasn't been seen in any production cluster; it was found by 
> new unit tests. In addition, the issue existed before HDFS-8647.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9313) Possible NullPointerException in BlockManager if no excess replica can be chosen

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986793#comment-14986793
 ] 

Hudson commented on HDFS-9313:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2561 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2561/])
HDFS-9313. Possible NullPointerException in BlockManager if no excess (mingma: 
rev d565480da2f646b40c3180e1ccb2935c9863dfef)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java


> Possible NullPointerException in BlockManager if no excess replica can be 
> chosen
> 
>
> Key: HDFS-9313
> URL: https://issues.apache.org/jira/browse/HDFS-9313
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HDFS-9313-2.patch, HDFS-9313.patch
>
>
> HDFS-8647 makes it easier to reason about various block placement scenarios. 
> Here is one possible case where BlockManager won't be able to find the excess 
> replica to delete: when the storage policy changes around the same time the 
> balancer moves the block. When this happens, it causes a NullPointerException.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:156)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseReplicasToDelete(BlockPlacementPolicyDefault.java:978)
> {noformat}
> Note that this hasn't been seen in any production cluster; it was found by 
> new unit tests. In addition, the issue existed before HDFS-8647.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9312) Fix TestReplication to be FsDataset-agnostic.

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986796#comment-14986796
 ] 

Hudson commented on HDFS-9312:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2561 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2561/])
HDFS-9312. Fix TestReplication to be FsDataset-agnostic. (lei) (lei: rev 
7632409482aaf06ecc6fe370a9f519afb969ad30)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java


> Fix TestReplication to be FsDataset-agnostic.
> -
>
> Key: HDFS-9312
> URL: https://issues.apache.org/jira/browse/HDFS-9312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9312.00.patch, HDFS-9312.01.patch
>
>
> {{TestReplication}} uses raw file system access to inject dummy replica 
> files. This makes {{TestReplication}} incompatible with non-FS dataset 
> implementations.
> We can fix it by using the existing {{FsDatasetTestUtils}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9308) Add truncateMeta() and deleteMeta() to MiniDFSCluster

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986794#comment-14986794
 ] 

Hudson commented on HDFS-9308:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2561 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2561/])
HDFS-9308. Add truncateMeta() and deleteMeta() to MiniDFSCluster. (Tony (lei: 
rev 8e05dbf2bddce95d5f5a5bae5df61acabf0ba7c5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add truncateMeta() and deleteMeta() to MiniDFSCluster
> -
>
> Key: HDFS-9308
> URL: https://issues.apache.org/jira/browse/HDFS-9308
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9308.001.patch, HDFS-9308.002.patch, 
> HDFS-9308.003.patch
>
>
> HDFS-9188 introduced the {{corruptMeta()}} method to make corrupting the 
> metadata file filesystem-agnostic. There should also be {{truncateMeta()}} and 
> {{deleteMeta()}} methods in MiniDFSCluster to allow truncation of metadata 
> files on DataNodes without writing code that's specific to the underlying file 
> system. {{FsDatasetTestUtils#truncateMeta()}} is already implemented by 
> HDFS-9188 and can be exposed easily in {{MiniDFSCluster}}.
> This will be useful for tests such as 
> {{TestLeaseRecovery#testBlockRecoveryWithLessMetafile}} and 
> {{TestCrcCorruption#testCrcCorruption}}.
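
A minimal sketch of what the delegation inside {{MiniDFSCluster}} could look 
like (assuming corresponding {{truncateMeta}}/{{deleteMeta}} hooks on the 
{{FsDatasetTestUtils}} facade from HDFS-9188; the accessor name is illustrative):
{code}
// Sketch only, inside MiniDFSCluster: delegate to the per-DataNode
// FsDatasetTestUtils so no raw-file-system code is needed in the tests.
public void truncateMeta(int dnIndex, ExtendedBlock blk, int newSize)
    throws IOException {
  getFsDatasetTestUtils(dnIndex).truncateMeta(blk, newSize);
}

public void deleteMeta(int dnIndex, ExtendedBlock blk) throws IOException {
  getFsDatasetTestUtils(dnIndex).deleteMeta(blk);
}
{code}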



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9275) Wait previous ErasureCodingWork to finish before schedule another one

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986795#comment-14986795
 ] 

Hudson commented on HDFS-9275:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2561 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2561/])
HDFS-9275. Wait previous ErasureCodingWork to finish before schedule (yliu: rev 
5ba2b98d0fe29603e136fc43a14f853e820cf7e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRecoverStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeModeWithStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithMissingBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteStripedFileWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRecoverStripedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Wait previous ErasureCodingWork to finish before schedule another one
> -
>
> Key: HDFS-9275
> URL: https://issues.apache.org/jira/browse/HDFS-9275
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Walter Su
>Assignee: Walter Su
> Fix For: 3.0.0
>
> Attachments: HDFS-9275.01.patch, HDFS-9275.02.patch, 
> HDFS-9275.03.patch, HDFS-9275.04.patch, HDFS-9275.05.patch
>
>
> In {{ErasureCodingWorker}}, for the same block group, one task doesn't know 
> which internal blocks are being recovered by other tasks. We could end up 
> recovering two identical blocks with the same index. So {{ReplicationMonitor}} 
> should wait for the previous work to finish before scheduling another one.
> This is related to the occasional failure of {{TestRecoverStripedFile}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986837#comment-14986837
 ] 

Hadoop QA commented on HDFS-9276:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 55s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 58s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 15s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
| JDK v1.7.0_79 Failed junit tests | hadoop.ipc.TestIPC |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-9329) TestBootstrapStandby#testRateThrottling is flaky because fsimage size is smaller than IO buffer size

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985963#comment-14985963
 ] 

Hudson commented on HDFS-9329:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #628 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/628/])
HDFS-9329. TestBootstrapStandby#testRateThrottling is flaky because (zhz: rev 
259bea3b48de7469a500831efb3306e8464a2dc9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestBootstrapStandby#testRateThrottling is flaky because fsimage size is 
> smaller than IO buffer size
> 
>
> Key: HDFS-9329
> URL: https://issues.apache.org/jira/browse/HDFS-9329
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9329.00.patch, HDFS-9329.01.patch
>
>
> {{testRateThrottling}} verifies that the bootstrap transfer should time out 
> with a very small {{DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY}} value. 
> However, throttling on the image sender only happens after sending each IO 
> buffer. Therefore, the test sometimes fails if the receiver receives the full 
> fsimage (which is smaller than the IO buffer size) before throttling begins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9329) TestBootstrapStandby#testRateThrottling is flaky because fsimage size is smaller than IO buffer size

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986000#comment-14986000
 ] 

Hudson commented on HDFS-9329:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2558 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2558/])
HDFS-9329. TestBootstrapStandby#testRateThrottling is flaky because (zhz: rev 
259bea3b48de7469a500831efb3306e8464a2dc9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java


> TestBootstrapStandby#testRateThrottling is flaky because fsimage size is 
> smaller than IO buffer size
> 
>
> Key: HDFS-9329
> URL: https://issues.apache.org/jira/browse/HDFS-9329
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9329.00.patch, HDFS-9329.01.patch
>
>
> {{testRateThrottling}} verifies that the bootstrap transfer should time out 
> with a very small {{DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY}} value. 
> However, throttling on the image sender only happens after sending each IO 
> buffer. Therefore, the test sometimes fails if the receiver receives the full 
> fsimage (which is smaller than the IO buffer size) before throttling begins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-02 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985829#comment-14985829
 ] 

Mingliang Liu commented on HDFS-9242:
-

Thanks to [~ozawa] for the catch.

Agree with [~wheat9]. 
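
For readers of the DC_DOUBLECHECK warning quoted below, a minimal sketch of one 
standard way to remove it, the initialization-on-demand holder (an illustration 
only, not necessarily what the attached patches do; the cache type is assumed):
{code}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch only: the holder idiom gives lazy, thread-safe initialization via the
// JVM's class-loading guarantees, avoiding broken double-checked locking.
final class UgiCacheHolder {
  static final Cache<String, UserGroupInformation> UGI_CACHE =
      CacheBuilder.newBuilder()
          .expireAfterAccess(10, TimeUnit.MINUTES)  // illustrative expiry
          .build();

  private UgiCacheHolder() {}
}
{code}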

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9242-002.patch, HDFS-9242-003.patch, HDFS-9242.patch
>
>
> This was introduced by HDFS-8855, and the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DCPossible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9312) Fix TestReplication to be FsDataset-agnostic.

2015-11-02 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9312:

Attachment: HDFS-9312.01.patch

Thanks a lot for the reviews, [~zhz]. Updated the patch to address your 
comments. Will commit once Jenkins finishes.

> Fix TestReplication to be FsDataset-agnostic.
> -
>
> Key: HDFS-9312
> URL: https://issues.apache.org/jira/browse/HDFS-9312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-9312.00.patch, HDFS-9312.01.patch
>
>
> {{TestReplication}} uses raw file system access to inject dummy replica 
> files. This makes {{TestReplication}} incompatible with non-FS dataset 
> implementations.
> We can fix it by using the existing {{FsDatasetTestUtils}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-02 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9356:
-
Status: Patch Available  (was: Open)

> Last Contact value is empty in Datanode Info tab while Decommissioning 
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in the decommissioning state, the Last Contact value is empty 
> in the Datanode Information tab of the NameNode UI.
> Attaching a screenshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7725) Incorrect "nodes in service" metrics caused all writes to fail

2015-11-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7725:

Fix Version/s: (was: 3.0.0)

> Incorrect "nodes in service" metrics caused all writes to fail
> --
>
> Key: HDFS-7725
> URL: https://issues.apache.org/jira/browse/HDFS-7725
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
>  Labels: 2.7.2-candidate
> Fix For: 2.7.2
>
> Attachments: HDFS-7725-2.patch, HDFS-7725-3.patch, HDFS-7725.patch
>
>
> One of our clusters sometimes couldn't allocate blocks from any DNs. 
> BlockPlacementPolicyDefault complains with the following messages for all DNs.
> {noformat}
> the node is too busy (load:x > y)
> {noformat}
> It turns out the {{HeartbeatManager}}'s {{nodesInService}} was computed 
> incorrectly when admins decomm or recomm dead nodes. Here are two scenarios.
> * Decomm dead nodes. It turns out HDFS-7374 has fixed it; not sure if that 
> was intentional. cc [~zhz], [~andrew.wang], [~atm]. Here is the sequence of 
> events without HDFS-7374.
> ** Cluster has one live node. nodesInService == 1
> ** The node becomes dead. nodesInService == 0
> ** Decomm the node. nodesInService == -1
> * However, HDFS-7374 introduces another inconsistency when recomm is involved.
> ** Cluster has one live node. nodesInService == 1
> ** The node becomes dead. nodesInService == 0
> ** Decomm the node. nodesInService == 0
> ** Recomm the node. nodesInService == 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985967#comment-14985967
 ] 

Hudson commented on HDFS-8777:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #628 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/628/])
HDFS-8777. Erasure Coding: add tests for taking snapshots on EC files. (zhz: 
rev 90e14055168afdb93fa8089158c03a6a694e066c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Jing Zhao
>Assignee: Rakesh R
>  Labels: test
> Fix For: 3.0.0
>
> Attachments: HDFS-8777-01.patch, HDFS-8777-02.patch, 
> HDFS-8777-03.patch, HDFS-8777-HDFS-7285-00.patch, HDFS-8777-HDFS-7285-01.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify 
> that fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986003#comment-14986003
 ] 

Hudson commented on HDFS-8777:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2558 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2558/])
HDFS-8777. Erasure Coding: add tests for taking snapshots on EC files. (zhz: 
rev 90e14055168afdb93fa8089158c03a6a694e066c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Jing Zhao
>Assignee: Rakesh R
>  Labels: test
> Fix For: 3.0.0
>
> Attachments: HDFS-8777-01.patch, HDFS-8777-02.patch, 
> HDFS-8777-03.patch, HDFS-8777-HDFS-7285-00.patch, HDFS-8777-HDFS-7285-01.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify 
> that fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-02 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9356:
-
Attachment: HDFS-9356.patch

> Last Contact value is empty in Datanode Info tab while Decommissioning 
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in the decommissioning state, the Last Contact value is empty 
> in the Datanode Information tab of the NameNode UI.
> Attaching a screenshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9329) TestBootstrapStandby#testRateThrottling is flaky because fsimage size is smaller than IO buffer size

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985923#comment-14985923
 ] 

Hudson commented on HDFS-9329:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #617 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/617/])
HDFS-9329. TestBootstrapStandby#testRateThrottling is flaky because (zhz: rev 
259bea3b48de7469a500831efb3306e8464a2dc9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestBootstrapStandby#testRateThrottling is flaky because fsimage size is 
> smaller than IO buffer size
> 
>
> Key: HDFS-9329
> URL: https://issues.apache.org/jira/browse/HDFS-9329
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9329.00.patch, HDFS-9329.01.patch
>
>
> {{testRateThrottling}} verifies that the bootstrap transfer should time out 
> with a very small {{DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY}} value. 
> However, throttling on the image sender only happens after sending each IO 
> buffer. Therefore, the test sometimes fails if the receiver receives the full 
> fsimage (which is smaller than the IO buffer size) before throttling begins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-11-02 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985948#comment-14985948
 ] 

James Clampffer commented on HDFS-9117:
---

That sounds good to me.

+1 on the new patch

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch, HDFS-9117.HDFS-8707.005.patch, 
> HDFS-9117.HDFS-8707.006.patch, HDFS-9117.HDFS-8707.008.patch, 
> HDFS-9117.HDFS-8707.009.patch, HDFS-9117.HDFS-8707.010.patch, 
> HDFS-9117.HDFS-9288.007.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read its configuration from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-02 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9356:
-
Attachment: HDFS-9356.patch

Attaching the initial patch.
I feel the last-contact info is not required for the decommissioning table, 
since it is already displayed in the "In operation" table.

> Last Contact value is empty in Datanode Info tab while Decommissioning 
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in the decommissioning state, the Last Contact value is empty 
> in the Datanode Information tab of the NameNode UI.
> Attaching a screenshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-02 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9356:
-
Attachment: (was: HDFS-9356.patch)

> Last Contact value is empty in Datanode Info tab while Decommissioning 
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: decomm.png
>
>
> While a DN is in the decommissioning state, the Last Contact value is empty 
> in the Datanode Information tab of the NameNode UI.
> Attaching a screenshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9263) tests are using /test/build/data; breaking Jenkins

2015-11-02 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HDFS-9263:
--
Target Version/s: 3.0.0, 2.7.3, 2.6.3  (was: 3.0.0, 2.6.2, 2.7.3)

Targeting 2.6.3 now that 2.6.2 has shipped.

> tests are using /test/build/data; breaking Jenkins
> --
>
> Key: HDFS-9263
> URL: https://issues.apache.org/jira/browse/HDFS-9263
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HDFS-9263-001.patch, HDFS-9263-002.patch
>
>
> Some of the HDFS tests are using the path {{test/build/data}} to store files, 
> thus leaking files which fail the new post-build RAT checks on Jenkins (and 
> dirtying all development systems with paths which {{mvn clean}} will miss).
> fix



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8871) Decommissioning of a node with a failed volume may not start

2015-11-02 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HDFS-8871:
--
Target Version/s: 2.7.2, 2.6.3  (was: 2.7.2, 2.6.2)

Targeting 2.6.3 now that 2.6.2 has shipped.

> Decommissioning of a node with a failed volume may not start
> 
>
> Key: HDFS-8871
> URL: https://issues.apache.org/jira/browse/HDFS-8871
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Critical
>
> Since staleness may not be properly cleared, a node with a failed volume may 
> not actually get scanned for block replication. Nothing is being replicated 
> from these nodes.
> This bug does not manifest unless the datanode has a unique storage ID per 
> volume. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8870) Lease is leaked on write failure

2015-11-02 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HDFS-8870:
--
Target Version/s: 2.7.2, 2.6.3  (was: 2.7.2, 2.6.2)

Targeting 2.6.3 now that 2.6.2 has shipped.

> Lease is leaked on write failure
> 
>
> Key: HDFS-8870
> URL: https://issues.apache.org/jira/browse/HDFS-8870
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Daryn Sharp
>
> Creating this ticket on behalf of [~daryn]
> We've seen this in one of our clusters. When a long-running process has a 
> write failure, the lease is leaked and keeps getting renewed until the token 
> expires.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9320) libhdfspp should not use sizeof for stream parsing

2015-11-02 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9320:
--
Attachment: HDFS-9320.HDFS-8707.002.patch

Ah ok.  This fixes it.

> libhdfspp should not use sizeof for stream parsing
> --
>
> Key: HDFS-9320
> URL: https://issues.apache.org/jira/browse/HDFS-9320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-9320.HDFS-8707.000.patch, 
> HDFS-9320.HDFS-8707.001.patch, HDFS-9320.HDFS-8707.002.patch
>
>
> In a few places, we're using sizeof(int) and sizeof(short) to determine where 
> in the received buffers we should be looking for data.  Those values are 
> compiler- and platform-dependent.  We should use specified sizes, or at least 
> sizeof(int32_t).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9354) Fix TestBalancer#testBalancerWithZeroThreadsForMove on Windows

2015-11-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985887#comment-14985887
 ] 

Xiaoyu Yao commented on HDFS-9354:
--

Thanks [~cnauroth] for reviewing the patch and providing helpful suggestions. 

bq. 1. We could add a JUnit @After method that always shuts down cluster if it 
is non-null. Then, the individual tests wouldn't need to do try-finally, and 
any new tests that get added over time will get the automatic shutdown for 
free. This would require a bigger patch though.

That's a good idea, and I had similar thoughts too. Compared with the small 
change in patch v0, it would require a bigger patch as you mentioned, but it 
can help us avoid leaks in the future. I can update the patch based on that.

bq. 2. The check for HadoopIllegalArgumentException could be simplified by 
using JUnit's ExpectedException rule. If you'd like to see a simple example of 
this, I recommend looking at TestAclConfigFlag.

My understanding is that the "Rule and ExpectedException" approach (JUnit 4.7) 
is an alternative to @Test(expected = HadoopIllegalArgumentException.class) 
that allows finer-grained validation of the exception message. But both still 
rely on a JUnit @After method to ensure the cluster is shut down upon an 
exception; a sketch combining the two follows.
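
A minimal sketch combining the two (the test name and body are abbreviated; the 
real test lives in {{TestBalancer}}):
{code}
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class TestBalancerSketch {
  @Rule
  public ExpectedException thrown = ExpectedException.none();

  private MiniDFSCluster cluster;

  // Always runs, even when the expected exception is thrown, so the NN
  // metadata directory handles get released on Windows.
  @After
  public void shutdown() {
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }

  @Test
  public void testBalancerWithZeroThreadsForMove() throws Exception {
    thrown.expect(HadoopIllegalArgumentException.class);
    // ... start the cluster and run the balancer with zero threads for move,
    // as in the real test; cleanup is handled by @After.
  }
}
{code}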

> Fix TestBalancer#testBalancerWithZeroThreadsForMove on Windows
> --
>
> Key: HDFS-9354
> URL: https://issues.apache.org/jira/browse/HDFS-9354
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-9354.00.patch
>
>
> This negative test expects a HadoopIllegalArgumentException on an illegal 
> configuration. It uses JUnit's (expected=HadoopIllegalArgumentException.class) 
> and passes fine on Linux.
> On Windows, this test passes as well, but it leaves open handles on the NN 
> metadata directories used by MiniDFSCluster. As a result, quite a few 
> subsequent TestBalancer unit tests can't start MiniDFSCluster, because the 
> open handles prevent them from cleaning up the NN metadata directories on 
> Windows.
> This JIRA is opened to explicitly catch the exception and ensure the test 
> cluster is properly shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-11-02 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9182:

Component/s: erasure-coding

> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-02 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985879#comment-14985879
 ] 

Xiaobing Zhou commented on HDFS-9242:
-

Patch 003 LGTM. Thanks, everyone, for this follow-up fix for HDFS-8855!

[~brahmareddy], can you confirm the UT failure? And why is the latest findbugs 
link broken?

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9242-002.patch, HDFS-9242-003.patch, HDFS-9242.patch
>
>
> This was introduced by HDFS-8855, and the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DCPossible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4937) ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985997#comment-14985997
 ] 

Hadoop QA commented on HDFS-4937:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 21s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 46s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 119m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
|
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.TestDecommission |
|   | 

[jira] [Commented] (HDFS-9360) Storage type usage isn't updated properly after file deletion

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986782#comment-14986782
 ] 

Hadoop QA commented on HDFS-9360:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 34s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 131m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770241/HDFS-9360-2.patch |
| JIRA Issue | HDFS-9360 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 973085ba0859 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 

[jira] [Commented] (HDFS-9313) Possible NullPointerException in BlockManager if no excess replica can be chosen

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986693#comment-14986693
 ] 

Hudson commented on HDFS-9313:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8746 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8746/])
HDFS-9313. Possible NullPointerException in BlockManager if no excess (mingma: 
rev d565480da2f646b40c3180e1ccb2935c9863dfef)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Possible NullPointerException in BlockManager if no excess replica can be 
> chosen
> 
>
> Key: HDFS-9313
> URL: https://issues.apache.org/jira/browse/HDFS-9313
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HDFS-9313-2.patch, HDFS-9313.patch
>
>
> HDFS-8647 makes it easier to reason about various block placement scenarios. 
> Here is one possible case where BlockManager won't be able to find the excess 
> replica to delete: when the storage policy changes around the same time the 
> balancer moves the block. When this happens, it causes a NullPointerException.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:156)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseReplicasToDelete(BlockPlacementPolicyDefault.java:978)
> {noformat}
> Note that it hasn't been found in any production clusters. Instead, it was 
> found by new unit tests. In addition, the issue existed before HDFS-8647.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9275) Wait previous ErasureCodingWork to finish before schedule another one

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986728#comment-14986728
 ] 

Hudson commented on HDFS-9275:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #565 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/565/])
HDFS-9275. Wait previous ErasureCodingWork to finish before schedule (yliu: rev 
5ba2b98d0fe29603e136fc43a14f853e820cf7e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeModeWithStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteStripedFileWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithMissingBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRecoverStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRecoverStripedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
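
For context, a minimal sketch (hypothetical, not the committed patch) of the 
scheduling rule described below: remember which block groups already have 
outstanding recovery work and refuse to schedule a duplicate.

{code}
import java.util.HashSet;
import java.util.Set;

public class PendingRecoverySample {
  private final Set<Long> pendingBlockGroups = new HashSet<>();

  // add() returns false if work for this group is already outstanding,
  // so a second identical recovery task is never scheduled.
  synchronized boolean trySchedule(long blockGroupId) {
    return pendingBlockGroups.add(blockGroupId);
  }

  synchronized void onWorkFinished(long blockGroupId) {
    pendingBlockGroups.remove(blockGroupId);
  }

  public static void main(String[] args) {
    PendingRecoverySample monitor = new PendingRecoverySample();
    System.out.println(monitor.trySchedule(42L));  // true: schedule recovery
    System.out.println(monitor.trySchedule(42L));  // false: previous work pending
    monitor.onWorkFinished(42L);
    System.out.println(monitor.trySchedule(42L));  // true: safe to schedule again
  }
}
{code}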


> Wait previous ErasureCodingWork to finish before schedule another one
> -
>
> Key: HDFS-9275
> URL: https://issues.apache.org/jira/browse/HDFS-9275
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Walter Su
>Assignee: Walter Su
> Fix For: 3.0.0
>
> Attachments: HDFS-9275.01.patch, HDFS-9275.02.patch, 
> HDFS-9275.03.patch, HDFS-9275.04.patch, HDFS-9275.05.patch
>
>
> In {{ErasureCodingWorker}}, for the same block group, one task doesn't know 
> which internal blocks are being recovered by other tasks. We could end up 
> recovering 2 identical blocks with the same index. So, {{ReplicationMonitor}} 
> should wait for the previous work to finish before scheduling another one.
> This is related to the occasional failure of {{TestRecoverStripedFile}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9339) Extend full test of KMS ACLs

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986730#comment-14986730
 ] 

Hudson commented on HDFS-9339:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #565 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/565/])
HDFS-9339. Extend full test of KMS ACLs. Contributed by Daniel (zhz: rev 
78d6890865424db850faecfc5c76f14c64925063)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAclsEndToEnd.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Extend full test of KMS ACLs
> 
>
> Key: HDFS-9339
> URL: https://issues.apache.org/jira/browse/HDFS-9339
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-9339.001.patch, HDFS-9339.002.patch
>
>
> HDFS-9295 adds an end-to-end test for KMS, but it is missing a dimension.  
> The tests added in that JIRA hold the configuration constant and test that 
> all operations succeed or fail as expected.  More tests are needed that hold 
> the operation constant and test that all possible configurations cause the 
> operations to succeed or fail as expected.  This JIRA is to add those tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9308) Add truncateMeta() and deleteMeta() to MiniDFSCluster

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986727#comment-14986727
 ] 

Hudson commented on HDFS-9308:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #565 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/565/])
HDFS-9308. Add truncateMeta() and deleteMeta() to MiniDFSCluster. (Tony (lei: 
rev 8e05dbf2bddce95d5f5a5bae5df61acabf0ba7c5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
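
For anyone wanting to try the new hooks, here is a hypothetical usage sketch; 
the {{truncateMeta()}} signature below is assumed from the description, not 
verified against the committed patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

public class TruncateMetaSketch {
  public static void main(String[] args) throws Exception {
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(new Configuration()).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/f");
      DFSTestUtil.createFile(fs, p, 1024L, (short) 1, 0L);
      ExtendedBlock blk = DFSTestUtil.getFirstBlock(fs, p);
      // Assumed signature: truncateMeta(dnIndex, block, newSize); the point is
      // that the test never touches the underlying file system directly.
      cluster.truncateMeta(0, blk, 50);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}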


> Add truncateMeta() and deleteMeta() to MiniDFSCluster
> -
>
> Key: HDFS-9308
> URL: https://issues.apache.org/jira/browse/HDFS-9308
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9308.001.patch, HDFS-9308.002.patch, 
> HDFS-9308.003.patch
>
>
> HDFS-9188 introduced {{corruptMeta()}} method to make corrupting the metadata 
> file filesystem agnostic. There should also be a {{truncateMeta()}} and 
> {{deleteMeta()}} method in MiniDFSCluster to allow truncation of metadata 
> files on DataNodes without writing code that's specific to the underlying file 
> system. {{FsDatasetTestUtils#truncateMeta()}} is already implemented by 
> HDFS-9188 and can be exposed easily in {{MiniDFSCluster}}.
> This will be useful for tests such as 
> {{TestLeaseRecovery#testBlockRecoveryWithLessMetafile}} and 
> {{TestCrcCorruption#testCrcCorruption}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9312) Fix TestReplication to be FsDataset-agnostic.

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986729#comment-14986729
 ] 

Hudson commented on HDFS-9312:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #565 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/565/])
HDFS-9312. Fix TestReplication to be FsDataset-agnostic. (lei) (lei: rev 
7632409482aaf06ecc6fe370a9f519afb969ad30)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java


> Fix TestReplication to be FsDataset-agnostic.
> -
>
> Key: HDFS-9312
> URL: https://issues.apache.org/jira/browse/HDFS-9312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9312.00.patch, HDFS-9312.01.patch
>
>
> {{TestReplication}} uses raw file system access to inject dummy replica 
> files. That makes {{TestReplication}} incompatible with non-filesystem 
> dataset implementations.
> We can fix it by using the existing {{FsDatasetTestUtils}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9007) Fix HDFS Balancer to honor upgrade domain policy

2015-11-02 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9007:
--
Attachment: (was: HDFS-9007-2.patch)

> Fix HDFS Balancer to honor upgrade domain policy
> 
>
> Key: HDFS-9007
> URL: https://issues.apache.org/jira/browse/HDFS-9007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9007-2.patch, HDFS-9007.patch
>
>
> In its current design, the HDFS Balancer doesn't use the BlockPlacementPolicy 
> used by the namenode at runtime. Instead, it has somewhat redundant code to 
> make sure block allocation conforms to the rack policy.
> When namenode uses upgrade domain based policy, we need to make sure that 
> HDFS balancer doesn't move blocks in a way that could violate upgrade domain 
> block placement policy.
> In the longer term, we should consider how to make Balancer independent of 
> the actual BlockPlacementPolicy as in HDFS-1431. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9007) Fix HDFS Balancer to honor upgrade domain policy

2015-11-02 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9007:
--
Attachment: HDFS-9007-2.patch

> Fix HDFS Balancer to honor upgrade domain policy
> 
>
> Key: HDFS-9007
> URL: https://issues.apache.org/jira/browse/HDFS-9007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9007-2.patch, HDFS-9007.patch
>
>
> In its current design, the HDFS Balancer doesn't use the BlockPlacementPolicy 
> used by the namenode at runtime. Instead, it has somewhat redundant code to 
> make sure block allocation conforms to the rack policy.
> When namenode uses upgrade domain based policy, we need to make sure that 
> HDFS balancer doesn't move blocks in a way that could violate upgrade domain 
> block placement policy.
> In the longer term, we should consider how to make Balancer independent of 
> the actual BlockPlacementPolicy as in HDFS-1431. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985465#comment-14985465
 ] 

Hudson commented on HDFS-8777:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8742 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8742/])
HDFS-8777. Erasure Coding: add tests for taking snapshots on EC files. (zhz: 
rev 90e14055168afdb93fa8089158c03a6a694e066c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicyWithSnapshot.java


> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Jing Zhao
>Assignee: Rakesh R
>  Labels: test
> Fix For: 3.0.0
>
> Attachments: HDFS-8777-01.patch, HDFS-8777-02.patch, 
> HDFS-8777-03.patch, HDFS-8777-HDFS-7285-00.patch, HDFS-8777-HDFS-7285-01.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify that 
> fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4937) ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()

2015-11-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985507#comment-14985507
 ] 

Kihwal Lee commented on HDFS-4937:
--

So sorry about the spectacular 118 test failures! It should have refreshed the 
count with an empty excluded-node set to obtain the correct value. It looks 
like a few previously failing test cases are passing with the change. Let's 
see if the precommit build agrees.
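
For the curious, a minimal, self-contained model (hypothetical code, not the 
actual patch) of the terminal-condition fix: re-read the live cluster size on 
every pass instead of trusting a size cached before a topology refresh.

{code}
import java.util.HashSet;
import java.util.Set;

public class ChooseRandomSketch {
  // Stand-in for the placement goodness criteria (rack, load, ...).
  static boolean isGoodTarget(String node) {
    return true;
  }

  static String chooseRandom(Set<String> liveNodes, Set<String> excluded) {
    // The terminal condition uses the current live-node count, re-read on
    // every pass, so the loop ends once every remaining node has been tried.
    while (liveNodes.size() > excluded.size()) {
      for (String candidate : liveNodes) {    // stand-in for a random pick
        if (excluded.add(candidate)) {        // a node not yet tried
          if (isGoodTarget(candidate)) {
            return candidate;
          }
          break;                              // re-check the refreshed count
        }
      }
    }
    return null;  // no node satisfies the criteria; the caller handles failure
  }

  public static void main(String[] args) {
    Set<String> live = new HashSet<>();
    live.add("dn1");
    System.out.println(chooseRandom(live, new HashSet<>()));  // prints dn1
  }
}
{code}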

> ReplicationMonitor can infinite-loop in 
> BlockPlacementPolicyDefault#chooseRandom()
> --
>
> Key: HDFS-4937
> URL: https://issues.apache.org/jira/browse/HDFS-4937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.4-alpha, 0.23.8
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0, 2.7.2
>
> Attachments: HDFS-4937.patch, HDFS-4937.v1.patch, HDFS-4937.v2.patch, 
> HDFS-4937.v3.patch
>
>
> When a large number of nodes are removed by refreshing node lists, the 
> network topology is updated. If the refresh happens at the right moment, the 
> replication monitor thread may get stuck in the while loop of {{chooseRandom()}}. 
> This is because the cached cluster size is used in the terminal condition 
> check of the loop. This usually happens when a block with a high replication 
> factor is being processed. Since replicas/rack is also calculated beforehand, 
> no node choice may satisfy the goodness criteria if refreshing removed racks. 
> All nodes will end up in the excluded list, but the size will still be less 
> than the cached cluster size, so it will loop infinitely. This was observed 
> in a production environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4937) ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()

2015-11-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4937:
-
Attachment: HDFS-4937.v3.patch

> ReplicationMonitor can infinite-loop in 
> BlockPlacementPolicyDefault#chooseRandom()
> --
>
> Key: HDFS-4937
> URL: https://issues.apache.org/jira/browse/HDFS-4937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.4-alpha, 0.23.8
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0, 2.7.2
>
> Attachments: HDFS-4937.patch, HDFS-4937.v1.patch, HDFS-4937.v2.patch, 
> HDFS-4937.v3.patch
>
>
> When a large number of nodes are removed by refreshing node lists, the 
> network topology is updated. If the refresh happens at the right moment, the 
> replication monitor thread may get stuck in the while loop of {{chooseRandom()}}. 
> This is because the cached cluster size is used in the terminal condition 
> check of the loop. This usually happens when a block with a high replication 
> factor is being processed. Since replicas/rack is also calculated beforehand, 
> no node choice may satisfy the goodness criteria if refreshing removed racks. 
> All nodes will end up in the excluded list, but the size will still be less 
> than the cached cluster size, so it will loop infinitely. This was observed 
> in a production environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9320) libhdfspp should not use sizeof for stream parsing

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985448#comment-14985448
 ] 

Hadoop QA commented on HDFS-9320:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
22s {color} | {color:green} HDFS-8707 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 14s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 14s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 15s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 15s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 14s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 15s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 425 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-11-02 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770100/HDFS-9320.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-9320 |
| Optional Tests |  asflicense  cc  unit  javac  compile  |
| uname | Linux 5b2daa6415b7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-e77b1ce/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / d43c905 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13332/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13332/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_79.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13332/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| cc | 

[jira] [Commented] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985566#comment-14985566
 ] 

Hadoop QA commented on HDFS-9079:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 98 new checkstyle issues in 
hadoop-hdfs-project (total was 366, now 456). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client introduced 1 new 
FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 56s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 49s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 146m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Unread field:BlockMetadataCoordinator.java:[line 95] |
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | 

[jira] [Updated] (HDFS-4937) ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()

2015-11-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4937:
-
   Fix Version/s: (was: 2.7.2)
                  (was: 3.0.0)
Target Version/s: 2.7.2  (was: 2.8.0)
          Status: Patch Available  (was: Reopened)

> ReplicationMonitor can infinite-loop in 
> BlockPlacementPolicyDefault#chooseRandom()
> --
>
> Key: HDFS-4937
> URL: https://issues.apache.org/jira/browse/HDFS-4937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.8, 2.0.4-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>  Labels: BB2015-05-TBR
> Attachments: HDFS-4937.patch, HDFS-4937.v1.patch, HDFS-4937.v2.patch, 
> HDFS-4937.v3.patch
>
>
> When a large number of nodes are removed by refreshing node lists, the 
> network topology is updated. If the refresh happens at the right moment, the 
> replication monitor thread may get stuck in the while loop of {{chooseRandom()}}. 
> This is because the cached cluster size is used in the terminal condition 
> check of the loop. This usually happens when a block with a high replication 
> factor is being processed. Since replicas/rack is also calculated beforehand, 
> no node choice may satisfy the goodness criteria if refreshing removed racks. 
> All nodes will end up in the excluded list, but the size will still be less 
> than the cached cluster size, so it will loop infinitely. This was observed 
> in a production environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9360) Storage type usage isn't updated properly after file deletion

2015-11-02 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9360:
--
Assignee: Ming Ma
  Status: Patch Available  (was: Open)

> Storage type usage isn't updated properly after file deletion
> -
>
> Key: HDFS-9360
> URL: https://issues.apache.org/jira/browse/HDFS-9360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9360.patch
>
>
> For a directory that doesn't have any storage policy defined, its quota usage 
> is deducted when a file is deleted. This leads to an incorrect value for 
> storage quota usage. Later, when applications set the storage type, it can 
> exceed its storage quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9249) NPE thrown if an IOException is thrown in NameNode.

2015-11-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985581#comment-14985581
 ] 

Wei-Chiu Chuang commented on HDFS-9249:
---

The test failure is unrelated to this patch. The warning is also not related.

> NPE thrown if an IOException is thrown in NameNode.
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing an NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest making this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_001-003
> 2015-10-14 19:45:07,835 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name2/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name2/current/edits_001-003
> 2015-10-14 19:45:07,836 INFO 

[jira] [Commented] (HDFS-9007) Fix HDFS Balancer to honor upgrade domain policy

2015-11-02 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986007#comment-14986007
 ] 

Lei (Eddy) Xu commented on HDFS-9007:
-

Hi, [~mingma]. This patch looks great to me.

{code}
protected <T> DatanodeInfo getDatanodeInfo(T datanode) {
{code}

I feel like it'd be better to use function overloading here. It'd be safer 
because the compiler will handle the {{null}} case.
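
A minimal sketch (hypothetical types and names) of the overloading suggestion: 
the compiler picks the right variant at compile time, so no runtime null or 
type handling is needed.

{code}
public class DatanodeInfoResolver {
  static class DatanodeInfo {
    final String name;
    DatanodeInfo(String name) { this.name = name; }
  }

  static class DatanodeStorageInfo {
    final DatanodeInfo dn;
    DatanodeStorageInfo(DatanodeInfo dn) { this.dn = dn; }
  }

  // Instead of one generic getDatanodeInfo(T datanode), overload per type:
  static DatanodeInfo getDatanodeInfo(DatanodeInfo datanode) {
    return datanode;
  }

  static DatanodeInfo getDatanodeInfo(DatanodeStorageInfo storage) {
    return storage.dn;
  }

  public static void main(String[] args) {
    DatanodeInfo dn = new DatanodeInfo("dn1");
    System.out.println(getDatanodeInfo(dn).name);
    System.out.println(getDatanodeInfo(new DatanodeStorageInfo(dn)).name);
  }
}
{code}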

+1 once the comment is addressed.

> Fix HDFS Balancer to honor upgrade domain policy
> 
>
> Key: HDFS-9007
> URL: https://issues.apache.org/jira/browse/HDFS-9007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9007.patch
>
>
> In its current design, the HDFS Balancer doesn't use the BlockPlacementPolicy 
> used by the namenode at runtime. Instead, it has somewhat redundant code to 
> make sure block allocation conforms to the rack policy.
> When namenode uses upgrade domain based policy, we need to make sure that 
> HDFS balancer doesn't move blocks in a way that could violate upgrade domain 
> block placement policy.
> In the longer term, we should consider how to make Balancer independent of 
> the actual BlockPlacementPolicy as in HDFS-1431. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8893) DNs with failed volumes stop serving during rolling upgrade

2015-11-02 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8893:
--
Target Version/s: 2.7.3  (was: 2.7.2)

Moving this out of 2.7.2 as there's been no update in a while.

> DNs with failed volumes stop serving during rolling upgrade
> ---
>
> Key: HDFS-8893
> URL: https://issues.apache.org/jira/browse/HDFS-8893
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Daryn Sharp
>Priority: Critical
>
> When a rolling upgrade starts, all DNs try to write a rolling_upgrade marker 
> to each of their volumes. If one of the volumes is bad, this will fail. When 
> this failure happens, the DN does not update the key it received from the NN.
> Unfortunately we had one failed volume on all 3 of the datanodes that had 
> the replica.
> Keys expire after 20 hours so at about 20 hours into the rolling upgrade, the 
> DNs with failed volumes will stop serving clients.
> Here is the stack trace on the datanode size:
> {noformat}
> 2015-08-11 07:32:28,827 [DataNode: heartbeating to 8020] WARN 
> datanode.DataNode: IOException in offerService
> java.io.IOException: Read-only file system
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:947)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.setRollingUpgradeMarkers(BlockPoolSliceStorage.java:721)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.setRollingUpgradeMarker(DataStorage.java:173)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.setRollingUpgradeMarker(FsDatasetImpl.java:2357)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.signalRollingUpgrade(BPOfferService.java:480)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.handleRollingUpgradeStatus(BPServiceActor.java:626)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:677)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:833)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9339) Extend full test of KMS ACLs

2015-11-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9339:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
       Status: Resolved  (was: Patch Available)

Thanks Daniel for exploring this change. I agree that keeping the individual 
{{try-finally}} structures makes each subtest look cleaner. I was suggesting 
the consolidation to avoid duplicate code. So each option has its merits, and 
I'm with you on using the current structure. +1 on the patch; I just committed 
it to trunk and branch-2. Thanks for adding this thorough test!

> Extend full test of KMS ACLs
> 
>
> Key: HDFS-9339
> URL: https://issues.apache.org/jira/browse/HDFS-9339
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.8.0
>
> Attachments: HDFS-9339.001.patch, HDFS-9339.002.patch
>
>
> HDFS-9295 adds an end-to-end test for KMS, but it is missing a dimension.  
> The tests added in that JIRA hold the configuration constant and test that 
> all operations succeed or fail as expected.  More tests are needed that hold 
> the operation constant and test that all possible configurations cause the 
> operations to succeed or fail as expected.  This JIRA is to add those tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-11-02 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9348:

Component/s: erasure-coding

> DFS GetErasureCodingPolicy API on a non-existent file should be handled 
> properly
> 
>
> Key: HDFS-9348
> URL: https://issues.apache.org/jira/browse/HDFS-9348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HDFS-9348-00.patch
>
>
> Presently, calling {{dfs#getErasureCodingPolicy()}} on a non-existent file 
> returns the ErasureCodingPolicy info. As per the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-8777?focusedCommentId=14981077=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14981077]
>  it should validate the path and throw FileNotFoundException.
> Also, the {{dfs#getEncryptionZoneForPath()}} API has the same behavior. Again, 
> we can discuss adding the file existence validation in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9320) libhdfspp should use sizeof(int32_t) instead of sizeof(int) when parsing data

2015-11-02 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9320:
-
Summary: libhdfspp should use sizeof(int32_t) instead of sizeof(int) when 
parsing data  (was: libhdfspp should not use sizeof for stream parsing)

> libhdfspp should use sizeof(int32_t) instead of sizeof(int) when parsing data
> -
>
> Key: HDFS-9320
> URL: https://issues.apache.org/jira/browse/HDFS-9320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-9320.HDFS-8707.000.patch, 
> HDFS-9320.HDFS-8707.001.patch, HDFS-9320.HDFS-8707.002.patch
>
>
> In a few places, we're using sizeof(int) and sizeof(short) to determine where 
> in the received buffers we should be looking for data.  Those values are 
> compiler- and platform-dependent.  We should use specified sizes, or at least 
> sizeof(int32_t).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8674) Improve performance of postponed block scans

2015-11-02 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8674:
--
Target Version/s: 2.7.3  (was: 2.7.2)

Moving this out of 2.7.2 as there's been no update in a while.

> Improve performance of postponed block scans
> 
>
> Key: HDFS-8674
> URL: https://issues.apache.org/jira/browse/HDFS-8674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-8674.patch
>
>
> When a standby goes active, it marks all nodes as "stale" which will cause 
> block invalidations for over-replicated blocks to be queued until full block 
> reports are received from the nodes with the block.  The replication monitor 
> scans the queue with O(N) runtime.  It picks a random offset and iterates 
> through the set to randomize blocks scanned.
> The result is devastating when a cluster loses multiple nodes during a 
> rolling upgrade. Re-replication occurs, the nodes come back, the excess block 
> invalidations are postponed. Rescanning just 2k blocks out of millions of 
> postponed blocks may take multiple seconds. During the scan, the write lock 
> is held which stalls all other processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9308) Add truncateMeta() and deleteMeta() to MiniDFSCluster

2015-11-02 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986045#comment-14986045
 ] 

Lei (Eddy) Xu commented on HDFS-9308:
-

Triggered a new jenkins run.

+1 pending jenkins.

> Add truncateMeta() and deleteMeta() to MiniDFSCluster
> -
>
> Key: HDFS-9308
> URL: https://issues.apache.org/jira/browse/HDFS-9308
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Minor
> Attachments: HDFS-9308.001.patch, HDFS-9308.002.patch, 
> HDFS-9308.003.patch
>
>
> HDFS-9188 introduced {{corruptMeta()}} method to make corrupting the metadata 
> file filesystem agnostic. There should also be a {{truncateMeta()}} and 
> {{deleteMeta()}} method in MiniDFSCluster to allow truncation of metadata 
> files on DataNodes without writing code that's specific to the underlying file 
> system. {{FsDatasetTestUtils#truncateMeta()}} is already implemented by 
> HDFS-9188 and can be exposed easily in {{MiniDFSCluster}}.
> This will be useful for tests such as 
> {{TestLeaseRecovery#testBlockRecoveryWithLessMetafile}} and 
> {{TestCrcCorruption#testCrcCorruption}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9320) libhdfspp should use sizeof(int32_t) instead of sizeof(int) when parsing data

2015-11-02 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9320:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-8707
       Status: Resolved  (was: Patch Available)

I've committed the patch to the HDFS-8707 branch. Thanks [~James Clampffer] for 
the contribution.

> libhdfspp should use sizeof(int32_t) instead of sizeof(int) when parsing data
> -
>
> Key: HDFS-9320
> URL: https://issues.apache.org/jira/browse/HDFS-9320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Fix For: HDFS-8707
>
> Attachments: HDFS-9320.HDFS-8707.000.patch, 
> HDFS-9320.HDFS-8707.001.patch, HDFS-9320.HDFS-8707.002.patch
>
>
> In a few places, we're using sizeof(int) and sizeof(short) to determine where 
> in the received buffers we should be looking for data.  Those values are 
> compiler- and platform-dependent.  We should use specified sizes, or at least 
> sizeof(int32_t).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986097#comment-14986097
 ] 

Hadoop QA commented on HDFS-9129:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} Patch generated 4 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 808, now 756). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 36s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 59s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 122m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-02 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12769665/HDFS-9129.017.patch |
| JIRA Issue | HDFS-9129 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 6100503ffaf0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-11-02 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9129:

Attachment: HDFS-9129.018.patch

The v18 patch fixes the flaky {{TestBlockManagerSafeMode}} unit test.

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, 
> HDFS-9129.002.patch, HDFS-9129.003.patch, HDFS-9129.004.patch, 
> HDFS-9129.005.patch, HDFS-9129.006.patch, HDFS-9129.007.patch, 
> HDFS-9129.008.patch, HDFS-9129.009.patch, HDFS-9129.010.patch, 
> HDFS-9129.011.patch, HDFS-9129.012.patch, HDFS-9129.013.patch, 
> HDFS-9129.014.patch, HDFS-9129.015.patch, HDFS-9129.016.patch, 
> HDFS-9129.017.patch, HDFS-9129.018.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9329) TestBootstrapStandby#testRateThrottling is flaky because fsimage size is smaller than IO buffer size

2015-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986014#comment-14986014
 ] 

Hudson commented on HDFS-9329:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1351 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1351/])
HDFS-9329. TestBootstrapStandby#testRateThrottling is flaky because (zhz: rev 
259bea3b48de7469a500831efb3306e8464a2dc9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
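
For context, a tiny self-contained illustration (hypothetical sizes, not the 
test code) of the flakiness described below: the throttler runs only after 
each full IO buffer is sent, so an image smaller than one buffer is fully 
transferred before throttling ever fires.

{code}
public class ThrottleAfterBufferSketch {
  public static void main(String[] args) {
    int bufferSize = 64 * 1024;  // IO buffer used by the image sender
    int imageSize = 4 * 1024;    // a small test fsimage
    int buffersSent = 0;
    for (int off = 0; off < imageSize; off += bufferSize) {
      // send(buffer) happens here; the single buffer already carries the
      // whole image to the receiver...
      buffersSent++;
      // ...and only now would throttle() run, too late to slow the transfer.
    }
    System.out.println("buffers sent before any throttling: " + buffersSent);
  }
}
{code}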


> TestBootstrapStandby#testRateThrottling is flaky because fsimage size is 
> smaller than IO buffer size
> 
>
> Key: HDFS-9329
> URL: https://issues.apache.org/jira/browse/HDFS-9329
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9329.00.patch, HDFS-9329.01.patch
>
>
> {{testRateThrottling}} verifies that bootstrap transfer should timeout with a 
> very small {{DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY}} value. However, 
> throttling on the image sender only happens after sending each IO buffer. 
> Therefore, the test sometimes fails if the receiver receives the full fsimage 
> (which is smaller than IO buffer size) before throttling begins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9354) Fix TestBalancer#testBalancerWithZeroThreadsForMove on Windows

2015-11-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986041#comment-14986041
 ] 

Chris Nauroth commented on HDFS-9354:
-

Thanks, [~xyao].

Yes, you're right about {{ExpectedException}}.  On further review of the code, 
I don't think this part is really relevant to the current patch.  Please 
disregard this part of my feedback.  I'll review again when the change to use 
an {{@After}} cleanup method is available.
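
For reference, a minimal test skeleton (hypothetical; a stand-in class 
replaces MiniDFSCluster, and a generic runtime exception replaces 
HadoopIllegalArgumentException) of the {{@After}} cleanup pattern being 
requested.

{code}
import org.junit.After;
import org.junit.Test;

public class BalancerCleanupSketch {
  // Stand-in for MiniDFSCluster; shutdown() releases handles on NN dirs.
  static class MiniCluster {
    void shutdown() { /* release NN metadata directory handles */ }
  }

  private MiniCluster cluster;

  @After
  public void tearDown() {
    // Runs even when the test method exits via an expected exception,
    // so no open handles survive to break later tests.
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }

  @Test(expected = IllegalArgumentException.class)
  public void testZeroThreadsForMove() {
    cluster = new MiniCluster();
    throw new IllegalArgumentException("illegal configuration");
  }
}
{code}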

> Fix TestBalancer#testBalancerWithZeroThreadsForMove on Windows
> --
>
> Key: HDFS-9354
> URL: https://issues.apache.org/jira/browse/HDFS-9354
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-9354.00.patch
>
>
> This negative test expects HadoopIllegalArgumentException on an illegal 
> configuration. It uses JUnit (expected=HadoopIllegalArgumentException.class) 
> and passes fine on Linux.
> On Windows, this test passes as well. But it leaves open handles on the NN 
> metadata directories used by MiniDFSCluster. As a result, quite a few of the 
> subsequent TestBalancer unit tests can't start MiniDFSCluster; the open 
> handles prevent them from cleaning up NN metadata directories on Windows. 
> This JIRA is opened to explicitly catch the exception and ensure the test 
> cluster is properly shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8871) Decommissioning of a node with a failed volume may not start

2015-11-02 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8871:
--
Target Version/s: 2.7.3, 2.6.3  (was: 2.7.2, 2.6.3)

Moving this to 2.7.3 since there's been no update in a while.

> Decommissioning of a node with a failed volume may not start
> 
>
> Key: HDFS-8871
> URL: https://issues.apache.org/jira/browse/HDFS-8871
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Critical
>
> Since staleness may not be properly cleared, a node with a failed volume may 
> not actually get scanned for block replication. Nothing is being replicated 
> from these nodes.
> This bug does not manifest unless the datanode has a unique storage ID per 
> volume. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9260) Improve performance and GC friendliness of startup and FBRs

2015-11-02 Thread Staffan Friberg (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986062#comment-14986062
 ] 

Staffan Friberg commented on HDFS-9260:
---

Hi Daryn,

Thanks for taking a look at the patch.

1. FBR and startup times improve; please see the attached PDF.
2. Will need to check what we do here (and if I still have the old logs), but 
it doesn't feel like it should be affected.
3. We will be slightly slower than the current algorithms when deleting a file 
or removing replicas: currently the code goes through the LightWeightGSet to 
first look up/remove each affected BlockInfo and after that removes it from 
the linked list, while in my case it will be removed from the TreeSet, which 
requires a new lookup. However, while this is slower, I think the time that 
process takes is far outweighed by the time it takes to delete or redistribute 
blocks on all DNs. Deleting files with a large number of blocks seems to take 
on the order of hours, since we only send small parts of the total block list 
to each node on every heartbeat. Not too familiar with how aggressive the 
redistribution is in the event of a DN decommission.
4. It will decrease as long as the TreeSet is kept above a ~50% fill ratio, 
since the reference to each BlockInfo is now a single pointer from the TreeSet 
instead of the doubly linked list.

> Improve performance and GC friendliness of startup and FBRs
> ---
>
> Key: HDFS-9260
> URL: https://issues.apache.org/jira/browse/HDFS-9260
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
> Attachments: HDFS Block and Replica Management 20151013.pdf, 
> HDFS-7435.001.patch, HDFS-7435.002.patch, HDFS-7435.003.patch, 
> HDFS-7435.004.patch, HDFS-7435.005.patch, HDFS-7435.006.patch, 
> HDFS-7435.007.patch, HDFS-9260.008.patch, HDFS-9260.009.patch
>
>
> This patch changes the datastructures used for BlockInfos and Replicas to 
> keep them sorted. This allows faster and more GC friendly handling of full 
> block reports.
> Would like to hear peoples feedback on this change and also some help 
> investigating/understanding a few outstanding issues if we are interested in 
> moving forward with this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-11-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986075#comment-14986075
 ] 

Andrew Wang commented on HDFS-9348:
---

Nice find here. Yeah, I think this is a bug; we should throw an exception. The 
javadoc in HdfsAdmin for getEncryptionZoneForPath says:

{noformat}
   * Get the path of the encryption zone for a given file or directory.
   *
   * @param path The path to get the ez for.
   *
   * @return The EncryptionZone of the ez, or null if path is not in an ez.
   * @throws IOException            if there was a general IO exception
   * @throws AccessControlException if the caller does not have access to path
   * @throws FileNotFoundException  if the path does not exist
{noformat}
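
For illustration, a minimal sketch of the validation being asked for, with a 
hypothetical Resolver interface standing in for the NameNode's path 
resolution (this is not the actual patch): check existence first and throw 
FileNotFoundException, so callers never see a policy for a path that isn't 
there.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

final class EcPolicySketch {
  // Hypothetical stand-in for FSDirectory-style path resolution.
  interface Resolver {
    boolean exists(String src) throws IOException;
    Object policyFor(String src) throws IOException;
  }

  static Object getErasureCodingPolicy(Resolver r, String src)
      throws IOException {
    if (!r.exists(src)) {
      // Fail fast, matching the getEncryptionZoneForPath javadoc contract above.
      throw new FileNotFoundException("Path does not exist: " + src);
    }
    return r.policyFor(src);  // may still be null if no EC policy applies
  }
}
{code}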

> DFS GetErasureCodingPolicy API on a non-existent file should be handled 
> properly
> 
>
> Key: HDFS-9348
> URL: https://issues.apache.org/jira/browse/HDFS-9348
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HDFS-9348-00.patch
>
>
> Presently calling {{dfs#getErasureCodingPolicy()}} on a non-existent file is 
> returning the ErasureCodingPolicy info. As per the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-8777?focusedCommentId=14981077=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14981077]
>  it should validate the path and throw FileNotFoundException.
> Also, the {{dfs#getEncryptionZoneForPath()}} API has the same behavior. Again, 
> we can discuss adding the file existence validation in that case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9339) Extend full test of KMS ACLs

2015-11-02 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9339:
---
Attachment: HDFS-9339.002.patch

[~zhz], I tried making the tests use one big try-finally wrapper, and I just 
didn't like it as much. One thing the repeated try-finally blocks do is make 
it really obvious where the subsections of the tests are; taking the blocks 
out leaves the tests rambling a bit. Unless it's a big deal for you, I think 
the code is actually clearer with the try-finally blocks.
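
To illustrate, a self-contained toy of that repeated try-finally shape (an 
in-memory ACL map and hypothetical helper names, not the actual KMS test 
code): each subsection sets its configuration, asserts, and restores state in 
its own finally, so the subsection boundaries stay visible.

{code:java}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class TestAclStructureSketch {
  // In-memory stand-in for the KMS ACL configuration.
  private final Map<String, String> acls = new HashMap<>();

  private void setAcl(String op, String user) { acls.put(op, user); }
  private void clearAcls()                    { acls.clear(); }
  private boolean mayCreateKey(String user)   { return user.equals(acls.get("CREATE")); }

  @Test
  public void testCreateKeyAcl() {
    // Subsection 1: the ACL names this user, so create is allowed.
    setAcl("CREATE", "alice");
    try {
      assertTrue(mayCreateKey("alice"));
    } finally {
      clearAcls();   // each subsection restores state itself
    }

    // Subsection 2: the ACL names someone else, so create is denied.
    setAcl("CREATE", "bob");
    try {
      assertFalse(mayCreateKey("alice"));
    } finally {
      clearAcls();
    }
  }
}
{code}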

I did happen across a bug in HDFS-9295 that I fixed in the patch I just posted.

> Extend full test of KMS ACLs
> 
>
> Key: HDFS-9339
> URL: https://issues.apache.org/jira/browse/HDFS-9339
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9339.001.patch, HDFS-9339.002.patch
>
>
> HDFS-9295 adds an end-to-end test for KMS, but it is missing a dimension.  
> The tests added in that JIRA hold the configuration constant and test that 
> all operations succeed or fail as expected.  More tests are needed that hold 
> the operation constant and test that all possible configurations cause the 
> operations to succeed or fail as expected.  This JIRA is to add those tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9219) Even if permission is enabled in an environment, while resolving reserved paths there is no check on permission.

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14984936#comment-14984936
 ] 

Hadoop QA commented on HDFS-9219:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 22s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 20s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 29s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 148m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-02 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770038/HDFS-9219.3.patch |
| JIRA Issue | HDFS-9219 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 6fa2b9a9f23e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 

[jira] [Commented] (HDFS-9049) Make Datanode Netty reverse proxy port to be configurable

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14984990#comment-14984990
 ] 

Hadoop QA commented on HDFS-9049:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 16s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 396, now 397). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 5s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 30s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 54s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 144m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | 

[jira] [Commented] (HDFS-8425) [umbrella] Performance tuning, investigation and optimization for erasure coding

2015-11-02 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14984928#comment-14984928
 ] 

Walter Su commented on HDFS-8425:
-

Thanks [~tfukudom]! The results look good.

I agree we should test reads with some DNs killed, but I'm afraid it won't be 
much different in TestDFSIO.

I've only tested writing. When I ran TestDFSIO, I found the throughput of EC 
is slightly better than repl, the same as in [~tfukudom]'s tests. I opened a 
disk monitor and a network monitor; the disk monitor shows that disk 
utilization often hits 100%. I think that's because we can use all the CPUs 
of the NodeManagers, so the bottleneck is disk/network IO. This is useful 
because we can write EC files in batch, for example when converting multiple 
repl files to EC files.

The speed of single-client writing is constrained by coding speed. Per a 
local test, it's 2.5x slower than repl, so we need a faster codec. I think 
that's also important, right? But I'm not sure there's a use case bounded by 
the speed of single-client writing; usually we write files using repl and 
convert them to EC files later.

What do you think?

> [umbrella] Performance tuning, investigation and optimization for erasure 
> coding
> 
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: testClientWriteReadFile_v1.pdf, 
> testdfsio-read-mbsec.png, testdfsio-write-mbsec.png
>
>
> This {{umbrella}} jira aims to track performance tuning, investigation and 
> optimization for erasure coding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9357) NN UI is not showing which DN is "Decommissioned "and "Decommissioned & dead".

2015-11-02 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9357:

Attachment: decommisioned_n_dead_.png

> NN UI is not showing which DN is "Decommissioned "and "Decommissioned & dead".
> --
>
> Key: HDFS-9357
> URL: https://issues.apache.org/jira/browse/HDFS-9357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: decommisioned_n_dead_.png, decommissioned_.png
>
>
> NN UI is not showing which DN is "Decommissioned "and "Decommissioned & dead"
> Root Cause --
> "Decommissioned" and "Decommissioned & dead" icon not reflected on NN UI
> When DN is in Decommissioned status or in "Decommissioned & dead" status, 
> same status is not reflected on NN UI 
> DN status is as below --
> hdfs dfsadmin -report
> Name: 10.xx.xx.xx1:50076 (host-xx1)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Configured Capacity: 230501634048 (214.67 GB)
> DFS Used: 36864 (36 KB)
> Dead datanodes (1):
> Name: 10.xx.xx.xx2:50076 (host-xx2)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Same is not reflected on NN UI.
> Attached NN UI snapshots for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9357) NN UI is not showing which DN is "Decommissioned "and "Decommissioned & dead".

2015-11-02 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9357:

Attachment: decommissioned_.png

> NN UI is not showing which DN is "Decommissioned "and "Decommissioned & dead".
> --
>
> Key: HDFS-9357
> URL: https://issues.apache.org/jira/browse/HDFS-9357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: decommisioned_n_dead_.png, decommissioned_.png
>
>
> NN UI is not showing which DN is "Decommissioned "and "Decommissioned & dead"
> Root Cause --
> "Decommissioned" and "Decommissioned & dead" icon not reflected on NN UI
> When DN is in Decommissioned status or in "Decommissioned & dead" status, 
> same status is not reflected on NN UI 
> DN status is as below --
> hdfs dfsadmin -report
> Name: 10.xx.xx.xx1:50076 (host-xx1)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Configured Capacity: 230501634048 (214.67 GB)
> DFS Used: 36864 (36 KB)
> Dead datanodes (1):
> Name: 10.xx.xx.xx2:50076 (host-xx2)
> Hostname: host-xx
> Decommission Status : Decommissioned
> Same is not reflected on NN UI.
> Attached NN UI snapshots for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985007#comment-14985007
 ] 

Hadoop QA commented on HDFS-9242:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 14s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 40s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 141m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-02 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770046/HDFS-9242-003.patch |
| JIRA Issue | HDFS-9242 |
| Optional Tests |  asflicense  
