[jira] [Updated] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-6826:
--
Attachment: HDFS-6826.14.patch

Fixing the Javadoc error.
The remaining test case failure is unrelated.

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826v3.patch, HDFS-6826v4.patch, HDFS-6826v5.patch, HDFS-6826v6.patch, 
 HDFS-6826v7.1.patch, HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, 
 HDFS-6826v7.4.patch, HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, 
 HDFS-6826v7.patch, HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When HBase data, HiveMetaStore data or Search data is accessed via services 
 (HBase region servers, HiveServer2, Impala, Solr), the services can enforce 
 permissions on the corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users reading the underlying data files (e.g. from a MapReduce 
 job), that the permissions of the data files map to the permissions of the 
 corresponding data entity (e.g. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities' permissions.
 I’ll be posting a design proposal in the next few days.
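For illustration only, here is a minimal sketch of what such a delegation hook could look like. The interface and class names below are hypothetical, not the actual HDFS-6826 API:

```java
// Hypothetical sketch of a NameNode authorization delegation hook; the
// interface and method names are illustrative, not the HDFS-6826 patch API.
public class AuthzPluginSketch {
    /** Resolves an HDFS path to the permissions of the data entity backing it. */
    interface AuthorizationProvider {
        // Returns a POSIX-style permission string for the given path, resolved
        // from the external system's entity permissions.
        String getPermission(String path);
    }

    /** Toy provider that treats /warehouse/<table>/... as a managed table. */
    static class TableBackedProvider implements AuthorizationProvider {
        @Override
        public String getPermission(String path) {
            if (path.startsWith("/warehouse/")) {
                return "r-xr-x---";  // pretend the table entity grants group read
            }
            return "rw-r--r--";      // fall back to plain HDFS file semantics
        }
    }

    public static void main(String[] args) {
        AuthorizationProvider p = new TableBackedProvider();
        System.out.println(p.getPermission("/warehouse/sales/part-00000"));
        System.out.println(p.getPermission("/user/alice/data.txt"));
    }
}
```

The NameNode would consult such a provider during permission checks instead of its own inode permissions.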



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7961) Trigger full block report after hot swapping disk

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370944#comment-14370944
 ] 

Hadoop QA commented on HDFS-7961:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705823/HDFS-7961.001.patch
  against trunk revision e37ca22.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.mover.TestMover
  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9997//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9997//console

This message is automatically generated.

 Trigger full block report after hot swapping disk
 -

 Key: HDFS-7961
 URL: https://issues.apache.org/jira/browse/HDFS-7961
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7961.000.patch, HDFS-7961.001.patch


 As discussed in HDFS-7960, the NN could not remove the data storage metadata 
 from its memory. 
 The DN should trigger a full block report immediately after hot swapping 
 drives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7833) DataNode reconfiguration does not recalculate valid volumes required, based on configured failed volumes tolerated.

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370812#comment-14370812
 ] 

Hadoop QA commented on HDFS-7833:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705796/HDFS-7833.001.patch
  against trunk revision e37ca22.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9995//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9995//console

This message is automatically generated.

 DataNode reconfiguration does not recalculate valid volumes required, based 
 on configured failed volumes tolerated.
 ---

 Key: HDFS-7833
 URL: https://issues.apache.org/jira/browse/HDFS-7833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7833.000.patch, HDFS-7833.001.patch


 DataNode reconfiguration never recalculates 
 {{FsDatasetImpl#validVolsRequired}}.  This may cause incorrect behavior of 
 the {{dfs.datanode.failed.volumes.tolerated}} property if reconfiguration 
 causes the DataNode to run with a different total number of volumes.
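The relationship between the two values is simple arithmetic; the following sketch (with a hypothetical helper name, not the actual {{FsDatasetImpl}} code) shows the recalculation that reconfiguration should perform:

```java
// Illustrative recalculation of the minimum valid volumes a DataNode needs.
// computeValidVolsRequired is a hypothetical helper, not the FsDatasetImpl API.
public class ValidVolsSketch {
    static int computeValidVolsRequired(int configuredVolumes, int failedVolumesTolerated) {
        // dfs.datanode.failed.volumes.tolerated bounds how many volumes may fail,
        // so the DataNode must keep at least (total - tolerated) healthy volumes.
        return configuredVolumes - failedVolumesTolerated;
    }

    public static void main(String[] args) {
        // 4 volumes with 1 failure tolerated: 3 valid volumes required.
        System.out.println(computeValidVolsRequired(4, 1));
        // After reconfiguration adds 2 volumes, the threshold must be recomputed.
        System.out.println(computeValidVolsRequired(6, 1));
    }
}
```

If the threshold is never recomputed, a DataNode reconfigured from 4 to 6 volumes would keep the stale requirement of 3 and tolerate more failures than configured.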



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7835) make initial sleeptime in locateFollowingBlock configurable for DFSClient.

2015-03-20 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370828#comment-14370828
 ] 

Yongjun Zhang commented on HDFS-7835:
-

Thanks [~zxu]. 

+1, and I will commit tomorrow.


 make initial sleeptime in locateFollowingBlock configurable for DFSClient.
 --

 Key: HDFS-7835
 URL: https://issues.apache.org/jira/browse/HDFS-7835
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HDFS-7835.000.patch, HDFS-7835.001.patch, 
 HDFS-7835.002.patch


 Make initial sleeptime in locateFollowingBlock configurable for DFSClient.
 Currently the sleeptime/localTimeout in locateFollowingBlock/completeFile from 
 DFSOutputStream is hard-coded as 400 ms, but the number of retries can be 
 configured by dfs.client.block.write.locateFollowingBlock.retries. We should 
 also make the initial sleeptime configurable to give users more flexibility to 
 control both retry and delay.
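The retry/delay interplay can be sketched as follows. The helper below is illustrative (it assumes the sleep doubles between attempts, as the description implies), not the actual DFSOutputStream code:

```java
// Sketch of the configurable-backoff idea: a configurable initial sleep that
// doubles on each locateFollowingBlock retry. Names are illustrative.
public class BackoffSketch {
    static long totalDelayMs(long initialSleepMs, int retries) {
        long sleep = initialSleepMs;
        long total = 0;
        for (int i = 0; i < retries; i++) {
            total += sleep;
            sleep *= 2;  // delay grows geometrically between attempts
        }
        return total;
    }

    public static void main(String[] args) {
        // Hard-coded 400 ms initial sleep, 5 retries: 400+800+1600+3200+6400
        System.out.println(totalDelayMs(400, 5));   // 12400
        // A smaller configurable initial sleep shortens worst-case latency.
        System.out.println(totalDelayMs(100, 5));   // 3100
    }
}
```

This shows why exposing the initial sleep matters: it scales the entire worst-case wait, not just the first delay.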



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370854#comment-14370854
 ] 

Hudson commented on HDFS-7962:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7380 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7380/])
HDFS-7962. Remove duplicated logs in BlockManager. (yliu) (yliu: rev 
978ef11f26794c22c7289582653b32268478e23e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 Also do a few refinements of the logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7957) Truncate should verify quota before making changes

2015-03-20 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370905#comment-14370905
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7957:
---

In INodeFile.computeQuotaDeltaForTruncate, when sf != null, I think the code 
can be simplified to below:
{code}
if (sf != null) {
  FileDiff diff = sf.getDiffs().getLast();
  if (diff != null) {
    final BlockInfoContiguous[] last = diff.getBlocks();
    if (last != null) {
      for (int i = (onBoundary ? n : n-1);
          i < blocks.length && i < last.length && last[i].equals(blocks[i]);
          i++) {
        truncateSize -= blocks[i].getNumBytes();
      }
    }
  }
}
{code}
The file could have been appended and truncated previously, so it is impossible to 
have last\[i].equals(blocks\[j]) for i != j.  Also, if 
last\[i].equals(blocks\[i]) == false for some i, then 
last\[j].equals(blocks\[j]) == false for all j >= i.  Do you agree?
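The prefix-comparison loop can be exercised standalone. In this minimal model, block sizes stand in for {{BlockInfoContiguous}} and size equality stands in for {{equals()}}; all names are illustrative:

```java
// Minimal standalone model of the snapshot-prefix comparison: block sizes
// stand in for BlockInfoContiguous, equal sizes stand in for equals().
public class TruncateDeltaSketch {
    // Subtracts from truncateSize the bytes of current blocks that are still
    // shared with the last snapshot diff (a matching prefix starting at n).
    static long adjust(long truncateSize, long[] blocks, long[] last, int n) {
        for (int i = n; i < blocks.length && i < last.length && last[i] == blocks[i]; i++) {
            truncateSize -= blocks[i];
        }
        return truncateSize;
    }

    public static void main(String[] args) {
        long[] blocks = {128, 128, 64};
        long[] shared = {128, 128, 64};
        // All blocks from index 0 are shared with the snapshot, so nothing
        // beyond the snapshot copy is actually freed by the truncate.
        System.out.println(adjust(320, blocks, shared, 0));
    }
}
```

The loop stops at the first mismatch, which is exactly the monotonicity property the comment argues for.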



 Truncate should verify quota before making changes
 --

 Key: HDFS-7957
 URL: https://issues.apache.org/jira/browse/HDFS-7957
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Critical
 Attachments: HDFS-7957.000.patch, HDFS-7957.001.patch


 This is a similar issue with HDFS-7587: for truncate we should also verify 
 quota in the beginning and update quota in the end.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7833) DataNode reconfiguration does not recalculate valid volumes required, based on configured failed volumes tolerated.

2015-03-20 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370814#comment-14370814
 ] 

Lei (Eddy) Xu commented on HDFS-7833:
-

These failing tests are not related.

 DataNode reconfiguration does not recalculate valid volumes required, based 
 on configured failed volumes tolerated.
 ---

 Key: HDFS-7833
 URL: https://issues.apache.org/jira/browse/HDFS-7833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7833.000.patch, HDFS-7833.001.patch


 DataNode reconfiguration never recalculates 
 {{FsDatasetImpl#validVolsRequired}}.  This may cause incorrect behavior of 
 the {{dfs.datanode.failed.volumes.tolerated}} property if reconfiguration 
 causes the DataNode to run with a different total number of volumes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7835) make initial sleeptime in locateFollowingBlock configurable for DFSClient.

2015-03-20 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370821#comment-14370821
 ] 

zhihai xu commented on HDFS-7835:
-

All these test failures are unrelated to my change.
TestTracing is reported at HDFS-7963.
TestRetryCacheWithHA and TestEncryptionZonesWithKMS pass in my latest 
local build:
{code}
---
 T E S T S
---
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.994 sec - 
in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Results :
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0
---
 T E S T S
---
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.67 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Results :
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0
{code}

 make initial sleeptime in locateFollowingBlock configurable for DFSClient.
 --

 Key: HDFS-7835
 URL: https://issues.apache.org/jira/browse/HDFS-7835
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HDFS-7835.000.patch, HDFS-7835.001.patch, 
 HDFS-7835.002.patch


 Make initial sleeptime in locateFollowingBlock configurable for DFSClient.
 Currently the sleeptime/localTimeout in locateFollowingBlock/completeFile from 
 DFSOutputStream is hard-coded as 400 ms, but the number of retries can be 
 configured by dfs.client.block.write.locateFollowingBlock.retries. We should 
 also make the initial sleeptime configurable to give users more flexibility to 
 control both retry and delay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6599) 2.4 addBlock is 10 to 20 times slower compared to 0.23

2015-03-20 Thread Anthony Hsu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370834#comment-14370834
 ] 

Anthony Hsu commented on HDFS-6599:
---

We're encountering NameNode slowness on Hadoop 2.3 and wondering whether this 
patch will solve the problem. Is there any way to benchmark on a smaller 
cluster? Why does this problem only become evident on large, multi-thousand 
node clusters?

 2.4 addBlock is 10 to 20 times slower compared to 0.23
 --

 Key: HDFS-6599
 URL: https://issues.apache.org/jira/browse/HDFS-6599
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0, 2.4.0
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 2.5.0

 Attachments: HDFS-6599.patch


 From one of our busiest 0.23 clusters:
 {panel}
 AddBlockAvgTime : 0.9514711501719515
 CreateAvgTime : 1.7564162389174
 CompleteAvgTime : 1.3310406035056548
 BlockReceivedAndDeletedAvgTime : 0.661210005151392
 {panel}
 From a not-so-busy 2.4 cluster:
 {panel}
 AddBlockAvgTime : 10.084
 CreateAvgTime : 1.0
 CompleteAvgTime : 1.1112
 BlockReceivedAndDeletedAvgTime : 0.07692307692307694
 {panel}
 When the 2.4 cluster gets a moderate amount of write requests, the latency is 
 terrible. E.g. addBlock goes upward of 60ms. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7962:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the review, Andrew! Committed to trunk, branch-2, branch-2.7.

 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 Also do a few refinements of the logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7427) [fetchimage] Should give correct error message when it's not able to flush the image file.

2015-03-20 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370893#comment-14370893
 ] 

Walter Su commented on HDFS-7427:
-

It's fixed in my hadoop 2.6.0 version.
{code}
ds-35:/home/skh/hadoop-2.6.0 # bin/hdfs dfsadmin -fetchImage ./
15/03/20 15:23:49 INFO namenode.TransferFsImage: Opening connection to 
https://ds-34:50470/imagetransfer?getimage=1&txid=latest
15/03/20 15:23:49 INFO namenode.TransferFsImage: Image Transfer timeout 
configured to 6 milliseconds
fetchImage: Image transfer servlet at 
https://ds-34:50470/imagetransfer?getimage=1&txid=latest failed with status 
code 403
Response message:
Only Namenode, Secondary Namenode, and administrators may access this servlet
{code}

 [fetchimage] Should give correct error message when it's not able to flush the 
 image file.
 ---

 Key: HDFS-7427
 URL: https://issues.apache.org/jira/browse/HDFS-7427
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula

 Scenario:
 Start the cluster in secure mode and enable only HTTPS.
 Run the fetchImage command as a user not having permission to access the 
 folder.
  *From Namenode log* 
 {noformat}
 2014-11-24 16:46:49,072 | WARN  | 614008292@qtp-1263063368-200 | Committed 
 before 410 GetImage failed. org.mortbay.jetty.EofException
 at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
 at 
 org.mortbay.jetty.HttpConnection.flushResponse(HttpConnection.java:693)
 at 
 org.mortbay.jetty.HttpConnection$Output.close(HttpConnection.java:999)
 at 
 org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:376)
 at 
 org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:332)
 at 
 org.apache.hadoop.hdfs.server.namenode.ImageServlet$1.serveFile(ImageServlet.java:158)
 at 
 org.apache.hadoop.hdfs.server.namenode.ImageServlet$1.run(ImageServlet.java:120)
 at 
 org.apache.hadoop.hdfs.server.namenode.ImageServlet$1.run(ImageServlet.java:101)
 at java.security.AccessController.doPrivileged(Native Method)
 {noformat}
  *From Commandline* 
 [omm@linux158 bin]$ ./hdfs dfsadmin -fetchImage /srv
 OutPut : 123456
  *{color:red}fetchImage: Unable to download to any storage directory{color}* 
  *It's not that it is unable to download; the message should indicate permission denied* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7963) Fix expected tracing spans in TestTracing along with HDFS-7054

2015-03-20 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-7963:
---
Affects Version/s: 2.7.0
   Status: Patch Available  (was: Open)

 Fix expected tracing spans in TestTracing along with HDFS-7054
 --

 Key: HDFS-7963
 URL: https://issues.apache.org/jira/browse/HDFS-7963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7963.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7963) Fix expected tracing spans in TestTracing along with HDFS-7054

2015-03-20 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-7963:
---
Attachment: HDFS-7963.001.patch

There are no tracing spans named DFSOutputStream any more. In addition, spans 
having multiple parents do not have a specific trace id.

 Fix expected tracing spans in TestTracing along with HDFS-7054
 --

 Key: HDFS-7963
 URL: https://issues.apache.org/jira/browse/HDFS-7963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7963.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6841) Use Time.monotonicNow() wherever applicable instead of Time.now()

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370950#comment-14370950
 ] 

Hadoop QA commented on HDFS-6841:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705831/HDFS-6841-006.patch
  against trunk revision e37ca22.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9998//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9998//console

This message is automatically generated.

 Use Time.monotonicNow() wherever applicable instead of Time.now()
 -

 Key: HDFS-6841
 URL: https://issues.apache.org/jira/browse/HDFS-6841
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-6841-001.patch, HDFS-6841-002.patch, 
 HDFS-6841-003.patch, HDFS-6841-004.patch, HDFS-6841-005.patch, 
 HDFS-6841-006.patch


 {{Time.now()}} is used in many places to calculate elapsed time.
 It should be replaced with {{Time.monotonicNow()}} to avoid the effect of 
 system time changes on elapsed time calculations.
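A minimal sketch of the difference: a nanoTime-based clock (the idea behind {{Time.monotonicNow()}}) only moves forward, while {{System.currentTimeMillis()}} can jump when the wall clock is adjusted. The helper name below is illustrative:

```java
// Why monotonic time for elapsed-time math: System.currentTimeMillis() follows
// the wall clock (which can be set backwards by NTP or an admin), while
// System.nanoTime() is monotonic. monotonicNowMs is an illustrative helper.
public class MonotonicSketch {
    static long monotonicNowMs() {
        return System.nanoTime() / 1_000_000;  // monotonic milliseconds
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNowMs();
        Thread.sleep(50);
        long elapsed = monotonicNowMs() - start;
        // Elapsed time stays correct regardless of wall-clock changes.
        System.out.println(elapsed >= 50);
    }
}
```

With {{Time.now()}}, a clock step backwards during the sleep could make the computed elapsed time negative.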



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7881) TestHftpFileSystem#testSeek fails in branch-2

2015-03-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371076#comment-14371076
 ] 

Akira AJISAKA commented on HDFS-7881:
-

Thanks [~brahmareddy] for the update. Mostly looks good to me. Some minor 
comments:

{code}
  // Try to get the content length by parsing the content range
  // because HftpFileSystem does not return the content length
  // if the content is partial.
{code}
1. I'm thinking it's better to add the above comment  between
{code}
if (cl == null) {
{code}
and
{code}
  if (connection.getResponseCode() == HttpStatus.SC_PARTIAL_CONTENT) {
{code}
.
{code}
} catch (Exception ie) {
{code}
2. {{ie}} should be {{e}} since the expected exceptions are not {{IOException}}.

{code}
  throw new IOException(
      "failed to get content length by parsing the content range");
{code}
3. Would you add {{range}} and the original error message ({{e.getMessage()}}) 
to the error message?
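The fallback being reviewed can be sketched as follows; the helper name and exact parsing are illustrative, not the actual {{ByteRangeInputStream}} code:

```java
// Sketch of deriving content length from a Content-Range header when a 206
// (Partial Content) response lacks Content-Length. Illustrative helper only.
import java.io.IOException;

public class ContentRangeSketch {
    // Parses a header like "bytes 7-9/10" and returns end - start + 1.
    static long lengthFromContentRange(String range) throws IOException {
        try {
            String[] bounds = range.split(" ")[1].split("/")[0].split("-");
            return Long.parseLong(bounds[1]) - Long.parseLong(bounds[0]) + 1;
        } catch (Exception e) {
            // Per the review: include the raw header and cause in the message.
            throw new IOException("failed to get content length by parsing the "
                + "content range: " + range + " " + e.getMessage());
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(lengthFromContentRange("bytes 7-9/10"));  // 3
    }
}
```

For the test's range "bytes 7-9/10", the derived length is 9 - 7 + 1 = 3 bytes.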

 TestHftpFileSystem#testSeek fails in branch-2
 -

 Key: HDFS-7881
 URL: https://issues.apache.org/jira/browse/HDFS-7881
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
Priority: Blocker
 Attachments: HDFS-7881-002.patch, HDFS-7881.patch


 TestHftpFileSystem#testSeek fails in branch-2.
 {code}
 ---
  T E S T S
 ---
 Running org.apache.hadoop.hdfs.web.TestHftpFileSystem
 Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.201 sec 
 <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestHftpFileSystem
 testSeek(org.apache.hadoop.hdfs.web.TestHftpFileSystem)  Time elapsed: 0.054 
 sec  <<< ERROR!
 java.io.IOException: Content-Length is missing: {null=[HTTP/1.1 206 Partial 
 Content], Date=[Wed, 04 Mar 2015 05:32:30 GMT, Wed, 04 Mar 2015 05:32:30 
 GMT], Expires=[Wed, 04 Mar 2015 05:32:30 GMT, Wed, 04 Mar 2015 05:32:30 GMT], 
 Connection=[close], Content-Type=[text/plain; charset=utf-8], 
 Server=[Jetty(6.1.26)], Content-Range=[bytes 7-9/10], Pragma=[no-cache, 
 no-cache], Cache-Control=[no-cache]}
   at 
 org.apache.hadoop.hdfs.web.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:132)
   at 
 org.apache.hadoop.hdfs.web.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:104)
   at 
 org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:181)
   at java.io.FilterInputStream.read(FilterInputStream.java:83)
   at 
 org.apache.hadoop.hdfs.web.TestHftpFileSystem.testSeek(TestHftpFileSystem.java:253)
 Results :
 Tests in error: 
   TestHftpFileSystem.testSeek:253 » IO Content-Length is missing: 
 {null=[HTTP/1
 Tests run: 14, Failures: 0, Errors: 1, Skipped: 0
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-7854:

Attachment: HDFS-7854-006.patch

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new DataStreamer 
 will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371090#comment-14371090
 ] 

Li Bo commented on HDFS-7854:
-

The failure of {{TestTracing#testWriteTraceHooks}} seems to be caused by 
HDFS-7054, and HDFS-7963 was just created to fix this problem. I downloaded the 
current code from trunk and also found this test failing.

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new DataStreamer 
 will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371159#comment-14371159
 ] 

Hudson commented on HDFS-7962:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/138/])
HDFS-7962. Remove duplicated logs in BlockManager. (yliu) (yliu: rev 
978ef11f26794c22c7289582653b32268478e23e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 Also do a few refinements of the logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7816) Unable to open webhdfs paths with +

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371162#comment-14371162
 ] 

Hudson commented on HDFS-7816:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/138/])
HDFS-7816. Unable to open webhdfs paths with +. Contributed by Haohui Mai 
(kihwal: rev e79be0ee123d05104eb34eb854afcf9fa78baef2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java


 Unable to open webhdfs paths with +
 -

 Key: HDFS-7816
 URL: https://issues.apache.org/jira/browse/HDFS-7816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Jason Lowe
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7816.002.patch, HDFS-7816.patch, HDFS-7816.patch


 webhdfs requests to open files with % characters in the filename fail because 
 the filename is not being decoded properly.  For example:
 $ hadoop fs -cat 'webhdfs://nn/user/somebody/abc%def'
 cat: File does not exist: /user/somebody/abc%25def
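A minimal illustration of why {{+}} and {{%}} are sensitive here: form-style decoding ({{URLDecoder}}) turns {{+}} into a space, while URI path decoding keeps it literal. The helper methods below are illustrative, not the ParameterParser code:

```java
// Why "+" breaks in webhdfs paths: application/x-www-form-urlencoded decoding
// (java.net.URLDecoder) maps "+" to a space, which is only correct for query
// strings, not path components. URI path decoding keeps "+" literal.
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URLDecoder;

public class PlusDecodeSketch {
    // Form-style decoding: "+" becomes a space -- the bug's failure mode.
    static String formDecode(String s) throws UnsupportedEncodingException {
        return URLDecoder.decode(s, "UTF-8");
    }

    // URI path decoding: "+" stays literal, and %25 decodes to "%".
    static String pathDecode(String s) throws URISyntaxException {
        return new URI("http://host" + s).getPath();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(formDecode("/user/somebody/a+b%25c"));
        System.out.println(pathDecode("/user/somebody/a+b%25c"));
    }
}
```

The same mismatch explains the description's example: a naive decoder re-encodes "abc%def" as "abc%25def" instead of treating it as an already-decoded path.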



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371164#comment-14371164
 ] 

Hudson commented on HDFS-7932:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/138/])
HDFS-7932. Speed up the shutdown of datanode during rolling upgrade. 
Contributed by Kihwal Lee. (kihwal: rev 
61a4c7fc9891def0e85edf7e41d74c6b92c85fdb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.7.0

 Attachments: HDFS-7932.patch, HDFS-7932.patch


 Datanode normally exits in 3 seconds after receiving {{shutdownDatanode}} 
 command. However, sometimes it doesn't, especially when the IO is busy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7963) Fix expected tracing spans in TestTracing along with HDFS-7054

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371167#comment-14371167
 ] 

Hadoop QA commented on HDFS-7963:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705864/HDFS-7963.001.patch
  against trunk revision 978ef11.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1//console

This message is automatically generated.

 Fix expected tracing spans in TestTracing along with HDFS-7054
 --

 Key: HDFS-7963
 URL: https://issues.apache.org/jira/browse/HDFS-7963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7963.001.patch








[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371161#comment-14371161
 ] 

Hudson commented on HDFS-7930:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/138/])
HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu) 
(yliu: rev e37ca221bf4e9ae5d5e667d8ca284df9fdb33199)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 commitBlockSynchronization() does not remove locations
 --

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7930.001.patch, HDFS-7930.002.patch, 
 HDFS-7930.003.patch


 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
 original block, it does not remove the unconfirmed locations. As a result, 
 the block stores locations of different lengths or genStamps (corrupt).
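The intended behavior can be sketched in isolation (illustrative types; the real code works on storage objects inside BlockManager, so names here are hypothetical):

```java
import java.util.*;

public class PruneLocations {
    // Sketch of the intended fix: after block recovery, any previously known
    // location that is absent from newTargets is unconfirmed and should be
    // dropped rather than kept as a stale (possibly corrupt) replica.
    static Set<String> prune(Set<String> knownLocations, List<String> newTargets) {
        Set<String> kept = new HashSet<>(knownLocations);
        kept.retainAll(newTargets);
        return kept;
    }
}
```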





[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371048#comment-14371048
 ] 

Hadoop QA commented on HDFS-6826:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705842/HDFS-6826.14.patch
  against trunk revision 4e886eb.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build///testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build///console

This message is automatically generated.

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826v3.patch, HDFS-6826v4.patch, HDFS-6826v5.patch, HDFS-6826v6.patch, 
 HDFS-6826v7.1.patch, HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, 
 HDFS-6826v7.4.patch, HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, 
 HDFS-6826v7.patch, HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When Hbase data, HiveMetaStore data or Search data is accessed via services 
 (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
 permissions on corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users accessing the underlying data files (i.e. from a MapReduce 
 job), that the permission of the data files map to the permissions of the 
 corresponding data entity (i.e. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities permissions.
 I’ll be posting a design proposal in the next few days.





[jira] [Updated] (HDFS-7966) New Data Transfer Protocol via HTTP/2

2015-03-20 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7966:
-
Labels: gsoc gsoc2015 mentor  (was: gsoc2015 mentor)

 New Data Transfer Protocol via HTTP/2
 -

 Key: HDFS-7966
 URL: https://issues.apache.org/jira/browse/HDFS-7966
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Qianqian Shi
  Labels: gsoc, gsoc2015, mentor

 The current Data Transfer Protocol (DTP) implements a rich set of features 
 that span across multiple layers, including:
 * Connection pooling and authentication (session layer)
 * Encryption (presentation layer)
 * Data writing pipeline (application layer)
 All these features are HDFS-specific and defined by the implementation. As a 
 result, it requires a non-trivial amount of work to implement HDFS clients and 
 servers.
 This jira explores delegating the responsibilities of the session and 
 presentation layers to the HTTP/2 protocol. In particular, HTTP/2 handles 
 connection multiplexing, QoS, authentication and encryption, reducing the 
 scope of DTP to the application layer only. By leveraging an existing HTTP/2 
 library, it should simplify the implementation of both HDFS clients and 
 servers.





[jira] [Commented] (HDFS-7928) Scanning blocks from disk during rolling upgrade startup takes a lot of time if disks are busy

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372359#comment-14372359
 ] 

Hadoop QA commented on HDFS-7928:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706033/HDFS-7928-v2.patch
  against trunk revision d81109e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10007//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10007//console

This message is automatically generated.

 Scanning blocks from disk during rolling upgrade startup takes a lot of time 
 if disks are busy
 --

 Key: HDFS-7928
 URL: https://issues.apache.org/jira/browse/HDFS-7928
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
 Attachments: HDFS-7928-v1.patch, HDFS-7928-v2.patch, HDFS-7928.patch


 We observed this issue in a rolling upgrade to 2.6.x on one of our clusters.
 One of the disks was very busy, and it took a long time to scan that disk 
 compared to the other disks.
 The sar (System Activity Reporter) data confirmed that the particular disk 
 was very busy performing IO operations.
 This is a request for an improvement to the datanode rolling upgrade.
 During shutdown, we can persist the whole volume map to disk and let the 
 datanode read that file to rebuild the volume map during startup after a 
 rolling upgrade.
 The datanode process would then not need to scan every disk and read each 
 block.
 This will significantly improve datanode startup time.
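The proposed improvement can be illustrated with a minimal sketch (all names are hypothetical, not the actual DataNode code): serialize the block-id to replica-file map at shutdown and reload it at startup instead of rescanning the disks.

```java
import java.io.*;
import java.util.HashMap;

public class VolumeMapCache {
    // Hypothetical sketch: persist the block-id -> replica-file map at
    // shutdown, and reload it at startup instead of scanning every disk.
    static void save(HashMap<Long, String> volumeMap, File f) {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(volumeMap);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @SuppressWarnings("unchecked")
    static HashMap<Long, String> load(File f) {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (HashMap<Long, String>) in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A real implementation would also need to validate the cached map (e.g. against the volume's generation or upgrade marker) and fall back to a full scan if it is stale.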





[jira] [Work started] (HDFS-7969) Erasure coding: lease recovery for striped block groups

2015-03-20 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7969 started by Zhe Zhang.
---
 Erasure coding: lease recovery for striped block groups
 ---

 Key: HDFS-7969
 URL: https://issues.apache.org/jira/browse/HDFS-7969
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang







[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372392#comment-14372392
 ] 

Hadoop QA commented on HDFS-7942:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706047/HDFS-7942.002.patch
  against trunk revision 586348e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
  org.apache.hadoop.fs.TestHdfsNativeCodeLoader
  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10009//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10009//console

This message is automatically generated.

 NFS: support regexp grouping in nfs.exports.allowed.hosts
 -

 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch


 Thanks, [~yeshavora], for reporting this problem.
 Set regex value in nfs.exports.allowed.hosts property.
 {noformat}
 <property><name>nfs.exports.allowed.hosts</name><value>206.190.52.[26|23] 
 rw</value></property>
 {noformat}
 With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
 act as an NFS client. In other words, no host can mount NFS with this regex 
 value; every attempt fails with an access-denied error.
 {noformat}
 $ sudo su - -c mount -o 
 soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
 /tmp/tmp_mnt root
 mount.nfs: access denied by server while mounting 206.190.52.23:/
 {noformat}
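The failure is consistent with how {{[26|23]}} parses as a regular expression: square brackets form a character class matching a single character, not an alternation of the two suffixes. A small illustrative sketch (not the actual NFS export-matching code):

```java
import java.util.regex.Pattern;

public class NfsExportRegexSketch {
    // "[26|23]" is a character class matching exactly ONE of the characters
    // '2', '6', '|', '3', so it can never match the two-character suffixes
    // "26" or "23"; a group "(26|23)" expresses the intended alternation.
    static boolean matches(String regex, String host) {
        return Pattern.matches(regex, host);
    }
}
```

With the character class, matching "206.190.52.26" against "206\\.190\\.52\\.[26|23]" fails, while the grouped form "206\\.190\\.52\\.(26|23)" matches both hosts.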





[jira] [Commented] (HDFS-7961) Trigger full block report after hot swapping disk

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372433#comment-14372433
 ] 

Hadoop QA commented on HDFS-7961:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706060/HDFS-7961.002.patch
  against trunk revision 586348e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10012//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10012//console

This message is automatically generated.

 Trigger full block report after hot swapping disk
 -

 Key: HDFS-7961
 URL: https://issues.apache.org/jira/browse/HDFS-7961
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7961.000.patch, HDFS-7961.001.patch, 
 HDFS-7961.002.patch


 As discussed in HDFS-7960, NN could not remove the data storage metadata from 
 its memory. 
 DN should trigger a full block report immediately after running hot swapping 
 drives.





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372272#comment-14372272
 ] 

Zhe Zhang commented on HDFS-7854:
-

Thanks Bo for the updated patch. It looks good to me. It needs to be rebased 
against the trunk again.

It seems {{dataQueue}} is also moved to {{DataStreamer}}. [~jingzhao] Do you 
see other issues in the patch? 

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch


 This sub-task separates DataStreamer out of DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.





[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372295#comment-14372295
 ] 

Andrew Wang commented on HDFS-7960:
---

Reading through it again, a few comments:

NNRpcServer:
* there's a TODO: FIXME; we aren't passing in the BlockReportContext. I think 
processReport doesn't need that last parameter anymore either, since the 
information is in the BR context.

BPServiceActor:
* Is there a need for BR ids to be monotonically increasing? Otherwise, using a 
random number seems better. I see you do a fixup by checking against the previous 
ID, but with random IDs this shouldn't be necessary.
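For illustration, a random 64-bit id with a fixup against only the previous id could look like this (a sketch under the comment's assumption, not the actual BPServiceActor code; the reserved value 0 is hypothetical):

```java
import java.util.concurrent.ThreadLocalRandom;

public class BlockReportIdGen {
    private long prevBlockReportId = 0;

    // Random IDs avoid any need for monotonicity; only a collision with the
    // immediately previous ID (or the reserved value 0) needs a retry.
    long nextBlockReportId() {
        long id;
        do {
            id = ThreadLocalRandom.current().nextLong();
        } while (id == 0 || id == prevBlockReportId);
        prevBlockReportId = id;
        return id;
    }
}
```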

DatanodeDescriptor:
* it looks like we only get/set LastBlockReportId in removeZombieStorages. We 
need to be setting it to the current BR id as BRs come in, right? This is probably 
a holdover from processReport not being updated since the previous patch rev.

If you wanted to add comments about all this, BlockReportContext's class 
javadoc would be a good choice.

Nit:

{code}
assert (namesystem.hasWriteLock());
{code}

space after assert

Going to stop there for now. I think we need to see another rev (addressing the 
processReport FIXME, basically) to get a feel for BlockReportContext.

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.





[jira] [Commented] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-03-20 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372343#comment-14372343
 ] 

Ravi Prakash commented on HDFS-7713:


I just noticed that browsing that directory leads to a bad request. I'll file a 
new JIRA for fixing that.

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252





[jira] [Created] (HDFS-7969) Erasure coding: lease recovery for striped block groups

2015-03-20 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-7969:
---

 Summary: Erasure coding: lease recovery for striped block groups
 Key: HDFS-7969
 URL: https://issues.apache.org/jira/browse/HDFS-7969
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang








[jira] [Updated] (HDFS-6353) Handle checkpoint failure more gracefully

2015-03-20 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6353:

Attachment: HDFS-6353.002.patch

Rebase the patch to run Jenkins.

 Handle checkpoint failure more gracefully
 -

 Key: HDFS-6353
 URL: https://issues.apache.org/jira/browse/HDFS-6353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-6353.000.patch, HDFS-6353.001.patch, 
 HDFS-6353.002.patch


 One of the failure patterns I have seen is that, in some rare circumstances, 
 due to some inconsistency, the secondary or standby fails to consume the 
 editlog. The only solution when this happens is to save the namespace at the 
 current active namenode. But sometimes when this happens, an unsuspecting 
 admin might end up restarting the namenode, requiring a more complicated 
 solution to the problem (such as ignoring editlog records that cannot be 
 consumed, etc.).
 How about adding the following functionality:
 When the checkpointer (standby or secondary) fails to consume the editlog, a 
 configurable flag (on/off) lets it notify the active namenode of the failure. 
 The active namenode can then enter safemode and save its namespace. While in 
 this type of safemode, the namenode UI also shows information about the 
 checkpoint failure and that it is saving the namespace. Once the namespace is 
 saved, the namenode can come out of safemode.
 This means service unavailability (even in an HA cluster). But it might be 
 worth it to avoid long startup times or the need for other manual fixes. 
 Thoughts?
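The proposed flow can be summarized in a tiny state sketch (purely illustrative names, not NameNode code): flag on, enter safemode, save namespace, leave safemode; flag off, keep today's behavior.

```java
public class CheckpointFailureHandler {
    private final boolean featureEnabled;   // the proposed on/off flag
    private boolean inSafemode = false;
    private boolean namespaceSaved = false;

    CheckpointFailureHandler(boolean featureEnabled) {
        this.featureEnabled = featureEnabled;
    }

    /** Returns true if the failure was handled by saving the namespace. */
    boolean onCheckpointerFailure() {
        if (!featureEnabled) {
            return false;           // flag off: keep the current behavior
        }
        inSafemode = true;          // block mutations while saving
        namespaceSaved = true;      // stand-in for an actual saveNamespace()
        inSafemode = false;         // resume normal service
        return true;
    }

    boolean namespaceSaved() { return namespaceSaved; }
    boolean inSafemode() { return inSafemode; }
}
```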





[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-20 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372316#comment-14372316
 ] 

Jitendra Nath Pandey commented on HDFS-6826:


Hi [~asuresh], thanks for the quick turnaround on this. 
 I really liked your earlier HDFS-6826.10.patch, because AccessControlEnforcer 
was a very simple and pure interface. 
Comparing it with the later patches, I think it was a bad idea on my part to 
expect the default AccessControlEnforcer to return callerUgi, supergroup, etc., 
although we need them for default permission checking. The reason is that many 
implementations might want to re-use AccessControlEnforcer objects and 
would like to avoid tracking a callerUgi in their state, even though they need 
it for policy enforcement. 
  I really prefer AccessControlEnforcer as an interface instead of an abstract 
class, because an abstract class requires implementations to initialize the 
base class with many parameters that they don't need to track. Therefore, I 
would suggest the following simple modification on top of HDFS-6826.10.patch: 
change the AccessControlEnforcer#checkPermission interface to pass a few 
additional parameters. In the following snippet, I have added the suggested new 
parameters at the beginning of the parameter list.
{code}
public static interface AccessControlEnforcer {

  public void checkPermission(String fsOwner, String superGroup,
      UserGroupInformation callerUgi, AccessControlEnforcer defaultEnforcer,
      INodeAttributes[] inodeAttrs, INode[] inodes,
      byte[][] pathByNameArr, int snapshotId, String path, int ancestorIndex,
      boolean doCheckOwner, FsAction ancestorAccess, FsAction parentAccess,
      FsAction access, FsAction subAccess, boolean ignoreEmptyDir)
      throws AccessControlException;

}
{code}

I think with HDFS-6826.10.patch and the above change, it will be a very clean 
and simple implementation.
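To illustrate why per-call parameters keep implementations stateless, here is a heavily simplified sketch (stand-in types: SecurityException in place of AccessControlException, plain String users in place of UserGroupInformation, and a hypothetical "/managed/" policy) of a plugin that handles only its own paths and delegates everything else to the default enforcer:

```java
public class EnforcerSketch {
    // Simplified stand-in for the proposed interface: everything the check
    // needs arrives as parameters, so enforcer instances hold no per-call
    // state and can be freely reused across requests.
    interface AccessControlEnforcer {
        void checkPermission(String fsOwner, String superGroup, String callerUser,
                AccessControlEnforcer defaultEnforcer, String path);
    }

    // A plugin that enforces its own policy for paths it manages and falls
    // back to the default HDFS permission check for everything else.
    static class DelegatingEnforcer implements AccessControlEnforcer {
        @Override
        public void checkPermission(String fsOwner, String superGroup, String callerUser,
                AccessControlEnforcer defaultEnforcer, String path) {
            if (path.startsWith("/managed/")) {
                // Illustrative policy: only the fs owner may touch managed paths.
                if (!callerUser.equals(fsOwner)) {
                    throw new SecurityException("denied: " + callerUser + " on " + path);
                }
            } else {
                defaultEnforcer.checkPermission(fsOwner, superGroup, callerUser, null, path);
            }
        }
    }
}
```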

Thanks again for taking up this work.




 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826v3.patch, HDFS-6826v4.patch, HDFS-6826v5.patch, HDFS-6826v6.patch, 
 HDFS-6826v7.1.patch, HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, 
 HDFS-6826v7.4.patch, HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, 
 HDFS-6826v7.patch, HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When Hbase data, HiveMetaStore data or Search data is accessed via services 
 (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
 permissions on corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users accessing the underlying data files (i.e. from a MapReduce 
 job), that the permission of the data files map to the permissions of the 
 corresponding data entity (i.e. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities permissions.
 I’ll be posting a design proposal in the next few days.





[jira] [Updated] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-03-20 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7713:
---
Attachment: HDFS-7713.07.patch

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372424#comment-14372424
 ] 

Hadoop QA commented on HDFS-7854:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706092/HDFS-7854-007.patch
  against trunk revision 7f1e2f9.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10016//console

This message is automatically generated.

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
 HDFS-7854-007.patch


 This sub-task separates DataStreamer out of DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.





[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-20 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372303#comment-14372303
 ] 

Jing Zhao commented on HDFS-7942:
-

The patch looks good to me. +1

In the meantime, it looks like our parsing semantics will be slightly different 
from the traditional ones. Maybe we can handle that as future work.

 NFS: support regexp grouping in nfs.exports.allowed.hosts
 -

 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch


 Thanks, [~yeshavora], for reporting this problem.
 Set regex value in nfs.exports.allowed.hosts property.
 {noformat}
 <property><name>nfs.exports.allowed.hosts</name><value>206.190.52.[26|23] 
 rw</value></property>
 {noformat}
 With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
 act as an NFS client. In other words, no host can mount NFS with this regex 
 value; every attempt fails with an access-denied error.
 {noformat}
 $ sudo su - -c mount -o 
 soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
 /tmp/tmp_mnt root
 mount.nfs: access denied by server while mounting 206.190.52.23:/
 {noformat}





[jira] [Commented] (HDFS-7967) Reduce the performance impact of the balancer

2015-03-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372305#comment-14372305
 ] 

Daryn Sharp commented on HDFS-7967:
---

The current implementation is so bad that on large clusters we have to restrict 
the balancer to using only one thread for block queries.  Multiple threads will 
destroy the performance of busy namenodes by causing call queue overflows.

 Reduce the performance impact of the balancer
 -

 Key: HDFS-7967
 URL: https://issues.apache.org/jira/browse/HDFS-7967
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp

 The balancer needs to query for blocks to move from overly full DNs.  The 
 block lookup is extremely inefficient.  An iterator of the node's blocks is 
 created from the iterators of its storages' blocks.  A random number is 
 chosen corresponding to how many blocks will be skipped via the iterator.  
 Each skip requires costly scanning of triplets.
 The current design also only considers node imbalances while ignoring 
 imbalances within the node's storages.  A more efficient and intelligent 
 design may eliminate the costly skipping of blocks via round-robin selection 
 of blocks from the storages based on remaining capacity.
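The suggested selection strategy can be sketched as follows (illustrative names and types, not balancer code): cycle round-robin over a node's storages, visiting the fullest one (least remaining capacity) first, rather than performing random skips through one flat block iterator.

```java
import java.util.*;

public class RoundRobinPicker {
    // Pick up to n block ids by cycling over the storages in order of
    // remaining capacity (fullest first), one block per storage per round.
    static List<String> pick(Map<String, Deque<String>> blocksByStorage,
                             Map<String, Long> remainingBytes, int n) {
        List<String> storages = new ArrayList<>(blocksByStorage.keySet());
        storages.sort(Comparator.comparingLong(remainingBytes::get));
        List<String> picked = new ArrayList<>();
        boolean progress = true;
        while (picked.size() < n && progress) {
            progress = false;
            for (String s : storages) {
                Deque<String> q = blocksByStorage.get(s);
                if (!q.isEmpty() && picked.size() < n) {
                    picked.add(q.poll());
                    progress = true;
                }
            }
        }
        return picked;
    }
}
```

Each pick is O(1) per block after the initial sort, avoiding the costly triplet scans the description mentions.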





[jira] [Updated] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-03-20 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7713:
---
Attachment: HDFS-7713.07.patch

Thanks a lot Haohui! Great catch! Here is a patch which encodes the URI.

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252





[jira] [Updated] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-03-20 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7713:
---
Attachment: (was: HDFS-7713.07.patch)

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, HDFS-7713.06.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372281#comment-14372281
 ] 

Zhe Zhang commented on HDFS-7854:
-

Thanks Jing! In that case I'll do the rebase now. 

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.





[jira] [Updated] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7854:

Attachment: HDFS-7854-007.patch

The rebase mainly involves these commits: HDFS-7835 and HDFS-6841. Their 
changes to {{DFSOutputStream}} are relatively simple and I believe I included 
all related changes in {{DataStreamer}}.

HDFS-7054 was also committed after this refactor started. I took a look, and it 
seems Bo has already taken care of it. [~cmccabe], it would be great if you 
could verify that the new {{DataStreamer}} class includes the changes you made.

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch, 
 HDFS-7854-007.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.





[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-03-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372334#comment-14372334
 ] 

Daryn Sharp commented on HDFS-7964:
---

As background, the problem was tackled after recurring slow IO issues caused 
some handlers to block with a small batch of edits.  Remaining handlers filled 
the other side of the edit log double-buffer.  In the worst case scenario, an 
auto-sync was triggered by logEdit while the write lock was held.  The call 
queue overflowed, further exacerbated by the resulting tcp listen queue 
overflows, tcp syn cookies, and client timeouts.  When the ipc machinery 
recovered, the process would repeat in an oscillating manner until the IO 
issues dissipated.  Even w/o an auto-sync, the high rate of read operations 
caused small batching of writes.
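The double-buffer behavior described above can be sketched roughly as follows. This is a hypothetical, simplified model (not the actual FSEditLog code): handlers append to one side of the buffer under a short lock, while a sync step swaps the buffers under the lock and flushes the full batch outside it, so a slow flush does not stall new {{logEdit}} calls.

```java
import java.util.ArrayList;
import java.util.List;

public class AsyncEditLogSketch {
    // One side of the double buffer receives new edits; the other side is
    // being flushed. "durable" stands in for the journal on disk.
    private List<String> current = new ArrayList<>();
    private final List<String> durable = new ArrayList<>();
    private final Object lock = new Object();

    public void logEdit(String op) {
        synchronized (lock) {
            current.add(op);            // fast: only an in-memory append
        }
    }

    public void syncOnce() {
        List<String> batch;
        synchronized (lock) {           // swap the buffers under the lock...
            batch = current;
            current = new ArrayList<>();
        }
        durable.addAll(batch);          // ...flush outside it (stand-in for slow I/O)
    }

    public int durableCount() {
        return durable.size();
    }

    public static void main(String[] args) {
        AsyncEditLogSketch log = new AsyncEditLogSketch();
        log.logEdit("OP_ADD");
        log.logEdit("OP_CLOSE");
        log.syncOnce();
        System.out.println("durable edits: " + log.durableCount());
    }
}
```

The failure mode in the comment corresponds to the flush step being slow while the other side of the buffer fills up; moving the flush to a dedicated thread (with postponed RPC responses) keeps handlers free.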

 Add support for async edit logging
 --

 Key: HDFS-7964
 URL: https://issues.apache.org/jira/browse/HDFS-7964
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.2-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-7964.patch


 Edit logging is a major source of contention within the NN.  LogEdit is 
 called within the namespace write lock, while logSync is called outside of the 
 lock to allow greater concurrency.  The handler thread remains busy until 
 logSync returns to provide the client with a durability guarantee for the 
 response.
 Write heavy RPC load and/or slow IO causes handlers to stall in logSync.  
 Although the write lock is not held, readers are limited/starved and the call 
 queue fills.  Combining an edit log thread with postponed RPC responses from 
 HADOOP-10300 will provide the same durability guarantee but immediately free 
 up the handlers.





[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372349#comment-14372349
 ] 

Hadoop QA commented on HDFS-7917:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706031/HDFS-7917.001.patch
  against trunk revision d81109e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10006//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10006//console

This message is automatically generated.

 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this raises the risk 
 that, if the cleanup code is not executed, the directory cannot be easily 
 removed by the Jenkins job. 
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 We can replace the DN data directory with a file to achieve the same fault 
 injection goal, while being safer to clean up in any circumstance. 
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.
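The idea above, putting a plain file where the data directory is expected so that the directory check fails, can be sketched like this. The check below is a simplified stand-in for the real {{DiskChecker#checkDirAccess}} (it throws a plain IOException rather than Hadoop's DiskErrorException), and the class name is hypothetical.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class DiskFailureSimDemo {
    // Simplified stand-in for DiskChecker#checkDirAccess: a data-dir path
    // that is not a directory is treated as a failed disk.
    public static void checkDirAccess(File dir) throws IOException {
        if (!dir.isDirectory()) {
            throw new IOException("Not a directory: " + dir);
        }
    }

    public static void main(String[] args) throws Exception {
        // Create a regular file where a DN data directory would normally be.
        File dataDir = Files.createTempFile("dn-data", null).toFile();
        try {
            checkDirAccess(dataDir);
            System.out.println("no failure detected");
        } catch (IOException e) {
            // The "disk failure" is observed without changing any permissions,
            // so cleanup is just deleting the file.
            System.out.println("disk failure simulated: " + e.getMessage());
        } finally {
            dataDir.delete();
        }
    }
}
```

Unlike revoking execute permission, deleting the stand-in file always succeeds, which is what makes cleanup safe for Jenkins.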





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372278#comment-14372278
 ] 

Jing Zhao commented on HDFS-7854:
-

I quickly went through the latest patch and it looks good to me. I will take 
another review this weekend and try to get this committed asap.

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.





[jira] [Created] (HDFS-7967) Reduce the performance impact of the balancer

2015-03-20 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-7967:
-

 Summary: Reduce the performance impact of the balancer
 Key: HDFS-7967
 URL: https://issues.apache.org/jira/browse/HDFS-7967
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp


The balancer needs to query for blocks to move from overly full DNs.  The block 
lookup is extremely inefficient.  An iterator of the node's blocks is created 
from the iterators of its storages' blocks.  A random number is chosen 
corresponding to how many blocks will be skipped via the iterator.  Each skip 
requires costly scanning of triplets.

The current design also only considers node imbalances while ignoring 
imbalances within the nodes' storages.  A more efficient and intelligent 
design may eliminate the costly skipping of blocks via round-robin selection of 
blocks from the storages based on remaining capacity.
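The round-robin alternative can be illustrated with a small sketch. The types and names below are hypothetical (not the actual Balancer code): instead of skipping a random number of blocks through a merged iterator, visit the storages in turn, fullest first, and take the next block from each.

```java
import java.util.*;

public class RoundRobinBlockPicker {
    // Hypothetical storage: a name, remaining capacity, and a queue of block IDs.
    public static class Storage {
        final String name;
        final long remaining;
        final Deque<Long> blocks;
        public Storage(String name, long remaining, List<Long> blocks) {
            this.name = name;
            this.remaining = remaining;
            this.blocks = new ArrayDeque<>(blocks);
        }
    }

    // Pick up to n blocks by cycling through the storages, least remaining
    // capacity (i.e. fullest) first. No random skipping, no triplet scans.
    public static List<Long> pick(List<Storage> storages, int n) {
        List<Storage> order = new ArrayList<>(storages);
        order.sort(Comparator.comparingLong(s -> s.remaining));
        List<Long> picked = new ArrayList<>();
        boolean progress = true;
        while (picked.size() < n && progress) {
            progress = false;
            for (Storage s : order) {
                if (picked.size() >= n) break;
                Long b = s.blocks.pollFirst();
                if (b != null) {
                    picked.add(b);
                    progress = true;
                }
            }
        }
        return picked;
    }

    public static void main(String[] args) {
        Storage full = new Storage("s1", 10, Arrays.asList(1L, 2L));
        Storage empty = new Storage("s2", 500, Arrays.asList(3L, 4L));
        System.out.println(pick(Arrays.asList(full, empty), 3)); // [1, 3, 2]
    }
}
```

Interleaving by remaining capacity drains the fullest storages fastest, which also addresses the intra-node imbalance the description mentions.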






[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-20 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372340#comment-14372340
 ] 

Brandon Li commented on HDFS-5523:
--

Sounds like a good starting point. 
Additionally, how about also allowing / to be exported as a special case 
along with other subdirectory exports? The benefit is that it makes it 
convenient for the admin to:
1. directly operate on subdirectories under /
2. back up any top-level subdirectory under / without needing to 
share-and-mount each one individually
Also, the root export can make it easier for applications which require 
accessing multiple top-level subdirectories. 



 Support multiple subdirectory exports in HDFS NFS gateway 
 --

 Key: HDFS-5523
 URL: https://issues.apache.org/jira/browse/HDFS-5523
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Reporter: Brandon Li

 Currently, the HDFS NFS Gateway only supports configuring a single 
 subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
 Supporting multiple subdirectory exports can make data and security 
 management easier when using the HDFS NFS Gateway.





[jira] [Created] (HDFS-7968) Properly encode WebHDFS requests coming from the NN UI

2015-03-20 Thread Ravi Prakash (JIRA)
Ravi Prakash created HDFS-7968:
--

 Summary: Properly encode WebHDFS requests coming from the NN UI
 Key: HDFS-7968
 URL: https://issues.apache.org/jira/browse/HDFS-7968
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash


Thanks to [~wheat9] for pointing out this 
[issue|https://issues.apache.org/jira/browse/HDFS-7713?focusedCommentId=14371788page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14371788]
 e.g. you cannot descend into a directory named {{asdf#df+1}}
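One way to handle names like {{asdf#df+1}} is to percent-encode each path segment before building the WebHDFS URL, so that `#` is not parsed as a fragment and `+` survives decoding. A minimal illustrative sketch in Java (the NN UI itself is JavaScript; the class name here is hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class WebHdfsPathEncodeDemo {
    // Encode one path segment for use in a URL path. URLEncoder performs
    // form encoding (space becomes '+'), so spaces need a fixup to '%20'
    // when the result is used in a path rather than a query string.
    public static String encodeSegment(String segment) {
        String encoded = URLEncoder.encode(segment, StandardCharsets.UTF_8);
        return encoded.replace("+", "%20");
    }

    public static void main(String[] args) {
        System.out.println(encodeSegment("asdf#df+1")); // asdf%23df%2B1
    }
}
```

With `#` encoded as `%23`, the browser no longer truncates the request at the fragment marker.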





[jira] [Updated] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-6826:
--
Attachment: HDFS-6826.15.patch

[~jnp], yup.. agreed, interfaces are cleaner.. will revert. 
Uploading a new patch (.15.patch) based on v10, but fixing test cases..

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826.15.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, HDFS-6826v5.patch, 
 HDFS-6826v6.patch, HDFS-6826v7.1.patch, HDFS-6826v7.2.patch, 
 HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, HDFS-6826v7.5.patch, 
 HDFS-6826v7.6.patch, HDFS-6826v7.patch, HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When Hbase data, HiveMetaStore data or Search data is accessed via services 
 (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
 permissions on corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users accessing the underlying data files (i.e. from a MapReduce 
 job), that the permission of the data files map to the permissions of the 
 corresponding data entity (i.e. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities permissions.
 I’ll be posting a design proposal in the next few days.





[jira] [Commented] (HDFS-7748) Separate ECN flags from the Status in the DataTransferPipelineAck

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372371#comment-14372371
 ] 

Hadoop QA commented on HDFS-7748:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706048/hdfs-7748.004.patch
  against trunk revision 586348e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10010//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10010//console

This message is automatically generated.

 Separate ECN flags from the Status in the DataTransferPipelineAck
 -

 Key: HDFS-7748
 URL: https://issues.apache.org/jira/browse/HDFS-7748
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Anu Engineer
Priority: Blocker
 Attachments: hdfs-7748.001.patch, hdfs-7748.002.patch, 
 hdfs-7748.003.patch, hdfs-7748.004.patch


 Prior to the discussions on HDFS-7270, the old clients might fail to talk to 
 the newer server when ECN is turned on. This jira proposes to separate the 
 ECN flags in a separate protobuf field to make the ack compatible on both 
 versions.





[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372404#comment-14372404
 ] 

Colin Patrick McCabe commented on HDFS-7960:


bq. there's a TODO: FIXME, we aren't passing in the BlockReportContext.

Yeah, mea culpa.

bq. processReport doesn't need that last parameter anymore either I think, 
since the information is in the BR context.

The last parameter is needed because we want to eliminate zombie storages only 
after all storages have been processed, and a single call to 
{{NameNodeRpcServer#blockReport}} can handle multiple storages.

bq. Is there a need for BR ids to be monotonic increasing? Else using a random 
number seems better. I see you do a fixup by checking with the previous ID, but 
with random this shouldn't be necessary

I like the idea of monotonically increasing BR ids for two reasons: it makes 
it easier to see in the logs which block report came after which, and 
it effectively removes the (admittedly very, very small) chance of a collision 
between two subsequent BR IDs.  The monotonic timer in Linux (or other OS) only 
gets reset when a node reboots, so even restarting the DN process will not 
normally reset the ID.
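The scheme described, a monotonic timer plus a fixup against the previous ID, might look roughly like this hypothetical sketch (names are illustrative, not the actual DataNode code):

```java
public class BlockReportIdGenerator {
    private long prevId = 0;

    // Derive each block-report ID from System.nanoTime(), which reads a
    // monotonic clock: it is not reset by a DN process restart, only by a
    // node reboot. The fixup keeps IDs strictly increasing even if the
    // clock returns a value not greater than the previous ID.
    public synchronized long nextId() {
        long id = System.nanoTime();
        if (id <= prevId) {
            id = prevId + 1;           // fixup against the previous ID
        }
        prevId = id;
        return id;
    }

    public static void main(String[] args) {
        BlockReportIdGenerator gen = new BlockReportIdGenerator();
        long a = gen.nextId();
        long b = gen.nextId();
        System.out.println(b > a);     // IDs are strictly increasing
    }
}
```

Strictly increasing IDs make log ordering obvious and rule out a collision between two consecutive reports, which random IDs only make improbable.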

bq. If you wanted to add comments about all this, BlockReportContext's class 
javadoc would be a good choice.

Good idea, I added some comments there.

bq. space after assert

fixed

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.





[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14372454#comment-14372454
 ] 

Hadoop QA commented on HDFS-7960:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706075/HDFS-7960.004.patch
  against trunk revision 586348e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSClientRetries
  org.apache.hadoop.tracing.TestTracing
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
  org.apache.hadoop.hdfs.server.balancer.TestBalancer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10013//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10013//console

This message is automatically generated.

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.





[jira] [Commented] (HDFS-7212) Huge number of BLOCKED threads rendering DataNodes useless

2015-03-20 Thread Frode Halvorsen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371277#comment-14371277
 ] 

Frode Halvorsen commented on HDFS-7212:
---

Sometimes the datanode doesn't come back by itself, and I have to restart it. 
Then even more blocks have too many replicas...

 Huge number of BLOCKED threads rendering DataNodes useless
 --

 Key: HDFS-7212
 URL: https://issues.apache.org/jira/browse/HDFS-7212
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
 Environment: PROD
Reporter: Istvan Szukacs

 There are 3000 - 8000 threads in each datanode JVM, blocking the entire VM 
 and rendering the service unusable, missing heartbeats and stopping data 
 access. The threads look like this:
 {code}
 3415 (state = BLOCKED)
 - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
 be imprecise)
 - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
 line=186 (Compiled frame)
 - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() 
 @bci=1, line=834 (Interpreted frame)
 - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node,
  int) @bci=67, line=867 (Interpreted frame)
 - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) @bci=17, 
 line=1197 (Interpreted frame)
 - java.util.concurrent.locks.ReentrantLock$NonfairSync.lock() @bci=21, 
 line=214 (Compiled frame)
 - java.util.concurrent.locks.ReentrantLock.lock() @bci=4, line=290 (Compiled 
 frame)
 - 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(org.apache.hadoop.net.unix.DomainSocket,
  org.apache.hadoop.net.unix.DomainSocketWatcher$Handler) @bci=4, line=286 
 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(java.lang.String,
  org.apache.hadoop.net.unix.DomainSocket) @bci=169, line=283 (Interpreted 
 frame)
 - 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(java.lang.String)
  @bci=212, line=413 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(java.io.DataInputStream)
  @bci=13, line=172 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(org.apache.hadoop.hdfs.protocol.datatransfer.Op)
  @bci=149, line=92 (Compiled frame)
 - org.apache.hadoop.hdfs.server.datanode.DataXceiver.run() @bci=510, line=232 
 (Compiled frame)
 - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame)
 {code}
 Has anybody seen this before?





[jira] [Updated] (HDFS-7963) Fix expected tracing spans in TestTracing along with HDFS-7054

2015-03-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7963:
-
 Description: Setting the target version to 2.7.0. We don't want to 
release it with the test broken.
Priority: Critical  (was: Minor)
Target Version/s: 2.7.0

 Fix expected tracing spans in TestTracing along with HDFS-7054
 --

 Key: HDFS-7963
 URL: https://issues.apache.org/jira/browse/HDFS-7963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Critical
 Attachments: HDFS-7963.001.patch


 Setting the target version to 2.7.0. We don't want to release it with the 
 test broken.





[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371372#comment-14371372
 ] 

Hudson commented on HDFS-7930:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/129/])
HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu) 
(yliu: rev e37ca221bf4e9ae5d5e667d8ca284df9fdb33199)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 commitBlockSynchronization() does not remove locations
 --

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7930.001.patch, HDFS-7930.002.patch, 
 HDFS-7930.003.patch


 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
 original block, it does not remove unconfirmed locations. As a result, the 
 block stores locations with different lengths or genStamps (corrupt).





[jira] [Commented] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371370#comment-14371370
 ] 

Hudson commented on HDFS-7962:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/129/])
HDFS-7962. Remove duplicated logs in BlockManager. (yliu) (yliu: rev 
978ef11f26794c22c7289582653b32268478e23e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 This change also makes a few refinements to the logging.





[jira] [Commented] (HDFS-7816) Unable to open webhdfs paths with +

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371364#comment-14371364
 ] 

Hudson commented on HDFS-7816:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2070 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2070/])
HDFS-7816. Unable to open webhdfs paths with +. Contributed by Haohui Mai 
(kihwal: rev e79be0ee123d05104eb34eb854afcf9fa78baef2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java


 Unable to open webhdfs paths with +
 -

 Key: HDFS-7816
 URL: https://issues.apache.org/jira/browse/HDFS-7816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Jason Lowe
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7816.002.patch, HDFS-7816.patch, HDFS-7816.patch


 webhdfs requests to open files with % characters in the filename fail because 
 the filename is not being decoded properly.  For example:
 $ hadoop fs -cat 'webhdfs://nn/user/somebody/abc%def'
 cat: File does not exist: /user/somebody/abc%25def
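The symptom above (a request for {{abc%def}} being looked up as {{abc%25def}}) is what a missing decode looks like: the percent-encoded request path is used verbatim. Decoding the raw path exactly once with URI semantics recovers the original name; a minimal sketch (not the actual ParameterParser code, and the class name is hypothetical):

```java
import java.net.URI;

public class WebHdfsDecodeDemo {
    // The request path arrives percent-encoded; URI#getPath decodes the
    // escaped octets once. Using URI (path) semantics rather than form
    // semantics also keeps a literal '+' intact.
    public static String decodePath(String rawPath) {
        return URI.create(rawPath).getPath();
    }

    public static void main(String[] args) {
        // %25 is the escaped form of '%', so the original name is restored.
        System.out.println(decodePath("/user/somebody/abc%25def"));
    }
}
```

Decoding once, and only once, is the key: decoding twice would turn {{abc%25def}} into {{abc%def}} and then fail on the stray `%`.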





[jira] [Commented] (HDFS-7816) Unable to open webhdfs paths with +

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371373#comment-14371373
 ] 

Hudson commented on HDFS-7816:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/129/])
HDFS-7816. Unable to open webhdfs paths with +. Contributed by Haohui Mai 
(kihwal: rev e79be0ee123d05104eb34eb854afcf9fa78baef2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java


 Unable to open webhdfs paths with +
 -

 Key: HDFS-7816
 URL: https://issues.apache.org/jira/browse/HDFS-7816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Jason Lowe
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7816.002.patch, HDFS-7816.patch, HDFS-7816.patch


 webhdfs requests to open files with % characters in the filename fail because 
 the filename is not being decoded properly.  For example:
 $ hadoop fs -cat 'webhdfs://nn/user/somebody/abc%def'
 cat: File does not exist: /user/somebody/abc%25def





[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371375#comment-14371375
 ] 

Hudson commented on HDFS-7932:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/129/])
HDFS-7932. Speed up the shutdown of datanode during rolling upgrade. 
Contributed by Kihwal Lee. (kihwal: rev 
61a4c7fc9891def0e85edf7e41d74c6b92c85fdb)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.7.0

 Attachments: HDFS-7932.patch, HDFS-7932.patch


 The datanode normally exits within 3 seconds of receiving the 
 {{shutdownDatanode}} command. However, sometimes it doesn't, especially when 
 I/O is busy. 





[jira] [Commented] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371361#comment-14371361
 ] 

Hudson commented on HDFS-7962:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2070 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2070/])
HDFS-7962. Remove duplicated logs in BlockManager. (yliu) (yliu: rev 
978ef11f26794c22c7289582653b32268478e23e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 Also refine some log messages.





[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371368#comment-14371368
 ] 

Hudson commented on HDFS-7932:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2070 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2070/])
HDFS-7932. Speed up the shutdown of datanode during rolling upgrade. 
Contributed by Kihwal Lee. (kihwal: rev 
61a4c7fc9891def0e85edf7e41d74c6b92c85fdb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.7.0

 Attachments: HDFS-7932.patch, HDFS-7932.patch


 The datanode normally exits within 3 seconds of receiving the 
 {{shutdownDatanode}} command. However, sometimes it doesn't, especially when 
 I/O is busy. 





[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371363#comment-14371363
 ] 

Hudson commented on HDFS-7930:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2070 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2070/])
HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu) 
(yliu: rev e37ca221bf4e9ae5d5e667d8ca284df9fdb33199)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 commitBlockSynchronization() does not remove locations
 --

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7930.001.patch, HDFS-7930.002.patch, 
 HDFS-7930.003.patch


 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
 original block, it does not remove unconfirmed locations. As a result, the 
 block stores locations with mismatched lengths or genStamps (corrupt).





[jira] [Reopened] (HDFS-7212) Huge number of BLOCKED threads rendering DataNodes useless

2015-03-20 Thread Frode Halvorsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frode Halvorsen reopened HDFS-7212:
---

Still a problem in 2.6.0.

I have 6 datanodes in two racks. Periodically, one or two nodes are 'suspended' 
with anywhere from 500 to 3000 blocked threads like this:
DataXceiver for client  at /62.148.41.209:39602 [Receiving block 
BP-874555352-10.34.17.40-1403595404176:blk_1133797477_60059942]
The datanode is marked as dead and the namenode starts to replicate the blocks. 
After some time, the datanode suddenly comes back, and the namenode has to 
delete a lot of blocks again. 

 Huge number of BLOCKED threads rendering DataNodes useless
 --

 Key: HDFS-7212
 URL: https://issues.apache.org/jira/browse/HDFS-7212
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
 Environment: PROD
Reporter: Istvan Szukacs

 There are 3000-8000 threads in each datanode JVM, blocking the entire VM, 
 rendering the service unusable, missing heartbeats, and stopping data 
 access. The threads look like this:
 {code}
 3415 (state = BLOCKED)
 - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
 be imprecise)
 - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
 line=186 (Compiled frame)
 - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() 
 @bci=1, line=834 (Interpreted frame)
 - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node,
  int) @bci=67, line=867 (Interpreted frame)
 - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) @bci=17, 
 line=1197 (Interpreted frame)
 - java.util.concurrent.locks.ReentrantLock$NonfairSync.lock() @bci=21, 
 line=214 (Compiled frame)
 - java.util.concurrent.locks.ReentrantLock.lock() @bci=4, line=290 (Compiled 
 frame)
 - 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(org.apache.hadoop.net.unix.DomainSocket,
  org.apache.hadoop.net.unix.DomainSocketWatcher$Handler) @bci=4, line=286 
 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(java.lang.String,
  org.apache.hadoop.net.unix.DomainSocket) @bci=169, line=283 (Interpreted 
 frame)
 - 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(java.lang.String)
  @bci=212, line=413 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(java.io.DataInputStream)
  @bci=13, line=172 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(org.apache.hadoop.hdfs.protocol.datatransfer.Op)
  @bci=149, line=92 (Compiled frame)
 - org.apache.hadoop.hdfs.server.datanode.DataXceiver.run() @bci=510, line=232 
 (Compiled frame)
 - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame)
 {code}
 Has anybody seen this before?
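The parked frames above show DataXceiver threads queued behind DomainSocketWatcher's ReentrantLock. The pile-up effect can be sketched in isolation with a hypothetical stand-in (the thread name and sleep are illustration only, not DataNode code):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockPileUpSketch {
    public static void main(String[] args) throws Exception {
        ReentrantLock lock = new ReentrantLock(); // nonfair, as in the stack trace
        lock.lock(); // simulate a holder that never releases, e.g. a stuck watcher thread

        Thread xceiver = new Thread(() -> {
            lock.lock();   // parks in LockSupport.park(), like the DataXceiver frames above
            lock.unlock();
        });
        xceiver.start();
        Thread.sleep(200); // give the thread time to park

        System.out.println(xceiver.getState());    // WAITING (parked on the lock)
        System.out.println(lock.getQueueLength()); // 1 queued thread; thousands in the report

        lock.unlock();
        xceiver.join();
    }
}
```

With thousands of xceiver threads and one stuck holder, every new short-circuit request adds another parked thread, which is exactly the "500 to 3000 blocked threads" pattern reported above.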





[jira] [Commented] (HDFS-6599) 2.4 addBlock is 10 to 20 times slower compared to 0.23

2015-03-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371310#comment-14371310
 ] 

Kihwal Lee commented on HDFS-6599:
--

There have been many performance improvements since 2.3. Examples: HDFS-7097 
and HDFS-7615 fix writers being unfairly penalized.  HDFS-7217 reduces the 
write locking of the namesystem by 30-40%. This is a big deal for busy clusters. 
If you use audit logging, try async audit logging (HDFS-5241). This should 
already be in 2.3.

 2.4 addBlock is 10 to 20 times slower compared to 0.23
 --

 Key: HDFS-6599
 URL: https://issues.apache.org/jira/browse/HDFS-6599
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0, 2.4.0
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 2.5.0

 Attachments: HDFS-6599.patch


 From one of our busiest 0.23 clusters:
 {panel}
 AddBlockAvgTime : 0.9514711501719515
 CreateAvgTime : 1.7564162389174
 CompleteAvgTime : 1.3310406035056548
 BlockReceivedAndDeletedAvgTime : 0.661210005151392
 {panel}
 From a not-so-busy 2.4 cluster:
 {panel}
 AddBlockAvgTime : 10.084
 CreateAvgTime : 1.0
 CompleteAvgTime : 1.1112
 BlockReceivedAndDeletedAvgTime : 0.07692307692307694
 {panel}
 When the 2.4 cluster gets a moderate amount of write requests, the latency is 
 terrible, e.g. addBlock goes upwards of 60 ms. 





[jira] [Updated] (HDFS-7963) Fix expected tracing spans in TestTracing along with HDFS-7054

2015-03-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7963:
-
Description: There are no tracing spans named DFSOutputStream any more. In 
addition, spans having multiple parents do not have a specific trace id.

 Fix expected tracing spans in TestTracing along with HDFS-7054
 --

 Key: HDFS-7963
 URL: https://issues.apache.org/jira/browse/HDFS-7963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Critical
 Attachments: HDFS-7963.001.patch


 There are no tracing spans named DFSOutputStream any more. In addition, spans 
 having multiple parents do not have a specific trace id.





[jira] [Updated] (HDFS-7963) Fix expected tracing spans in TestTracing along with HDFS-7054

2015-03-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7963:
-
Description: (was: Setting the target version to 2.7.0. We don't want 
to release it with the test broken.)

Setting the target version to 2.7.0. We don't want to release it with the test 
broken.

 Fix expected tracing spans in TestTracing along with HDFS-7054
 --

 Key: HDFS-7963
 URL: https://issues.apache.org/jira/browse/HDFS-7963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Critical
 Attachments: HDFS-7963.001.patch








[jira] [Updated] (HDFS-7597) DNs should not open new NN connections when webhdfs clients seek

2015-03-20 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-7597:
--
Summary: DNs should not open new NN connections when webhdfs clients seek  
(was: Clients seeking over webhdfs may crash the NN)

 DNs should not open new NN connections when webhdfs clients seek
 

 Key: HDFS-7597
 URL: https://issues.apache.org/jira/browse/HDFS-7597
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7597.patch, HDFS-7597.patch


 Webhdfs seeks involve closing the current connection and reissuing a new 
 open request with the new offset.  The RPC layer caches connections, so the DN 
 keeps a lingering connection open to the NN.  Connection caching is in part 
 based on UGI.  Although the client used the same token for the new offset 
 request, the UGI is different, which forces the DN to open another unnecessary 
 connection to the NN.
 A job that performs many seeks will easily crash the NN due to fd exhaustion.
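The cache-miss mechanism described above can be illustrated with a toy cache keyed by object identity, a hypothetical stand-in for the RPC layer's UGI-keyed connection cache (class and names invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class UgiCacheMissSketch {
    // Stand-in for a UGI: equality falls back to object identity.
    static final class Ugi {
        final String user;
        Ugi(String user) { this.user = user; }
    }

    public static void main(String[] args) {
        Map<Ugi, String> connectionCache = new HashMap<>();
        // Two webhdfs opens by the same logical user, each with a fresh UGI object:
        connectionCache.computeIfAbsent(new Ugi("alice"), u -> "connection-1");
        connectionCache.computeIfAbsent(new Ugi("alice"), u -> "connection-2");
        // Identity-keyed entries never match, so every seek opens a new NN connection.
        System.out.println(connectionCache.size()); // 2
    }
}
```

Each seek therefore leaks a cached-but-unshareable connection on the NN side, and a seek-heavy job multiplies that by its task count, hence the fd exhaustion.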





[jira] [Commented] (HDFS-7212) Huge number of BLOCKED threads rendering DataNodes useless

2015-03-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371285#comment-14371285
 ] 

Jason Lowe commented on HDFS-7212:
--

Wondering if you were seeing the same thing as HADOOP-11333 which is fixed in 
2.7.0.  Does the stacktrace in HADOOP-11333 match what you were seeing?

 Huge number of BLOCKED threads rendering DataNodes useless
 --

 Key: HDFS-7212
 URL: https://issues.apache.org/jira/browse/HDFS-7212
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
 Environment: PROD
Reporter: Istvan Szukacs

 There are 3000-8000 threads in each datanode JVM, blocking the entire VM, 
 rendering the service unusable, missing heartbeats, and stopping data 
 access. The threads look like this:
 {code}
 3415 (state = BLOCKED)
 - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
 be imprecise)
 - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
 line=186 (Compiled frame)
 - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() 
 @bci=1, line=834 (Interpreted frame)
 - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node,
  int) @bci=67, line=867 (Interpreted frame)
 - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) @bci=17, 
 line=1197 (Interpreted frame)
 - java.util.concurrent.locks.ReentrantLock$NonfairSync.lock() @bci=21, 
 line=214 (Compiled frame)
 - java.util.concurrent.locks.ReentrantLock.lock() @bci=4, line=290 (Compiled 
 frame)
 - 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(org.apache.hadoop.net.unix.DomainSocket,
  org.apache.hadoop.net.unix.DomainSocketWatcher$Handler) @bci=4, line=286 
 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(java.lang.String,
  org.apache.hadoop.net.unix.DomainSocket) @bci=169, line=283 (Interpreted 
 frame)
 - 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(java.lang.String)
  @bci=212, line=413 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(java.io.DataInputStream)
  @bci=13, line=172 (Interpreted frame)
 - 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(org.apache.hadoop.hdfs.protocol.datatransfer.Op)
  @bci=149, line=92 (Compiled frame)
 - org.apache.hadoop.hdfs.server.datanode.DataXceiver.run() @bci=510, line=232 
 (Compiled frame)
 - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame)
 {code}
 Has anybody seen this before?





[jira] [Updated] (HDFS-5241) Provide alternate queuing audit logger to reduce logging contention

2015-03-20 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5241:
-
Fix Version/s: 2.3.0

 Provide alternate queuing audit logger to reduce logging contention
 ---

 Key: HDFS-5241
 URL: https://issues.apache.org/jira/browse/HDFS-5241
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 2.3.0

 Attachments: HDFS-5241.patch, HDFS-5241.patch


 The default audit logger has extremely poor performance.  The internal 
 synchronization of log4j causes massive contention between the call handlers 
 (100 by default) which drastically limits the throughput of the NN.
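The queuing approach this sub-task introduces, where handler threads enqueue audit events and a single background thread does the slow, synchronized write, can be sketched as follows. This is a minimal illustration with invented names, not the actual patch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class QueuingAuditLoggerSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    final AtomicInteger written = new AtomicInteger();

    // Called by many RPC handler threads: a cheap enqueue, no appender lock held.
    void logAuditEvent(String event) throws InterruptedException {
        queue.put(event);
    }

    // A single drain thread owns the (slow, synchronized) appender.
    Thread startDrainer() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    queue.take();              // blocks until an event arrives
                    written.incrementAndGet(); // stand-in for the real log write
                }
            } catch (InterruptedException ignored) { }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        QueuingAuditLoggerSketch logger = new QueuingAuditLoggerSketch();
        logger.startDrainer();
        for (int i = 0; i < 100; i++) {
            logger.logAuditEvent("cmd=open src=/f" + i); // 100 "handlers" enqueue cheaply
        }
        while (logger.written.get() < 100) Thread.sleep(10); // wait for the drain
        System.out.println(logger.written.get()); // 100
    }
}
```

The key property is that contention moves from the log4j appender lock, held for the duration of each write, to a bounded queue whose enqueue is far cheaper, so 100 handlers no longer serialize on I/O.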





[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371294#comment-14371294
 ] 

Hadoop QA commented on HDFS-7854:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705883/HDFS-7854-006.patch
  against trunk revision 8041267.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10001//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10001//console

This message is automatically generated.

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, 
 HDFS-7854-004-duplicate2.patch, HDFS-7854-004-duplicate3.patch, 
 HDFS-7854-004.patch, HDFS-7854-005.patch, HDFS-7854-006.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.





[jira] [Commented] (HDFS-7963) Fix expected tracing spans in TestTracing along with HDFS-7054

2015-03-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371344#comment-14371344
 ] 

Kihwal Lee commented on HDFS-7963:
--

[~cmccabe], I think you are most familiar with this. Could you review this?

 Fix expected tracing spans in TestTracing along with HDFS-7054
 --

 Key: HDFS-7963
 URL: https://issues.apache.org/jira/browse/HDFS-7963
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Critical
 Attachments: HDFS-7963.001.patch


 There are no tracing spans named DFSOutputStream any more. In addition, spans 
 having multiple parents do not have a specific trace id.





[jira] [Commented] (HDFS-7941) SequenceFile.Writer.hsync() not working?

2015-03-20 Thread Sverre Bakke (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371459#comment-14371459
 ] 

Sverre Bakke commented on HDFS-7941:


Ok, I just tested this with BLOCK compression, RECORD compression as well as 
NONE compression, and got the same result in all modes. Furthermore, I tested 
without sequence files (i.e. normal files), and there too it only syncs at the 
very beginning and never again until the file is closed.

 SequenceFile.Writer.hsync() not working?
 

 Key: HDFS-7941
 URL: https://issues.apache.org/jira/browse/HDFS-7941
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.6.0
 Environment: HDP 2.2 running on Redhat
Reporter: Sverre Bakke

 When using SequenceFile.Writer and appending+syncing to file repeatedly, the 
 sync does not appear to work other than:
 - once after writing headers
 - when closing.
 Imagine the following test case:
 http://pastebin.com/Y9xysCRX
 This code appends a new record every second and then immediately syncs it. 
 One would expect the file to grow with every append; however, this does not 
 happen.
 After watching the behavior, I noticed that it only syncs the headers at the 
 very beginning (producing a file of 164 bytes) and then never again until the 
 file is closed, despite being asked to hsync() after every append.
 Looking into the debug logs, this also claims the same behavior (executed the 
 provided code example and grepped for sync):
 SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder.
 SLF4J: Defaulting to no-operation (NOP) logger implementation
 SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
 details.
 2015-03-17 15:55:14 DEBUG ProtobufRpcEngine:253 - Call: fsync took 11ms
 This was the only time the code ran fsync throughout the entire execution.
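For contrast, the behaviour the reporter expected, visible growth after every sync, is what a local-filesystem analogue shows. Here FileChannel.force() stands in for hsync(); this is a sketch of the expected semantics, not the HDFS SequenceFile API:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class LocalSyncSketch {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("sync-sketch", ".dat");
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            FileChannel ch = raf.getChannel();
            for (int i = 0; i < 3; i++) {
                raf.write(("record-" + i + "\n").getBytes(StandardCharsets.UTF_8));
                ch.force(true); // analogous intent to hsync(): push data to the device
                System.out.println(f.length()); // 9, 18, 27 -- grows after every sync
            }
        } finally {
            f.delete();
        }
    }
}
```

If SequenceFile.Writer buffers records internally and hsync() does not first flush that buffer, the NameNode never sees new data and the observed file length stays at the 164-byte header, which matches the single fsync call in the debug log above.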





[jira] [Commented] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371473#comment-14371473
 ] 

Hudson commented on HDFS-7962:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2088 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2088/])
HDFS-7962. Remove duplicated logs in BlockManager. (yliu) (yliu: rev 
978ef11f26794c22c7289582653b32268478e23e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 Also refine some log messages.





[jira] [Commented] (HDFS-7816) Unable to open webhdfs paths with +

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371476#comment-14371476
 ] 

Hudson commented on HDFS-7816:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2088 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2088/])
HDFS-7816. Unable to open webhdfs paths with +. Contributed by Haohui Mai 
(kihwal: rev e79be0ee123d05104eb34eb854afcf9fa78baef2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java


 Unable to open webhdfs paths with +
 -

 Key: HDFS-7816
 URL: https://issues.apache.org/jira/browse/HDFS-7816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Jason Lowe
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7816.002.patch, HDFS-7816.patch, HDFS-7816.patch


 webhdfs requests to open files with % characters in the filename fail because 
 the filename is not being decoded properly.  For example:
 $ hadoop fs -cat 'webhdfs://nn/user/somebody/abc%def'
 cat: File does not exist: /user/somebody/abc%25def





[jira] [Commented] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371452#comment-14371452
 ] 

Hudson commented on HDFS-7962:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/138/])
HDFS-7962. Remove duplicated logs in BlockManager. (yliu) (yliu: rev 
978ef11f26794c22c7289582653b32268478e23e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 Also refine some log messages.





[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371457#comment-14371457
 ] 

Hudson commented on HDFS-7932:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/138/])
HDFS-7932. Speed up the shutdown of datanode during rolling upgrade. 
Contributed by Kihwal Lee. (kihwal: rev 
61a4c7fc9891def0e85edf7e41d74c6b92c85fdb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.7.0

 Attachments: HDFS-7932.patch, HDFS-7932.patch


 The datanode normally exits within 3 seconds of receiving the 
 {{shutdownDatanode}} command. However, sometimes it doesn't, especially when 
 I/O is busy. 





[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371454#comment-14371454
 ] 

Hudson commented on HDFS-7930:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/138/])
HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu) 
(yliu: rev e37ca221bf4e9ae5d5e667d8ca284df9fdb33199)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 commitBlockSynchronization() does not remove locations
 --

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7930.001.patch, HDFS-7930.002.patch, 
 HDFS-7930.003.patch


 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
 original block, it does not remove unconfirmed locations. As a result, the 
 block stores locations with mismatched lengths or genStamps (corrupt).





[jira] [Commented] (HDFS-7816) Unable to open webhdfs paths with +

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371455#comment-14371455
 ] 

Hudson commented on HDFS-7816:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/138/])
HDFS-7816. Unable to open webhdfs paths with +. Contributed by Haohui Mai 
(kihwal: rev e79be0ee123d05104eb34eb854afcf9fa78baef2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java


 Unable to open webhdfs paths with +
 -

 Key: HDFS-7816
 URL: https://issues.apache.org/jira/browse/HDFS-7816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Jason Lowe
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7816.002.patch, HDFS-7816.patch, HDFS-7816.patch


 webhdfs requests to open files with % characters in the filename fail because 
 the filename is not being decoded properly.  For example:
 $ hadoop fs -cat 'webhdfs://nn/user/somebody/abc%def'
 cat: File does not exist: /user/somebody/abc%25def





[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14371478#comment-14371478
 ] 

Hudson commented on HDFS-7932:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2088 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2088/])
HDFS-7932. Speed up the shutdown of datanode during rolling upgrade. 
Contributed by Kihwal Lee. (kihwal: rev 
61a4c7fc9891def0e85edf7e41d74c6b92c85fdb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.7.0

 Attachments: HDFS-7932.patch, HDFS-7932.patch


 The datanode normally exits within 3 seconds of receiving the 
 {{shutdownDatanode}} command. However, sometimes it doesn't, especially when 
 the I/O is busy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371475#comment-14371475
 ] 

Hudson commented on HDFS-7930:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2088 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2088/])
HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu) 
(yliu: rev e37ca221bf4e9ae5d5e667d8ca284df9fdb33199)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 commitBlockSynchronization() does not remove locations
 --

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7930.001.patch, HDFS-7930.002.patch, 
 HDFS-7930.003.patch


 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
 original block, it does not remove the unconfirmed locations. As a result, 
 the block stores locations with differing lengths or genStamps (corrupt).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372517#comment-14372517
 ] 

Hadoop QA commented on HDFS-6826:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706099/HDFS-6826.15.patch
  against trunk revision 7f1e2f9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10017//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10017//console

This message is automatically generated.

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826.15.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, HDFS-6826v5.patch, 
 HDFS-6826v6.patch, HDFS-6826v7.1.patch, HDFS-6826v7.2.patch, 
 HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, HDFS-6826v7.5.patch, 
 HDFS-6826v7.6.patch, HDFS-6826v7.patch, HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When HBase data, HiveMetaStore data, or Search data is accessed via services 
 (HBase region servers, HiveServer2, Impala, Solr), the services can enforce 
 permissions on the corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data files are 
 accessed directly by users (e.g. from a MapReduce job), that the permissions 
 of the data files map to the permissions of the corresponding data entity 
 (e.g. table, column family, or search collection).
 To enable this we need the necessary hooks in place in the NameNode to 
 delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities' permissions.
 I'll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372485#comment-14372485
 ] 

Colin Patrick McCabe commented on HDFS-7960:


Ok, this version adds a good unit test and addresses the previous issues with 
not passing the context. I fixed up some of the logs to include the block 
report id and added some more comments.

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372487#comment-14372487
 ] 

Hadoop QA commented on HDFS-7960:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706118/HDFS-7960.005.patch
  against trunk revision e1feb4e.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10018//console

This message is automatically generated.

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6658) Namenode memory optimization - Block replicas list

2015-03-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372508#comment-14372508
 ] 

Colin Patrick McCabe commented on HDFS-6658:


Daryn, I apologize for not being more responsive on this.  I've been dealing 
with some burning fires around here and haven't had time to look at it more.  
It would be nice if this could help with the goals of HDFS-7836, especially 
multi-threading block report processing and getting the heap below 32GB in the 
long term.  Right now I don't see a path from this patch to there but very 
possibly I'm missing something.  Let's chat about it sometime next week.

 Namenode memory optimization - Block replicas list 
 ---

 Key: HDFS-6658
 URL: https://issues.apache.org/jira/browse/HDFS-6658
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.4.1
Reporter: Amir Langer
Assignee: Daryn Sharp
 Attachments: BlockListOptimizationComparison.xlsx, BlocksMap 
 redesign.pdf, HDFS-6658.patch, HDFS-6658.patch, HDFS-6658.patch, Namenode 
 Memory Optimizations - Block replicas list.docx, New primative indexes.jpg, 
 Old triplets.jpg


 Part of the memory consumed by every BlockInfo object in the Namenode is a 
 linked list of block references for every DatanodeStorageInfo (called 
 triplets). 
 We propose to change the way we store the list in memory. 
 Using primitive integer indexes instead of object references will reduce the 
 memory needed for every block replica (when compressed oops is disabled), and 
 in our new design the list overhead will be per DatanodeStorageInfo rather 
 than per block replica.
 See the attached design doc for details and evaluation results.
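As a toy illustration of the technique (not the actual BlocksMap code), a per-storage block list can be threaded through a primitive int "next" array, so each replica costs one int rather than an object reference:

```java
import java.util.Arrays;

/** Toy sketch: a per-storage block list kept as primitive int indexes
 *  instead of per-block object references (not the real BlocksMap). */
public class IntIndexBlockList {
    // next[i] holds the index of the block that follows block i in this
    // storage's list, or -1 at the end: one int instead of an object ref.
    private final int[] next;
    private int head = -1; // index of the first block in this storage's list

    IntIndexBlockList(int capacity) {
        next = new int[capacity];
        Arrays.fill(next, -1);
    }

    void add(int blockIndex) {       // O(1) insert at the head
        next[blockIndex] = head;
        head = blockIndex;
    }

    int countBlocks() {              // walk the list via indexes
        int n = 0;
        for (int i = head; i != -1; i = next[i]) n++;
        return n;
    }

    public static void main(String[] args) {
        IntIndexBlockList list = new IntIndexBlockList(10);
        list.add(3);
        list.add(7);
        System.out.println(list.countBlocks()); // 2
    }
}
```

With 64-bit object references (compressed oops disabled), each list link shrinks from 8 bytes to 4, and the array itself is owned by the storage rather than by each block.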



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7960:
---
Attachment: HDFS-7960.006.patch

rebase on trunk

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6353) Handle checkpoint failure more gracefully

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372484#comment-14372484
 ] 

Hadoop QA commented on HDFS-6353:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706082/HDFS-6353.002.patch
  against trunk revision fe5c23b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10014//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10014//console

This message is automatically generated.

 Handle checkpoint failure more gracefully
 -

 Key: HDFS-6353
 URL: https://issues.apache.org/jira/browse/HDFS-6353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-6353.000.patch, HDFS-6353.001.patch, 
 HDFS-6353.002.patch


 One failure pattern I have seen is that, in some rare circumstances, an 
 inconsistency causes the secondary or standby to fail to consume the editlog. 
 The only solution when this happens is to save the namespace at the current 
 active namenode. But sometimes an unsuspecting admin might instead end up 
 restarting the namenode, requiring a more complicated fix to the problem 
 (such as ignoring the editlog record that cannot be consumed).
 How about adding the following functionality:
 When the checkpointer (standby or secondary) fails to consume the editlog, a 
 configurable flag (on/off) lets it notify the active namenode of the failure. 
 The active namenode can then enter safemode and save the namespace. While in 
 this type of safemode, the namenode UI also shows information about the 
 checkpoint failure and that the namespace is being saved. Once the namespace 
 is saved, the namenode can come out of safemode.
 This means service unavailability (even in an HA cluster), but it might be 
 worth it to avoid long startup times or the need for other manual fixes. 
 Thoughts?
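The proposed flow can be sketched as a small state machine; all names below are illustrative, not the real NameNode API:

```java
/** Toy sketch of the proposed flow: on a checkpoint-consumption failure,
 *  the active NN enters safemode, saves the namespace, then leaves safemode.
 *  Names are illustrative, not the actual NameNode API. */
public class CheckpointFailureHandler {
    enum State { ACTIVE, SAFEMODE }

    private State state = State.ACTIVE;
    private int namespaceSaves = 0;
    private final boolean saveOnCheckpointFailure; // the proposed on/off flag

    CheckpointFailureHandler(boolean saveOnCheckpointFailure) {
        this.saveOnCheckpointFailure = saveOnCheckpointFailure;
    }

    State state() { return state; }
    int namespaceSaves() { return namespaceSaves; }

    /** Called when the standby/secondary reports it cannot consume the editlog. */
    void onCheckpointFailure() {
        if (!saveOnCheckpointFailure) {
            return;                   // flag off: keep today's behavior
        }
        state = State.SAFEMODE;       // block mutations while saving
        saveNamespace();
        state = State.ACTIVE;         // leave safemode once the image is saved
    }

    private void saveNamespace() {
        namespaceSaves++;             // placeholder for writing a fresh fsimage
    }

    public static void main(String[] args) {
        CheckpointFailureHandler h = new CheckpointFailureHandler(true);
        h.onCheckpointFailure();
        System.out.println(h.namespaceSaves()); // 1
        System.out.println(h.state());          // ACTIVE
    }
}
```

The window of unavailability is exactly the duration of saveNamespace(), which is the trade-off the proposal weighs against a long recovery later.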



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7960:
---
Status: Open  (was: Patch Available)

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7960:
---
Status: Patch Available  (was: Open)

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7960:
---
Attachment: (was: HDFS-7960.004.patch)

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7960:
---
Attachment: HDFS-7960.005.patch

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372493#comment-14372493
 ] 

Hadoop QA commented on HDFS-7713:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706095/HDFS-7713.07.patch
  against trunk revision fe5c23b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10015//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10015//console

This message is automatically generated.

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7960:
---
Attachment: HDFS-7960.004.patch

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.004.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7847) Modify NNThroughputBenchmark to be able to operate on a remote NameNode

2015-03-20 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb resolved HDFS-7847.

   Resolution: Fixed
Fix Version/s: HDFS-7836

Committed to HDFS-7836 branch.

 Modify NNThroughputBenchmark to be able to operate on a remote NameNode
 ---

 Key: HDFS-7847
 URL: https://issues.apache.org/jira/browse/HDFS-7847
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7836
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Fix For: HDFS-7836

 Attachments: HDFS-7847.000.patch, HDFS-7847.001.patch, 
 HDFS-7847.002.patch, HDFS-7847.003.patch, make_blocks.tar.gz


 Modify NNThroughputBenchmark to be able to operate on a NameNode that is not 
 in-process. A follow-on JIRA will modify it further to allow quantifying 
 native and Java heap sizes, and to gather some latency numbers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7962) Remove duplicated logs in BlockManager

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371193#comment-14371193
 ] 

Hudson commented on HDFS-7962:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #872 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/872/])
HDFS-7962. Remove duplicated logs in BlockManager. (yliu) (yliu: rev 
978ef11f26794c22c7289582653b32268478e23e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 Remove duplicated logs in BlockManager
 --

 Key: HDFS-7962
 URL: https://issues.apache.org/jira/browse/HDFS-7962
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7962.001.patch


 There are a few duplicated logs in {{BlockManager}}.
 This change also makes a few refinements to the logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7816) Unable to open webhdfs paths with +

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371196#comment-14371196
 ] 

Hudson commented on HDFS-7816:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #872 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/872/])
HDFS-7816. Unable to open webhdfs paths with +. Contributed by Haohui Mai 
(kihwal: rev e79be0ee123d05104eb34eb854afcf9fa78baef2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java


 Unable to open webhdfs paths with +
 -

 Key: HDFS-7816
 URL: https://issues.apache.org/jira/browse/HDFS-7816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Jason Lowe
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7816.002.patch, HDFS-7816.patch, HDFS-7816.patch


 webhdfs requests to open files with % characters in the filename fail because 
 the filename is not being decoded properly.  For example:
 $ hadoop fs -cat 'webhdfs://nn/user/somebody/abc%def'
 cat: File does not exist: /user/somebody/abc%25def



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371195#comment-14371195
 ] 

Hudson commented on HDFS-7930:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #872 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/872/])
HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu) 
(yliu: rev e37ca221bf4e9ae5d5e667d8ca284df9fdb33199)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 commitBlockSynchronization() does not remove locations
 --

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7930.001.patch, HDFS-7930.002.patch, 
 HDFS-7930.003.patch


 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
 original block, it does not remove the unconfirmed locations. As a result, 
 the block stores locations with differing lengths or genStamps (corrupt).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371198#comment-14371198
 ] 

Hudson commented on HDFS-7932:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #872 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/872/])
HDFS-7932. Speed up the shutdown of datanode during rolling upgrade. 
Contributed by Kihwal Lee. (kihwal: rev 
61a4c7fc9891def0e85edf7e41d74c6b92c85fdb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.7.0

 Attachments: HDFS-7932.patch, HDFS-7932.patch


 The datanode normally exits within 3 seconds of receiving the 
 {{shutdownDatanode}} command. However, sometimes it doesn't, especially when 
 the I/O is busy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6353) Handle checkpoint failure more gracefully

2015-03-20 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372090#comment-14372090
 ] 

Jitendra Nath Pandey commented on HDFS-6353:


+1, the patch looks good to me.

 Handle checkpoint failure more gracefully
 -

 Key: HDFS-6353
 URL: https://issues.apache.org/jira/browse/HDFS-6353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-6353.000.patch, HDFS-6353.001.patch


 One of the failure patterns I have seen is: in some rare circumstances, due 
 to some inconsistency, the secondary or standby fails to consume the editlog. 
 The only solution when this happens is to save the namespace at the current 
 active namenode. But sometimes when this happens, an unsuspecting admin might 
 end up restarting the namenode, requiring a more complicated fix (such as 
 ignoring editlog records that cannot be consumed, etc.).
 How about adding the following functionality:
 When the checkpointer (standby or secondary) fails to consume the editlog, a 
 configurable flag (on/off) lets the active namenode know about the failure. 
 The active namenode can then enter safemode and save its namespace. While in 
 this type of safemode, the namenode UI also shows information about the 
 checkpoint failure and that the namespace is being saved. Once the namespace 
 is saved, the namenode can come out of safemode.
 This means service unavailability (even in an HA cluster). But it might be 
 worth it to avoid long startup times or the need for other manual fixes. 
 Thoughts?
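The proposed flow is a simple three-step sequence: enter safemode, save the namespace, leave safemode, all gated on a configuration flag. A hypothetical self-contained sketch (names are illustrative, not the actual NameNode API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of the proposal above; not real HDFS code.
public class CheckpointFailureHandler {
    private final AtomicBoolean inSafeMode = new AtomicBoolean(false);
    private final boolean reportCheckpointFailure; // the proposed on/off flag

    CheckpointFailureHandler(boolean reportCheckpointFailure) {
        this.reportCheckpointFailure = reportCheckpointFailure;
    }

    // Called when the standby/secondary reports it could not consume the editlog.
    // Returns true if the active namenode handled it by saving the namespace.
    boolean onCheckpointFailure() {
        if (!reportCheckpointFailure) {
            return false;           // feature disabled: behave as today
        }
        inSafeMode.set(true);       // 1. enter safemode (service unavailable)
        saveNamespace();            // 2. write a fresh fsimage
        inSafeMode.set(false);      // 3. leave safemode
        return true;
    }

    void saveNamespace() {
        // placeholder for persisting the namespace to disk
    }

    boolean isInSafeMode() {
        return inSafeMode.get();
    }

    public static void main(String[] args) {
        CheckpointFailureHandler h = new CheckpointFailureHandler(true);
        System.out.println(h.onCheckpointFailure()); // true
        System.out.println(h.isInSafeMode());        // false (left safemode)
    }
}
```

The flag guard matters: with it off, behavior is unchanged, so the availability cost described above is strictly opt-in.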



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6353) Handle checkpoint failure more gracefully

2015-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372108#comment-14372108
 ] 

Hadoop QA commented on HDFS-6353:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12691823/HDFS-6353.001.patch
  against trunk revision 586348e.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10008//console

This message is automatically generated.

 Handle checkpoint failure more gracefully
 -

 Key: HDFS-6353
 URL: https://issues.apache.org/jira/browse/HDFS-6353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-6353.000.patch, HDFS-6353.001.patch


 One of the failure patterns I have seen is: in some rare circumstances, due 
 to some inconsistency, the secondary or standby fails to consume the editlog. 
 The only solution when this happens is to save the namespace at the current 
 active namenode. But sometimes when this happens, an unsuspecting admin might 
 end up restarting the namenode, requiring a more complicated fix (such as 
 ignoring editlog records that cannot be consumed, etc.).
 How about adding the following functionality:
 When the checkpointer (standby or secondary) fails to consume the editlog, a 
 configurable flag (on/off) lets the active namenode know about the failure. 
 The active namenode can then enter safemode and save its namespace. While in 
 this type of safemode, the namenode UI also shows information about the 
 checkpoint failure and that the namespace is being saved. Once the namespace 
 is saved, the namenode can come out of safemode.
 This means service unavailability (even in an HA cluster). But it might be 
 worth it to avoid long startup times or the need for other manual fixes. 
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7839) Erasure coding: implement facilities in NameNode to create and manage EC zones

2015-03-20 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7839:

Description: 
As a quick first step to facilitate initial development and testing, HDFS-7347 
added EC configuration in file header as one storage policy. We have discussed 
and [concluded 
|https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14296210&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14296210]
 that EC configurations should be part of XAttr. This JIRA aims to add the 
basic EC XAttr structure. HDFS-7337 will add configurable and pluggable schema 
info.

This JIRA will follow the [plan | 
https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14370307&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14370307]
 we made under HDFS-7285.

  was:
As a quick first step to facilitate initial development and testing, HDFS-7347 
added EC configuration in file header as one storage policy. We have discussed 
and [concluded 
|https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14296210&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14296210]
 that EC configurations should be part of XAttr. This JIRA aims to add the 
basic EC XAttr structure. HDFS-7337 will add configurable and pluggable schema 
info.

To summarize, this will focus on providing relevant facilities in NameNode side 
to create and manage EC zones. EC zone will associate, reference and store the 
needed schema name as XAttr. EC schema information will be persisted in 
NameNode elsewhere centrally.


 Erasure coding: implement facilities in NameNode to create and manage EC zones
 --

 Key: HDFS-7839
 URL: https://issues.apache.org/jira/browse/HDFS-7839
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7839-000.patch, HDFS-7839-001.patch


 As a quick first step to facilitate initial development and testing, 
 HDFS-7347 added EC configuration in file header as one storage policy. We 
 have discussed and [concluded 
 |https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14296210&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14296210]
  that EC configurations should be part of XAttr. This JIRA aims to add the 
 basic EC XAttr structure. HDFS-7337 will add configurable and pluggable 
 schema info.
 This JIRA will follow the [plan | 
 https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14370307&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14370307]
  we made under HDFS-7285.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

