[jira] [Commented] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due to race condition

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060839#comment-15060839
 ] 

Arpit Agarwal commented on HDFS-9565:
-

+1 pending Jenkins.

> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due 
> to race condition
> -
>
> Key: HDFS-9565
> URL: https://issues.apache.org/jira/browse/HDFS-9565
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9565.001.patch
>
>
> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
> fails with the following error:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes
> Error Message:
> Unexpected num storage ids expected:<2> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)
> {noformat}
> It appears that this test fails due to a race condition: it does not wait 
> for file replication to finish before checking the file's status. 
> This flaky test can be fixed by using DFSTestUtil.waitForReplication().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky

2015-12-16 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9565:
-

 Summary: 
TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky
 Key: HDFS-9565
 URL: https://issues.apache.org/jira/browse/HDFS-9565
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0
 Environment: Jenkins
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
fails with the following error:
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
{noformat}
FAILED:  
org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes

Error Message:
Unexpected num storage ids expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)

{noformat}

It appears that this test fails due to a race condition: it does not wait for 
file replication to finish before checking the file's status. 

This flaky test can be fixed by using DFSTestUtil.waitForReplication().
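The fix idea can be sketched generically in plain Java (a hypothetical stand-in, not Hadoop's actual DFSTestUtil code; the WaitFor class and IntSupplier predicate below are illustrative names): poll a replica-count predicate until it reaches the expected value or a timeout expires, instead of asserting immediately after file creation.

```java
import java.util.function.IntSupplier;

// Hypothetical sketch of a "wait for replication" helper: poll a
// replica-count predicate until it reaches the expected value or the
// timeout expires, rather than asserting right after file creation.
final class WaitFor {
    static boolean waitForReplication(IntSupplier currentReplicas,
                                      int expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (currentReplicas.getAsInt() >= expected) {
                return true;  // replication finished within the timeout
            }
            Thread.sleep(10);  // brief back-off between polls
        }
        return currentReplicas.getAsInt() >= expected;  // final check
    }
}
```

A test that polls like this passes as soon as replication completes, instead of racing against it.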





[jira] [Updated] (HDFS-9198) Coalesce IBR processing in the NN

2015-12-16 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-9198:
--
Attachment: HDFS-9198-trunk.patch

No functional change, just updated to account for line number drift.

> Coalesce IBR processing in the NN
> -
>
> Key: HDFS-9198
> URL: https://issues.apache.org/jira/browse/HDFS-9198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-9198-branch2.patch, HDFS-9198-trunk.patch, 
> HDFS-9198-trunk.patch, HDFS-9198-trunk.patch, HDFS-9198-trunk.patch, 
> HDFS-9198-trunk.patch
>
>
> IBRs from thousands of DNs under load will degrade NN performance due to 
> excessive write-lock contention from multiple IPC handler threads.  The IBR 
> processing is quick, so the lock contention may be reduced by coalescing 
> multiple IBRs into a single write-lock transaction.  The handlers will also 
> be freed up faster for other operations.
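The coalescing idea described above can be sketched in plain Java (a hypothetical illustration, not the actual NameNode code; class and field names are made up): handler threads enqueue IBRs without touching the write lock, and a single processor drains the queue and applies the whole batch under one lock acquisition.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of IBR coalescing: many producers enqueue reports,
// one consumer applies a whole batch per write-lock acquisition.
class IbrCoalescer {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Object writeLock = new Object();
    int lockAcquisitions = 0;  // illustrative counter of lock round-trips
    int processed = 0;

    // Called by IPC handler threads; no lock needed to hand off the IBR.
    void enqueue(String ibr) { queue.add(ibr); }

    // Drain everything currently queued and apply it in one locked section.
    void processBatch() {
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
        if (batch.isEmpty()) return;
        synchronized (writeLock) {
            lockAcquisitions++;
            processed += batch.size();  // apply each IBR under the one lock
        }
    }
}
```

With N queued reports, the lock is taken once instead of N times, which is the contention reduction the description refers to.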





[jira] [Commented] (HDFS-9487) libhdfs++ Enable builds with no compiler optimizations

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060300#comment-15060300
 ] 

Hadoop QA commented on HDFS-9487:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 33s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 12s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 10s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778027/HDFS-9487.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-9487 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 7c15831886f3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 522610d |
| unit | 

[jira] [Comment Edited] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060341#comment-15060341
 ] 

Allen Wittenauer edited comment on HDFS-402 at 12/16/15 5:21 PM:
-

With rolling upgrade, this is very important now, at least to reflect the 
datanode version.  The fact that this wasn't completed as part of that JIRA 
goes to the level of 'fit and finish' that seems to be permeating Hadoop the 
past few years.


was (Author: aw):
With rolling upgrade, this is very important now.  The fact that this wasn't 
completed as part of that JIRA goes to the level of 'fit and finish' that seems 
to be permeating Hadoop the past few years.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determining the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Reopened] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-402:
---

With rolling upgrade, this is very important now.  The fact that this wasn't 
completed as part of that JIRA goes to the level of 'fit and finish' that seems 
to be permeating Hadoop the past few years.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determining the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Updated] (HDFS-9325) Allow the location of hadoop source tree resources to be passed to CMake during a build.

2015-12-16 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9325:
-
Attachment: HDFS-9325.HDFS-8707.003.patch

New patch: include connection in the lib output

> Allow the location of hadoop source tree resources to be passed to CMake 
> during a build.
> 
>
> Key: HDFS-9325
> URL: https://issues.apache.org/jira/browse/HDFS-9325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Bob Hansen
> Attachments: HDFS-9325.HDFS-8707.001.patch, 
> HDFS-9325.HDFS-8707.002.patch, HDFS-9325.HDFS-8707.003.patch
>
>
> It would be nice if CMake could take an optional parameter with the location 
> of hdfs.h that typically lives at libhdfs/includes/hdfs/hdfs.h, otherwise it 
> would default to this location.  This would be useful for projects using 
> libhdfs++ that gather headers defining library APIs in a single location.





[jira] [Updated] (HDFS-8674) Improve performance of postponed block scans

2015-12-16 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-8674:
--
Attachment: HDFS-8674.patch

Updated patch since a data structure change prevented it from applying cleanly.

> Improve performance of postponed block scans
> 
>
> Key: HDFS-8674
> URL: https://issues.apache.org/jira/browse/HDFS-8674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-8674.patch, HDFS-8674.patch
>
>
> When a standby goes active, it marks all nodes as "stale", which will cause 
> block invalidations for over-replicated blocks to be queued until full block 
> reports are received from the nodes with the block.  The replication monitor 
> scans the queue with O(N) runtime.  It picks a random offset and iterates 
> through the set to randomize blocks scanned.
> The result is devastating when a cluster loses multiple nodes during a 
> rolling upgrade. Re-replication occurs, the nodes come back, the excess block 
> invalidations are postponed. Rescanning just 2k blocks out of millions of 
> postponed blocks may take multiple seconds. During the scan, the write lock 
> is held which stalls all other processing.
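The scan cost described above can be illustrated with a small plain-Java sketch (hypothetical, not NameNode code; PostponedScan is a made-up name): reaching a random offset in a hash-based set means advancing an iterator element by element, so scanning even a small batch costs O(offset + batch) iterator steps, i.e. O(N) in the worst case.

```java
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative cost model for scanning a batch at a random offset in a set:
// the iterator must be advanced past `offset` elements before any of the
// `batch` elements of interest are even reached.
final class PostponedScan {
    static int stepsToScan(Set<Long> blocks, int offset, int batch) {
        Iterator<Long> it = blocks.iterator();
        int steps = 0;
        for (int i = 0; i < offset && it.hasNext(); i++, steps++) it.next();
        for (int i = 0; i < batch && it.hasNext(); i++, steps++) it.next();
        return steps;
    }
}
```

For millions of postponed blocks, the skip phase dominates: a 2k-block batch at a random offset still walks a large fraction of the whole set while the write lock is held.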





[jira] [Commented] (HDFS-8477) describe dfs.ha.zkfc.port in hdfs-default.xml

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060698#comment-15060698
 ] 

Hadoop QA commented on HDFS-8477:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 203m 23s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 194m 48s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 4s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 457m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestDFSClientFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestLocalDFS |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | 

[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2015-12-16 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060622#comment-15060622
 ] 

Daryn Sharp commented on HDFS-9525:
---

bq. If we want to distcp from a non-kerberos cluster to a kerberos cluster, 
WebHDFS does not use the delegation token even when the ugi has the webhdfs token.

I thought the issue at hand is how to access 2 kerberos clusters?  If the other 
cluster is insecure, then just set 
ipc.client.fallback-to-simple-auth-allowed=true.  Even though the key has ipc 
in it, it still applies to webhdfs too.
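Assuming the standard Hadoop XML configuration mechanism, the setting mentioned above would be placed in the client-side core-site.xml roughly as follows (a sketch; where it belongs may vary by deployment):

```xml
<!-- core-site.xml sketch: despite the "ipc" prefix, per the comment above
     this fallback also applies to webhdfs when the remote cluster is
     insecure. -->
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
```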

bq. It supports using a token for WebHDFS on a non-kerberos cluster.

This is the part that completely confuses me.  If it's an insecure cluster, 
tokens aren't issued.  Did you (finish what I started long ago) and issue 
tokens even with security off?  If no, then what issued the token you are 
attempting to use on the insecure cluster?

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, HDFS-9525.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than have webhdfs initialize its 
> own.  
> This would allow for cross-authentication-zone file system accesses.





[jira] [Commented] (HDFS-9552) Document types of permission checks performed for HDFS operations.

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060889#comment-15060889
 ] 

Arpit Agarwal commented on HDFS-9552:
-

Nice work [~cnauroth], thanks for documenting this.

Should this be {{WRITE (target)}} instead of {{WRITE (source)}}?
{code}
concat| NO [2]| WRITE (source) | N/A | READ (source), WRITE (destination) | N/A
{code}

The rest looks good to me.

> Document types of permission checks performed for HDFS operations.
> --
>
> Key: HDFS-9552
> URL: https://issues.apache.org/jira/browse/HDFS-9552
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9552.001.patch, HDFS-9552.002.patch, 
> hadoop-site.tar.bz2
>
>
> The HDFS permissions guide discusses our use of a POSIX-like model with read, 
> write and execute permissions associated with users, groups and the catch-all 
> other class.  However, there is no documentation that describes exactly what 
> permission checks are performed by user-facing HDFS operations.  This is a 
> frequent source of questions, so it would be good to document this.





[jira] [Commented] (HDFS-9557) Reduce object allocation in PB conversion

2015-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060629#comment-15060629
 ] 

Hudson commented on HDFS-9557:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #8976 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8976/])
HDFS-9557. Reduce object allocation in PB conversion. Contributed by (cnauroth: 
rev c470c8953d4927043b6383fad8e792289c634c09)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java


> Reduce object allocation in PB conversion
> -
>
> Key: HDFS-9557
> URL: https://issues.apache.org/jira/browse/HDFS-9557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0
>
> Attachments: HDFS-9557.patch, HDFS-9557.patch
>
>
> PB conversions use {{ByteString.copyFrom}} to populate the builder.  
> Unfortunately this creates unique instances for empty arrays instead of 
> returning the singleton {{ByteString.EMPTY}}.
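The allocation issue can be sketched with a plain-Java stand-in (the Bytes class below is hypothetical, not the real protobuf ByteString API): the naive copy allocates a fresh object even for empty input, while the fixed version returns a shared EMPTY singleton.

```java
import java.util.Arrays;

// Hypothetical stand-in for an immutable byte container: copying an empty
// array should return a shared singleton rather than a new instance.
final class Bytes {
    static final Bytes EMPTY = new Bytes(new byte[0]);
    private final byte[] data;
    private Bytes(byte[] data) { this.data = data; }

    // Naive version: always allocates, even for empty input.
    static Bytes copyFromNaive(byte[] src) {
        return new Bytes(Arrays.copyOf(src, src.length));
    }

    // Fixed version: short-circuits empty input to the shared singleton.
    static Bytes copyFrom(byte[] src) {
        return src.length == 0 ? EMPTY
                               : new Bytes(Arrays.copyOf(src, src.length));
    }
}
```

In a conversion-heavy path like PB serialization, the singleton avoids one garbage object per empty field.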





[jira] [Commented] (HDFS-9173) Erasure Coding: Lease recovery for striped file

2015-12-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060590#comment-15060590
 ] 

Zhe Zhang commented on HDFS-9173:
-

Agreed on #3 above. This is the 9 RBW => 9 RUR => 9 Finalized option as 
discussed [above | 
https://issues.apache.org/jira/browse/HDFS-9173?focusedCommentId=14960092=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14960092].
 It will work if we have a way to append to RUR replicas.

I think we should have a simple solution to recover to the safe length first. 
Good thoughts Jing.

> Erasure Coding: Lease recovery for striped file
> ---
>
> Key: HDFS-9173
> URL: https://issues.apache.org/jira/browse/HDFS-9173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9173.00.wip.patch, HDFS-9173.01.patch, 
> HDFS-9173.02.step125.patch, HDFS-9173.03.patch, HDFS-9173.04.patch, 
> HDFS-9173.05.patch, HDFS-9173.06.patch, HDFS-9173.07.patch
>
>






[jira] [Updated] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9565:
--
Attachment: HDFS-9565.001.patch

rev01: use DFSTestUtil.waitForReplication() to avoid race condition between 
file creation and listLocatedStatus.

> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky
> ---
>
> Key: HDFS-9565
> URL: https://issues.apache.org/jira/browse/HDFS-9565
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9565.001.patch
>
>
> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
> fails with the following error:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes
> Error Message:
> Unexpected num storage ids expected:<2> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)
> {noformat}
> It appears that this test fails due to a race condition: it does not wait 
> for file replication to finish before checking the file's status. 
> This flaky test can be fixed by using DFSTestUtil.waitForReplication().





[jira] [Updated] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due to race condition

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9565:
--
Summary: TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is 
flaky due to race condition  (was: 
TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky)

> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due 
> to race condition
> -
>
> Key: HDFS-9565
> URL: https://issues.apache.org/jira/browse/HDFS-9565
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9565.001.patch
>
>
> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
> fails with the following error:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes
> Error Message:
> Unexpected num storage ids expected:<2> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)
> {noformat}
> It appears that this test fails due to a race condition: it does not wait 
> for file replication to finish before checking the file's status. 
> This flaky test can be fixed by using DFSTestUtil.waitForReplication().





[jira] [Updated] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due to race condition

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9565:
--
Status: Patch Available  (was: Open)

> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due 
> to race condition
> -
>
> Key: HDFS-9565
> URL: https://issues.apache.org/jira/browse/HDFS-9565
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9565.001.patch
>
>
> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
> fails with the following error:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes
> Error Message:
> Unexpected num storage ids expected:<2> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)
> {noformat}
> It appears that this test fails due to a race condition: it does not wait 
> for file replication to finish before checking the file's status.
> This flaky test can be fixed by using DFSTestUtil.waitForReplication().





[jira] [Commented] (HDFS-8674) Improve performance of postponed block scans

2015-12-16 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060295#comment-15060295
 ] 

Ming Ma commented on HDFS-8674:
---

Thanks [~daryn]. Your explanation makes sense. It is good to know the perf 
difference between "# of items < 5M" and a larger number of items. +1 on 
the patch.

> Improve performance of postponed block scans
> 
>
> Key: HDFS-8674
> URL: https://issues.apache.org/jira/browse/HDFS-8674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-8674.patch
>
>
> When a standby goes active, it marks all nodes as "stale" which will cause 
> block invalidations for over-replicated blocks to be queued until full block 
> reports are received from the nodes with the block.  The replication monitor 
> scans the queue with O(N) runtime.  It picks a random offset and iterates 
> through the set to randomize blocks scanned.
> The result is devastating when a cluster loses multiple nodes during a 
> rolling upgrade. Re-replication occurs, the nodes come back, the excess block 
> invalidations are postponed. Rescanning just 2k blocks out of millions of 
> postponed blocks may take multiple seconds. During the scan, the write lock 
> is held which stalls all other processing.
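The O(N) cost described above can be sketched without any HDFS code: reaching a random offset in a hash-ordered set requires walking every preceding element before the scan of ~2k blocks even starts (names below are illustrative, not from the NameNode source):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Random;

public class PostponedScanSketch {
    // Illustrative version of the scan pattern: pick a random offset to
    // randomize which blocks get rescanned, then take up to `limit` blocks.
    // The skip loop alone is O(N) in the size of the postponed set.
    public static List<Long> scanFromRandomOffset(LinkedHashSet<Long> postponed,
                                                  int limit, Random rng) {
        List<Long> scanned = new ArrayList<>();
        if (postponed.isEmpty()) {
            return scanned;
        }
        int offset = rng.nextInt(postponed.size());
        Iterator<Long> it = postponed.iterator();
        for (int i = 0; i < offset; i++) {
            it.next();  // pure overhead: no useful work until the offset is reached
        }
        while (it.hasNext() && scanned.size() < limit) {
            scanned.add(it.next());
        }
        return scanned;
    }
}
```

With millions of postponed blocks, the skip loop dominates, which is why the scan stalls other work while the write lock is held.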





[jira] [Resolved] (HDFS-3356) When dfs.block.size is configured to 0 the block which is created in rbw is never deleted

2015-12-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-3356.
--
Resolution: Won't Fix

This is not an issue since we now have minimum block size enforcement.

> When dfs.block.size is configured to 0 the block which is created in rbw is 
> never deleted
> -
>
> Key: HDFS-3356
> URL: https://issues.apache.org/jira/browse/HDFS-3356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: J.Andreina
>Priority: Minor
>
> dfs.block.size=0
> step 1: start NN and DN
> step 2: write a file "a.txt"
> The block is created in rbw, and since the block size is 0 the write fails 
> and the file is not closed. The DN then sends a block report with the number 
> of blocks as 1.
> Even after the DN has sent the block report and the directory scan has been 
> done, the block is never invalidated.
> In earlier versions, when dfs.block.size was configured to 0, the default 
> value was used and the write succeeded.
> NN logs:
> 
> {noformat}
> 2012-04-24 19:54:27,089 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> processReport: from DatanodeRegistration(.18.40.117, 
> storageID=DS-452047493-xx.xx.xx.xx-50076-1335277451277, infoPort=50075, 
> ipcPort=50077, 
> storageInfo=lv=-40;cid=CID-742fda5f-68f7-40a5-9d52-a2a15facc6af;nsid=797082741;c=0),
>  blocks: 0, processing time: 0 msecs
> 2012-04-24 19:54:29,689 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: /1._COPYING_. 
> BP-1612285678-xx.xx.xx.xx-1335277427136 
> blk_-262107679534121671_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[xx.xx.xx.xx:50076|RBW]]}
> 2012-04-24 19:54:30,113 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> processReport: from DatanodeRegistration(xx.xx.xx.xx, 
> storageID=DS-452047493-xx.xx.xx.xx-50076-1335277451277, infoPort=50075, 
> ipcPort=50077, 
> storageInfo=lv=-40;cid=CID-742fda5f-68f7-40a5-9d52-a2a15facc6af;nsid=797082741;c=0),
>  blocks: 1, processing time: 0 msecs{noformat}
> Exception message while writing a file:
> ===
> {noformat}
> ./hdfs dfs -put hadoop /1
> 12/04/24 19:54:30 WARN hdfs.DFSClient: DataStreamer Exception
> java.io.IOException: BlockSize 0 is smaller than data size.  Offset of packet 
> in block 4745 Aborting file /1._COPYING_
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:467)
> put: BlockSize 0 is smaller than data size.  Offset of packet in block 4745 
> Aborting file /1._COPYING_
> 12/04/24 19:54:30 ERROR hdfs.DFSClient: Failed to close file /1._COPYING_
> java.io.IOException: BlockSize 0 is smaller than data size.  Offset of packet 
> in block 4745 Aborting file /1._COPYING_
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:467){noformat}
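The resolution relies on the minimum block size enforcement that was added later. A hedged sketch of such a guard, which rejects tiny block sizes at create time instead of letting the write fail mid-stream and leave an orphaned rbw block (the helper and its message are illustrative, not the actual NameNode code):

```java
public class BlockSizeGuard {
    // Sketch of a create-time check like HDFS's minimum block size
    // enforcement: fail fast with a clear message instead of letting a
    // 0-byte block size break the write path and leave orphaned rbw blocks.
    public static void verifyBlockSize(long blockSize, long minBlockSize) {
        if (blockSize < minBlockSize) {
            throw new IllegalArgumentException("Specified block size " + blockSize
                + " is less than the configured minimum " + minBlockSize);
        }
    }
}
```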





[jira] [Commented] (HDFS-5042) Completed files lost after power failure

2015-12-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060292#comment-15060292
 ] 

Kihwal Lee commented on HDFS-5042:
--

We can make it optional and put it in after HDFS-8791. 

> Completed files lost after power failure
> 
>
> Key: HDFS-5042
> URL: https://issues.apache.org/jira/browse/HDFS-5042
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5)
>Reporter: Dave Latham
>Priority: Critical
>
> We suffered a cluster-wide power failure after which HDFS lost data that it 
> had acknowledged as closed and complete.
> The client was HBase which compacted a set of HFiles into a new HFile, then 
> after closing the file successfully, deleted the previous versions of the 
> file.  The cluster then lost power, and when brought back up the newly 
> created file was marked CORRUPT.
> Based on reading the logs it looks like the replicas were created by the 
> DataNodes in the 'blocksBeingWritten' directory.  Then when the file was 
> closed they were moved to the 'current' directory.  After the power cycle 
> those replicas were again in the blocksBeingWritten directory of the 
> underlying file system (ext3).  When those DataNodes reported in to the 
> NameNode it deleted those replicas and lost the file.
> Some possible fixes: have the DataNode fsync the directory(s) after moving 
> the block from blocksBeingWritten to current, to ensure the rename itself is 
> durable; or have the NameNode accept replicas from blocksBeingWritten under 
> certain circumstances.
> Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode):
> {noformat}
> RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: 
> Creating 
> file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  with permission=rwxrwxrwx
> NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c.
>  blk_1395839728632046111_357084589
> DN 2013-06-29 11:16:06,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block 
> blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: 
> /10.0.5.237:50010
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Received block 
> blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block 
> blk_1395839728632046111_357084589 terminating
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing 
> lease on  file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  from client DFSClient_hb_rs_hs745,60020,1372470111932
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  is closed by DFSClient_hb_rs_hs745,60020,1372470111932
> RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Renaming compacted file at 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  to 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c
> RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Completed major compaction of 7 file(s) in n of 
> users-6,\x12\xBDp\xA3,1359426311784.b5b0820cde759ae68e333b2f4015bb7e. into 
> 6e0cc30af6e64e56ba5a539fdf159c4c, size=24.2m; total size for store is 24.2m
> ---  CRASH, RESTART -
> NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: addStoredBlock request received for 
> blk_1395839728632046111_357084589 on 10.0.6.1:50010 size 21978112 but was 
> rejected: Reported as block being written but is a block of closed file.
> NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addToInvalidates: 
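The first fix proposed above, fsyncing the directory so the rename from blocksBeingWritten to current survives a power cycle, can be sketched in plain Java NIO. On Linux, force() on a read-only channel over the directory issues fsync on the directory inode; the helper name is illustrative and this is platform-dependent behavior, not the actual DataNode code:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class DurableRename {
    // Move src to dst, then fsync dst's parent directory so the new
    // directory entry (not just the file data) is durable across a crash.
    public static void renameDurably(Path src, Path dst) throws IOException {
        Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        try (FileChannel dir = FileChannel.open(dst.getParent(),
                StandardOpenOption.READ)) {
            dir.force(true);  // fsync(2) on the directory; Linux-specific behavior
        }
    }
}
```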

[jira] [Commented] (HDFS-9523) libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail

2015-12-16 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060356#comment-15060356
 ] 

James Clampffer commented on HDFS-9523:
---

Since the only difference between the last patch and the one posted yesterday 
was rebasing, I went ahead and committed it to HDFS-8707.  Thanks for the 
contribution, [~bobthansen]!

> libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail
> ---
>
> Key: HDFS-9523
> URL: https://issues.apache.org/jira/browse/HDFS-9523
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9523.HDFS-8707.000.patch, 
> HDFS-9523.HDFS-8707.001.patch, HDFS-9523.HDFS-8707.002.patch, 
> HDFS-9523.HDFS-8707.test1.patch, failed_docker_run.txt
>
>
> When run under Docker, libhdfs++ is not connecting to the mini DFS cluster.   
> This is the reason the CI tests have been failing in the 
> libhdfs_threaded_hdfspp_test_shim_static test.





[jira] [Created] (HDFS-9563) DiskBalancer: Refactor Plan Command

2015-12-16 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-9563:
---

 Summary: DiskBalancer: Refactor Plan Command
 Key: HDFS-9563
 URL: https://issues.apache.org/jira/browse/HDFS-9563
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: 2.8.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


It's quite helpful to:
1) report node information for the top X DataNodes that would benefit from 
running the disk balancer
2) report volume-level information for any specific DataNode.

This is done by:
1) reading the cluster info, sorting the DiskbalancerNodes by their 
NodeDataDensity, and printing out their corresponding information.
2) reading the cluster info and printing out volume-level information for the 
requested DataNode.
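Step 1 above (top X DataNodes by NodeDataDensity) amounts to a sort-and-limit. A minimal sketch with an illustrative node type, not the actual DiskBalancer classes:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class TopNodesReport {
    // Illustrative stand-in for a DiskBalancer node record: just a name
    // and its data-density score (higher = more unbalanced).
    public static class NodeInfo {
        public final String name;
        public final double nodeDataDensity;
        public NodeInfo(String name, double nodeDataDensity) {
            this.name = name;
            this.nodeDataDensity = nodeDataDensity;
        }
    }

    // Sort descending by density and keep the top X nodes that would
    // benefit most from running the disk balancer.
    public static List<NodeInfo> topNodes(List<NodeInfo> nodes, int topX) {
        return nodes.stream()
                .sorted(Comparator.comparingDouble((NodeInfo n) -> n.nodeDataDensity)
                        .reversed())
                .limit(topX)
                .collect(Collectors.toList());
    }
}
```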





[jira] [Updated] (HDFS-9563) DiskBalancer: Refactor Plan Command

2015-12-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9563:

Description: This is used to track refactoring plan command.  (was: It's 
quite helpful to do:
1) report node information for the top X of DataNodes that will benefit from 
running disk balancer
2) report volume level information for any specific DataNode. 

This is done by:
1) reading the cluster info, sorting the DiskbalancerNodes by their 
NodeDataDensity and printing out their corresponding information.
2) reading the cluster info, and print out volume level information for that 
DataNode requested.)

> DiskBalancer: Refactor Plan Command
> ---
>
> Key: HDFS-9563
> URL: https://issues.apache.org/jira/browse/HDFS-9563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is used to track refactoring plan command.





[jira] [Updated] (HDFS-9564) DiskBalancer: Refactor Execute Command

2015-12-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9564:

Description: This is used to track refactoring execute command.  (was: This 
is used to track refactoring plan command.)

> DiskBalancer: Refactor Execute Command
> --
>
> Key: HDFS-9564
> URL: https://issues.apache.org/jira/browse/HDFS-9564
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This is used to track refactoring execute command.





[jira] [Created] (HDFS-9564) DiskBalancer: Refactor Execute Command

2015-12-16 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-9564:
---

 Summary: DiskBalancer: Refactor Execute Command
 Key: HDFS-9564
 URL: https://issues.apache.org/jira/browse/HDFS-9564
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: 2.8.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


This is used to track refactoring plan command.





[jira] [Commented] (HDFS-9373) Show friendly information to user when client succeeds the writing with some failed streamers

2015-12-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060444#comment-15060444
 ] 

Zhe Zhang commented on HDFS-9373:
-

Thanks Bo. Could you address the checkstyle issue by adding a comma on line 
254? +1 pending that.

> Show friendly information to user when client succeeds the writing with some 
> failed streamers
> -
>
> Key: HDFS-9373
> URL: https://issues.apache.org/jira/browse/HDFS-9373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-9373-001.patch, HDFS-9373-002.patch
>
>
> When no more than PARITY_NUM streamers fail for a block group, the client 
> may still succeed in writing the data. But several exceptions are thrown to 
> the user, who then has to check the reasons. The friendlier way is to simply 
> inform the user that some streamers failed while writing a block group. It's 
> not necessary to show the details of the exceptions, because a small number 
> of streamer failures is not fatal to the client write.
> When only DATA_NUM streamers succeed, the block group is at high risk, 
> because the corruption of any block will cause all six blocks' data to be 
> lost. We should give the user an obvious warning when this occurs. 
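The reporting policy described above reduces to a threshold check on the failed-streamer count. A sketch with illustrative constants (DATA_NUM=6, PARITY_NUM=3 as in an RS-6-3 scheme; the messages and method are not taken from the patch):

```java
public class StreamerFailureReport {
    // Illustrative erasure-coding parameters (RS-6-3).
    static final int DATA_NUM = 6;
    static final int PARITY_NUM = 3;

    // Map the number of failed streamers in a block group to a
    // user-facing message level, as the description suggests.
    public static String summarize(int failedStreamers) {
        if (failedStreamers == 0) {
            return "OK: all streamers succeeded";
        }
        if (failedStreamers > PARITY_NUM) {
            return "ERROR: block group cannot be written";
        }
        if (failedStreamers == PARITY_NUM) {
            // Only DATA_NUM streamers left: losing any block loses the group.
            return "WARN: block group has no remaining redundancy";
        }
        return "INFO: " + failedStreamers
            + " streamer(s) failed; data is still protected";
    }
}
```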





[jira] [Commented] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060446#comment-15060446
 ] 

Kihwal Lee commented on HDFS-402:
-

I am not suggesting that it is the right way, but merely pointing out that 
people easily lose interest/motivation once "it's no longer my immediate 
problem."  Do you think Hadoop needs to provide a basic rolling upgrade tool in 
order to call the rolling upgrade feature complete?  If so, why don't you file 
a jira and put this one under it?

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determining the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Commented] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060505#comment-15060505
 ] 

Kihwal Lee commented on HDFS-402:
-

Sorry, I missed that. I couldn't find it with a quick search. I would 
appreciate it if you could share the jira number.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determining the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Created] (HDFS-9562) libhdfs++ RpcConnectionImpl::Connect should acquire connection state lock

2015-12-16 Thread James Clampffer (JIRA)
James Clampffer created HDFS-9562:
-

 Summary: libhdfs++ RpcConnectionImpl::Connect should acquire 
connection state lock
 Key: HDFS-9562
 URL: https://issues.apache.org/jira/browse/HDFS-9562
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer
Priority: Critical


RpcConnectionImpl::Connect calls pending_requests_.push_back() without holding 
the connection_state_lock_.  This isn't a huge issue at the moment because 
Connect generally shouldn't be called on the same instance from many threads, 
but it wouldn't hurt to add the lock to prevent future confusion.
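The fix is simply to take the existing lock on the connect path too. A Java rendering of the idea (the libhdfs++ code is C++; the names here only loosely mirror it):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RpcConnectionSketch {
    private final Object connectionStateLock = new Object();
    private final Queue<String> pendingRequests = new ArrayDeque<>();

    // Guard the queue even on paths that are "usually" single-threaded
    // today, so future callers can't corrupt it.
    public void connect(String initialRequest) {
        synchronized (connectionStateLock) {
            pendingRequests.add(initialRequest);
        }
    }

    public int pendingCount() {
        synchronized (connectionStateLock) {
            return pendingRequests.size();
        }
    }
}
```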





[jira] [Commented] (HDFS-9523) libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail

2015-12-16 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060322#comment-15060322
 ] 

James Clampffer commented on HDFS-9523:
---

Thanks for the rebase.  The patch looks good to me.  CI output will be a whole 
lot more useful once failures aren't expected with every run :) 

+1, will commit once I see a clean CI run.

> libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail
> ---
>
> Key: HDFS-9523
> URL: https://issues.apache.org/jira/browse/HDFS-9523
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9523.HDFS-8707.000.patch, 
> HDFS-9523.HDFS-8707.001.patch, HDFS-9523.HDFS-8707.002.patch, 
> HDFS-9523.HDFS-8707.test1.patch, failed_docker_run.txt
>
>
> When run under Docker, libhdfs++ is not connecting to the mini DFS cluster.   
> This is the reason the CI tests have been failing in the 
> libhdfs_threaded_hdfspp_test_shim_static test.





[jira] [Updated] (HDFS-9557) Reduce object allocation in PB conversion

2015-12-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9557:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I took a closer look, and those tests are very unpredictable, with or without 
this patch.  sigh

+1, and committed to trunk, branch-2 and branch-2.8.  Thank you, Daryn.

> Reduce object allocation in PB conversion
> -
>
> Key: HDFS-9557
> URL: https://issues.apache.org/jira/browse/HDFS-9557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0
>
> Attachments: HDFS-9557.patch, HDFS-9557.patch
>
>
> PB conversions use {{ByteString.copyFrom}} to populate the builder.  
> Unfortunately this creates unique instances for empty arrays instead of 
> returning the singleton {{ByteString.EMPTY}}.
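The same allocation pattern can be shown without protobuf: return a shared empty instance instead of allocating on every zero-length copy (ByteString itself is protobuf's class; the helper below is only an illustrative analogue):

```java
public class ByteCopies {
    private static final byte[] EMPTY = new byte[0];

    // Analogue of preferring ByteString.EMPTY over ByteString.copyFrom for
    // zero-length input: no new object per conversion when there is no data.
    public static byte[] copy(byte[] src) {
        return (src.length == 0) ? EMPTY : src.clone();
    }
}
```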





[jira] [Resolved] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-402.
-
Resolution: Won't Fix

Please reopen if anyone thinks it is important and wants to work on it.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determining the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Commented] (HDFS-9559) Add haadmin command to get HA state of all the namenodes

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060335#comment-15060335
 ] 

Hadoop QA commented on HDFS-9559:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 58s 
{color} | {color:red} Patch generated 1 new checkstyle issues in root (total 
was 27, now 28). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 44s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 58s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 26s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 53s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 188m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestDFSClientRetries |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockReplacement |
\\
\\

[jira] [Updated] (HDFS-9564) DiskBalancer: Refactor Execute Command

2015-12-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9564:

Attachment: HDFS-9564-HDFS-1312.001.patch

I posted patch v001, but it can be committed only after HDFS-9546 is committed.

> DiskBalancer: Refactor Execute Command
> --
>
> Key: HDFS-9564
> URL: https://issues.apache.org/jira/browse/HDFS-9564
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9564-HDFS-1312.001.patch
>
>
> This is used to track refactoring execute command.





[jira] [Updated] (HDFS-9563) DiskBalancer: Refactor Plan Command

2015-12-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9563:

Attachment: HDFS-9563-HDFS-1312.001.patch

I posted patch v001, but it can be committed only after HDFS-9545 is committed.

> DiskBalancer: Refactor Plan Command
> ---
>
> Key: HDFS-9563
> URL: https://issues.apache.org/jira/browse/HDFS-9563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9563-HDFS-1312.001.patch
>
>
> This is used to track refactoring plan command.





[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060431#comment-15060431
 ] 

Allen Wittenauer commented on HDFS-9525:


javac issues are directly related to YETUS-187.

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, HDFS-9525.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than having webhdfs initialize 
> its own. This would allow for cross-authentication-zone file system accesses.





[jira] [Commented] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060454#comment-15060454
 ] 

Allen Wittenauer commented on HDFS-402:
---

I don't consider rolling upgrade *stable*, much less *feature complete*, given 
the data loss we had with it.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determine what is the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Commented] (HDFS-9523) libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060318#comment-15060318
 ] 

Hadoop QA commented on HDFS-9523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
45s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 9s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 31s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 32s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778026/HDFS-9523.HDFS-8707.002.patch
 |
| JIRA Issue | HDFS-9523 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 799df574fad2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 522610d |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13897/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 79MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13897/console |


This message was automatically generated.



> libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail
> ---
>
> Key: HDFS-9523
> URL: https://issues.apache.org/jira/browse/HDFS-9523
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>

[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2015-12-16 Thread HeeSoo Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060397#comment-15060397
 ] 

HeeSoo Kim commented on HDFS-9525:
--

The test failures are unrelated to the change made for this jira.
[~daryn] and [~aw], would you please review this new patch?

Thanks,

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, HDFS-9525.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than having webhdfs initialize its 
> own. This would allow for cross-authentication-zone file system accesses.





[jira] [Commented] (HDFS-1595) DFSClient may incorrectly detect datanode failure

2015-12-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060275#comment-15060275
 ] 

Kihwal Lee commented on HDFS-1595:
--

It has been improved in HDFS-9178. This takes care of the most common failure 
diagnostic problem.

> DFSClient may incorrectly detect datanode failure
> -
>
> Key: HDFS-1595
> URL: https://issues.apache.org/jira/browse/HDFS-1595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Priority: Critical
> Attachments: hdfs-1595-idea.txt
>
>
> Suppose a source datanode S is writing to a destination datanode D in a write 
> pipeline.  We have an implicit assumption that _if S catches an exception 
> when it is writing to D, then D is faulty and S is fine._  As a result, 
> DFSClient will take out D from the pipeline, reconstruct the write pipeline 
> with the remaining datanodes and then continue writing.
> However, we find a case that the faulty machine F is indeed S but not D.  In 
> the case we found, F has a faulty network interface (or a faulty switch port) 
> in such a way that the faulty network interface works fine when transferring 
> a small amount of data, say 1MB, but it often fails when transferring a large 
> amount of data, say 100MB.
> It is even worse if F is the first datanode in the pipeline.  Consider the 
> following:
> # DFSClient creates a pipeline with three datanodes.  The first datanode is F.
> # F catches an IOException when writing to the second datanode. Then, F 
> reports the second datanode has error.
> # DFSClient removes the second datanode from the pipeline and continue 
> writing with the remaining datanode(s).
> # The pipeline now has two datanodes but (2) and (3) repeat.
> # Now, only F remains in the pipeline.  DFSClient continues writing with one 
> replica in F.
> # The write succeeds and DFSClient is able to *close the file successfully*.
> # The block is under replicated.  The NameNode schedules replication from F 
> to some other datanode D.
> # The replication fails for the same reason.  D reports to the NameNode that 
> the replica in F is corrupted.
> # The NameNode marks the replica in F as corrupted.
> # The block is corrupted since no replica is available.
> We were able to manually divide the replicas into small files and copy them 
> out from F without fixing the hardware.  The replicas seem uncorrupted.  
> This is a *data availability problem*.
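The failure mode in the numbered steps above can be illustrated with a toy simulation (hypothetical names, not HDFS code): because the faulty first datanode F always blames its downstream neighbor, pipeline recovery evicts healthy nodes one by one until only F remains.

```java
import java.util.ArrayList;
import java.util.List;

public class PipelineSim {
    // Toy model of the misattribution described above: the faulty first
    // datanode F catches the IOException and reports its downstream neighbor
    // as failed, so recovery keeps evicting healthy nodes until only F is left.
    public static void main(String[] args) {
        List<String> pipeline = new ArrayList<>(List.of("F", "D2", "D3"));
        while (pipeline.size() > 1) {
            // F blames the second datanode; the client removes it and retries.
            pipeline.remove(1);
        }
        System.out.println(pipeline);
    }
}
```

Running the sketch prints `[F]`: the write "succeeds" with a single replica on the bad node, matching step 5 of the scenario.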





[jira] [Commented] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060371#comment-15060371
 ] 

Allen Wittenauer commented on HDFS-402:
---

bq.  It is perhaps because there are already other ways to get the relevant 
information. E.g., I do not have any experience with Ambari, but it must be 
checking and reporting versions through jmx, etc.

In other words, rather than having Hadoop's built-in reporting mechanism 
actually work and give useful information, we should require a third party tool 
or make ops teams write their own reporting tools even though it's going to be 
an absolutely common request/requirement after rolling upgrades become more 
commonplace?


> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determine what is the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Commented] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060464#comment-15060464
 ] 

Kihwal Lee commented on HDFS-402:
-

Please file jiras if you haven't already.  Maybe we didn't see that since our 
custom script checks the hdfs integrity as it rolls.  If a data loss problem 
exists and it can be dealt with using a proper upgrade tool, we should include 
that in Hadoop.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determine what is the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Commented] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060467#comment-15060467
 ] 

Allen Wittenauer commented on HDFS-402:
---

Already done. Last year.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determine what is the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Commented] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060357#comment-15060357
 ] 

Kihwal Lee commented on HDFS-402:
-

bq. The fact that this wasn't completed as part of that JIRA 
It is perhaps because there are already other ways to get the relevant 
information. E.g., I do not have any experience with Ambari, but it must be 
checking and reporting versions through jmx, etc.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determine what is the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Updated] (HDFS-9523) libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail

2015-12-16 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9523:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++: failure to connect to ipv6 host causes CI unit tests to fail
> ---
>
> Key: HDFS-9523
> URL: https://issues.apache.org/jira/browse/HDFS-9523
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9523.HDFS-8707.000.patch, 
> HDFS-9523.HDFS-8707.001.patch, HDFS-9523.HDFS-8707.002.patch, 
> HDFS-9523.HDFS-8707.test1.patch, failed_docker_run.txt
>
>
> When run under Docker, libhdfs++ is not connecting to the mini DFS cluster.   
> This is the reason the CI tests have been failing in the 
> libhdfs_threaded_hdfspp_test_shim_static test.





[jira] [Commented] (HDFS-9566) Remove expensive getStorages method

2015-12-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060928#comment-15060928
 ] 

Mingliang Liu commented on HDFS-9566:
-

+1 (non-binding), pending Jenkins.

> Remove expensive getStorages method
> ---
>
> Key: HDFS-9566
> URL: https://issues.apache.org/jira/browse/HDFS-9566
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-9566.branch-2.patch, HDFS-9566.patch
>
>
> HDFS-5318 added a {{BlocksMap#getStorages(Block, State)}} which is based on 
> iterables and predicates.  The method is very expensive compared to a simple 
> comparison/continue.
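For illustration, here is a minimal sketch of the "simple comparison/continue" style the description prefers, with hypothetical types standing in for the real BlocksMap/storage classes (this is not the actual Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

public class StorageScan {
    enum State { NORMAL, FAILED }
    record Storage(String id, State state) {}

    // Plain loop with a cheap comparison/continue per element, instead of
    // building an Iterable + Predicate wrapper for every lookup.
    static List<Storage> storagesInState(List<Storage> all, State wanted) {
        List<Storage> out = new ArrayList<>();
        for (Storage s : all) {
            if (s.state() != wanted) continue;  // cheap branch, no allocation
            out.add(s);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Storage> all = List.of(new Storage("s1", State.NORMAL),
                                    new Storage("s2", State.FAILED));
        System.out.println(storagesInState(all, State.NORMAL).size());
    }
}
```

The loop does the same filtering as a predicate-based iterable but without the per-call object construction the comment flags as expensive.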





[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2015-12-16 Thread HeeSoo Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061013#comment-15061013
 ] 

HeeSoo Kim commented on HDFS-9525:
--

{quote}
I thought the issue at hand is how to access 2 kerberos clusters? If the other 
cluster is insecure, then just set 
ipc.client.fallback-to-simple-auth-allowed=true. 
{quote}
[~daryn] That use case works when the source is a kerberos cluster and the 
target is a non-kerberos (simple) cluster.
However, our use case is the opposite: our source is a non-kerberos (simple) 
cluster and the target is a kerberos cluster.
This is the use case.
# I get the token from the target cluster with kerberos using fetchdt.
# The source cluster obtains the delegation token file somehow.
# In the source cluster, we set the delegation token file in the 
hadoop.token.files parameter.
# The source cluster then tries to connect to the target cluster with 
kerberos.

Even when I set up the delegation token file on the simple-auth source cluster, 
it does not use the token.
I agree that if the source cluster does not have token information for the 
target, WebHDFS needs to request GETDELEGATIONTOKEN.
However, if the source cluster already has the right service token, WebHDFS 
should use that service token.
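The intended selection behavior could be sketched like this (hypothetical names and service keys; the real logic lives in the WebHDFS client and UGI credential handling):

```java
import java.util.HashMap;
import java.util.Map;

public class TokenSelect {
    // Hypothetical: return the provided token for the target service if one
    // was loaded (e.g. via hadoop.token.files); null means the client must
    // fall back to requesting GETDELEGATIONTOKEN from the target cluster.
    static String selectToken(Map<String, String> providedTokens, String service) {
        return providedTokens.get(service);
    }

    public static void main(String[] args) {
        Map<String, String> tokens = new HashMap<>();
        tokens.put("ha-hdfs:target", "token-from-fetchdt");
        // Token present: use the provided service token.
        System.out.println(selectToken(tokens, "ha-hdfs:target"));
        // Token absent: prints null, i.e. fetch a delegation token instead.
        System.out.println(selectToken(tokens, "ha-hdfs:other"));
    }
}
```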

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, HDFS-9525.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than having webhdfs initialize its 
> own. This would allow for cross-authentication-zone file system accesses.





[jira] [Commented] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061054#comment-15061054
 ] 

John Zhuge commented on HDFS-9568:
--

Thx [~aw] for bringing attention to HDFS-7499, which seems to focus on 
Kerberos authentication. Should we make it a sub-task of this umbrella jira?

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. There are some NFSv4 features quite suitable 
> for Hadoop's distributed environment, in addition to simplified configuration 
> and added security.
> This JIRA is to track NFSv4 support to access HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Commented] (HDFS-9552) Document types of permission checks performed for HDFS operations.

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061108#comment-15061108
 ] 

Arpit Agarwal commented on HDFS-9552:
-

Thanks, +1 for the v003 patch pending Jenkins.

> Document types of permission checks performed for HDFS operations.
> --
>
> Key: HDFS-9552
> URL: https://issues.apache.org/jira/browse/HDFS-9552
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9552.001.patch, HDFS-9552.002.patch, 
> HDFS-9552.003.patch, hadoop-site.tar.bz2
>
>
> The HDFS permissions guide discusses our use of a POSIX-like model with read, 
> write and execute permissions associated with users, groups and the catch-all 
> other class.  However, there is no documentation that describes exactly what 
> permission checks are performed by user-facing HDFS operations.  This is a 
> frequent source of questions, so it would be good to document this.





[jira] [Commented] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due to race condition

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061137#comment-15061137
 ] 

Wei-Chiu Chuang commented on HDFS-9565:
---

Test failures are all unrelated, because this patch only touches 
TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes.

> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due 
> to race condition
> -
>
> Key: HDFS-9565
> URL: https://issues.apache.org/jira/browse/HDFS-9565
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9565.001.patch
>
>
> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
> fails with the following error:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes
> Error Message:
> Unexpected num storage ids expected:<2> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)
> {noformat}
> It appears that this test failed due to a race condition: it does not wait 
> for file replication to finish before checking the file's status. 
> This flaky test can be fixed by using DFSTestUtil.waitForReplication().
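The suggested fix amounts to polling until the expected state is reached instead of asserting immediately. A generic, self-contained sketch of that wait-for-condition pattern (hypothetical helper shown for illustration; the actual test should call DFSTestUtil.waitForReplication()):

```java
import java.util.function.Supplier;

public class WaitFor {
    // Poll check.get() every intervalMs until it returns true, or throw
    // once timeoutMs has elapsed -- the same shape as test wait utilities.
    public static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("Timed out waiting for condition");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms, well within the 1 s timeout.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 1000);
        System.out.println("condition met");
    }
}
```

With this pattern the assertion on the number of storage IDs only runs after replication has finished, removing the race.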





[jira] [Commented] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061140#comment-15061140
 ] 

Allen Wittenauer commented on HDFS-9568:


It's not actually that new.  To quote from RFC5403: "Version 2 is the same as 
version 1 (specified in RFC 2203) except that support for channel bindings has 
been added."

Here's some backstory. 

RPCSEC was optional in NFSv3.  Inside Sun (and I'd like to think NetApp since 
Eisler was ex-Sun), it was viewed as a mistake to make it optional since not 
everyone implemented it (e.g., the Hadoop implementation).  This meant that NFS 
gained a reputation for "not being secure" when really it was individual 
implementations that were insecure. So when the v4 RFC was being written, it 
was codified as a requirement to help dispel that myth. (Source: the devs I was 
working with to deploy Kerberized NFS internally at Sun.)

So while RPCSEC is required for NFSv4, it's effectively the same implementation 
for NFSv3.

There's also the issue that that JIRA has more watchers and has been open 
longer. It also has the title of "Add NFSv4 + Kerberos", which is ultimately 
the same as this one. But I get the feeling this is one of those "Cloudera 
wants credit" things given the swarm of Cloudera folks suddenly watching this 
one.

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. There are some NFSv4 features quite suitable 
> for Hadoop's distributed environment, in addition to simplified configuration 
> and added security.
> This JIRA is to track NFSv4 support to access HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Resolved] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-9568.
--
Resolution: Duplicate

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. There are some NFSv4 features quite suitable 
> for Hadoop's distributed environment, in addition to simplified configuration 
> and added security.
> This JIRA is to track NFSv4 support to access HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Assigned] (HDFS-7499) Add NFSv4 + Kerberos / client authentication support

2015-12-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HDFS-7499:


Assignee: John Zhuge

> Add NFSv4 + Kerberos / client authentication support
> 
>
> Key: HDFS-7499
> URL: https://issues.apache.org/jira/browse/HDFS-7499
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.4.0
> Environment: HDP2.1
>Reporter: Hari Sekhon
>Assignee: John Zhuge
>
> We have a requirement for secure file share access to HDFS on a kerberized 
> cluster.
> This is spun off from HDFS-7488 where adding Kerberos to the front end client 
> was considered, I believe this would require NFSv4 support?





[jira] [Updated] (HDFS-9566) Remove expensive getStorages method

2015-12-16 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-9566:
--
Attachment: HDFS-9566.patch
HDFS-9566.branch-2.patch

Just removes it.

> Remove expensive getStorages method
> ---
>
> Key: HDFS-9566
> URL: https://issues.apache.org/jira/browse/HDFS-9566
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-9566.branch-2.patch, HDFS-9566.patch
>
>
> HDFS-5318 added a {{BlocksMap#getStorages(Block, State)}} which is based on 
> iterables and predicates.  The method is very expensive compared to a simple 
> comparison/continue.





[jira] [Updated] (HDFS-9566) Remove expensive getStorages method

2015-12-16 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-9566:
--
Status: Patch Available  (was: Open)

> Remove expensive getStorages method
> ---
>
> Key: HDFS-9566
> URL: https://issues.apache.org/jira/browse/HDFS-9566
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-9566.branch-2.patch, HDFS-9566.patch
>
>
> HDFS-5318 added a {{BlocksMap#getStorages(Block, State)}} which is based on 
> iterables and predicates.  The method is very expensive compared to a simple 
> comparison/continue.





[jira] [Commented] (HDFS-9325) Allow the location of hadoop source tree resources to be passed to CMake during a build.

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060921#comment-15060921
 ] 

Hadoop QA commented on HDFS-9325:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
0s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 6s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 38s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 31s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 3m 31s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 31s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 29s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 3m 29s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 29s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 30s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 35s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778091/HDFS-9325.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-9325 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 1ecc24a4c6a0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Resolved] (HDFS-9567) LlapServiceDriver can fail if only the packaged logger config is present

2015-12-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HDFS-9567.

Resolution: Invalid

Wrong project

> LlapServiceDriver can fail if only the packaged logger config is present
> 
>
> Key: HDFS-9567
> URL: https://issues.apache.org/jira/browse/HDFS-9567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> I was incrementally updating my setup on a VM and didn't have the logger 
> config file, so the packaged one was apparently picked up, which caused this:
> {noformat}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
>   at org.apache.hadoop.fs.Path.initialize(Path.java:205)
>   at org.apache.hadoop.fs.Path.(Path.java:171)
>   at 
> org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:234)
>   at 
> org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:58)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
>   at java.net.URI.checkPath(URI.java:1823)
>   at java.net.URI.(URI.java:745)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:202)
>   ... 3 more
> {noformat}
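The stack trace above comes from Hadoop's Path constructor splitting the URL at the first ':' and rebuilding it with the hierarchical java.net.URI constructor, which rejects a non-empty path that does not begin with '/'. The snippet below reproduces the reported exception with plain java.net.URI; the split is a simplified sketch of what Path does, not the actual Hadoop code, and the jar path is a made-up example.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class JarUriDemo {
    public static void main(String[] args) {
        // A jar: URL like the one in the stack trace above (path is illustrative).
        String jarUrl = "jar:file:/tmp/example.jar!/llap-daemon-log4j2.properties";
        // Splitting at the first ':' leaves a "path" component starting with
        // "file:", not "/".
        int colon = jarUrl.indexOf(':');
        String scheme = jarUrl.substring(0, colon);   // "jar"
        String rest = jarUrl.substring(colon + 1);    // "file:/tmp/example.jar!/..."
        try {
            // The hierarchical URI constructor rejects a non-empty path that
            // does not begin with '/' when a scheme is present.
            new URI(scheme, null, rest, null, null);
            System.out.println("parsed");
        } catch (URISyntaxException e) {
            // prints: Relative path in absolute URI: file:/tmp/example.jar!/llap-daemon-log4j2.properties
            System.out.println(e.getMessage());
        }
    }
}
```

This is why a jar: URL cannot be fed to a hierarchical-URI-based Path as-is; the scheme-specific part is opaque.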



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread John Zhuge (JIRA)
John Zhuge created HDFS-9568:


 Summary: Support NFSv4 interface to HDFS
 Key: HDFS-9568
 URL: https://issues.apache.org/jira/browse/HDFS-9568
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Reporter: John Zhuge
Assignee: John Zhuge


[HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
add an NFSv4 interface to HDFS. Several NFSv4 features are well suited to 
Hadoop's distributed environment, in addition to simplified configuration and 
added security.
This JIRA tracks NFSv4 support for accessing HDFS.
We will upload the design doc and then the initial implementation.





[jira] [Commented] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due to race condition

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061123#comment-15061123
 ] 

Hadoop QA commented on HDFS-9565:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 16s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 33s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
| JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| 

[jira] [Commented] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061164#comment-15061164
 ] 

John Zhuge commented on HDFS-9568:
--

:) You are right. I also recalled that no security mechanism was mandated by 
NFSv4, but then I saw the sentence in section 3.2.1. So the RFC might have been 
revised along the way, even back in the days of RFC 3530.
Now that I see your point, I am OK with HDFS-7499 as the tracking JIRA. Since 
it is unassigned, do you mind if I take it?

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. Several NFSv4 features are well suited to 
> Hadoop's distributed environment, in addition to simplified configuration and 
> added security.
> This JIRA tracks NFSv4 support for accessing HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Commented] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061166#comment-15061166
 ] 

Allen Wittenauer commented on HDFS-9568:


Go for it.

Thanks!

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. Several NFSv4 features are well suited to 
> Hadoop's distributed environment, in addition to simplified configuration and 
> added security.
> This JIRA tracks NFSv4 support for accessing HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Updated] (HDFS-9552) Document types of permission checks performed for HDFS operations.

2015-12-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9552:

Attachment: HDFS-9552.003.patch

bq. Nitpick you may consider fixing during commit, source should be sources 
since the check is enforced for each source file.

That's a good idea.  Thanks!  Here is patch v003 with that change.

> Document types of permission checks performed for HDFS operations.
> --
>
> Key: HDFS-9552
> URL: https://issues.apache.org/jira/browse/HDFS-9552
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9552.001.patch, HDFS-9552.002.patch, 
> HDFS-9552.003.patch, hadoop-site.tar.bz2
>
>
> The HDFS permissions guide discusses our use of a POSIX-like model with read, 
> write and execute permissions associated with users, groups and the catch-all 
> other class.  However, there is no documentation that describes exactly what 
> permission checks are performed by user-facing HDFS operations.  This is a 
> frequent source of questions, so it would be good to document this.





[jira] [Commented] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061103#comment-15061103
 ] 

John Zhuge commented on HDFS-9568:
--

[NFSv4|https://tools.ietf.org/html/rfc7530] added a new security flavor 
[RPCSEC_GSS|https://tools.ietf.org/html/rfc5403] which uses 
[GSS-API|https://tools.ietf.org/html/rfc2743] to support multiple mechanisms 
that provide security services. "For interoperability, NFSv4 clients and 
servers MUST support the Kerberos V5 security mechanism." (a quote from [NFSv4 
spec section 3.2.1|https://tools.ietf.org/html/rfc7530#section-3.2.1])

With that, I think [HDFS-7499|https://issues.apache.org/jira/browse/HDFS-7499] 
is dependent on a sub-task (Add RPCSEC_GSS support) of this umbrella jira. What 
do you think?

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. Several NFSv4 features are well suited to 
> Hadoop's distributed environment, in addition to simplified configuration and 
> added security.
> This JIRA tracks NFSv4 support for accessing HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Commented] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due to race condition

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060923#comment-15060923
 ] 

Wei-Chiu Chuang commented on HDFS-9565:
---

Thanks for the +1, [~arpitagarwal]!

> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due 
> to race condition
> -
>
> Key: HDFS-9565
> URL: https://issues.apache.org/jira/browse/HDFS-9565
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9565.001.patch
>
>
> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
> fails with the following error:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes
> Error Message:
> Unexpected num storage ids expected:<2> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)
> {noformat}
> It appears that this test failed due to a race condition: it does not wait 
> for file replication to finish before checking the file's status. 
> This flaky test can be fixed by using DFSTestUtil.waitForReplication().
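The idea behind DFSTestUtil.waitForReplication is the standard poll-until-condition pattern: wait until replication converges (or a timeout expires) instead of asserting on state that is still changing. The self-contained helper below is an illustrative stand-in for that pattern, not the actual Hadoop API.

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitForCondition {
    // Poll `check` every `intervalMs` until it returns true, or throw
    // TimeoutException once `timeoutMs` has elapsed. DFSTestUtil applies this
    // same shape to "all blocks have reached the target replication factor".
    public static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException("condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);
        }
    }
}
```

In the test, the assertion on storage IDs would run only after such a wait succeeds, removing the race.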





[jira] [Commented] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due to race condition

2015-12-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060934#comment-15060934
 ] 

Mingliang Liu commented on HDFS-9565:
-

+1 (non-binding)

> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky due 
> to race condition
> -
>
> Key: HDFS-9565
> URL: https://issues.apache.org/jira/browse/HDFS-9565
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9565.001.patch
>
>
> TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
> fails with the following error:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes
> Error Message:
> Unexpected num storage ids expected:<2> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)
> {noformat}
> It appears that this test failed due to a race condition: it does not wait 
> for file replication to finish before checking the file's status. 
> This flaky test can be fixed by using DFSTestUtil.waitForReplication().





[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-12-16 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060950#comment-15060950
 ] 

Daryn Sharp commented on HDFS-7964:
---

I'm trying to whittle down my backlog of internal features before I go on 
year-end vacation.  The patch still applies; may I please have a final review?

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  logEdit is 
> called within the namespace write lock, while logSync is called outside of the 
> lock to allow greater concurrency.  The handler thread remains busy until 
> logSync returns to provide the client with a durability guarantee for the 
> response.
> Write heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.
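The contention described above can be illustrated with a schematic, self-contained sketch (this is not actual NameNode code; the class and field names are stand-ins): logEdit runs under the namespace write lock, logSync runs after the lock is released, but the handler still blocks in logSync until the edit is durable before it may reply.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class EditLogSketch {
    private final ReentrantReadWriteLock namespaceLock = new ReentrantReadWriteLock();
    private final StringBuilder editBuffer = new StringBuilder();
    private volatile String durableLog = "";

    // Schematic handler path for a write RPC.
    void handleWriteRpc(String op) {
        namespaceLock.writeLock().lock();
        try {
            // Mutate namespace state and append the edit in memory (logEdit).
            editBuffer.append(op).append('\n');
        } finally {
            namespaceLock.writeLock().unlock();
        }
        // Handler stalls here until the edit reaches stable storage, even
        // though the write lock is no longer held.
        logSync();
        // Only now may the RPC response be sent (durability guarantee).
    }

    // Stand-in for flushing the edit buffer to disk / journal nodes.
    synchronized void logSync() {
        durableLog = editBuffer.toString();
    }

    String getDurableLog() {
        return durableLog;
    }
}
```

An async edit-log thread plus postponed RPC responses (HADOOP-10300) would move the stall out of the handler while preserving the same durability guarantee.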





[jira] [Commented] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061018#comment-15061018
 ] 

Allen Wittenauer commented on HDFS-9568:


This is basically going to end up a dupe of HDFS-7499.

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> addd NFSv4 interface to HDFS. There are some NFSv4 features quite suitable in 
> Hadoop's distributed environment in addition to simplified configuration and 
> added security.
> This JIRA is to track NFSv4 support to access HDFS.
> We will upload the design doc and then the initial implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9566) Remove expensive getStorages method

2015-12-16 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-9566:
-

 Summary: Remove expensive getStorages method
 Key: HDFS-9566
 URL: https://issues.apache.org/jira/browse/HDFS-9566
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.8.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


HDFS-5318 added a {{BlocksMap#getStorages(Block, State)}} which is based on 
iterables and predicates.  The method is very expensive compared to a simple 
comparison/continue.
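The two styles can be contrasted schematically as below; Storage and State are simplified stand-ins, not the actual DatanodeStorageInfo types, and the filtered variant only mimics the shape of the iterable/predicate approach.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

public class StorageFilterSketch {
    enum State { NORMAL, FAILED }

    static class Storage {
        final State state;
        Storage(State state) { this.state = state; }
    }

    // Iterable/predicate style: each call allocates a predicate object and
    // drives the loop through an Iterator, adding indirection per element.
    static List<Storage> getStoragesFiltered(List<Storage> all, final State wanted) {
        Predicate<Storage> p = s -> s.state == wanted;
        List<Storage> out = new ArrayList<>();
        for (Iterator<Storage> it = all.iterator(); it.hasNext(); ) {
            Storage s = it.next();
            if (p.test(s)) {
                out.add(s);
            }
        }
        return out;
    }

    // Plain comparison/continue style: a direct field comparison per element,
    // with no extra allocation beyond the result list.
    static List<Storage> getStoragesLoop(List<Storage> all, State wanted) {
        List<Storage> out = new ArrayList<>();
        for (Storage s : all) {
            if (s.state != wanted) {
                continue;
            }
            out.add(s);
        }
        return out;
    }
}
```

Both return the same result; the issue is only about the per-call overhead on a hot NameNode path.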





[jira] [Commented] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060951#comment-15060951
 ] 

Arpit Agarwal commented on HDFS-9515:
-

+1, thanks for this improvement [~jojochuang]. When cluster/fs are class 
members, it perhaps also makes sense to set {{cluster = null}} after invoking 
{{cluster.shutdown()}}, so that @After invocations of subsequent test methods 
don't call shutdown twice on the same object.
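The suggested teardown pattern looks roughly like the sketch below. "Cluster" here is a minimal, self-contained stand-in for MiniDFSCluster so the idea is runnable in isolation; in the real test the method would carry a JUnit @After annotation.

```java
public class TeardownSketch {
    // Stand-in for MiniDFSCluster: just counts shutdown invocations.
    static class Cluster {
        int shutdownCalls = 0;
        void shutdown() { shutdownCalls++; }
    }

    Cluster cluster = new Cluster();

    // Would be annotated @After in a JUnit test class.
    void tearDown() {
        if (cluster != null) {
            cluster.shutdown();
            // Null the field so a later teardown run (e.g. after a failed
            // setup in a subsequent test) cannot call shutdown() twice on
            // the same object.
            cluster = null;
        }
    }
}
```

With the field nulled, repeated tearDown calls become harmless no-ops guarded by the null check.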

> NPE in TestDFSZKFailoverController due to binding exception in 
> MiniDFSCluster.initMiniDFSCluster()
> --
>
> Key: HDFS-9515
> URL: https://issues.apache.org/jira/browse/HDFS-9515
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9515.001.patch, HDFS-9515.002.patch
>
>
> If the MiniDFSCluster constructor throws an exception, the cluster object is 
> not assigned, so shutdown() cannot be called on the object.
> In a recent Jenkins job, a binding error threw an exception, and the later 
> NPE from cluster.shutdown() hid the real cause of the test failure.
> HDFS-9333 has a patch that fixes the bind error.





[jira] [Updated] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2015-12-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9498:

Attachment: HDFS-9498.002.patch

The v2 patch rebases onto the {{trunk}} branch and resolves some trivial conflicts.

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. This was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduced 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned blocks 
> to that class.





[jira] [Commented] (HDFS-9198) Coalesce IBR processing in the NN

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061148#comment-15061148
 ] 

Hadoop QA commented on HDFS-9198:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 13s {color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66 with JDK v1.8.0_66 
generated 3 new issues (was 32, now 32). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 9s {color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91 with JDK v1.7.0_91 
generated 3 new issues (was 34, now 34). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 5 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 384, now 384). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 48s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 53s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | 

[jira] [Commented] (HDFS-9566) Remove expensive getStorages method

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060919#comment-15060919
 ] 

Arpit Agarwal commented on HDFS-9566:
-

+1

> Remove expensive getStorages method
> ---
>
> Key: HDFS-9566
> URL: https://issues.apache.org/jira/browse/HDFS-9566
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-9566.branch-2.patch, HDFS-9566.patch
>
>
> HDFS-5318 added a {{BlocksMap#getStorages(Block, State)}} which is based on 
> iterables and predicates.  The method is very expensive compared to a simple 
> comparison/continue.





[jira] [Commented] (HDFS-9347) Invariant assumption in TestQuorumJournalManager.shutdown() is wrong

2015-12-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061021#comment-15061021
 ] 

Zhe Zhang commented on HDFS-9347:
-

Thanks Wei-Chiu for the work and Walter for the comment. LGTM overall. A few 
comments:
# The semantics of {{waitForThreadTermination}} should be to throw 
{{TimeoutException}} if the specified {{waitForMillis}} expires, so 
{{assertNoThreadsMatching}} should catch this {{TimeoutException}} and 
{{Assert.fail}}. In other words, {{assertNoThreadsMatching}} should be an 
actual assertion method, and {{waitForThreadTermination}} should be an actual 
wait method, without assertion logic. The current special handling of 
{{waitForMillis == 0}} works, but it creates an unnecessary dependency between 
the two methods.
# There are some unnecessary white space changes.
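To illustrate point 1 above, here is a hypothetical sketch of the proposed separation: the wait method only waits and throws {{TimeoutException}} on expiry, while the assertion method translates that timeout into a test failure. The class name, method signatures, and the {{anyThreadMatching}} helper are illustrative only, not the actual GenericTestUtils API.

```java
import java.util.concurrent.TimeoutException;

public final class ThreadAssertions {
  private ThreadAssertions() {}

  /** Pure wait method: no assertion logic, just a timeout contract. */
  public static void waitForThreadTermination(String regex, long waitForMillis)
      throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + waitForMillis;
    while (anyThreadMatching(regex)) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("Thread matching '" + regex
            + "' still alive after " + waitForMillis + " ms");
      }
      Thread.sleep(100);
    }
  }

  /** Pure assertion method: translates the timeout into a test failure. */
  public static void assertNoThreadsMatching(String regex, long waitForMillis) {
    try {
      waitForThreadTermination(regex, waitForMillis);
    } catch (TimeoutException e) {
      throw new AssertionError(e.getMessage());
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new AssertionError("interrupted while waiting", e);
    }
  }

  /** True if any live thread's name matches the given regular expression. */
  private static boolean anyThreadMatching(String regex) {
    for (Thread t : Thread.getAllStackTraces().keySet()) {
      if (t.getName().matches(regex)) {
        return true;
      }
    }
    return false;
  }
}
```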



> Invariant assumption in TestQuorumJournalManager.shutdown() is wrong
> 
>
> Key: HDFS-9347
> URL: https://issues.apache.org/jira/browse/HDFS-9347
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9347.001.patch, HDFS-9347.002.patch
>
>
> The code
> {code:title=TestQuorumJournalManager.java|borderStyle=solid}
> @After
>   public void shutdown() throws IOException {
> IOUtils.cleanup(LOG, toClose.toArray(new Closeable[0]));
> 
> // Should not leak clients between tests -- this can cause flaky tests.
> // (See HDFS-4643)
> GenericTestUtils.assertNoThreadsMatching(".*IPC Client.*");
> 
> if (cluster != null) {
>   cluster.shutdown();
> }
>   }
> {code}
> implicitly assumes that when the call returns from IOUtils.cleanup() (which 
> calls close() on the QuorumJournalManager object), all IPC client connection 
> threads are terminated. However, there is no internal implementation that 
> enforces this assumption. Even if the bug reported in HADOOP-12532 is fixed, 
> the internal code still only ensures that IPC connections are terminated, but 
> not the threads.





[jira] [Updated] (HDFS-9516) truncate file fails with data dirs on multiple disks

2015-12-16 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-9516:
--
Target Version/s: 2.8.0, 2.7.3

[~shv], unfortunately this came in too late for 2.7.2. That said, I don’t see 
any reason why this shouldn’t be in 2.8.0 and 2.7.3. Setting the 
target-versions accordingly on JIRA.

If you agree, I'd appreciate help backporting to those branches (branch-2.8.0, 
branch-2.7).


> truncate file fails with data dirs on multiple disks
> 
>
> Key: HDFS-9516
> URL: https://issues.apache.org/jira/browse/HDFS-9516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Plamen Jeliazkov
> Fix For: 2.9.0
>
> Attachments: HDFS-9516_1.patch, HDFS-9516_2.patch, HDFS-9516_3.patch, 
> HDFS-9516_testFailures.patch, Main.java, truncate.dn.log
>
>
> FileSystem.truncate returns false (no exception), but the file is never 
> closed and is not writable after this.
> It seems to be caused by copy-on-truncate, which is used because the system 
> is in upgrade state. In this case a rename between devices is attempted.
> See attached log and repro code.
> Probably also affects truncating a snapshotted file, when copy-on-truncate is 
> also used.
> Possibly it affects not only truncate but any block recovery.
> I think the problem is in updateReplicaUnderRecovery
> {code}
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
> newBlockId, recoveryId, rur.getVolume(), 
> blockFile.getParentFile(),
> newlength);
> {code}
> blockFile is created with copyReplicaWithNewBlockIdAndGS, which is allowed 
> to choose any volume, so rur.getVolume() is not where the block is located.
>





[jira] [Commented] (HDFS-9561) Pipeline recovery near the end of a block may fail

2015-12-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061059#comment-15061059
 ] 

Zhe Zhang commented on HDFS-9561:
-

Thanks for reporting this, Kihwal. It seems {{addDatanode2ExistingPipeline}} has 
logic to process the last packet in a block:
{code}
} else if (stage == BlockConstructionStage.PIPELINE_CLOSE
|| stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) {
  //pipeline is closing
  return;
{code}

Did you see this error in tests / production?

> Pipeline recovery near the end of a block may fail
> ---
>
> Key: HDFS-9561
> URL: https://issues.apache.org/jira/browse/HDFS-9561
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> When the client wants to add additional nodes to the pipeline during a 
> recovery, it will fail if all existing replicas are already finalized.  This 
> is because the partial block copy only works when the replica is in rbw.  
> Clients cannot reliably tell whether a node has finalized the replica during 
> a recovery.





[jira] [Commented] (HDFS-9552) Document types of permission checks performed for HDFS operations.

2015-12-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061060#comment-15061060
 ] 

Chris Nauroth commented on HDFS-9552:
-

[~arpitagarwal], thank you for the review.

I think the concat entry as already written is accurate.  
{{FSDirConcatOp#verifySrcFiles}} contains this line:

{code}
fsd.checkParentAccess(pc, iip, FsAction.WRITE); // for delete
{code}

I think the rationale is that since the original source files don't exist after 
the concat completes, it's like a delete of those inodes, so it ought to 
enforce write on the parent just like delete.

> Document types of permission checks performed for HDFS operations.
> --
>
> Key: HDFS-9552
> URL: https://issues.apache.org/jira/browse/HDFS-9552
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9552.001.patch, HDFS-9552.002.patch, 
> hadoop-site.tar.bz2
>
>
> The HDFS permissions guide discusses our use of a POSIX-like model with read, 
> write and execute permissions associated with users, groups and the catch-all 
> other class.  However, there is no documentation that describes exactly what 
> permission checks are performed by user-facing HDFS operations.  This is a 
> frequent source of questions, so it would be good to document this.





[jira] [Created] (HDFS-9567) LlapServiceDriver can fail if only the packaged logger config is present

2015-12-16 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HDFS-9567:
--

 Summary: LlapServiceDriver can fail if only the packaged logger 
config is present
 Key: HDFS-9567
 URL: https://issues.apache.org/jira/browse/HDFS-9567
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Sergey Shelukhin


I was incrementally updating my setup on a VM and didn't have the logger 
config file, so the packaged one was apparently picked up, which caused this:
{noformat}
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path 
in absolute URI: 
jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.(Path.java:171)
at 
org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:234)
at 
org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:58)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 3 more
{noformat}
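A minimal, hypothetical reproduction of the failure mode in the stack trace: Hadoop's {{Path.initialize}} rebuilds a hierarchical {{java.net.URI}} from the pieces of the URL, and for an opaque {{jar:file:...!/resource}} URL the path component starts with "file:" rather than "/", which {{URI.checkPath}} rejects. The class and method names below are illustrative, not part of any real API.

```java
import java.net.URI;
import java.net.URISyntaxException;

public final class OpaqueUriDemo {
  // Rebuild a hierarchical URI roughly the way Path.initialize does:
  // new URI(scheme, authority, path, query, fragment). For an opaque jar:
  // URL, the "path" is really the scheme-specific part, which does not
  // start with '/', so java.net.URI's checkPath rejects it.
  public static String tryBuild(String path) {
    try {
      new URI("jar", null, path, null, null);
      return "ok";
    } catch (URISyntaxException e) {
      return e.getReason();  // e.g. "Relative path in absolute URI"
    }
  }
}
```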





[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-12-16 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060976#comment-15060976
 ] 

Jing Zhao commented on HDFS-7964:
-

Sorry for the late response, Daryn. The current patch looks good to me. I will 
take a final review later today.

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  logEdit is 
> called while holding the namespace write lock, while logSync is called 
> outside of the lock to allow greater concurrency.  The handler thread remains 
> busy until logSync returns to provide the client with a durability guarantee 
> for the response.
> Write-heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.





[jira] [Updated] (HDFS-9173) Erasure Coding: Lease recovery for striped file

2015-12-16 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9173:

Attachment: HDFS-9173.08.patch

Uploaded a patch that simplifies the logic according to the above discussion.

> Erasure Coding: Lease recovery for striped file
> ---
>
> Key: HDFS-9173
> URL: https://issues.apache.org/jira/browse/HDFS-9173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9173.00.wip.patch, HDFS-9173.01.patch, 
> HDFS-9173.02.step125.patch, HDFS-9173.03.patch, HDFS-9173.04.patch, 
> HDFS-9173.05.patch, HDFS-9173.06.patch, HDFS-9173.07.patch, HDFS-9173.08.patch
>
>






[jira] [Commented] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061057#comment-15061057
 ] 

Allen Wittenauer commented on HDFS-9568:


I'm inclined to say no, because you can't claim NFSv4 compliance without 
RPCSEC support.

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. There are some NFSv4 features well suited to 
> Hadoop's distributed environment, in addition to simplified configuration and 
> added security.
> This JIRA is to track NFSv4 support to access HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Comment Edited] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061057#comment-15061057
 ] 

Allen Wittenauer edited comment on HDFS-9568 at 12/16/15 10:50 PM:
---

I'm inclined to say no because you can't claim NFSv4 compliance without RPCSEC 
support.


was (Author: aw):
I'm inclined to say no because you can't claim NFSv4 compliance without support 
RPCSEC.

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. There are some NFSv4 features well suited to 
> Hadoop's distributed environment, in addition to simplified configuration and 
> added security.
> This JIRA is to track NFSv4 support to access HDFS.
> We will upload the design doc and then the initial implementation.





[jira] [Commented] (HDFS-9552) Document types of permission checks performed for HDFS operations.

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061068#comment-15061068
 ] 

Arpit Agarwal commented on HDFS-9552:
-

Thanks. That makes sense. +1 pending Jenkins.

Nitpick you may consider fixing during commit: _source_ should be _sources_, 
since the check is enforced for each source file.

> Document types of permission checks performed for HDFS operations.
> --
>
> Key: HDFS-9552
> URL: https://issues.apache.org/jira/browse/HDFS-9552
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9552.001.patch, HDFS-9552.002.patch, 
> hadoop-site.tar.bz2
>
>
> The HDFS permissions guide discusses our use of a POSIX-like model with read, 
> write and execute permissions associated with users, groups and the catch-all 
> other class.  However, there is no documentation that describes exactly what 
> permission checks are performed by user-facing HDFS operations.  This is a 
> frequent source of questions, so it would be good to document this.





[jira] [Commented] (HDFS-8674) Improve performance of postponed block scans

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061120#comment-15061120
 ] 

Hadoop QA commented on HDFS-8674:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 16s {color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66 with JDK v1.8.0_66 
generated 1 new issues (was 32, now 32). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 57s {color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91 with JDK v1.7.0_91 
generated 1 new issues (was 34, now 34). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 161, now 160). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 16s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDFSClientRetries |

[jira] [Resolved] (HDFS-9551) Random VolumeChoosingPolicy

2015-12-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-9551.
---
Resolution: Won't Fix

Based on discussion here, I don't want to add the maintenance burden of a new 
volume policy unless there are some demonstrated benefits. Thanks for the 
interest though!

> Random VolumeChoosingPolicy
> ---
>
> Key: HDFS-9551
> URL: https://issues.apache.org/jira/browse/HDFS-9551
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: RandomVolumeChoosingPolicy.java, 
> TestRandomVolumeChoosingPolicy.java
>
>
> Please find attached a new implementation of VolumeChoosingPolicy.  This 
> implementation chooses volumes at random to place blocks.  It is thread-safe 
> and unsynchronized, so there is less thread contention.





[jira] [Updated] (HDFS-9347) Invariant assumption in TestQuorumJournalManager.shutdown() is wrong

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9347:
--
Attachment: HDFS-9347.003.patch

Hi [~zhz], thanks for the suggestion. Agreed the code was not clean. I 
refactored the code a bit, creating a new method anyThreadMatching() to make 
it cleaner. The idea is to assert on the name of any thread that matches the 
regular expression.

An alternative approach would be to create an "assumeNoThreadsMatching()" and 
let it throw an exception if any thread matches.

> Invariant assumption in TestQuorumJournalManager.shutdown() is wrong
> 
>
> Key: HDFS-9347
> URL: https://issues.apache.org/jira/browse/HDFS-9347
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9347.001.patch, HDFS-9347.002.patch, 
> HDFS-9347.003.patch
>
>
> The code
> {code:title=TestQuorumJournalManager.java|borderStyle=solid}
> @After
>   public void shutdown() throws IOException {
> IOUtils.cleanup(LOG, toClose.toArray(new Closeable[0]));
> 
> // Should not leak clients between tests -- this can cause flaky tests.
> // (See HDFS-4643)
> GenericTestUtils.assertNoThreadsMatching(".*IPC Client.*");
> 
> if (cluster != null) {
>   cluster.shutdown();
> }
>   }
> {code}
> implicitly assumes that when the call returns from IOUtils.cleanup() (which 
> calls close() on the QuorumJournalManager object), all IPC client connection 
> threads are terminated. However, there is no internal implementation that 
> enforces this assumption. Even if the bug reported in HADOOP-12532 is fixed, 
> the internal code still only ensures that IPC connections are terminated, but 
> not the threads.





[jira] [Commented] (HDFS-9552) Document types of permission checks performed for HDFS operations.

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061210#comment-15061210
 ] 

Hadoop QA commented on HDFS-9552:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 58 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 2m 14s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778130/HDFS-9552.003.patch |
| JIRA Issue | HDFS-9552 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 41534912c4b7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3c0adac |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13908/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Max memory used | 29MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13908/console |


This message was automatically generated.



> Document types of permission checks performed for HDFS operations.
> --
>
> Key: HDFS-9552
> URL: https://issues.apache.org/jira/browse/HDFS-9552
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9552.001.patch, HDFS-9552.002.patch, 
> HDFS-9552.003.patch, hadoop-site.tar.bz2
>
>
> The HDFS permissions guide discusses our use of a POSIX-like model with read, 
> write and execute permissions associated with users, groups and the catch-all 
> other class.  However, there is no documentation that describes exactly what 
> permission checks are performed by user-facing HDFS operations.  This is a 
> frequent source of questions, so it would be good to document this.





[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-12-16 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061297#comment-15061297
 ] 

Jing Zhao commented on HDFS-7964:
-

The patch looks good to me. One nit is that two places in FSEditLogAsync have 
some commented code:
{code}
  //if (LOG.isDebugEnabled()) {
LOG.info("logSync "+edit);
  //}
{code}

+1 after addressing this.

Besides, the current postponeResponse/sendResponse code uses a counter to delay 
the response. This looks a little hacky to me. But I understand this may be the 
only way to add the new functionality without changing the original code. Maybe 
we can do some extra refactoring in the future.
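The logging nit quoted above can be sketched as follows; a minimal, hypothetical cleanup using java.util.logging in place of the commons-logging API used by FSEditLogAsync (Level.FINE plays the role of debug, and the class name and {{edit}} argument are placeholders):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public final class GuardedLogging {
  static final Logger LOG = Logger.getLogger(GuardedLogging.class.getName());

  // Keep the isDebugEnabled()-style guard and log at debug level, instead
  // of an unconditional info-level call: the string concatenation is then
  // skipped entirely when debug logging is off.
  public static void logSync(Object edit) {
    if (LOG.isLoggable(Level.FINE)) {   // analogous to LOG.isDebugEnabled()
      LOG.fine("logSync " + edit);      // analogous to LOG.debug(...)
    }
  }
}
```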

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  logEdit is 
> called while holding the namespace write lock, while logSync is called 
> outside of the lock to allow greater concurrency.  The handler thread remains 
> busy until logSync returns to provide the client with a durability guarantee 
> for the response.
> Write-heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.





[jira] [Updated] (HDFS-9051) webhdfs should support recursive list

2015-12-16 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9051:
-
Assignee: (was: Surendra Singh Lilhore)

> webhdfs should support recursive list
> -
>
> Key: HDFS-9051
> URL: https://issues.apache.org/jira/browse/HDFS-9051
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> There currently doesn't appear to be a way to recursive list a directory via 
> webhdfs without making an individual liststatus call per dir.





[jira] [Commented] (HDFS-9373) Show friendly information to user when client succeeds the writing with some failed streamers

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061592#comment-15061592
 ] 

Hadoop QA commented on HDFS-9373:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778184/HDFS-9373-003.patch |
| JIRA Issue | HDFS-9373 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a0eb9cbb762f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-9198) Coalesce IBR processing in the NN

2015-12-16 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-9198:
--
Attachment: HDFS-9198-Branch-2.8-withamend.diff
HDFS-9198-Branch-2-withamend.diff

While merging to branch-2, I did the following edits to resolve conflicts.
{code}
-  DatanodeStorageInfo[] getStorageInfos() {
+  @VisibleForTesting
+  public DatanodeStorageInfo[] getStorageInfos() {
{code} 
A test depends on this method. The change already exists in trunk but not in 
branch-2; I just made it visible to test code. Attached the patches I 
committed to branch-2 and branch-2.8 for reference.

> Coalesce IBR processing in the NN
> -
>
> Key: HDFS-9198
> URL: https://issues.apache.org/jira/browse/HDFS-9198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0
>
> Attachments: HDFS-9198-Branch-2-withamend.diff, 
> HDFS-9198-Branch-2.8-withamend.diff, HDFS-9198-branch2.patch, 
> HDFS-9198-trunk.patch, HDFS-9198-trunk.patch, HDFS-9198-trunk.patch, 
> HDFS-9198-trunk.patch, HDFS-9198-trunk.patch
>
>
> IBRs from thousands of DNs under load will degrade NN performance due to 
> excessive write-lock contention from multiple IPC handler threads.  The IBR 
> processing is quick, so the lock contention may be reduced by coalescing 
> multiple IBRs into a single write-lock transaction.  The handlers will also 
> be freed up faster for other operations.
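The coalescing approach described above can be sketched roughly as follows. This is a hedged illustration only: the class, method, and field names here are hypothetical stand-ins, not the actual NameNode internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Hedged sketch of IBR coalescing: instead of each IPC handler taking the
 * namesystem write lock once per incremental block report (IBR), handlers
 * enqueue reports and a single drainer applies a whole batch under one lock
 * acquisition. All names are illustrative, not the real NN classes.
 */
public class IbrCoalescer {
  private final BlockingQueue<String> pendingIbrs = new LinkedBlockingQueue<>();
  // Stand-in for the FSNamesystem write lock.
  private final ReentrantLock writeLock = new ReentrantLock();
  private int lockAcquisitions = 0;
  private int processed = 0;

  /** Called by many IPC handler threads: cheap, takes no namesystem lock. */
  public void enqueue(String ibr) {
    pendingIbrs.add(ibr);
  }

  /** Drains everything queued so far and applies it in one lock transaction. */
  public void processBatch() {
    List<String> batch = new ArrayList<>();
    pendingIbrs.drainTo(batch);
    if (batch.isEmpty()) {
      return;
    }
    writeLock.lock();
    lockAcquisitions++;
    try {
      processed += batch.size(); // placeholder for the real per-report work
    } finally {
      writeLock.unlock();
    }
  }

  public int getProcessed() { return processed; }
  public int getLockAcquisitions() { return lockAcquisitions; }

  public static void main(String[] args) {
    IbrCoalescer c = new IbrCoalescer();
    for (int i = 0; i < 1000; i++) {
      c.enqueue("ibr-" + i);
    }
    c.processBatch();
    // 1000 reports applied under a single lock acquisition instead of 1000.
    System.out.println(c.getProcessed() + " IBRs, "
        + c.getLockAcquisitions() + " lock acquisition(s)");
  }
}
```

The handlers return immediately after the enqueue, which is how they "will also be freed up faster for other operations".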



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8296) BlockManager.getUnderReplicatedBlocksCount() is not giving correct count if namenode in safe mode.

2015-12-16 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8296:
-
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

>  BlockManager.getUnderReplicatedBlocksCount() is not giving correct count if 
> namenode in safe mode.
> ---
>
> Key: HDFS-8296
> URL: https://issues.apache.org/jira/browse/HDFS-8296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>  Labels: BB2015-05-RFC
> Attachments: HDFS-8296.patch
>
>
> {{underReplicatedBlocksCount}} is updated by the {{updateState()}} API:
> {code}
>  void updateState() {
> pendingReplicationBlocksCount = pendingReplications.size();
> underReplicatedBlocksCount = neededReplications.size();
> corruptReplicaBlocksCount = corruptReplicas.size();
>   }
>  {code}
>  but this is not called when the NN is in safe mode, because 
> {{computeDatanodeWork()}} returns 0 early, before reaching it: 
>  {code}
>   int computeDatanodeWork() {
>.
> if (namesystem.isInSafeMode()) {
>   return 0;
> }
> 
> 
> this.updateState();
> 
> 
>   }
>  {code}
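The behavior described in the quoted report can be illustrated with a small sketch, along with the refresh-before-early-return change it suggests. Note the issue was ultimately resolved as "Not A Problem", so this is purely illustrative; the names mirror the quoted snippet, not the real BlockManager.

```java
/**
 * Illustrative sketch only (the JIRA was resolved as "Not A Problem").
 * updateState() refreshes the counters, but the quoted computeDatanodeWork()
 * returns early in safe mode before reaching it, leaving the counters stale.
 * Refreshing before the early return keeps the count current while still
 * scheduling no replication work in safe mode.
 */
public class BlockCounters {
  private final boolean inSafeMode;
  private final int neededReplicationsSize; // stand-in for neededReplications.size()
  private int underReplicatedBlocksCount;

  public BlockCounters(boolean inSafeMode, int neededReplications) {
    this.inSafeMode = inSafeMode;
    this.neededReplicationsSize = neededReplications;
  }

  void updateState() {
    underReplicatedBlocksCount = neededReplicationsSize;
  }

  int computeDatanodeWork() {
    // Refresh the counters first so metrics are correct even in safe mode...
    updateState();
    // ...then still skip scheduling any replication work in safe mode.
    if (inSafeMode) {
      return 0;
    }
    return underReplicatedBlocksCount; // placeholder for real work scheduling
  }

  public int getUnderReplicatedBlocksCount() {
    return underReplicatedBlocksCount;
  }
}
```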



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9373) Show friendly information to user when client succeeds the writing with some failed streamers

2015-12-16 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-9373:

Attachment: HDFS-9373-003.patch

Sorry for the checkstyle error. Updated patch 003 to fix it.

> Show friendly information to user when client succeeds the writing with some 
> failed streamers
> -
>
> Key: HDFS-9373
> URL: https://issues.apache.org/jira/browse/HDFS-9373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-9373-001.patch, HDFS-9373-002.patch, 
> HDFS-9373-003.patch
>
>
> When no more than PARITY_NUM streamers fail for a block group, the client 
> may still succeed in writing the data. But several exceptions are thrown to 
> the user, who then has to check the reasons. The friendlier way is simply to 
> inform the user that some streamers failed while writing a block group. It 
> is not necessary to show the details of the exceptions, because a small 
> number of streamer failures is not fatal to the client write.
> When only DATA_NUM streamers succeed, the block group is at high risk, 
> because the corruption of any one block will cause the data of all six 
> blocks to be lost. We should give the user an obvious warning when this 
> occurs. 
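The reporting policy proposed above could be sketched as follows. DATA_NUM=6 and PARITY_NUM=3 are assumed here purely for illustration (a common RS-6-3 layout), and the class and its messages are hypothetical, not the patch's actual code.

```java
/**
 * Hypothetical sketch of the proposed reporting policy. DATA_NUM and
 * PARITY_NUM are assumed to be 6 and 3 (RS-6-3) for illustration only.
 */
public class StreamerReport {
  static final int DATA_NUM = 6;
  static final int PARITY_NUM = 3;

  /** Summarize a block-group write given how many streamers failed. */
  static String summarize(int failedStreamers) {
    int succeeded = DATA_NUM + PARITY_NUM - failedStreamers;
    if (failedStreamers == 0) {
      return "OK: all streamers succeeded";
    }
    if (succeeded < DATA_NUM) {
      return "ERROR: too few streamers; the block group cannot be written";
    }
    if (succeeded == DATA_NUM) {
      // No redundancy left: corruption of any one block loses the group.
      return "WARNING: only DATA_NUM streamers succeeded; no redundancy left";
    }
    // Friendly one-line notice instead of surfacing every streamer exception.
    return "INFO: " + failedStreamers + " streamer(s) failed for the block group";
  }

  public static void main(String[] args) {
    System.out.println(summarize(0));
    System.out.println(summarize(1));
    System.out.println(summarize(3));
  }
}
```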



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9198) Coalesce IBR processing in the NN

2015-12-16 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-9198:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~daryn] for the nice work here. I have committed this to trunk, 
branch-2 and branch-2.8 

> Coalesce IBR processing in the NN
> -
>
> Key: HDFS-9198
> URL: https://issues.apache.org/jira/browse/HDFS-9198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0
>
> Attachments: HDFS-9198-branch2.patch, HDFS-9198-trunk.patch, 
> HDFS-9198-trunk.patch, HDFS-9198-trunk.patch, HDFS-9198-trunk.patch, 
> HDFS-9198-trunk.patch
>
>
> IBRs from thousands of DNs under load will degrade NN performance due to 
> excessive write-lock contention from multiple IPC handler threads.  The IBR 
> processing is quick, so the lock contention may be reduced by coalescing 
> multiple IBRs into a single write-lock transaction.  The handlers will also 
> be freed up faster for other operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6804) race condition between transferring block and appending block causes "Unexpected checksum mismatch exception"

2015-12-16 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061626#comment-15061626
 ] 

Max Schmidt commented on HDFS-6804:
---

Hadoop version is 2.7.1 used on Ubuntu 14.04.3 LTS.

> race condition between transferring block and appending block causes 
> "Unexpected checksum mismatch exception" 
> --
>
> Key: HDFS-6804
> URL: https://issues.apache.org/jira/browse/HDFS-6804
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Gordon Wang
>
> We found some error log in the datanode. like this
> {noformat}
> 2014-07-22 01:49:51,338 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ex
> ception for BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248
> java.io.IOException: Terminating due to a checksum error.java.io.IOException: 
> Unexpected checksum mismatch while writing 
> BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248 from 
> /192.168.2.101:39495
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:536)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:703)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:575)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
> While on the source datanode, the log says the block is transmitted.
> {noformat}
> 2014-07-22 01:49:50,805 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Da
> taTransfer: Transmitted 
> BP-2072804351-192.168.2.104-1406008383435:blk_1073741997
> _9248 (numBytes=16188152) to /192.168.2.103:50010
> {noformat}
> When the destination datanode gets the checksum mismatch, it reports a bad 
> block to the NameNode, and the NameNode marks the replica on the source 
> datanode as corrupt. But the replica on the source datanode is actually 
> valid, because it passes checksum verification.
> In short, the replica on the source datanode is wrongly marked as corrupt.
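The mismatch mechanism can be shown in miniature with a plain CRC32: a checksum computed before a concurrent mutation no longer matches the data afterward, even though neither copy is actually corrupt. This is a simplified stand-in, not the actual DataNode transfer/append code path.

```java
import java.util.zip.CRC32;

/**
 * Miniature demonstration of how a concurrent mutation invalidates a
 * previously computed checksum. Simplified stand-in for the transfer/append
 * race; the real race involves block files and per-chunk checksums.
 */
public class ChecksumRaceDemo {
  public static long crc(byte[] data) {
    CRC32 c = new CRC32();
    c.update(data, 0, data.length);
    return c.getValue();
  }

  public static void main(String[] args) {
    byte[] replica = {1, 2, 3, 4};
    long atTransferStart = crc(replica); // source DN snapshots the checksum
    replica[3] = 9;                      // a concurrent append mutates the tail
    long atVerification = crc(replica);  // destination verifies newer bytes
    // The destination sees a mismatch although neither copy is corrupt.
    System.out.println("mismatch: " + (atTransferStart != atVerification));
  }
}
```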



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9570) Minor typos, grammar, and case sensitivity cleanup in HdfsPermissionsGuide.md's

2015-12-16 Thread Travis Campbell (JIRA)
Travis Campbell created HDFS-9570:
-

 Summary: Minor typos, grammar, and case sensitivity cleanup in 
HdfsPermissionsGuide.md's
 Key: HDFS-9570
 URL: https://issues.apache.org/jira/browse/HDFS-9570
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Travis Campbell
Priority: Trivial


Ran across a few minor grammatical/capitalization errors while reading through 
the HdfsPermissionsGuide in the Super-user section. 

* my -> may
* my -> by
* lowercasing use of "Sticky" mid-sentence

Additionally, a few formatting/consistency issues:

* markdown formatting around kinit to match other command style
* "name node" -> NameNode to match the casing usage consistently 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9393) After choosing favored nodes, choosing nodes for remaining replicas should go through BlockPlacementPolicy

2015-12-16 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061642#comment-15061642
 ] 

Vinayakumar B commented on HDFS-9393:
-

Thanks [~andreina] for the patch. 
Changes look good.

Jenkins report looks flaky. 
Kicked off one more build; will wait for the report.

> After choosing favored nodes, choosing nodes for remaining replicas should go 
> through BlockPlacementPolicy
> --
>
> Key: HDFS-9393
> URL: https://issues.apache.org/jira/browse/HDFS-9393
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Attachments: HDFS-9393.1.patch, HDFS-9393.2.patch, HDFS-9393.3.patch
>
>
> Current behavior:
> After choosing replicas from the passed favored nodes, choosing nodes for 
> the remaining replicas does not go through BlockPlacementPolicy.
> Hence, even though a local client datanode may be available and not passed 
> as part of the favored nodes, the probability of choosing the local 
> datanode is low.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9114) NameNode and DataNode metric log file name should follow the other log file name format.

2015-12-16 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9114:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> NameNode and DataNode metric log file name should follow the other log file 
> name format.
> 
>
> Key: HDFS-9114
> URL: https://issues.apache.org/jira/browse/HDFS-9114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9114-branch-2.01.patch, 
> HDFS-9114-branch-2.02.patch, HDFS-9114-trunk.01.patch, 
> HDFS-9114-trunk.02.patch
>
>
> Currently the datanode and namenode metric log files are named 
> {{datanode-metrics.log}} and {{namenode-metrics.log}}.
> These file names should follow the pattern of the other log files, e.g. 
> {{hadoop-hdfs-namenode-metric-host192.log}} to match the namenode log file 
> {{hadoop-hdfs-namenode-host192.log}}.
> This will help when copying logs from different nodes for issue analysis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9347) Invariant assumption in TestQuorumJournalManager.shutdown() is wrong

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061490#comment-15061490
 ] 

Hadoop QA commented on HDFS-9347:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 33s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 2s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 3s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 48s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 220m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.TestRollingUpgrade |
| JDK v1.7.0_91 Failed junit tests | hadoop.ipc.TestIPC |
|   | hadoop.hdfs.server.namenode.TestNameNodeResourceChecker |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |

[jira] [Commented] (HDFS-9569) Log the name of the fsimage being loaded for better supportability

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061588#comment-15061588
 ] 

Hadoop QA commented on HDFS-9569:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 16s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 131m 7s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778168/HDFS-9569.001.patch |
| JIRA Issue | HDFS-9569 |
| Optional 

[jira] [Updated] (HDFS-9570) Minor typos, grammar, and case sensitivity cleanup in HdfsPermissionsGuide.md's

2015-12-16 Thread Travis Campbell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Campbell updated HDFS-9570:
--
Attachment: HDFS-9570-1.patch

> Minor typos, grammar, and case sensitivity cleanup in 
> HdfsPermissionsGuide.md's
> ---
>
> Key: HDFS-9570
> URL: https://issues.apache.org/jira/browse/HDFS-9570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Travis Campbell
>Priority: Trivial
> Attachments: HDFS-9570-1.patch
>
>
> Ran across a few minor grammatical/capitalization errors while reading 
> through the HdfsPermissionsGuide in the Super-user section. 
> * my -> may
> * my -> by
> * lowercasing use of "Sticky" mid-sentence
> Additionally, a few formatting/consistency issues:
> * markdown formatting around kinit to match other command style
> * "name node" -> NameNode to match the casing usage consistently 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9515) NPE in TestDFSZKFailoverController due to binding exception in MiniDFSCluster.initMiniDFSCluster()

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061250#comment-15061250
 ] 

Wei-Chiu Chuang commented on HDFS-9515:
---

Thanks [~arpitagarwal] for the suggestion. Did you mean @AfterClass? 
@Before/@After run for each test, while @BeforeClass/@AfterClass run once 
for the entire test class.

> NPE in TestDFSZKFailoverController due to binding exception in 
> MiniDFSCluster.initMiniDFSCluster()
> --
>
> Key: HDFS-9515
> URL: https://issues.apache.org/jira/browse/HDFS-9515
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9515.001.patch, HDFS-9515.002.patch
>
>
> If the MiniDFSCluster constructor throws an exception, the cluster object 
> is never assigned, so shutdown() cannot be called on it.
> In a recent Jenkins job a binding error threw an exception, and the 
> subsequent NPE from cluster.shutdown() hid the real cause of the test 
> failure.
> HDFS-9333 has a patch that fixes the bind error.
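The null-guard teardown the report implies can be sketched as follows. MiniCluster here is a hypothetical stand-in for MiniDFSCluster; the point is that a guarded shutdown() cannot throw the NPE that masked the original bind failure.

```java
/**
 * Sketch of null-safe test teardown. MiniCluster is a hypothetical stand-in
 * for MiniDFSCluster: its constructor may throw (e.g. on a bind failure)
 * before the field is ever assigned, so teardown must null-check it.
 */
public class SafeTeardown {
  static class MiniCluster {
    MiniCluster(boolean failBind) {
      if (failBind) {
        throw new IllegalStateException("Address already in use");
      }
    }
    void shutdown() { /* release ports, stop daemons */ }
  }

  private static MiniCluster cluster;

  /** Start a cluster and tear it down, returning what happened. */
  public static String startAndTearDown(boolean failBind) {
    try {
      cluster = new MiniCluster(failBind); // may throw before assignment
      return "started";
    } catch (IllegalStateException e) {
      return e.getMessage(); // the real failure cause is preserved
    } finally {
      if (cluster != null) { // guard prevents the NPE hiding the real cause
        cluster.shutdown();
        cluster = null;
      }
    }
  }

  public static void main(String[] args) {
    System.out.println(startAndTearDown(false));
    System.out.println(startAndTearDown(true));
  }
}
```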



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

